If the global configuration option 'enable_open_expired' is set
to true, a client will be able to make a request with the header
'x-open-expired' set to true in order to access an object that
has expired, provided it is still within its grace period. If the
flag is set to false (the default), clients cannot access any
expired objects, even with the header.
When a client sets an 'x-open-expired' header to a true value for a
GET/HEAD/POST request, the proxy will forward x-backend-open-expired
to the storage server. The storage server will allow clients that set
x-backend-open-expired to open and read an object that has not yet
been reaped by the object-expirer, even after the x-delete-at time
has passed.
The header is always ignored when used with temporary URLs.
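For illustration, a minimal client-side sketch using python-requests;
the endpoint, token, and object names are placeholders, and it assumes
enable_open_expired is true in the cluster config:

    import requests

    token = 'AUTH_tk_example'  # placeholder auth token
    url = 'https://swift.example.com/v1/AUTH_test/cont/expired-obj'

    # Ask the proxy to serve the object even though x-delete-at has
    # passed; the proxy forwards this as x-backend-open-expired.
    resp = requests.get(url, headers={
        'X-Auth-Token': token,
        'X-Open-Expired': 'true',
    })
    # Expect 200 while the object is in its grace period and not yet
    # reaped; 404 once the object-expirer has reaped it.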
Co-Authored-By: Anish Kachinthaya <akachinthaya@nvidia.com>
Related-Change: I106103438c4162a561486ac73a09436e998ae1f0
Change-Id: Ibe7dde0e3bf587d77e14808b169c02f8fb3dddb3
Add an example of a delay_reaping config option with quoted key.
Change-Id: I0c7ead6795822ea0fb0e81abc1e4685d7946942c
Related-Change: I106103438c4162a561486ac73a09436e998ae1f0
The object expirer can be configured to delay the reaping of
objects from disk after their expiration time using account-
and container-level delay_reaping values. Per-account and
per-container delay_reaping values, in seconds, are configured
in the object server config. The object expirer consults these
configured values and only reaps objects from the specified
accounts and containers after their corresponding delays have
elapsed.
The goal of the delay_reaping feature is to prevent accidental or
premature data loss when an object marked for deletion with the
'x-delete-at' feature should not, for whatever reason, be reaped
immediately.
Configuring delay_reaping at a granular account and container
level helps keep storage capacity consumption under control while
maintaining a desired data recovery window.
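As a rough sketch of the decision the expirer makes (illustrative
Python, not Swift's actual implementation; the lookup structure and
names are assumptions):

    import time

    # Hypothetical parsed config: account- and container-level delays
    # in seconds, as might be read from the object server config.
    delay_reaping = {
        ('AUTH_test', None): 86400.0,     # account level: 1 day
        ('AUTH_test', 'logs'): 604800.0,  # container level: 7 days
    }

    def get_delay(account, container):
        # Prefer a container-level delay; fall back to account level.
        return delay_reaping.get(
            (account, container),
            delay_reaping.get((account, None), 0.0))

    def is_reapable(account, container, x_delete_at, now=None):
        now = time.time() if now is None else now
        return now >= x_delete_at + get_delay(account, container)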
This patch also adds a sample configuration, documentation, and
tests for bad configurations and grace period functionality.
Co-Authored-By: Anish Kachinthaya <akachinthaya@nvidia.com>
Change-Id: I106103438c4162a561486ac73a09436e998ae1f0
Add some test assertions to cover the first-byte timing metrics
introduced in the related change.
Add ttfb param to log_request docstring.
Change-Id: I530652dd672d7d4e5eac351ccbad318773414f7d
Related-Change: I1611e34846e586703e9d3709fa64e8df41f2d685
The main motivation here is that mock.call becomes a namedtuple, so you
can say `m.call_args_list[0].args` instead of `m.call_args_list[0][0]`.
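For example (requires a mock where call objects expose .args/.kwargs,
i.e. Python 3.8+ unittest.mock or a recent mock release):

    from unittest import mock

    m = mock.Mock()
    m('alpha', beta=1)

    first_call = m.call_args_list[0]
    assert first_call.args == ('alpha',)    # readable attribute access
    assert first_call[0] == ('alpha',)      # old index-based access
    assert first_call.kwargs == {'beta': 1}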
Change-Id: Ibb1a64ef0bfdebf06d26636cdb6ea191c10705f7
We stuff the access key into the request path until we get back a
more-authoritative account name from auth. But it needs to be a WSGI
string when we do!
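Per PEP 3333, a WSGI string is a native str whose code points map
one-to-one to bytes, i.e. UTF-8 bytes decoded as latin-1. A minimal
sketch of the conversion (the key value is a hypothetical placeholder):

    access_key = 'us\u00e9r-key'  # hypothetical non-ASCII access key

    # Re-encode to the WSGI-string form expected in PATH_INFO.
    wsgi_key = access_key.encode('utf-8').decode('latin-1')
    path = '/v1/%s' % wsgi_key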
Closes-Bug: #2058748
Change-Id: I34adb8141cc9e62d17a27f01c63f40d1dd25991c
Any of these directories may get unlinked between when we saw them in
their parent's directory listing and when we go to descend.
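The usual pattern for tolerating that race looks something like this
sketch (illustrative, not the patched code itself):

    import errno
    import os

    def descend(path):
        try:
            names = os.listdir(path)
        except OSError as e:
            if e.errno == errno.ENOENT:
                return  # directory was unlinked under us; nothing to do
            raise
        for name in names:
            child = os.path.join(path, name)
            if os.path.isdir(child):
                descend(child)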
Change-Id: I1dfc0ee1d9e70cb0600557cde980bd5880bd40b3
Add file to the reno documentation build to show release notes for
stable/2024.1.
Use the pbr Sem-Ver instruction to increment the minor version number
automatically so that master versions are higher than the versions on
stable/2024.1.
Sem-Ver: feature
Change-Id: Ic940ff424aef9cc402bf54ebe5e5fc16330fc25c
The last time I really looked at this was probably Yoga, when we were
targeting 3.6 through 3.9 (and left 3.7 and 3.8 as experimental jobs).
Now, though, OpenStack is targeting 3.8 through 3.11; as before, we
can assume that if tests pass on those two versions, they should pass
on the versions in between, too. (But we still have them as
experimental, on-demand jobs.)
See https://governance.openstack.org/tc/reference/runtimes/2024.1.html
Keep 2.7 and 3.6 testing as our own self-imposed minimums.
Change-Id: I7700aa3c93df311644655e7ebaf0b67aa692ee80
This change allows individual SLO segments to be downloaded by adding
an extra 'part-number' query parameter to the GET request. You can
also retrieve the Content-Length of an individual segment with a HEAD
request.
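A minimal client sketch using python-requests (the endpoint, token, and
object names are placeholders):

    import requests

    token = 'AUTH_tk_example'  # placeholder auth token
    url = 'https://swift.example.com/v1/AUTH_test/cont/slo-manifest'

    # GET just the second segment of the SLO.
    resp = requests.get(url, headers={'X-Auth-Token': token},
                        params={'part-number': 2})

    # HEAD the same part to get its Content-Length without the body.
    resp = requests.head(url, headers={'X-Auth-Token': token},
                         params={'part-number': 2})
    print(resp.headers['Content-Length'])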
Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
Co-Authored-By: Alistair Coles <alistairncoles@gmail.com>
Change-Id: I7af0dc9898ca35f042b52dd5db000072f2c7512e
Currently, when the MemcacheRing `_get_conns` method runs out of
memcached servers to try and so fails to yield anything, we log:
All memcached servers error-limited
However, this error message isn't entirely accurate. It can also fail
because it couldn't connect to any of its memcached servers, not just
because they're error-limited.
Error-limiting of memcached servers can be disabled, and in that case
this error message is a red herring.
Downstream we use an mcrouter client on each node which itself talks to
a bunch of memcached servers. Therefore, in Swift's MemcacheRing client
we configure just that one mcrouter client as a single server in the
ring, and because of this we disable memcached error-limiting.
If the node gets too overloaded, we've had timeouts talking to the
local mcrouter client. This fires off error-limited log messages, which
can confuse things.
Because it's possible to turn off error-limiting, the log line isn't
quite adequate anymore. So this patch changes it to:
No more memcached servers to try
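A simplified sketch of the fallthrough being described (not Swift's
actual MemcacheRing code; try_connect is a hypothetical helper):

    import logging
    import socket

    def try_connect(server, timeout=0.3):
        # Hypothetical helper: return a socket, or None on any failure.
        try:
            host, port = server.rsplit(':', 1)
            return socket.create_connection((host, int(port)), timeout)
        except OSError:
            return None

    def get_conns(servers):
        yielded = False
        for server in servers:
            conn = try_connect(server)
            if conn is not None:
                yielded = True
                yield server, conn
        if not yielded:
            # Accurate whether servers were error-limited or unreachable.
            logging.error('No more memcached servers to try')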
Change-Id: I97fb4f3ee2ac45831aae14a782b2c6dc73e82d85