Merge remote-tracking branch 'remotes/origin/master' into merge-master

Change-Id: Id7595623c1b7bbb798784312af5a924d8ebaf6ff
Author: Romain LE DISEZ
Date:   2020-03-18 07:56:05 -04:00
Commit: 79bd2e59e5
195 changed files with 18675 additions and 2329 deletions


@@ -69,6 +69,25 @@
        NOSE_COVER_HTML_DIR: '{toxinidir}/cover'
    post-run: tools/playbooks/common/cover-post.yaml
- job:
    name: swift-tox-py38
    parent: swift-tox-base
    nodeset: ubuntu-bionic
    description: |
      Run unit-tests for swift under cPython version 3.8.
      Uses tox with the ``py38`` environment.
      It sets TMPDIR to an XFS mount point created via
      tools/test-setup.sh.
    vars:
      tox_envlist: py38
      bindep_profile: test py38
      python_version: 3.8
      tox_environment:
        NOSE_COVER_HTML: 1
        NOSE_COVER_HTML_DIR: '{toxinidir}/cover'
    post-run: tools/playbooks/common/cover-post.yaml
- job:
    name: swift-tox-func-py27
    parent: swift-tox-base
@@ -243,6 +262,10 @@
      # This tox env get run twice; once for Keystone and once for tempauth
      tox_envlist: func
      devstack_localrc:
        # Other services are fine to run py3
        USE_PYTHON3: true
        # explicitly state that we want to test swift under py2
        DISABLED_PYTHON3_PACKAGES: 'swift'
        SWIFT_HASH: changeme
        # We don't need multiple replicas to run purely functional tests.
        # In fact, devstack special cases some things when there's only
@@ -290,6 +313,7 @@
    nodeset: centos-7
    description: |
      Setup a SAIO dev environment and run ceph-s3tests
    timeout: 2400
    pre-run:
      - tools/playbooks/common/install_dependencies.yaml
      - tools/playbooks/saio_single_node_setup/setup_saio.yaml
@@ -371,12 +395,6 @@
    run: tools/playbooks/multinode_setup/run.yaml
    post-run: tools/playbooks/probetests/post.yaml
- job:
    name: swift-multinode-rolling-upgrade-queens
    parent: swift-multinode-rolling-upgrade
    vars:
      previous_swift_version: origin/stable/queens
- job:
    name: swift-multinode-rolling-upgrade-rocky
    parent: swift-multinode-rolling-upgrade
@@ -389,6 +407,12 @@
    vars:
      previous_swift_version: origin/stable/stein
- job:
    name: swift-multinode-rolling-upgrade-train
    parent: swift-multinode-rolling-upgrade
    vars:
      previous_swift_version: origin/stable/train
- job:
    name: swift-tox-lower-constraints
    parent: openstack-tox-lower-constraints
@@ -513,6 +537,10 @@
            irrelevant-files:
              - ^(api-ref|doc|releasenotes)/.*$
              - ^test/(functional|probe)/.*$
        - swift-tox-py38:
            irrelevant-files:
              - ^(api-ref|doc|releasenotes)/.*$
              - ^test/(functional|probe)/.*$
        # Functional tests
        - swift-tox-func-py27:
@@ -644,6 +672,7 @@
        - swift-tox-py27
        - swift-tox-py36
        - swift-tox-py37
        - swift-tox-py38
        - swift-tox-func-py27
        - swift-tox-func-encryption-py27
        - swift-tox-func-domain-remap-staticweb-py27
@@ -707,9 +736,9 @@
        - swift-tox-func-py27-centos-7
        - swift-tox-func-encryption-py27-centos-7
        - swift-tox-func-ec-py27-centos-7
        - swift-multinode-rolling-upgrade-queens
        - swift-multinode-rolling-upgrade-rocky
        - swift-multinode-rolling-upgrade-stein
        - swift-multinode-rolling-upgrade-train
    post:
      jobs:


@@ -38,6 +38,7 @@ Contributors
------------
Aaron Rosen (arosen@nicira.com)
Adrian Smith (adrian_f_smith@dell.com)
Adrien Pensart (adrien.pensart@corp.ovh.com)
Akihiro Motoki (amotoki@gmail.com)
Akihito Takai (takaiak@nttdata.co.jp)
Alex Gaynor (alex.gaynor@gmail.com)
@@ -91,6 +92,7 @@ Cheng Li (shcli@cn.ibm.com)
chengebj5238 (chengebj@inspur.com)
chenxiangui (chenxiangui@inspur.com)
Chmouel Boudjnah (chmouel@enovance.com)
Chris Smart (chris.smart@humanservices.gov.au)
Chris Wedgwood (cw@f00f.org)
Christian Berendt (berendt@b1-systems.de)
Christian Hugo (hugo.christian@web.de)
@@ -358,6 +360,7 @@ Sascha Peilicke (saschpe@gmx.de)
Saverio Proto (saverio.proto@switch.ch)
Scott Simpson (sasimpson@gmail.com)
Sean McGinnis (sean.mcginnis@gmail.com)
SeongSoo Cho (ppiyakk2@printf.kr)
Sergey Kraynev (skraynev@mirantis.com)
Sergey Lukjanov (slukjanov@mirantis.com)
Shane Wang (shane.wang@intel.com)

CHANGELOG (209 lines changed)

@@ -1,3 +1,93 @@
swift (2.24.0)
* Added a new object versioning mode, with APIs for querying and
accessing old versions. For more information, see the documentation
at https://docs.openstack.org/swift/latest/middleware.html#module-swift.common.middleware.versioned_writes.object_versioning
* Added support for S3 versioning using the above new mode.
* Added a new middleware to allow accounts and containers to opt-in to
RFC-compliant ETags. For more information, see the documentation at
https://docs.openstack.org/swift/latest/middleware.html#module-swift.common.middleware.etag_quoter
Clients should be aware of the fact that ETags may be quoted for RFC
compliance; this may become the default behavior in some future release.
* Proxy, account, container, and object servers now support "seamless
reloads" via `SIGUSR1`. This is similar to the existing graceful
restarts but keeps the server socket open the whole time, reducing
service downtime.
* New buckets created via the S3 API will now store multi-part upload
data in the same storage policy as other data rather than the
cluster's default storage policy.
* Device region and zone can now be changed via `swift-ring-builder`.
Note that this may cause a lot of data movement on the next rebalance
as the builder tries to reach full dispersion.
* Added support for Python 3.8.
* The container sharder can now handle containers with special
characters in their names.
* Internal client no longer logs object DELETEs as status 499.
* Objects with an `X-Delete-At` value in the far future no longer cause
backend server errors.
* The bulk extract middleware once again allows clients to specify metadata
(including expiration timestamps) for all objects in the archive.
* Container sync now synchronizes static symlinks in a way similar to
static large objects.
* `swift_source` is set for more sub-requests in the proxy-server. See
https://docs.openstack.org/swift/latest/logs.html#swift-source
* Errors encountered while validating static symlink targets no longer
cause BadResponseLength errors in the proxy-server.
* On Python 3, the KMS keymaster now works with secrets stored
in Barbican with a text/plain payload-content-type.
* On Python 3, the formpost middleware now works with unicode file names.
* Several utility scripts now work better on Python 3:
* swift-account-audit
* swift-dispersion-populate
* swift-drive-recon
* swift-recon
* On Python 3, certain S3 API headers are now lower case as they
would be coming from AWS.
* Per-service `auto_create_account_prefix` settings are now deprecated
and may be ignored in a future release; if you need to use this, please
set it in the `[swift-constraints]` section of /etc/swift/swift.conf.
* Various other minor bug fixes and improvements.
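The seamless reload noted above is driven by `SIGUSR1`. A rough
operator-side sketch in Python (the pid-file path is an assumption;
real deployments vary):

    import os
    import signal

    # Assumed pid-file location; adjust for your deployment.
    PID_FILE = '/var/run/swift/proxy-server.pid'

    with open(PID_FILE) as f:
        pid = int(f.read().strip())

    # SIGUSR1 asks the server to re-exec itself while keeping its
    # listen socket open, so clients never see a closed socket.
    os.kill(pid, signal.SIGUSR1)

This is roughly what `swift-init proxy reload-seamless` automates.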
swift (2.23.1, train stable backports)
* On Python 3, the KMS keymaster now works with secrets stored
in Barbican with a text/plain payload-content-type.
* Several utility scripts now work better on Python 3:
* swift-account-audit
* swift-dispersion-populate
* swift-drive-recon
* swift-recon
swift (2.23.0, OpenStack Train)
* Python 3.6 and 3.7 are now fully supported. Several py3-related
@@ -132,6 +222,59 @@ swift (2.22.0)
* Various other minor bug fixes and improvements.
swift (2.21.1, stein stable backports)
* Sharding improvements
* The container-replicator now only attempts to fetch shard ranges if
the remote indicates that it has shard ranges. Further, it does so
with a timeout to prevent the process from hanging in certain cases.
* The container-replicator now correctly enqueues container-reconciler
work for sharded containers.
* Container metadata related to sharding are now removed when no
longer needed.
* S3 API improvements
* Unsigned payloads work with v4 signatures once more.
* Multipart upload parts may now be copied from other multipart uploads.
* CompleteMultipartUpload requests with a Content-MD5 now work.
* Content-Type can now be updated when copying an object.
* Fixed v1 listings that end with a non-ASCII object name.
* Background corruption-detection improvements
* Detect and remove invalid entries from hashes.pkl
* When object path is not a directory, just quarantine it,
rather than the whole suffix.
* Static Large Object sizes in listings for versioned containers are
now more accurate.
* When refetching Static Large Object manifests, non-manifest responses
are now handled better.
* Cross-account symlinks now store correct account information in
container listings. This was previously fixed in 2.22.0.
* Requesting multiple ranges from a Dynamic Large Object now returns the
entire object instead of incorrect data. This was previously fixed in
2.23.0.
* When making backend requests, the proxy-server now ensures query
parameters are always properly quoted. Previously, the proxy would
encounter an error on Python 2.7.17 if the client included non-ASCII
query parameters in object requests. This was previously fixed in
2.23.0.
swift (2.21.0, OpenStack Stein)
* Change the behavior of the EC reconstructor to perform a
@@ -298,6 +441,72 @@ swift (2.20.0)
* Various other minor bug fixes and improvements.
swift (2.19.2, rocky stable backports)
* Sharding improvements
* The container-replicator now only attempts to fetch shard ranges if
the remote indicates that it has shard ranges. Further, it does so
with a timeout to prevent the process from hanging in certain cases.
* The container-replicator now correctly enqueues container-reconciler
work for sharded containers.
* S3 API improvements
* Fixed an issue where v4 signatures would not be validated against
the body of the request, allowing a replay attack if request headers
were captured by a malicious third party. Note that unsigned payloads
still function normally.
* CompleteMultipartUpload requests with a Content-MD5 now work.
* Fixed v1 listings that end with a non-ASCII object name.
* Multipart object segments are now actually deleted when the
multipart object is deleted via the S3 API.
* Fixed an issue that caused Delete Multiple Objects requests with
large bodies to 400. This was previously fixed in 2.20.0.
* Fixed an issue where non-ASCII Keystone EC2 credentials would not get
mapped to the correct account. This was previously fixed in 2.20.0.
* Background corruption-detection improvements
* Detect and remove invalid entries from hashes.pkl
* When object path is not a directory, just quarantine it,
rather than the whole suffix.
* Fixed a bug where encryption would store the incorrect key
metadata if the object name starts with a slash.
* Fixed an issue where an object server failure during a client
download could leave an open socket between the proxy and
client.
* Static Large Object sizes in listings for versioned containers are
now more accurate.
* When refetching Static Large Object manifests, non-manifest responses
are now handled better.
* Cross-account symlinks now store correct account information in
container listings. This was previously fixed in 2.22.0.
* Requesting multiple ranges from a Dynamic Large Object now returns the
entire object instead of incorrect data. This was previously fixed in
2.23.0.
* When making backend requests, the proxy-server now ensures query
parameters are always properly quoted. Previously, the proxy would
encounter an error on Python 2.7.17 if the client included non-ASCII
query parameters in object requests. This was previously fixed in
2.23.0.
swift (2.19.1, rocky stable backports)
* Prevent PyKMIP's kmip_protocol logger from logging at DEBUG.


@@ -1,18 +1,15 @@
========================
Team and repository tags
========================
===============
OpenStack Swift
===============
.. image:: https://governance.openstack.org/tc/badges/swift.svg
:target: https://governance.openstack.org/tc/reference/tags/index.html
.. Change things from this point on
Swift
=====
A distributed object storage system designed to scale from a single
machine to thousands of servers. Swift is optimized for multi-tenancy
and high concurrency. Swift is ideal for backups, web and mobile
OpenStack Swift is a distributed object storage system designed to scale
from a single machine to thousands of servers. Swift is optimized for
multi-tenancy and high concurrency. Swift is ideal for backups, web and mobile
content, and any other unstructured data that can grow without bound.
Swift provides a simple, REST-based API fully documented at


@@ -42,8 +42,8 @@ don't work - or at least fail in frequently common enough scenarios to
be considered "horribly broken". A comment in our review that says
roughly "I ran this on my machine and observed ``description of
behavior change is supposed to achieve``" is the most powerful defense
we have against the terrible terrible scorn from our fellow Swift
developers and operators when we accidentally merge bad code.
we have against the terrible scorn from our fellow Swift developers
and operators when we accidentally merge bad code.
If you're doing a fair amount of reviews - you will participate in
merging a change that will break my clusters - it's cool - I'll do it


@@ -108,7 +108,6 @@ class Auditor(object):
                    consistent = False
                    print(' MD5 does not match etag for "%s" on %s/%s'
                          % (path, node['ip'], node['device']))
                etags.append((resp.getheader('ETag'), node))
            else:
                conn = http_connect(node['ip'], node['port'],
                                    node['device'], part, 'HEAD',
@@ -120,6 +119,12 @@
                    print(' Bad status HEADing object "%s" on %s/%s'
                          % (path, node['ip'], node['device']))
                    continue
                override_etag = resp.getheader(
                    'X-Object-Sysmeta-Container-Update-Override-Etag')
                if override_etag:
                    etags.append((override_etag, node))
                else:
                    etags.append((resp.getheader('ETag'), node))
        except Exception:
            self.object_exceptions += 1
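The auditor change above prefers the container-update override ETag when a
HEAD response carries one; objects whose stored checksum differs from the
client-visible one (encrypted objects, for example) record the listing value
in that sysmeta header. A condensed sketch of the selection logic, with
`headers` standing in for the HEAD response headers (illustrative helper,
not part of the script):

    def effective_etag(headers):
        # Prefer the override ETag recorded for objects whose on-disk
        # checksum differs from the client-visible one; otherwise fall
        # back to the plain ETag header.
        override = headers.get(
            'X-Object-Sysmeta-Container-Update-Override-Etag')
        return override or headers.get('ETag')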


@@ -179,8 +179,8 @@ Logging level. The default is INFO.
Enables request logging. The default is True.
.IP "\fBset log_address\fR
Logging address. The default is /dev/log.
.IP "\fBauto_create_account_prefix\fR
The default is ".".
.IP "\fBauto_create_account_prefix [deprecated]\fR"
The default is ".". Should be configured in swift.conf instead.
.IP "\fBreplication_server\fR
Configure parameter for creating specific server.
To handle all verbs, including replication verbs, do not specify
@@ -394,7 +394,7 @@ reclaim_age. This ensures that once the account-reaper has deleted a
container there is sufficient time for the container-updater to report to the
account before the account DB is removed.
.IP \fBreap_warn_after\fR
If the account fails to be be reaped due to a persistent error, the
If the account fails to be reaped due to a persistent error, the
account reaper will log a message such as:
Account <name> has not been reaped since <date>
You can search logs for this message if space is not being reclaimed


@@ -191,8 +191,8 @@ Request timeout to external services. The default is 3 seconds.
Connection timeout to external services. The default is 0.5 seconds.
.IP \fBallow_versions\fR
The default is false.
.IP \fBauto_create_account_prefix\fR
The default is '.'.
.IP "\fBauto_create_account_prefix [deprecated]\fR"
The default is '.'. Should be configured in swift.conf instead.
.IP \fBreplication_server\fR
Configure parameter for creating specific server.
To handle all verbs, including replication verbs, do not specify
@@ -362,7 +362,7 @@ Request timeout to external services. The default is 3 seconds.
Connection timeout to external services. The default is 0.5 seconds.
.IP \fBcontainers_per_second\fR
Maximum containers updated per second. Should be tuned according to individual system specs. 0 is unlimited. The default is 50.
.IP \fBslowdown\fR
.IP "\fBslowdown [deprecated]\fR"
Slowdown will sleep that amount between containers. The default is 0.01 seconds. Deprecated in favor of containers_per_second
.IP \fBaccount_suppression_time\fR
Seconds to suppress updating an account that has generated an error. The default is 60 seconds.


@@ -202,8 +202,8 @@ This is normally \fBegg:swift#proxy_logging\fR. See proxy-server.conf-sample for
.RS 3
.IP \fBinterval\fR
Replaces run_pause with the more standard "interval", which means the replicator won't pause unless it takes less than the interval set. The default is 300.
.IP "\fBauto_create_account_prefix\fR
The default is ".".
.IP "\fBauto_create_account_prefix [deprecated]\fR"
The default is ".". Should be configured in swift.conf instead.
.IP \fBexpiring_objects_account_name\fR
The default is 'expiring_objects'.
.IP \fBreport_interval\fR


@@ -216,8 +216,8 @@ On PUTs, sync data every n MB. The default is 512.
Comma separated list of headers that can be set in metadata on an object.
This list is in addition to X-Object-Meta-* headers and cannot include Content-Type, etag, Content-Length, or deleted.
The default is 'Content-Disposition, Content-Encoding, X-Delete-At, X-Object-Manifest, X-Static-Large-Object, Cache-Control, Content-Language, Expires, X-Robots-Tag'.
.IP "\fBauto_create_account_prefix\fR"
The default is '.'.
.IP "\fBauto_create_account_prefix [deprecated]\fR"
The default is '.'. Should be configured in swift.conf instead.
.IP "\fBreplication_server\fR"
Configure parameter for creating specific server
To handle all verbs, including replication verbs, do not specify
@@ -504,7 +504,7 @@ Number of updater workers to spawn. The default is 1.
Request timeout to external services. The default is 10 seconds.
.IP \fBobjects_per_second\fR
Maximum objects updated per second. Should be tuned according to individual system specs. 0 is unlimited. The default is 50.
.IP \fBslowdown\fR
.IP "\fBslowdown [deprecated]\fR"
Slowdown will sleep that amount between objects. The default is 0.01 seconds. Deprecated in favor of objects_per_second.
.IP "\fBrecon_cache_path\fR"
The recon_cache_path simply sets the directory where stats for a few items will be stored.


@@ -766,10 +766,12 @@ Template used to format access logs. All words surrounded by curly brackets will
.IP "log_info"
.IP "start_time (timestamp at the receiving, timestamp)"
.IP "end_time (timestamp at the end of the treatment, timestamp)"
.IP "ttfb (duration between request and first bytes is sent)"
.IP "policy_index"
.IP "account (account name, anonymizable)"
.IP "container (container name, anonymizable)"
.IP "object (object name, anonymizable)"
.IP "pid (PID of the process emitting the log line)"
.PD
.RE
@@ -1050,8 +1052,9 @@ is false.
.IP \fBaccount_autocreate\fR
If set to 'true' authorized accounts that do not yet exist within the Swift cluster
will be automatically created. The default is set to false.
.IP \fBauto_create_account_prefix\fR
Prefix used when automatically creating accounts. The default is '.'.
.IP "\fBauto_create_account_prefix [deprecated]\fR"
Prefix used when automatically creating accounts. The default is '.'. Should
be configured in swift.conf instead.
.IP \fBmax_containers_per_account\fR
If set to a positive value, trying to create a container when the account
already has at least this maximum containers will result in a 403 Forbidden.
@@ -1061,8 +1064,6 @@ recheck_account_existence before the 403s kick in.
This is a comma separated list of account hashes that ignore the max_containers_per_account cap.
.IP \fBdeny_host_headers\fR
Comma separated list of Host headers to which the proxy will deny requests. The default is empty.
.IP \fBput_queue_depth\fR
Depth of the proxy put queue. The default is 10.
.IP \fBsorting_method\fR
Storage nodes can be chosen at random (shuffle - default), by using timing
measurements (timing), or by using an explicit match (affinity).


@@ -87,6 +87,7 @@ allows one to use the keywords such as "all", "main" and "rest" for the <server>
.IP "\fIno-wait\fR: \t\t\t spawn server and return immediately"
.IP "\fIonce\fR: \t\t\t start server and run one pass on supporting daemons"
.IP "\fIreload\fR: \t\t\t graceful shutdown then restart on supporting servers"
.IP "\fIreload-seamless\fR: \t\t reload supporting servers with no downtime"
.IP "\fIrestart\fR: \t\t\t stops then restarts server"
.IP "\fIshutdown\fR: \t\t allow current requests to finish on supporting servers"
.IP "\fIstart\fR: \t\t\t starts a server"


@@ -201,6 +201,10 @@ Use a comma-separated list in case of multiple allowed versions, for example
valid_api_versions = v0,v1,v2.
This is only enforced for account, container and object requests. The allowed
api versions are by default excluded from /info.
.IP "\fBauto_create_account_prefix\fR"
auto_create_account_prefix specifies the prefix for system accounts, such as
those used by the object-expirer, and container-sharder.
Default is ".".
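Per the deprecation notes above, the prefix now belongs in the
[swift-constraints] section of /etc/swift/swift.conf. A minimal sketch of
reading it from there, assuming a standard configuration layout:

    from configparser import ConfigParser

    conf = ConfigParser()
    conf.read('/etc/swift/swift.conf')
    # Fall back to the documented default of '.' when unset.
    prefix = conf.get('swift-constraints', 'auto_create_account_prefix',
                      fallback='.')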


@@ -91,23 +91,16 @@ ceph_s3:
s3tests.functional.test_s3.test_put_object_ifnonmatch_overwrite_existed_failed: {status: KNOWN}
s3tests.functional.test_s3.test_set_cors: {status: KNOWN}
s3tests.functional.test_s3.test_stress_bucket_acls_changes: {status: KNOWN}
s3tests.functional.test_s3.test_versioned_concurrent_object_create_and_remove: {status: KNOWN}
s3tests.functional.test_s3.test_versioned_concurrent_object_create_concurrent_remove: {status: KNOWN}
s3tests.functional.test_s3.test_versioned_object_acl: {status: KNOWN}
s3tests.functional.test_s3.test_versioning_bucket_create_suspend: {status: KNOWN}
s3tests.functional.test_s3.test_versioning_copy_obj_version: {status: KNOWN}
s3tests.functional.test_s3.test_versioning_multi_object_delete: {status: KNOWN}
s3tests.functional.test_s3.test_versioning_multi_object_delete_with_marker: {status: KNOWN}
s3tests.functional.test_s3.test_versioning_multi_object_delete_with_marker_create: {status: KNOWN}
s3tests.functional.test_s3.test_versioning_obj_create_overwrite_multipart: {status: KNOWN}
s3tests.functional.test_s3.test_versioning_obj_create_read_remove: {status: KNOWN}
s3tests.functional.test_s3.test_versioning_obj_create_read_remove_head: {status: KNOWN}
s3tests.functional.test_s3.test_versioning_obj_create_versions_remove_all: {status: KNOWN}
s3tests.functional.test_s3.test_versioning_obj_create_versions_remove_special_names: {status: KNOWN}
s3tests.functional.test_s3.test_versioning_obj_list_marker: {status: KNOWN}
s3tests.functional.test_s3.test_versioning_obj_plain_null_version_overwrite: {status: KNOWN}
s3tests.functional.test_s3.test_versioning_obj_plain_null_version_overwrite_suspended: {status: KNOWN}
s3tests.functional.test_s3.test_versioning_obj_plain_null_version_removal: {status: KNOWN}
s3tests.functional.test_s3.test_versioning_obj_suspend_versions: {status: KNOWN}
s3tests.functional.test_s3.test_versioning_obj_suspend_versions_simple: {status: KNOWN}
s3tests.functional.test_s3_website.check_can_test_website: {status: KNOWN}
@@ -177,9 +170,6 @@ ceph_s3:
s3tests.functional.test_s3.test_lifecycle_set_multipart: {status: KNOWN}
s3tests.functional.test_s3.test_lifecycle_set_noncurrent: {status: KNOWN}
s3tests.functional.test_s3.test_multipart_copy_invalid_range: {status: KNOWN}
s3tests.functional.test_s3.test_multipart_copy_versioned: {status: KNOWN}
s3tests.functional.test_s3.test_object_copy_versioned_bucket: {status: KNOWN}
s3tests.functional.test_s3.test_object_copy_versioning_multipart_upload: {status: KNOWN}
s3tests.functional.test_s3.test_post_object_empty_conditions: {status: KNOWN}
s3tests.functional.test_s3.test_post_object_tags_anonymous_request: {status: KNOWN}
s3tests.functional.test_s3.test_post_object_tags_authenticated_request: {status: KNOWN}


@@ -1,93 +1,13 @@
ceph_s3:
<nose.suite.ContextSuite context=s3tests.functional>:teardown: {status: KNOWN}
<nose.suite.ContextSuite context=s3tests_boto3.functional>:teardown: {status: KNOWN}
<nose.suite.ContextSuite context=test_routing_generator>:setup: {status: KNOWN}
s3tests.functional.test_headers.test_bucket_create_bad_authorization_invalid_aws2: {status: KNOWN}
s3tests.functional.test_headers.test_bucket_create_bad_authorization_none: {status: KNOWN}
s3tests.functional.test_headers.test_object_create_bad_authorization_invalid_aws2: {status: KNOWN}
s3tests.functional.test_headers.test_object_create_bad_authorization_none: {status: KNOWN}
s3tests.functional.test_s3.test_100_continue: {status: KNOWN}
s3tests.functional.test_s3.test_atomic_conditional_write_1mb: {status: KNOWN}
s3tests.functional.test_s3.test_atomic_dual_conditional_write_1mb: {status: KNOWN}
s3tests.functional.test_s3.test_bucket_acl_grant_email: {status: KNOWN}
s3tests.functional.test_s3.test_bucket_acl_grant_email_notexist: {status: KNOWN}
s3tests.functional.test_s3.test_bucket_acl_grant_nonexist_user: {status: KNOWN}
s3tests.functional.test_s3.test_bucket_acl_no_grants: {status: KNOWN}
s3tests.functional.test_s3.test_bucket_header_acl_grants: {status: KNOWN}
s3tests.functional.test_s3.test_bucket_list_objects_anonymous: {status: KNOWN}
s3tests.functional.test_s3.test_bucket_list_objects_anonymous_fail: {status: KNOWN}
s3tests.functional.test_s3.test_bucket_recreate_not_overriding: {status: KNOWN}
s3tests.functional.test_s3.test_cors_origin_response: {status: KNOWN}
s3tests.functional.test_s3.test_cors_origin_wildcard: {status: KNOWN}
s3tests.functional.test_s3.test_list_buckets_anonymous: {status: KNOWN}
s3tests.functional.test_s3.test_list_buckets_invalid_auth: {status: KNOWN}
s3tests.functional.test_s3.test_logging_toggle: {status: KNOWN}
s3tests.functional.test_s3.test_multipart_resend_first_finishes_last: {status: KNOWN}
s3tests.functional.test_s3.test_object_copy_canned_acl: {status: KNOWN}
s3tests.functional.test_s3.test_object_header_acl_grants: {status: KNOWN}
s3tests.functional.test_s3.test_object_raw_get: {status: KNOWN}
s3tests.functional.test_s3.test_object_raw_get_bucket_acl: {status: KNOWN}
s3tests.functional.test_s3.test_object_raw_get_bucket_gone: {status: KNOWN}
s3tests.functional.test_s3.test_object_raw_get_object_acl: {status: KNOWN}
s3tests.functional.test_s3.test_object_raw_get_object_gone: {status: KNOWN}
s3tests.functional.test_s3.test_object_raw_put: {status: KNOWN}
s3tests.functional.test_s3.test_object_raw_put_write_access: {status: KNOWN}
s3tests.functional.test_s3.test_post_object_anonymous_request: {status: KNOWN}
s3tests.functional.test_s3.test_post_object_authenticated_request: {status: KNOWN}
s3tests.functional.test_s3.test_post_object_authenticated_request_bad_access_key: {status: KNOWN}
s3tests.functional.test_s3.test_post_object_case_insensitive_condition_fields: {status: KNOWN}
s3tests.functional.test_s3.test_post_object_condition_is_case_sensitive: {status: KNOWN}
s3tests.functional.test_s3.test_post_object_escaped_field_values: {status: KNOWN}
s3tests.functional.test_s3.test_post_object_expired_policy: {status: KNOWN}
s3tests.functional.test_s3.test_post_object_expires_is_case_sensitive: {status: KNOWN}
s3tests.functional.test_s3.test_post_object_ignored_header: {status: KNOWN}
s3tests.functional.test_s3.test_post_object_invalid_access_key: {status: KNOWN}
s3tests.functional.test_s3.test_post_object_invalid_content_length_argument: {status: KNOWN}
s3tests.functional.test_s3.test_post_object_invalid_date_format: {status: KNOWN}
s3tests.functional.test_s3.test_post_object_invalid_request_field_value: {status: KNOWN}
s3tests.functional.test_s3.test_post_object_invalid_signature: {status: KNOWN}
s3tests.functional.test_s3.test_post_object_missing_conditions_list: {status: KNOWN}
s3tests.functional.test_s3.test_post_object_missing_content_length_argument: {status: KNOWN}
s3tests.functional.test_s3.test_post_object_missing_expires_condition: {status: KNOWN}
s3tests.functional.test_s3.test_post_object_missing_policy_condition: {status: KNOWN}
s3tests.functional.test_s3.test_post_object_missing_signature: {status: KNOWN}
s3tests.functional.test_s3.test_post_object_no_key_specified: {status: KNOWN}
s3tests.functional.test_s3.test_post_object_request_missing_policy_specified_field: {status: KNOWN}
s3tests.functional.test_s3.test_post_object_set_invalid_success_code: {status: KNOWN}
s3tests.functional.test_s3.test_post_object_set_key_from_filename: {status: KNOWN}
s3tests.functional.test_s3.test_post_object_set_success_code: {status: KNOWN}
s3tests.functional.test_s3.test_post_object_success_redirect_action: {status: KNOWN}
s3tests.functional.test_s3.test_post_object_upload_larger_than_chunk: {status: KNOWN}
s3tests.functional.test_s3.test_post_object_upload_size_below_minimum: {status: KNOWN}
s3tests.functional.test_s3.test_post_object_upload_size_limit_exceeded: {status: KNOWN}
s3tests.functional.test_s3.test_post_object_user_specified_header: {status: KNOWN}
s3tests.functional.test_s3.test_put_object_ifmatch_failed: {status: KNOWN}
s3tests.functional.test_s3.test_put_object_ifmatch_good: {status: KNOWN}
s3tests.functional.test_s3.test_put_object_ifmatch_nonexisted_failed: {status: KNOWN}
s3tests.functional.test_s3.test_put_object_ifmatch_overwrite_existed_good: {status: KNOWN}
s3tests.functional.test_s3.test_put_object_ifnonmatch_failed: {status: KNOWN}
s3tests.functional.test_s3.test_put_object_ifnonmatch_good: {status: KNOWN}
s3tests.functional.test_s3.test_put_object_ifnonmatch_nonexisted_good: {status: KNOWN}
s3tests.functional.test_s3.test_put_object_ifnonmatch_overwrite_existed_failed: {status: KNOWN}
s3tests.functional.test_s3.test_set_cors: {status: KNOWN}
s3tests.functional.test_s3.test_versioned_concurrent_object_create_and_remove: {status: KNOWN}
s3tests.functional.test_s3.test_versioned_concurrent_object_create_concurrent_remove: {status: KNOWN}
s3tests.functional.test_s3.test_versioned_object_acl: {status: KNOWN}
s3tests.functional.test_s3.test_versioning_bucket_create_suspend: {status: KNOWN}
s3tests.functional.test_s3.test_versioning_copy_obj_version: {status: KNOWN}
s3tests.functional.test_s3.test_versioning_multi_object_delete: {status: KNOWN}
s3tests.functional.test_s3.test_versioning_multi_object_delete_with_marker: {status: KNOWN}
s3tests.functional.test_s3.test_versioning_multi_object_delete_with_marker_create: {status: KNOWN}
s3tests.functional.test_s3.test_versioning_obj_create_overwrite_multipart: {status: KNOWN}
s3tests.functional.test_s3.test_versioning_obj_create_read_remove: {status: KNOWN}
s3tests.functional.test_s3.test_versioning_obj_create_read_remove_head: {status: KNOWN}
s3tests.functional.test_s3.test_versioning_obj_create_versions_remove_all: {status: KNOWN}
s3tests.functional.test_s3.test_versioning_obj_create_versions_remove_special_names: {status: KNOWN}
s3tests.functional.test_s3.test_versioning_obj_list_marker: {status: KNOWN}
s3tests.functional.test_s3.test_versioning_obj_plain_null_version_overwrite: {status: KNOWN}
s3tests.functional.test_s3.test_versioning_obj_plain_null_version_overwrite_suspended: {status: KNOWN}
s3tests.functional.test_s3.test_versioning_obj_plain_null_version_removal: {status: KNOWN}
s3tests.functional.test_s3.test_versioning_obj_suspend_versions: {status: KNOWN}
s3tests.functional.test_s3.test_versioning_obj_suspend_versions_simple: {status: KNOWN}
s3tests.functional.test_s3_website.check_can_test_website: {status: KNOWN}
s3tests.functional.test_s3_website.test_website_bucket_private_redirectall_base: {status: KNOWN}
s3tests.functional.test_s3_website.test_website_bucket_private_redirectall_path: {status: KNOWN}
@@ -117,68 +37,230 @@ ceph_s3:
s3tests.functional.test_s3_website.test_website_xredirect_private_relative: {status: KNOWN}
s3tests.functional.test_s3_website.test_website_xredirect_public_abs: {status: KNOWN}
s3tests.functional.test_s3_website.test_website_xredirect_public_relative: {status: KNOWN}
s3tests.functional.test_s3.test_bucket_list_return_data_versioning: {status: KNOWN}
s3tests.functional.test_s3.test_bucket_policy: {status: KNOWN}
s3tests.functional.test_s3.test_bucket_policy_acl: {status: KNOWN}
s3tests.functional.test_s3.test_bucket_policy_another_bucket: {status: KNOWN}
s3tests.functional.test_s3.test_bucket_policy_different_tenant: {status: KNOWN}
s3tests.functional.test_s3.test_bucket_policy_set_condition_operator_end_with_IfExists: {status: KNOWN}
s3tests.functional.test_s3.test_delete_tags_obj_public: {status: KNOWN}
s3tests.functional.test_s3.test_encryption_sse_c_invalid_md5: {status: KNOWN}
s3tests.functional.test_s3.test_encryption_sse_c_method_head: {status: KNOWN}
s3tests.functional.test_s3.test_encryption_sse_c_multipart_bad_download: {status: KNOWN}
s3tests.functional.test_s3.test_encryption_sse_c_multipart_invalid_chunks_1: {status: KNOWN}
s3tests.functional.test_s3.test_encryption_sse_c_multipart_invalid_chunks_2: {status: KNOWN}
s3tests.functional.test_s3.test_encryption_sse_c_no_key: {status: KNOWN}
s3tests.functional.test_s3.test_encryption_sse_c_no_md5: {status: KNOWN}
s3tests.functional.test_s3.test_encryption_sse_c_other_key: {status: KNOWN}
s3tests.functional.test_s3.test_encryption_sse_c_post_object_authenticated_request: {status: KNOWN}
s3tests.functional.test_s3.test_encryption_sse_c_present: {status: KNOWN}
s3tests.functional.test_s3.test_get_obj_head_tagging: {status: KNOWN}
s3tests.functional.test_s3.test_get_obj_tagging: {status: KNOWN}
s3tests.functional.test_s3.test_get_tags_acl_public: {status: KNOWN}
s3tests.functional.test_s3.test_lifecycle_deletemarker_expiration: {status: KNOWN}
s3tests.functional.test_s3.test_lifecycle_expiration: {status: KNOWN}
s3tests.functional.test_s3.test_lifecycle_expiration_date: {status: KNOWN}
s3tests.functional.test_s3.test_lifecycle_get: {status: KNOWN}
s3tests.functional.test_s3.test_lifecycle_get_no_id: {status: KNOWN}
s3tests.functional.test_s3.test_lifecycle_id_too_long: {status: KNOWN}
s3tests.functional.test_s3.test_lifecycle_multipart_expiration: {status: KNOWN}
s3tests.functional.test_s3.test_lifecycle_noncur_expiration: {status: KNOWN}
s3tests.functional.test_s3.test_lifecycle_rules_conflicted: {status: KNOWN}
s3tests.functional.test_s3.test_lifecycle_same_id: {status: KNOWN}
s3tests.functional.test_s3.test_lifecycle_set: {status: KNOWN}
s3tests.functional.test_s3.test_lifecycle_set_date: {status: KNOWN}
s3tests.functional.test_s3.test_lifecycle_set_deletemarker: {status: KNOWN}
s3tests.functional.test_s3.test_lifecycle_set_empty_filter: {status: KNOWN}
s3tests.functional.test_s3.test_lifecycle_set_filter: {status: KNOWN}
s3tests.functional.test_s3.test_lifecycle_set_multipart: {status: KNOWN}
s3tests.functional.test_s3.test_lifecycle_set_noncurrent: {status: KNOWN}
s3tests.functional.test_s3.test_multipart_copy_invalid_range: {status: KNOWN}
s3tests.functional.test_s3.test_multipart_copy_versioned: {status: KNOWN}
s3tests.functional.test_s3.test_object_copy_versioned_bucket: {status: KNOWN}
s3tests.functional.test_s3.test_object_copy_versioning_multipart_upload: {status: KNOWN}
s3tests.functional.test_s3.test_post_object_empty_conditions: {status: KNOWN}
s3tests.functional.test_s3.test_post_object_tags_anonymous_request: {status: KNOWN}
s3tests.functional.test_s3.test_post_object_tags_authenticated_request: {status: KNOWN}
s3tests.functional.test_s3.test_put_delete_tags: {status: KNOWN}
s3tests.functional.test_s3.test_put_excess_key_tags: {status: KNOWN}
s3tests.functional.test_s3.test_put_excess_tags: {status: KNOWN}
s3tests.functional.test_s3.test_put_excess_val_tags: {status: KNOWN}
s3tests.functional.test_s3.test_put_max_kvsize_tags: {status: KNOWN}
s3tests.functional.test_s3.test_put_max_tags: {status: KNOWN}
s3tests.functional.test_s3.test_put_modify_tags: {status: KNOWN}
s3tests.functional.test_s3.test_put_obj_with_tags: {status: KNOWN}
s3tests.functional.test_s3.test_put_tags_acl_public: {status: KNOWN}
s3tests.functional.test_s3.test_sse_kms_method_head: {status: KNOWN}
s3tests.functional.test_s3.test_sse_kms_multipart_invalid_chunks_1: {status: KNOWN}
s3tests.functional.test_s3.test_sse_kms_multipart_invalid_chunks_2: {status: KNOWN}
s3tests.functional.test_s3.test_sse_kms_multipart_upload: {status: KNOWN}
s3tests.functional.test_s3.test_sse_kms_post_object_authenticated_request: {status: KNOWN}
s3tests.functional.test_s3.test_sse_kms_present: {status: KNOWN}
s3tests.functional.test_s3.test_sse_kms_read_declare: {status: KNOWN}
s3tests.functional.test_s3.test_sse_kms_transfer_13b: {status: KNOWN}
s3tests.functional.test_s3.test_sse_kms_transfer_1MB: {status: KNOWN}
s3tests.functional.test_s3.test_sse_kms_transfer_1b: {status: KNOWN}
s3tests.functional.test_s3.test_sse_kms_transfer_1kb: {status: KNOWN}
s3tests.functional.test_s3.test_versioned_object_acl_no_version_specified: {status: KNOWN}
s3tests.functional.test_s3.test_bucket_policy_put_obj_enc: {status: KNOWN}
s3tests.functional.test_s3.test_bucket_policy_put_obj_request_obj_tag: {status: KNOWN}
s3tests.functional.test_s3.test_append_object_position_wrong: {status: KNOWN}
s3tests.functional.test_s3.test_append_normal_object: {status: KNOWN}
s3tests.functional.test_s3.test_append_object: {status: KNOWN}
s3tests_boto3.functional.test_headers.test_bucket_create_bad_authorization_empty: {status: KNOWN}
s3tests_boto3.functional.test_headers.test_bucket_create_bad_authorization_invalid_aws2: {status: KNOWN}
s3tests_boto3.functional.test_headers.test_bucket_create_bad_authorization_none: {status: KNOWN}
s3tests_boto3.functional.test_headers.test_bucket_create_bad_date_none_aws2: {status: KNOWN}
s3tests_boto3.functional.test_headers.test_object_create_bad_authorization_empty: {status: KNOWN}
s3tests_boto3.functional.test_headers.test_object_create_bad_authorization_incorrect_aws2: {status: KNOWN}
s3tests_boto3.functional.test_headers.test_object_create_bad_authorization_invalid_aws2: {status: KNOWN}
s3tests_boto3.functional.test_headers.test_object_create_bad_authorization_none: {status: KNOWN}
s3tests_boto3.functional.test_headers.test_object_create_bad_contentlength_mismatch_above: {status: KNOWN}
s3tests_boto3.functional.test_headers.test_object_create_bad_contentlength_mismatch_below_aws2: {status: KNOWN}
s3tests_boto3.functional.test_headers.test_object_create_bad_contentlength_none: {status: KNOWN}
s3tests_boto3.functional.test_headers.test_object_create_bad_date_none_aws2: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_100_continue: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_abort_multipart_upload: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_atomic_conditional_write_1mb: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_atomic_dual_conditional_write_1mb: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_bucket_acl_grant_email: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_bucket_acl_grant_email_notexist: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_bucket_acl_grant_nonexist_user: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_bucket_acl_grant_userid_fullcontrol: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_bucket_acl_grant_userid_read: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_bucket_acl_grant_userid_readacp: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_bucket_acl_grant_userid_write: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_bucket_acl_grant_userid_writeacp: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_bucket_acl_no_grants: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_bucket_create_exists: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_bucket_create_naming_bad_long: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_bucket_create_naming_bad_punctuation: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_bucket_create_naming_bad_short_empty: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_bucket_head_extended: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_bucket_header_acl_grants: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_bucket_list_delimiter_empty: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_bucket_list_objects_anonymous: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_bucket_list_objects_anonymous_fail: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_bucket_list_return_data: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_bucket_list_return_data_versioning: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_bucket_list_unordered: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_bucket_listv2_delimiter_empty: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_bucket_listv2_objects_anonymous: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_bucket_listv2_objects_anonymous_fail: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_bucket_listv2_unordered: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_bucket_policy: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_bucket_policy_acl: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_bucket_policy_another_bucket: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_bucket_policy_different_tenant: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_bucket_policy_get_obj_acl_existing_tag: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_bucket_policy_get_obj_existing_tag: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_bucket_policy_get_obj_tagging_existing_tag: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_bucket_policy_put_obj_acl: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_bucket_policy_put_obj_copy_source: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_bucket_policy_put_obj_copy_source_meta: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_bucket_policy_put_obj_enc: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_bucket_policy_put_obj_grant: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_bucket_policy_put_obj_request_obj_tag: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_bucket_policy_put_obj_tagging_existing_tag: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_bucket_policy_set_condition_operator_end_with_IfExists: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_bucket_recreate_not_overriding: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_bucketv2_policy: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_bucketv2_policy_acl: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_bucketv2_policy_another_bucket: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_bucketv2_policy_different_tenant: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_cors_header_option: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_cors_origin_response: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_cors_origin_wildcard: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_delete_tags_obj_public: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_encryption_key_no_sse_c: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_encryption_sse_c_invalid_md5: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_encryption_sse_c_method_head: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_encryption_sse_c_multipart_bad_download: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_encryption_sse_c_multipart_invalid_chunks_1: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_encryption_sse_c_multipart_invalid_chunks_2: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_encryption_sse_c_multipart_upload: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_encryption_sse_c_no_key: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_encryption_sse_c_no_md5: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_encryption_sse_c_other_key: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_encryption_sse_c_post_object_authenticated_request: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_encryption_sse_c_present: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_get_obj_head_tagging: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_get_obj_tagging: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_get_tags_acl_public: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_lifecycle_deletemarker_expiration: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_lifecycle_expiration: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_lifecycle_expiration_date: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_lifecycle_expiration_days0: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_lifecycle_expiration_header_head: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_lifecycle_expiration_header_put: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_lifecycle_expiration_versioning_enabled: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_lifecycle_get: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_lifecycle_get_no_id: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_lifecycle_id_too_long: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_lifecycle_multipart_expiration: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_lifecycle_noncur_expiration: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_lifecycle_same_id: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_lifecycle_set: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_lifecycle_set_date: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_lifecycle_set_deletemarker: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_lifecycle_set_empty_filter: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_lifecycle_set_filter: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_lifecycle_set_multipart: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_lifecycle_set_noncurrent: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_lifecyclev2_expiration: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_list_buckets_anonymous: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_list_buckets_invalid_auth: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_logging_toggle: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_multipart_copy_invalid_range: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_multipart_resend_first_finishes_last: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_multipart_upload: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_multipart_upload_empty: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_object_acl_canned_bucketownerread: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_object_anon_put: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_object_anon_put_write_access: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_object_delete_key_bucket_gone: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_object_header_acl_grants: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_object_lock_delete_object_with_legal_hold_off: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_object_lock_delete_object_with_legal_hold_on: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_object_lock_delete_object_with_retention: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_object_lock_get_legal_hold_invalid_bucket: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_object_lock_get_obj_lock: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_object_lock_get_obj_lock_invalid_bucket: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_object_lock_get_obj_metadata: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_object_lock_get_obj_retention: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_object_lock_get_obj_retention_invalid_bucket: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_object_lock_put_legal_hold_invalid_bucket: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_object_lock_put_legal_hold_invalid_status: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_object_lock_put_obj_lock: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_object_lock_put_obj_lock_invalid_bucket: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_object_lock_put_obj_lock_invalid_days: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_object_lock_put_obj_retention: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_object_lock_put_obj_retention_increase_period: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_object_lock_put_obj_retention_invalid_bucket: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_object_lock_put_obj_retention_invalid_mode: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_object_lock_put_obj_retention_override_default_retention: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_object_lock_put_obj_retention_shorten_period: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_object_lock_put_obj_retention_shorten_period_bypass: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_object_lock_put_obj_retention_versionid: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_object_lock_suspend_versioning: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_object_lock_uploading_obj: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_object_raw_get: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_object_raw_get_bucket_acl: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_object_raw_get_bucket_gone: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_object_raw_get_object_acl: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_object_raw_get_object_gone: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_object_raw_get_x_amz_expires_out_max_range: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_object_raw_get_x_amz_expires_out_positive_range: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_object_raw_put_authenticated_expired: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_object_set_get_metadata_empty_to_unreadable_prefix: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_object_set_get_metadata_empty_to_unreadable_suffix: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_object_set_get_metadata_overwrite_to_unreadable_prefix: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_object_set_get_metadata_overwrite_to_unreadable_suffix: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_object_set_get_non_utf8_metadata: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_object_set_get_unicode_metadata: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_post_object_anonymous_request: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_post_object_authenticated_no_content_type: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_post_object_authenticated_request: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_post_object_authenticated_request_bad_access_key: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_post_object_case_insensitive_condition_fields: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_post_object_condition_is_case_sensitive: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_post_object_empty_conditions: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_post_object_escaped_field_values: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_post_object_expired_policy: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_post_object_expires_is_case_sensitive: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_post_object_ignored_header: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_post_object_invalid_access_key: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_post_object_invalid_content_length_argument: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_post_object_invalid_date_format: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_post_object_invalid_request_field_value: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_post_object_invalid_signature: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_post_object_missing_conditions_list: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_post_object_missing_content_length_argument: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_post_object_missing_expires_condition: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_post_object_missing_policy_condition: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_post_object_missing_signature: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_post_object_no_key_specified: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_post_object_request_missing_policy_specified_field: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_post_object_set_invalid_success_code: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_post_object_set_key_from_filename: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_post_object_set_success_code: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_post_object_success_redirect_action: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_post_object_tags_anonymous_request: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_post_object_tags_authenticated_request: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_post_object_upload_larger_than_chunk: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_post_object_upload_size_below_minimum: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_post_object_upload_size_limit_exceeded: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_post_object_user_specified_header: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_put_delete_tags: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_put_excess_key_tags: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_put_excess_tags: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_put_excess_val_tags: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_put_max_kvsize_tags: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_put_max_tags: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_put_modify_tags: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_put_obj_with_tags: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_put_object_ifmatch_failed: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_put_object_ifmatch_good: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_put_object_ifmatch_nonexisted_failed: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_put_object_ifmatch_overwrite_existed_good: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_put_object_ifnonmatch_failed: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_put_object_ifnonmatch_good: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_put_object_ifnonmatch_nonexisted_good: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_put_object_ifnonmatch_overwrite_existed_failed: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_put_tags_acl_public: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_set_cors: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_set_tagging: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_sse_kms_method_head: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_sse_kms_multipart_invalid_chunks_1: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_sse_kms_multipart_invalid_chunks_2: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_sse_kms_multipart_upload: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_sse_kms_not_declared: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_sse_kms_post_object_authenticated_request: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_sse_kms_present: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_sse_kms_read_declare: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_sse_kms_transfer_13b: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_sse_kms_transfer_1MB: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_sse_kms_transfer_1b: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_sse_kms_transfer_1kb: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_versioning_bucket_multipart_upload_return_version_id: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_versioning_multi_object_delete_with_marker_create: {status: KNOWN}
s3tests_boto3.functional.test_s3.test_versioning_obj_plain_null_version_overwrite: {status: KNOWN}
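Most of the entries removed from this known-failure list are
test_versioning_* cases, which lines up with the S3 versioning support added
in 2.24.0. A quick smoke test against an s3api endpoint might look like the
following sketch (endpoint and credentials are placeholders for a test
cluster, and `allow_object_versioning` must be enabled as in the proxy
config change below):

    import boto3

    # Placeholder endpoint and credentials for a SAIO-style test cluster.
    s3 = boto3.client(
        's3',
        endpoint_url='http://127.0.0.1:8080',
        aws_access_key_id='test:tester',
        aws_secret_access_key='testing',
    )

    s3.create_bucket(Bucket='versioned-bucket')
    s3.put_bucket_versioning(
        Bucket='versioned-bucket',
        VersioningConfiguration={'Status': 'Enabled'},
    )
    s3.put_object(Bucket='versioned-bucket', Key='obj', Body=b'v1')
    s3.put_object(Bucket='versioned-bucket', Key='obj', Body=b'v2')

    # Both versions should now be listed.
    versions = s3.list_object_versions(Bucket='versioned-bucket')
    print([v['VersionId'] for v in versions.get('Versions', [])])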


@@ -25,7 +25,6 @@ log_level = INFO
[object-expirer]
interval = 300
# auto_create_account_prefix = .
# report_interval = 300
# concurrency is the level of concurrency to use to do the work, this value
# must be set to at least 1


@@ -68,6 +68,7 @@ use = egg:swift#gatekeeper
[filter:versioned_writes]
use = egg:swift#versioned_writes
allow_versioned_writes = true
allow_object_versioning = true
[filter:copy]
use = egg:swift#copy
@@ -82,7 +83,6 @@ use = egg:swift#symlink
[filter:s3api]
use = egg:swift#s3api
s3_acl = yes
dns_compliant_bucket_names = no
check_bucket_owner = yes
# Example to create root secret: `openssl rand -base64 32`
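The root secret mentioned in that comment can equally be generated in
Python; this is equivalent to `openssl rand -base64 32`:

    import base64
    import os

    # 32 random bytes, base64-encoded.
    print(base64.b64encode(os.urandom(32)).decode('ascii'))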


@@ -1362,20 +1362,29 @@ Swift services are generally managed with ``swift-init``. the general usage is
``swift-init <service> <command>``, where service is the Swift service to
manage (for example object, container, account, proxy) and command is one of:
========== ===============================================
Command    Description
---------- -----------------------------------------------
start      Start the service
stop       Stop the service
restart    Restart the service
shutdown   Attempt to gracefully shutdown the service
reload     Attempt to gracefully restart the service
========== ===============================================
=============== ===============================================
Command         Description
--------------- -----------------------------------------------
start           Start the service
stop            Stop the service
restart         Restart the service
shutdown        Attempt to gracefully shutdown the service
reload          Attempt to gracefully restart the service
reload-seamless Attempt to seamlessly restart the service
=============== ===============================================
A graceful shutdown or reload will finish any current requests before
completely stopping the old service. There is also a special case of
``swift-init all <command>``, which will run the command for all swift
services.
A graceful shutdown or reload will allow all server workers to finish any
current requests before exiting. The parent server process exits immediately.
A seamless reload will make new configuration settings active, with no window
where client requests fail due to there being no active listen socket.
The parent server process will re-exec itself, retaining its existing PID.
After the re-exec'ed parent server process binds its listen sockets, the old
listen sockets are closed and old server workers finish any current requests
before exiting.
There is also a special case of ``swift-init all <command>``, which will run
the command for all swift services.
In cases where there are multiple configs for a service, a specific config
can be managed with ``swift-init <service>.<config> <command>``.
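For example (illustrative invocations; the ``proxy-server.2`` config name is
hypothetical)::

    swift-init proxy-server reload-seamless
    swift-init all shutdown
    swift-init proxy-server.2 restart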

View File

@ -28,7 +28,7 @@ The supported headers are,
| | Space separated. |
+------------------------------------------------+------------------------------+
In addition the the values set in container metadata, some cluster-wide values
In addition the values set in container metadata, some cluster-wide values
may also be configured using the ``strict_cors_mode``, ``cors_allow_origin``
and ``cors_expose_headers`` in ``proxy-server.conf``. See
``proxy-server.conf-sample`` for more information.
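For instance, a deployment might set cluster-wide defaults in the
``[DEFAULT]`` section of ``proxy-server.conf`` along these lines (the origin
and header values below are illustrative only)::

    [DEFAULT]
    cors_allow_origin = https://dashboard.example.com
    cors_expose_headers = X-Object-Meta-Color
    strict_cors_mode = True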

View File

@ -383,26 +383,28 @@ An example of common configuration file can be found at etc/swift.conf-sample
The following configuration options are available:
=================== ========== =============================================
Option Default Description
------------------- ---------- ---------------------------------------------
max_header_size 8192 max_header_size is the max number of bytes in
the utf8 encoding of each header. Using 8192
as default because eventlet use 8192 as max
size of header line. This value may need to
be increased when using identity v3 API
tokens including more than 7 catalog entries.
See also include_service_catalog in
proxy-server.conf-sample (documented in
overview_auth.rst).
extra_header_count 0 By default the maximum number of allowed
headers depends on the number of max
allowed metadata settings plus a default
value of 32 for regular http headers.
If for some reason this is not enough (custom
middleware for example) it can be increased
with the extra_header_count constraint.
=================== ========== =============================================
========================== ========== =============================================
Option Default Description
-------------------------- ---------- ---------------------------------------------
max_header_size 8192 max_header_size is the max number of bytes in
the utf8 encoding of each header. Using 8192
as default because eventlet use 8192 as max
size of header line. This value may need to
be increased when using identity v3 API
tokens including more than 7 catalog entries.
See also include_service_catalog in
proxy-server.conf-sample (documented in
overview_auth.rst).
extra_header_count 0 By default the maximum number of allowed
headers depends on the number of max
allowed metadata settings plus a default
value of 32 for regular http headers.
If for some reason this is not enough (custom
middleware for example) it can be increased
with the extra_header_count constraint.
auto_create_account_prefix . Prefix used when automatically creating
accounts.
========================== ========== =============================================
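These options may be overridden in the ``[swift-constraints]`` section of
``/etc/swift/swift.conf``; for example (the values below are illustrative
only)::

    [swift-constraints]
    max_header_size = 16384
    extra_header_count = 4
    # auto_create_account_prefix defaults to '.' and should rarely be changed
    auto_create_account_prefix = .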
---------------------------
Object Server Configuration
@ -600,8 +602,6 @@ allowed_headers Content-Disposition, Comma separated list o
Content-Language,
Expires,
X-Robots-Tag
auto_create_account_prefix . Prefix used when automatically
creating accounts.
replication_server Configure parameter for creating
specific server. To handle all verbs,
including replication verbs, do not
@ -1017,8 +1017,6 @@ log_address /dev/log Logging directory
interval 300 Time in seconds to wait between
expirer passes
report_interval 300 Frequency of status logs in seconds.
auto_create_account_prefix . Prefix used when automatically
creating accounts.
concurrency 1 Level of concurrency to use to do the work,
this value must be set to at least 1
expiring_objects_account_name expiring_objects name for legacy expirer task queue
@ -1196,7 +1194,6 @@ set log_address /dev/log Logging directory
node_timeout 3 Request timeout to external services
conn_timeout 0.5 Connection timeout to external services
allow_versions false Enable/Disable object versioning feature
auto_create_account_prefix . Prefix used when automatically
replication_server Configure parameter for creating
specific server. To handle all verbs,
including replication verbs, do not
@ -1551,8 +1548,6 @@ set log_level INFO Logging level
set log_requests True Whether or not to log each
request
set log_address /dev/log Logging directory
auto_create_account_prefix . Prefix used when automatically
creating accounts.
replication_server Configure parameter for creating
specific server. To handle all verbs,
including replication verbs, do not
@ -1748,7 +1743,7 @@ delay_reaping 0 Normally, the reaper begins deleting
time for the container-updater to report
to the account before the account DB is
removed.
reap_warn_after 2892000 If the account fails to be be reaped due
reap_warn_after 2892000 If the account fails to be reaped due
to a persistent error, the account reaper
will log a message such as:
Account <name> has not been reaped since <date>

View File

@ -1,5 +1,7 @@
.. _saio:
=======================
SAIO - Swift All In One
SAIO (Swift All In One)
=======================
.. note::

View File

@ -84,6 +84,7 @@ start_time High-resolution timestamp from the start of the request.
(timestamp)
end_time High-resolution timestamp from the end of the request.
(timestamp)
ttfb Duration between the request and the first bytes being sent.
policy_index The value of the storage policy index.
account The account part extracted from the path of the request.
(anonymizable)
@ -91,6 +92,7 @@ container The container part extracted from the path of the request.
(anonymizable)
object The object part extracted from the path of the request.
(anonymizable)
pid PID of the process emitting the log line.
=================== ==========================================================
In one log line, all of the above fields are space-separated and url-encoded.
@ -126,6 +128,7 @@ SW :ref:`staticweb`
TU :ref:`tempurl`
BD :ref:`bulk` (delete)
EA :ref:`bulk` (extract)
AQ :ref:`account-quotas`
CQ :ref:`container-quotas`
CS :ref:`container-sync`
TA :ref:`common_tempauth`
@ -138,6 +141,9 @@ VW :ref:`versioned_writes`
SSC :ref:`copy`
SYM :ref:`symlink`
SH :ref:`sharding_doc`
S3 :ref:`s3api`
OV :ref:`object_versioning`
EQ :ref:`etag_quoter`
======================= =============================

View File

@ -4,6 +4,8 @@
Middleware
**********
.. _account-quotas:
Account Quotas
==============
@ -205,6 +207,15 @@ Encryption middleware should be deployed in conjunction with the
:members:
:show-inheritance:
.. _etag_quoter:
Etag Quoter
===========
.. automodule:: swift.common.middleware.etag_quoter
:members:
:show-inheritance:
.. _formpost:
FormPost
@ -276,12 +287,12 @@ Name Check (Forbidden Character Filter)
:members:
:show-inheritance:
.. _versioned_writes:
.. _object_versioning:
Object Versioning
=================
.. automodule:: swift.common.middleware.versioned_writes
.. automodule:: swift.common.middleware.versioned_writes.object_versioning
:members:
:show-inheritance:
@ -369,6 +380,15 @@ TempURL
:members:
:show-inheritance:
.. _versioned_writes:
Versioned Writes
=================
.. automodule:: swift.common.middleware.versioned_writes.legacy
:members:
:show-inheritance:
XProfile
==============

View File

@ -404,7 +404,7 @@ In summary, the DB states that any container replica may be in are:
more shard ranges in addition to its own shard range. The retiring database
has been unlinked.
- COLLAPSED - There is only one database, the fresh database, which has only
its its own shard range and store object records.
its own shard range and store object records.
.. note::

View File

@ -45,6 +45,12 @@ synchronization key.
are being synced, then you should follow the instructions for
:ref:`symlink_container_sync_client_config` to be compatible with symlinks.
Be aware that symlinks may be synced before their targets even if they are
in the same container and were created after the target objects. In such
cases, a GET for the symlink will fail with a ``404 Not Found`` error. If
the target has been overwritten, a GET may produce an older version (for
dynamic links) or a ``409 Conflict`` error (for static links).
--------------------------
Configuring Container Sync
--------------------------
@ -69,34 +75,34 @@ Each section name is the name of a sync realm. A sync realm is a set of
clusters that have agreed to allow container syncing with each other. Realm
names will be considered case insensitive.
The key is the overall cluster-to-cluster key used in combination with the
``key`` is the overall cluster-to-cluster key used in combination with the
external users' key that they set on their containers'
``X-Container-Sync-Key`` metadata header values. These keys will be used to
sign each request the container sync daemon makes and used to validate each
incoming container sync request.
The key2 is optional and is an additional key incoming requests will be checked
against. This is so you can rotate keys if you wish; you move the existing key
to key2 and make a new key value.
``key2`` is optional and is an additional key incoming requests will be checked
against. This is so you can rotate keys if you wish; you move the existing ``key``
to ``key2`` and make a new ``key`` value.
Any values in the realm section whose names begin with ``cluster_`` will
indicate the name and endpoint of a cluster and will be used by external users in
their containers' ``X-Container-Sync-To`` metadata header values with the format
"//realm_name/cluster_name/account_name/container_name". Realm and cluster
``//realm_name/cluster_name/account_name/container_name``. Realm and cluster
names are considered case insensitive.
The endpoint is what the container sync daemon will use when sending out
requests to that cluster. Keep in mind this endpoint must be reachable by all
container servers, since that is where the container sync daemon runs. Note
that the endpoint ends with /v1/ and that the container sync daemon will then
add the account/container/obj name after that.
that the endpoint ends with ``/v1/`` and that the container sync daemon will then
add the ``account/container/obj`` name after that.
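Putting this together, a minimal ``container-sync-realms.conf`` might look
like the following (realm names, cluster names, keys and endpoints are all
illustrative)::

    [realm1]
    key = realm1key
    key2 = realm1key2
    cluster_clogger = http://cluster1.example.com:8080/v1/
    cluster_clacker = http://cluster2.example.com:8080/v1/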
Distribute this ``container-sync-realms.conf`` file to all your proxy servers
and container servers.
You also need to add the container_sync middleware to your proxy pipeline. It
needs to be after any memcache middleware and before any auth middleware. The
container_sync section only needs the "use" item. For example::
``[filter:container_sync]`` section only needs the ``use`` item. For example::
[pipeline:main]
pipeline = healthcheck proxy-logging cache container_sync tempauth proxy-logging proxy-server
@ -106,9 +112,9 @@ container_sync section only needs the "use" item. For example::
The container sync daemon will use an internal client to sync objects. Even if
you don't configure the internal client, the container sync daemon will work
with default configuration. The default configuration is as same as
with default configuration. The default configuration is the same as
``internal-client.conf-sample``. If you want to configure the internal client,
please update ``internal_client_conf_path`` of container-server.conf. The
please update ``internal_client_conf_path`` in ``container-server.conf``. The
configuration file at the path will be used for the internal client.
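For example (the path is illustrative, and the option typically lives in the
``[container-sync]`` section of ``container-server.conf``)::

    [container-sync]
    internal_client_conf_path = /etc/swift/my-internal-client.conf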
-------------------------------------------------------
@ -146,12 +152,12 @@ backend container server needs to be given this list of hosts in the
Logging Container Sync
----------------------
Tracking sync progress, problems, and just general activity can only be
achieved with log processing currently for container synchronization. In that
light, you may wish to set the above `log_` options to direct the
Currently, log processing is the only way to track sync progress, problems,
and even just general activity for container synchronization. In that
light, you may wish to set the above ``log_`` options to direct the
container-sync logs to a different file for easier monitoring. Additionally, it
should be noted there is no way for an end user to detect sync progress or
problems other than HEADing both containers and comparing the overall
should be noted there is no way for an end user to monitor sync progress or
detect problems other than HEADing both containers and comparing the overall
information.
@ -160,40 +166,57 @@ information.
Container Sync Statistics
-----------------------------
Container Sync INFO level logs contains activity metrics and accounting
information foe insightful tracking.
Container Sync INFO level logs contain activity metrics and accounting
information for insightful tracking.
Currently two different statistics are collected:
About once an hour or so, accumulated statistics of all operations performed
by Container Sync are reported to the log file with the following format:
"Since (time): (sync) synced [(delete) deletes, (put) puts], (skip) skipped,
(fail) failed"
time: last report time
sync: number of containers with sync turned on that were successfully synced
delete: number of successful DELETE object requests to the target cluster
put: number of successful PUT object request to the target cluster
skip: number of containers whose sync has been turned off, but are not
yet cleared from the sync store
fail: number of containers with failure (due to exception, timeout or other
reason)
by Container Sync are reported to the log file with the following format::
Since (time): (sync) synced [(delete) deletes, (put) puts], (skip) skipped, (fail) failed
time
last report time
sync
number of containers with sync turned on that were successfully synced
delete
number of successful DELETE object requests to the target cluster
put
number of successful PUT object request to the target cluster
skip
number of containers whose sync has been turned off, but are not
yet cleared from the sync store
fail
number of containers with failure (due to exception, timeout or other
reason)
For each container synced, per container statistics are reported with the
following format:
Container sync report: (container), time window start: (start), time window
end: %(end), puts: (puts), posts: (posts), deletes: (deletes), bytes: (bytes),
sync_point1: (point1), sync_point2: (point2), total_rows: (total)
container: account/container statistics are for
start: report start time
end: report end time
puts: number of successful PUT object requests to the target container
posts: N/A (0)
deletes: number of successful DELETE object requests to the target container
bytes: number of bytes sent over the network to the target container
point1: progress indication - the container's x_container_sync_point1
point2: progress indication - the container's x_container_sync_point2
total: number of objects processed at the container
following format::
it is possible that more than one server syncs a container, therefore logfiles
Container sync report: (container), time window start: (start), time window end: %(end), puts: (puts), posts: (posts), deletes: (deletes), bytes: (bytes), sync_point1: (point1), sync_point2: (point2), total_rows: (total)
container
account/container statistics are for
start
report start time
end
report end time
puts
number of successful PUT object requests to the target container
posts
N/A (0)
deletes
number of successful DELETE object requests to the target container
bytes
number of bytes sent over the network to the target container
point1
progress indication - the container's ``x_container_sync_point1``
point2
progress indication - the container's ``x_container_sync_point2``
total
number of objects processed at the container
It is possible that more than one server syncs a container, therefore log files
from all servers need to be evaluated.
@ -239,11 +262,11 @@ we'll make next::
-k 'secret' container1
The ``-t`` indicates the cluster to sync to, which is the realm name of the
section from container-sync-realms.conf, followed by the cluster name from
that section (without the cluster\_ prefix), followed by the account and container
section from ``container-sync-realms.conf``, followed by the cluster name from
that section (without the ``cluster_`` prefix), followed by the account and container
names we want to sync to. The ``-k`` specifies the secret key the two containers will share for
synchronization; this is the user key, the cluster key in
container-sync-realms.conf will also be used behind the scenes.
``container-sync-realms.conf`` will also be used behind the scenes.
Now, we'll do something similar for the second cluster's container::
@ -268,7 +291,7 @@ as it gets synchronized over to the second::
.. note::
If you're an operator running SAIO and just testing, each time you
If you're an operator running :ref:`saio` and just testing, each time you
configure a container for synchronization and place objects in the
source container you will need to ensure that container-sync runs
before attempting to retrieve objects from the target container.
@ -325,7 +348,7 @@ Old-Style: Using the ``swift`` tool to set up synchronized containers
You must be the account admin on the account to set synchronization targets
and keys.
This is for the old-style of container syncing using allowed_sync_hosts.
This is for the old-style of container syncing using ``allowed_sync_hosts``.
You simply tell each container where to sync to and give it a secret
synchronization key. First, let's get the account details for our two cluster
@ -397,7 +420,7 @@ They'd all need to share the same secret synchronization key.
Old-Style: Using curl (or other tools) instead
----------------------------------------------
This is for the old-style of container syncing using allowed_sync_hosts.
This is for the old-style of container syncing using ``allowed_sync_hosts``.
So what's ``swift`` doing behind the scenes? Nothing overly complicated. It
translates the ``-t <value>`` option into an ``X-Container-Sync-To: <value>``
@ -441,8 +464,8 @@ is deleted from ``sync-containers``.
In addition to the container-server, the container-replicator process does the
job of identifying containers that should be synchronized. This is done by
scanning the local devices for container databases and checking for
x-container-sync-to and x-container-sync-key metadata values. If they exist
then a symlink to the container database is created in a sync-containers
``x-container-sync-to`` and ``x-container-sync-key`` metadata values. If they exist
then a symlink to the container database is created in a ``sync-containers``
sub-directory on the same device.
Similarly, when the container sync metadata keys are deleted, the container
@ -465,7 +488,7 @@ Two sync points are kept in each container database. When syncing a
container, the container-sync process figures out which replica of the
container it has. In a standard 3-replica scenario, the process will
have either replica number 0, 1, or 2. This is used to figure out
which rows are belong to this sync process and which ones don't.
which rows belong to this sync process and which ones don't.
An example may help. Assume a replica count of 3 and database row IDs
are 1..6. Also, assume that container-sync is running on this

View File

@ -78,6 +78,12 @@ on all object servers in this phase::
services (replicators, reconstructors and reconcilers) - they have to skip
partitions during relinking.
.. note::
The relinking command must run as the same user as the daemon processes
(usually swift). It will create files and directories that must be
manipulable by the daemon processes (server, auditor, replicator, ...).
Relinking might take some time; while there is no data copied or actually
moved, the tool still needs to walk the whole file system and create new hard
links as required.
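For reference, the relink step is typically invoked along these lines (the
exact invocation may vary by release)::

    sudo -u swift swift-object-relinker relink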

View File

@ -62,7 +62,7 @@ Amazon S3 operations
+------------------------------------------------+------------------+--------------+
| `Object tagging`_ | Core-API | Yes |
+------------------------------------------------+------------------+--------------+
| `Versioning`_ | Versioning | No |
| `Versioning`_ | Versioning | Yes |
+------------------------------------------------+------------------+--------------+
| `Bucket notification`_ | Notifications | No |
+------------------------------------------------+------------------+--------------+

View File

@ -0,0 +1,50 @@
# SAIO (Swift All in One)
SAIO is a containerized instance of OpenStack Swift object storage. It runs
the main services of Swift and is designed to provide an endpoint that
application developers can test against, using both the Swift and AWS S3
APIs. It can also be used when integrating with a CI/CD system. These images
are not configured to provide data durability and are not intended for
production use.
# Quickstart
```
docker pull openstackswift/saio
docker run -d -p 8080:8080 openstackswift/saio
```
### Test against Swift API:
Example using the swift client to target the endpoint:
```
swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing stat
```
### Test against S3 API:
Example using s3cmd to test the S3 API:
1. Create a config file (e.g. `s3cfg_saio`, matching the command below):
```
[default]
access_key = test:tester
secret_key = testing
host_base = localhost:8080
host_bucket = localhost:8080
use_https = False
```
2. Test with s3cmd:
```
s3cmd -c s3cfg_saio mb s3://bucket
```
# Quick Reference
- **Image tags**: `latest` is automatically built/published by Zuul and
follows the master branch. Releases are also tagged in case you want to
test against a specific release.
- **Source Code**: github.com/openstack/swift
- **Maintained by**: OpenStack Swift community
- **Feedback/Questions**: #openstack-swift on freenode

View File

@ -64,6 +64,7 @@ use = egg:swift#gatekeeper
[filter:versioned_writes]
use = egg:swift#versioned_writes
allow_versioned_writes = true
allow_object_versioning = true
[filter:copy]
use = egg:swift#copy

View File

@ -57,6 +57,9 @@ bind_port = 6202
# on to preallocate disk space with SQLite databases to decrease fragmentation.
# db_preallocation = off
#
# Enable this option to log all sqlite3 queries (requires python >=3.3)
# db_query_logging = off
#
# eventlet_debug = false
#
# You can set fallocate_reserve to the number of bytes or percentage of disk
@ -88,8 +91,6 @@ use = egg:swift#account
# set log_requests = true
# set log_address = /dev/log
#
# auto_create_account_prefix = .
#
# Configure parameter for creating specific server
# To handle all verbs, including replication verbs, do not specify
# "replication_server" (this is the default). To only handle replication,

View File

@ -63,6 +63,9 @@ bind_port = 6201
# on to preallocate disk space with SQLite databases to decrease fragmentation.
# db_preallocation = off
#
# Enable this option to log all sqlite3 queries (requires python >=3.3)
# db_query_logging = off
#
# eventlet_debug = false
#
# You can set fallocate_reserve to the number of bytes or percentage of disk
@ -81,10 +84,6 @@ bind_port = 6201
# Work only with ionice_class.
# ionice_class =
# ionice_priority =
#
# The prefix used for hidden auto-created accounts, for example accounts in
# which shard containers are created. Defaults to '.'.
# auto_create_account_prefix = .
[pipeline:main]
pipeline = healthcheck recon container-server
@ -101,7 +100,6 @@ use = egg:swift#container
# node_timeout = 3
# conn_timeout = 0.5
# allow_versions = false
# auto_create_account_prefix = .
#
# Configure parameter for creating specific server
# To handle all verbs, including replication verbs, do not specify

View File

@ -39,7 +39,6 @@
[object-expirer]
# interval = 300
# auto_create_account_prefix = .
# expiring_objects_account_name = expiring_objects
# report_interval = 300
#

View File

@ -133,9 +133,6 @@ use = egg:swift#object
# This list is in addition to X-Object-Meta-* headers and cannot include
# Content-Type, etag, Content-Length, or deleted
# allowed_headers = Content-Disposition, Content-Encoding, X-Delete-At, X-Object-Manifest, X-Static-Large-Object, Cache-Control, Content-Language, Expires, X-Robots-Tag
#
# auto_create_account_prefix = .
#
# The number of threads in eventlet's thread pool. Most IO will occur
# in the object server's main thread, but certain "heavy" IO

View File

@ -16,12 +16,13 @@ bind_port = 8080
#
# Allows the ability to withhold sections from showing up in the public calls
# to /info. You can withhold subsections by separating the dict level with a
# ".". The following would cause the sections 'container_quotas' and 'tempurl'
# to not be listed, and the key max_failed_deletes would be removed from
# bulk_delete. Default value is 'swift.valid_api_versions' which allows all
# registered features to be listed via HTTP GET /info except
# swift.valid_api_versions information
# disallowed_sections = swift.valid_api_versions, container_quotas, tempurl
# ".". Default value is 'swift.valid_api_versions, swift.auto_create_account_prefix'
# which allows all registered features to be listed via HTTP GET /info except
# swift.valid_api_versions and swift.auto_create_account_prefix information.
# As an example, the following would cause the sections 'container_quotas' and
# 'tempurl' to not be listed, and the key max_failed_deletes would be removed from
# bulk_delete.
# disallowed_sections = swift.valid_api_versions, container_quotas, tempurl, bulk_delete.max_failed_deletes
# Use an integer to override the number of pre-forked processes that will
# accept connections. Should default to the number of effective cpu
@ -185,12 +186,6 @@ use = egg:swift#proxy
# Comma separated list of Host headers to which the proxy will deny requests.
# deny_host_headers =
#
# Prefix used when automatically creating accounts.
# auto_create_account_prefix = .
#
# Depth of the proxy put queue.
# put_queue_depth = 10
#
# During GET and HEAD requests, storage nodes can be chosen at random
# (shuffle), by using timing measurements (timing), or by using an explicit
# region/zone match (affinity). Using timing measurements may allow for lower
@ -603,6 +598,17 @@ auth_uri = http://keystonehost:5000/v3
# Connect/read timeout to use when communicating with Keystone
http_timeout = 10.0
# Number of seconds to cache the S3 secret. By setting this to a positive
# number, the S3 authorization validation checks can happen locally.
# secret_cache_duration = 0
# If S3 secret caching is enabled, Keystone auth credentials to be used to
# validate S3 authorization must be provided here. The appropriate options
# are the same as used in the authtoken middleware above. The values are
# likely the same as used in the authtoken middleware.
# Note that the Keystone auth credentials used by s3token will need to be
# able to view all project credentials too.
# SSL-related options
# insecure = False
# certfile =
@ -855,6 +861,17 @@ use = egg:swift#name_check
# maximum_length = 255
# forbidden_regexp = /\./|/\.\./|/\.$|/\.\.$
# Note: Etag quoter should be placed just after cache in the pipeline.
[filter:etag-quoter]
use = egg:swift#etag_quoter
# Historically, Swift has emitted bare MD5 hex digests as ETags, which is not
# RFC compliant. With this middleware in the pipeline, users can opt-in to
# RFC-compliant ETags on a per-account or per-container basis.
#
# Set to true to enable RFC-compliant ETags cluster-wide by default. Users
# can still opt-out by setting appropriate account or container metadata.
# enable_by_default = false
[filter:list-endpoints]
use = egg:swift#list_endpoints
# list_endpoints_path = /endpoints/
@ -1011,6 +1028,11 @@ use = egg:swift#gatekeeper
# difficult-to-delete data.
# shunt_inbound_x_timestamp = true
#
# Set this to true if you want to allow clients to access and manipulate the
# (normally internal-to-swift) null namespace by including a header like
# X-Allow-Reserved-Names: true
# allow_reserved_names_header = false
#
# You can override the default log routing for this filter here:
# set log_name = gatekeeper
# set log_facility = LOG_LOCAL0
@ -1072,6 +1094,8 @@ use = egg:swift#versioned_writes
# in the container configuration file, which will be eventually
# deprecated. See documentation for more details.
# allow_versioned_writes = false
# Enables Swift object-versioning API
# allow_object_versioning = false
# Note: Put after auth and before dlo and slo middlewares.
# If you don't put it in the pipeline, it will be inserted for you.
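#
# As an illustration (not a config option): once allow_object_versioning is
# enabled, a client opts a container in to versioning by setting a header
# such as ``X-Versions-Enabled: true`` on a container PUT or POST; see the
# object versioning middleware documentation.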

View File

@ -194,3 +194,8 @@ aliases = yellow, orange
# api versions are by default excluded from /info.
# valid_api_versions = v1,v1.0
# The prefix used for hidden auto-created accounts, for example accounts in
# which shard containers are created. It defaults to '.'; don't change it.
# auto_create_account_prefix = .

View File

@ -0,0 +1,92 @@
---
features:
- |
Added a new object versioning mode, with APIs for querying and
accessing old versions. For more information, see `the documentation
<https://docs.openstack.org/swift/latest/middleware.html#module-swift.common.middleware.versioned_writes.object_versioning>`__.
- |
Added support for S3 versioning using the above new mode.
- |
Added a new middleware to allow accounts and containers to opt-in to
RFC-compliant ETags. For more information, see `the documentation
<https://docs.openstack.org/swift/latest/middleware.html#module-swift.common.middleware.etag_quoter>`__.
Clients should be aware of the fact that ETags may be quoted for RFC
compliance; this may become the default behavior in some future release.
- |
Proxy, account, container, and object servers now support "seamless
reloads" via ``SIGUSR1``. This is similar to the existing graceful
restarts but keeps the server socket open the whole time, reducing
service downtime.
- |
New buckets created via the S3 API will now store multi-part upload
data in the same storage policy as other data rather than the
cluster's default storage policy.
- |
Device region and zone can now be changed via ``swift-ring-builder``.
Note that this may cause a lot of data movement on the next rebalance
as the builder tries to reach full dispersion.
- |
Added support for Python 3.8.
deprecations:
- |
Per-service ``auto_create_account_prefix`` settings are now deprecated
and may be ignored in a future release; if you need to use this, please
set it in the ``[swift-constraints]`` section of ``/etc/swift/swift.conf``.
fixes:
- |
The container sharder can now handle containers with special
characters in their names.
- |
Internal client no longer logs object DELETEs as status 499.
- |
Objects with an ``X-Delete-At`` value in the far future no longer cause
backend server errors.
- |
The bulk extract middleware once again allows clients to specify metadata
(including expiration timestamps) for all objects in the archive.
- |
Container sync now synchronizes static symlinks in a way similar to
static large objects.
- |
``swift_source`` is set for more sub-requests in the proxy-server. See
`the documentation <https://docs.openstack.org/swift/latest/logs.html#swift-source>`__.
- |
Errors encountered while validating static symlink targets no longer
cause ``BadResponseLength`` errors in the proxy-server.
- |
On Python 3, the KMS keymaster now works with secrets stored
in Barbican with a ``text/plain`` payload-content-type.
- |
On Python 3, the formpost middleware now works with unicode file names.
- |
On Python 3, certain S3 API headers are now lower case, as they would be
when coming from AWS.
- |
Several utility scripts now work better on Python 3:
* ``swift-account-audit``
* ``swift-dispersion-populate``
* ``swift-drive-recon``
* ``swift-recon``

View File

@ -129,6 +129,7 @@ paste.filter_factory =
symlink = swift.common.middleware.symlink:filter_factory
s3api = swift.common.middleware.s3api.s3api:filter_factory
s3token = swift.common.middleware.s3api.s3token:filter_factory
etag_quoter = swift.common.middleware.etag_quoter:filter_factory
swift.diskfile =
replication.fs = swift.obj.diskfile:DiskFileManager
@ -156,6 +157,9 @@ keywords = _ l_ lazy_gettext
mapping_file = babel.cfg
output_file = swift/locale/swift.pot
[bdist_wheel]
universal = 1
[nosetests]
exe = 1
verbosity = 2

View File

@ -22,7 +22,7 @@ import sqlite3
import six
from swift.common.utils import Timestamp
from swift.common.utils import Timestamp, RESERVED_BYTE
from swift.common.db import DatabaseBroker, utf8encode, zero_like
DATADIR = 'accounts'
@ -189,20 +189,6 @@ class AccountBroker(DatabaseBroker):
self._db_version = 1
return self._db_version
def _delete_db(self, conn, timestamp, force=False):
"""
Mark the DB as deleted.
:param conn: DB connection object
:param timestamp: timestamp to mark as deleted
"""
conn.execute("""
UPDATE account_stat
SET delete_timestamp = ?,
status = 'DELETED',
status_changed_at = ?
WHERE delete_timestamp < ? """, (timestamp, timestamp, timestamp))
def _commit_puts_load(self, item_list, entry):
"""See :func:`swift.common.db.DatabaseBroker._commit_puts_load`"""
# check to see if the update includes policy_index or not
@ -369,7 +355,7 @@ class AccountBroker(DatabaseBroker):
''').fetchone())
def list_containers_iter(self, limit, marker, end_marker, prefix,
delimiter, reverse=False):
delimiter, reverse=False, allow_reserved=False):
"""
Get a list of containers sorted by name starting at marker onward, up
to limit entries. Entries will begin with the prefix and will not have
@ -381,6 +367,7 @@ class AccountBroker(DatabaseBroker):
:param prefix: prefix query
:param delimiter: delimiter for query
:param reverse: reverse the result order.
:param allow_reserved: if True, include names containing the reserved
    byte; such names are excluded by default
:returns: list of tuples of (name, object_count, bytes_used,
put_timestamp, 0)
@ -424,6 +411,9 @@ class AccountBroker(DatabaseBroker):
elif prefix:
query += ' name >= ? AND'
query_args.append(prefix)
if not allow_reserved:
query += ' name >= ? AND'
query_args.append(chr(ord(RESERVED_BYTE) + 1))
if self.get_db_version(conn) < 1:
query += ' +deleted = 0'
else:
@ -455,7 +445,7 @@ class AccountBroker(DatabaseBroker):
curs.close()
return results
end = name.find(delimiter, len(prefix))
if end > 0:
if end >= 0:
if reverse:
end_marker = name[:end + len(delimiter)]
else:

View File

@ -262,7 +262,7 @@ class AccountReaper(Daemon):
container_limit *= len(nodes)
try:
containers = list(broker.list_containers_iter(
container_limit, '', None, None, None))
container_limit, '', None, None, None, allow_reserved=True))
while containers:
try:
for (container, _junk, _junk, _junk, _junk) in containers:
@ -282,7 +282,8 @@ class AccountReaper(Daemon):
self.logger.exception(
'Exception with containers for account %s', account)
containers = list(broker.list_containers_iter(
container_limit, containers[-1][0], None, None, None))
container_limit, containers[-1][0], None, None, None,
allow_reserved=True))
log_buf = ['Completed pass on account %s' % account]
except (Exception, Timeout):
self.logger.exception('Exception with account %s', account)

View File

@ -26,12 +26,14 @@ from swift.account.backend import AccountBroker, DATADIR
from swift.account.utils import account_listing_response, get_response_headers
from swift.common.db import DatabaseConnectionError, DatabaseAlreadyExists
from swift.common.request_helpers import get_param, \
split_and_validate_path
split_and_validate_path, validate_internal_account, \
validate_internal_container, constrain_req_limit
from swift.common.utils import get_logger, hash_path, public, \
Timestamp, storage_directory, config_true_value, \
timing_stats, replication, get_log_line, \
config_fallocate_value, fs_has_free_space
from swift.common.constraints import valid_timestamp, check_utf8, check_drive
from swift.common.constraints import valid_timestamp, check_utf8, \
check_drive, AUTO_CREATE_ACCOUNT_PREFIX
from swift.common import constraints
from swift.common.db_replicator import ReplicatorRpc
from swift.common.base_storage_server import BaseStorageServer
@ -44,6 +46,32 @@ from swift.common.swob import HTTPAccepted, HTTPBadRequest, \
from swift.common.request_helpers import is_sys_or_user_meta
def get_account_name_and_placement(req):
"""
Split and validate path for an account.
:param req: a swob request
:returns: a tuple of path parts as strings
"""
drive, part, account = split_and_validate_path(req, 3)
validate_internal_account(account)
return drive, part, account
def get_container_name_and_placement(req):
"""
Split and validate path for a container.
:param req: a swob request
:returns: a tuple of path parts as strings
"""
drive, part, account, container = split_and_validate_path(req, 3, 4)
validate_internal_container(account, container)
return drive, part, account, container
class AccountController(BaseStorageServer):
"""WSGI controller for the account server."""
@ -58,10 +86,22 @@ class AccountController(BaseStorageServer):
self.replicator_rpc = ReplicatorRpc(self.root, DATADIR, AccountBroker,
self.mount_check,
logger=self.logger)
self.auto_create_account_prefix = \
conf.get('auto_create_account_prefix') or '.'
if conf.get('auto_create_account_prefix'):
self.logger.warning('Option auto_create_account_prefix is '
'deprecated. Configure '
'auto_create_account_prefix under the '
'swift-constraints section of '
'swift.conf. This option will '
'be ignored in a future release.')
self.auto_create_account_prefix = \
conf['auto_create_account_prefix']
else:
self.auto_create_account_prefix = AUTO_CREATE_ACCOUNT_PREFIX
swift.common.db.DB_PREALLOCATION = \
config_true_value(conf.get('db_preallocation', 'f'))
swift.common.db.QUERY_LOGGING = \
config_true_value(conf.get('db_query_logging', 'f'))
self.fallocate_reserve, self.fallocate_is_percent = \
config_fallocate_value(conf.get('fallocate_reserve', '1%'))
@ -96,7 +136,7 @@ class AccountController(BaseStorageServer):
@timing_stats()
def DELETE(self, req):
"""Handle HTTP DELETE request."""
drive, part, account = split_and_validate_path(req, 3)
drive, part, account = get_account_name_and_placement(req)
try:
check_drive(self.root, drive, self.mount_check)
except ValueError:
@ -120,7 +160,7 @@ class AccountController(BaseStorageServer):
@timing_stats()
def PUT(self, req):
"""Handle HTTP PUT request."""
drive, part, account, container = split_and_validate_path(req, 3, 4)
drive, part, account, container = get_container_name_and_placement(req)
try:
check_drive(self.root, drive, self.mount_check)
except ValueError:
@ -185,7 +225,7 @@ class AccountController(BaseStorageServer):
@timing_stats()
def HEAD(self, req):
"""Handle HTTP HEAD request."""
drive, part, account = split_and_validate_path(req, 3)
drive, part, account = get_account_name_and_placement(req)
out_content_type = listing_formats.get_listing_content_type(req)
try:
check_drive(self.root, drive, self.mount_check)
@ -204,19 +244,11 @@ class AccountController(BaseStorageServer):
@timing_stats()
def GET(self, req):
"""Handle HTTP GET request."""
drive, part, account = split_and_validate_path(req, 3)
drive, part, account = get_account_name_and_placement(req)
prefix = get_param(req, 'prefix')
delimiter = get_param(req, 'delimiter')
limit = constraints.ACCOUNT_LISTING_LIMIT
given_limit = get_param(req, 'limit')
reverse = config_true_value(get_param(req, 'reverse'))
if given_limit and given_limit.isdigit():
limit = int(given_limit)
if limit > constraints.ACCOUNT_LISTING_LIMIT:
return HTTPPreconditionFailed(
request=req,
body='Maximum limit is %d' %
constraints.ACCOUNT_LISTING_LIMIT)
limit = constrain_req_limit(req, constraints.ACCOUNT_LISTING_LIMIT)
marker = get_param(req, 'marker', '')
end_marker = get_param(req, 'end_marker')
out_content_type = listing_formats.get_listing_content_type(req)
@ -262,7 +294,7 @@ class AccountController(BaseStorageServer):
@timing_stats()
def POST(self, req):
"""Handle HTTP POST request."""
drive, part, account = split_and_validate_path(req, 3)
drive, part, account = get_account_name_and_placement(req)
req_timestamp = valid_timestamp(req)
try:
check_drive(self.root, drive, self.mount_check)
@ -280,8 +312,8 @@ class AccountController(BaseStorageServer):
start_time = time.time()
req = Request(env)
self.logger.txn_id = req.headers.get('x-trans-id', None)
if not check_utf8(wsgi_to_str(req.path_info)):
res = HTTPPreconditionFailed(body='Invalid UTF8 or contains NULL')
if not check_utf8(wsgi_to_str(req.path_info), internal=True):
res = HTTPPreconditionFailed(body='Invalid UTF8')
else:
try:
# disallow methods which are not publicly accessible

View File

@ -17,6 +17,7 @@ import json
import six
from swift.common import constraints
from swift.common.middleware import listing_formats
from swift.common.swob import HTTPOk, HTTPNoContent, str_to_wsgi
from swift.common.utils import Timestamp
@ -71,15 +72,17 @@ def get_response_headers(broker):
def account_listing_response(account, req, response_content_type, broker=None,
limit='', marker='', end_marker='', prefix='',
delimiter='', reverse=False):
limit=constraints.ACCOUNT_LISTING_LIMIT,
marker='', end_marker='', prefix='', delimiter='',
reverse=False):
if broker is None:
broker = FakeAccountBroker()
resp_headers = get_response_headers(broker)
account_list = broker.list_containers_iter(limit, marker, end_marker,
prefix, delimiter, reverse)
prefix, delimiter, reverse,
req.allow_reserved_names)
data = []
for (name, object_count, bytes_used, put_timestamp, is_subdir) \
in account_list:

View File

@ -26,10 +26,6 @@ from eventlet import GreenPool, hubs, patcher, Timeout
from eventlet.pools import Pool
from swift.common import direct_client
try:
from swiftclient import get_auth
except ImportError:
from swift.common.internal_client import get_auth
from swift.common.internal_client import SimpleClient
from swift.common.ring import Ring
from swift.common.exceptions import ClientException
@ -365,6 +361,11 @@ Usage: %%prog [options] [conf_file]
def generate_report(conf, policy_name=None):
try:
# Delay importing so urllib3 will import monkey-patched modules
from swiftclient import get_auth
except ImportError:
from swift.common.internal_client import get_auth
global json_output
json_output = config_true_value(conf.get('dump_json', 'no'))
if policy_name is None:

View File

@ -227,6 +227,7 @@ def print_db_info_metadata(db_type, info, metadata, drop_prefixes=False):
if db_type == 'container':
print(' Container: %s' % container)
print(' Deleted: %s' % info['is_deleted'])
path_hash = hash_path(account, container)
if db_type == 'container':
print(' Container Hash: %s' % path_hash)
@ -442,6 +443,7 @@ def print_info(db_type, db_file, swift_dir='/etc/swift', stale_reads_ok=False,
raise
account = info['account']
container = None
info['is_deleted'] = broker.is_deleted()
if db_type == 'container':
container = info['container']
info['is_root'] = broker.is_root_container()

View File

@ -158,6 +158,7 @@ All three steps may be performed with one sub-command::
from __future__ import print_function
import argparse
import json
import os.path
import sys
import time
@ -513,8 +514,8 @@ def main(args=None):
print('\nA sub-command is required.')
return 1
logger = get_logger({}, name='ContainerBroker', log_to_console=True)
broker = ContainerBroker(args.container_db, logger=logger,
skip_commits=True)
broker = ContainerBroker(os.path.realpath(args.container_db),
logger=logger, skip_commits=True)
try:
broker.get_info()
except Exception as exc:

View File

@ -39,6 +39,7 @@ MAX_ACCOUNT_NAME_LENGTH = 256
MAX_CONTAINER_NAME_LENGTH = 256
VALID_API_VERSIONS = ["v1", "v1.0"]
EXTRA_HEADER_COUNT = 0
AUTO_CREATE_ACCOUNT_PREFIX = '.'
# If adding an entry to DEFAULT_CONSTRAINTS, note that
# these constraints are automatically published by the
@ -58,6 +59,7 @@ DEFAULT_CONSTRAINTS = {
'max_container_name_length': MAX_CONTAINER_NAME_LENGTH,
'valid_api_versions': VALID_API_VERSIONS,
'extra_header_count': EXTRA_HEADER_COUNT,
'auto_create_account_prefix': AUTO_CREATE_ACCOUNT_PREFIX,
}
SWIFT_CONSTRAINTS_LOADED = False
@ -76,7 +78,7 @@ def reload_constraints():
constraints_conf = ConfigParser()
if constraints_conf.read(utils.SWIFT_CONF_FILE):
SWIFT_CONSTRAINTS_LOADED = True
for name in DEFAULT_CONSTRAINTS:
for name, default in DEFAULT_CONSTRAINTS.items():
try:
value = constraints_conf.get('swift-constraints', name)
except NoOptionError:
@ -85,9 +87,12 @@ def reload_constraints():
# We are never going to find the section for another option
break
else:
try:
value = int(value)
except ValueError:
if isinstance(default, int):
value = int(value) # Go ahead and let it error
elif isinstance(default, str):
pass # No translation needed, I guess
else:
# Hope we want a list!
value = utils.list_from_csv(value)
OVERRIDE_CONSTRAINTS[name] = value
for name, default in DEFAULT_CONSTRAINTS.items():
@ -346,12 +351,13 @@ def check_delete_headers(request):
return request
def check_utf8(string):
def check_utf8(string, internal=False):
"""
Validate if a string is valid UTF-8 str or unicode and that it
does not contain any null character.
does not contain any reserved characters.
:param string: string to be validated
:param internal: boolean, allows reserved characters if True
:returns: True if the string is valid utf-8 str or unicode and
contains no null characters, False otherwise
"""
@ -382,7 +388,9 @@ def check_utf8(string):
if any(0xD800 <= ord(codepoint) <= 0xDFFF
for codepoint in decoded):
return False
return b'\x00' not in encoded
if b'\x00' != utils.RESERVED_BYTE and b'\x00' in encoded:
return False
return True if internal else utils.RESERVED_BYTE not in encoded
# If string is unicode, decode() will raise UnicodeEncodeError
# So, we should catch both UnicodeDecodeError & UnicodeEncodeError
except UnicodeError:
@ -413,6 +421,7 @@ def check_name_format(req, name, target_type):
body='%s name cannot contain slashes' % target_type)
return name
check_account_format = functools.partial(check_name_format,
target_type='Account')
check_container_format = functools.partial(check_name_format,
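# Illustration (a sketch, not part of the change above): behaviour of the new
# ``internal`` flag, assuming utils.RESERVED_BYTE is the null byte as the
# guard above suggests:
#   check_utf8('obj')                        # True
#   check_utf8('ver\x00obj')                 # False: reserved byte in client input
#   check_utf8('ver\x00obj', internal=True)  # True: allowed on internal requests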

View File

@ -132,6 +132,7 @@ class DaemonStrategy(object):
def setup(self, **kwargs):
utils.validate_configuration()
utils.drop_privileges(self.daemon.conf.get('user', 'swift'))
utils.clean_up_daemon_hygiene()
utils.capture_stdio(self.logger, **kwargs)
def kill_children(*args):

View File

@ -43,6 +43,8 @@ from swift.common.swob import HTTPBadRequest
#: Whether calls will be made to preallocate disk space for database files.
DB_PREALLOCATION = False
#: Whether calls will be made to log queries (py3 only)
QUERY_LOGGING = False
#: Timeout for trying to connect to a DB
BROKER_TIMEOUT = 25
#: Pickle protocol to use
@ -57,19 +59,23 @@ def utf8encode(*args):
for s in args]
def native_str_keys(metadata):
def native_str_keys_and_values(metadata):
if six.PY2:
uni_keys = [k for k in metadata if isinstance(k, six.text_type)]
for k in uni_keys:
sv = metadata[k]
del metadata[k]
metadata[k.encode('utf-8')] = sv
metadata[k.encode('utf-8')] = [
x.encode('utf-8') if isinstance(x, six.text_type) else x
for x in sv]
else:
bin_keys = [k for k in metadata if isinstance(k, six.binary_type)]
for k in bin_keys:
sv = metadata[k]
del metadata[k]
metadata[k.decode('utf-8')] = sv
metadata[k.decode('utf-8')] = [
x.decode('utf-8') if isinstance(x, six.binary_type) else x
for x in sv]
ZERO_LIKE_VALUES = {None, '', 0, '0'}
@ -181,7 +187,7 @@ def chexor(old, name, timestamp):
return '%032x' % (int(old, 16) ^ int(new, 16))
def get_db_connection(path, timeout=30, okay_to_create=False):
def get_db_connection(path, timeout=30, logger=None, okay_to_create=False):
"""
Returns a properly configured SQLite database connection.
@ -194,6 +200,8 @@ def get_db_connection(path, timeout=30, okay_to_create=False):
connect_time = time.time()
conn = sqlite3.connect(path, check_same_thread=False,
factory=GreenDBConnection, timeout=timeout)
if QUERY_LOGGING and logger and not six.PY2:
conn.set_trace_callback(logger.debug)
if path != ':memory:' and not okay_to_create:
# attempt to detect and fail when connect creates the db file
stat = os.stat(path)
@ -272,13 +280,15 @@ class DatabaseBroker(object):
"""
if self._db_file == ':memory:':
tmp_db_file = None
conn = get_db_connection(self._db_file, self.timeout)
conn = get_db_connection(self._db_file, self.timeout, self.logger)
else:
mkdirs(self.db_dir)
fd, tmp_db_file = mkstemp(suffix='.tmp', dir=self.db_dir)
os.close(fd)
conn = sqlite3.connect(tmp_db_file, check_same_thread=False,
factory=GreenDBConnection, timeout=0)
if QUERY_LOGGING and not six.PY2:
conn.set_trace_callback(self.logger.debug)
# creating dbs implicitly does a lot of transactions, so we
# pick fast, unsafe options here and do a big fsync at the end.
with closing(conn.cursor()) as cur:
@ -339,7 +349,8 @@ class DatabaseBroker(object):
# of the system were "racing" each other.
raise DatabaseAlreadyExists(self.db_file)
renamer(tmp_db_file, self.db_file)
self.conn = get_db_connection(self.db_file, self.timeout)
self.conn = get_db_connection(self.db_file, self.timeout,
self.logger)
else:
self.conn = conn
@ -356,7 +367,14 @@ class DatabaseBroker(object):
self.update_metadata(cleared_meta)
# then mark the db as deleted
with self.get() as conn:
self._delete_db(conn, timestamp)
conn.execute(
"""
UPDATE %s_stat
SET delete_timestamp = ?,
status = 'DELETED',
status_changed_at = ?
WHERE delete_timestamp < ? """ % self.db_type,
(timestamp, timestamp, timestamp))
conn.commit()
@property
@ -442,7 +460,8 @@ class DatabaseBroker(object):
if not self.conn:
if self.db_file != ':memory:' and os.path.exists(self.db_file):
try:
self.conn = get_db_connection(self.db_file, self.timeout)
self.conn = get_db_connection(self.db_file, self.timeout,
self.logger)
except (sqlite3.DatabaseError, DatabaseConnectionError):
self.possibly_quarantine(*sys.exc_info())
else:
@ -468,7 +487,8 @@ class DatabaseBroker(object):
"""Use with the "with" statement; locks a database."""
if not self.conn:
if self.db_file != ':memory:' and os.path.exists(self.db_file):
self.conn = get_db_connection(self.db_file, self.timeout)
self.conn = get_db_connection(self.db_file, self.timeout,
self.logger)
else:
raise DatabaseConnectionError(self.db_file, "DB doesn't exist")
conn = self.conn
@ -862,7 +882,7 @@ class DatabaseBroker(object):
metadata = self.get_raw_metadata()
if metadata:
metadata = json.loads(metadata)
native_str_keys(metadata)
native_str_keys_and_values(metadata)
else:
metadata = {}
return metadata
@ -924,7 +944,7 @@ class DatabaseBroker(object):
self.db_type)
md = row[0]
md = json.loads(md) if md else {}
native_str_keys(md)
native_str_keys_and_values(md)
except sqlite3.OperationalError as err:
if 'no such column: metadata' not in str(err):
raise

View File

@ -217,6 +217,8 @@ class Replicator(Daemon):
self.reclaim_age = float(conf.get('reclaim_age', 86400 * 7))
swift.common.db.DB_PREALLOCATION = \
config_true_value(conf.get('db_preallocation', 'f'))
swift.common.db.QUERY_LOGGING = \
config_true_value(conf.get('db_query_logging', 'f'))
self._zero_stats()
self.recon_cache_path = conf.get('recon_cache_path',
'/var/cache/swift')
@ -426,7 +428,7 @@ class Replicator(Daemon):
Make an http_connection using ReplConnection
:param node: node dictionary from the ring
:param partition: partition partition to send in the url
:param partition: partition to send in the url
:param db_file: DB file
:returns: ReplConnection object
@ -579,7 +581,8 @@ class Replicator(Daemon):
shouldbehere = True
responses = []
try:
broker = self.brokerclass(object_file, pending_timeout=30)
broker = self.brokerclass(object_file, pending_timeout=30,
logger=self.logger)
broker.reclaim(now - self.reclaim_age,
now - (self.reclaim_age * 2))
info = broker.get_replication_info()

View File

@ -29,11 +29,11 @@ from six.moves.http_client import HTTPException
from swift.common.bufferedhttp import http_connect
from swift.common.exceptions import ClientException
from swift.common.utils import Timestamp, FileLikeIter
from swift.common.swob import normalize_etag
from swift.common.utils import Timestamp, FileLikeIter, quote
from swift.common.http import HTTP_NO_CONTENT, HTTP_INSUFFICIENT_STORAGE, \
is_success, is_server_error
from swift.common.header_key_dict import HeaderKeyDict
from swift.common.utils import quote
class DirectClientException(ClientException):
@ -100,6 +100,7 @@ def _make_req(node, part, method, path, headers, stype,
if content_length is None:
headers['Transfer-Encoding'] = 'chunked'
headers.setdefault('X-Backend-Allow-Reserved-Names', 'true')
with Timeout(conn_timeout):
conn = http_connect(node['ip'], node['port'], node['device'], part,
method, path, headers=headers)
@ -193,6 +194,7 @@ def gen_headers(hdrs_in=None, add_ts=True):
hdrs_out['X-Timestamp'] = Timestamp.now().internal
if 'user-agent' not in hdrs_out:
hdrs_out['User-Agent'] = 'direct-client %s' % os.getpid()
hdrs_out.setdefault('X-Backend-Allow-Reserved-Names', 'true')
return hdrs_out
@ -483,7 +485,7 @@ def direct_put_object(node, part, account, container, name, contents,
if headers is None:
headers = {}
if etag:
headers['ETag'] = etag.strip('"')
headers['ETag'] = normalize_etag(etag)
if content_type is not None:
headers['Content-Type'] = content_type
else:
@ -496,7 +498,7 @@ def direct_put_object(node, part, account, container, name, contents,
'Object', conn_timeout, response_timeout, contents=contents,
content_length=content_length, chunk_size=chunk_size)
return resp.getheader('etag').strip('"')
return normalize_etag(resp.getheader('etag'))
def direct_post_object(node, part, account, container, name, headers,

View File

@ -16,13 +16,6 @@
import six
def _title(s):
if six.PY2:
return s.title()
else:
return s.encode('latin1').title().decode('latin1')
class HeaderKeyDict(dict):
"""
A dict that title-cases all keys on the way in, so as to be
@ -36,35 +29,43 @@ class HeaderKeyDict(dict):
self.update(base_headers)
self.update(kwargs)
@staticmethod
def _title(s):
if six.PY2:
return s.title()
else:
return s.encode('latin1').title().decode('latin1')
def update(self, other):
if hasattr(other, 'keys'):
for key in other.keys():
self[_title(key)] = other[key]
self[self._title(key)] = other[key]
else:
for key, value in other:
self[_title(key)] = value
self[self._title(key)] = value
def __getitem__(self, key):
return dict.get(self, _title(key))
return dict.get(self, self._title(key))
def __setitem__(self, key, value):
key = self._title(key)
if value is None:
self.pop(_title(key), None)
self.pop(key, None)
elif six.PY2 and isinstance(value, six.text_type):
return dict.__setitem__(self, _title(key), value.encode('utf-8'))
return dict.__setitem__(self, key, value.encode('utf-8'))
elif six.PY3 and isinstance(value, six.binary_type):
return dict.__setitem__(self, _title(key), value.decode('latin-1'))
return dict.__setitem__(self, key, value.decode('latin-1'))
else:
return dict.__setitem__(self, _title(key), str(value))
return dict.__setitem__(self, key, str(value))
def __contains__(self, key):
return dict.__contains__(self, _title(key))
return dict.__contains__(self, self._title(key))
def __delitem__(self, key):
return dict.__delitem__(self, _title(key))
return dict.__delitem__(self, self._title(key))
def get(self, key, default=None):
return dict.get(self, _title(key), default)
return dict.get(self, self._title(key), default)
def setdefault(self, key, value=None):
if key not in self:
@ -72,4 +73,4 @@ class HeaderKeyDict(dict):
return self[key]
def pop(self, key, default=None):
return dict.pop(self, _title(key), default)
return dict.pop(self, self._title(key), default)
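# Illustration (a sketch, not part of the change above):
from swift.common.header_key_dict import HeaderKeyDict

h = HeaderKeyDict({'x-object-meta-color': 'blue'})
assert h['X-OBJECT-META-COLOR'] == 'blue'  # lookups title-case the key too
h['content-length'] = 42                   # values are coerced to str
assert h['Content-Length'] == '42'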

View File

@ -138,6 +138,7 @@ HTTP_REQUEST_HEADER_FIELDS_TOO_LARGE = 431
HTTP_NO_RESPONSE = 444
HTTP_RETRY_WITH = 449
HTTP_BLOCKED_BY_WINDOWS_PARENTAL_CONTROLS = 450
HTTP_RATE_LIMITED = 498
HTTP_CLIENT_CLOSED_REQUEST = 499
###############################################################################

View File

@ -25,9 +25,10 @@ import zlib
from time import gmtime, strftime, time
from zlib import compressobj
from swift.common.constraints import AUTO_CREATE_ACCOUNT_PREFIX
from swift.common.exceptions import ClientException
from swift.common.http import (HTTP_NOT_FOUND, HTTP_MULTIPLE_CHOICES,
is_server_error)
is_client_error, is_server_error)
from swift.common.swob import Request, bytes_to_wsgi
from swift.common.utils import quote, closing_if_possible
from swift.common.wsgi import loadapp, pipeline_property
@ -158,7 +159,7 @@ class InternalClient(object):
container_ring = pipeline_property('container_ring')
account_ring = pipeline_property('account_ring')
auto_create_account_prefix = pipeline_property(
'auto_create_account_prefix', default='.')
'auto_create_account_prefix', default=AUTO_CREATE_ACCOUNT_PREFIX)
def make_request(
self, method, path, headers, acceptable_statuses, body_file=None,
@ -184,6 +185,7 @@ class InternalClient(object):
headers = dict(headers)
headers['user-agent'] = self.user_agent
headers.setdefault('x-backend-allow-reserved-names', 'true')
for attempt in range(self.request_tries):
resp = exc_type = exc_value = exc_traceback = None
req = Request.blank(
@ -384,6 +386,34 @@ class InternalClient(object):
return self._iter_items(path, marker, end_marker, prefix,
acceptable_statuses)
def create_account(self, account):
"""
Creates an account.
:param account: Account to create.
:raises UnexpectedResponse: Exception raised when requests fail
to get a response with an acceptable status
:raises Exception: Exception is raised when code fails in an
unexpected way.
"""
path = self.make_path(account)
self.make_request('PUT', path, {}, (201, 202))
def delete_account(self, account, acceptable_statuses=(2, HTTP_NOT_FOUND)):
"""
Deletes an account.
:param account: Account to delete.
:param acceptable_statuses: List of status for valid responses,
defaults to (2, HTTP_NOT_FOUND).
:raises UnexpectedResponse: Exception raised when requests fail
to get a response with an acceptable status
:raises Exception: Exception is raised when code fails in an
unexpected way.
"""
path = self.make_path(account)
self.make_request('DELETE', path, {}, acceptable_statuses)
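(A usage sketch for the two new methods; the conf path and account name
are illustrative::

    from swift.common.internal_client import InternalClient

    client = InternalClient('/etc/swift/internal-client.conf',
                            'example-agent', request_tries=3)
    client.create_account('AUTH_test')   # PUT; 201/202 expected
    client.delete_account('AUTH_test')   # DELETE; 2xx or 404 accepted
)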
def get_account_info(
self, account, acceptable_statuses=(2, HTTP_NOT_FOUND)):
"""
@ -499,7 +529,8 @@ class InternalClient(object):
self.make_request('PUT', path, headers, acceptable_statuses)
def delete_container(
self, account, container, acceptable_statuses=(2, HTTP_NOT_FOUND)):
self, account, container, headers=None,
acceptable_statuses=(2, HTTP_NOT_FOUND)):
"""
Deletes a container.
@ -514,8 +545,9 @@ class InternalClient(object):
unexpected way.
"""
headers = headers or {}
path = self.make_path(account, container)
self.make_request('DELETE', path, {}, acceptable_statuses)
self.make_request('DELETE', path, headers, acceptable_statuses)
def get_container_metadata(
self, account, container, metadata_prefix='',
@ -654,7 +686,7 @@ class InternalClient(object):
return self._get_metadata(path, metadata_prefix, acceptable_statuses,
headers=headers, params=params)
def get_object(self, account, container, obj, headers,
def get_object(self, account, container, obj, headers=None,
acceptable_statuses=(2,), params=None):
"""
Gets an object.
@ -756,13 +788,17 @@ class InternalClient(object):
path, metadata, metadata_prefix, acceptable_statuses)
def upload_object(
self, fobj, account, container, obj, headers=None):
self, fobj, account, container, obj, headers=None,
acceptable_statuses=(2,), params=None):
"""
:param fobj: File object to read object's content from.
:param account: The object's account.
:param container: The object's container.
:param obj: The object.
:param headers: Headers to send with request, defaults to empty dict.
:param acceptable_statuses: List of acceptable statuses for request.
:param params: A dict of params to be set in request query string,
defaults to None.
:raises UnexpectedResponse: Exception raised when requests fail
to get a response with an acceptable status
@ -774,7 +810,8 @@ class InternalClient(object):
if 'Content-Length' not in headers:
headers['Transfer-Encoding'] = 'chunked'
path = self.make_path(account, container, obj)
self.make_request('PUT', path, headers, (2,), fobj)
self.make_request('PUT', path, headers, acceptable_statuses, fobj,
params=params)
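(With the extra keyword arguments, a caller can, for example, upload an
SLO manifest through the internal client -- a sketch, values illustrative::

    with open('manifest.json', 'rb') as fobj:
        client.upload_object(fobj, 'AUTH_test', 'cont', 'manifest',
                             headers={'Content-Type': 'application/json'},
                             acceptable_statuses=(2,),
                             params={'multipart-manifest': 'put'})
)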
def get_auth(url, user, key, auth_version='1.0', **kwargs):
@ -893,14 +930,17 @@ class SimpleClient(object):
self.attempts += 1
try:
return self.base_request(method, **kwargs)
except urllib2.HTTPError as err:
if is_client_error(err.getcode() or 500):
raise ClientException('Client error',
http_status=err.getcode())
elif self.attempts > retries:
raise ClientException('Raise too many retries',
http_status=err.getcode())
except (socket.error, httplib.HTTPException, urllib2.URLError) \
as err:
if self.attempts > retries:
if isinstance(err, urllib2.HTTPError):
raise ClientException('Raise too many retries',
http_status=err.getcode())
else:
raise
raise
sleep(backoff)
backoff = min(backoff * 2, self.max_backoff)

View File

@ -47,6 +47,7 @@ REST_SERVERS = [s for s in ALL_SERVERS if s not in MAIN_SERVERS]
# aliases mapping
ALIASES = {'all': ALL_SERVERS, 'main': MAIN_SERVERS, 'rest': REST_SERVERS}
GRACEFUL_SHUTDOWN_SERVERS = MAIN_SERVERS
SEAMLESS_SHUTDOWN_SERVERS = MAIN_SERVERS
START_ONCE_SERVERS = REST_SERVERS
# These are servers that match a type (account-*, container-*, object-*) but
# don't use that type-server.conf file and instead use their own.
@ -366,6 +367,21 @@ class Manager(object):
status += m.start(**kwargs)
return status
@command
def reload_seamless(self, **kwargs):
"""seamlessly re-exec, then shutdown of old listen sockets on
supporting servers
"""
kwargs.pop('graceful', None)
kwargs['seamless'] = True
status = 0
for server in self.servers:
signaled_pids = server.stop(**kwargs)
if not signaled_pids:
print(_('No %s running') % server)
status += 1
return status
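(From the command line this is expected to surface as
``swift-init <server> reload-seamless``; programmatically it can be driven
through the Manager API -- a sketch::

    from swift.common.manager import Manager

    # SIGUSR1 makes supporting servers re-exec without closing
    # their listen sockets, so no requests are refused mid-reload.
    status = Manager(['proxy-server']).reload_seamless()
)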
@command
def force_reload(self, **kwargs):
"""alias for reload
@ -629,13 +645,17 @@ class Server(object):
"""Kill running pids
:param graceful: if True, attempt SIGHUP on supporting servers
:param seamless: if True, attempt SIGUSR1 on supporting servers
:returns: a dict mapping pids (ints) to pid_files (paths)
"""
graceful = kwargs.get('graceful')
seamless = kwargs.get('seamless')
if graceful and self.server in GRACEFUL_SHUTDOWN_SERVERS:
sig = signal.SIGHUP
elif seamless and self.server in SEAMLESS_SHUTDOWN_SERVERS:
sig = signal.SIGUSR1
else:
sig = signal.SIGTERM
return self.signal_pids(sig, **kwargs)

View File

@ -107,8 +107,9 @@ class AccountQuotaMiddleware(object):
content_length = (request.content_length or 0)
account_info = get_account_info(request.environ, self.app)
if not account_info or not account_info['bytes']:
account_info = get_account_info(request.environ, self.app,
swift_source='AQ')
if not account_info:
return self.app
try:
quota = int(account_info['meta'].get('quota-bytes', -1))

View File

@ -98,6 +98,10 @@ The bulk middleware will handle xattrs stored by both GNU and BSD tar (2).
Only xattrs ``user.mime_type`` and ``user.meta.*`` are processed. Other
attributes are ignored.
In addition to the extended attributes, the object metadata and the
x-delete-at/x-delete-after headers set in the request are also assigned to the
extracted objects.
Notes:
(1) The POSIX 1003.1-2001 (pax) format. The default format on GNU tar
@ -206,6 +210,7 @@ from swift.common.utils import get_logger, register_swift_info, \
StreamingPile
from swift.common import constraints
from swift.common.http import HTTP_UNAUTHORIZED, HTTP_NOT_FOUND, HTTP_CONFLICT
from swift.common.request_helpers import is_user_meta
from swift.common.wsgi import make_subrequest
@ -452,7 +457,8 @@ class Bulk(object):
failed_files.append([wsgi_quote(str_to_wsgi(obj_name)),
HTTPPreconditionFailed().status])
continue
yield (obj_name, delete_path)
yield (obj_name, delete_path,
obj_to_delete.get('version_id'))
def objs_then_containers(objs_to_delete):
# process all objects first
@ -462,13 +468,17 @@ class Bulk(object):
yield delete_filter(lambda name: '/' not in name.strip('/'),
objs_to_delete)
def do_delete(obj_name, delete_path):
def do_delete(obj_name, delete_path, version_id):
delete_obj_req = make_subrequest(
req.environ, method='DELETE',
path=wsgi_quote(str_to_wsgi(delete_path)),
headers={'X-Auth-Token': req.headers.get('X-Auth-Token')},
body='', agent='%(orig)s ' + user_agent,
swift_source=swift_source)
if version_id is None:
delete_obj_req.params = {}
else:
delete_obj_req.params = {'version-id': version_id}
return (delete_obj_req.get_response(self.app), obj_name, 0)
with StreamingPile(self.delete_concurrency) as pile:
@ -622,6 +632,12 @@ class Bulk(object):
'X-Auth-Token': req.headers.get('X-Auth-Token'),
}
# Copy some whitelisted headers to the subrequest
for k, v in req.headers.items():
if ((k.lower() in ('x-delete-at', 'x-delete-after'))
or is_user_meta('object', k)):
create_headers[k] = v
create_obj_req = make_subrequest(
req.environ, method='PUT',
path=wsgi_quote(destination),
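(The whitelist above copies only object-expiry and user-metadata headers
onto each extracted object; a self-contained illustration of the filter,
with ``is_user_meta('object', k)`` approximated by its documented prefix
check::

    req_headers = {
        'X-Delete-After': '86400',
        'X-Object-Meta-Color': 'blue',
        'X-Auth-Token': 'secret',        # never copied to subrequests
    }
    copied = {k: v for k, v in req_headers.items()
              if k.lower() in ('x-delete-at', 'x-delete-after')
              or k.lower().startswith('x-object-meta-')}
    assert copied == {'X-Delete-After': '86400',
                      'X-Object-Meta-Color': 'blue'}
)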

View File

@ -27,7 +27,7 @@ maximum lookup depth. If a match is found, the environment's Host header is
rewritten and the request is passed further down the WSGI chain.
"""
from six.moves import range
import six
from swift import gettext_ as _
@ -41,7 +41,8 @@ else: # executed if the try block finishes with no errors
MODULE_DEPENDENCY_MET = True
from swift.common.middleware import RewriteContext
from swift.common.swob import Request, HTTPBadRequest
from swift.common.swob import Request, HTTPBadRequest, \
str_to_wsgi, wsgi_to_str
from swift.common.utils import cache_from_env, get_logger, is_valid_ip, \
list_from_csv, parse_socket_string, register_swift_info
@ -130,9 +131,10 @@ class CNAMELookupMiddleware(object):
if not self.storage_domain:
return self.app(env, start_response)
if 'HTTP_HOST' in env:
requested_host = given_domain = env['HTTP_HOST']
requested_host = env['HTTP_HOST']
else:
requested_host = given_domain = env['SERVER_NAME']
requested_host = env['SERVER_NAME']
given_domain = wsgi_to_str(requested_host)
port = ''
if ':' in given_domain:
given_domain, port = given_domain.rsplit(':', 1)
@ -148,6 +150,8 @@ class CNAMELookupMiddleware(object):
if self.memcache:
memcache_key = ''.join(['cname-', a_domain])
found_domain = self.memcache.get(memcache_key)
if six.PY2 and found_domain:
found_domain = found_domain.encode('utf-8')
if found_domain is None:
ttl, found_domain = lookup_cname(a_domain, self.resolver)
if self.memcache and ttl > 0:
@ -166,9 +170,10 @@ class CNAMELookupMiddleware(object):
{'given_domain': given_domain,
'found_domain': found_domain})
if port:
env['HTTP_HOST'] = ':'.join([found_domain, port])
env['HTTP_HOST'] = ':'.join([
str_to_wsgi(found_domain), port])
else:
env['HTTP_HOST'] = found_domain
env['HTTP_HOST'] = str_to_wsgi(found_domain)
error = False
break
else:

View File

@ -15,6 +15,7 @@
import os
from swift.common.constraints import valid_api_version
from swift.common.container_sync_realms import ContainerSyncRealms
from swift.common.swob import HTTPBadRequest, HTTPUnauthorized, wsgify
from swift.common.utils import (
@ -67,8 +68,35 @@ class ContainerSync(object):
@wsgify
def __call__(self, req):
if req.path == '/info':
# Ensure /info requests get the freshest results
self.register_info()
return self.app
try:
(version, acc, cont, obj) = req.split_path(3, 4, True)
bad_path = False
except ValueError:
bad_path = True
# the bad_path flag avoids raising from inside the except clause
# (recursive tracebacks)
if bad_path or not valid_api_version(version):
return self.app
# validate container-sync metadata update
info = get_container_info(
req.environ, self.app, swift_source='CS')
sync_to = req.headers.get('x-container-sync-to')
if req.method in ('PUT', 'POST') and cont and not obj:
versions_cont = info.get(
'sysmeta', {}).get('versions-container')
if sync_to and versions_cont:
raise HTTPBadRequest(
'Cannot configure container sync on a container '
'with object versioning configured.',
request=req)
if not self.allow_full_urls:
sync_to = req.headers.get('x-container-sync-to')
if sync_to and not sync_to.startswith('//'):
raise HTTPBadRequest(
body='Full URLs are not allowed for X-Container-Sync-To '
@ -90,8 +118,6 @@ class ContainerSync(object):
req.environ.setdefault('swift.log_info', []).append(
'cs:no-local-realm-key')
else:
info = get_container_info(
req.environ, self.app, swift_source='CS')
user_key = info.get('sync_key')
if not user_key:
req.environ.setdefault('swift.log_info', []).append(
@ -134,10 +160,9 @@ class ContainerSync(object):
# syntax and might be synced before its segments, so stop SLO
# middleware from performing the usual manifest validation.
req.environ['swift.slo_override'] = True
# Similar arguments for static symlinks
req.environ['swift.symlink_override'] = True
if req.path == '/info':
# Ensure /info requests get the freshest results
self.register_info()
return self.app

View File

@ -319,6 +319,9 @@ class ServerSideCopyMiddleware(object):
if 'last-modified' in source_resp.headers:
resp_headers['X-Copied-From-Last-Modified'] = \
source_resp.headers['last-modified']
if 'X-Object-Version-Id' in source_resp.headers:
resp_headers['X-Copied-From-Version-Id'] = \
source_resp.headers['X-Object-Version-Id']
# Existing sys and user meta of source object is added to response
# headers in addition to the new ones.
_copy_headers(sink_req.headers, resp_headers)
@ -374,6 +377,8 @@ class ServerSideCopyMiddleware(object):
sink_req.headers.update(req.headers)
params = sink_req.params
params_updated = False
if params.get('multipart-manifest') == 'get':
if 'X-Static-Large-Object' in source_resp.headers:
params['multipart-manifest'] = 'put'
@ -381,6 +386,13 @@ class ServerSideCopyMiddleware(object):
del params['multipart-manifest']
sink_req.headers['X-Object-Manifest'] = \
source_resp.headers['X-Object-Manifest']
params_updated = True
if 'version-id' in params:
del params['version-id']
params_updated = True
if params_updated:
sink_req.params = params
# Set swift.source, data source, content length and etag

View File

@ -25,7 +25,7 @@ from swift.common.request_helpers import get_object_transient_sysmeta, \
strip_user_meta_prefix, is_user_meta, update_etag_is_at_header, \
get_container_update_override_key
from swift.common.swob import Request, Match, HTTPException, \
HTTPUnprocessableEntity, wsgi_to_bytes, bytes_to_wsgi
HTTPUnprocessableEntity, wsgi_to_bytes, bytes_to_wsgi, normalize_etag
from swift.common.utils import get_logger, config_true_value, \
MD5_OF_EMPTY_STRING
@ -263,7 +263,7 @@ class EncrypterObjContext(CryptoWSGIContext):
ciphertext_etag = enc_input_proxy.ciphertext_md5.hexdigest()
mod_resp_headers = [
(h, v if (h.lower() != 'etag' or
v.strip('"') != ciphertext_etag)
normalize_etag(v) != ciphertext_etag)
else plaintext_etag)
for h, v in mod_resp_headers]
@ -369,7 +369,10 @@ class Encrypter(object):
return self.app(env, start_response)
try:
req.split_path(4, 4, True)
is_object_request = True
except ValueError:
is_object_request = False
if not is_object_request:
return self.app(env, start_response)
if req.method in ('GET', 'HEAD'):

View File

@ -128,10 +128,11 @@ from swift.common.exceptions import ListingIterError, SegmentError
from swift.common.http import is_success
from swift.common.swob import Request, Response, \
HTTPRequestedRangeNotSatisfiable, HTTPBadRequest, HTTPConflict, \
str_to_wsgi, wsgi_to_str, wsgi_quote, wsgi_unquote
str_to_wsgi, wsgi_to_str, wsgi_quote, wsgi_unquote, normalize_etag
from swift.common.utils import get_logger, \
RateLimitedIterator, quote, close_if_possible, closing_if_possible
from swift.common.request_helpers import SegmentedIterable
from swift.common.request_helpers import SegmentedIterable, \
update_ignore_range_header
from swift.common.wsgi import WSGIContext, make_subrequest, load_app_config
@ -333,7 +334,7 @@ class GetContext(WSGIContext):
if h.lower() != "etag"]
etag = md5()
for seg_dict in segments:
etag.update(seg_dict['hash'].strip('"').encode('utf8'))
etag.update(normalize_etag(seg_dict['hash']).encode('utf8'))
response_headers.append(('Etag', '"%s"' % etag.hexdigest()))
app_iter = None
@ -369,6 +370,7 @@ class GetContext(WSGIContext):
Otherwise, simply pass it through.
"""
update_ignore_range_header(req, 'X-Object-Manifest')
resp_iter = self._app_call(req.environ)
# make sure this response is for a dynamic large object manifest

View File

@ -0,0 +1,127 @@
# Copyright (c) 2010-2020 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This middleware fixes the Etag header of responses so that it is RFC compliant.
`RFC 7232 <https://tools.ietf.org/html/rfc7232#section-2.3>`__ specifies that
the value of the Etag header must be double quoted.
It must be placed at the beginning of the pipeline, right after cache::
[pipeline:main]
pipeline = ... cache etag-quoter ...
[filter:etag-quoter]
use = egg:swift#etag_quoter
Set ``X-Account-Rfc-Compliant-Etags: true`` at the account
level to have any Etags in object responses be double quoted, as in
``"d41d8cd98f00b204e9800998ecf8427e"``. Alternatively, you may
only fix Etags in a single container by setting
``X-Container-Rfc-Compliant-Etags: true`` on the container.
This may be necessary for Swift to work properly with some CDNs.
Either option may also be explicitly *disabled*, so you may enable quoted
Etags account-wide as above but turn them off for individual containers
with ``X-Container-Rfc-Compliant-Etags: false``. This may be
useful if some subset of applications expect Etags to be bare MD5s.
"""
from swift.common.constraints import valid_api_version
from swift.common.http import is_success
from swift.common.swob import Request
from swift.common.utils import config_true_value, register_swift_info
from swift.proxy.controllers.base import get_account_info, get_container_info
class EtagQuoterMiddleware(object):
def __init__(self, app, conf):
self.app = app
self.conf = conf
def __call__(self, env, start_response):
req = Request(env)
try:
version, account, container, obj = req.split_path(
2, 4, rest_with_last=True)
is_swifty_request = valid_api_version(version)
except ValueError:
is_swifty_request = False
if not is_swifty_request:
return self.app(env, start_response)
if not obj:
typ = 'Container' if container else 'Account'
client_header = 'X-%s-Rfc-Compliant-Etags' % typ
sysmeta_header = 'X-%s-Sysmeta-Rfc-Compliant-Etags' % typ
if client_header in req.headers:
if req.headers[client_header]:
req.headers[sysmeta_header] = config_true_value(
req.headers[client_header])
else:
req.headers[sysmeta_header] = ''
if req.headers.get(client_header.replace('X-', 'X-Remove-', 1)):
req.headers[sysmeta_header] = ''
def translating_start_response(status, headers, exc_info=None):
return start_response(status, [
(client_header if h.title() == sysmeta_header else h,
v) for h, v in headers
], exc_info)
return self.app(env, translating_start_response)
container_info = get_container_info(env, self.app, 'EQ')
if not container_info or not is_success(container_info['status']):
return self.app(env, start_response)
flag = container_info.get('sysmeta', {}).get('rfc-compliant-etags')
if flag is None:
account_info = get_account_info(env, self.app, 'EQ')
if not account_info or not is_success(account_info['status']):
return self.app(env, start_response)
flag = account_info.get('sysmeta', {}).get(
'rfc-compliant-etags')
if flag is None:
flag = self.conf.get('enable_by_default', 'false')
if not config_true_value(flag):
return self.app(env, start_response)
status, headers, resp_iter = req.call_application(self.app)
for i, (header, value) in enumerate(headers):
if header.lower() == 'etag':
if not value.startswith(('"', 'W/"')) or \
not value.endswith('"'):
headers[i] = (header, '"%s"' % value)
start_response(status, headers)
return resp_iter
def filter_factory(global_conf, **local_conf):
conf = global_conf.copy()
conf.update(local_conf)
register_swift_info(
'etag_quoter', enable_by_default=config_true_value(
conf.get('enable_by_default', 'false')))
def etag_quoter_filter(app):
return EtagQuoterMiddleware(app, conf)
return etag_quoter_filter
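(The only option ``filter_factory`` reads is ``enable_by_default``; a
minimal stanza mirroring the docstring above::

    [filter:etag-quoter]
    use = egg:swift#etag_quoter
    # quote Etags everywhere unless an account/container opts out
    enable_by_default = true
)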

View File

@ -129,13 +129,14 @@ from time import time
import six
from six.moves.urllib.parse import quote
from swift.common.constraints import valid_api_version
from swift.common.exceptions import MimeInvalid
from swift.common.middleware.tempurl import get_tempurl_keys_from_metadata
from swift.common.utils import streq_const_time, register_swift_info, \
parse_content_disposition, parse_mime_headers, \
iter_multipart_mime_documents, reiterate, close_if_possible
from swift.common.wsgi import make_pre_authed_env
from swift.common.swob import HTTPUnauthorized, wsgi_to_str
from swift.common.swob import HTTPUnauthorized, wsgi_to_str, str_to_wsgi
from swift.proxy.controllers.base import get_account_info, get_container_info
@ -365,7 +366,8 @@ class FormPost(object):
if not subenv['PATH_INFO'].endswith('/') and \
subenv['PATH_INFO'].count('/') < 4:
subenv['PATH_INFO'] += '/'
subenv['PATH_INFO'] += attributes['filename'] or 'filename'
subenv['PATH_INFO'] += str_to_wsgi(
attributes['filename'] or 'filename')
if 'x_delete_at' in attributes:
try:
subenv['HTTP_X_DELETE_AT'] = int(attributes['x_delete_at'])
@ -442,8 +444,8 @@ class FormPost(object):
:returns: list of tempurl keys
"""
parts = env['PATH_INFO'].split('/', 4)
if len(parts) < 4 or parts[0] or parts[1] != 'v1' or not parts[2] or \
not parts[3]:
if len(parts) < 4 or parts[0] or not valid_api_version(parts[1]) \
or not parts[2] or not parts[3]:
return []
account_info = get_account_info(env, self.app, swift_source='FP')

View File

@ -75,6 +75,8 @@ class GatekeeperMiddleware(object):
self.outbound_condition = make_exclusion_test(outbound_exclusions)
self.shunt_x_timestamp = config_true_value(
conf.get('shunt_inbound_x_timestamp', 'true'))
self.allow_reserved_names_header = config_true_value(
conf.get('allow_reserved_names_header', 'false'))
def __call__(self, env, start_response):
req = Request(env)
@ -89,6 +91,11 @@ class GatekeeperMiddleware(object):
self.logger.debug('shunted request headers: %s' %
[('X-Timestamp', ts)])
if 'X-Allow-Reserved-Names' in req.headers \
and self.allow_reserved_names_header:
req.headers['X-Backend-Allow-Reserved-Names'] = \
req.headers.pop('X-Allow-Reserved-Names')
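(If gatekeeper is configured explicitly -- it is normally auto-inserted
into the pipeline -- the new opt-in looks like this, a sketch based on the
``conf.get`` default above::

    [filter:gatekeeper]
    use = egg:swift#gatekeeper
    # off by default; when true, a client-supplied X-Allow-Reserved-Names
    # is translated into X-Backend-Allow-Reserved-Names
    allow_reserved_names_header = true
)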
def gatekeeper_response(status, response_headers, exc_info=None):
def fixed_response_headers():
def relative_path(value):

View File

@ -21,7 +21,8 @@ from swift.common.constraints import valid_api_version
from swift.common.http import HTTP_NO_CONTENT
from swift.common.request_helpers import get_param
from swift.common.swob import HTTPException, HTTPNotAcceptable, Request, \
RESPONSE_REASONS, HTTPBadRequest
RESPONSE_REASONS, HTTPBadRequest, wsgi_quote, wsgi_to_bytes
from swift.common.utils import RESERVED, get_logger, list_from_csv
#: Mapping of query string ``format=`` values to their corresponding
@ -73,8 +74,6 @@ def to_xml(document_element):
def account_to_xml(listing, account_name):
if isinstance(account_name, bytes):
account_name = account_name.decode('utf-8')
doc = Element('account', name=account_name)
doc.text = '\n'
for record in listing:
@ -91,8 +90,6 @@ def account_to_xml(listing, account_name):
def container_to_xml(listing, base_name):
if isinstance(base_name, bytes):
base_name = base_name.decode('utf-8')
doc = Element('container', name=base_name)
for record in listing:
if 'subdir' in record:
@ -119,8 +116,33 @@ def listing_to_text(listing):
class ListingFilter(object):
def __init__(self, app):
def __init__(self, app, conf, logger=None):
self.app = app
self.logger = logger or get_logger(conf, log_route='listing-filter')
def filter_reserved(self, listing, account, container):
new_listing = []
for entry in list(listing):
for key in ('name', 'subdir'):
value = entry.get(key, '')
if six.PY2:
value = value.encode('utf-8')
if RESERVED in value:
if container:
self.logger.warning(
'Container listing for %s/%s had '
'reserved byte in %s: %r',
wsgi_quote(account), wsgi_quote(container),
key, value)
else:
self.logger.warning(
'Account listing for %s had '
'reserved byte in %s: %r',
wsgi_quote(account), key, value)
break # out of the *key* loop; check next entry
else:
new_listing.append(entry)
return new_listing
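(``RESERVED`` is Swift's reserved-namespace byte, the null byte; a sketch
of the filter's effect, assuming a ``ListingFilter`` instance ``flt``::

    listing = [{'name': 'visible-obj'},
               {'name': '\x00internal-version'},
               {'subdir': 'photos/'}]
    # entries containing the reserved byte are dropped (and logged);
    # everything else passes through in order
    assert flt.filter_reserved(listing, 'AUTH_test', 'cont') == \
        [{'name': 'visible-obj'}, {'subdir': 'photos/'}]
)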
def __call__(self, env, start_response):
req = Request(env)
@ -128,10 +150,10 @@ class ListingFilter(object):
# account and container only
version, acct, cont = req.split_path(2, 3)
except ValueError:
is_container_req = False
is_account_or_container_req = False
else:
is_container_req = True
if not is_container_req:
is_account_or_container_req = True
if not is_account_or_container_req:
return self.app(env, start_response)
if not valid_api_version(version) or req.method not in ('GET', 'HEAD'):
@ -145,10 +167,16 @@ class ListingFilter(object):
return err(env, start_response)
params = req.params
can_vary = 'format' not in params
params['format'] = 'json'
req.params = params
# Give other middlewares a chance to be in charge
env.setdefault('swift.format_listing', True)
status, headers, resp_iter = req.call_application(self.app)
if not env.get('swift.format_listing'):
start_response(status, headers)
return resp_iter
header_to_index = {}
resp_content_type = resp_length = None
@ -160,11 +188,22 @@ class ListingFilter(object):
elif header == 'content-length':
header_to_index[header] = i
resp_length = int(value)
elif header == 'vary':
header_to_index[header] = i
if not status.startswith('200 '):
if not status.startswith(('200 ', '204 ')):
start_response(status, headers)
return resp_iter
if can_vary:
if 'vary' in header_to_index:
value = headers[header_to_index['vary']][1]
if 'accept' not in list_from_csv(value.lower()):
headers[header_to_index['vary']] = (
'Vary', value + ', Accept')
else:
headers.append(('Vary', 'Accept'))
if resp_content_type != 'application/json':
start_response(status, headers)
return resp_iter
@ -201,15 +240,21 @@ class ListingFilter(object):
start_response(status, headers)
return [body]
if not req.allow_reserved_names:
listing = self.filter_reserved(listing, acct, cont)
try:
if out_content_type.endswith('/xml'):
if cont:
body = container_to_xml(listing, cont)
body = container_to_xml(
listing, wsgi_to_bytes(cont).decode('utf-8'))
else:
body = account_to_xml(listing, acct)
body = account_to_xml(
listing, wsgi_to_bytes(acct).decode('utf-8'))
elif out_content_type == 'text/plain':
body = listing_to_text(listing)
# else, json -- we continue down here to be sure we set charset
else:
body = json.dumps(listing).encode('ascii')
except KeyError:
# listing was in a bad format -- funky static web listing??
start_response(status, headers)
@ -226,4 +271,9 @@ class ListingFilter(object):
def filter_factory(global_conf, **local_conf):
return ListingFilter
conf = global_conf.copy()
conf.update(local_conf)
def listing_filter(app):
return ListingFilter(app, conf)
return listing_filter

View File

@ -71,6 +71,7 @@ if this is a middleware subrequest or not. A log processor calculating
bandwidth usage will want to only sum up logs with no swift.source.
"""
import os
import time
from swift.common.swob import Request
@ -90,6 +91,7 @@ class ProxyLoggingMiddleware(object):
def __init__(self, app, conf, logger=None):
self.app = app
self.pid = os.getpid()
self.log_formatter = LogStringFormatter(default='-', quote=True)
self.log_msg_template = conf.get(
'log_msg_template', (
@ -171,7 +173,9 @@ class ProxyLoggingMiddleware(object):
'request_time': '0.05',
'source': '',
'log_info': '',
'policy_index': ''
'policy_index': '',
'ttfb': '0.05',
'pid': '42'
}
try:
self.log_formatter.format(self.log_msg_template, **replacements)
@ -193,7 +197,7 @@ class ProxyLoggingMiddleware(object):
return value
def log_request(self, req, status_int, bytes_received, bytes_sent,
start_time, end_time, resp_headers=None):
start_time, end_time, resp_headers=None, ttfb=0):
"""
Log a request.
@ -269,6 +273,8 @@ class ProxyLoggingMiddleware(object):
'log_info':
','.join(req.environ.get('swift.log_info', '')),
'policy_index': policy_index,
'ttfb': ttfb,
'pid': self.pid,
}
self.access_logger.info(
self.log_formatter.format(self.log_msg_template,
@ -361,7 +367,7 @@ class ProxyLoggingMiddleware(object):
while not chunk:
chunk = next(iterator)
except StopIteration:
chunk = ''
chunk = b''
for h, v in start_response_args[0][1]:
if h.lower() in ('content-length', 'transfer-encoding'):
break
@ -377,18 +383,20 @@ class ProxyLoggingMiddleware(object):
# Log timing information for time-to-first-byte (GET requests only)
method = self.method_from_req(req)
ttfb = 0.0
if method == 'GET':
status_int = status_int_for_logging()
policy_index = get_policy_index(req.headers, resp_headers)
metric_name = self.statsd_metric_name(req, status_int, method)
metric_name_policy = self.statsd_metric_name_policy(
req, status_int, method, policy_index)
ttfb = time.time() - start_time
if metric_name:
self.access_logger.timing_since(
metric_name + '.first-byte.timing', start_time)
self.access_logger.timing(
metric_name + '.first-byte.timing', ttfb * 1000)
if metric_name_policy:
self.access_logger.timing_since(
metric_name_policy + '.first-byte.timing', start_time)
self.access_logger.timing(
metric_name_policy + '.first-byte.timing', ttfb * 1000)
bytes_sent = 0
client_disconnect = False
@ -406,7 +414,8 @@ class ProxyLoggingMiddleware(object):
status_int = status_int_for_logging(client_disconnect)
self.log_request(
req, status_int, input_proxy.bytes_received, bytes_sent,
start_time, time.time(), resp_headers=resp_headers)
start_time, time.time(), resp_headers=resp_headers,
ttfb=ttfb)
close_method = getattr(iterable, 'close', None)
if callable(close_method):
close_method()
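(The new ``{ttfb}`` and ``{pid}`` fields may now appear in the access-log
template; a sketch of a proxy-server.conf stanza, with the remaining field
names unchanged from the stock template::

    [filter:proxy-logging]
    use = egg:swift#proxy_logging
    log_msg_template = {client_ip} {method} {path} {status_int} {ttfb} {pid}
)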

View File

@ -128,9 +128,14 @@ class BaseAclHandler(object):
raise Exception('No permission to be checked exists')
if resource == 'object':
version_id = self.req.params.get('versionId')
if version_id is None:
query = {}
else:
query = {'version-id': version_id}
resp = self.req.get_acl_response(app, 'HEAD',
container, obj,
headers)
headers, query=query)
acl = resp.object_acl
elif resource == 'container':
resp = self.req.get_acl_response(app, 'HEAD',
@ -460,4 +465,9 @@ ACL_MAP = {
# Initiate Multipart Upload
('POST', 'HEAD', 'container'):
{'Permission': 'WRITE'},
# Versioning
('PUT', 'POST', 'container'):
{'Permission': 'WRITE'},
('DELETE', 'GET', 'container'):
{'Permission': 'WRITE'},
}

View File

@ -21,13 +21,16 @@ from six.moves.urllib.parse import quote
from swift.common import swob
from swift.common.http import HTTP_OK
from swift.common.utils import json, public, config_true_value
from swift.common.middleware.versioned_writes.object_versioning import \
DELETE_MARKER_CONTENT_TYPE
from swift.common.utils import json, public, config_true_value, Timestamp, \
get_swift_info
from swift.common.middleware.s3api.controllers.base import Controller
from swift.common.middleware.s3api.etree import Element, SubElement, tostring, \
fromstring, XMLSyntaxError, DocumentInvalid
from swift.common.middleware.s3api.s3response import HTTPOk, S3NotImplemented, \
InvalidArgument, \
from swift.common.middleware.s3api.etree import Element, SubElement, \
tostring, fromstring, XMLSyntaxError, DocumentInvalid
from swift.common.middleware.s3api.s3response import \
HTTPOk, S3NotImplemented, InvalidArgument, \
MalformedXML, InvalidLocationConstraint, NoSuchBucket, \
BucketNotEmpty, InternalError, ServiceUnavailable, NoSuchKey
from swift.common.middleware.s3api.utils import MULTIUPLOAD_SUFFIX
@ -94,36 +97,38 @@ class BucketController(Controller):
return HTTPOk(headers=resp.headers)
@public
def GET(self, req):
"""
Handle GET Bucket (List Objects) request
"""
max_keys = req.get_validated_param(
'max-keys', self.conf.max_bucket_listing)
# TODO: Separate max_bucket_listing and default_bucket_listing
tag_max_keys = max_keys
max_keys = min(max_keys, self.conf.max_bucket_listing)
def _parse_request_options(self, req, max_keys):
encoding_type = req.params.get('encoding-type')
if encoding_type is not None and encoding_type != 'url':
err_msg = 'Invalid Encoding Method specified in Request'
raise InvalidArgument('encoding-type', encoding_type, err_msg)
# in order to judge that truncated is valid, check whether
# max_keys + 1 th element exists in swift.
query = {
'format': 'json',
'limit': max_keys + 1,
}
if 'prefix' in req.params:
query.update({'prefix': req.params['prefix']})
query['prefix'] = req.params['prefix']
if 'delimiter' in req.params:
query.update({'delimiter': req.params['delimiter']})
query['delimiter'] = req.params['delimiter']
fetch_owner = False
if 'versions' in req.params:
query['versions'] = req.params['versions']
listing_type = 'object-versions'
if 'key-marker' in req.params:
query.update({'marker': req.params['key-marker']})
query['marker'] = req.params['key-marker']
version_marker = req.params.get('version-id-marker')
if version_marker is not None:
if version_marker != 'null':
try:
Timestamp(version_marker)
except ValueError:
raise InvalidArgument(
'version-id-marker',
req.params['version-id-marker'],
'Invalid version id specified')
query['version_marker'] = version_marker
elif 'version-id-marker' in req.params:
err_msg = ('A version-id marker cannot be specified without '
'a key marker.')
@ -132,132 +137,190 @@ class BucketController(Controller):
elif int(req.params.get('list-type', '1')) == 2:
listing_type = 'version-2'
if 'start-after' in req.params:
query.update({'marker': req.params['start-after']})
query['marker'] = req.params['start-after']
# continuation-token overrides start-after
if 'continuation-token' in req.params:
decoded = b64decode(req.params['continuation-token'])
if not six.PY2:
decoded = decoded.decode('utf8')
query.update({'marker': decoded})
query['marker'] = decoded
if 'fetch-owner' in req.params:
fetch_owner = config_true_value(req.params['fetch-owner'])
else:
listing_type = 'version-1'
if 'marker' in req.params:
query.update({'marker': req.params['marker']})
query['marker'] = req.params['marker']
return encoding_type, query, listing_type, fetch_owner
def _build_versions_result(self, req, objects, is_truncated):
elem = Element('ListVersionsResult')
SubElement(elem, 'Name').text = req.container_name
SubElement(elem, 'Prefix').text = req.params.get('prefix')
SubElement(elem, 'KeyMarker').text = req.params.get('key-marker')
SubElement(elem, 'VersionIdMarker').text = req.params.get(
'version-id-marker')
if is_truncated:
if 'name' in objects[-1]:
SubElement(elem, 'NextKeyMarker').text = \
objects[-1]['name']
SubElement(elem, 'NextVersionIdMarker').text = \
objects[-1].get('version') or 'null'
if 'subdir' in objects[-1]:
SubElement(elem, 'NextKeyMarker').text = \
objects[-1]['subdir']
SubElement(elem, 'NextVersionIdMarker').text = 'null'
return elem
def _build_base_listing_element(self, req):
elem = Element('ListBucketResult')
SubElement(elem, 'Name').text = req.container_name
SubElement(elem, 'Prefix').text = req.params.get('prefix')
return elem
def _build_list_bucket_result_type_one(self, req, objects, encoding_type,
is_truncated):
elem = self._build_base_listing_element(req)
SubElement(elem, 'Marker').text = req.params.get('marker')
if is_truncated and 'delimiter' in req.params:
if 'name' in objects[-1]:
name = objects[-1]['name']
else:
name = objects[-1]['subdir']
if encoding_type == 'url':
name = quote(name.encode('utf-8'))
SubElement(elem, 'NextMarker').text = name
# XXX: really? no NextMarker when no delimiter??
return elem
def _build_list_bucket_result_type_two(self, req, objects, is_truncated):
elem = self._build_base_listing_element(req)
if is_truncated:
if 'name' in objects[-1]:
SubElement(elem, 'NextContinuationToken').text = \
b64encode(objects[-1]['name'].encode('utf8'))
if 'subdir' in objects[-1]:
SubElement(elem, 'NextContinuationToken').text = \
b64encode(objects[-1]['subdir'].encode('utf8'))
if 'continuation-token' in req.params:
SubElement(elem, 'ContinuationToken').text = \
req.params['continuation-token']
if 'start-after' in req.params:
SubElement(elem, 'StartAfter').text = \
req.params['start-after']
SubElement(elem, 'KeyCount').text = str(len(objects))
return elem
def _finish_result(self, req, elem, tag_max_keys, encoding_type,
is_truncated):
SubElement(elem, 'MaxKeys').text = str(tag_max_keys)
if 'delimiter' in req.params:
SubElement(elem, 'Delimiter').text = req.params['delimiter']
if encoding_type == 'url':
SubElement(elem, 'EncodingType').text = encoding_type
SubElement(elem, 'IsTruncated').text = \
'true' if is_truncated else 'false'
def _add_subdir(self, elem, o, encoding_type):
common_prefixes = SubElement(elem, 'CommonPrefixes')
name = o['subdir']
if encoding_type == 'url':
name = quote(name.encode('utf-8'))
SubElement(common_prefixes, 'Prefix').text = name
def _add_object(self, req, elem, o, encoding_type, listing_type,
fetch_owner):
name = o['name']
if encoding_type == 'url':
name = quote(name.encode('utf-8'))
if listing_type == 'object-versions':
if o['content_type'] == DELETE_MARKER_CONTENT_TYPE:
contents = SubElement(elem, 'DeleteMarker')
else:
contents = SubElement(elem, 'Version')
SubElement(contents, 'Key').text = name
SubElement(contents, 'VersionId').text = o.get(
'version_id') or 'null'
if 'object_versioning' in get_swift_info():
SubElement(contents, 'IsLatest').text = (
'true' if o['is_latest'] else 'false')
else:
SubElement(contents, 'IsLatest').text = 'true'
else:
contents = SubElement(elem, 'Contents')
SubElement(contents, 'Key').text = name
SubElement(contents, 'LastModified').text = \
o['last_modified'][:-3] + 'Z'
if contents.tag != 'DeleteMarker':
if 's3_etag' in o:
# New-enough MUs are already in the right format
etag = o['s3_etag']
elif 'slo_etag' in o:
# SLOs may be in something *close* to the MU format
etag = '"%s-N"' % o['slo_etag'].strip('"')
else:
# Normal objects just use the MD5
etag = o['hash']
if len(etag) < 2 or etag[::len(etag) - 1] != '""':
# Normal objects just use the MD5
etag = '"%s"' % o['hash']
# This also catches sufficiently-old SLOs, but we have
# no way to identify those from container listings
# Otherwise, somebody somewhere (proxyfs, maybe?) made this
# look like an RFC-compliant ETag; we don't need to
# quote-wrap.
SubElement(contents, 'ETag').text = etag
SubElement(contents, 'Size').text = str(o['bytes'])
if fetch_owner or listing_type != 'version-2':
owner = SubElement(contents, 'Owner')
SubElement(owner, 'ID').text = req.user_id
SubElement(owner, 'DisplayName').text = req.user_id
if contents.tag != 'DeleteMarker':
SubElement(contents, 'StorageClass').text = 'STANDARD'
def _add_objects_to_result(self, req, elem, objects, encoding_type,
listing_type, fetch_owner):
for o in objects:
if 'subdir' in o:
self._add_subdir(elem, o, encoding_type)
else:
self._add_object(req, elem, o, encoding_type, listing_type,
fetch_owner)
@public
def GET(self, req):
"""
Handle GET Bucket (List Objects) request
"""
max_keys = req.get_validated_param(
'max-keys', self.conf.max_bucket_listing)
tag_max_keys = max_keys
# TODO: Separate max_bucket_listing and default_bucket_listing
max_keys = min(max_keys, self.conf.max_bucket_listing)
encoding_type, query, listing_type, fetch_owner = \
self._parse_request_options(req, max_keys)
resp = req.get_response(self.app, query=query)
objects = json.loads(resp.body)
# in order to judge that truncated is valid, check whether
# max_keys + 1 th element exists in swift.
is_truncated = max_keys > 0 and len(objects) > max_keys
objects = objects[:max_keys]
if listing_type == 'object-versions':
elem = Element('ListVersionsResult')
SubElement(elem, 'Name').text = req.container_name
SubElement(elem, 'Prefix').text = req.params.get('prefix')
SubElement(elem, 'KeyMarker').text = req.params.get('key-marker')
SubElement(elem, 'VersionIdMarker').text = req.params.get(
'version-id-marker')
if is_truncated:
if 'name' in objects[-1]:
SubElement(elem, 'NextKeyMarker').text = \
objects[-1]['name']
if 'subdir' in objects[-1]:
SubElement(elem, 'NextKeyMarker').text = \
objects[-1]['subdir']
SubElement(elem, 'NextVersionIdMarker').text = 'null'
elem = self._build_versions_result(req, objects, is_truncated)
elif listing_type == 'version-2':
elem = self._build_list_bucket_result_type_two(
req, objects, is_truncated)
else:
elem = Element('ListBucketResult')
SubElement(elem, 'Name').text = req.container_name
SubElement(elem, 'Prefix').text = req.params.get('prefix')
if listing_type == 'version-1':
SubElement(elem, 'Marker').text = req.params.get('marker')
if is_truncated and 'delimiter' in req.params:
if 'name' in objects[-1]:
name = objects[-1]['name']
else:
name = objects[-1]['subdir']
if encoding_type == 'url':
name = quote(name.encode('utf-8'))
SubElement(elem, 'NextMarker').text = name
elif listing_type == 'version-2':
if is_truncated:
if 'name' in objects[-1]:
SubElement(elem, 'NextContinuationToken').text = \
b64encode(objects[-1]['name'].encode('utf-8'))
if 'subdir' in objects[-1]:
SubElement(elem, 'NextContinuationToken').text = \
b64encode(objects[-1]['subdir'].encode('utf-8'))
if 'continuation-token' in req.params:
SubElement(elem, 'ContinuationToken').text = \
req.params['continuation-token']
if 'start-after' in req.params:
SubElement(elem, 'StartAfter').text = \
req.params['start-after']
SubElement(elem, 'KeyCount').text = str(len(objects))
SubElement(elem, 'MaxKeys').text = str(tag_max_keys)
if 'delimiter' in req.params:
SubElement(elem, 'Delimiter').text = req.params['delimiter']
if encoding_type == 'url':
SubElement(elem, 'EncodingType').text = encoding_type
SubElement(elem, 'IsTruncated').text = \
'true' if is_truncated else 'false'
for o in objects:
if 'subdir' not in o:
name = o['name']
if encoding_type == 'url':
name = quote(name.encode('utf-8'))
if listing_type == 'object-versions':
contents = SubElement(elem, 'Version')
SubElement(contents, 'Key').text = name
SubElement(contents, 'VersionId').text = 'null'
SubElement(contents, 'IsLatest').text = 'true'
else:
contents = SubElement(elem, 'Contents')
SubElement(contents, 'Key').text = name
SubElement(contents, 'LastModified').text = \
o['last_modified'][:-3] + 'Z'
if 's3_etag' in o:
# New-enough MUs are already in the right format
etag = o['s3_etag']
elif 'slo_etag' in o:
# SLOs may be in something *close* to the MU format
etag = '"%s-N"' % o['slo_etag'].strip('"')
else:
etag = o['hash']
if len(etag) < 2 or etag[::len(etag) - 1] != '""':
# Normal objects just use the MD5
etag = '"%s"' % o['hash']
# This also catches sufficiently-old SLOs, but we have
# no way to identify those from container listings
# Otherwise, somebody somewhere (proxyfs, maybe?) made this
# look like an RFC-compliant ETag; we don't need to
# quote-wrap.
SubElement(contents, 'ETag').text = etag
SubElement(contents, 'Size').text = str(o['bytes'])
if fetch_owner or listing_type != 'version-2':
owner = SubElement(contents, 'Owner')
SubElement(owner, 'ID').text = req.user_id
SubElement(owner, 'DisplayName').text = req.user_id
SubElement(contents, 'StorageClass').text = 'STANDARD'
for o in objects:
if 'subdir' in o:
common_prefixes = SubElement(elem, 'CommonPrefixes')
name = o['subdir']
if encoding_type == 'url':
name = quote(name.encode('utf-8'))
SubElement(common_prefixes, 'Prefix').text = name
elem = self._build_list_bucket_result_type_one(
req, objects, encoding_type, is_truncated)
self._finish_result(
req, elem, tag_max_keys, encoding_type, is_truncated)
self._add_objects_to_result(
req, elem, objects, encoding_type, listing_type, fetch_owner)
body = tostring(elem)
@ -297,6 +360,7 @@ class BucketController(Controller):
"""
Handle DELETE Bucket request
"""
# NB: object_versioning is responsible for cleaning up its container
if self.conf.allow_multipart_uploads:
self._delete_segments_bucket(req)
resp = req.get_response(self.app)

View File

@ -17,15 +17,15 @@ import copy
import json
from swift.common.constraints import MAX_OBJECT_NAME_LENGTH
from swift.common.utils import public, StreamingPile
from swift.common.utils import public, StreamingPile, get_swift_info
from swift.common.middleware.s3api.controllers.base import Controller, \
bucket_operation
from swift.common.middleware.s3api.etree import Element, SubElement, \
fromstring, tostring, XMLSyntaxError, DocumentInvalid
from swift.common.middleware.s3api.s3response import HTTPOk, S3NotImplemented, \
NoSuchKey, ErrorResponse, MalformedXML, UserKeyMustBeSpecified, \
AccessDenied, MissingRequestBodyError
from swift.common.middleware.s3api.s3response import HTTPOk, \
S3NotImplemented, NoSuchKey, ErrorResponse, MalformedXML, \
UserKeyMustBeSpecified, AccessDenied, MissingRequestBodyError
class MultiObjectDeleteController(Controller):
@ -35,12 +35,10 @@ class MultiObjectDeleteController(Controller):
"""
def _gen_error_body(self, error, elem, delete_list):
for key, version in delete_list:
if version is not None:
# TODO: delete the specific version of the object
raise S3NotImplemented()
error_elem = SubElement(elem, 'Error')
SubElement(error_elem, 'Key').text = key
if version is not None:
SubElement(error_elem, 'VersionId').text = version
SubElement(error_elem, 'Code').text = error.__class__.__name__
SubElement(error_elem, 'Message').text = error._msg
@ -105,21 +103,32 @@ class MultiObjectDeleteController(Controller):
body = self._gen_error_body(error, elem, delete_list)
return HTTPOk(body=body)
if any(version is not None for _key, version in delete_list):
# TODO: support deleting specific versions of objects
if 'object_versioning' not in get_swift_info() and any(
version not in ('null', None)
for _key, version in delete_list):
raise S3NotImplemented()
def do_delete(base_req, key, version):
req = copy.copy(base_req)
req.environ = copy.copy(base_req.environ)
req.object_name = key
if version:
req.params = {'version-id': version, 'symlink': 'get'}
try:
query = req.gen_multipart_manifest_delete_query(self.app)
try:
query = req.gen_multipart_manifest_delete_query(
self.app, version=version)
except NoSuchKey:
query = {}
if version:
query['version-id'] = version
query['symlink'] = 'get'
resp = req.get_response(self.app, method='DELETE', query=query,
headers={'Accept': 'application/json'})
# Have to read the response to actually do the SLO delete
if query:
if query.get('multipart-manifest'):
try:
delete_result = json.loads(resp.body)
if delete_result['Errors']:
@ -144,6 +153,12 @@ class MultiObjectDeleteController(Controller):
pass
except ErrorResponse as e:
return key, {'code': e.__class__.__name__, 'message': e._msg}
except Exception:
self.logger.exception(
'Unexpected Error handling DELETE of %r %r' % (
req.container_name, key))
return key, {'code': 'Server Error', 'message': 'Server Error'}
return key, None
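(With versioning support, the Multi-Object Delete body may now name
specific versions; the standard S3 request shape, version id value
illustrative::

    <Delete>
      <Object>
        <Key>photo.jpg</Key>
        <VersionId>1574360804.00000</VersionId>
      </Object>
    </Delete>
)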
with StreamingPile(self.conf.multi_delete_concurrency) as pile:

View File

@ -68,7 +68,7 @@ import time
import six
from swift.common.swob import Range, bytes_to_wsgi
from swift.common.swob import Range, bytes_to_wsgi, normalize_etag
from swift.common.utils import json, public, reiterate
from swift.common.db import utf8encode
from swift.common.request_helpers import get_container_update_override_key
@ -100,10 +100,19 @@ def _get_upload_info(req, app, upload_id):
container = req.container_name + MULTIUPLOAD_SUFFIX
obj = '%s/%s' % (req.object_name, upload_id)
# XXX: if we leave the copy-source header, somewhere later we might
# drop in a ?version-id=... query string that's utterly inappropriate
# for the upload marker. Until we get around to fixing that, just pop
# it off for now...
copy_source = req.headers.pop('X-Amz-Copy-Source', None)
try:
return req.get_response(app, 'HEAD', container=container, obj=obj)
except NoSuchKey:
raise NoSuchUpload(upload_id=upload_id)
finally:
# ...making sure to restore any copy-source before returning
if copy_source is not None:
req.headers['X-Amz-Copy-Source'] = copy_source
def _check_upload_info(req, app, upload_id):
@ -620,10 +629,7 @@ class UploadController(Controller):
raise InvalidPartOrder(upload_id=upload_id)
previous_number = part_number
etag = part_elem.find('./ETag').text
if len(etag) >= 2 and etag[0] == '"' and etag[-1] == '"':
# strip double quotes
etag = etag[1:-1]
etag = normalize_etag(part_elem.find('./ETag').text)
if len(etag) != 32 or any(c not in '0123456789abcdef'
for c in etag):
raise InvalidPart(upload_id=upload_id,

View File

@ -13,15 +13,21 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import json
from swift.common.http import HTTP_OK, HTTP_PARTIAL_CONTENT, HTTP_NO_CONTENT
from swift.common.request_helpers import update_etag_is_at_header
from swift.common.swob import Range, content_range_header_value
from swift.common.utils import public, list_from_csv
from swift.common.swob import Range, content_range_header_value, \
normalize_etag
from swift.common.utils import public, list_from_csv, get_swift_info
from swift.common.middleware.versioned_writes.object_versioning import \
DELETE_MARKER_CONTENT_TYPE
from swift.common.middleware.s3api.utils import S3Timestamp, sysmeta_header
from swift.common.middleware.s3api.controllers.base import Controller
from swift.common.middleware.s3api.s3response import S3NotImplemented, \
InvalidRange, NoSuchKey, InvalidArgument, HTTPNoContent
InvalidRange, NoSuchKey, InvalidArgument, HTTPNoContent, \
PreconditionFailed
class ObjectController(Controller):
@ -68,8 +74,7 @@ class ObjectController(Controller):
continue
had_match = True
for value in list_from_csv(req.headers[match_header]):
if value.startswith('"') and value.endswith('"'):
value = value[1:-1]
value = normalize_etag(value)
if value.endswith('-N'):
# Deal with fake S3-like etags for SLOs uploaded via Swift
req.headers[match_header] += ', ' + value[:-2]
@ -78,11 +83,20 @@ class ObjectController(Controller):
# Update where to look
update_etag_is_at_header(req, sysmeta_header('object', 'etag'))
resp = req.get_response(self.app)
object_name = req.object_name
version_id = req.params.get('versionId')
if version_id not in ('null', None) and \
'object_versioning' not in get_swift_info():
raise S3NotImplemented()
query = {} if version_id is None else {'version-id': version_id}
resp = req.get_response(self.app, query=query)
if req.method == 'HEAD':
resp.app_iter = None
if 'x-amz-meta-deleted' in resp.headers:
raise NoSuchKey(object_name)
for key in ('content-type', 'content-language', 'expires',
'cache-control', 'content-disposition',
'content-encoding'):
@ -125,12 +139,14 @@ class ObjectController(Controller):
req.headers['X-Amz-Copy-Source-Range'],
'Illegal copy header')
req.check_copy_source(self.app)
if not req.headers.get('Content-Type'):
# can't setdefault because it can be None for some reason
req.headers['Content-Type'] = 'binary/octet-stream'
resp = req.get_response(self.app)
if 'X-Amz-Copy-Source' in req.headers:
resp.append_copy_resp_body(req.controller_name,
req_timestamp.s3xmlformat)
# delete object metadata from response
for key in list(resp.headers.keys()):
if key.lower().startswith('x-amz-meta-'):
@ -143,20 +159,63 @@ class ObjectController(Controller):
def POST(self, req):
raise S3NotImplemented()
def _restore_on_delete(self, req):
resp = req.get_response(self.app, 'GET', req.container_name, '',
query={'prefix': req.object_name,
'versions': True})
if resp.status_int != HTTP_OK:
return resp
old_versions = json.loads(resp.body)
resp = None
for item in old_versions:
if item['content_type'] == DELETE_MARKER_CONTENT_TYPE:
resp = None
break
try:
resp = req.get_response(self.app, 'PUT', query={
'version-id': item['version_id']})
except PreconditionFailed:
self.logger.debug('skipping failed PUT?version-id=%s' %
item['version_id'])
continue
# if that worked, we'll go ahead and fix up the status code
resp.status_int = HTTP_NO_CONTENT
break
return resp
@public
def DELETE(self, req):
"""
Handle DELETE Object request
"""
if 'versionId' in req.params and \
req.params['versionId'] != 'null' and \
'object_versioning' not in get_swift_info():
raise S3NotImplemented()
try:
query = req.gen_multipart_manifest_delete_query(self.app)
try:
query = req.gen_multipart_manifest_delete_query(
self.app, version=req.params.get('versionId'))
except NoSuchKey:
query = {}
req.headers['Content-Type'] = None # Ignore client content-type
if 'versionId' in req.params:
query['version-id'] = req.params['versionId']
query['symlink'] = 'get'
resp = req.get_response(self.app, query=query)
if query and resp.status_int == HTTP_OK:
if query.get('multipart-manifest') and resp.status_int == HTTP_OK:
for chunk in resp.app_iter:
pass # drain the bulk-deleter response
resp.status = HTTP_NO_CONTENT
resp.body = b''
if resp.sw_headers.get('X-Object-Current-Version-Id') == 'null':
new_resp = self._restore_on_delete(req)
if new_resp:
resp = new_resp
except NoSuchKey:
# expect to raise NoSuchBucket when the bucket doesn't exist
req.get_container_info(self.app)

View File

@ -13,12 +13,16 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from swift.common.utils import public
from swift.common.utils import public, get_swift_info, config_true_value
from swift.common.middleware.s3api.controllers.base import Controller, \
bucket_operation
from swift.common.middleware.s3api.etree import Element, tostring
from swift.common.middleware.s3api.s3response import HTTPOk, S3NotImplemented
from swift.common.middleware.s3api.etree import Element, tostring, \
fromstring, XMLSyntaxError, DocumentInvalid, SubElement
from swift.common.middleware.s3api.s3response import HTTPOk, \
S3NotImplemented, MalformedXML
MAX_PUT_VERSIONING_BODY_SIZE = 10240
class VersioningController(Controller):
@ -36,13 +40,16 @@ class VersioningController(Controller):
"""
Handles GET Bucket versioning.
"""
req.get_response(self.app, method='HEAD')
sysmeta = req.get_container_info(self.app).get('sysmeta', {})
# Just report there is no versioning configured here.
elem = Element('VersioningConfiguration')
if sysmeta.get('versions-enabled'):
SubElement(elem, 'Status').text = (
'Enabled' if config_true_value(sysmeta['versions-enabled'])
else 'Suspended')
body = tostring(elem)
return HTTPOk(body=body, content_type="text/plain")
return HTTPOk(body=body, content_type=None)
@public
@bucket_operation
@ -50,4 +57,25 @@ class VersioningController(Controller):
"""
Handles PUT Bucket versioning.
"""
raise S3NotImplemented()
if 'object_versioning' not in get_swift_info():
raise S3NotImplemented()
xml = req.xml(MAX_PUT_VERSIONING_BODY_SIZE)
try:
elem = fromstring(xml, 'VersioningConfiguration')
status = elem.find('./Status').text
except (XMLSyntaxError, DocumentInvalid):
raise MalformedXML()
except Exception as e:
self.logger.error(e)
raise
if status not in ['Enabled', 'Suspended']:
raise MalformedXML()
# Set up versioning
# NB: object_versioning responsible for ensuring its container exists
req.headers['X-Versions-Enabled'] = str(status == 'Enabled').lower()
req.get_response(self.app, 'POST')
return HTTPOk()
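(The handler accepts the standard S3 versioning document, e.g., with the
namespace as in the S3 API::

    <VersioningConfiguration
        xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
      <Status>Enabled</Status>
    </VersioningConfiguration>

Any ``Status`` other than ``Enabled`` or ``Suspended`` is rejected as
``MalformedXML``, per the checks above.)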

View File

@ -34,7 +34,7 @@ from swift.common.http import HTTP_OK, HTTP_CREATED, HTTP_ACCEPTED, \
HTTP_PARTIAL_CONTENT, HTTP_NOT_MODIFIED, HTTP_PRECONDITION_FAILED, \
HTTP_REQUESTED_RANGE_NOT_SATISFIABLE, HTTP_LENGTH_REQUIRED, \
HTTP_BAD_REQUEST, HTTP_REQUEST_TIMEOUT, HTTP_SERVICE_UNAVAILABLE, \
is_success
HTTP_TOO_MANY_REQUESTS, HTTP_RATE_LIMITED, is_success
from swift.common.constraints import check_utf8
from swift.proxy.controllers.base import get_container_info, \
@ -53,7 +53,7 @@ from swift.common.middleware.s3api.s3response import AccessDenied, \
InternalError, NoSuchBucket, NoSuchKey, PreconditionFailed, InvalidRange, \
MissingContentLength, InvalidStorageClass, S3NotImplemented, InvalidURI, \
MalformedXML, InvalidRequest, RequestTimeout, InvalidBucketName, \
BadDigest, AuthorizationHeaderMalformed, \
BadDigest, AuthorizationHeaderMalformed, SlowDown, \
AuthorizationQueryParametersError, ServiceUnavailable
from swift.common.middleware.s3api.exception import NotS3Request, \
BadSwiftRequest
@ -253,15 +253,16 @@ class SigV4Mixin(object):
self.params.get('X-Amz-Algorithm'))
try:
cred_param = self._parse_credential(
self.params['X-Amz-Credential'])
sig = self.params['X-Amz-Signature']
swob.wsgi_to_str(self.params['X-Amz-Credential']))
sig = swob.wsgi_to_str(self.params['X-Amz-Signature'])
if not sig:
raise AccessDenied()
except KeyError:
raise AccessDenied()
try:
signed_headers = self.params['X-Amz-SignedHeaders']
signed_headers = swob.wsgi_to_str(
self.params['X-Amz-SignedHeaders'])
except KeyError:
# TODO: confirm whether this should be treated as a malformed request
raise AuthorizationHeaderMalformed()
@ -298,7 +299,7 @@ class SigV4Mixin(object):
:raises: AuthorizationHeaderMalformed
"""
auth_str = self.headers['Authorization']
auth_str = swob.wsgi_to_str(self.headers['Authorization'])
cred_param = self._parse_credential(auth_str.partition(
"Credential=")[2].split(',')[0])
sig = auth_str.partition("Signature=")[2].split(',')[0]
@ -371,7 +372,7 @@ class SigV4Mixin(object):
headers_to_sign = [
(key, value) for key, value in sorted(headers_lower_dict.items())
if key in self._signed_headers]
if swob.wsgi_to_str(key) in self._signed_headers]
if len(headers_to_sign) != len(self._signed_headers):
# NOTE: if we are missing the header suggested via
@ -646,9 +647,9 @@ class S3Request(swob.Request):
:raises: AccessDenied
"""
try:
access = self.params['AWSAccessKeyId']
expires = self.params['Expires']
sig = self.params['Signature']
access = swob.wsgi_to_str(self.params['AWSAccessKeyId'])
expires = swob.wsgi_to_str(self.params['Expires'])
sig = swob.wsgi_to_str(self.params['Signature'])
except KeyError:
raise AccessDenied()
@ -664,7 +665,7 @@ class S3Request(swob.Request):
:returns: a tuple of access_key and signature
:raises: AccessDenied
"""
auth_str = self.headers['Authorization']
auth_str = swob.wsgi_to_str(self.headers['Authorization'])
if not auth_str.startswith('AWS ') or ':' not in auth_str:
raise AccessDenied()
# This means signature format V2
@ -877,20 +878,16 @@ class S3Request(swob.Request):
except KeyError:
return None
if '?' in src_path:
src_path, qs = src_path.split('?', 1)
query = parse_qsl(qs, True)
if not query:
pass # ignore it
elif len(query) > 1 or query[0][0] != 'versionId':
raise InvalidArgument('X-Amz-Copy-Source',
self.headers['X-Amz-Copy-Source'],
'Unsupported copy source parameter.')
elif query[0][1] != 'null':
# TODO: once we support versioning, we'll need to translate
# src_path to the proper location in the versions container
raise S3NotImplemented('Versioning is not yet supported')
self.headers['X-Amz-Copy-Source'] = src_path
src_path, qs = src_path.partition('?')[::2]
parsed = parse_qsl(qs, True)
if not parsed:
query = {}
elif len(parsed) == 1 and parsed[0][0] == 'versionId':
query = {'version-id': parsed[0][1]}
else:
raise InvalidArgument('X-Amz-Copy-Source',
self.headers['X-Amz-Copy-Source'],
'Unsupported copy source parameter.')
src_path = unquote(src_path)
src_path = src_path if src_path.startswith('/') else ('/' + src_path)
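
To illustrate the parsing above (this snippet is not part of the change, just a demonstration of the stdlib behavior it relies on):

from six.moves.urllib.parse import parse_qsl

src_path, qs = '/bucket/key?versionId=abc123'.partition('?')[::2]
assert src_path == '/bucket/key'
assert parse_qsl(qs, True) == [('versionId', 'abc123')]
# Any other parameter, or more than one, trips the InvalidArgument
# branch above; a bare '/bucket/key' yields an empty list.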
@ -900,19 +897,15 @@ class S3Request(swob.Request):
headers.update(self._copy_source_headers())
src_resp = self.get_response(app, 'HEAD', src_bucket, src_obj,
headers=headers)
headers=headers, query=query)
if src_resp.status_int == 304: # pylint: disable-msg=E1101
raise PreconditionFailed()
self.headers['X-Amz-Copy-Source'] = \
'/' + self.headers['X-Amz-Copy-Source'].lstrip('/')
source_container, source_obj = \
split_path(self.headers['X-Amz-Copy-Source'], 1, 2, True)
if (self.container_name == source_container and
self.object_name == source_obj and
if (self.container_name == src_bucket and
self.object_name == src_obj and
self.headers.get('x-amz-metadata-directive',
'COPY') == 'COPY'):
'COPY') == 'COPY' and
not query):
raise InvalidRequest("This copy request is illegal "
"because it is trying to copy an "
"object to itself without "
@ -920,6 +913,12 @@ class S3Request(swob.Request):
"storage class, website redirect "
"location or encryption "
"attributes.")
# We've done some normalizing; write back so it's ready for
# to_swift_req
self.headers['X-Amz-Copy-Source'] = quote(src_path)
if query:
self.headers['X-Amz-Copy-Source'] += \
'?versionId=' + query['version-id']
return src_resp
def _canonical_uri(self):
@ -1064,6 +1063,7 @@ class S3Request(swob.Request):
account = self.account
env = self.environ.copy()
env['swift.infocache'] = self.environ.setdefault('swift.infocache', {})
def sanitize(value):
if set(value).issubset(string.printable):
@ -1109,8 +1109,10 @@ class S3Request(swob.Request):
env['HTTP_X_OBJECT_META_' + key[16:]] = sanitize(env[key])
del env[key]
if 'HTTP_X_AMZ_COPY_SOURCE' in env:
env['HTTP_X_COPY_FROM'] = env['HTTP_X_AMZ_COPY_SOURCE']
copy_from_version_id = ''
if 'HTTP_X_AMZ_COPY_SOURCE' in env and env['REQUEST_METHOD'] == 'PUT':
env['HTTP_X_COPY_FROM'], copy_from_version_id = env[
'HTTP_X_AMZ_COPY_SOURCE'].partition('?versionId=')[::2]
del env['HTTP_X_AMZ_COPY_SOURCE']
env['CONTENT_LENGTH'] = '0'
if env.pop('HTTP_X_AMZ_METADATA_DIRECTIVE', None) == 'REPLACE':
@ -1143,16 +1145,16 @@ class S3Request(swob.Request):
path = '/v1/%s' % (account)
env['PATH_INFO'] = path
query_string = ''
params = []
if query is not None:
params = []
for key, value in sorted(query.items()):
if value is not None:
params.append('%s=%s' % (key, quote(str(value))))
else:
params.append(key)
query_string = '&'.join(params)
env['QUERY_STRING'] = query_string
if copy_from_version_id and not (query and query.get('version-id')):
params.append('version-id=' + copy_from_version_id)
env['QUERY_STRING'] = '&'.join(params)
return swob.Request.blank(quote(path), environ=env, body=body,
headers=headers)
@ -1218,7 +1220,7 @@ class S3Request(swob.Request):
def _bucket_put_accepted_error(self, container, app):
sw_req = self.to_swift_req('HEAD', container, None)
info = get_container_info(sw_req.environ, app)
info = get_container_info(sw_req.environ, app, swift_source='S3')
sysmeta = info.get('sysmeta', {})
try:
acl = json.loads(sysmeta.get('s3api-acl',
@ -1292,6 +1294,7 @@ class S3Request(swob.Request):
HTTP_REQUEST_ENTITY_TOO_LARGE: EntityTooLarge,
HTTP_LENGTH_REQUIRED: MissingContentLength,
HTTP_REQUEST_TIMEOUT: RequestTimeout,
HTTP_PRECONDITION_FAILED: PreconditionFailed,
},
'POST': {
HTTP_NOT_FOUND: not_found_handler,
@ -1371,6 +1374,8 @@ class S3Request(swob.Request):
raise AccessDenied()
if status == HTTP_SERVICE_UNAVAILABLE:
raise ServiceUnavailable()
if status in (HTTP_RATE_LIMITED, HTTP_TOO_MANY_REQUESTS):
raise SlowDown()
raise InternalError('unexpected status code %d' % status)
@ -1429,7 +1434,7 @@ class S3Request(swob.Request):
# if we have already authenticated, yes we can use the account
# name like as AUTH_xxx for performance efficiency
sw_req = self.to_swift_req(app, self.container_name, None)
info = get_container_info(sw_req.environ, app)
info = get_container_info(sw_req.environ, app, swift_source='S3')
if is_success(info['status']):
return info
elif info['status'] == 404:
@ -1445,14 +1450,16 @@ class S3Request(swob.Request):
return headers_to_container_info(
headers, resp.status_int) # pylint: disable-msg=E1101
def gen_multipart_manifest_delete_query(self, app, obj=None):
def gen_multipart_manifest_delete_query(self, app, obj=None, version=None):
if not self.allow_multipart_uploads:
return None
query = {'multipart-manifest': 'delete'}
return {}
if not obj:
obj = self.object_name
resp = self.get_response(app, 'HEAD', obj=obj)
return query if resp.is_slo else None
query = {'symlink': 'get'}
if version is not None:
query['version-id'] = version
resp = self.get_response(app, 'HEAD', obj=obj, query=query)
return {'multipart-manifest': 'delete'} if resp.is_slo else {}
def set_acl_handler(self, handler):
pass


@ -17,6 +17,7 @@ import re
from collections import MutableMapping
from functools import partial
from swift.common import header_key_dict
from swift.common import swob
from swift.common.utils import config_true_value
from swift.common.request_helpers import is_sys_meta
@ -24,44 +25,25 @@ from swift.common.request_helpers import is_sys_meta
from swift.common.middleware.s3api.utils import snake_to_camel, \
sysmeta_prefix, sysmeta_header
from swift.common.middleware.s3api.etree import Element, SubElement, tostring
from swift.common.middleware.versioned_writes.object_versioning import \
DELETE_MARKER_CONTENT_TYPE
class HeaderKey(str):
class HeaderKeyDict(header_key_dict.HeaderKeyDict):
"""
A string object that normalizes string as S3 clients expect with title().
Similar to Swift's normal HeaderKeyDict class, but its key names are
normalized as S3 clients expect.
"""
def title(self):
if self.lower() == 'etag':
@staticmethod
def _title(s):
s = header_key_dict.HeaderKeyDict._title(s)
if s.lower() == 'etag':
# AWS Java SDK expects only 'ETag'.
return 'ETag'
if self.lower().startswith('x-amz-'):
if s.lower().startswith('x-amz-'):
# AWS headers returned by S3 are lowercase.
return self.lower()
return str.title(self)
class HeaderKeyDict(swob.HeaderKeyDict):
"""
Similar to the HeaderKeyDict class in Swift, but its key name is normalized
as S3 clients expect.
"""
def __getitem__(self, key):
return swob.HeaderKeyDict.__getitem__(self, HeaderKey(key))
def __setitem__(self, key, value):
return swob.HeaderKeyDict.__setitem__(self, HeaderKey(key), value)
def __contains__(self, key):
return swob.HeaderKeyDict.__contains__(self, HeaderKey(key))
def __delitem__(self, key):
return swob.HeaderKeyDict.__delitem__(self, HeaderKey(key))
def get(self, key, default=None):
return swob.HeaderKeyDict.get(self, HeaderKey(key), default)
def pop(self, key, default=None):
return swob.HeaderKeyDict.pop(self, HeaderKey(key), default)
return swob.bytes_to_wsgi(swob.wsgi_to_bytes(s).lower())
return s
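
A quick behavior sketch for the S3-flavored key normalization, assuming the import below resolves against this module:

from swift.common.middleware.s3api.s3response import HeaderKeyDict

headers = HeaderKeyDict()
headers['etag'] = '"d41d8cd98f00b204e9800998ecf8427e"'
headers['X-AMZ-VERSION-ID'] = 'null'
# 'ETag' keeps the AWS Java SDK happy; x-amz-* headers stay lowercase
assert sorted(headers) == ['ETag', 'x-amz-version-id']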
class S3ResponseBase(object):
@ -116,7 +98,7 @@ class S3Response(S3ResponseBase, swob.Response):
# Handle swift headers
for key, val in sw_headers.items():
_key = key.lower()
_key = swob.bytes_to_wsgi(swob.wsgi_to_bytes(key).lower())
if _key.startswith('x-object-meta-'):
# Note that AWS ignores user-defined headers with '=' in the
@ -129,9 +111,16 @@ class S3Response(S3ResponseBase, swob.Response):
'etag', 'last-modified', 'x-robots-tag',
'cache-control', 'expires'):
headers[key] = val
elif _key == 'x-object-version-id':
headers['x-amz-version-id'] = val
elif _key == 'x-copied-from-version-id':
headers['x-amz-copy-source-version-id'] = val
elif _key == 'x-static-large-object':
# for delete slo
self.is_slo = config_true_value(val)
elif _key == 'x-backend-content-type' and \
val == DELETE_MARKER_CONTENT_TYPE:
headers['x-amz-delete-marker'] = 'true'
# Check whether we stored the AWS-style etag on upload
override_etag = s3_sysmeta_headers.get(
@ -237,7 +226,7 @@ class ErrorResponse(S3ResponseBase, swob.HTTPException):
def _dict_to_etree(self, parent, d):
for key, value in d.items():
tag = re.sub('\W', '', snake_to_camel(key))
tag = re.sub(r'\W', '', snake_to_camel(key))
elem = SubElement(parent, tag)
if isinstance(value, (dict, MutableMapping)):
@ -501,7 +490,7 @@ class MalformedPOSTRequest(ErrorResponse):
class MalformedXML(ErrorResponse):
_status = '400 Bad Request'
_msg = 'The XML you provided was not well-formed or did not validate ' \
'against our published schema.'
'against our published schema'
class MaxMessageLengthExceeded(ErrorResponse):


@ -179,10 +179,12 @@ class S3Token(object):
self._verify = None
self._secret_cache_duration = int(conf.get('secret_cache_duration', 0))
if self._secret_cache_duration > 0:
if self._secret_cache_duration < 0:
raise ValueError('secret_cache_duration must be non-negative')
if self._secret_cache_duration:
try:
auth_plugin = keystone_loading.get_plugin_loader(
conf.get('auth_type'))
conf.get('auth_type', 'password'))
available_auth_options = auth_plugin.get_options()
auth_options = {}
for option in available_auth_options:


@ -95,8 +95,8 @@ def validate_bucket_name(name, dns_compliant_bucket_names):
elif name.endswith('.'):
# Bucket names must not end with dot
return False
elif re.match("^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.)"
"{3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])$",
elif re.match(r"^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.)"
r"{3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])$",
name):
# Bucket names cannot be formatted as an IP Address
return False


@ -331,7 +331,7 @@ from swift.common.swob import Request, HTTPBadRequest, HTTPServerError, \
HTTPMethodNotAllowed, HTTPRequestEntityTooLarge, HTTPLengthRequired, \
HTTPOk, HTTPPreconditionFailed, HTTPException, HTTPNotFound, \
HTTPUnauthorized, HTTPConflict, HTTPUnprocessableEntity, \
HTTPServiceUnavailable, Response, Range, \
HTTPServiceUnavailable, Response, Range, normalize_etag, \
RESPONSE_REASONS, str_to_wsgi, wsgi_to_str, wsgi_quote
from swift.common.utils import get_logger, config_true_value, \
get_valid_utf8_str, override_bytes_from_content_type, split_path, \
@ -340,7 +340,7 @@ from swift.common.utils import get_logger, config_true_value, \
Timestamp
from swift.common.request_helpers import SegmentedIterable, \
get_sys_meta_prefix, update_etag_is_at_header, resolve_etag_is_at_header, \
get_container_update_override_key
get_container_update_override_key, update_ignore_range_header
from swift.common.constraints import check_utf8
from swift.common.http import HTTP_NOT_FOUND, HTTP_UNAUTHORIZED, is_success
from swift.common.wsgi import WSGIContext, make_subrequest
@ -764,6 +764,9 @@ class SloGetContext(WSGIContext):
# saved, we can trust the object-server to respond appropriately
# to If-Match/If-None-Match requests.
update_etag_is_at_header(req, SYSMETA_SLO_ETAG)
# Tell the object server that if it's a manifest,
# we want the whole thing
update_ignore_range_header(req, 'X-Static-Large-Object')
resp_iter = self._app_call(req.environ)
# make sure this response is for a static large object manifest
@ -898,7 +901,7 @@ class SloGetContext(WSGIContext):
seg_dict['size_bytes'] = seg_dict.pop('bytes', None)
seg_dict['etag'] = seg_dict.pop('hash', None)
json_data = json.dumps(segments) # convert to string
json_data = json.dumps(segments, sort_keys=True) # convert to string
if six.PY3:
json_data = json_data.encode('utf-8')
@ -906,6 +909,8 @@ class SloGetContext(WSGIContext):
for header, value in resp_headers:
if header.lower() == 'content-length':
new_headers.append(('Content-Length', len(json_data)))
elif header.lower() == 'etag':
new_headers.append(('Etag', md5(json_data).hexdigest()))
else:
new_headers.append((header, value))
self._response_headers = new_headers
@ -1324,8 +1329,8 @@ class StaticLargeObject(object):
slo_etag.update(r.encode('ascii') if six.PY3 else r)
slo_etag = slo_etag.hexdigest()
client_etag = req.headers.get('Etag')
if client_etag and client_etag.strip('"') != slo_etag:
client_etag = normalize_etag(req.headers.get('Etag'))
if client_etag and client_etag != slo_etag:
err = HTTPUnprocessableEntity(request=req)
if heartbeat:
resp_dict = {}
@ -1414,6 +1419,9 @@ class StaticLargeObject(object):
segments = [{
'sub_slo': True,
'name': obj_path}]
if 'version-id' in req.params:
segments[0]['version_id'] = req.params['version-id']
while segments:
# We chose not to set the limit at max_manifest_segments
# in the case this value was decreased by operators.
@ -1466,6 +1474,9 @@ class StaticLargeObject(object):
new_env['REQUEST_METHOD'] = 'GET'
del(new_env['wsgi.input'])
new_env['QUERY_STRING'] = 'multipart-manifest=get'
if 'version-id' in req.params:
new_env['QUERY_STRING'] += \
'&version-id=' + req.params['version-id']
new_env['CONTENT_LENGTH'] = 0
new_env['HTTP_USER_AGENT'] = \
'%s MultipartDELETE' % new_env.get('HTTP_USER_AGENT')


@ -123,13 +123,13 @@ Example usage of this middleware via ``swift``:
"""
import cgi
import json
import six
import time
from six.moves.urllib.parse import urlparse
from swift.common.request_helpers import html_escape
from swift.common.utils import human_readable, split_path, config_true_value, \
quote, register_swift_info, get_logger
from swift.common.wsgi import make_env, WSGIContext
@ -243,7 +243,7 @@ class _StaticWebContext(WSGIContext):
'Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">\n' \
'<html>\n' \
'<head>\n' \
'<title>Listing of %s</title>\n' % cgi.escape(label)
'<title>Listing of %s</title>\n' % html_escape(label)
if self._listings_css:
body += ' <link rel="stylesheet" type="text/css" ' \
'href="%s" />\n' % self._build_css_path(prefix or '')
@ -290,7 +290,7 @@ class _StaticWebContext(WSGIContext):
'<html>\n' \
' <head>\n' \
' <title>Listing of %s</title>\n' % \
cgi.escape(label)
html_escape(label)
if self._listings_css:
body += ' <link rel="stylesheet" type="text/css" ' \
'href="%s" />\n' % (self._build_css_path(prefix))
@ -309,7 +309,7 @@ class _StaticWebContext(WSGIContext):
' <th class="colname">Name</th>\n' \
' <th class="colsize">Size</th>\n' \
' <th class="coldate">Date</th>\n' \
' </tr>\n' % cgi.escape(label)
' </tr>\n' % html_escape(label)
if prefix:
body += ' <tr id="parent" class="item">\n' \
' <td class="colname"><a href="../">../</a></td>\n' \
@ -327,7 +327,7 @@ class _StaticWebContext(WSGIContext):
' <td class="colsize">&nbsp;</td>\n' \
' <td class="coldate">&nbsp;</td>\n' \
' </tr>\n' % \
(quote(subdir), cgi.escape(subdir))
(quote(subdir), html_escape(subdir))
for item in listing:
if 'name' in item:
name = item['name'] if six.PY3 else \
@ -338,17 +338,17 @@ class _StaticWebContext(WSGIContext):
item['content_type'].encode('utf-8')
bytes = human_readable(item['bytes'])
last_modified = (
cgi.escape(item['last_modified'] if six.PY3 else
item['last_modified'].encode('utf-8')).
html_escape(item['last_modified'] if six.PY3 else
item['last_modified'].encode('utf-8')).
split('.')[0].replace('T', ' '))
body += ' <tr class="item %s">\n' \
' <td class="colname"><a href="%s">%s</a></td>\n' \
' <td class="colsize">%s</td>\n' \
' <td class="coldate">%s</td>\n' \
' </tr>\n' % \
(' '.join('type-' + cgi.escape(t.lower(), quote=True)
(' '.join('type-' + html_escape(t.lower())
for t in content_type.split('/')),
quote(name), cgi.escape(name),
quote(name), html_escape(name),
bytes, last_modified)
body += ' </table>\n' \
' </body>\n' \


@ -202,14 +202,16 @@ import os
from cgi import parse_header
from swift.common.utils import get_logger, register_swift_info, split_path, \
MD5_OF_EMPTY_STRING, close_if_possible, closing_if_possible
MD5_OF_EMPTY_STRING, close_if_possible, closing_if_possible, \
config_true_value, drain_and_close
from swift.common.constraints import check_account_format
from swift.common.wsgi import WSGIContext, make_subrequest
from swift.common.request_helpers import get_sys_meta_prefix, \
check_path_header, get_container_update_override_key
check_path_header, get_container_update_override_key, \
update_ignore_range_header
from swift.common.swob import Request, HTTPBadRequest, HTTPTemporaryRedirect, \
HTTPException, HTTPConflict, HTTPPreconditionFailed, wsgi_quote, \
wsgi_unquote
wsgi_unquote, status_map
from swift.common.http import is_success, HTTP_NOT_FOUND
from swift.common.exceptions import LinkIterError
from swift.common.header_key_dict import HeaderKeyDict
@ -227,6 +229,8 @@ TGT_ETAG_SYSMETA_SYMLINK_HDR = \
get_sys_meta_prefix('object') + 'symlink-target-etag'
TGT_BYTES_SYSMETA_SYMLINK_HDR = \
get_sys_meta_prefix('object') + 'symlink-target-bytes'
SYMLOOP_EXTEND = get_sys_meta_prefix('object') + 'symloop-extend'
ALLOW_RESERVED_NAMES = get_sys_meta_prefix('object') + 'allow-reserved-names'
def _validate_and_prep_request_headers(req):
@ -428,6 +432,7 @@ class SymlinkObjectContext(WSGIContext):
:param req: HTTP GET or HEAD object request
:returns: Response Iterator
"""
update_ignore_range_header(req, TGT_OBJ_SYSMETA_SYMLINK_HDR)
try:
return self._recursive_get_head(req)
except LinkIterError:
@ -463,7 +468,8 @@ class SymlinkObjectContext(WSGIContext):
resp_etag = self._response_header_value(
TGT_ETAG_SYSMETA_SYMLINK_HDR)
if symlink_target and (resp_etag or follow_softlinks):
close_if_possible(resp)
# Should be a zero-byte object
drain_and_close(resp)
found_etag = resp_etag or self._response_header_value('etag')
if target_etag and target_etag != found_etag:
raise HTTPConflict(
@ -475,11 +481,18 @@ class SymlinkObjectContext(WSGIContext):
raise LinkIterError()
# format: /<account name>/<container name>/<object name>
new_req = build_traversal_req(symlink_target)
self._loop_count += 1
if not config_true_value(
self._response_header_value(SYMLOOP_EXTEND)):
self._loop_count += 1
if config_true_value(
self._response_header_value(ALLOW_RESERVED_NAMES)):
new_req.headers['X-Backend-Allow-Reserved-Names'] = 'true'
return self._recursive_get_head(new_req, target_etag=resp_etag)
else:
final_etag = self._response_header_value('etag')
if final_etag and target_etag and target_etag != final_etag:
# do *not* drain; we don't know how big this is
close_if_possible(resp)
body = ('Object Etag %r does not match '
'X-Symlink-Target-Etag header %r')
@ -504,21 +517,31 @@ class SymlinkObjectContext(WSGIContext):
def _validate_etag_and_update_sysmeta(self, req, symlink_target_path,
etag):
if req.environ.get('swift.symlink_override'):
req.headers[TGT_ETAG_SYSMETA_SYMLINK_HDR] = etag
req.headers[TGT_BYTES_SYSMETA_SYMLINK_HDR] = \
req.headers[TGT_BYTES_SYMLINK_HDR]
return
# next we'll make sure the E-Tag matches a real object
new_req = make_subrequest(
req.environ, path=wsgi_quote(symlink_target_path), method='HEAD',
swift_source='SYM')
if req.allow_reserved_names:
new_req.headers['X-Backend-Allow-Reserved-Names'] = 'true'
self._last_target_path = symlink_target_path
resp = self._recursive_get_head(new_req, target_etag=etag,
follow_softlinks=False)
if self._get_status_int() == HTTP_NOT_FOUND:
raise HTTPConflict(
body='X-Symlink-Target does not exist',
request=req,
headers={
'Content-Type': 'text/plain',
'Content-Location': self._last_target_path})
if not is_success(self._get_status_int()):
return resp
drain_and_close(resp)
raise status_map[self._get_status_int()](request=req)
response_headers = HeaderKeyDict(self._response_headers)
# carry forward any etag update params (e.g. "slo_etag"), we'll append
# symlink_target_* params to this header after this method returns
@ -563,10 +586,8 @@ class SymlinkObjectContext(WSGIContext):
symlink_target_path, etag = _validate_and_prep_request_headers(req)
if etag:
resp = self._validate_etag_and_update_sysmeta(
self._validate_etag_and_update_sysmeta(
req, symlink_target_path, etag)
if resp is not None:
return resp
# N.B. TGT_ETAG_SYMLINK_HDR was converted as part of verifying it
symlink_usermeta_to_sysmeta(req.headers)
# Store info in container update that this object is a symlink.
@ -649,6 +670,9 @@ class SymlinkObjectContext(WSGIContext):
req.environ['swift.leave_relative_location'] = True
errmsg = 'The requested POST was applied to a symlink. POST ' +\
'directly to the target to apply requested metadata.'
for key, value in self._response_headers:
if key.lower().startswith('x-object-sysmeta-'):
headers[key] = value
raise HTTPTemporaryRedirect(
body=errmsg, headers=headers)
else:


@ -0,0 +1,51 @@
# Copyright (c) 2019 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Implements middleware for object versioning which comprises an instance of a
:class:`~swift.common.middleware.versioned_writes.legacy.
VersionedWritesMiddleware` combined with an instance of an
:class:`~swift.common.middleware.versioned_writes.object_versioning.
ObjectVersioningMiddleware`.
"""
from swift.common.middleware.versioned_writes. \
legacy import CLIENT_VERSIONS_LOC, CLIENT_HISTORY_LOC, \
VersionedWritesMiddleware
from swift.common.middleware.versioned_writes. \
object_versioning import ObjectVersioningMiddleware
from swift.common.utils import config_true_value, register_swift_info, \
get_swift_info
def filter_factory(global_conf, **local_conf):
"""Provides a factory function for loading versioning middleware."""
conf = global_conf.copy()
conf.update(local_conf)
if config_true_value(conf.get('allow_versioned_writes')):
register_swift_info('versioned_writes', allowed_flags=(
CLIENT_VERSIONS_LOC, CLIENT_HISTORY_LOC))
allow_object_versioning = config_true_value(conf.get(
'allow_object_versioning'))
if allow_object_versioning:
register_swift_info('object_versioning')
def versioning_filter(app):
if allow_object_versioning:
if 'symlink' not in get_swift_info():
raise ValueError('object versioning requires symlinks')
app = ObjectVersioningMiddleware(app, conf)
return VersionedWritesMiddleware(app, conf)
return versioning_filter
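
An illustrative proxy-server.conf filter section wiring this factory up (the values here are examples, not defaults):

[filter:versioned_writes]
use = egg:swift#versioned_writes
allow_versioned_writes = true
allow_object_versioning = true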


@ -14,6 +14,11 @@
# limitations under the License.
"""
.. note::
This middleware supports two legacy modes of object versioning that are
now replaced by a new mode. It is recommended to use the new
:ref:`Object Versioning <object_versioning>` mode for new containers.
Object versioning in swift is implemented by setting a flag on the container
to tell swift to version all objects in the container. The value of the flag is
the URL-encoded container name where the versions are stored (commonly referred
@ -225,7 +230,7 @@ import json
import time
from swift.common.utils import get_logger, Timestamp, \
register_swift_info, config_true_value, close_if_possible, FileLikeIter
config_true_value, close_if_possible, FileLikeIter, drain_and_close
from swift.common.request_helpers import get_sys_meta_prefix, \
copy_header_subset
from swift.common.wsgi import WSGIContext, make_pre_authed_request
@ -336,7 +341,8 @@ class VersionedWritesContext(WSGIContext):
lreq.environ['QUERY_STRING'] += '&reverse=on'
lresp = lreq.get_response(self.app)
if not is_success(lresp.status_int):
close_if_possible(lresp.app_iter)
# errors should be short
drain_and_close(lresp)
if lresp.status_int == HTTP_NOT_FOUND:
raise ListingIterNotFound()
elif is_client_error(lresp.status_int):
@ -377,6 +383,8 @@ class VersionedWritesContext(WSGIContext):
if source_resp.content_length is None or \
source_resp.content_length > MAX_FILE_SIZE:
# Consciously *don't* drain the response before closing;
# any logged 499 is actually rather appropriate here
close_if_possible(source_resp.app_iter)
return HTTPRequestEntityTooLarge(request=req)
@ -397,6 +405,7 @@ class VersionedWritesContext(WSGIContext):
put_req.environ['wsgi.input'] = FileLikeIter(source_resp.app_iter)
put_resp = put_req.get_response(self.app)
# the PUT was responsible for draining
close_if_possible(source_resp.app_iter)
return put_resp
@ -406,7 +415,8 @@ class VersionedWritesContext(WSGIContext):
"""
if is_success(resp.status_int):
return
close_if_possible(resp.app_iter)
# any error should be short
drain_and_close(resp)
if is_client_error(resp.status_int):
# missing container or bad permissions
raise HTTPPreconditionFailed(request=req)
@ -429,7 +439,7 @@ class VersionedWritesContext(WSGIContext):
# making any backend requests
if 'swift.authorize' in req.environ:
container_info = get_container_info(
req.environ, self.app)
req.environ, self.app, swift_source='VW')
req.acl = container_info.get('write_acl')
aresp = req.environ['swift.authorize'](req)
if aresp:
@ -439,7 +449,7 @@ class VersionedWritesContext(WSGIContext):
if get_resp.status_int == HTTP_NOT_FOUND:
# nothing to version, proceed with original request
close_if_possible(get_resp.app_iter)
drain_and_close(get_resp)
return
# check for any other errors
@ -457,10 +467,12 @@ class VersionedWritesContext(WSGIContext):
put_path_info = "/%s/%s/%s/%s" % (
api_version, account_name, versions_cont, vers_obj_name)
req.environ['QUERY_STRING'] = ''
put_resp = self._put_versioned_obj(req, put_path_info, get_resp)
self._check_response_error(req, put_resp)
close_if_possible(put_resp.app_iter)
# successful PUT response should be short
drain_and_close(put_resp)
def handle_obj_versions_put(self, req, versions_cont, api_version,
account_name, object_name):
@ -515,7 +527,7 @@ class VersionedWritesContext(WSGIContext):
marker_req.environ['swift.content_type_overridden'] = True
marker_resp = marker_req.get_response(self.app)
self._check_response_error(req, marker_resp)
close_if_possible(marker_resp.app_iter)
drain_and_close(marker_resp)
# successfully copied and created delete marker; safe to delete
return self.app
@ -529,7 +541,7 @@ class VersionedWritesContext(WSGIContext):
# if the version isn't there, keep trying with previous version
if get_resp.status_int == HTTP_NOT_FOUND:
close_if_possible(get_resp.app_iter)
drain_and_close(get_resp)
return False
self._check_response_error(req, get_resp)
@ -539,7 +551,7 @@ class VersionedWritesContext(WSGIContext):
put_resp = self._put_versioned_obj(req, put_path_info, get_resp)
self._check_response_error(req, put_resp)
close_if_possible(put_resp.app_iter)
drain_and_close(put_resp)
return get_path
def handle_obj_versions_delete_pop(self, req, versions_cont, api_version,
@ -570,7 +582,7 @@ class VersionedWritesContext(WSGIContext):
# making any backend requests
if 'swift.authorize' in req.environ:
container_info = get_container_info(
req.environ, self.app)
req.environ, self.app, swift_source='VW')
req.acl = container_info.get('write_acl')
aresp = req.environ['swift.authorize'](req)
if aresp:
@ -585,7 +597,7 @@ class VersionedWritesContext(WSGIContext):
req.environ, path=wsgi_quote(req.path_info), method='HEAD',
headers=obj_head_headers, swift_source='VW')
hresp = head_req.get_response(self.app)
close_if_possible(hresp.app_iter)
drain_and_close(hresp)
if hresp.status_int != HTTP_NOT_FOUND:
self._check_response_error(req, hresp)
@ -601,6 +613,7 @@ class VersionedWritesContext(WSGIContext):
break
obj_to_restore = bytes_to_wsgi(
version_to_restore['name'].encode('utf-8'))
req.environ['QUERY_STRING'] = ''
restored_path = self._restore_data(
req, versions_cont, api_version, account_name,
container_name, object_name, obj_to_restore)
@ -612,7 +625,7 @@ class VersionedWritesContext(WSGIContext):
method='DELETE', headers=auth_token_header,
swift_source='VW')
del_resp = old_del_req.get_response(self.app)
close_if_possible(del_resp.app_iter)
drain_and_close(del_resp)
if del_resp.status_int != HTTP_NOT_FOUND:
self._check_response_error(req, del_resp)
# else, well, it existed long enough to do the
@ -632,6 +645,7 @@ class VersionedWritesContext(WSGIContext):
# current object and delete the previous version
prev_obj_name = bytes_to_wsgi(
previous_version['name'].encode('utf-8'))
req.environ['QUERY_STRING'] = ''
restored_path = self._restore_data(
req, versions_cont, api_version, account_name,
container_name, object_name, prev_obj_name)
@ -773,7 +787,7 @@ class VersionedWritesMiddleware(object):
resp = None
is_enabled = config_true_value(allow_versioned_writes)
container_info = get_container_info(
req.environ, self.app)
req.environ, self.app, swift_source='VW')
# To maintain backwards compatibility, container version
# location could be stored as sysmeta or not, need to check both.
@ -856,16 +870,3 @@ class VersionedWritesMiddleware(object):
return error_response(env, start_response)
else:
return self.app(env, start_response)
def filter_factory(global_conf, **local_conf):
conf = global_conf.copy()
conf.update(local_conf)
if config_true_value(conf.get('allow_versioned_writes')):
register_swift_info('versioned_writes', allowed_flags=(
CLIENT_VERSIONS_LOC, CLIENT_HISTORY_LOC))
def obj_versions_filter(app):
return VersionedWritesMiddleware(app, conf)
return obj_versions_filter

File diff suppressed because it is too large


@ -13,7 +13,6 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import cgi
import os
import random
import re
@ -28,6 +27,7 @@ from swift.common.middleware.x_profile.exceptions import MethodNotAllowed
from swift.common.middleware.x_profile.exceptions import DataLoadFailure
from swift.common.middleware.x_profile.exceptions import ProfileException
from swift.common.middleware.x_profile.profile_model import Stats2
from swift.common.request_helpers import html_escape
PLOTLIB_INSTALLED = True
try:
@ -454,7 +454,7 @@ class HTMLViewer(object):
fmt = '<span id="L%d" rel="#L%d">%' + max_width\
+ 'd|<code>%s</code></span>'
for line in lines:
l = cgi.escape(line, quote=None)
l = html_escape(line)
i = i + 1
if i == lineno:
fmt2 = '<span id="L%d" style="background-color: \
@ -518,7 +518,7 @@ class HTMLViewer(object):
html.append('<td>-</td>')
else:
html.append('<td>%f</td>' % (float(ct) / cc))
nfls = cgi.escape(stats.func_std_string(func))
nfls = html_escape(stats.func_std_string(func))
if nfls.split(':')[0] not in ['', 'profile'] and\
os.path.isfile(nfls.split(':')[0]):
html.append('<td><a href="%s/%s%s?format=python#L%d">\
@ -532,5 +532,5 @@ class HTMLViewer(object):
--></a></td></tr>' % (app_path,
profile_id, nfls))
except Exception as ex:
html.append("Exception:" % str(ex))
html.append("Exception:" + str(ex))
return ''.join(html)


@ -29,6 +29,7 @@ import six
from swift.common.header_key_dict import HeaderKeyDict
from swift import gettext_ as _
from swift.common.constraints import AUTO_CREATE_ACCOUNT_PREFIX
from swift.common.storage_policy import POLICIES
from swift.common.exceptions import ListingIterError, SegmentError
from swift.common.http import is_success
@ -38,9 +39,10 @@ from swift.common.swob import HTTPBadRequest, \
from swift.common.utils import split_path, validate_device_partition, \
close_if_possible, maybe_multipart_byteranges_to_document_iters, \
multipart_byteranges_to_document_iters, parse_content_type, \
parse_content_range, csv_append, list_from_csv, Spliterator, quote
parse_content_range, csv_append, list_from_csv, Spliterator, quote, \
RESERVED
from swift.common.wsgi import make_subrequest
from swift.container.reconciler import MISPLACED_OBJECTS_ACCOUNT
OBJECT_TRANSIENT_SYSMETA_PREFIX = 'x-object-transient-sysmeta-'
@ -48,6 +50,15 @@ OBJECT_SYSMETA_CONTAINER_UPDATE_OVERRIDE_PREFIX = \
'x-object-sysmeta-container-update-override-'
if six.PY2:
import cgi
def html_escape(s, quote=True):
return cgi.escape(s, quote=quote)
else:
from html import escape as html_escape # noqa: F401
def get_param(req, name, default=None):
"""
Get parameters from an HTTP request ensuring proper handling of UTF-8
@ -83,6 +94,66 @@ def get_param(req, name, default=None):
return value
def constrain_req_limit(req, constrained_limit):
given_limit = get_param(req, 'limit')
limit = constrained_limit
if given_limit and given_limit.isdigit():
limit = int(given_limit)
if limit > constrained_limit:
raise HTTPPreconditionFailed(
request=req, body='Maximum limit is %d' % constrained_limit)
return limit
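
A usage sketch for the new limit helper (names are from this module):

from swift.common.swob import Request
from swift.common.request_helpers import constrain_req_limit

req = Request.blank('/v1/a/c?limit=42')
assert constrain_req_limit(req, 10000) == 42
# limit=20000 would raise HTTPPreconditionFailed with the body
# 'Maximum limit is 10000'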
def _validate_internal_name(name, type_='name'):
if RESERVED in name and not name.startswith(RESERVED):
raise HTTPBadRequest(body='Invalid reserved-namespace %s' % (type_))
def validate_internal_account(account):
"""
Validate internal account name.
:raises: HTTPBadRequest
"""
_validate_internal_name(account, 'account')
def validate_internal_container(account, container):
"""
Validate internal account and container names.
:raises: HTTPBadRequest
"""
if not account:
raise ValueError('Account is required')
validate_internal_account(account)
if container:
_validate_internal_name(container, 'container')
def validate_internal_obj(account, container, obj):
"""
Validate internal account, container and object names.
:raises: HTTPBadRequest
"""
if not account:
raise ValueError('Account is required')
if not container:
raise ValueError('Container is required')
validate_internal_container(account, container)
if obj and not (account.startswith(AUTO_CREATE_ACCOUNT_PREFIX) or
account == MISPLACED_OBJECTS_ACCOUNT):
_validate_internal_name(obj, 'object')
if container.startswith(RESERVED) and not obj.startswith(RESERVED):
raise HTTPBadRequest(body='Invalid user-namespace object '
'in reserved-namespace container')
elif obj.startswith(RESERVED) and not container.startswith(RESERVED):
raise HTTPBadRequest(body='Invalid reserved-namespace object '
'in user-namespace container')
def get_name_and_placement(request, minsegs=1, maxsegs=None,
rest_with_last=False):
"""
@ -273,6 +344,28 @@ def get_container_update_override_key(key):
return header.title()
def get_reserved_name(*parts):
"""
Generate a valid reserved name that joins the component parts.
:returns: a string
"""
if any(RESERVED in p for p in parts):
raise ValueError('Invalid reserved part in components')
return RESERVED + RESERVED.join(parts)
def split_reserved_name(name):
"""
Separate a valid reserved name into the component parts.
:returns: a list of strings
"""
if not name.startswith(RESERVED):
raise ValueError('Invalid reserved name')
return name.split(RESERVED)[1:]
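
Taken together with the RESERVED byte defined in swift.common.utils, the helpers behave like this (a small sketch):

from swift.common.request_helpers import (
    get_reserved_name, split_reserved_name)

name = get_reserved_name('versions', 'obj')
assert name == '\x00versions\x00obj'
assert split_reserved_name(name) == ['versions', 'obj']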
def remove_items(headers, condition):
"""
Removes items from a dict whose keys satisfy
@ -738,3 +831,21 @@ def resolve_etag_is_at_header(req, metadata):
alternate_etag = metadata[name]
break
return alternate_etag
def update_ignore_range_header(req, name):
"""
Helper function to update an X-Backend-Ignore-Range-If-Metadata-Present
header whose value is a list of header names which, if any are present
on an object, mean the object server should respond with a 200 instead
of a 206 or 416.
:param req: a swob Request
:param name: name of a header which, if found, indicates the proxy will
want the whole object
"""
if ',' in name:
# HTTP header names should not have commas but we'll check anyway
raise ValueError('Header name must not contain commas')
hdr = 'X-Backend-Ignore-Range-If-Metadata-Present'
req.headers[hdr] = csv_append(req.headers.get(hdr), name)
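
A minimal sketch of the helper above accumulating names on a blank request (the header names are the ones used elsewhere in this change):

from swift.common.swob import Request
from swift.common.request_helpers import update_ignore_range_header

req = Request.blank('/v1/a/c/o')
update_ignore_range_header(req, 'X-Static-Large-Object')
update_ignore_range_header(req, 'X-Object-Sysmeta-Symlink-Target')
assert req.headers['X-Backend-Ignore-Range-If-Metadata-Present'] == \
    'X-Static-Large-Object,X-Object-Sysmeta-Symlink-Target'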


@ -33,7 +33,7 @@ from six.moves import range
from time import time
from swift.common import exceptions
from swift.common.ring import RingData
from swift.common.ring.ring import RingData
from swift.common.ring.utils import tiers_for_dev, build_tier_tree, \
validate_and_normalize_address, validate_replicas_by_tier, pretty_dev


@ -52,7 +52,7 @@ from six.moves import urllib
from swift.common.header_key_dict import HeaderKeyDict
from swift.common.utils import UTC, reiterate, split_path, Timestamp, pairs, \
close_if_possible, closing_if_possible
close_if_possible, closing_if_possible, config_true_value
from swift.common.exceptions import InvalidTimestamp
@ -689,6 +689,12 @@ class Range(object):
return all_ranges
def normalize_etag(tag):
if tag and tag.startswith('"') and tag.endswith('"') and tag != '"':
return tag[1:-1]
return tag
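
Behavior sketch for normalize_etag:

from swift.common.swob import normalize_etag

assert normalize_etag('"abc"') == 'abc'
assert normalize_etag('abc') == 'abc'
assert normalize_etag('"') == '"'  # a lone quote is left untouched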
class Match(object):
"""
Wraps a Request's If-[None-]Match header as a friendly object.
@ -701,15 +707,10 @@ class Match(object):
tag = tag.strip()
if not tag:
continue
if tag.startswith('"') and tag.endswith('"'):
self.tags.add(tag[1:-1])
else:
self.tags.add(tag)
self.tags.add(normalize_etag(tag))
def __contains__(self, val):
if val and val.startswith('"') and val.endswith('"'):
val = val[1:-1]
return '*' in self.tags or val in self.tags
return '*' in self.tags or normalize_etag(val) in self.tags
def __repr__(self):
return '%s(%r)' % (
@ -1011,6 +1012,33 @@ class Request(object):
self.query_string = urllib.parse.urlencode(param_pairs,
encoding='latin-1')
def ensure_x_timestamp(self):
"""
Similar to :attr:`timestamp`, but the ``X-Timestamp`` header will be
set if not present.
:raises HTTPBadRequest: if X-Timestamp is already set but not a valid
:class:`~swift.common.utils.Timestamp`
:returns: the request's X-Timestamp header,
as a :class:`~swift.common.utils.Timestamp`
"""
# The container sync feature includes an x-timestamp header with
# requests. If present this is checked and preserved, otherwise a fresh
# timestamp is added.
if 'HTTP_X_TIMESTAMP' in self.environ:
try:
self._timestamp = Timestamp(self.environ['HTTP_X_TIMESTAMP'])
except ValueError:
raise HTTPBadRequest(
request=self, content_type='text/plain',
body='X-Timestamp should be a UNIX timestamp float value; '
'was %r' % self.environ['HTTP_X_TIMESTAMP'])
else:
self._timestamp = Timestamp.now()
# Always normalize it to the internal form
self.environ['HTTP_X_TIMESTAMP'] = self._timestamp.internal
return self._timestamp
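
A short sketch of the normalization this performs (the expected value follows from the Timestamp internal format):

from swift.common.swob import Request

req = Request.blank('/v1/a/c/o', headers={'X-Timestamp': '1234.5'})
ts = req.ensure_x_timestamp()
assert req.headers['X-Timestamp'] == ts.internal == '0000001234.50000'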
@property
def timestamp(self):
"""
@ -1063,6 +1091,11 @@ class Request(object):
"Provides the full url of the request"
return self.host_url + self.path_qs
@property
def allow_reserved_names(self):
return config_true_value(self.environ.get(
'HTTP_X_BACKEND_ALLOW_RESERVED_NAMES'))
def as_referer(self):
return self.method + ' ' + self.url


@ -45,7 +45,9 @@ from random import random, shuffle
from contextlib import contextmanager, closing
import ctypes
import ctypes.util
from copy import deepcopy
from optparse import OptionParser
import traceback
from tempfile import gettempdir, mkstemp, NamedTemporaryFile
import glob
@ -61,7 +63,7 @@ import eventlet.semaphore
import pkg_resources
from eventlet import GreenPool, sleep, Timeout
from eventlet.green import socket, threading
from eventlet.hubs import trampoline
import eventlet.hubs
import eventlet.queue
import netifaces
import codecs
@ -75,7 +77,7 @@ from six.moves import cPickle as pickle
from six.moves.configparser import (ConfigParser, NoSectionError,
NoOptionError, RawConfigParser)
from six.moves import range, http_client
from six.moves.urllib.parse import quote as _quote
from six.moves.urllib.parse import quote as _quote, unquote
from six.moves.urllib.parse import urlparse
from swift import gettext_ as _
@ -186,6 +188,10 @@ O_TMPFILE = getattr(os, 'O_TMPFILE', 0o20000000 | os.O_DIRECTORY)
IPV6_RE = re.compile("^\[(?P<address>.*)\](:(?P<port>[0-9]+))?$")
MD5_OF_EMPTY_STRING = 'd41d8cd98f00b204e9800998ecf8427e'
RESERVED_BYTE = b'\x00'
RESERVED_STR = u'\x00'
RESERVED = '\x00'
LOG_LINE_DEFAULT_FORMAT = '{remote_addr} - - [{time.d}/{time.b}/{time.Y}' \
':{time.H}:{time.M}:{time.S} +0000] ' \
@ -324,7 +330,7 @@ def get_swift_info(admin=False, disallowed_sections=None):
:returns: dictionary of information about the swift cluster.
"""
disallowed_sections = disallowed_sections or []
info = dict(_swift_info)
info = deepcopy(_swift_info)
for section in disallowed_sections:
key_to_pop = None
sub_section_dict = info
@ -975,14 +981,13 @@ class _LibcWrapper(object):
# spurious AttributeError.
func_handle = load_libc_function(
func_name, fail_if_missing=True)
self._func_handle = func_handle
except AttributeError:
# We pass fail_if_missing=True to load_libc_function and
# then ignore the error. It's weird, but otherwise we have
# to check if self._func_handle is noop_libc_function, and
# that's even weirder.
pass
else:
self._func_handle = func_handle
self._loaded = True
@property
@ -1227,7 +1232,7 @@ class Timestamp(object):
compatible for normalized timestamps which do not include an offset.
"""
def __init__(self, timestamp, offset=0, delta=0):
def __init__(self, timestamp, offset=0, delta=0, check_bounds=True):
"""
Create a new Timestamp.
@ -1271,10 +1276,11 @@ class Timestamp(object):
raise ValueError(
'delta must be greater than %d' % (-1 * self.raw))
self.timestamp = float(self.raw * PRECISION)
if self.timestamp < 0:
raise ValueError('timestamp cannot be negative')
if self.timestamp >= 10000000000:
raise ValueError('timestamp too large')
if check_bounds:
if self.timestamp < 0:
raise ValueError('timestamp cannot be negative')
if self.timestamp >= 10000000000:
raise ValueError('timestamp too large')
@classmethod
def now(cls, offset=0, delta=0):
@ -1348,26 +1354,34 @@ class Timestamp(object):
if other is None:
return False
if not isinstance(other, Timestamp):
other = Timestamp(other)
try:
other = Timestamp(other, check_bounds=False)
except ValueError:
return False
return self.internal == other.internal
def __ne__(self, other):
if other is None:
return True
if not isinstance(other, Timestamp):
other = Timestamp(other)
return self.internal != other.internal
return not (self == other)
def __lt__(self, other):
if other is None:
return False
if not isinstance(other, Timestamp):
other = Timestamp(other)
other = Timestamp(other, check_bounds=False)
if other.timestamp < 0:
return False
if other.timestamp >= 10000000000:
return True
return self.internal < other.internal
def __hash__(self):
return hash(self.internal)
def __invert__(self):
if self.offset:
raise ValueError('Cannot invert timestamps with offsets')
return Timestamp((999999999999999 - self.raw) * PRECISION)
def encode_timestamps(t1, t2=None, t3=None, explicit=False):
"""
@ -2453,7 +2467,7 @@ def get_hub():
return None
def drop_privileges(user, call_setsid=True):
def drop_privileges(user):
"""
Sets the userid/groupid of the current process and updates its HOME.
@ -2466,11 +2480,13 @@ def drop_privileges(user, call_setsid=True):
os.setgid(user[3])
os.setuid(user[2])
os.environ['HOME'] = user[5]
if call_setsid:
try:
os.setsid()
except OSError:
pass
def clean_up_daemon_hygiene():
try:
os.setsid()
except OSError:
pass
os.chdir('/')  # in case you need to rmdir where you started the daemon
os.umask(0o22) # ensure files are created with the correct privileges
@ -3287,6 +3303,8 @@ class GreenAsyncPile(object):
try:
self._responses.put(func(*args, **kwargs))
except Exception:
if eventlet.hubs.get_hub().debug_exceptions:
traceback.print_exception(*sys.exc_info())
self._responses.put(DEAD)
finally:
self._inflight -= 1
@ -4202,6 +4220,19 @@ def closing_if_possible(maybe_closable):
close_if_possible(maybe_closable)
def drain_and_close(response_or_app_iter):
"""
Drain and close a swob or WSGI response.
This ensures we don't log a 499 in the proxy just because we realized we
don't care about the body of an error.
"""
app_iter = getattr(response_or_app_iter, 'app_iter', response_or_app_iter)
for _chunk in app_iter:
pass
close_if_possible(app_iter)
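
A tiny usage sketch; the helper accepts either a swob response or a bare app_iter:

from swift.common.utils import drain_and_close

def app_iter():
    yield b'short error body'

drain_and_close(app_iter())  # fully consumed, then closed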
_rfc_token = r'[^()<>@,;:\"/\[\]?={}\x00-\x20\x7f]+'
_rfc_extension_pattern = re.compile(
r'(?:\s*;\s*(' + _rfc_token + r")\s*(?:=\s*(" + _rfc_token +
@ -5552,7 +5583,7 @@ class PipeMutex(object):
# Tell eventlet to suspend the current greenthread until
# self.rfd becomes readable. This will happen when someone
# else writes to self.wfd.
trampoline(self.rfd, read=True)
eventlet.hubs.trampoline(self.rfd, read=True)
def release(self):
"""
@ -5694,6 +5725,9 @@ def get_redirect_data(response):
if 'Location' not in headers:
return None
location = urlparse(headers['Location']).path
if config_true_value(headers.get('X-Backend-Location-Is-Quoted',
'false')):
location = unquote(location)
account, container, _junk = split_path(location, 2, 3, True)
timestamp_val = headers.get('X-Backend-Redirect-Timestamp')
try:


@ -18,15 +18,18 @@
from __future__ import print_function
import errno
import fcntl
import os
import signal
import time
from swift import gettext_ as _
import sys
from textwrap import dedent
import time
import eventlet
import eventlet.debug
from eventlet import greenio, GreenPool, sleep, wsgi, listen, Timeout
from eventlet import greenio, GreenPool, sleep, wsgi, listen, Timeout, \
websocket
from paste.deploy import loadwsgi
from eventlet.green import socket, ssl, os as green_os
from io import BytesIO
@ -42,10 +45,11 @@ from swift.common.swob import Request, wsgi_quote, wsgi_unquote, \
from swift.common.utils import capture_stdio, disable_fallocate, \
drop_privileges, get_logger, NullLogger, config_true_value, \
validate_configuration, get_hub, config_auto_int_value, \
reiterate
reiterate, clean_up_daemon_hygiene
SIGNUM_TO_NAME = {getattr(signal, n): n for n in dir(signal)
if n.startswith('SIG') and '_' not in n}
NOTIFY_FD_ENV_KEY = '__SWIFT_SERVER_NOTIFY_FD'
# Set maximum line size of message headers to be accepted.
wsgi.MAX_HEADER_LINE = constraints.MAX_HEADER_SIZE
@ -398,6 +402,9 @@ def loadapp(conf_file, global_conf=None, allow_modify_pipeline=True):
func = getattr(app, 'modify_wsgi_pipeline', None)
if func and allow_modify_pipeline:
func(PipelineWrapper(ctx))
# cache the freshly created app so we don't have to redo
# initialization checks and log startup messages again
ctx.app_context.create = lambda: app
return ctx.create()
@ -422,6 +429,13 @@ def load_app_config(conf_file):
class SwiftHttpProtocol(wsgi.HttpProtocol):
default_request_version = "HTTP/1.0"
def __init__(self, *args, **kwargs):
# See https://github.com/eventlet/eventlet/pull/590
self.pre_shutdown_bugfix_eventlet = not getattr(
websocket.WebSocketWSGI, '_WSGI_APP_ALWAYS_IDLE', None)
# Note this is not a new-style class, so super() won't work
wsgi.HttpProtocol.__init__(self, *args, **kwargs)
def log_request(self, *a):
"""
Turn off logging requests by the underlying WSGI software.
@ -528,6 +542,23 @@ class SwiftHttpProtocol(wsgi.HttpProtocol):
b'HTTP/1.1 100 Continue\r\n'
return environ
def _read_request_line(self):
# Note this is not a new-style class, so super() won't work
got = wsgi.HttpProtocol._read_request_line(self)
# See https://github.com/eventlet/eventlet/pull/590
if self.pre_shutdown_bugfix_eventlet:
self.conn_state[2] = wsgi.STATE_REQUEST
return got
def handle_one_request(self):
# Note this is not a new-style class, so super() won't work
got = wsgi.HttpProtocol.handle_one_request(self)
# See https://github.com/eventlet/eventlet/pull/590
if self.pre_shutdown_bugfix_eventlet:
if self.conn_state[2] != wsgi.STATE_CLOSE:
self.conn_state[2] = wsgi.STATE_IDLE
return got
class SwiftHttpProxiedProtocol(SwiftHttpProtocol):
"""
@ -662,7 +693,44 @@ def run_server(conf, logger, sock, global_conf=None):
pool.waitall()
class WorkersStrategy(object):
class StrategyBase(object):
"""
Some operations common to all strategy classes.
"""
def post_fork_hook(self):
"""
Called in each forked-off child process, prior to starting the actual
wsgi server, to perform any initialization such as drop privileges.
"""
drop_privileges(self.conf.get('user', 'swift'))
def shutdown_sockets(self):
"""
Shutdown any listen sockets.
"""
for sock in self.iter_sockets():
greenio.shutdown_safe(sock)
sock.close()
def set_close_on_exec_on_listen_sockets(self):
"""
Set the close-on-exec flag on any listen sockets.
"""
for sock in self.iter_sockets():
if six.PY2:
fcntl.fcntl(sock.fileno(), fcntl.F_SETFD, fcntl.FD_CLOEXEC)
else:
# Python 3.4 and later default to sockets having close-on-exec
# set (what PEP 0446 calls "non-inheritable"). This new method
# on socket objects is provided to toggle it.
sock.set_inheritable(False)
class WorkersStrategy(StrategyBase):
"""
WSGI server management strategy object for a single bind port and listen
socket shared by a configured number of forked-off workers.
@ -695,8 +763,7 @@ class WorkersStrategy(object):
def do_bind_ports(self):
"""
Bind the one listen socket for this strategy and drop privileges
(since the parent process will never need to bind again).
Bind the one listen socket for this strategy.
"""
try:
@ -705,7 +772,6 @@ class WorkersStrategy(object):
msg = 'bind_port wasn\'t properly set in the config file. ' \
'It must be explicitly set to a valid port number.'
return msg
drop_privileges(self.conf.get('user', 'swift'))
def no_fork_sock(self):
"""
@ -730,14 +796,6 @@ class WorkersStrategy(object):
while len(self.children) < self.worker_count:
yield self.sock, None
def post_fork_hook(self):
"""
Perform any initialization in a forked-off child process prior to
starting the wsgi server.
"""
pass
def log_sock_exit(self, sock, _unused):
"""
Log a server's exit.
@ -766,20 +824,27 @@ class WorkersStrategy(object):
"""
Called when a worker has exited.
NOTE: a re-exec'ed server can reap the dead worker PIDs from the old
server process that is being replaced as part of a service reload
(SIGUSR1). So we need to be robust to getting some unknown PID here.
:param int pid: The PID of the worker that exited.
"""
self.logger.error('Removing dead child %s from parent %s',
pid, os.getpid())
self.children.remove(pid)
if pid in self.children:
self.logger.error('Removing dead child %s from parent %s',
pid, os.getpid())
self.children.remove(pid)
else:
self.logger.info('Ignoring wait() result from unknown PID %s', pid)
def shutdown_sockets(self):
def iter_sockets(self):
"""
Shutdown any listen sockets.
Yields all known listen sockets.
"""
greenio.shutdown_safe(self.sock)
self.sock.close()
if self.sock:
yield self.sock
class PortPidState(object):
@ -901,7 +966,7 @@ class PortPidState(object):
self.sock_data_by_port[dead_port]['pids'][server_idx] = None
class ServersPerPortStrategy(object):
class ServersPerPortStrategy(StrategyBase):
"""
WSGI server management strategy object for an object-server with one listen
port per unique local port in the storage policy rings. The
@ -948,28 +1013,13 @@ class ServersPerPortStrategy(object):
def do_bind_ports(self):
"""
Bind one listen socket per unique local storage policy ring port. Then
do all the work of drop_privileges except the actual dropping of
privileges (each forked-off worker will do that post-fork in
:py:meth:`post_fork_hook`).
Bind one listen socket per unique local storage policy ring port.
"""
self._reload_bind_ports()
for port in self.bind_ports:
self._bind_port(port)
# The workers strategy drops privileges here, which we obviously cannot
# do if we want to support binding to low ports. But we do want some
# of the actions that drop_privileges did.
try:
os.setsid()
except OSError:
pass
# In case you need to rmdir where you started the daemon:
os.chdir('/')
# Ensure files are created with the correct privileges:
os.umask(0o22)
def no_fork_sock(self):
"""
This strategy does not support running in the foreground.
@ -1024,14 +1074,6 @@ class ServersPerPortStrategy(object):
# can close and forget them.
self.port_pid_state.forget_port(orphan_pair[0])
def post_fork_hook(self):
"""
Called in each child process, prior to starting the actual wsgi server,
to drop privileges.
"""
drop_privileges(self.conf.get('user', 'swift'), call_setsid=False)
def log_sock_exit(self, sock, server_idx):
"""
Log a server's exit.
@ -1050,6 +1092,7 @@ class ServersPerPortStrategy(object):
:py:meth:`new_worker_socks`.
:param int pid: The new worker process' PID
"""
port = self.port_pid_state.port_for_sock(sock)
self.logger.notice('Started child %d (PID %d) for port %d',
server_idx, pid, port)
@ -1064,14 +1107,13 @@ class ServersPerPortStrategy(object):
self.port_pid_state.forget_pid(pid)
def shutdown_sockets(self):
def iter_sockets(self):
"""
Shutdown any listen sockets.
Yields all known listen sockets.
"""
for sock in self.port_pid_state.all_socks():
greenio.shutdown_safe(sock)
sock.close()
yield sock
def run_wsgi(conf_path, app_section, *args, **kwargs):
@ -1127,10 +1169,22 @@ def run_wsgi(conf_path, app_section, *args, **kwargs):
print(error_msg)
return 1
# Do some daemonization process hygiene before we fork any children or run a
# server without forking.
clean_up_daemon_hygiene()
# Redirect errors to logger and close stdio. Do this *after* binding ports;
# we use this to signal that the service is ready to accept connections.
capture_stdio(logger)
# If necessary, signal an old copy of us that it's okay to shutdown its
# listen sockets now because ours are up and ready to receive connections.
reexec_signal_fd = os.getenv(NOTIFY_FD_ENV_KEY)
if reexec_signal_fd:
reexec_signal_fd = int(reexec_signal_fd)
os.write(reexec_signal_fd, str(os.getpid()).encode('utf8'))
os.close(reexec_signal_fd)
no_fork_sock = strategy.no_fork_sock()
if no_fork_sock:
run_server(conf, logger, no_fork_sock, global_conf=global_conf)
@ -1145,6 +1199,7 @@ def run_wsgi(conf_path, app_section, *args, **kwargs):
running_context = [True, None]
signal.signal(signal.SIGTERM, stop_with_signal)
signal.signal(signal.SIGHUP, stop_with_signal)
signal.signal(signal.SIGUSR1, stop_with_signal)
while running_context[0]:
for sock, sock_info in strategy.new_worker_socks():
@ -1152,6 +1207,7 @@ def run_wsgi(conf_path, app_section, *args, **kwargs):
if pid == 0:
signal.signal(signal.SIGHUP, signal.SIG_DFL)
signal.signal(signal.SIGTERM, signal.SIG_DFL)
signal.signal(signal.SIGUSR1, signal.SIG_DFL)
strategy.post_fork_hook()
run_server(conf, logger, sock)
strategy.log_sock_exit(sock, sock_info)
@ -1196,9 +1252,68 @@ def run_wsgi(conf_path, app_section, *args, **kwargs):
logger.error('Stopping with unexpected signal %r' %
running_context[1])
else:
logger.error('%s received', signame)
logger.error('%s received (%s)', signame, os.getpid())
if running_context[1] == signal.SIGTERM:
os.killpg(0, signal.SIGTERM)
elif running_context[1] == signal.SIGUSR1:
# Set up a pipe, fork off a child to handle cleanup later, and
# re-exec ourselves with an environment variable set to tell the
# new process which fd (one of the pipe ends) to write a byte to
# once listen socket setup is complete. That byte signals the
# forked-off child to finish shutting down the old listen sockets.
#
# NOTE: all strategies will now require the parent process to retain
# superuser privileges so that the re-exec'ed process can bind a new
# socket to the configured IP & port(s). We can't just reuse existing
# listen sockets because then the bind IP couldn't be changed.
#
# NOTE: we need to set all our listen sockets close-on-exec so the only
# open reference to those file descriptors will be in the forked-off
# child here who waits to shutdown the old server's listen sockets. If
# the re-exec'ed server's old listen sockets aren't closed-on-exec,
# then the old server can't actually ever exit.
strategy.set_close_on_exec_on_listen_sockets()
read_fd, write_fd = os.pipe()
orig_server_pid = os.getpid()
child_pid = os.fork()
if child_pid:
# parent; set env var for fds and reexec ourselves
os.close(read_fd)
os.putenv(NOTIFY_FD_ENV_KEY, str(write_fd))
myself = os.path.realpath(sys.argv[0])
logger.info("Old server PID=%d re'execing as: %r",
orig_server_pid, [myself] + list(sys.argv))
if hasattr(os, 'set_inheritable'):
# See https://www.python.org/dev/peps/pep-0446/
os.set_inheritable(write_fd, True)
os.execv(myself, sys.argv)
logger.error('Somehow lived past os.execv()?!')
exit('Somehow lived past os.execv()?!')
elif child_pid == 0:
# child
os.close(write_fd)
logger.info('Old server temporary child PID=%d waiting for '
"re-exec'ed PID=%d to signal readiness...",
os.getpid(), orig_server_pid)
try:
got_pid = os.read(read_fd, 30)
except Exception as e:
logger.warning('Unexpected exception while reading from '
'pipe:', exc_info=True)
else:
got_pid = got_pid.decode('ascii')
if got_pid:
logger.info('Old server temporary child PID=%d notified '
'to shutdown old listen sockets by PID=%s',
os.getpid(), got_pid)
else:
logger.warning('Old server temporary child PID=%d *NOT* '
'notified to shutdown old listen sockets; '
'the pipe just *died*.', os.getpid())
try:
os.close(read_fd)
except Exception:
pass
strategy.shutdown_sockets()
signal.signal(signal.SIGTERM, signal.SIG_IGN)
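
The handshake above is easier to follow in miniature. Below is a hedged, self-contained sketch of the same pipe/fork/re-exec pattern; the env-var name mirrors the diff's NOTIFY_FD_ENV_KEY, but the rest (the prints, the bare main()) is illustrative rather than Swift code:

# Minimal sketch of the SIGUSR1 re-exec handshake (illustrative only).
import os
import sys

NOTIFY_FD_ENV_KEY = '__SWIFT_SERVER_NOTIFY_FD'  # assumed name, per the diff

def main():
    notify_fd = os.getenv(NOTIFY_FD_ENV_KEY)
    if notify_fd:
        # We are the re-exec'ed server: report our PID once "ready".
        os.write(int(notify_fd), str(os.getpid()).encode('utf8'))
        os.close(int(notify_fd))
        print('new server %d running' % os.getpid())
        return
    read_fd, write_fd = os.pipe()
    if os.fork():
        # Old parent: keep the write end across exec, then replace ourselves.
        os.close(read_fd)
        os.set_inheritable(write_fd, True)
        os.environ[NOTIFY_FD_ENV_KEY] = str(write_fd)
        os.execv(sys.executable, [sys.executable] + sys.argv)
    else:
        # Temporary child: block until the new server writes its PID, then
        # (in the real server) shut down the old listen sockets.
        os.close(write_fd)
        got = os.read(read_fd, 30)
        print('old sockets may close; new server PID=%s' % got.decode())
        os.close(read_fd)

if __name__ == '__main__':
    main()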

View File

@ -22,6 +22,7 @@ from uuid import uuid4
import six
from six.moves import range
from six.moves.urllib.parse import unquote
import sqlite3
from eventlet import tpool
@ -30,7 +31,8 @@ from swift.common.exceptions import LockTimeout
from swift.common.utils import Timestamp, encode_timestamps, \
decode_timestamps, extract_swift_bytes, storage_directory, hash_path, \
ShardRange, renamer, find_shard_range, MD5_OF_EMPTY_STRING, mkdirs, \
get_db_files, parse_db_filename, make_db_file_path, split_path
get_db_files, parse_db_filename, make_db_file_path, split_path, \
RESERVED_BYTE
from swift.common.db import DatabaseBroker, utf8encode, BROKER_TIMEOUT, \
zero_like, DatabaseAlreadyExists
@ -626,20 +628,6 @@ class ContainerBroker(DatabaseBroker):
SET reported_put_timestamp = 0, reported_delete_timestamp = 0,
reported_object_count = 0, reported_bytes_used = 0''')
def _delete_db(self, conn, timestamp):
"""
Mark the DB as deleted
:param conn: DB connection object
:param timestamp: timestamp to mark as deleted
"""
conn.execute("""
UPDATE container_stat
SET delete_timestamp = ?,
status = 'DELETED',
status_changed_at = ?
WHERE delete_timestamp < ? """, (timestamp, timestamp, timestamp))
def _commit_puts_load(self, item_list, entry):
"""See :func:`swift.common.db.DatabaseBroker._commit_puts_load`"""
(name, timestamp, size, content_type, etag, deleted) = entry[:6]
@ -1042,7 +1030,8 @@ class ContainerBroker(DatabaseBroker):
def list_objects_iter(self, limit, marker, end_marker, prefix, delimiter,
path=None, storage_policy_index=0, reverse=False,
include_deleted=False, since_row=None,
transform_func=None, all_policies=False):
transform_func=None, all_policies=False,
allow_reserved=False):
"""
Get a list of objects sorted by name starting at marker onward, up
to limit entries. Entries will begin with the prefix and will not
@ -1068,6 +1057,8 @@ class ContainerBroker(DatabaseBroker):
:meth:`~_transform_record`; defaults to :meth:`~_transform_record`.
:param all_policies: if True, include objects for all storage policies
ignoring any value given for ``storage_policy_index``
:param allow_reserved: if True, include names containing the reserved
byte; by default such names are excluded
:returns: list of tuples of (name, created_at, size, content_type,
etag, deleted)
"""
@ -1124,6 +1115,9 @@ class ContainerBroker(DatabaseBroker):
elif prefix:
query_conditions.append('name >= ?')
query_args.append(prefix)
if not allow_reserved:
query_conditions.append('name >= ?')
query_args.append(chr(ord(RESERVED_BYTE) + 1))
query_conditions.append(deleted_key + deleted_arg)
if since_row:
query_conditions.append('ROWID > ?')
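
The extra name >= ? condition above is what hides the reserved namespace from ordinary listings: assuming RESERVED_BYTE is the null byte (as defined in swift.common.utils), every reserved name sorts below chr(1), so a single comparison excludes them all. A small sketch of the effect, not the broker's SQL path:

# Sketch only; assumes RESERVED_BYTE == '\x00' as in swift.common.utils.
RESERVED_BYTE = '\x00'

names = ['apple', RESERVED_BYTE + 'versions' + RESERVED_BYTE + 'obj', 'zebra']

def listable(name, allow_reserved=False):
    # mirrors the added condition: name >= chr(ord(RESERVED_BYTE) + 1)
    return allow_reserved or name >= chr(ord(RESERVED_BYTE) + 1)

print([n for n in names if listable(n)])        # ['apple', 'zebra']
print([n for n in names if listable(n, True)])  # all three names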
@ -1242,7 +1236,7 @@ class ContainerBroker(DatabaseBroker):
limit, marker, end_marker, prefix=None, delimiter=None, path=None,
reverse=False, include_deleted=include_deleted,
transform_func=self._record_to_dict, since_row=since_row,
all_policies=True
all_policies=True, allow_reserved=True
)
def _transform_record(self, record):
@ -1995,7 +1989,7 @@ class ContainerBroker(DatabaseBroker):
def set_sharding_sysmeta(self, key, value):
"""
Updates the broker's metadata metadata stored under the given key
Updates the broker's metadata stored under the given key
prefixed with a sharding specific namespace.
:param key: metadata key in the sharding metadata namespace.
@ -2047,7 +2041,14 @@ class ContainerBroker(DatabaseBroker):
``container`` attributes respectively.
"""
path = self.get_sharding_sysmeta('Root')
path = self.get_sharding_sysmeta('Quoted-Root')
hdr = 'X-Container-Sysmeta-Shard-Quoted-Root'
if path:
path = unquote(path)
else:
path = self.get_sharding_sysmeta('Root')
hdr = 'X-Container-Sysmeta-Shard-Root'
if not path:
# Ensure account/container get populated
self._populate_instance_cache()
@ -2059,8 +2060,8 @@ class ContainerBroker(DatabaseBroker):
self._root_account, self._root_container = split_path(
'/' + path, 2, 2)
except ValueError:
raise ValueError("Expected X-Container-Sysmeta-Shard-Root to be "
"of the form 'account/container', got %r" % path)
raise ValueError("Expected %s to be of the form "
"'account/container', got %r" % (hdr, path))
@property
def root_account(self):

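The value of the Quoted-Root indirection shown above is that quoting round-trips container names containing nulls or newlines, which the legacy unquoted Shard-Root header could not carry safely. A hedged sketch of the lookup order, with a plain dict standing in for the broker's sysmeta:

from six.moves.urllib.parse import quote, unquote

# Hypothetical stand-in for broker.get_sharding_sysmeta() results.
sysmeta = {'Quoted-Root': quote('AUTH_test/cont\nwith-newline')}

path = sysmeta.get('Quoted-Root')
hdr = 'X-Container-Sysmeta-Shard-Quoted-Root'
if path:
    path = unquote(path)  # quoting survives the awkward characters
else:
    path = sysmeta.get('Root')  # legacy, unquoted fallback
    hdr = 'X-Container-Sysmeta-Shard-Root'
print(hdr, '->', path.split('/', 1))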
View File

@ -23,6 +23,7 @@ from swift import gettext_ as _
from eventlet import Timeout
import six
from six.moves.urllib.parse import quote
import swift.common.db
from swift.container.sync_store import ContainerSyncStore
@ -32,14 +33,16 @@ from swift.container.replicator import ContainerReplicatorRpc
from swift.common.db import DatabaseAlreadyExists
from swift.common.container_sync_realms import ContainerSyncRealms
from swift.common.request_helpers import get_param, \
split_and_validate_path, is_sys_or_user_meta
split_and_validate_path, is_sys_or_user_meta, \
validate_internal_container, validate_internal_obj, constrain_req_limit
from swift.common.utils import get_logger, hash_path, public, \
Timestamp, storage_directory, validate_sync_to, \
config_true_value, timing_stats, replication, \
override_bytes_from_content_type, get_log_line, \
config_fallocate_value, fs_has_free_space, list_from_csv, \
ShardRange
from swift.common.constraints import valid_timestamp, check_utf8, check_drive
from swift.common.constraints import valid_timestamp, check_utf8, \
check_drive, AUTO_CREATE_ACCOUNT_PREFIX
from swift.common import constraints
from swift.common.bufferedhttp import http_connect
from swift.common.exceptions import ConnectionTimeout
@ -83,6 +86,33 @@ def gen_resp_headers(info, is_deleted=False):
return headers
def get_container_name_and_placement(req):
"""
Split and validate path for a container.
:param req: a swob request
:returns: a tuple of path parts as strings
"""
drive, part, account, container = split_and_validate_path(req, 4)
validate_internal_container(account, container)
return drive, part, account, container
def get_obj_name_and_placement(req):
"""
Split and validate path for an object.
:param req: a swob request
:returns: a tuple of path parts as strings
"""
drive, part, account, container, obj = split_and_validate_path(
req, 4, 5, True)
validate_internal_obj(account, container, obj)
return drive, part, account, container, obj
class ContainerController(BaseStorageServer):
"""WSGI Controller for the container server."""
@ -114,8 +144,17 @@ class ContainerController(BaseStorageServer):
self.replicator_rpc = ContainerReplicatorRpc(
self.root, DATADIR, ContainerBroker, self.mount_check,
logger=self.logger)
self.auto_create_account_prefix = \
conf.get('auto_create_account_prefix') or '.'
if conf.get('auto_create_account_prefix'):
self.logger.warning('Option auto_create_account_prefix is '
'deprecated. Configure '
'auto_create_account_prefix under the '
'swift-constraints section of '
'swift.conf. This option will '
'be ignored in a future release.')
self.auto_create_account_prefix = \
conf['auto_create_account_prefix']
else:
self.auto_create_account_prefix = AUTO_CREATE_ACCOUNT_PREFIX
if config_true_value(conf.get('allow_versions', 'f')):
self.save_headers.append('x-versions-location')
if 'allow_versions' in conf:
@ -125,6 +164,8 @@ class ContainerController(BaseStorageServer):
'be ignored in a future release.')
swift.common.db.DB_PREALLOCATION = \
config_true_value(conf.get('db_preallocation', 'f'))
swift.common.db.QUERY_LOGGING = \
config_true_value(conf.get('db_query_logging', 'f'))
self.sync_store = ContainerSyncStore(self.root,
self.logger,
self.mount_check)
@ -282,6 +323,11 @@ class ContainerController(BaseStorageServer):
"""
if not config_true_value(
req.headers.get('x-backend-accept-redirect', False)):
# We want to avoid fetching shard ranges for the (more
# time-sensitive) object-server update, so allow some misplaced
# objects to land between when we've started sharding and when the
# proxy learns about it. Note that this path is also used by old,
# pre-sharding updaters during a rolling upgrade.
return None
shard_ranges = broker.get_shard_ranges(
@ -294,7 +340,15 @@ class ContainerController(BaseStorageServer):
# in preference to the parent, which is the desired result.
containing_range = shard_ranges[0]
location = "/%s/%s" % (containing_range.name, obj_name)
headers = {'Location': location,
if location != quote(location) and not config_true_value(
req.headers.get('x-backend-accept-quoted-location', False)):
# Sender expects the destination to be unquoted, but it isn't safe
# to send unquoted. Eat the update for now and let the sharder
# move it later. Should only come up during rolling upgrades.
return None
headers = {'Location': quote(location),
'X-Backend-Location-Is-Quoted': 'true',
'X-Backend-Redirect-Timestamp':
containing_range.timestamp.internal}
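
Both sides of that negotiation can be sketched in a few lines: if the location needs quoting but the sender never advertised x-backend-accept-quoted-location, the update is deliberately eaten and left for the sharder. This is an illustration of the logic above, not the server's actual helpers:

from six.moves.urllib.parse import quote, unquote

def redirect_headers(location, accept_quoted):
    if location != quote(location) and not accept_quoted:
        return None  # eat the update; the sharder moves it later
    return {'Location': quote(location),
            'X-Backend-Location-Is-Quoted': 'true'}

hdrs = redirect_headers('.shards_a/c\nx/obj', accept_quoted=True)
if hdrs and hdrs.get('X-Backend-Location-Is-Quoted') == 'true':
    print(unquote(hdrs['Location']))  # receiver unquotes when flagged
print(redirect_headers('.shards_a/c\nx/obj', accept_quoted=False))  # None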
@ -311,8 +365,7 @@ class ContainerController(BaseStorageServer):
@timing_stats()
def DELETE(self, req):
"""Handle HTTP DELETE request."""
drive, part, account, container, obj = split_and_validate_path(
req, 4, 5, True)
drive, part, account, container, obj = get_obj_name_and_placement(req)
req_timestamp = valid_timestamp(req)
try:
check_drive(self.root, drive, self.mount_check)
@ -433,8 +486,7 @@ class ContainerController(BaseStorageServer):
@timing_stats()
def PUT(self, req):
"""Handle HTTP PUT request."""
drive, part, account, container, obj = split_and_validate_path(
req, 4, 5, True)
drive, part, account, container, obj = get_obj_name_and_placement(req)
req_timestamp = valid_timestamp(req)
if 'x-container-sync-to' in req.headers:
err, sync_to, realm, realm_key = validate_sync_to(
@ -514,8 +566,7 @@ class ContainerController(BaseStorageServer):
@timing_stats(sample_rate=0.1)
def HEAD(self, req):
"""Handle HTTP HEAD request."""
drive, part, account, container, obj = split_and_validate_path(
req, 4, 5, True)
drive, part, account, container, obj = get_obj_name_and_placement(req)
out_content_type = listing_formats.get_listing_content_type(req)
try:
check_drive(self.root, drive, self.mount_check)
@ -632,23 +683,14 @@ class ContainerController(BaseStorageServer):
:param req: an instance of :class:`swift.common.swob.Request`
:returns: an instance of :class:`swift.common.swob.Response`
"""
drive, part, account, container, obj = split_and_validate_path(
req, 4, 5, True)
drive, part, account, container, obj = get_obj_name_and_placement(req)
path = get_param(req, 'path')
prefix = get_param(req, 'prefix')
delimiter = get_param(req, 'delimiter')
marker = get_param(req, 'marker', '')
end_marker = get_param(req, 'end_marker')
limit = constraints.CONTAINER_LISTING_LIMIT
given_limit = get_param(req, 'limit')
limit = constrain_req_limit(req, constraints.CONTAINER_LISTING_LIMIT)
reverse = config_true_value(get_param(req, 'reverse'))
if given_limit and given_limit.isdigit():
limit = int(given_limit)
if limit > constraints.CONTAINER_LISTING_LIMIT:
return HTTPPreconditionFailed(
request=req,
body='Maximum limit is %d'
% constraints.CONTAINER_LISTING_LIMIT)
out_content_type = listing_formats.get_listing_content_type(req)
try:
check_drive(self.root, drive, self.mount_check)
@ -696,7 +738,7 @@ class ContainerController(BaseStorageServer):
container_list = src_broker.list_objects_iter(
limit, marker, end_marker, prefix, delimiter, path,
storage_policy_index=info['storage_policy_index'],
reverse=reverse)
reverse=reverse, allow_reserved=req.allow_reserved_names)
return self.create_listing(req, out_content_type, info, resp_headers,
broker.metadata, container_list, container)
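
constrain_req_limit() centralizes the limit parsing that the GET handler used to do inline, as removed above. A hedged approximation of its behavior (the real helper in swift.common.request_helpers takes a swob request and raises HTTPPreconditionFailed rather than ValueError):

def constrain_req_limit(params, constrained_limit):
    # approximation only; see swift.common.request_helpers for the real one
    given_limit = params.get('limit')
    limit = constrained_limit
    if given_limit and given_limit.isdigit():
        limit = int(given_limit)
        if limit > constrained_limit:
            raise ValueError('Maximum limit is %d' % constrained_limit)
    return limit

print(constrain_req_limit({}, 10000))               # default: 10000
print(constrain_req_limit({'limit': '50'}, 10000))  # 50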
@ -751,7 +793,7 @@ class ContainerController(BaseStorageServer):
"""
Handle HTTP UPDATE request (merge_items RPCs coming from the proxy).
"""
drive, part, account, container = split_and_validate_path(req, 4)
drive, part, account, container = get_container_name_and_placement(req)
req_timestamp = valid_timestamp(req)
try:
check_drive(self.root, drive, self.mount_check)
@ -775,7 +817,7 @@ class ContainerController(BaseStorageServer):
@timing_stats()
def POST(self, req):
"""Handle HTTP POST request."""
drive, part, account, container = split_and_validate_path(req, 4)
drive, part, account, container = get_container_name_and_placement(req)
req_timestamp = valid_timestamp(req)
if 'x-container-sync-to' in req.headers:
err, sync_to, realm, realm_key = validate_sync_to(
@ -800,7 +842,7 @@ class ContainerController(BaseStorageServer):
start_time = time.time()
req = Request(env)
self.logger.txn_id = req.headers.get('x-trans-id', None)
if not check_utf8(wsgi_to_str(req.path_info)):
if not check_utf8(wsgi_to_str(req.path_info), internal=True):
res = HTTPPreconditionFailed(body='Invalid UTF8 or contains NULL')
else:
try:

View File

@ -21,10 +21,11 @@ from random import random
import os
import six
from six.moves.urllib.parse import quote
from eventlet import Timeout
from swift.common import internal_client
from swift.common.constraints import check_drive
from swift.common.constraints import check_drive, AUTO_CREATE_ACCOUNT_PREFIX
from swift.common.direct_client import (direct_put_container,
DirectClientException)
from swift.common.exceptions import DeviceUnavailable
@ -333,8 +334,18 @@ class ContainerSharder(ContainerReplicator):
def __init__(self, conf, logger=None):
logger = logger or get_logger(conf, log_route='container-sharder')
super(ContainerSharder, self).__init__(conf, logger=logger)
self.shards_account_prefix = (
(conf.get('auto_create_account_prefix') or '.') + 'shards_')
if conf.get('auto_create_account_prefix'):
self.logger.warning('Option auto_create_account_prefix is '
'deprecated. Configure '
'auto_create_account_prefix under the '
'swift-constraints section of '
'swift.conf. This option will '
'be ignored in a future release.')
auto_create_account_prefix = \
self.conf['auto_create_account_prefix']
else:
auto_create_account_prefix = AUTO_CREATE_ACCOUNT_PREFIX
self.shards_account_prefix = (auto_create_account_prefix + 'shards_')
def percent_value(key, default):
try:
@ -564,13 +575,13 @@ class ContainerSharder(ContainerReplicator):
params=params)
except internal_client.UnexpectedResponse as err:
self.logger.warning("Failed to get shard ranges from %s: %s",
broker.root_path, err)
quote(broker.root_path), err)
return None
record_type = resp.headers.get('x-backend-record-type')
if record_type != 'shard':
err = 'unexpected record type %r' % record_type
self.logger.error("Failed to get shard ranges from %s: %s",
broker.root_path, err)
quote(broker.root_path), err)
return None
try:
@ -582,7 +593,7 @@ class ContainerSharder(ContainerReplicator):
except (ValueError, TypeError, KeyError) as err:
self.logger.error(
"Failed to get shard ranges from %s: invalid data: %r",
broker.root_path, err)
quote(broker.root_path), err)
return None
finally:
self.logger.txn_id = None
@ -645,7 +656,7 @@ class ContainerSharder(ContainerReplicator):
if not node:
raise DeviceUnavailable(
'No mounted devices found suitable for creating shard broker '
'for %s in partition %s' % (shard_range.name, part))
'for %s in partition %s' % (quote(shard_range.name), part))
shard_broker = ContainerBroker.create_broker(
os.path.join(self.root, node['device']), part, shard_range.account,
@ -655,7 +666,14 @@ class ContainerSharder(ContainerReplicator):
# Get the valid info into the broker.container, etc
shard_broker.get_info()
shard_broker.merge_shard_ranges(shard_range)
shard_broker.set_sharding_sysmeta('Root', root_path)
shard_broker.set_sharding_sysmeta('Quoted-Root', quote(root_path))
# NB: we *used* to do
# shard_broker.set_sharding_sysmeta('Root', root_path)
# but that isn't safe for container names with nulls or newlines (or
# possibly some other characters). We consciously *don't* make any
# attempt to set the old meta; during an upgrade, some shards may think
# they are in fact roots, but it cleans up well enough once everyone's
# upgraded.
shard_broker.update_metadata({
'X-Container-Sysmeta-Sharding':
('True', Timestamp.now().internal)})
@ -690,8 +708,8 @@ class ContainerSharder(ContainerReplicator):
if warnings:
self.logger.warning(
'Audit failed for root %s (%s): %s' %
(broker.db_file, broker.path, ', '.join(warnings)))
'Audit failed for root %s (%s): %s',
broker.db_file, quote(broker.path), ', '.join(warnings))
self._increment_stat('audit_root', 'failure', statsd=True)
return False
@ -734,13 +752,13 @@ class ContainerSharder(ContainerReplicator):
if warnings:
self.logger.warning(
'Audit warnings for shard %s (%s): %s' %
(broker.db_file, broker.path, ', '.join(warnings)))
'Audit warnings for shard %s (%s): %s',
broker.db_file, quote(broker.path), ', '.join(warnings))
if errors:
self.logger.warning(
'Audit failed for shard %s (%s) - skipping: %s' %
(broker.db_file, broker.path, ', '.join(errors)))
'Audit failed for shard %s (%s) - skipping: %s',
broker.db_file, quote(broker.path), ', '.join(errors))
self._increment_stat('audit_shard', 'failure', statsd=True)
return False
@ -755,7 +773,7 @@ class ContainerSharder(ContainerReplicator):
broker.empty()):
broker.delete_db(Timestamp.now().internal)
self.logger.debug('Deleted shard container %s (%s)',
broker.db_file, broker.path)
broker.db_file, quote(broker.path))
self._increment_stat('audit_shard', 'success', statsd=True)
return True
@ -890,7 +908,7 @@ class ContainerSharder(ContainerReplicator):
if not success and responses.count(True) < quorum:
self.logger.warning(
'Failed to sufficiently replicate misplaced objects: %s in %s '
'(not removing)', dest_shard_range, broker.path)
'(not removing)', dest_shard_range, quote(broker.path))
return False
if broker.get_info()['id'] != info['id']:
@ -910,7 +928,7 @@ class ContainerSharder(ContainerReplicator):
if not success:
self.logger.warning(
'Refused to remove misplaced objects: %s in %s',
dest_shard_range, broker.path)
dest_shard_range, quote(broker.path))
return success
def _move_objects(self, src_broker, src_shard_range, policy_index,
@ -953,7 +971,7 @@ class ContainerSharder(ContainerReplicator):
if unplaced:
self.logger.warning(
'Failed to find destination for at least %s misplaced objects '
'in %s' % (unplaced, src_broker.path))
'in %s', unplaced, quote(src_broker.path))
# TODO: consider executing the replication jobs concurrently
for dest_shard_range, dest_args in dest_brokers.items():
@ -1039,7 +1057,7 @@ class ContainerSharder(ContainerReplicator):
their correct shard containers, False otherwise
"""
self.logger.debug('Looking for misplaced objects in %s (%s)',
broker.path, broker.db_file)
quote(broker.path), broker.db_file)
self._increment_stat('misplaced', 'attempted')
src_broker = src_broker or broker
if src_bounds is None:
@ -1078,10 +1096,12 @@ class ContainerSharder(ContainerReplicator):
own_shard_range = broker.get_own_shard_range()
shard_ranges = broker.get_shard_ranges()
if shard_ranges and shard_ranges[-1].upper >= own_shard_range.upper:
self.logger.debug('Scan already completed for %s', broker.path)
self.logger.debug('Scan already completed for %s',
quote(broker.path))
return 0
self.logger.info('Starting scan for shard ranges on %s', broker.path)
self.logger.info('Starting scan for shard ranges on %s',
quote(broker.path))
self._increment_stat('scanned', 'attempted')
start = time.time()
@ -1126,8 +1146,16 @@ class ContainerSharder(ContainerReplicator):
shard_range.update_state(ShardRange.CREATED)
headers = {
'X-Backend-Storage-Policy-Index': broker.storage_policy_index,
'X-Container-Sysmeta-Shard-Root': broker.root_path,
'X-Container-Sysmeta-Shard-Quoted-Root': quote(
broker.root_path),
'X-Container-Sysmeta-Sharding': True}
# NB: we *used* to send along
# 'X-Container-Sysmeta-Shard-Root': broker.root_path
# but that isn't safe for container names with nulls or newlines
# (or possibly some other characters). We consciously *don't* make
# any attempt to set the old meta; during an upgrade, some shards
# may think they are in fact roots, but it cleans up well enough
# once everyone's upgraded.
success = self._send_shard_ranges(
shard_range.account, shard_range.container,
[shard_range], headers=headers)
@ -1138,7 +1166,7 @@ class ContainerSharder(ContainerReplicator):
else:
self.logger.error(
'PUT of new shard container %r failed for %s.',
shard_range, broker.path)
shard_range, quote(broker.path))
self._increment_stat('created', 'failure', statsd=True)
# break, not continue, because elsewhere it is assumed that
# finding and cleaving shard ranges progresses linearly, so we
@ -1159,8 +1187,9 @@ class ContainerSharder(ContainerReplicator):
def _cleave_shard_range(self, broker, cleaving_context, shard_range):
self.logger.info("Cleaving '%s' from row %s into %s for %r",
broker.path, cleaving_context.last_cleave_to_row,
shard_range.name, shard_range)
quote(broker.path),
cleaving_context.last_cleave_to_row,
quote(shard_range.name), shard_range)
self._increment_stat('cleaved', 'attempted')
start = time.time()
policy_index = broker.storage_policy_index
@ -1194,7 +1223,7 @@ class ContainerSharder(ContainerReplicator):
shard_broker.merge_items(objects)
if objects is None:
self.logger.info("Cleaving '%s': %r - zero objects found",
broker.path, shard_range)
quote(broker.path), shard_range)
if shard_broker.get_info()['put_timestamp'] == put_timestamp:
# This was just created; don't need to replicate this
# SR because there was nothing there. So cleanup and
@ -1226,7 +1255,7 @@ class ContainerSharder(ContainerReplicator):
source_broker.get_syncs())
else:
self.logger.debug("Cleaving '%s': %r - shard db already in sync",
broker.path, shard_range)
quote(broker.path), shard_range)
replication_quorum = self.existing_shard_replication_quorum
if shard_range.includes(own_shard_range):
@ -1253,7 +1282,7 @@ class ContainerSharder(ContainerReplicator):
self.logger.info(
'Replicating new shard container %s for %s',
shard_broker.path, shard_broker.get_own_shard_range())
quote(shard_broker.path), shard_broker.get_own_shard_range())
success, responses = self._replicate_object(
shard_part, shard_broker.db_file, node_id)
@ -1266,7 +1295,7 @@ class ContainerSharder(ContainerReplicator):
# until each shard range has been successfully cleaved
self.logger.warning(
'Failed to sufficiently replicate cleaved shard %s for %s: '
'%s successes, %s required.', shard_range, broker.path,
'%s successes, %s required.', shard_range, quote(broker.path),
replication_successes, replication_quorum)
self._increment_stat('cleaved', 'failure', statsd=True)
return CLEAVE_FAILED
@ -1284,7 +1313,7 @@ class ContainerSharder(ContainerReplicator):
cleaving_context.store(broker)
self.logger.info(
'Cleaved %s for shard range %s in %gs.',
broker.path, shard_range, elapsed)
quote(broker.path), shard_range, elapsed)
self._increment_stat('cleaved', 'success', statsd=True)
return CLEAVE_SUCCESS
@ -1292,8 +1321,8 @@ class ContainerSharder(ContainerReplicator):
# Returns True if misplaced objects have been moved and the entire
# container namespace has been successfully cleaved, False otherwise
if broker.is_sharded():
self.logger.debug('Passing over already sharded container %s/%s',
broker.account, broker.container)
self.logger.debug('Passing over already sharded container %s',
quote(broker.path))
return True
cleaving_context = CleavingContext.load(broker)
@ -1303,7 +1332,7 @@ class ContainerSharder(ContainerReplicator):
# the *retiring* db.
self.logger.debug(
'Moving any misplaced objects from sharding container: %s',
broker.path)
quote(broker.path))
bounds = self._make_default_misplaced_object_bounds(broker)
cleaving_context.misplaced_done = self._move_misplaced_objects(
broker, src_broker=broker.get_brokers()[0],
@ -1312,7 +1341,7 @@ class ContainerSharder(ContainerReplicator):
if cleaving_context.cleaving_done:
self.logger.debug('Cleaving already complete for container %s',
broker.path)
quote(broker.path))
return cleaving_context.misplaced_done
ranges_todo = broker.get_shard_ranges(marker=cleaving_context.marker)
@ -1323,12 +1352,12 @@ class ContainerSharder(ContainerReplicator):
self.logger.debug('Continuing to cleave (%s done, %s todo): %s',
cleaving_context.ranges_done,
cleaving_context.ranges_todo,
broker.path)
quote(broker.path))
else:
cleaving_context.start()
cleaving_context.ranges_todo = len(ranges_todo)
self.logger.debug('Starting to cleave (%s todo): %s',
cleaving_context.ranges_todo, broker.path)
cleaving_context.ranges_todo, quote(broker.path))
ranges_done = []
for shard_range in ranges_todo:
@ -1357,7 +1386,8 @@ class ContainerSharder(ContainerReplicator):
# sure we *also* do that if we hit a failure right off the bat
cleaving_context.store(broker)
self.logger.debug(
'Cleaved %s shard ranges for %s', len(ranges_done), broker.path)
'Cleaved %s shard ranges for %s',
len(ranges_done), quote(broker.path))
return (cleaving_context.misplaced_done and
cleaving_context.cleaving_done)
@ -1386,11 +1416,11 @@ class ContainerSharder(ContainerReplicator):
else:
self.logger.warning(
'Failed to remove retiring db file for %s',
broker.path)
quote(broker.path))
else:
self.logger.warning(
'Repeat cleaving required for %r with context: %s'
% (broker.db_files[0], dict(cleaving_context)))
'Repeat cleaving required for %r with context: %s',
broker.db_files[0], dict(cleaving_context))
cleaving_context.reset()
cleaving_context.store(broker)
@ -1400,14 +1430,14 @@ class ContainerSharder(ContainerReplicator):
candidates = find_sharding_candidates(
broker, self.shard_container_threshold, shard_ranges)
if candidates:
self.logger.debug('Identified %s sharding candidates'
% len(candidates))
self.logger.debug('Identified %s sharding candidates',
len(candidates))
broker.merge_shard_ranges(candidates)
def _find_and_enable_shrinking_candidates(self, broker):
if not broker.is_sharded():
self.logger.warning('Cannot shrink a not yet sharded container %s',
broker.path)
quote(broker.path))
return
merge_pairs = find_shrinking_candidates(
@ -1456,7 +1486,7 @@ class ContainerSharder(ContainerReplicator):
broker.get_info() # make sure account/container are populated
state = broker.get_db_state()
self.logger.debug('Starting processing %s state %s',
broker.path, state)
quote(broker.path), state)
if not self._audit_container(broker):
return
@ -1492,8 +1522,8 @@ class ContainerSharder(ContainerReplicator):
else:
self.logger.debug(
'Own shard range in state %r but no shard ranges '
'and not leader; remaining unsharded: %s'
% (own_shard_range.state_text, broker.path))
'and not leader; remaining unsharded: %s',
own_shard_range.state_text, quote(broker.path))
if state == SHARDING:
if is_leader:
@ -1512,13 +1542,14 @@ class ContainerSharder(ContainerReplicator):
cleave_complete = self._cleave(broker)
if cleave_complete:
self.logger.info('Completed cleaving of %s', broker.path)
self.logger.info('Completed cleaving of %s',
quote(broker.path))
if self._complete_sharding(broker):
state = SHARDED
self._increment_stat('visited', 'completed', statsd=True)
else:
self.logger.debug('Remaining in sharding state %s',
broker.path)
quote(broker.path))
if state == SHARDED and broker.is_root_container():
if is_leader:
@ -1539,9 +1570,8 @@ class ContainerSharder(ContainerReplicator):
# simultaneously become deleted.
self._update_root_container(broker)
self.logger.debug('Finished processing %s/%s state %s',
broker.account, broker.container,
broker.get_db_state())
self.logger.debug('Finished processing %s state %s',
quote(broker.path), broker.get_db_state())
def _one_shard_cycle(self, devices_to_shard, partitions_to_shard):
"""

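Two cleanups recur throughout the sharder diff: pre-%-formatted log messages become logger arguments (so formatting is deferred until a record is actually emitted), and user-controlled paths pass through quote() so nulls and newlines cannot land raw in the logs. A small illustration:

import logging
from six.moves.urllib.parse import quote

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger('sharder-demo')

path = 'AUTH_test/100%\ncontainer'  # a hostile-looking container path

# old pattern: log.debug('Processing %s' % path) formats eagerly and
# writes the newline straight into the log line
# new pattern: args are deferred and control bytes are percent-encoded
log.debug('Processing %s', quote(path))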
View File

@ -36,13 +36,16 @@ from swift.common.internal_client import (
from swift.common.exceptions import ClientException
from swift.common.ring import Ring
from swift.common.ring.utils import is_local_device
from swift.common.swob import normalize_etag
from swift.common.utils import (
clean_content_type, config_true_value,
FileLikeIter, get_logger, hash_path, quote, validate_sync_to,
whataremyips, Timestamp, decode_timestamps)
from swift.common.daemon import Daemon
from swift.common.http import HTTP_UNAUTHORIZED, HTTP_NOT_FOUND
from swift.common.http import HTTP_UNAUTHORIZED, HTTP_NOT_FOUND, HTTP_CONFLICT
from swift.common.wsgi import ConfigString
from swift.common.middleware.versioned_writes.object_versioning import (
SYSMETA_VERSIONS_CONT, SYSMETA_VERSIONS_SYMLINK)
# The default internal client config body is to support upgrades without
@ -357,6 +360,13 @@ class ContainerSync(Daemon):
break
else:
return
if broker.metadata.get(SYSMETA_VERSIONS_CONT):
self.container_skips += 1
self.logger.increment('skips')
self.logger.warning('Skipping container %s/%s with '
'object versioning configured' % (
info['account'], info['container']))
return
if not broker.is_deleted():
sync_to = None
user_key = None
@ -560,7 +570,8 @@ class ContainerSync(Daemon):
logger=self.logger,
timeout=self.conn_timeout)
except ClientException as err:
if err.http_status != HTTP_NOT_FOUND:
if err.http_status not in (
HTTP_NOT_FOUND, HTTP_CONFLICT):
raise
self.container_deletes += 1
self.container_stats['deletes'] += 1
@ -593,6 +604,16 @@ class ContainerSync(Daemon):
headers = {}
body = None
exc = err
# skip object_versioning links; this is in case the container
# metadata is out of date
if headers.get(SYSMETA_VERSIONS_SYMLINK):
self.logger.info(
'Skipping versioning symlink %s/%s/%s ' % (
info['account'], info['container'],
row['name']))
return True
timestamp = Timestamp(headers.get('x-timestamp', 0))
if timestamp < ts_meta:
if exc:
@ -607,7 +628,7 @@ class ContainerSync(Daemon):
if key in headers:
del headers[key]
if 'etag' in headers:
headers['etag'] = headers['etag'].strip('"')
headers['etag'] = normalize_etag(headers['etag'])
if 'content-type' in headers:
headers['content-type'] = clean_content_type(
headers['content-type'])
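
normalize_etag() (newly imported from swift.common.swob above) replaces the scattered .strip('"') calls. Stripping exactly one balanced pair of quotes avoids mangling a value that merely begins or ends with a quote character. A hedged approximation of what it does:

def normalize_etag(tag):
    # approximation; the real function lives in swift.common.swob
    if tag and tag.startswith('"') and tag.endswith('"'):
        return tag[1:-1]  # drop one surrounding pair, nothing more
    return tag

print(normalize_etag('"d41d8cd98f00b204e9800998ecf8427e"'))
print(normalize_etag('d41d8cd98f00b204e9800998ecf8427e'))  # unchanged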

View File

@ -6,15 +6,16 @@
# Andi Chandler <andi@gowling.com>, 2016. #zanata
# Andreas Jaeger <jaegerandi@gmail.com>, 2016. #zanata
# Andi Chandler <andi@gowling.com>, 2018. #zanata
# Andi Chandler <andi@gowling.com>, 2019. #zanata
msgid ""
msgstr ""
"Project-Id-Version: swift VERSION\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2019-10-04 06:59+0000\n"
"POT-Creation-Date: 2019-11-14 20:48+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2018-08-08 09:56+0000\n"
"PO-Revision-Date: 2019-11-14 11:34+0000\n"
"Last-Translator: Andi Chandler <andi@gowling.com>\n"
"Language: en_GB\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
@ -608,6 +609,9 @@ msgstr "Error: unable to locate %s"
msgid "Exception fetching fragments for %r"
msgstr "Exception fetching fragments for %r"
msgid "Exception in top-level reconstruction loop"
msgstr "Exception in top-level reconstruction loop"
#, python-format
msgid "Exception in top-level replication loop: %s"
msgstr "Exception in top-level replication loop: %s"
@ -1066,6 +1070,9 @@ msgstr "Trying to get %(status_type)s status of PUT to %(path)s"
msgid "Trying to read during GET"
msgstr "Trying to read during GET"
msgid "Trying to read object during GET (retrying)"
msgstr "Trying to read object during GET (retrying)"
msgid "Trying to send to client"
msgstr "Trying to send to client"
@ -1117,6 +1124,14 @@ msgstr "Unable to read config from %s"
msgid "Unauth %(sync_from)r => %(sync_to)r"
msgstr "Unauth %(sync_from)r => %(sync_to)r"
#, python-format
msgid ""
"Unexpected fragment data type (not quarantined) %(datadir)s: %(type)s at "
"offset 0x%(offset)x"
msgstr ""
"Unexpected fragment data type (not quarantined) %(datadir)s: %(type)s at "
"offset 0x%(offset)x"
msgid "Unhandled exception"
msgstr "Unhandled exception"
@ -1162,6 +1177,13 @@ msgstr ""
"WARNING: Unable to modify scheduling priority of process. Keeping unchanged! "
"Check logs for more info. "
msgid ""
"WARNING: object-expirer.conf is deprecated. Move object-expirers' "
"configuration into object-server.conf."
msgstr ""
"WARNING: object-expirer.conf is deprecated. Move object-expirers' "
"configuration into object-server.conf."
#, python-format
msgid "Waited %(kill_wait)s seconds for %(server)s to die; giving up"
msgstr "Waited %(kill_wait)s seconds for %(server)s to die; giving up"

View File

@ -1754,7 +1754,7 @@ class BaseDiskFileWriter(object):
msg = 'open(%s, O_TMPFILE | O_WRONLY) failed: %s \
Falling back to using mkstemp()' \
% (self._datadir, os.strerror(err.errno))
self.logger.warning(msg)
self.logger.debug(msg)
self.manager.use_linkat = False
else:
raise
@ -3375,7 +3375,7 @@ class ECDiskFileManager(BaseDiskFileManager):
"""
Returns timestamp(s) and other info extracted from a policy specific
file name. For EC policy the data file name includes a fragment index
and possibly a durable marker, both of which which must be stripped off
and possibly a durable marker, both of which must be stripped off
to retrieve the timestamp.
:param filename: the file name including extension

View File

@ -25,6 +25,7 @@ import hashlib
from eventlet import sleep, Timeout
from eventlet.greenpool import GreenPool
from swift.common.constraints import AUTO_CREATE_ACCOUNT_PREFIX
from swift.common.daemon import Daemon
from swift.common.internal_client import InternalClient, UnexpectedResponse
from swift.common.utils import get_logger, dump_recon_cache, split_path, \
@ -112,8 +113,19 @@ class ObjectExpirer(Daemon):
self.reclaim_age = int(conf.get('reclaim_age', 604800))
def read_conf_for_queue_access(self, swift):
self.expiring_objects_account = \
(self.conf.get('auto_create_account_prefix') or '.') + \
if self.conf.get('auto_create_account_prefix'):
self.logger.warning('Option auto_create_account_prefix is '
'deprecated. Configure '
'auto_create_account_prefix under the '
'swift-constraints section of '
'swift.conf. This option will '
'be ignored in a future release.')
auto_create_account_prefix = \
self.conf['auto_create_account_prefix']
else:
auto_create_account_prefix = AUTO_CREATE_ACCOUNT_PREFIX
self.expiring_objects_account = auto_create_account_prefix + \
(self.conf.get('expiring_objects_account_name') or
'expiring_objects')
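
The same deprecation dance appears in the expirer, the object server, the container server, and the sharder: honor a still-configured per-daemon auto_create_account_prefix (with a warning), otherwise fall back to the cluster-wide constraint. Factored out, the shared pattern is roughly:

import logging

AUTO_CREATE_ACCOUNT_PREFIX = '.'  # default from swift.common.constraints

def resolve_prefix(conf, logger):
    if conf.get('auto_create_account_prefix'):
        logger.warning('Option auto_create_account_prefix is deprecated. '
                       'Configure it under the swift-constraints section '
                       'of swift.conf.')
        return conf['auto_create_account_prefix']
    return AUTO_CREATE_ACCOUNT_PREFIX

log = logging.getLogger('demo')
print(resolve_prefix({}, log))                                   # '.'
print(resolve_prefix({'auto_create_account_prefix': '-'}, log))  # '-'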

View File

@ -687,7 +687,7 @@ class ObjectReconstructor(Daemon):
handoffs.
To avoid conflicts placing frags we'll skip through the handoffs and
only yield back those that are offset equal to to the given primary
only yield back those that are offset equal to the given primary
node index.
Nodes returned from this iterator will have 'backend_index' set.

View File

@ -17,6 +17,7 @@
import six
import six.moves.cPickle as pickle
from six.moves.urllib.parse import unquote
import json
import os
import multiprocessing
@ -35,10 +36,11 @@ from swift.common.utils import public, get_logger, \
normalize_delete_at_timestamp, get_log_line, Timestamp, \
get_expirer_container, parse_mime_headers, \
iter_multipart_mime_documents, extract_swift_bytes, safe_json_loads, \
config_auto_int_value, split_path, get_redirect_data, normalize_timestamp
config_auto_int_value, split_path, get_redirect_data, \
normalize_timestamp
from swift.common.bufferedhttp import http_connect
from swift.common.constraints import check_object_creation, \
valid_timestamp, check_utf8
valid_timestamp, check_utf8, AUTO_CREATE_ACCOUNT_PREFIX
from swift.common.exceptions import ConnectionTimeout, DiskFileQuarantined, \
DiskFileNotExist, DiskFileCollision, DiskFileNoSpace, DiskFileDeleted, \
DiskFileDeviceUnavailable, DiskFileExpired, ChunkReadTimeout, \
@ -51,13 +53,13 @@ from swift.common.base_storage_server import BaseStorageServer
from swift.common.header_key_dict import HeaderKeyDict
from swift.common.request_helpers import get_name_and_placement, \
is_user_meta, is_sys_or_user_meta, is_object_transient_sysmeta, \
resolve_etag_is_at_header, is_sys_meta
resolve_etag_is_at_header, is_sys_meta, validate_internal_obj
from swift.common.swob import HTTPAccepted, HTTPBadRequest, HTTPCreated, \
HTTPInternalServerError, HTTPNoContent, HTTPNotFound, \
HTTPPreconditionFailed, HTTPRequestTimeout, HTTPUnprocessableEntity, \
HTTPClientDisconnect, HTTPMethodNotAllowed, Request, Response, \
HTTPInsufficientStorage, HTTPForbidden, HTTPException, HTTPConflict, \
HTTPServerError, wsgi_to_bytes, wsgi_to_str
HTTPServerError, wsgi_to_bytes, wsgi_to_str, normalize_etag
from swift.obj.diskfile import RESERVED_DATAFILE_META, DiskFileRouter
from swift.obj.expirer import build_task_obj
@ -89,6 +91,20 @@ def drain(file_like, read_size, timeout):
break
def get_obj_name_and_placement(request):
"""
Split and validate path for an object.
:param request: a swob request
:returns: a tuple of path parts and storage policy
"""
device, partition, account, container, obj, policy = \
get_name_and_placement(request, 5, 5, True)
validate_internal_obj(account, container, obj)
return device, partition, account, container, obj, policy
def _make_backend_fragments_header(fragments):
if fragments:
result = {}
@ -158,8 +174,18 @@ class ObjectController(BaseStorageServer):
for header in extra_allowed_headers:
if header not in RESERVED_DATAFILE_META:
self.allowed_headers.add(header)
self.auto_create_account_prefix = \
conf.get('auto_create_account_prefix') or '.'
if conf.get('auto_create_account_prefix'):
self.logger.warning('Option auto_create_account_prefix is '
'deprecated. Configure '
'auto_create_account_prefix under the '
'swift-constraints section of '
'swift.conf. This option will '
'be ignored in a future release.')
self.auto_create_account_prefix = \
conf['auto_create_account_prefix']
else:
self.auto_create_account_prefix = AUTO_CREATE_ACCOUNT_PREFIX
self.expiring_objects_account = self.auto_create_account_prefix + \
(conf.get('expiring_objects_account_name') or 'expiring_objects')
self.expiring_objects_container_divisor = \
@ -351,7 +377,6 @@ class ObjectController(BaseStorageServer):
contdevices = [d.strip() for d in
headers_in.get('X-Container-Device', '').split(',')]
contpartition = headers_in.get('X-Container-Partition', '')
contpath = headers_in.get('X-Backend-Container-Path')
if len(conthosts) != len(contdevices):
# This shouldn't happen unless there's a bug in the proxy,
@ -364,6 +389,12 @@ class ObjectController(BaseStorageServer):
'devices': headers_in.get('X-Container-Device', '')})
return
contpath = headers_in.get('X-Backend-Quoted-Container-Path')
if contpath:
contpath = unquote(contpath)
else:
contpath = headers_in.get('X-Backend-Container-Path')
if contpath:
try:
# TODO: this is very late in request handling to be validating
@ -603,7 +634,8 @@ class ObjectController(BaseStorageServer):
def POST(self, request):
"""Handle HTTP POST requests for the Swift Object Server."""
device, partition, account, container, obj, policy = \
get_name_and_placement(request, 5, 5, True)
get_obj_name_and_placement(request)
req_timestamp = valid_timestamp(request)
new_delete_at = int(request.headers.get('X-Delete-At') or 0)
if new_delete_at and new_delete_at < req_timestamp:
@ -732,8 +764,9 @@ class ObjectController(BaseStorageServer):
'PUT', account, container, obj, request, update_headers,
device, policy)
# Add sysmeta to response
resp_headers = {}
# Add current content-type and sysmeta to response
resp_headers = {
'X-Backend-Content-Type': content_type_headers['Content-Type']}
for key, value in orig_metadata.items():
if is_sys_meta('object', key):
resp_headers[key] = value
@ -926,8 +959,8 @@ class ObjectController(BaseStorageServer):
if (is_sys_or_user_meta('object', val[0]) or
is_object_transient_sysmeta(val[0])))
# N.B. footers_metadata is a HeaderKeyDict
received_etag = footers_metadata.get('etag', request.headers.get(
'etag', '')).strip('"')
received_etag = normalize_etag(footers_metadata.get(
'etag', request.headers.get('etag', '')))
if received_etag and received_etag != metadata['ETag']:
raise HTTPUnprocessableEntity(request=request)
@ -995,7 +1028,7 @@ class ObjectController(BaseStorageServer):
def PUT(self, request):
"""Handle HTTP PUT requests for the Swift Object Server."""
device, partition, account, container, obj, policy = \
get_name_and_placement(request, 5, 5, True)
get_obj_name_and_placement(request)
disk_file, fsize, orig_metadata = self._pre_create_checks(
request, device, partition, account, container, obj, policy)
writer = disk_file.writer(size=fsize)
@ -1037,7 +1070,7 @@ class ObjectController(BaseStorageServer):
def GET(self, request):
"""Handle HTTP GET requests for the Swift Object Server."""
device, partition, account, container, obj, policy = \
get_name_and_placement(request, 5, 5, True)
get_obj_name_and_placement(request)
request.headers.setdefault('X-Timestamp',
normalize_timestamp(time.time()))
req_timestamp = valid_timestamp(request)
@ -1054,6 +1087,14 @@ class ObjectController(BaseStorageServer):
try:
with disk_file.open(current_time=req_timestamp):
metadata = disk_file.get_metadata()
ignore_range_headers = set(
h.strip().lower()
for h in request.headers.get(
'X-Backend-Ignore-Range-If-Metadata-Present',
'').split(','))
if ignore_range_headers.intersection(
h.lower() for h in metadata):
request.headers.pop('Range', None)
obj_size = int(metadata['Content-Length'])
file_x_ts = Timestamp(metadata['X-Timestamp'])
keep_cache = (self.keep_cache_private or
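
The new X-Backend-Ignore-Range-If-Metadata-Present header lets an internal caller say "if this object turns out to carry one of these metadata keys (a symlink target, say), ignore my Range and send the whole thing". The match is just a case-insensitive intersection; a standalone sketch, with illustrative header values:

req_headers = {
    'Range': 'bytes=0-99',
    'X-Backend-Ignore-Range-If-Metadata-Present':
        'X-Object-Sysmeta-Symlink-Target, X-Static-Large-Object',
}
metadata = {'Content-Length': '12',
            'X-Object-Sysmeta-Symlink-Target': 'a/b'}

ignore = set(h.strip().lower() for h in req_headers.get(
    'X-Backend-Ignore-Range-If-Metadata-Present', '').split(','))
if ignore.intersection(h.lower() for h in metadata):
    req_headers.pop('Range', None)  # serve the whole object instead
print(req_headers.get('Range'))  # None: the Range was suppressed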
@ -1104,7 +1145,7 @@ class ObjectController(BaseStorageServer):
def HEAD(self, request):
"""Handle HTTP HEAD requests for the Swift Object Server."""
device, partition, account, container, obj, policy = \
get_name_and_placement(request, 5, 5, True)
get_obj_name_and_placement(request)
request.headers.setdefault('X-Timestamp',
normalize_timestamp(time.time()))
req_timestamp = valid_timestamp(request)
@ -1163,7 +1204,7 @@ class ObjectController(BaseStorageServer):
def DELETE(self, request):
"""Handle HTTP DELETE requests for the Swift Object Server."""
device, partition, account, container, obj, policy = \
get_name_and_placement(request, 5, 5, True)
get_obj_name_and_placement(request)
req_timestamp = valid_timestamp(request)
next_part_power = request.headers.get('X-Backend-Next-Part-Power')
try:
@ -1236,7 +1277,9 @@ class ObjectController(BaseStorageServer):
device, policy)
return response_class(
request=request,
headers={'X-Backend-Timestamp': response_timestamp.internal})
headers={'X-Backend-Timestamp': response_timestamp.internal,
'X-Backend-Content-Type': orig_metadata.get(
'Content-Type', '')})
@public
@replication
@ -1275,7 +1318,7 @@ class ObjectController(BaseStorageServer):
req = Request(env)
self.logger.txn_id = req.headers.get('x-trans-id', None)
if not check_utf8(wsgi_to_str(req.path_info)):
if not check_utf8(wsgi_to_str(req.path_info), internal=True):
res = HTTPPreconditionFailed(body='Invalid UTF8 or contains NULL')
else:
try:

View File

@ -478,8 +478,8 @@ class Receiver(object):
successes += 1
else:
self.app.logger.warning(
'ssync subrequest failed with %s: %s %s' %
(resp.status_int, method, subreq.path))
'ssync subrequest failed with %s: %s %s (%s)' %
(resp.status_int, method, subreq.path, resp.body))
failures += 1
if failures >= self.app.replication_failure_threshold and (
not successes or

View File

@ -358,6 +358,7 @@ class ObjectUpdater(Daemon):
headers_out.setdefault('X-Backend-Storage-Policy-Index',
str(int(policy)))
headers_out.setdefault('X-Backend-Accept-Redirect', 'true')
headers_out.setdefault('X-Backend-Accept-Quoted-Location', 'true')
container_path = update.get('container_path')
if container_path:
acct, cont = split_path('/' + container_path, minsegs=2)

View File

@ -44,7 +44,7 @@ import six
from swift.common.wsgi import make_pre_authed_env, make_pre_authed_request
from swift.common.utils import Timestamp, config_true_value, \
public, split_path, list_from_csv, GreenthreadSafeIterator, \
GreenAsyncPile, quorum_size, parse_content_type, close_if_possible, \
GreenAsyncPile, quorum_size, parse_content_type, drain_and_close, \
document_iters_to_http_response_body, ShardRange, find_shard_range
from swift.common.bufferedhttp import http_connect
from swift.common import constraints
@ -57,7 +57,8 @@ from swift.common.http import is_informational, is_success, is_redirection, \
HTTP_INSUFFICIENT_STORAGE, HTTP_UNAUTHORIZED, HTTP_CONTINUE, HTTP_GONE
from swift.common.swob import Request, Response, Range, \
HTTPException, HTTPRequestedRangeNotSatisfiable, HTTPServiceUnavailable, \
status_map, wsgi_to_str, str_to_wsgi, wsgi_quote
status_map, wsgi_to_str, str_to_wsgi, wsgi_quote, wsgi_unquote, \
normalize_etag
from swift.common.request_helpers import strip_sys_meta_prefix, \
strip_user_meta_prefix, is_user_meta, is_sys_meta, is_sys_or_user_meta, \
http_response_to_document_iters, is_object_transient_sysmeta, \
@ -179,6 +180,7 @@ def headers_to_container_info(headers, status_int=HTTP_OK):
'status': status_int,
'read_acl': headers.get('x-container-read'),
'write_acl': headers.get('x-container-write'),
'sync_to': headers.get('x-container-sync-to'),
'sync_key': headers.get('x-container-sync-key'),
'object_count': headers.get('x-container-object-count'),
'bytes': headers.get('x-container-bytes-used'),
@ -331,6 +333,12 @@ def get_container_info(env, app, swift_source=None):
"""
(version, wsgi_account, wsgi_container, unused) = \
split_path(env['PATH_INFO'], 3, 4, True)
if not constraints.valid_api_version(version):
# Not a valid Swift request; return 0 like we do
# if there's an account failure
return headers_to_container_info({}, 0)
account = wsgi_to_str(wsgi_account)
container = wsgi_to_str(wsgi_container)
@ -347,7 +355,8 @@ def get_container_info(env, app, swift_source=None):
# account is successful whether the account actually has .db files
# on disk or not.
is_autocreate_account = account.startswith(
getattr(app, 'auto_create_account_prefix', '.'))
getattr(app, 'auto_create_account_prefix',
constraints.AUTO_CREATE_ACCOUNT_PREFIX))
if not is_autocreate_account:
account_info = get_account_info(env, app, swift_source)
if not account_info or not is_success(account_info['status']):
@ -356,8 +365,11 @@ def get_container_info(env, app, swift_source=None):
req = _prepare_pre_auth_info_request(
env, ("/%s/%s/%s" % (version, wsgi_account, wsgi_container)),
(swift_source or 'GET_CONTAINER_INFO'))
# *Always* allow reserved names for get-info requests -- it's on the
# caller to keep the result private-ish
req.headers['X-Backend-Allow-Reserved-Names'] = 'true'
resp = req.get_response(app)
close_if_possible(resp.app_iter)
drain_and_close(resp)
# Check in infocache to see if the proxy (or anyone else) already
# populated the cache for us. If they did, just use what's there.
#
@ -385,6 +397,17 @@ def get_container_info(env, app, swift_source=None):
if info.get('sharding_state') is None:
info['sharding_state'] = 'unsharded'
versions_cont = info.get('sysmeta', {}).get('versions-container', '')
if versions_cont:
versions_cont = wsgi_unquote(str_to_wsgi(
versions_cont)).split('/')[0]
versions_req = _prepare_pre_auth_info_request(
env, ("/%s/%s/%s" % (version, wsgi_account, versions_cont)),
(swift_source or 'GET_CONTAINER_INFO'))
versions_req.headers['X-Backend-Allow-Reserved-Names'] = 'true'
versions_info = get_container_info(versions_req.environ, app)
info['bytes'] = info['bytes'] + versions_info['bytes']
return info
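
Note the new guard at the top of get_container_info() (and get_account_info() below): a path whose version segment is not a valid Swift API version now short-circuits to the same zero-status info an account failure would yield, instead of issuing a doomed backend request. Roughly, with a simplified stand-in for constraints.valid_api_version():

def valid_api_version(version):
    # simplified; the real check lives in swift.common.constraints
    return version in ('v1', 'v1.0')

def container_info_status(path_info):
    version = path_info.lstrip('/').split('/')[0]
    if not valid_api_version(version):
        return 0    # caller sees the same status 0 as an account failure
    return 200      # ...the real lookup would proceed from here

print(container_info_status('/v1/AUTH_test/c'))  # 200
print(container_info_status('/healthcheck'))     # 0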
@ -401,6 +424,10 @@ def get_account_info(env, app, swift_source=None):
:raises ValueError: when path doesn't contain an account
"""
(version, wsgi_account, _junk) = split_path(env['PATH_INFO'], 2, 3, True)
if not constraints.valid_api_version(version):
return headers_to_account_info({}, 0)
account = wsgi_to_str(wsgi_account)
# Check in environment cache and in memcache (in that order)
@ -412,8 +439,11 @@ def get_account_info(env, app, swift_source=None):
req = _prepare_pre_auth_info_request(
env, "/%s/%s" % (version, wsgi_account),
(swift_source or 'GET_ACCOUNT_INFO'))
# *Always* allow reserved names for get-info requests -- it's on the
# caller to keep the result private-ish
req.headers['X-Backend-Allow-Reserved-Names'] = 'true'
resp = req.get_response(app)
close_if_possible(resp.app_iter)
drain_and_close(resp)
# Check in infocache to see if the proxy (or anyone else) already
# populated the cache for us. If they did, just use what's there.
#
@ -739,6 +769,9 @@ def _get_object_info(app, env, account, container, obj, swift_source=None):
# Not in cache, let's try the object servers
path = '/v1/%s/%s/%s' % (account, container, obj)
req = _prepare_pre_auth_info_request(env, path, swift_source)
# *Always* allow reserved names for get-info requests -- it's on the
# caller to keep the result private-ish
req.headers['X-Backend-Allow-Reserved-Names'] = 'true'
resp = req.get_response(app)
# Unlike get_account_info() and get_container_info(), we don't save
# things in memcache, so we can store the info without network traffic,
@ -1193,7 +1226,8 @@ class ResumingGetter(object):
if end - begin + 1 == self.bytes_used_from_backend:
warn = False
if not req.environ.get('swift.non_client_disconnect') and warn:
self.app.logger.warning(_('Client disconnected on read'))
self.app.logger.warning('Client disconnected on read of %r',
self.path)
raise
except Exception:
self.app.logger.exception(_('Trying to send to client'))
@ -1259,9 +1293,9 @@ class ResumingGetter(object):
close_swift_conn(possible_source)
else:
if self.used_source_etag and \
self.used_source_etag != src_headers.get(
self.used_source_etag != normalize_etag(src_headers.get(
'x-object-sysmeta-ec-etag',
src_headers.get('etag', '')).strip('"'):
src_headers.get('etag', ''))):
self.statuses.append(HTTP_NOT_FOUND)
self.reasons.append('')
self.bodies.append('')
@ -1285,10 +1319,11 @@ class ResumingGetter(object):
return True
else:
if 'handoff_index' in node and \
possible_source.status == HTTP_NOT_FOUND and \
(is_server_error(possible_source.status) or
possible_source.status == HTTP_NOT_FOUND) and \
not Timestamp(src_headers.get('x-backend-timestamp', 0)):
# throw out 404s from handoff nodes unless the data is really
# on disk and had been DELETEd
# throw out 5XX and 404s from handoff nodes unless the data is
# really on disk and had been DELETEd
return False
self.statuses.append(possible_source.status)
self.reasons.append(possible_source.reason)
@ -1364,9 +1399,8 @@ class ResumingGetter(object):
# from the same object (EC). Otherwise, if the cluster has two
# versions of the same object, we might end up switching between
# old and new mid-stream and giving garbage to the client.
self.used_source_etag = src_headers.get(
'x-object-sysmeta-ec-etag',
src_headers.get('etag', '')).strip('"')
self.used_source_etag = normalize_etag(src_headers.get(
'x-object-sysmeta-ec-etag', src_headers.get('etag', '')))
self.node = node
return source, node
return None, None
@ -1913,7 +1947,7 @@ class Controller(object):
if headers:
update_headers(resp, headers[status_index])
if etag:
resp.headers['etag'] = etag.strip('"')
resp.headers['etag'] = normalize_etag(etag)
return resp
return None

View File

@ -159,13 +159,14 @@ class ContainerController(Controller):
return resp
objects = []
req_limit = int(req.params.get('limit', CONTAINER_LISTING_LIMIT))
req_limit = int(req.params.get('limit') or CONTAINER_LISTING_LIMIT)
params = req.params.copy()
params.pop('states', None)
req.headers.pop('X-Backend-Record-Type', None)
reverse = config_true_value(params.get('reverse'))
marker = wsgi_to_str(params.get('marker'))
end_marker = wsgi_to_str(params.get('end_marker'))
prefix = wsgi_to_str(params.get('prefix'))
limit = req_limit
for shard_range in shard_ranges:
@ -200,6 +201,18 @@ class ContainerController(Controller):
headers = {'X-Backend-Record-Type': 'object'}
else:
headers = None
if prefix:
if prefix > shard_range:
continue
try:
just_past = prefix[:-1] + chr(ord(prefix[-1]) + 1)
except ValueError:
pass
else:
if just_past < shard_range:
continue
self.app.logger.debug('Getting from %s %s with %s',
shard_range, shard_range.name, headers)
objs, shard_resp = self._get_container_listing(

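The prefix pruning added above hinges on the smallest string that sorts after every name with the given prefix: bump the prefix's last character by one code point. A shard range wholly below the prefix, or starting at or beyond that successor, cannot contain a matching name, so its backend request is skipped. A standalone sketch with plain (lower, upper) string bounds standing in for ShardRange's comparison semantics:

def may_contain_prefix(lower, upper, prefix):
    if prefix > upper:
        return False  # the whole range sorts before the prefix
    try:
        just_past = prefix[:-1] + chr(ord(prefix[-1]) + 1)
    except ValueError:
        return True   # prefix ends at the last code point; cannot bump
    return just_past > lower  # else the range starts past every match

ranges = [('a', 'm'), ('m', 't'), ('t', 'z')]
print([r for r in ranges if may_contain_prefix(*r, prefix='pe')])
# [('m', 't')] -- only the middle range can hold names like 'pear'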
Some files were not shown because too many files have changed in this diff.