Merge 'remotes/origin/master' into s3api

Change-Id: Iac91b282deaaa5268c3ad4c28b5736a8d75a6e2c
Kota Tsuyuzaki 2017-10-16 21:04:52 +09:00
commit f94d6567a7
115 changed files with 2521 additions and 1601 deletions

@ -244,7 +244,7 @@ swift (2.13.0, OpenStack Ocata)
* PUT subrequests generated from a client-side COPY will now properly log
the SSC (server-side copy) Swift source field. See
https://docs.openstack.org/developer/swift/logs.html#swift-source for
https://docs.openstack.org/swift/latest/logs.html#swift-source for
more information.
* Fixed a bug where an SLO download with a range request may have resulted
@ -391,13 +391,13 @@ swift (2.10.0, OpenStack Newton)
* Object versioning now supports a "history" mode in addition to
the older "stack" mode. The difference is in how DELETE requests
are handled. For full details, please read
http://docs.openstack.org/developer/swift/overview_object_versioning.html.
https://docs.openstack.org/swift/latest/overview_object_versioning.html.
* New config variables to change the schedule priority and I/O
scheduling class. Servers and daemons now understand
`nice_priority`, `ionice_class`, and `ionice_priority` to
schedule their relative importance. Please read
http://docs.openstack.org/developer/swift/deployment_guide.html
https://docs.openstack.org/swift/latest/admin_guide.html
for full config details.
* On newer kernels (3.15+ when using xfs), Swift will use the O_TMPFILE
@ -410,7 +410,7 @@ swift (2.10.0, OpenStack Newton)
improved in clusters that are not completely healthy.
* Significant improvements to the api-ref doc available at
http://developer.openstack.org/api-ref/object-storage/.
https://developer.openstack.org/api-ref/object-storage/.
* A PUT or POST to a container will now update the container's
Last-Modified time, and that value will be included in a
@ -464,7 +464,7 @@ swift (2.9.0)
For more information on the details of the at-rest encryption
feature, please see the docs at
http://docs.openstack.org/developer/swift/overview_encryption.html.
https://docs.openstack.org/swift/latest/overview_encryption.html.
* `swift-recon` can now be called with more than one server type.
@ -606,7 +606,7 @@ swift (2.7.0, OpenStack Mitaka)
default it will stagger the firing.
* Added an operational procedures guide to the docs. It can be
found at http://docs.openstack.org/developer/swift/ops_runbook/index.html and
found at https://docs.openstack.org/swift/latest/ops_runbook/index.html and
includes information on detecting and handling day-to-day
operational issues in a Swift cluster.
@ -776,7 +776,7 @@ swift (2.6.0)
* Container sync has been improved to more quickly find and iterate over
the containers to be synced. This reduced server load and lowers the
time required to see data propagate between two clusters. Please see
http://docs.openstack.org/developer/swift/overview_container_sync.html for more details
https://docs.openstack.org/swift/latest/overview_container_sync.html for more details
about the new on-disk structure for tracking synchronized containers.
* A container POST will now update that container's put-timestamp value.
@ -862,7 +862,7 @@ swift (2.4.0)
server config setting ("allow_versions"), if it is currently enabled.
The existing container server config setting enables existing
containers to continue being versioned. Please see
http://docs.openstack.org/developer/swift/middleware.html#how-to-enable-object-versioning-in-a-swift-cluster
https://docs.openstack.org/swift/latest/middleware.html#how-to-enable-object-versioning-in-a-swift-cluster
for further upgrade notes.
* Allow 1+ object-servers-per-disk deployment
@ -987,7 +987,7 @@ swift (2.3.0, OpenStack Kilo)
ssync for durability. Deployers are urged to do extensive testing and
not deploy production data using an erasure code storage policy.
Full docs are at http://docs.openstack.org/developer/swift/overview_erasure_code.html
Full docs are at https://docs.openstack.org/swift/latest/overview_erasure_code.html
* Add support for container TempURL Keys.
@ -996,7 +996,7 @@ swift (2.3.0, OpenStack Kilo)
* Swift now supports composite tokens. This allows another service to
act on behalf of a user, but only with that user's consent.
See http://docs.openstack.org/developer/swift/overview_auth.html for more details.
See https://docs.openstack.org/swift/latest/overview_auth.html for more details.
* Multi-region replication was improved. When replicating data to a
different region, only one replica will be pushed per replication
@ -1004,7 +1004,7 @@ swift (2.3.0, OpenStack Kilo)
locally instead of pushing more data over the inter-region network.
* Internal requests from the ratelimit middleware now properly log a
swift_source. See http://docs.openstack.org/developer/swift/logs.html for details.
swift_source. See https://docs.openstack.org/swift/latest/logs.html for details.
* Improved storage policy support for quarantine stats in swift-recon.
@ -1052,7 +1052,7 @@ swift (2.2.2)
The overload and dispersion metrics have been exposed in the
swift-ring-build CLI tools.
See http://docs.openstack.org/developer/swift/overview_ring.html
See https://docs.openstack.org/swift/latest/overview_ring.html
for more info on how data placement works now.
* Improve replication of large out-of-sync, out-of-date containers.
@ -1140,7 +1140,7 @@ swift (2.2.0, OpenStack Juno)
now requires that ACLs be set on IDs, which are unique across
domains, and further restricts setting new ACLs to only use IDs.
Please see http://docs.openstack.org/developer/swift/overview_auth.html for
Please see https://docs.openstack.org/swift/latest/overview_auth.html for
more information on configuring Swift and Keystone together.
* Swift now supports server-side account-to-account copy. Server-
@ -1257,7 +1257,7 @@ swift (2.0.0)
them. A policy is set on a Swift container at container creation
time and cannot be changed.
Full docs are at http://docs.openstack.org/developer/swift/overview_policies.html
Full docs are at https://docs.openstack.org/swift/latest/overview_policies.html
* Add profiling middleware in Swift
@ -1351,7 +1351,7 @@ swift (1.13.0)
the header is a JSON dictionary string to be interpreted by the
auth system. A reference implementation is given in TempAuth.
Please see the full docs at
http://docs.openstack.org/developer/swift/overview_auth.html
https://docs.openstack.org/swift/latest/overview_auth.html
* Added a WSGI environment flag to stop swob from always using
absolute location. This is useful if middleware needs to use
@ -1433,8 +1433,8 @@ swift (1.12.0)
* New container sync configuration option, separating the end user
from knowing the required end point and adding more secure
signed requests. See
http://docs.openstack.org/developer/swift/overview_container_sync.html for full
information.
https://docs.openstack.org/swift/latest/overview_container_sync.html
for full information.
* bulk middleware now can be configured to retry deleting containers.
@ -1699,7 +1699,7 @@ swift (1.9.0)
bug related to content-disposition names.
* Added crossdomain.xml middleware. See
http://docs.openstack.org/developer/swift/crossdomain.html for details
https://docs.openstack.org/swift/latest/crossdomain.html for details
* Added rsync bandwidth limit setting for object replicator
@ -1720,7 +1720,7 @@ swift (1.9.0)
* Improved container-sync resiliency
* Added example Apache config files. See
http://docs.openstack.org/developer/swift/apache_deployment_guide.html
https://docs.openstack.org/swift/latest/apache_deployment_guide.html
for more info
* If an account is marked as deleted but hasn't been reaped and is still
@ -1768,7 +1768,7 @@ swift (1.8.0, OpenStack Grizzly)
This is a change that may require an update to your proxy server
config file or custom middleware that you may be using. See the full
docs at http://docs.openstack.org/developer/swift/misc.html#module-swift.common.middleware.proxy_logging.
docs at https://docs.openstack.org/swift/latest/misc.html.
* Changed the default sample rate for a few high-traffic requests.

@ -75,7 +75,7 @@ working on.
Getting Started
---------------
http://docs.openstack.org/developer/swift/first_contribution_swift.html
https://docs.openstack.org/swift/latest/first_contribution_swift.html
Once those steps have been completed, changes to OpenStack
should be submitted for review via the Gerrit tool, following
@ -116,7 +116,7 @@ Recommended workflow
====================
- Set up a `Swift All-In-One
VM <http://docs.openstack.org/developer/swift/development_saio.html>`__\ (SAIO).
VM <https://docs.openstack.org/swift/latest/development_saio.html>`__\ (SAIO).
- Make your changes. Docs and tests for your patch must land before or
with your patch.

@ -31,7 +31,7 @@ To build documentation install sphinx (``pip install sphinx``), run
``python setup.py build_sphinx``, and then browse to
/doc/build/html/index.html. These docs are auto-generated after every
commit and available online at
http://docs.openstack.org/developer/swift/.
https://docs.openstack.org/swift/latest/.
For Developers
--------------
@ -39,13 +39,14 @@ For Developers
Getting Started
~~~~~~~~~~~~~~~
Swift is part of OpenStack and follows the code contribution, review, and testing processes common to all OpenStack projects.
Swift is part of OpenStack and follows the code contribution, review, and
testing processes common to all OpenStack projects.
If you would like to start contributing, check out these
`notes <CONTRIBUTING.rst>`__ to help you get started.
The best place to get started is the
`"SAIO - Swift All In One" <http://docs.openstack.org/developer/swift/development_saio.html>`__.
`"SAIO - Swift All In One" <https://docs.openstack.org/swift/latest/development_saio.html>`__.
This document will walk you through setting up a development cluster of
Swift in a VM. The SAIO environment is ideal for running small-scale
tests against swift and trying out new features and bug fixes.
@ -72,7 +73,7 @@ continue to work.
Probe tests are "white box" tests that validate the internal workings of a
Swift cluster. They are written to work against the
`"SAIO - Swift All In One" <http://docs.openstack.org/developer/swift/development_saio.html>`__
`"SAIO - Swift All In One" <https://docs.openstack.org/swift/latest/development_saio.html>`__
dev environment. For example, a probe test may create an object, delete one
replica, and ensure that the background consistency processes find and correct
the error.
@ -119,10 +120,9 @@ For Deployers
-------------
Deployer docs are also available at
http://docs.openstack.org/developer/swift/. A good starting point is at
http://docs.openstack.org/developer/swift/deployment_guide.html
There is an `ops runbook <http://docs.openstack.org/developer/swift/ops_runbook/>`__
https://docs.openstack.org/swift/latest/. A good starting point is at
https://docs.openstack.org/swift/latest/deployment_guide.html
There is an `ops runbook <https://docs.openstack.org/swift/latest/ops_runbook/index.html>`__
that gives information about how to diagnose and troubleshoot common issues
when running a Swift cluster.
@ -138,11 +138,11 @@ For client applications, official Python language bindings are provided
at http://github.com/openstack/python-swiftclient.
Complete API documentation at
http://developer.openstack.org/api-ref/object-store/
https://developer.openstack.org/api-ref/object-store/
There is a large ecosystem of applications and libraries that support and
work with OpenStack Swift. Several are listed on the
`associated projects <http://docs.openstack.org/developer/swift/associated_projects.html>`__
`associated projects <https://docs.openstack.org/swift/latest/associated_projects.html>`__
page.
--------------

@ -186,8 +186,9 @@ Example requests and responses:
X-Openstack-Request-Id: tx06021f10fc8642b2901e7-0052d58f37
Date: Tue, 14 Jan 2014 19:25:43 GMT
Error response codes:201,204,
Normal response codes: 201, 202
Error response codes: 400, 404, 507
Request
-------

@ -44,9 +44,9 @@ You can also feed a list of urls to the script through stdin.
Examples!
%(cmd)s SOSO_88ad0b83-b2c5-4fa1-b2d6-60c597202076
%(cmd)s SOSO_88ad0b83-b2c5-4fa1-b2d6-60c597202076/container/object
%(cmd)s -e errors.txt SOSO_88ad0b83-b2c5-4fa1-b2d6-60c597202076/container
%(cmd)s AUTH_88ad0b83-b2c5-4fa1-b2d6-60c597202076
%(cmd)s AUTH_88ad0b83-b2c5-4fa1-b2d6-60c597202076/container/object
%(cmd)s -e errors.txt AUTH_88ad0b83-b2c5-4fa1-b2d6-60c597202076/container
%(cmd)s < errors.txt
%(cmd)s -c 25 -d < errors.txt
""" % {'cmd': sys.argv[0]}
@ -108,7 +108,7 @@ class Auditor(object):
consistent = False
print(' MD5 does not match etag for "%s" on %s/%s'
% (path, node['ip'], node['device']))
etags.append(resp.getheader('ETag'))
etags.append((resp.getheader('ETag'), node))
else:
conn = http_connect(node['ip'], node['port'],
node['device'], part, 'HEAD',
@ -120,7 +120,7 @@ class Auditor(object):
print(' Bad status HEADing object "%s" on %s/%s'
% (path, node['ip'], node['device']))
continue
etags.append(resp.getheader('ETag'))
etags.append((resp.getheader('ETag'), node))
except Exception:
self.object_exceptions += 1
consistent = False
@ -131,8 +131,8 @@ class Auditor(object):
consistent = False
print(" Failed to fetch object %s at all!" % path)
elif hash:
for etag in etags:
if resp.getheader('ETag').strip('"') != hash:
for etag, node in etags:
if etag.strip('"') != hash:
consistent = False
self.object_checksum_mismatch += 1
print(' ETag mismatch for "%s" on %s/%s'
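The change above stores (etag, node) pairs so the final loop compares each collected ETag against the expected hash, instead of re-reading the last response's header, and can attribute a mismatch to a specific server. A minimal standalone sketch of the corrected comparison (the helper name is hypothetical, not Swift's actual code):

```python
# Sketch of the corrected ETag check: each response's ETag is stored
# alongside the node that returned it, so a mismatch can be attributed
# to a specific server rather than only the last one contacted.

def find_etag_mismatches(etags, expected_hash):
    """Return nodes whose stored ETag (quotes stripped) != expected_hash.

    etags: list of (etag, node) pairs as collected by the audit loop.
    """
    mismatched = []
    for etag, node in etags:
        if etag.strip('"') != expected_hash:
            mismatched.append(node)
    return mismatched

# Two replicas agree with the expected hash; the third is stale.
collected = [
    ('"d41d8cd98f00b204e9800998ecf8427e"', {'ip': '10.0.0.1', 'device': 'sda'}),
    ('"d41d8cd98f00b204e9800998ecf8427e"', {'ip': '10.0.0.2', 'device': 'sdb'}),
    ('"0cc175b9c0f1b6a831c399e269772661"', {'ip': '10.0.0.3', 'device': 'sdc'}),
]
bad_nodes = find_etag_mismatches(collected, 'd41d8cd98f00b204e9800998ecf8427e')
```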

@ -386,7 +386,11 @@ Connection timeout to external services. The default is 0.5 seconds.
.IP \fBdelay_reaping\fR
Normally, the reaper begins deleting account information for deleted accounts
immediately; you can set this to delay its work however. The value is in
seconds. The default is 0.
seconds. The default is 0. The sum of this value and the
container-updater interval should be less than the account-replicator
reclaim_age. This ensures that once the account-reaper has deleted a
container there is sufficient time for the container-updater to report to the
account before the account DB is removed.
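The timing relationship described above is a simple inequality; a hedged sketch for checking it (the helper and the quoted defaults are illustrative only, so consult your own config for actual values):

```python
def reaping_config_is_safe(delay_reaping, updater_interval, reclaim_age):
    """True when the container-updater has time to report a reaped
    container to the account before the account DB is reclaimed.
    All values are in seconds."""
    return delay_reaping + updater_interval < reclaim_age

# With the usual defaults (delay_reaping=0, container-updater
# interval=300, reclaim_age=604800 i.e. one week) the constraint holds.
assert reaping_config_is_safe(0, 300, 604800)

# Delaying reaping by 30 days against a one-week reclaim_age would not.
assert not reaping_config_is_safe(30 * 86400, 300, 604800)
```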
.IP \fBreap_warn_after\fR
If the account fails to be reaped due to a persistent error, the
account reaper will log a message such as:

@ -0,0 +1,138 @@
.\"
.\" Author: HCLTech-SSW <hcl_ss_oss@hcl.com>
.\" Copyright (c) 2010-2017 OpenStack Foundation.
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
.\" You may obtain a copy of the License at
.\"
.\" http://www.apache.org/licenses/LICENSE-2.0
.\"
.\" Unless required by applicable law or agreed to in writing, software
.\" distributed under the License is distributed on an "AS IS" BASIS,
.\" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"
.TH container-sync-realms.conf 5 "10/09/2017" "Linux" "OpenStack Swift"
.SH NAME
.LP
.B container-sync-realms.conf
\- configuration file for the OpenStack Swift container sync realms
.SH SYNOPSIS
.LP
.B container-sync-realms.conf
.SH DESCRIPTION
.PP
This is the configuration file used by OpenStack Swift to perform container-to-container
synchronization. It configures clusters to allow and accept sync
requests to and from other clusters. In it, the user specifies where
to sync their containers along with a secret synchronization key.
You can find more information about container to container synchronization at
\fIhttps://docs.openstack.org/swift/latest/overview_container_sync.html\fR
The configuration file follows the python-pastedeploy syntax. The file is divided
into sections, which are enclosed by square brackets. Each section will contain a
certain number of key/value parameters which are described later.
Any line that begins with a '#' symbol is ignored.
You can find more information about python-pastedeploy configuration format at
\fIhttp://pythonpaste.org/deploy/#config-format\fR
.SH GLOBAL SECTION
.PD 1
.RS 0
This is indicated by section named [DEFAULT]. Below are the parameters that
are acceptable within this section.
.IP "\fBmtime_check_interval\fR"
The number of seconds between checking the modified time of this config file for changes
and therefore reloading it. The default value is 300.
.RE
.PD
.SH REALM SECTIONS
.PD 1
.RS 0
Each section name is the name of a sync realm, for example [realm1].
A sync realm is a set of clusters that have agreed to allow container syncing with each other.
Realm names will be considered case insensitive. Below are the parameters that are acceptable
within this section.
.IP "\fBcluster_clustername1\fR"
Any value in the realm section whose name begins with cluster_ indicates the name and
endpoint of a cluster; external users use these in their containers'
X-Container-Sync-To metadata header values, in the format "realm_name/cluster_name/container_name".
The Realm and cluster names are considered to be case insensitive.
.IP "\fBcluster_clustername2\fR"
Any value in the realm section whose name begins with cluster_ indicates the name and
endpoint of a cluster; external users use these in their containers'
X-Container-Sync-To metadata header values, in the format "realm_name/cluster_name/container_name".
The Realm and cluster names are considered to be case insensitive.
The endpoint is what the container sync daemon will use when sending out
requests to that cluster. Keep in mind this endpoint must be reachable by all
container servers, since that is where the container sync daemon runs. Note
that the endpoint ends with /v1/ and that the container sync daemon will then
add the account/container/obj name after that.
.IP "\fBkey\fR"
The key is the overall cluster-to-cluster key used in combination with the external
users' key that they set on their containers' X-Container-Sync-Key metadata header
values. These keys will be used to sign each request the container sync daemon makes
and used to validate each incoming container sync request.
.IP "\fBkey2\fR"
The key2 is optional and is an additional key incoming requests will be checked
against. This is so you can rotate keys if you wish; you move the existing
key to key2 and make a new key value.
.RE
.PD
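The key2 rotation described above amounts to accepting a request whose signature validates against either key. The sketch below uses a generic HMAC-SHA1 over an illustrative message; the exact fields Swift signs are internal to the container-sync daemon, so treat this as a model of the rotation logic only:

```python
import hashlib
import hmac

def signature_is_valid(message, signature, key, key2=None):
    """Accept the request if its HMAC matches the current realm key or,
    during rotation, the previous one now stored as key2.
    Illustrative only; not Swift's actual wire format."""
    for k in (key, key2):
        if k is None:
            continue
        expected = hmac.new(k.encode(), message.encode(),
                            hashlib.sha1).hexdigest()
        if hmac.compare_digest(expected, signature):
            return True
    return False

# A cluster still signing with the old realm key keeps working after the
# operator moves that key to key2 and issues a fresh primary key.
msg = 'GET /v1/AUTH_test/synced-container'  # hypothetical message
old_sig = hmac.new(b'realm1key', msg.encode(), hashlib.sha1).hexdigest()
still_ok = signature_is_valid(msg, old_sig, key='newkey', key2='realm1key')
```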
.SH EXAMPLE
.nf
.RS 0
[DEFAULT]
mtime_check_interval = 300
[realm1]
key = realm1key
key2 = realm1key2
cluster_clustername1 = https://host1/v1/
cluster_clustername2 = https://host2/v1/
[realm2]
key = realm2key
key2 = realm2key2
cluster_clustername3 = https://host3/v1/
cluster_clustername4 = https://host4/v1/
.RE
.fi
.SH DOCUMENTATION
.LP
More in-depth documentation regarding
.BI swift-container-sync
and also about OpenStack Swift as a whole can be found at
.BI https://docs.openstack.org/swift/latest/overview_container_sync.html
and
.BI https://docs.openstack.org/swift/latest/
.SH "SEE ALSO"
.BR swift-container-sync(1)

@ -996,8 +996,6 @@ Error count to consider a node error limited. The default is 10.
Whether account PUTs and DELETEs are even callable. If set to 'true' any authorized
user may create and delete accounts; if 'false' no one, even authorized, can. The default
is false.
.IP \fBobject_post_as_copy\fR
Deprecated. The default is False.
.IP \fBaccount_autocreate\fR
If set to 'true' authorized accounts that do not yet exist within the Swift cluster
will be automatically created. The default is set to false.
@ -1025,6 +1023,15 @@ The valid values for sorting_method are "affinity", "shuffle", and "timing".
.IP \fBtiming_expiry\fR
If the "timing" sorting_method is used, the timings will only be valid for
the number of seconds configured by timing_expiry. The default is 300.
.IP \fBconcurrent_gets\fR
If "on" then use replica count number of threads concurrently during a GET/HEAD
and return with the first successful response. In the EC case, this parameter
only affects an EC HEAD as an EC GET behaves differently. Default is "off".
.IP \fBconcurrency_timeout\fR
This parameter controls how long to wait before firing off the next
concurrent_get thread. A value of 0 would be fully concurrent; any other number
will stagger the firing of the threads. This number should be between 0 and
node_timeout. The default is the value of conn_timeout (0.5).
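A simplified model of the concurrent_gets/concurrency_timeout behavior described above: fire one request per replica, staggered by the timeout, and return the first success. This is an illustration of the scheduling idea, not the proxy server's actual implementation:

```python
import concurrent.futures

def first_successful(fetchers, concurrency_timeout=0.5):
    """Launch each replica fetcher, waiting up to `concurrency_timeout`
    seconds for a success before firing the next; 0 is fully concurrent."""
    with concurrent.futures.ThreadPoolExecutor(
            max_workers=len(fetchers)) as pool:
        futures = []
        for fetch in fetchers:
            futures.append(pool.submit(fetch))
            # Give the in-flight requests a window before the next launch.
            done, _ = concurrent.futures.wait(
                futures, timeout=concurrency_timeout,
                return_when=concurrent.futures.FIRST_COMPLETED)
            for f in done:
                if f.exception() is None:
                    return f.result()  # first successful response wins
        # Everything launched; take the first success among stragglers.
        for f in concurrent.futures.as_completed(futures):
            if f.exception() is None:
                return f.result()
    raise IOError('all replicas failed')

def bad_replica():
    raise IOError('replica down')

def good_replica():
    return 'object data'

# The failing first replica does not block the overall GET.
result = first_successful([bad_replica, good_replica],
                          concurrency_timeout=0.01)
```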
.IP \fBrequest_node_count\fR
Set to the number of nodes to contact for a normal request. You can use '* replicas'
at the end to have it use the number given times the number of

@ -46,9 +46,9 @@ Also download files and verify md5
.SH EXAMPLES
.nf
/usr/bin/swift\-account\-audit\/ SOSO_88ad0b83\-b2c5\-4fa1\-b2d6\-60c597202076
/usr/bin/swift\-account\-audit\/ SOSO_88ad0b83\-b2c5\-4fa1\-b2d6\-60c597202076/container/object
/usr/bin/swift\-account\-audit\/ \fB\-e\fR errors.txt SOSO_88ad0b83\-b2c5\-4fa1\-b2d6\-60c597202076/container
/usr/bin/swift\-account\-audit\/ AUTH_88ad0b83\-b2c5\-4fa1\-b2d6\-60c597202076
/usr/bin/swift\-account\-audit\/ AUTH_88ad0b83\-b2c5\-4fa1\-b2d6\-60c597202076/container/object
/usr/bin/swift\-account\-audit\/ \fB\-e\fR errors.txt AUTH_88ad0b83\-b2c5\-4fa1\-b2d6\-60c597202076/container
/usr/bin/swift\-account\-audit\/ < errors.txt
/usr/bin/swift\-account\-audit\/ \fB\-c\fR 25 \fB\-d\fR < errors.txt
.fi

@ -9,7 +9,7 @@ eventlet_debug = true
[pipeline:main]
# Yes, proxy-logging appears twice. This is so that
# middleware-originated requests get logged too.
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache bulk tempurl ratelimit crossdomain container_sync tempauth staticweb copy container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache listing_formats bulk tempurl ratelimit crossdomain container_sync tempauth staticweb copy container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server
[filter:catch_errors]
use = egg:swift#catch_errors
@ -71,6 +71,9 @@ allow_versioned_writes = true
[filter:copy]
use = egg:swift#copy
[filter:listing_formats]
use = egg:swift#listing_formats
[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true

@ -17,13 +17,3 @@ erasure coding capability. It is entirely possible to share devices between
storage policies, but for erasure coding it may make more sense to use
not only separate devices but possibly even entire nodes dedicated for erasure
coding.
.. important::
The erasure code support in Object Storage is considered beta in Kilo.
Most major functionality is included, but it has not been tested or
validated at large scale. This feature relies on ``ssync`` for durability.
We recommend deployers do extensive testing and not deploy production
data using an erasure code storage policy.
If any bugs are found during testing, please report them to
https://bugs.launchpad.net/swift

@ -1493,6 +1493,6 @@ See :ref:`custom-logger-hooks-label` for sample use cases.
Securing OpenStack Swift
------------------------
Please refer to the security guide at http://docs.openstack.org/security-guide
Please refer to the security guide at https://docs.openstack.org/security-guide
and in particular the `Object Storage
<http://docs.openstack.org/security-guide/object-storage.html>`__ section.
<https://docs.openstack.org/security-guide/object-storage.html>`__ section.

@ -169,14 +169,14 @@ The API Reference describes the operations that you can perform with the
Object Storage API:
- `Storage
accounts <http://developer.openstack.org/api-ref/object-storage/index.html#accounts>`__:
accounts <https://developer.openstack.org/api-ref/object-storage/index.html#accounts>`__:
Use to perform account-level tasks.
Lists containers for a specified account. Creates, updates, and
deletes account metadata. Shows account metadata.
- `Storage
containers <http://developer.openstack.org/api-ref/object-storage/index.html#containers>`__:
containers <https://developer.openstack.org/api-ref/object-storage/index.html#containers>`__:
Use to perform container-level tasks.
Lists objects in a specified container. Creates, shows details for,
@ -184,7 +184,7 @@ Object Storage API:
container metadata.
- `Storage
objects <http://developer.openstack.org/api-ref/object-storage/index.html#objects>`__:
objects <https://developer.openstack.org/api-ref/object-storage/index.html#objects>`__:
Use to perform object-level tasks.
Creates, replaces, shows details for, and deletes objects. Copies

@ -222,4 +222,4 @@ Note that if the above example is copied exactly, and used in a command
shell, then the ampersand is interpreted as an operator and the URL
will be truncated. Enclose the URL in quotation marks to avoid this.
.. _tempurl: http://docs.openstack.org/developer/python-swiftclient/cli.html#tempurl
.. _tempurl: https://docs.openstack.org/python-swiftclient/latest/cli/index.html#swift-tempurl

@ -1650,7 +1650,15 @@ delay_reaping 0 Normally, the reaper begins deleting
account information for deleted accounts
immediately; you can set this to delay
its work however. The value is in seconds,
2592000 = 30 days, for example.
2592000 = 30 days, for example. The sum of
this value and the container-updater
``interval`` should be less than the
account-replicator ``reclaim_age``. This
ensures that once the account-reaper has
deleted a container there is sufficient
time for the container-updater to report
to the account before the account DB is
removed.
reap_warn_after 2892000 If the account fails to be reaped due
to a persistent error, the account reaper
will log a message such as:
@ -1884,7 +1892,6 @@ error_suppression_limit 10 Error count to consider
node error limited
allow_account_management false Whether account PUTs and DELETEs
are even callable
object_post_as_copy false Deprecated.
account_autocreate false If set to 'true' authorized
accounts that do not yet exist
within the Swift cluster will
@ -1944,12 +1951,12 @@ concurrent_gets off Use replica count numbe
GET/HEAD and return with the
first successful response. In
the EC case, this parameter only
effects an EC HEAD as an EC GET
affects an EC HEAD as an EC GET
behaves differently.
concurrency_timeout conn_timeout This parameter controls how long
to wait before firing off the
next concurrent_get thread. A
value of 0 would be fully concurrent
value of 0 would be fully concurrent,
any other number will stagger the
firing of the threads. This number
should be between 0 and node_timeout.

@ -127,9 +127,6 @@ set using environment variables:
environment variable ``SWIFT_TEST_IN_PROCESS_CONF_LOADER`` to
``ec``.
- the deprecated proxy-server ``object_post_as_copy`` option may be set using
the environment variable ``SWIFT_TEST_IN_PROCESS_OBJECT_POST_AS_COPY``.
- logging to stdout may be enabled by setting ``SWIFT_TEST_DEBUG_LOGS``.
For example, this command would run the in-process mode functional tests with
@ -147,7 +144,6 @@ The ``tox.ini`` file also specifies test environments for running other
in-process functional test configurations, e.g.::
tox -e func-ec
tox -e func-post-as-copy
To debug the functional tests, use the 'in-process test' mode and pass the
``--pdb`` flag to ``tox``::

@ -28,7 +28,7 @@ following actions:
.. note::
For more information on other modules that enable additional features,
see the `Deployment Guide <http://docs.openstack.org/developer/swift/deployment_guide.html>`__.
see the `Deployment Guide <https://docs.openstack.org/swift/latest/deployment_guide.html>`__.
* In the ``[app:proxy-server]`` section, enable automatic account creation:

@ -10,7 +10,7 @@ the proxy service on the controller node. However, you can run the proxy
service on any node with network connectivity to the storage nodes.
Additionally, you can install and configure the proxy service on multiple
nodes to increase performance and redundancy. For more information, see the
`Deployment Guide <http://docs.openstack.org/developer/swift/deployment_guide.html>`__.
`Deployment Guide <https://docs.openstack.org/swift/latest/deployment_guide.html>`__.
This section applies to Debian.

@ -10,7 +10,7 @@ the proxy service on the controller node. However, you can run the proxy
service on any node with network connectivity to the storage nodes.
Additionally, you can install and configure the proxy service on multiple
nodes to increase performance and redundancy. For more information, see the
`Deployment Guide <http://docs.openstack.org/developer/swift/deployment_guide.html>`__.
`Deployment Guide <https://docs.openstack.org/swift/latest/deployment_guide.html>`__.
This section applies to openSUSE Leap 42.2 and SUSE Linux Enterprise Server
12 SP2.

@ -10,7 +10,7 @@ the proxy service on the controller node. However, you can run the proxy
service on any node with network connectivity to the storage nodes.
Additionally, you can install and configure the proxy service on multiple
nodes to increase performance and redundancy. For more information, see the
`Deployment Guide <http://docs.openstack.org/developer/swift/deployment_guide.html>`__.
`Deployment Guide <https://docs.openstack.org/swift/latest/deployment_guide.html>`__.
This section applies to Red Hat Enterprise Linux 7 and CentOS 7.

@ -10,7 +10,7 @@ the proxy service on the controller node. However, you can run the proxy
service on any node with network connectivity to the storage nodes.
Additionally, you can install and configure the proxy service on multiple
nodes to increase performance and redundancy. For more information, see the
`Deployment Guide <http://docs.openstack.org/developer/swift/deployment_guide.html>`__.
`Deployment Guide <https://docs.openstack.org/swift/latest/deployment_guide.html>`__.
This section applies to Ubuntu 14.04 (LTS).

@ -40,7 +40,7 @@ swift client
swift-init
Script that initializes the building of the ring file, takes daemon
names as parameter and offers commands. Documented in
http://docs.openstack.org/developer/swift/admin_guide.html#managing-services.
https://docs.openstack.org/swift/latest/admin_guide.html#managing-services.
swift-recon
A cli tool used to retrieve various metrics and telemetry information
@ -48,4 +48,4 @@ swift-recon
swift-ring-builder
Storage ring build and rebalance utility. Documented in
http://docs.openstack.org/developer/swift/admin_guide.html#managing-the-rings.
https://docs.openstack.org/swift/latest/admin_guide.html#managing-the-rings.

@ -9,7 +9,7 @@ maximum partitions, 3 replicas of each object, and 1 hour minimum time between
moving a partition more than once. For Object Storage, a partition indicates a
directory on a storage device rather than a conventional partition table.
For more information, see the
`Deployment Guide <http://docs.openstack.org/developer/swift/deployment_guide.html>`__.
`Deployment Guide <https://docs.openstack.org/swift/latest/deployment_guide.html>`__.
.. note::
Perform these steps on the controller node.

@ -28,7 +28,7 @@ following actions:
.. note::
For more information on other modules that enable additional features,
see the `Deployment Guide <http://docs.openstack.org/developer/swift/deployment_guide.html>`__.
see the `Deployment Guide <https://docs.openstack.org/swift/latest/deployment_guide.html>`__.
* In the ``[filter:recon]`` section, configure the recon (meters) cache
directory:

@ -28,7 +28,7 @@ following actions:
.. note::
For more information on other modules that enable additional features,
see the `Deployment Guide <http://docs.openstack.org/developer/swift/deployment_guide.html>`__.
see the `Deployment Guide <https://docs.openstack.org/swift/latest/deployment_guide.html>`__.
* In the ``[filter:recon]`` section, configure the recon (meters) cache
directory:

@ -28,7 +28,7 @@ following actions:
.. note::
For more information on other modules that enable additional features,
see the `Deployment Guide <http://docs.openstack.org/developer/swift/deployment_guide.html>`__.
see the `Deployment Guide <https://docs.openstack.org/swift/latest/deployment_guide.html>`__.
* In the ``[filter:recon]`` section, configure the recon (meters) cache
and lock directories:


@ -14,7 +14,7 @@ Although Object Storage supports any file system with
extended attributes (xattr), testing and benchmarking
indicate the best performance and reliability on XFS. For
more information on horizontally scaling your environment, see the
`Deployment Guide <http://docs.openstack.org/developer/swift/deployment_guide.html>`_.
`Deployment Guide <https://docs.openstack.org/swift/latest/deployment_guide.html>`_.
This section applies to openSUSE Leap 42.2 and SUSE Linux Enterprise Server
12 SP2.


@ -14,7 +14,7 @@ Although Object Storage supports any file system with
extended attributes (xattr), testing and benchmarking
indicate the best performance and reliability on XFS. For
more information on horizontally scaling your environment, see the
`Deployment Guide <http://docs.openstack.org/developer/swift/deployment_guide.html>`_.
`Deployment Guide <https://docs.openstack.org/swift/latest/deployment_guide.html>`_.
This section applies to Red Hat Enterprise Linux 7 and CentOS 7.


@ -14,7 +14,7 @@ Although Object Storage supports any file system with
extended attributes (xattr), testing and benchmarking
indicate the best performance and reliability on XFS. For
more information on horizontally scaling your environment, see the
`Deployment Guide <http://docs.openstack.org/developer/swift/deployment_guide.html>`_.
`Deployment Guide <https://docs.openstack.org/swift/latest/deployment_guide.html>`_.
This section applies to Ubuntu 14.04 (LTS) and Debian.


@ -104,8 +104,8 @@ can be found in the KeystoneMiddleware_ distribution.
The :ref:`keystoneauth` middleware performs authorization and mapping the
Keystone roles to Swift's ACLs.
.. _KeystoneMiddleware: http://docs.openstack.org/developer/keystonemiddleware/
.. _Keystone: http://docs.openstack.org/developer/keystone/
.. _KeystoneMiddleware: https://docs.openstack.org/keystonemiddleware/latest/
.. _Keystone: https://docs.openstack.org/keystone/latest/
.. _configuring_keystone_auth:
@ -167,7 +167,7 @@ your situation, but in short:
service. The example values shown here assume a user named 'swift' with admin
role on a project named 'service', both being in the Keystone domain with id
'default'. Refer to the `KeystoneMiddleware documentation
<http://docs.openstack.org/developer/keystonemiddleware/middlewarearchitecture.html#configuration>`_
<https://docs.openstack.org/keystonemiddleware/latest/middlewarearchitecture.html#configuration>`_
for other examples.
* ``cache`` is set to ``swift.cache``. This means that the middleware


@ -238,7 +238,7 @@ Keys currently stored in Barbican can be listed using the
The keymaster uses the explicitly configured username and password (and
project name etc.) from the `keymaster.conf` file for retrieving the encryption
root secret from an external key management system. The `Castellan library
<http://docs.openstack.org/developer/castellan/>`_ is used to communicate with
<https://docs.openstack.org/castellan/latest/>`_ is used to communicate with
Barbican.
For the proxy server, reading the encryption root secret directly from the


@ -203,7 +203,11 @@ use = egg:swift#recon
#
# Normally, the reaper begins deleting account information for deleted accounts
# immediately; you can set this to delay its work however. The value is in
# seconds; 2592000 = 30 days for example.
# seconds; 2592000 = 30 days for example. The sum of this value and the
# container-updater interval should be less than the account-replicator
# reclaim_age. This ensures that once the account-reaper has deleted a
# container there is sufficient time for the container-updater to report to the
# account before the account DB is removed.
# delay_reaping = 0
#
# If the account fails to be reaped due to a persistent error, the
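The timing constraint described in the new comment can be sanity-checked outside Swift. A minimal sketch, where the function name and the sample interval/reclaim_age values are illustrative assumptions, not Swift defaults pulled from this config:

```python
def reaper_delay_is_safe(delay_reaping, updater_interval, reclaim_age):
    # The reaper must not outpace the updater: the account DB has to
    # survive long enough after reaping a container for the
    # container-updater to report the deletion back to the account.
    return delay_reaping + updater_interval < reclaim_age


# A 30-day delay plus a 5-minute updater interval would exceed a
# one-week reclaim_age, so this combination is unsafe:
print(reaper_delay_is_safe(2592000, 300, 604800))
```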


@ -28,6 +28,7 @@ pipeline = catch_errors proxy-logging cache proxy-server
[app:proxy-server]
use = egg:swift#proxy
account_autocreate = true
# See proxy-server.conf-sample for options
[filter:cache]


@ -94,7 +94,7 @@ bind_port = 8080
[pipeline:main]
# This sample pipeline uses tempauth and is used for SAIO dev work and
# testing. See below for a pipeline using keystone.
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk tempurl ratelimit tempauth copy container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache listing_formats container_sync bulk tempurl ratelimit tempauth copy container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server
# The following pipeline shows keystone integration. Comment out the one
# above and uncomment this one. Additional steps for integrating keystone are
@ -357,7 +357,7 @@ user_test5_tester5 = testing5 service
# Following parameters are known to work with keystonemiddleware v2.3.0
# (above v2.0.0), but checking the latest information in the wiki page[1]
# is recommended.
# 1. http://docs.openstack.org/developer/keystonemiddleware/middlewarearchitecture.html#configuration
# 1. https://docs.openstack.org/keystonemiddleware/latest/middlewarearchitecture.html#configuration
#
# [filter:authtoken]
# paste.filter_factory = keystonemiddleware.auth_token:filter_factory
@ -544,6 +544,8 @@ use = egg:swift#domain_remap
# can be specified separated by a comma
# storage_domain = example.com
# Specify a root path part that will be added to the start of paths if not
# already present.
# path_root = v1
# Browsers can convert a host header to lowercase, so check that reseller
@ -556,6 +558,14 @@ use = egg:swift#domain_remap
# reseller_prefixes = AUTH
# default_reseller_prefix =
# Enable legacy remapping behavior for versioned path requests:
# c.a.example.com/v1/o -> /v1/AUTH_a/c/o
# instead of
# c.a.example.com/v1/o -> /v1/AUTH_a/c/v1/o
# ... by default all path parts after a remapped domain are considered part of
# the object name with no special case for the path "v1"
# mangle_client_paths = False
[filter:catch_errors]
use = egg:swift#catch_errors
# You can override the default log routing for this filter here:
@ -860,8 +870,6 @@ use = egg:swift#copy
# requests are transformed into COPY requests where source and destination are
# the same. All client-visible behavior (save response time) should be
# identical.
# This option is deprecated and will be ignored in a future release.
# object_post_as_copy = false
# Note: To enable encryption, add the following 2 dependent pieces of crypto
# middleware to the proxy-server pipeline. They should be to the right of all
@ -915,3 +923,9 @@ use = egg:swift#encryption
# disable_encryption to True. However, all encryption middleware should remain
# in the pipeline in order for existing encrypted data to be read.
# disable_encryption = False
# listing_formats should be just right of the first proxy-logging middleware,
# and left of most other middlewares. If it is not already present, it will
# be automatically inserted for you.
[filter:listing_formats]
use = egg:swift#listing_formats


@ -0,0 +1,67 @@
# Andi Chandler <andi@gowling.com>, 2017. #zanata
msgid ""
msgstr ""
"Project-Id-Version: Swift Release Notes 2.15.2\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2017-10-10 22:05+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2017-10-05 03:59+0000\n"
"Last-Translator: Andi Chandler <andi@gowling.com>\n"
"Language-Team: English (United Kingdom)\n"
"Language: en-GB\n"
"X-Generator: Zanata 3.9.6\n"
"Plural-Forms: nplurals=2; plural=(n != 1)\n"
msgid "2.10.0"
msgstr "2.10.0"
msgid "2.10.1"
msgstr "2.10.1"
msgid "2.10.2"
msgstr "2.10.2"
msgid "2.11.0"
msgstr "2.11.0"
msgid "2.12.0"
msgstr "2.12.0"
msgid "2.13.0"
msgstr "2.13.0"
msgid "2.13.1"
msgstr "2.13.1"
msgid "2.14.0"
msgstr "2.14.0"
msgid "2.15.0"
msgstr "2.15.0"
msgid "2.15.1"
msgstr "2.15.1"
msgid ""
"A PUT or POST to a container will now update the container's Last-Modified "
"time, and that value will be included in a GET/HEAD response."
msgstr ""
"A PUT or POST to a container will now update the container's Last-Modified "
"time, and that value will be included in a GET/HEAD response."
msgid "Current (Unreleased) Release Notes"
msgstr "Current (Unreleased) Release Notes"
msgid "Swift Release Notes"
msgstr "Swift Release Notes"
msgid "domain_remap now accepts a list of domains in \"storage_domain\"."
msgstr "domain_remap now accepts a list of domains in \"storage_domain\"."
msgid "name_check and cname_lookup keys have been added to `/info`."
msgstr "name_check and cname_lookup keys have been added to `/info`."
msgid "swift-recon now respects storage policy aliases."
msgstr "swift-recon now respects storage policy aliases."


@ -106,6 +106,7 @@ paste.filter_factory =
keymaster = swift.common.middleware.crypto.keymaster:filter_factory
encryption = swift.common.middleware.crypto:filter_factory
kms_keymaster = swift.common.middleware.crypto.kms_keymaster:filter_factory
listing_formats = swift.common.middleware.listing_formats:filter_factory
[build_sphinx]
all_files = 1


@ -24,15 +24,16 @@ import swift.common.db
from swift.account.backend import AccountBroker, DATADIR
from swift.account.utils import account_listing_response, get_response_headers
from swift.common.db import DatabaseConnectionError, DatabaseAlreadyExists
from swift.common.request_helpers import get_param, get_listing_content_type, \
from swift.common.request_helpers import get_param, \
split_and_validate_path
from swift.common.utils import get_logger, hash_path, public, \
Timestamp, storage_directory, config_true_value, \
json, timing_stats, replication, get_log_line
from swift.common.constraints import check_mount, valid_timestamp, check_utf8
from swift.common.constraints import valid_timestamp, check_utf8, check_drive
from swift.common import constraints
from swift.common.db_replicator import ReplicatorRpc
from swift.common.base_storage_server import BaseStorageServer
from swift.common.middleware import listing_formats
from swift.common.swob import HTTPAccepted, HTTPBadRequest, \
HTTPCreated, HTTPForbidden, HTTPInternalServerError, \
HTTPMethodNotAllowed, HTTPNoContent, HTTPNotFound, \
@ -87,7 +88,7 @@ class AccountController(BaseStorageServer):
def DELETE(self, req):
"""Handle HTTP DELETE request."""
drive, part, account = split_and_validate_path(req, 3)
if self.mount_check and not check_mount(self.root, drive):
if not check_drive(self.root, drive, self.mount_check):
return HTTPInsufficientStorage(drive=drive, request=req)
req_timestamp = valid_timestamp(req)
broker = self._get_account_broker(drive, part, account)
@ -101,7 +102,7 @@ class AccountController(BaseStorageServer):
def PUT(self, req):
"""Handle HTTP PUT request."""
drive, part, account, container = split_and_validate_path(req, 3, 4)
if self.mount_check and not check_mount(self.root, drive):
if not check_drive(self.root, drive, self.mount_check):
return HTTPInsufficientStorage(drive=drive, request=req)
if container: # put account container
if 'x-timestamp' not in req.headers:
@ -167,8 +168,8 @@ class AccountController(BaseStorageServer):
def HEAD(self, req):
"""Handle HTTP HEAD request."""
drive, part, account = split_and_validate_path(req, 3)
out_content_type = get_listing_content_type(req)
if self.mount_check and not check_mount(self.root, drive):
out_content_type = listing_formats.get_listing_content_type(req)
if not check_drive(self.root, drive, self.mount_check):
return HTTPInsufficientStorage(drive=drive, request=req)
broker = self._get_account_broker(drive, part, account,
pending_timeout=0.1,
@ -201,9 +202,9 @@ class AccountController(BaseStorageServer):
constraints.ACCOUNT_LISTING_LIMIT)
marker = get_param(req, 'marker', '')
end_marker = get_param(req, 'end_marker')
out_content_type = get_listing_content_type(req)
out_content_type = listing_formats.get_listing_content_type(req)
if self.mount_check and not check_mount(self.root, drive):
if not check_drive(self.root, drive, self.mount_check):
return HTTPInsufficientStorage(drive=drive, request=req)
broker = self._get_account_broker(drive, part, account,
pending_timeout=0.1,
@ -224,7 +225,7 @@ class AccountController(BaseStorageServer):
"""
post_args = split_and_validate_path(req, 3)
drive, partition, hash = post_args
if self.mount_check and not check_mount(self.root, drive):
if not check_drive(self.root, drive, self.mount_check):
return HTTPInsufficientStorage(drive=drive, request=req)
try:
args = json.load(req.environ['wsgi.input'])
@ -240,7 +241,7 @@ class AccountController(BaseStorageServer):
"""Handle HTTP POST request."""
drive, part, account = split_and_validate_path(req, 3)
req_timestamp = valid_timestamp(req)
if self.mount_check and not check_mount(self.root, drive):
if not check_drive(self.root, drive, self.mount_check):
return HTTPInsufficientStorage(drive=drive, request=req)
broker = self._get_account_broker(drive, part, account)
if broker.is_deleted():


@ -14,8 +14,8 @@
# limitations under the License.
import json
from xml.sax import saxutils
from swift.common.middleware import listing_formats
from swift.common.swob import HTTPOk, HTTPNoContent
from swift.common.utils import Timestamp
from swift.common.storage_policy import POLICIES
@ -78,43 +78,27 @@ def account_listing_response(account, req, response_content_type, broker=None,
account_list = broker.list_containers_iter(limit, marker, end_marker,
prefix, delimiter, reverse)
if response_content_type == 'application/json':
data = []
for (name, object_count, bytes_used, put_timestamp, is_subdir) \
in account_list:
if is_subdir:
data.append({'subdir': name})
else:
data.append(
{'name': name, 'count': object_count,
'bytes': bytes_used,
'last_modified': Timestamp(put_timestamp).isoformat})
data = []
for (name, object_count, bytes_used, put_timestamp, is_subdir) \
in account_list:
if is_subdir:
data.append({'subdir': name.decode('utf8')})
else:
data.append(
{'name': name.decode('utf8'), 'count': object_count,
'bytes': bytes_used,
'last_modified': Timestamp(put_timestamp).isoformat})
if response_content_type.endswith('/xml'):
account_list = listing_formats.account_to_xml(data, account)
ret = HTTPOk(body=account_list, request=req, headers=resp_headers)
elif response_content_type.endswith('/json'):
account_list = json.dumps(data)
elif response_content_type.endswith('/xml'):
output_list = ['<?xml version="1.0" encoding="UTF-8"?>',
'<account name=%s>' % saxutils.quoteattr(account)]
for (name, object_count, bytes_used, put_timestamp, is_subdir) \
in account_list:
if is_subdir:
output_list.append(
'<subdir name=%s />' % saxutils.quoteattr(name))
else:
item = '<container><name>%s</name><count>%s</count>' \
'<bytes>%s</bytes><last_modified>%s</last_modified>' \
'</container>' % \
(saxutils.escape(name), object_count,
bytes_used, Timestamp(put_timestamp).isoformat)
output_list.append(item)
output_list.append('</account>')
account_list = '\n'.join(output_list)
ret = HTTPOk(body=account_list, request=req, headers=resp_headers)
elif data:
account_list = listing_formats.listing_to_text(data)
ret = HTTPOk(body=account_list, request=req, headers=resp_headers)
else:
if not account_list:
resp = HTTPNoContent(request=req, headers=resp_headers)
resp.content_type = response_content_type
resp.charset = 'utf-8'
return resp
account_list = '\n'.join(r[0] for r in account_list) + '\n'
ret = HTTPOk(body=account_list, request=req, headers=resp_headers)
ret = HTTPNoContent(request=req, headers=resp_headers)
ret.content_type = response_content_type
ret.charset = 'utf-8'
return ret

swift/cli/dispersion_report.py Executable file → Normal file


@ -218,7 +218,6 @@ def _parse_set_weight_values(argvish):
# --options format,
# but not both. If both are specified, raise an error.
try:
devs = []
if not new_cmd_format:
if len(args) % 2 != 0:
print(Commands.set_weight.__doc__.strip())
@ -227,7 +226,7 @@ def _parse_set_weight_values(argvish):
devs_and_weights = izip(islice(argvish, 0, len(argvish), 2),
islice(argvish, 1, len(argvish), 2))
for devstr, weightstr in devs_and_weights:
devs.extend(builder.search_devs(
devs = (builder.search_devs(
parse_search_value(devstr)) or [])
weight = float(weightstr)
_set_weight_values(devs, weight, opts)
@ -236,7 +235,7 @@ def _parse_set_weight_values(argvish):
print(Commands.set_weight.__doc__.strip())
exit(EXIT_ERROR)
devs.extend(builder.search_devs(
devs = (builder.search_devs(
parse_search_values_from_opts(opts)) or [])
weight = float(args[0])
_set_weight_values(devs, weight, opts)


@ -15,6 +15,7 @@
import functools
import os
from os.path import isdir # tighter scoped import for mocking
import time
import six
@ -104,11 +105,6 @@ reload_constraints()
MAX_BUFFERED_SLO_SEGMENTS = 10000
#: Query string format= values to their corresponding content-type values
FORMAT2CONTENT_TYPE = {'plain': 'text/plain', 'json': 'application/json',
'xml': 'application/xml'}
# By default the maximum number of allowed headers depends on the number of max
# allowed metadata settings plus a default value of 36 for swift internally
# generated headers and regular http headers. If for some reason this is not
@ -234,9 +230,9 @@ def check_dir(root, drive):
:param root: base path where the dir is
:param drive: drive name to be checked
:returns: True if it is a valid directory, False otherwise
:returns: full path to the device, or None if drive fails to validate
"""
return os.path.isdir(os.path.join(root, drive))
return check_drive(root, drive, False)
def check_mount(root, drive):
@ -248,12 +244,31 @@ def check_mount(root, drive):
:param root: base path where the devices are mounted
:param drive: drive name to be checked
:returns: True if it is a valid mounted device, False otherwise
:returns: full path to the device, or None if drive fails to validate
"""
return check_drive(root, drive, True)
def check_drive(root, drive, mount_check):
"""
Validate the path given by root and drive is a valid existing directory.
:param root: base path where the devices are mounted
:param drive: drive name to be checked
:param mount_check: additionally require path is mounted
:returns: full path to the device, or None if drive fails to validate
"""
if not (urllib.parse.quote_plus(drive) == drive):
return False
return None
path = os.path.join(root, drive)
return utils.ismount(path)
if mount_check:
if utils.ismount(path):
return path
else:
if isdir(path):
return path
return None
def check_float(string):
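The consolidated `check_drive` helper in the hunk above can be exercised standalone. A simplified re-implementation with the same semantics (a sketch, not the committed code): it returns the full device path on success, or `None` for an unsafe drive name, a missing directory, or an unmounted path.

```python
import os
from urllib.parse import quote_plus


def check_drive(root, drive, mount_check):
    # Reject drive names that would change under URL quoting
    # (e.g. names containing '/' or spaces).
    if quote_plus(drive) != drive:
        return None
    path = os.path.join(root, drive)
    if mount_check:
        return path if os.path.ismount(path) else None
    return path if os.path.isdir(path) else None
```

Callers can then collapse the old two-step pattern `if self.mount_check and not check_mount(self.root, drive)` into the single call `if not check_drive(self.root, drive, self.mount_check)`, as the surrounding hunks do.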


@ -221,9 +221,9 @@ class Replicator(Daemon):
'replication_last': now},
self.rcache, self.logger)
self.logger.info(' '.join(['%s:%s' % item for item in
self.stats.items() if item[0] in
sorted(self.stats.items()) if item[0] in
('no_change', 'hashmatch', 'rsync', 'diff', 'ts_repl',
'empty', 'diff_capped')]))
'empty', 'diff_capped', 'remote_merge')]))
def _add_failure_stats(self, failure_devs_info):
for node, dev in failure_devs_info:


@ -88,19 +88,20 @@ def _get_direct_account_container(path, stype, node, part,
Do not use directly use the get_direct_account or
get_direct_container instead.
"""
qs = 'format=json'
params = ['format=json']
if marker:
qs += '&marker=%s' % quote(marker)
params.append('marker=%s' % quote(marker))
if limit:
qs += '&limit=%d' % limit
params.append('limit=%d' % limit)
if prefix:
qs += '&prefix=%s' % quote(prefix)
params.append('prefix=%s' % quote(prefix))
if delimiter:
qs += '&delimiter=%s' % quote(delimiter)
params.append('delimiter=%s' % quote(delimiter))
if end_marker:
qs += '&end_marker=%s' % quote(end_marker)
params.append('end_marker=%s' % quote(end_marker))
if reverse:
qs += '&reverse=%s' % quote(reverse)
params.append('reverse=%s' % quote(reverse))
qs = '&'.join(params)
with Timeout(conn_timeout):
conn = http_connect(node['ip'], node['port'], node['device'], part,
'GET', path, query_string=qs,
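The hunk above swaps repeated string concatenation for a parameter list joined once. The same pattern sketched standalone (function name is illustrative; the parameter order follows the diff):

```python
from urllib.parse import quote


def build_listing_qs(marker=None, limit=None, prefix=None,
                     delimiter=None, end_marker=None, reverse=None):
    # Collect query parameters in a list and join once at the end,
    # instead of appending to a string in several branches.
    params = ['format=json']
    if marker:
        params.append('marker=%s' % quote(marker))
    if limit:
        params.append('limit=%d' % limit)
    if prefix:
        params.append('prefix=%s' % quote(prefix))
    if delimiter:
        params.append('delimiter=%s' % quote(delimiter))
    if end_marker:
        params.append('end_marker=%s' % quote(end_marker))
    if reverse:
        params.append('reverse=%s' % quote(str(reverse)))
    return '&'.join(params)
```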


@ -772,12 +772,14 @@ class SimpleClient(object):
if name:
url = '%s/%s' % (url.rstrip('/'), quote(name))
else:
url += '?format=json'
params = ['format=json']
if prefix:
url += '&prefix=%s' % prefix
params.append('prefix=%s' % prefix)
if marker:
url += '&marker=%s' % quote(marker)
params.append('marker=%s' % quote(marker))
url += '?' + '&'.join(params)
req = urllib2.Request(url, headers=headers, data=contents)
if proxy:


@ -164,7 +164,7 @@ class MemcacheRing(object):
if isinstance(e, Timeout):
logging.error("Timeout %(action)s to memcached: %(server)s",
{'action': action, 'server': server})
elif isinstance(e, socket.error):
elif isinstance(e, (socket.error, MemcacheConnectionError)):
logging.error("Error %(action)s to memcached: %(server)s: %(err)s",
{'action': action, 'server': server, 'err': e})
else:
@ -283,7 +283,11 @@ class MemcacheRing(object):
with Timeout(self._io_timeout):
sock.sendall('get %s\r\n' % key)
line = fp.readline().strip().split()
while line[0].upper() != 'END':
while True:
if not line:
raise MemcacheConnectionError('incomplete read')
if line[0].upper() == 'END':
break
if line[0].upper() == 'VALUE' and line[1] == key:
size = int(line[3])
value = fp.read(size)
@ -329,6 +333,8 @@ class MemcacheRing(object):
with Timeout(self._io_timeout):
sock.sendall('%s %s %s\r\n' % (command, key, delta))
line = fp.readline().strip().split()
if not line:
raise MemcacheConnectionError('incomplete read')
if line[0].upper() == 'NOT_FOUND':
add_val = delta
if command == 'decr':
@ -444,7 +450,11 @@ class MemcacheRing(object):
sock.sendall('get %s\r\n' % ' '.join(keys))
line = fp.readline().strip().split()
responses = {}
while line[0].upper() != 'END':
while True:
if not line:
raise MemcacheConnectionError('incomplete read')
if line[0].upper() == 'END':
break
if line[0].upper() == 'VALUE':
size = int(line[3])
value = fp.read(size)
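The loop changes above guard against a closed socket yielding empty lines forever. A minimal sketch of the same response-parsing pattern, with a file-like object standing in for the socket (simplified; not the MemcacheRing code itself):

```python
class MemcacheConnectionError(Exception):
    pass


def read_values(fp):
    # Parse a memcached text-protocol GET response, raising instead of
    # spinning if the stream ends before the terminating END line.
    values = {}
    while True:
        line = fp.readline().strip().split()
        if not line:
            raise MemcacheConnectionError('incomplete read')
        if line[0].upper() == 'END':
            return values
        if line[0].upper() == 'VALUE':
            key, size = line[1], int(line[3])
            values[key] = fp.read(size)
            fp.readline()  # consume the trailing \r\n after the data
```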


@ -112,20 +112,6 @@ If a request is sent without the query parameter, an attempt will be made to
copy the whole object but will fail if the object size is
greater than 5GB.
-------------------
Object Post as Copy
-------------------
Historically, this has been a feature (and a configurable option with default
set to True) in proxy server configuration. This has been moved to server side
copy middleware and the default changed to False.
When ``object_post_as_copy`` is set to ``true``, an incoming POST request is
morphed into a COPY request where source and destination objects are the same.
This feature was necessary because of a previous behavior where POSTs would
update the metadata on the object but not on the container. As a result,
features like container sync would not work correctly. This is no longer the
case and this option is now deprecated. It will be removed in a future release.
"""
import os
@ -137,8 +123,7 @@ from swift.common.utils import get_logger, \
config_true_value, FileLikeIter, read_conf_dir, close_if_possible
from swift.common.swob import Request, HTTPPreconditionFailed, \
HTTPRequestEntityTooLarge, HTTPBadRequest, HTTPException
from swift.common.http import HTTP_MULTIPLE_CHOICES, HTTP_CREATED, \
is_success, HTTP_OK
from swift.common.http import HTTP_MULTIPLE_CHOICES, is_success, HTTP_OK
from swift.common.constraints import check_account_format, MAX_FILE_SIZE
from swift.common.request_helpers import copy_header_subset, remove_items, \
is_sys_meta, is_sys_or_user_meta, is_object_transient_sysmeta
@ -238,13 +223,7 @@ class ServerSideCopyWebContext(WSGIContext):
return app_resp
def _adjust_put_response(self, req, additional_resp_headers):
if 'swift.post_as_copy' in req.environ:
# Older editions returned 202 Accepted on object POSTs, so we'll
# convert any 201 Created responses to that for compatibility with
# picky clients.
if self._get_status_int() == HTTP_CREATED:
self._response_status = '202 Accepted'
elif is_success(self._get_status_int()):
if is_success(self._get_status_int()):
for header, value in additional_resp_headers.items():
self._response_headers.append((header, value))
@ -269,17 +248,12 @@ class ServerSideCopyMiddleware(object):
def __init__(self, app, conf):
self.app = app
self.logger = get_logger(conf, log_route="copy")
# Read the old object_post_as_copy option from Proxy app just in case
# someone has set it to false (non default). This wouldn't cause
# problems during upgrade.
self._load_object_post_as_copy_conf(conf)
self.object_post_as_copy = \
config_true_value(conf.get('object_post_as_copy', 'false'))
if self.object_post_as_copy:
msg = ('object_post_as_copy=true is deprecated; remove all '
'references to it from %s to disable this warning. This '
'option will be ignored in a future release' % conf.get(
'__file__', 'proxy-server.conf'))
msg = ('object_post_as_copy=true is deprecated; this '
'option is now ignored')
self.logger.warning(msg)
def _load_object_post_as_copy_conf(self, conf):
@ -330,9 +304,6 @@ class ServerSideCopyMiddleware(object):
elif req.method == 'COPY':
req.environ['swift.orig_req_method'] = req.method
return self.handle_COPY(req, start_response)
elif req.method == 'POST' and self.object_post_as_copy:
req.environ['swift.orig_req_method'] = req.method
return self.handle_object_post_as_copy(req, start_response)
elif req.method == 'OPTIONS':
# Does not interfere with OPTIONS response from
# (account,container) servers and /info response.
@ -343,21 +314,6 @@ class ServerSideCopyMiddleware(object):
return self.app(env, start_response)
def handle_object_post_as_copy(self, req, start_response):
req.method = 'PUT'
req.path_info = '/v1/%s/%s/%s' % (
self.account_name, self.container_name, self.object_name)
req.headers['Content-Length'] = 0
req.headers.pop('Range', None)
req.headers['X-Copy-From'] = quote('/%s/%s' % (self.container_name,
self.object_name))
req.environ['swift.post_as_copy'] = True
params = req.params
# for post-as-copy always copy the manifest itself if source is *LO
params['multipart-manifest'] = 'get'
req.params = params
return self.handle_PUT(req, start_response)
def handle_COPY(self, req, start_response):
if not req.headers.get('Destination'):
return HTTPPreconditionFailed(request=req,
@ -394,11 +350,6 @@ class ServerSideCopyMiddleware(object):
source_req.headers.pop('X-Backend-Storage-Policy-Index', None)
source_req.path_info = quote(source_path)
source_req.headers['X-Newest'] = 'true'
if 'swift.post_as_copy' in req.environ:
# We're COPYing one object over itself because of a POST; rely on
# the PUT for write authorization, don't require read authorization
source_req.environ['swift.authorize'] = lambda req: None
source_req.environ['swift.authorize_override'] = True
# in case we are copying an SLO manifest, set format=raw parameter
params = source_req.params
@ -470,11 +421,7 @@ class ServerSideCopyMiddleware(object):
def is_object_sysmeta(k):
return is_sys_meta('object', k)
if 'swift.post_as_copy' in sink_req.environ:
# Post-as-copy: ignore new sysmeta, copy existing sysmeta
remove_items(sink_req.headers, is_object_sysmeta)
copy_header_subset(source_resp, sink_req, is_object_sysmeta)
elif config_true_value(req.headers.get('x-fresh-metadata', 'false')):
if config_true_value(req.headers.get('x-fresh-metadata', 'false')):
# x-fresh-metadata only applies to copy, not post-as-copy: ignore
# existing user metadata, update existing sysmeta with new
copy_header_subset(source_resp, sink_req, is_object_sysmeta)
@ -497,9 +444,8 @@ class ServerSideCopyMiddleware(object):
params['multipart-manifest'] = 'put'
if 'X-Object-Manifest' in source_resp.headers:
del params['multipart-manifest']
if 'swift.post_as_copy' not in sink_req.environ:
sink_req.headers['X-Object-Manifest'] = \
source_resp.headers['X-Object-Manifest']
sink_req.headers['X-Object-Manifest'] = \
source_resp.headers['X-Object-Manifest']
sink_req.params = params
# Set swift.source, data source, content length and etag


@ -15,7 +15,6 @@
import base64
import json
import xml.etree.cElementTree as ElementTree
from swift import gettext_ as _
from swift.common.http import is_success
@ -23,7 +22,7 @@ from swift.common.middleware.crypto.crypto_utils import CryptoWSGIContext, \
load_crypto_meta, extract_crypto_meta, Crypto
from swift.common.exceptions import EncryptionException
from swift.common.request_helpers import get_object_transient_sysmeta, \
get_listing_content_type, get_sys_meta_prefix, get_user_meta_prefix
get_sys_meta_prefix, get_user_meta_prefix
from swift.common.swob import Request, HTTPException, HTTPInternalServerError
from swift.common.utils import get_logger, config_true_value, \
parse_content_range, closing_if_possible, parse_content_type, \
@ -352,15 +351,12 @@ class DecrypterContContext(BaseDecrypterContext):
if is_success(self._get_status_int()):
# only decrypt body of 2xx responses
out_content_type = get_listing_content_type(req)
if out_content_type == 'application/json':
handler = self.process_json_resp
keys = self.get_decryption_keys(req)
elif out_content_type.endswith('/xml'):
handler = self.process_xml_resp
keys = self.get_decryption_keys(req)
else:
handler = keys = None
handler = keys = None
for header, value in self._response_headers:
if header.lower() == 'content-type' and \
value.split(';', 1)[0] == 'application/json':
handler = self.process_json_resp
keys = self.get_decryption_keys(req)
if handler and keys:
try:
@ -398,24 +394,6 @@ class DecrypterContContext(BaseDecrypterContext):
obj_dict['hash'] = self.decrypt_value_with_meta(ciphertext, key)
return obj_dict
def process_xml_resp(self, key, resp_iter):
"""
Parses xml body listing and decrypt encrypted entries. Updates
Content-Length header with new body length and return a body iter.
"""
with closing_if_possible(resp_iter):
resp_body = ''.join(resp_iter)
tree = ElementTree.fromstring(resp_body)
for elem in tree.iter('hash'):
ciphertext = elem.text.encode('utf8')
plain = self.decrypt_value_with_meta(ciphertext, key)
elem.text = plain.decode('utf8')
new_body = ElementTree.tostring(tree, encoding='UTF-8').replace(
"<?xml version='1.0' encoding='UTF-8'?>",
'<?xml version="1.0" encoding="UTF-8"?>', 1)
self.update_content_length(len(new_body))
return [new_body]
class Decrypter(object):
"""Middleware for decrypting data and user metadata."""


@ -151,7 +151,7 @@ class GetContext(WSGIContext):
method='GET',
headers={'x-auth-token': req.headers.get('x-auth-token')},
agent=('%(orig)s ' + 'DLO MultipartGET'), swift_source='DLO')
con_req.query_string = 'format=json&prefix=%s' % quote(prefix)
con_req.query_string = 'prefix=%s' % quote(prefix)
if marker:
con_req.query_string += '&marker=%s' % quote(marker)


@ -17,42 +17,91 @@
"""
Domain Remap Middleware
Middleware that translates container and account parts of a domain to
path parameters that the proxy server understands.
Middleware that translates container and account parts of a domain to path
parameters that the proxy server understands.
container.account.storageurl/object gets translated to
container.account.storageurl/path_root/account/container/object
Translation is only performed when the request URL's host domain matches one of
a list of domains. This list may be configured by the option
``storage_domain``, and defaults to the single domain ``example.com``.
account.storageurl/path_root/container/object gets translated to
account.storageurl/path_root/account/container/object
If not already present, a configurable ``path_root``, which defaults to ``v1``,
will be added to the start of the translated path.
Browsers can convert a host header to lowercase, so check that reseller
prefix on the account is the correct case. This is done by comparing the
items in the reseller_prefixes config option to the found prefix. If they
match except for case, the item from reseller_prefixes will be used
instead of the found reseller prefix. When none match, the default reseller
prefix is used. When no default reseller prefix is configured, any request with
an account prefix not in that list will be ignored by this middleware.
reseller_prefixes defaults to 'AUTH'.
For example, with the default configuration::
container.AUTH-account.example.com/object
container.AUTH-account.example.com/v1/object
would both be translated to::
container.AUTH-account.example.com/v1/AUTH_account/container/object
and::
AUTH-account.example.com/container/object
AUTH-account.example.com/v1/container/object
would both be translated to::
AUTH-account.example.com/v1/AUTH_account/container/object
Additionally, translation is only performed when the account name in the
translated path starts with a reseller prefix matching one of a list configured
by the option ``reseller_prefixes``, or when no match is found but a
``default_reseller_prefix`` has been configured.
The ``reseller_prefixes`` list defaults to the single prefix ``AUTH``. The
``default_reseller_prefix`` is not configured by default.
Browsers can convert a host header to lowercase, so the middleware checks that
the reseller prefix on the account name is the correct case. This is done by
comparing the items in the ``reseller_prefixes`` config option to the found
prefix. If they match except for case, the item from ``reseller_prefixes`` will
be used instead of the found reseller prefix. The middleware will also replace
any hyphen ('-') in the account name with an underscore ('_').
For example, with the default configuration::
auth-account.example.com/container/object
AUTH-account.example.com/container/object
auth_account.example.com/container/object
AUTH_account.example.com/container/object
would all be translated to::
<unchanged>.example.com/v1/AUTH_account/container/object
When no match is found in ``reseller_prefixes``, the
``default_reseller_prefix`` config option is used. When no
``default_reseller_prefix`` is configured, any request with an account prefix
not in the ``reseller_prefixes`` list will be ignored by this middleware.
For example, with ``default_reseller_prefix = AUTH``::
account.example.com/container/object
would be translated to::
account.example.com/v1/AUTH_account/container/object
Note that this middleware requires that container names and account names
(except as described above) must be DNS-compatible. This means that the
account name created in the system and the containers created by users
cannot exceed 63 characters or have UTF-8 characters. These are
restrictions over and above what swift requires and are not explicitly
checked. Simply put, the this middleware will do a best-effort attempt to
derive account and container names from elements in the domain name and
put those derived values into the URL path (leaving the Host header
unchanged).
(except as described above) must be DNS-compatible. This means that the account
name created in the system and the containers created by users cannot exceed 63
characters or have UTF-8 characters. These are restrictions over and above what
Swift requires and are not explicitly checked. Simply put, this middleware
will do a best-effort attempt to derive account and container names from
elements in the domain name and put those derived values into the URL path
(leaving the ``Host`` header unchanged).
Also note that using container sync with remapped domain names is not
advised. With container sync, you should use the true storage end points as
sync destinations.
Also note that using :doc:`overview_container_sync` with remapped domain names
is not advised. With :doc:`overview_container_sync`, you should use the true
storage end points as sync destinations.
"""
from swift.common.middleware import RewriteContext
from swift.common.swob import Request, HTTPBadRequest
from swift.common.utils import list_from_csv, register_swift_info
from swift.common.utils import config_true_value, list_from_csv, \
register_swift_info
class _DomainRemapContext(RewriteContext):
@@ -78,12 +127,14 @@ class DomainRemapMiddleware(object):
if not s.startswith('.')]
self.storage_domain += [s for s in list_from_csv(storage_domain)
if s.startswith('.')]
self.path_root = '/' + conf.get('path_root', 'v1').strip('/')
self.path_root = conf.get('path_root', 'v1').strip('/') + '/'
prefixes = conf.get('reseller_prefixes', 'AUTH')
self.reseller_prefixes = list_from_csv(prefixes)
self.reseller_prefixes_lower = [x.lower()
for x in self.reseller_prefixes]
self.default_reseller_prefix = conf.get('default_reseller_prefix')
self.mangle_client_paths = config_true_value(
conf.get('mangle_client_paths'))
def __call__(self, env, start_response):
if not self.storage_domain:
@@ -129,14 +180,14 @@ class DomainRemapMiddleware(object):
# account prefix is not in config list. bail.
return self.app(env, start_response)
requested_path = path = env['PATH_INFO']
new_path_parts = [self.path_root, account]
requested_path = env['PATH_INFO']
path = requested_path[1:]
new_path_parts = ['', self.path_root[:-1], account]
if container:
new_path_parts.append(container)
if path.startswith(self.path_root):
if self.mangle_client_paths and (path + '/').startswith(
self.path_root):
path = path[len(self.path_root):]
if path.startswith('/'):
path = path[1:]
new_path_parts.append(path)
new_path = '/'.join(new_path_parts)
env['PATH_INFO'] = new_path
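The remapping described in the docstring above can be sketched as a small standalone function. This is illustrative only: the function name, the single storage domain, and the omission of ``mangle_client_paths`` handling are simplifications of the real middleware.

```python
def remap(host, path, storage_domain='.example.com',
          reseller_prefixes=('AUTH',), path_root='v1'):
    # Translate container.account.domain/object into
    # /path_root/account/container/object, as the middleware does.
    if not host.endswith(storage_domain):
        return path
    parts = host[:-len(storage_domain)].split('.')
    if len(parts) == 2:
        container, account = parts
    elif len(parts) == 1:
        container, account = None, parts[0]
    else:
        return path
    # Normalize a lowercased reseller prefix and hyphens in the account.
    prefix = account.split('_', 1)[0].split('-', 1)[0]
    for known in reseller_prefixes:
        if prefix.lower() == known.lower():
            account = known + account[len(prefix):].replace('-', '_')
            break
    else:
        return path  # no default_reseller_prefix configured; ignore
    new_parts = ['', path_root, account]
    if container:
        new_parts.append(container)
    new_parts.append(path.lstrip('/'))
    return '/'.join(new_parts)

print(remap('container.AUTH-account.example.com', '/object'))
# -> /v1/AUTH_account/container/object
```

Note how ``auth-account`` and ``AUTH_account`` hosts both normalize to the same ``AUTH_account`` path element, matching the case- and hyphen-folding rules above.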


@@ -36,6 +36,7 @@ from swift.common.utils import get_logger, config_true_value
from swift.common.request_helpers import (
remove_items, get_sys_meta_prefix, OBJECT_TRANSIENT_SYSMETA_PREFIX
)
from six.moves.urllib.parse import urlsplit
import re
#: A list of python regular expressions that will be used to
@@ -89,9 +90,29 @@ class GatekeeperMiddleware(object):
[('X-Timestamp', ts)])
def gatekeeper_response(status, response_headers, exc_info=None):
def fixed_response_headers():
def relative_path(value):
parsed = urlsplit(value)
new_path = parsed.path
if parsed.query:
new_path += ('?%s' % parsed.query)
if parsed.fragment:
new_path += ('#%s' % parsed.fragment)
return new_path
if not env.get('swift.leave_relative_location'):
return response_headers
else:
return [
(k, v) if k.lower() != 'location' else
(k, relative_path(v)) for (k, v) in response_headers
]
response_headers = fixed_response_headers()
removed = filter(
lambda h: self.outbound_condition(h[0]),
response_headers)
if removed:
self.logger.debug('removed response headers: %s' % removed)
new_headers = filter(
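The ``relative_path`` helper above strips the scheme and host from an absolute ``Location`` URL when ``swift.leave_relative_location`` is set. A standalone sketch of the same idea (the function name is illustrative, and Python 3's ``urllib.parse`` stands in for the ``six.moves`` import used in the diff):

```python
from urllib.parse import urlsplit

def relative_location(value):
    # Keep only the path, query and fragment of an absolute Location
    # URL, so clients are not redirected to an internal scheme/host.
    parsed = urlsplit(value)
    new_path = parsed.path
    if parsed.query:
        new_path += '?%s' % parsed.query
    if parsed.fragment:
        new_path += '#%s' % parsed.fragment
    return new_path

print(relative_location('http://internal:8080/v1/AUTH_a/c?marker=x'))
# -> /v1/AUTH_a/c?marker=x
```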


@@ -0,0 +1,211 @@
# Copyright (c) 2017 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import six
from xml.etree.cElementTree import Element, SubElement, tostring
from swift.common.constraints import valid_api_version
from swift.common.http import HTTP_NO_CONTENT
from swift.common.request_helpers import get_param
from swift.common.swob import HTTPException, HTTPNotAcceptable, Request, \
RESPONSE_REASONS
#: Mapping of query string ``format=`` values to their corresponding
#: content-type values.
FORMAT2CONTENT_TYPE = {'plain': 'text/plain', 'json': 'application/json',
'xml': 'application/xml'}
#: Maximum size of a valid JSON container listing body. If we receive
#: a container listing response larger than this, assume it's a staticweb
#: response and pass it on to the client.
# Default max object name length is 1024, default container listing limit is
# add a fudge factor for things like hash, last_modified, etc.
MAX_CONTAINER_LISTING_CONTENT_LENGTH = 1024 * 10000 * 2
def get_listing_content_type(req):
"""
Determine the content type to use for an account or container listing
response.
:param req: request object
:returns: content type as a string (e.g. text/plain, application/json)
:raises HTTPNotAcceptable: if the requested content type is not acceptable
:raises HTTPBadRequest: if the 'format' query param is provided and
not valid UTF-8
"""
query_format = get_param(req, 'format')
if query_format:
req.accept = FORMAT2CONTENT_TYPE.get(
query_format.lower(), FORMAT2CONTENT_TYPE['plain'])
out_content_type = req.accept.best_match(
['text/plain', 'application/json', 'application/xml', 'text/xml'])
if not out_content_type:
raise HTTPNotAcceptable(request=req)
return out_content_type
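The negotiation above (a ``format=`` query parameter overriding the ``Accept`` header) can be sketched without swob. The ``best_match`` stand-in here is deliberately minimal, assuming a plain comma-separated ``Accept`` header with no q-value handling:

```python
FORMAT2CONTENT_TYPE = {'plain': 'text/plain', 'json': 'application/json',
                       'xml': 'application/xml'}

def pick_listing_type(query_format=None, accept_header='*/*'):
    # A format= query parameter overrides the Accept header entirely;
    # unknown values fall back to text/plain, mirroring the middleware.
    if query_format:
        return FORMAT2CONTENT_TYPE.get(query_format.lower(),
                                       FORMAT2CONTENT_TYPE['plain'])
    # Greatly simplified stand-in for swob's Accept.best_match().
    offered = ('text/plain', 'application/json', 'application/xml',
               'text/xml')
    for part in accept_header.split(','):
        media = part.split(';')[0].strip()
        if media in ('*/*', 'text/*'):
            return 'text/plain'
        if media in offered:
            return media
    return None  # the middleware raises 406 Not Acceptable here
```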
def account_to_xml(listing, account_name):
doc = Element('account', name=account_name.decode('utf-8'))
doc.text = '\n'
for record in listing:
if 'subdir' in record:
name = record.pop('subdir')
sub = SubElement(doc, 'subdir', name=name)
else:
sub = SubElement(doc, 'container')
for field in ('name', 'count', 'bytes', 'last_modified'):
SubElement(sub, field).text = six.text_type(
record.pop(field))
sub.tail = '\n'
return tostring(doc, encoding='UTF-8').replace(
"<?xml version='1.0' encoding='UTF-8'?>",
'<?xml version="1.0" encoding="UTF-8"?>', 1)
def container_to_xml(listing, base_name):
doc = Element('container', name=base_name.decode('utf-8'))
for record in listing:
if 'subdir' in record:
name = record.pop('subdir')
sub = SubElement(doc, 'subdir', name=name)
SubElement(sub, 'name').text = name
else:
sub = SubElement(doc, 'object')
for field in ('name', 'hash', 'bytes', 'content_type',
'last_modified'):
SubElement(sub, field).text = six.text_type(
record.pop(field))
return tostring(doc, encoding='UTF-8').replace(
"<?xml version='1.0' encoding='UTF-8'?>",
'<?xml version="1.0" encoding="UTF-8"?>', 1)
def listing_to_text(listing):
def get_lines():
for item in listing:
if 'name' in item:
yield item['name'].encode('utf-8') + b'\n'
else:
yield item['subdir'].encode('utf-8') + b'\n'
return b''.join(get_lines())
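A quick usage sketch of the plain-text serializer: one line per record, with the ``subdir`` marker standing in for "directory" placeholder entries. The function is reproduced here so the example is self-contained; the sample listing is invented.

```python
def listing_to_text(listing):
    # One line per record: the object name, or the subdir marker for a
    # "directory" placeholder entry.
    def get_lines():
        for item in listing:
            if 'name' in item:
                yield item['name'].encode('utf-8') + b'\n'
            else:
                yield item['subdir'].encode('utf-8') + b'\n'
    return b''.join(get_lines())

sample = [{'name': 'object1', 'bytes': 14, 'hash': 'acbd'},
          {'subdir': 'photos/'}]
print(listing_to_text(sample))  # b'object1\nphotos/\n'
```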
class ListingFilter(object):
def __init__(self, app):
self.app = app
def __call__(self, env, start_response):
req = Request(env)
try:
# account and container only
version, acct, cont = req.split_path(2, 3)
except ValueError:
return self.app(env, start_response)
if not valid_api_version(version) or req.method not in ('GET', 'HEAD'):
return self.app(env, start_response)
# OK, definitely have an account/container request.
# Get the desired content-type, then force it to a JSON request.
try:
out_content_type = get_listing_content_type(req)
except HTTPException as err:
return err(env, start_response)
params = req.params
params['format'] = 'json'
req.params = params
status, headers, resp_iter = req.call_application(self.app)
header_to_index = {}
resp_content_type = resp_length = None
for i, (header, value) in enumerate(headers):
header = header.lower()
if header == 'content-type':
header_to_index[header] = i
resp_content_type = value.partition(';')[0]
elif header == 'content-length':
header_to_index[header] = i
resp_length = int(value)
if not status.startswith('200 '):
start_response(status, headers)
return resp_iter
if resp_content_type != 'application/json':
start_response(status, headers)
return resp_iter
if resp_length is None or \
resp_length > MAX_CONTAINER_LISTING_CONTENT_LENGTH:
start_response(status, headers)
return resp_iter
def set_header(header, value):
if value is None:
del headers[header_to_index[header]]
else:
headers[header_to_index[header]] = (
headers[header_to_index[header]][0], str(value))
if req.method == 'HEAD':
set_header('content-type', out_content_type + '; charset=utf-8')
set_header('content-length', None) # don't know, can't determine
start_response(status, headers)
return resp_iter
body = b''.join(resp_iter)
try:
listing = json.loads(body)
# Do a couple sanity checks
if not isinstance(listing, list):
raise ValueError
if not all(isinstance(item, dict) for item in listing):
raise ValueError
except ValueError:
# Static web listing that's returning invalid JSON?
# Just pass it straight through; that's about all we *can* do.
start_response(status, headers)
return [body]
try:
if out_content_type.endswith('/xml'):
if cont:
body = container_to_xml(listing, cont)
else:
body = account_to_xml(listing, acct)
elif out_content_type == 'text/plain':
body = listing_to_text(listing)
# else, json -- we continue down here to be sure we set charset
except KeyError:
# listing was in a bad format -- funky static web listing??
start_response(status, headers)
return [body]
if not body:
status = '%s %s' % (HTTP_NO_CONTENT,
RESPONSE_REASONS[HTTP_NO_CONTENT][0])
set_header('content-type', out_content_type + '; charset=utf-8')
set_header('content-length', len(body))
start_response(status, headers)
return [body]
def filter_factory(global_conf, **local_conf):
return ListingFilter


@@ -209,7 +209,7 @@ class ReconMiddleware(object):
continue
try:
mounted = check_mount(self.devices, entry)
mounted = bool(check_mount(self.devices, entry))
except OSError as err:
mounted = str(err)
mpoint = {'device': entry, 'mounted': mounted}
@@ -225,7 +225,7 @@ class ReconMiddleware(object):
continue
try:
mounted = check_mount(self.devices, entry)
mounted = bool(check_mount(self.devices, entry))
except OSError as err:
devices.append({'device': entry, 'mounted': str(err),
'size': '', 'used': '', 'avail': ''})


@@ -27,7 +27,7 @@ Uploading the Manifest
----------------------
After the user has uploaded the objects to be concatenated, a manifest is
uploaded. The request must be a PUT with the query parameter::
uploaded. The request must be a ``PUT`` with the query parameter::
?multipart-manifest=put
@@ -47,52 +47,49 @@ range (optional) the (inclusive) range within the object to
use as a segment. If omitted, the entire object is used.
=========== ========================================================
The format of the list will be:
.. code::
The format of the list will be::
[{"path": "/cont/object",
"etag": "etagoftheobjectsegment",
"size_bytes": 10485760,
"range": "1048576-2097151"}, ...]
"range": "1048576-2097151"},
...]
The number of object segments is limited to a configurable amount, default
1000. Each segment must be at least 1 byte. On upload, the middleware will
head every segment passed in to verify:
1. the segment exists (i.e. the HEAD was successful);
2. the segment meets minimum size requirements;
3. if the user provided a non-null etag, the etag matches;
4. if the user provided a non-null size_bytes, the size_bytes matches; and
5. if the user provided a range, it is a singular, syntactically correct range
that is satisfiable given the size of the object.
1. the segment exists (i.e. the ``HEAD`` was successful);
2. the segment meets minimum size requirements;
3. if the user provided a non-null ``etag``, the etag matches;
4. if the user provided a non-null ``size_bytes``, the size_bytes matches; and
5. if the user provided a ``range``, it is a singular, syntactically correct
range that is satisfiable given the size of the object.
Note that the etag and size_bytes keys are optional; if omitted, the
Note that the ``etag`` and ``size_bytes`` keys are optional; if omitted, the
verification is not performed. If any of the objects fail to verify (not
found, size/etag mismatch, below minimum size, invalid range) then the user
will receive a 4xx error response. If everything does match, the user will
receive a 2xx response and the SLO object is ready for downloading.
Behind the scenes, on success, a json manifest generated from the user input is
sent to object servers with an extra "X-Static-Large-Object: True" header
and a modified Content-Type. The items in this manifest will include the etag
and size_bytes for each segment, regardless of whether the client specified
them for verification. The parameter: swift_bytes=$total_size will be
appended to the existing Content-Type, where total_size is the sum of all
the included segments' size_bytes. This extra parameter will be hidden from
the user.
Behind the scenes, on success, a JSON manifest generated from the user input is
sent to object servers with an extra ``X-Static-Large-Object: True`` header
and a modified ``Content-Type``. The items in this manifest will include the
``etag`` and ``size_bytes`` for each segment, regardless of whether the client
specified them for verification. The parameter ``swift_bytes=$total_size`` will
be appended to the existing ``Content-Type``, where ``$total_size`` is the sum
of all the included segments' ``size_bytes``. This extra parameter will be
hidden from the user.
Manifest files can reference objects in separate containers, which will improve
concurrent upload speed. Objects can be referenced by multiple manifests. The
segments of a SLO manifest can even be other SLO manifests. Treat them as any
other object i.e., use the Etag and Content-Length given on the PUT of the
sub-SLO in the manifest to the parent SLO.
other object i.e., use the ``Etag`` and ``Content-Length`` given on the ``PUT``
of the sub-SLO in the manifest to the parent SLO.
While uploading a manifest, a user can send Etag for verification. It needs to
be md5 of the segments' etags, if there is no range specified. For example, if
the manifest to be uploaded looks like this:
.. code::
While uploading a manifest, a user can send ``Etag`` for verification. It needs
to be md5 of the segments' etags, if there is no range specified. For example,
if the manifest to be uploaded looks like this::
[{"path": "/cont/object1",
"etag": "etagoftheobjectsegment1",
@@ -101,16 +98,12 @@ the manifest to be uploaded looks like this:
"etag": "etagoftheobjectsegment2",
"size_bytes": 10485760}]
The Etag of the above manifest would be md5 of etagoftheobjectsegment1 and
etagoftheobjectsegment2. This could be computed in the following way:
.. code::
The Etag of the above manifest would be md5 of ``etagoftheobjectsegment1`` and
``etagoftheobjectsegment2``. This could be computed in the following way::
echo -n 'etagoftheobjectsegment1etagoftheobjectsegment2' | md5sum
If a manifest to be uploaded with a segment range looks like this:
.. code::
If a manifest to be uploaded with a segment range looks like this::
[{"path": "/cont/object1",
"etag": "etagoftheobjectsegmentone",
@@ -122,10 +115,8 @@ If a manifest to be uploaded with a segment range looks like this:
"range": "3-4"}]
While computing the Etag of the above manifest, internally each segment's etag
will be taken in the form of 'etagvalue:rangevalue;'. Hence the Etag of the
above manifest would be:
.. code::
will be taken in the form of ``etagvalue:rangevalue;``. Hence the Etag of the
above manifest would be::
echo -n 'etagoftheobjectsegmentone:1-2;etagoftheobjectsegmenttwo:3-4;' \
| md5sum
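The same two computations in Python, using only ``hashlib``; the etag strings are the placeholders from the examples above, so the digests themselves are not meaningful values.

```python
import hashlib

# Manifest without ranges: md5 over the concatenated segment etags.
etag_plain = hashlib.md5(
    b'etagoftheobjectsegment1' + b'etagoftheobjectsegment2').hexdigest()

# Manifest with ranges: each segment contributes 'etag:range;'.
etag_ranged = hashlib.md5(
    b'etagoftheobjectsegmentone:1-2;'
    b'etagoftheobjectsegmenttwo:3-4;').hexdigest()

print(etag_plain)
print(etag_ranged)
```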
@@ -136,65 +127,65 @@ Range Specification
-------------------
Users now have the ability to specify ranges for SLO segments.
Users can now include an optional 'range' field in segment descriptions
Users can now include an optional ``range`` field in segment descriptions
to specify which bytes from the underlying object should be used for the
segment data. Only one range may be specified per segment.
.. note::
.. note::
The 'etag' and 'size_bytes' fields still describe the backing object as a
whole.
The ``etag`` and ``size_bytes`` fields still describe the backing object
as a whole.
If a user uploads this manifest:
If a user uploads this manifest::
.. code::
[{"path": "/con/obj_seg_1", "size_bytes": 2097152, "range": "0-1048576"},
{"path": "/con/obj_seg_2", "size_bytes": 2097152,
"range": "512-1550000"},
{"path": "/con/obj_seg_1", "size_bytes": 2097152, "range": "-2048"}]
[{"path": "/con/obj_seg_1", "size_bytes": 2097152, "range": "0-1048576"},
{"path": "/con/obj_seg_2", "size_bytes": 2097152,
"range": "512-1550000"},
{"path": "/con/obj_seg_1", "size_bytes": 2097152, "range": "-2048"}]
The resulting SLO will consist of bytes 0 through 1048576 (inclusive) of
/con/obj_seg_1, followed by bytes 512 through 1550000 (inclusive) of
/con/obj_seg_2, and finally bytes 2095104 through 2097151 (i.e., the last
2048 bytes) of /con/obj_seg_1.
.. note::
.. note::
The minimum sized range is 1 byte. This is the same as the minimum
segment size.
The minimum sized range is 1 byte. This is the same as the minimum
segment size.
-------------------------
Retrieving a Large Object
-------------------------
A GET request to the manifest object will return the concatenation of the
A ``GET`` request to the manifest object will return the concatenation of the
objects from the manifest much like DLO. If any of the segments from the
manifest are not found or their Etag/Content Length have changed since upload,
the connection will drop. In this case a 409 Conflict will be logged in the
proxy logs and the user will receive incomplete results. Note that this will be
enforced regardless of whether the user performed per-segment validation during
upload.
manifest are not found or their ``Etag``/``Content-Length`` have changed since
upload, the connection will drop. In this case a ``409 Conflict`` will be
logged in the proxy logs and the user will receive incomplete results. Note
that this will be enforced regardless of whether the user performed per-segment
validation during upload.
The headers from this GET or HEAD request will return the metadata attached
to the manifest object itself with some exceptions::
The headers from this ``GET`` or ``HEAD`` request will return the metadata
attached to the manifest object itself with some exceptions:
Content-Length: the total size of the SLO (the sum of the sizes of
the segments in the manifest)
X-Static-Large-Object: True
Etag: the etag of the SLO (generated the same way as DLO)
===================== ==================================================
Header Value
===================== ==================================================
Content-Length the total size of the SLO (the sum of the sizes of
the segments in the manifest)
X-Static-Large-Object the string "True"
Etag the etag of the SLO (generated the same way as DLO)
===================== ==================================================
A GET request with the query parameter::
A ``GET`` request with the query parameter::
?multipart-manifest=get
will return a transformed version of the original manifest, containing
additional fields and different key names. For example, the first manifest in
the example above would look like this:
.. code::
the example above would look like this::
[{"name": "/cont/object",
"hash": "etagoftheobjectsegment",
@@ -222,9 +213,10 @@ left to the user to use caution in handling the segments.
Deleting a Large Object
-----------------------
A DELETE request will just delete the manifest object itself.
A ``DELETE`` request will just delete the manifest object itself. The segment
data referenced by the manifest will remain unchanged.
A DELETE with a query parameter::
A ``DELETE`` with a query parameter::
?multipart-manifest=delete
@@ -235,22 +227,22 @@ itself. The failure response will be similar to the bulk delete middleware.
Modifying a Large Object
------------------------
PUTs / POSTs will work as expected, PUTs will just overwrite the manifest
object for example.
``PUT`` and ``POST`` requests will work as expected; ``PUT``\s will just
overwrite the manifest object for example.
------------------
Container Listings
------------------
In a container listing the size listed for SLO manifest objects will be the
total_size of the concatenated segments in the manifest. The overall
X-Container-Bytes-Used for the container (and subsequently for the account)
will not reflect total_size of the manifest but the actual size of the json
``total_size`` of the concatenated segments in the manifest. The overall
``X-Container-Bytes-Used`` for the container (and subsequently for the account)
will not reflect ``total_size`` of the manifest but the actual size of the JSON
data stored. The reason for this somewhat confusing discrepancy is we want the
container listing to reflect the size of the manifest object when it is
downloaded. We do not, however, want to count the bytes-used twice (for both
the manifest and the segments it's referring to) in the container and account
metadata which can be used for stats purposes.
metadata which can be used for stats and billing purposes.
"""
from collections import defaultdict
@@ -296,20 +288,20 @@ def parse_and_validate_input(req_body, req_path):
Given a request body, parses it and returns a list of dictionaries.
The output structure is nearly the same as the input structure, but it
is not an exact copy. Given a valid input dictionary `d_in`, its
corresponding output dictionary `d_out` will be as follows:
is not an exact copy. Given a valid input dictionary ``d_in``, its
corresponding output dictionary ``d_out`` will be as follows:
* d_out['etag'] == d_in['etag']
* d_out['etag'] == d_in['etag']
* d_out['path'] == d_in['path']
* d_out['path'] == d_in['path']
* d_in['size_bytes'] can be a string ("12") or an integer (12), but
d_out['size_bytes'] is an integer.
* d_in['size_bytes'] can be a string ("12") or an integer (12), but
d_out['size_bytes'] is an integer.
* (optional) d_in['range'] is a string of the form "M-N", "M-", or
"-N", where M and N are non-negative integers. d_out['range'] is the
corresponding swob.Range object. If d_in does not have a key
'range', neither will d_out.
* (optional) d_in['range'] is a string of the form "M-N", "M-", or
"-N", where M and N are non-negative integers. d_out['range'] is the
corresponding swob.Range object. If d_in does not have a key
'range', neither will d_out.
:raises HTTPException: on parse errors or semantic errors (e.g. bogus
JSON structure, syntactically invalid ranges)
@@ -435,7 +427,7 @@ class SloGetContext(WSGIContext):
agent='%(orig)s SLO MultipartGET', swift_source='SLO')
sub_resp = sub_req.get_response(self.slo.app)
if not is_success(sub_resp.status_int):
if not sub_resp.is_success:
close_if_possible(sub_resp.app_iter)
raise ListingIterError(
'ERROR: while fetching %s, GET of submanifest %s '
@@ -615,8 +607,9 @@ class SloGetContext(WSGIContext):
thing with them. Returns an iterator suitable for sending up the WSGI
chain.
:param req: swob.Request object; is a GET or HEAD request aimed at
what may be a static large object manifest (or may not).
:param req: :class:`~swift.common.swob.Request` object; is a ``GET`` or
``HEAD`` request aimed at what may (or may not) be a static
large object manifest.
:param start_response: WSGI start_response callable
"""
if req.params.get('multipart-manifest') != 'get':
@@ -898,7 +891,9 @@ class StaticLargeObject(object):
The response body (only on GET, of course) will consist of the
concatenation of the segments.
:params req: a swob.Request with a path referencing an object
:param req: a :class:`~swift.common.swob.Request` with a path
referencing an object
:param start_response: WSGI start_response callable
:raises HttpException: on errors
"""
return SloGetContext(self).handle_slo_get_or_head(req, start_response)
@@ -910,13 +905,11 @@ class StaticLargeObject(object):
save a manifest generated from the user input. Uses WSGIContext to
call self and start_response and returns a WSGI iterator.
:params req: a swob.Request with an obj in path
:param req: a :class:`~swift.common.swob.Request` with an obj in path
:param start_response: WSGI start_response callable
:raises HttpException: on errors
"""
try:
vrs, account, container, obj = req.split_path(1, 4, True)
except ValueError:
return self.app(req.environ, start_response)
vrs, account, container, obj = req.split_path(4, rest_with_last=True)
if req.content_length > self.max_manifest_size:
raise HTTPRequestEntityTooLarge(
"Manifest File > %d bytes" % self.max_manifest_size)
@@ -1073,7 +1066,8 @@ class StaticLargeObject(object):
A generator function to be used to delete all the segments and
sub-segments referenced in a manifest.
:params req: a swob.Request with an SLO manifest in path
:param req: a :class:`~swift.common.swob.Request` with an SLO manifest
in path
:raises HTTPPreconditionFailed: on invalid UTF8 in request path
:raises HTTPBadRequest: on too many buffered sub segments and
on invalid SLO manifest path
@@ -1109,8 +1103,12 @@ class StaticLargeObject(object):
def get_slo_segments(self, obj_name, req):
"""
Performs a swob.Request and returns the SLO manifest's segments.
Performs a :class:`~swift.common.swob.Request` and returns the SLO
manifest's segments.
:param obj_name: the name of the object being deleted,
as ``/container/object``
:param req: the base :class:`~swift.common.swob.Request`
:raises HTTPServerError: on unable to load obj_name or
on unable to load the SLO manifest data.
:raises HTTPBadRequest: on not an SLO manifest
@@ -1151,7 +1149,7 @@ class StaticLargeObject(object):
Will delete all the segments in the SLO manifest and then, if
successful, will delete the manifest file.
:params req: a swob.Request with an obj in path
:param req: a :class:`~swift.common.swob.Request` with an obj in path
:returns: swob.Response whose app_iter set to Bulk.handle_delete_iter
"""
req.headers['Content-Type'] = None # Ignore content-type from client


@@ -260,7 +260,7 @@ class _StaticWebContext(WSGIContext):
env, 'GET', '/%s/%s/%s' % (
self.version, self.account, self.container),
self.agent, swift_source='SW')
tmp_env['QUERY_STRING'] = 'delimiter=/&format=json'
tmp_env['QUERY_STRING'] = 'delimiter=/'
if prefix:
tmp_env['QUERY_STRING'] += '&prefix=%s' % quote(prefix)
else:
@@ -465,8 +465,8 @@ class _StaticWebContext(WSGIContext):
env, 'GET', '/%s/%s/%s' % (
self.version, self.account, self.container),
self.agent, swift_source='SW')
tmp_env['QUERY_STRING'] = 'limit=1&format=json&delimiter' \
'=/&limit=1&prefix=%s' % quote(self.obj + '/')
tmp_env['QUERY_STRING'] = 'limit=1&delimiter=/&prefix=%s' % (
quote(self.obj + '/'), )
resp = self._app_call(tmp_env)
body = ''.join(resp)
if not is_success(self._get_status_int()) or not body or \


@@ -177,8 +177,6 @@ from __future__ import print_function
from time import time
from traceback import format_exc
from uuid import uuid4
from hashlib import sha1
import hmac
import base64
from eventlet import Timeout
@@ -437,20 +435,21 @@ class TempAuth(object):
s3_auth_details = env.get('swift3.auth_details')
if s3_auth_details:
if 'check_signature' not in s3_auth_details:
self.logger.warning(
'Swift3 did not provide a check_signature function; '
'upgrade Swift3 if you want to use it with tempauth')
return None
account_user = s3_auth_details['access_key']
signature_from_user = s3_auth_details['signature']
if account_user not in self.users:
return None
account, user = account_user.split(':', 1)
account_id = self.users[account_user]['url'].rsplit('/', 1)[-1]
path = env['PATH_INFO']
env['PATH_INFO'] = path.replace(account_user, account_id, 1)
valid_signature = base64.encodestring(hmac.new(
self.users[account_user]['key'],
s3_auth_details['string_to_sign'],
sha1).digest()).strip()
if signature_from_user != valid_signature:
user = self.users[account_user]
account = account_user.split(':', 1)[0]
account_id = user['url'].rsplit('/', 1)[-1]
if not s3_auth_details['check_signature'](user['key']):
return None
env['PATH_INFO'] = env['PATH_INFO'].replace(
account_user, account_id, 1)
groups = self._get_user_groups(account, account_user, account_id)
return groups
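The change above replaces tempauth's own HMAC-SHA1 computation with a ``check_signature`` callable supplied by swift3. Conceptually, that callable closes over the request's string-to-sign and the client-provided signature; a sketch with hypothetical names (the real swift3 implementation may differ):

```python
import base64
import hmac
from hashlib import sha1

def make_check_signature(string_to_sign, signature_from_user):
    # swift3 hands tempauth a closure shaped like this, so tempauth no
    # longer needs to know how the string-to-sign was assembled.
    def check_signature(secret_key):
        expected = base64.b64encode(
            hmac.new(secret_key, string_to_sign, sha1).digest()).strip()
        return hmac.compare_digest(expected, signature_from_user)
    return check_signature

sig = base64.b64encode(
    hmac.new(b'secret', b'string-to-sign', sha1).digest()).strip()
check = make_check_signature(b'string-to-sign', sig)
print(check(b'secret'), check(b'wrong'))  # True False
```

Tempauth then only needs to look up the user's key and call the closure, which is exactly what the new hunk does.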


@@ -329,8 +329,7 @@ class VersionedWritesContext(WSGIContext):
env, method='GET', swift_source='VW',
path='/v1/%s/%s' % (account_name, lcontainer))
lreq.environ['QUERY_STRING'] = \
'format=json&prefix=%s&marker=%s' % (
quote(lprefix), quote(marker))
'prefix=%s&marker=%s' % (quote(lprefix), quote(marker))
if end_marker:
lreq.environ['QUERY_STRING'] += '&end_marker=%s' % (
quote(end_marker))
@@ -826,8 +825,7 @@ class VersionedWritesMiddleware(object):
allow_versioned_writes)
except HTTPException as error_response:
return error_response(env, start_response)
elif (obj and req.method in ('PUT', 'DELETE') and
not req.environ.get('swift.post_as_copy')):
elif (obj and req.method in ('PUT', 'DELETE')):
try:
return self.object_request(
req, api_version, account, container, obj,


@@ -31,10 +31,9 @@ from swift.common.header_key_dict import HeaderKeyDict
from swift import gettext_ as _
from swift.common.storage_policy import POLICIES
from swift.common.constraints import FORMAT2CONTENT_TYPE
from swift.common.exceptions import ListingIterError, SegmentError
from swift.common.http import is_success
from swift.common.swob import HTTPBadRequest, HTTPNotAcceptable, \
from swift.common.swob import HTTPBadRequest, \
HTTPServiceUnavailable, Range, is_chunked, multi_range_iterator
from swift.common.utils import split_path, validate_device_partition, \
close_if_possible, maybe_multipart_byteranges_to_document_iters, \
@ -70,28 +69,6 @@ def get_param(req, name, default=None):
return value
def get_listing_content_type(req):
"""
Determine the content type to use for an account or container listing
response.
:param req: request object
:returns: content type as a string (e.g. text/plain, application/json)
:raises HTTPNotAcceptable: if the requested content type is not acceptable
:raises HTTPBadRequest: if the 'format' query param is provided and
not valid UTF-8
"""
query_format = get_param(req, 'format')
if query_format:
req.accept = FORMAT2CONTENT_TYPE.get(
query_format.lower(), FORMAT2CONTENT_TYPE['plain'])
out_content_type = req.accept.best_match(
['text/plain', 'application/json', 'application/xml', 'text/xml'])
if not out_content_type:
raise HTTPNotAcceptable(request=req)
return out_content_type
def get_name_and_placement(request, minsegs=1, maxsegs=None,
rest_with_last=False):
"""

View File

@ -525,7 +525,9 @@ class RingBuilder(object):
# we'll gather a few times, or until we achieve the plan
for gather_count in range(MAX_BALANCE_GATHER_COUNT):
self._gather_parts_for_balance(assign_parts, replica_plan)
self._gather_parts_for_balance(assign_parts, replica_plan,
# first attempt go for disperse
gather_count == 0)
if not assign_parts:
# most likely min part hours
finish_status = 'Unable to finish'
@ -1097,6 +1099,12 @@ class RingBuilder(object):
:param start: offset into self.parts to begin search
:param replica_plan: replicanth targets for tiers
"""
tier2children = self._build_tier2children()
parts_wanted_in_tier = defaultdict(int)
for dev in self._iter_devs():
wanted = max(dev['parts_wanted'], 0)
for tier in dev['tiers']:
parts_wanted_in_tier[tier] += wanted
# Last, we gather partitions from devices that are "overweight" because
# they have more partitions than their parts_wanted.
for offset in range(self.parts):
@ -1128,8 +1136,17 @@ class RingBuilder(object):
replicas_at_tier[tier] <
replica_plan[tier]['max']
for tier in dev['tiers']):
# we're stuck by replica plan
continue
for t in reversed(dev['tiers']):
if replicas_at_tier[t] - 1 < replica_plan[t]['min']:
# we're stuck at tier t
break
if sum(parts_wanted_in_tier[c]
for c in tier2children[t]
if c not in dev['tiers']) <= 0:
# we're stuck by weight
continue
# this is the most overweight device holding a replica
# of this part that can shed it according to the plan
dev['parts_wanted'] += 1
@ -1141,15 +1158,19 @@ class RingBuilder(object):
self._replica2part2dev[replica][part] = NONE_DEV
for tier in dev['tiers']:
replicas_at_tier[tier] -= 1
parts_wanted_in_tier[tier] -= 1
self._set_part_moved(part)
break
def _gather_parts_for_balance(self, assign_parts, replica_plan):
def _gather_parts_for_balance(self, assign_parts, replica_plan,
disperse_first):
"""
Gather parts that look like they should move for balance reasons.
A simple gather of parts that look dispersible normally works out;
we'll switch strategies if things don't seem to move.
:param disperse_first: boolean, avoid replicas on overweight devices
that need to be there for dispersion
"""
# pick a random starting point on the other side of the ring
quarter_turn = (self.parts // 4)
@ -1162,10 +1183,10 @@ class RingBuilder(object):
'last_start': self._last_part_gather_start})
self._last_part_gather_start = start
self._gather_parts_for_balance_can_disperse(
assign_parts, start, replica_plan)
if not assign_parts:
self._gather_parts_for_balance_forced(assign_parts, start)
if disperse_first:
self._gather_parts_for_balance_can_disperse(
assign_parts, start, replica_plan)
self._gather_parts_for_balance_forced(assign_parts, start)
def _gather_parts_for_balance_forced(self, assign_parts, start, **kwargs):
"""

View File

@ -52,6 +52,7 @@ import datetime
import eventlet
import eventlet.debug
import eventlet.greenthread
import eventlet.patcher
import eventlet.semaphore
from eventlet import GreenPool, sleep, Timeout, tpool
from eventlet.green import socket, threading
@ -77,12 +78,6 @@ from swift.common.http import is_success, is_redirection, HTTP_NOT_FOUND, \
from swift.common.header_key_dict import HeaderKeyDict
from swift.common.linkat import linkat
if six.PY3:
stdlib_queue = eventlet.patcher.original('queue')
else:
stdlib_queue = eventlet.patcher.original('Queue')
stdlib_threading = eventlet.patcher.original('threading')
# logging doesn't import patched as cleanly as one would like
from logging.handlers import SysLogHandler
import logging
@ -470,6 +465,18 @@ def config_read_prefixed_options(conf, prefix_name, defaults):
return params
def eventlet_monkey_patch():
"""
Install the appropriate Eventlet monkey patches.
"""
# NOTE(sileht):
# monkey-patching thread is required by python-keystoneclient;
# monkey-patching select is required by oslo.messaging pika driver
# if thread is monkey-patched.
eventlet.patcher.monkey_patch(all=False, socket=True, select=True,
thread=True)
def noop_libc_function(*args):
return 0

View File

@ -412,12 +412,7 @@ def run_server(conf, logger, sock, global_conf=None):
wsgi.WRITE_TIMEOUT = int(conf.get('client_timeout') or 60)
eventlet.hubs.use_hub(get_hub())
# NOTE(sileht):
# monkey-patching thread is required by python-keystoneclient;
# monkey-patching select is required by oslo.messaging pika driver
# if thread is monkey-patched.
eventlet.patcher.monkey_patch(all=False, socket=True, select=True,
thread=True)
utils.eventlet_monkey_patch()
eventlet_debug = config_true_value(conf.get('eventlet_debug', 'no'))
eventlet.debug.hub_exceptions(eventlet_debug)
wsgi_logger = NullLogger()

View File

@ -19,7 +19,6 @@ import time
import traceback
import math
from swift import gettext_ as _
from xml.etree.cElementTree import Element, SubElement, tostring
from eventlet import Timeout
@ -29,17 +28,18 @@ from swift.container.backend import ContainerBroker, DATADIR
from swift.container.replicator import ContainerReplicatorRpc
from swift.common.db import DatabaseAlreadyExists
from swift.common.container_sync_realms import ContainerSyncRealms
from swift.common.request_helpers import get_param, get_listing_content_type, \
from swift.common.request_helpers import get_param, \
split_and_validate_path, is_sys_or_user_meta
from swift.common.utils import get_logger, hash_path, public, \
Timestamp, storage_directory, validate_sync_to, \
config_true_value, timing_stats, replication, \
override_bytes_from_content_type, get_log_line
from swift.common.constraints import check_mount, valid_timestamp, check_utf8
from swift.common.constraints import valid_timestamp, check_utf8, check_drive
from swift.common import constraints
from swift.common.bufferedhttp import http_connect
from swift.common.exceptions import ConnectionTimeout
from swift.common.http import HTTP_NOT_FOUND, is_success
from swift.common.middleware import listing_formats
from swift.common.storage_policy import POLICIES
from swift.common.base_storage_server import BaseStorageServer
from swift.common.header_key_dict import HeaderKeyDict
@ -111,6 +111,11 @@ class ContainerController(BaseStorageServer):
conf.get('auto_create_account_prefix') or '.'
if config_true_value(conf.get('allow_versions', 'f')):
self.save_headers.append('x-versions-location')
if 'allow_versions' in conf:
self.logger.warning('Option allow_versions is deprecated. '
'Configure the versioned_writes middleware in '
'the proxy-server instead. This option will '
'be ignored in a future release.')
swift.common.db.DB_PREALLOCATION = \
config_true_value(conf.get('db_preallocation', 'f'))
self.sync_store = ContainerSyncStore(self.root,
@ -263,7 +268,7 @@ class ContainerController(BaseStorageServer):
drive, part, account, container, obj = split_and_validate_path(
req, 4, 5, True)
req_timestamp = valid_timestamp(req)
if self.mount_check and not check_mount(self.root, drive):
if not check_drive(self.root, drive, self.mount_check):
return HTTPInsufficientStorage(drive=drive, request=req)
# policy index is only relevant for delete_obj (and transitively for
# auto create accounts)
@ -351,7 +356,7 @@ class ContainerController(BaseStorageServer):
self.realms_conf)
if err:
return HTTPBadRequest(err)
if self.mount_check and not check_mount(self.root, drive):
if not check_drive(self.root, drive, self.mount_check):
return HTTPInsufficientStorage(drive=drive, request=req)
requested_policy_index = self.get_and_validate_policy_index(req)
broker = self._get_container_broker(drive, part, account, container)
@ -418,8 +423,8 @@ class ContainerController(BaseStorageServer):
"""Handle HTTP HEAD request."""
drive, part, account, container, obj = split_and_validate_path(
req, 4, 5, True)
out_content_type = get_listing_content_type(req)
if self.mount_check and not check_mount(self.root, drive):
out_content_type = listing_formats.get_listing_content_type(req)
if not check_drive(self.root, drive, self.mount_check):
return HTTPInsufficientStorage(drive=drive, request=req)
broker = self._get_container_broker(drive, part, account, container,
pending_timeout=0.1,
@ -451,8 +456,8 @@ class ContainerController(BaseStorageServer):
"""
(name, created, size, content_type, etag) = record[:5]
if content_type is None:
return {'subdir': name}
response = {'bytes': size, 'hash': etag, 'name': name,
return {'subdir': name.decode('utf8')}
response = {'bytes': size, 'hash': etag, 'name': name.decode('utf8'),
'content_type': content_type}
response['last_modified'] = Timestamp(created).isoformat
override_bytes_from_content_type(response, logger=self.logger)
@ -482,8 +487,8 @@ class ContainerController(BaseStorageServer):
request=req,
body='Maximum limit is %d'
% constraints.CONTAINER_LISTING_LIMIT)
out_content_type = get_listing_content_type(req)
if self.mount_check and not check_mount(self.root, drive):
out_content_type = listing_formats.get_listing_content_type(req)
if not check_drive(self.root, drive, self.mount_check):
return HTTPInsufficientStorage(drive=drive, request=req)
broker = self._get_container_broker(drive, part, account, container,
pending_timeout=0.1,
@ -504,36 +509,20 @@ class ContainerController(BaseStorageServer):
if value and (key.lower() in self.save_headers or
is_sys_or_user_meta('container', key)):
resp_headers[key] = value
ret = Response(request=req, headers=resp_headers,
content_type=out_content_type, charset='utf-8')
if out_content_type == 'application/json':
ret.body = json.dumps([self.update_data_record(record)
for record in container_list])
elif out_content_type.endswith('/xml'):
doc = Element('container', name=container.decode('utf-8'))
for obj in container_list:
record = self.update_data_record(obj)
if 'subdir' in record:
name = record['subdir'].decode('utf-8')
sub = SubElement(doc, 'subdir', name=name)
SubElement(sub, 'name').text = name
else:
obj_element = SubElement(doc, 'object')
for field in ["name", "hash", "bytes", "content_type",
"last_modified"]:
SubElement(obj_element, field).text = str(
record.pop(field)).decode('utf-8')
for field in sorted(record):
SubElement(obj_element, field).text = str(
record[field]).decode('utf-8')
ret.body = tostring(doc, encoding='UTF-8').replace(
"<?xml version='1.0' encoding='UTF-8'?>",
'<?xml version="1.0" encoding="UTF-8"?>', 1)
listing = [self.update_data_record(record)
for record in container_list]
if out_content_type.endswith('/xml'):
body = listing_formats.container_to_xml(listing, container)
elif out_content_type.endswith('/json'):
body = json.dumps(listing)
else:
if not container_list:
return HTTPNoContent(request=req, headers=resp_headers)
ret.body = '\n'.join(rec[0] for rec in container_list) + '\n'
body = listing_formats.listing_to_text(listing)
ret = Response(request=req, headers=resp_headers, body=body,
content_type=out_content_type, charset='utf-8')
ret.last_modified = math.ceil(float(resp_headers['X-PUT-Timestamp']))
if not ret.body:
ret.status_int = 204
return ret
@public
@ -545,7 +534,7 @@ class ContainerController(BaseStorageServer):
"""
post_args = split_and_validate_path(req, 3)
drive, partition, hash = post_args
if self.mount_check and not check_mount(self.root, drive):
if not check_drive(self.root, drive, self.mount_check):
return HTTPInsufficientStorage(drive=drive, request=req)
try:
args = json.load(req.environ['wsgi.input'])
@ -567,7 +556,7 @@ class ContainerController(BaseStorageServer):
self.realms_conf)
if err:
return HTTPBadRequest(err)
if self.mount_check and not check_mount(self.root, drive):
if not check_drive(self.root, drive, self.mount_check):
return HTTPInsufficientStorage(drive=drive, request=req)
broker = self._get_container_broker(drive, part, account, container)
if broker.is_deleted():

View File

@ -77,6 +77,7 @@ pipeline = catch_errors proxy-logging cache proxy-server
[app:proxy-server]
use = egg:swift#proxy
account_autocreate = true
# See proxy-server.conf-sample for options
[filter:cache]

View File

@ -23,7 +23,7 @@ from swift import gettext_ as _
from random import random, shuffle
from tempfile import mkstemp
from eventlet import spawn, patcher, Timeout
from eventlet import spawn, Timeout
import swift.common.db
from swift.container.backend import ContainerBroker, DATADIR
@ -31,7 +31,8 @@ from swift.common.bufferedhttp import http_connect
from swift.common.exceptions import ConnectionTimeout
from swift.common.ring import Ring
from swift.common.utils import get_logger, config_true_value, ismount, \
dump_recon_cache, majority_size, Timestamp, ratelimit_sleep
dump_recon_cache, majority_size, Timestamp, ratelimit_sleep, \
eventlet_monkey_patch
from swift.common.daemon import Daemon
from swift.common.http import is_success, HTTP_INTERNAL_SERVER_ERROR
@ -155,8 +156,7 @@ class ContainerUpdater(Daemon):
pid2filename[pid] = tmpfilename
else:
signal.signal(signal.SIGTERM, signal.SIG_DFL)
patcher.monkey_patch(all=False, socket=True, select=True,
thread=True)
eventlet_monkey_patch()
self.no_changes = 0
self.successes = 0
self.failures = 0
@ -190,7 +190,7 @@ class ContainerUpdater(Daemon):
"""
Run the updater once.
"""
patcher.monkey_patch(all=False, socket=True, select=True, thread=True)
eventlet_monkey_patch()
self.logger.info(_('Begin container update single threaded sweep'))
begin = time.time()
self.no_changes = 0

View File

@ -57,7 +57,7 @@ from pyeclib.ec_iface import ECDriverError, ECInvalidFragmentMetadata, \
ECBadFragmentChecksum, ECInvalidParameter
from swift import gettext_ as _
from swift.common.constraints import check_mount, check_dir
from swift.common.constraints import check_drive
from swift.common.request_helpers import is_sys_meta
from swift.common.utils import mkdirs, Timestamp, \
storage_directory, hash_path, renamer, fallocate, fsync, fdatasync, \
@ -86,7 +86,8 @@ METADATA_KEY = 'user.swift.metadata'
DROP_CACHE_WINDOW = 1024 * 1024
# These are system-set metadata keys that cannot be changed with a POST.
# They should be lowercase.
DATAFILE_SYSTEM_META = set('content-length deleted etag'.split())
RESERVED_DATAFILE_META = {'content-length', 'deleted', 'etag'}
DATAFILE_SYSTEM_META = {'x-static-large-object'}
DATADIR_BASE = 'objects'
ASYNCDIR_BASE = 'async_pending'
TMP_BASE = 'tmp'
@ -1191,12 +1192,11 @@ class BaseDiskFileManager(object):
# we'll do some kind of check unless explicitly forbidden
if mount_check is not False:
if mount_check or self.mount_check:
check = check_mount
mount_check = True
else:
check = check_dir
if not check(self.devices, device):
return None
return os.path.join(self.devices, device)
mount_check = False
return check_drive(self.devices, device, mount_check)
return join(self.devices, device)
@contextmanager
def replication_lock(self, device):
@ -2416,7 +2416,8 @@ class BaseDiskFile(object):
self._merge_content_type_metadata(ctype_file)
sys_metadata = dict(
[(key, val) for key, val in self._datafile_metadata.items()
if key.lower() in DATAFILE_SYSTEM_META
if key.lower() in (RESERVED_DATAFILE_META |
DATAFILE_SYSTEM_META)
or is_sys_meta('object', key)])
self._metadata.update(self._metafile_metadata)
self._metadata.update(sys_metadata)

View File

@ -27,7 +27,7 @@ from swift.common.exceptions import DiskFileQuarantined, DiskFileNotExist, \
DiskFileCollision, DiskFileDeleted, DiskFileNotOpen
from swift.common.request_helpers import is_sys_meta
from swift.common.swob import multi_range_iterator
from swift.obj.diskfile import DATAFILE_SYSTEM_META
from swift.obj.diskfile import DATAFILE_SYSTEM_META, RESERVED_DATAFILE_META
class InMemoryFileSystem(object):
@ -433,7 +433,8 @@ class DiskFile(object):
# with the object data.
immutable_metadata = dict(
[(key, val) for key, val in cur_mdata.items()
if key.lower() in DATAFILE_SYSTEM_META
if key.lower() in (RESERVED_DATAFILE_META |
DATAFILE_SYSTEM_META)
or is_sys_meta('object', key)])
metadata.update(immutable_metadata)
metadata['name'] = self._name

View File

@ -1097,8 +1097,9 @@ class ObjectReconstructor(Daemon):
self.part_count += len(partitions)
for partition in partitions:
part_path = join(obj_path, partition)
if partition in ('auditor_status_ALL.json',
'auditor_status_ZBF.json'):
if (partition.startswith('auditor_status_') and
partition.endswith('.json')):
# ignore auditor status files
continue
if not partition.isdigit():
self.logger.warning(

View File

@ -55,7 +55,7 @@ from swift.common.swob import HTTPAccepted, HTTPBadRequest, HTTPCreated, \
HTTPClientDisconnect, HTTPMethodNotAllowed, Request, Response, \
HTTPInsufficientStorage, HTTPForbidden, HTTPException, HTTPConflict, \
HTTPServerError
from swift.obj.diskfile import DATAFILE_SYSTEM_META, DiskFileRouter
from swift.obj.diskfile import RESERVED_DATAFILE_META, DiskFileRouter
def iter_mime_headers_and_bodies(wsgi_input, mime_boundary, read_chunk_size):
@ -148,7 +148,7 @@ class ObjectController(BaseStorageServer):
]
self.allowed_headers = set()
for header in extra_allowed_headers:
if header not in DATAFILE_SYSTEM_META:
if header not in RESERVED_DATAFILE_META:
self.allowed_headers.add(header)
self.auto_create_account_prefix = \
conf.get('auto_create_account_prefix') or '.'
@ -526,11 +526,6 @@ class ObjectController(BaseStorageServer):
override = key.lower().replace(override_prefix, 'x-')
update_headers[override] = val
def _preserve_slo_manifest(self, update_metadata, orig_metadata):
if 'X-Static-Large-Object' in orig_metadata:
update_metadata['X-Static-Large-Object'] = \
orig_metadata['X-Static-Large-Object']
@public
@timing_stats()
def POST(self, request):
@ -573,7 +568,6 @@ class ObjectController(BaseStorageServer):
if req_timestamp > orig_timestamp:
metadata = {'X-Timestamp': req_timestamp.internal}
self._preserve_slo_manifest(metadata, orig_metadata)
metadata.update(val for val in request.headers.items()
if (is_user_meta('object', val[0]) or
is_object_transient_sysmeta(val[0])))

View File

@ -21,13 +21,14 @@ import time
from swift import gettext_ as _
from random import random
from eventlet import spawn, patcher, Timeout
from eventlet import spawn, Timeout
from swift.common.bufferedhttp import http_connect
from swift.common.exceptions import ConnectionTimeout
from swift.common.ring import Ring
from swift.common.utils import get_logger, renamer, write_pickle, \
dump_recon_cache, config_true_value, ismount, ratelimit_sleep
dump_recon_cache, config_true_value, ismount, ratelimit_sleep, \
eventlet_monkey_patch
from swift.common.daemon import Daemon
from swift.common.header_key_dict import HeaderKeyDict
from swift.common.storage_policy import split_policy_string, PolicyError
@ -106,8 +107,7 @@ class ObjectUpdater(Daemon):
pids.append(pid)
else:
signal.signal(signal.SIGTERM, signal.SIG_DFL)
patcher.monkey_patch(all=False, socket=True, select=True,
thread=True)
eventlet_monkey_patch()
self.successes = 0
self.failures = 0
forkbegin = time.time()

View File

@ -18,7 +18,6 @@ from six.moves.urllib.parse import unquote
from swift import gettext_ as _
from swift.account.utils import account_listing_response
from swift.common.request_helpers import get_listing_content_type
from swift.common.middleware.acl import parse_acl, format_acl
from swift.common.utils import public
from swift.common.constraints import check_metadata
@ -26,6 +25,7 @@ from swift.common import constraints
from swift.common.http import HTTP_NOT_FOUND, HTTP_GONE
from swift.proxy.controllers.base import Controller, clear_info_cache, \
set_info_cache
from swift.common.middleware import listing_formats
from swift.common.swob import HTTPBadRequest, HTTPMethodNotAllowed
from swift.common.request_helpers import get_sys_meta_prefix
@ -67,6 +67,9 @@ class AccountController(Controller):
concurrency = self.app.account_ring.replica_count \
if self.app.concurrent_gets else 1
node_iter = self.app.iter_nodes(self.app.account_ring, partition)
params = req.params
params['format'] = 'json'
req.params = params
resp = self.GETorHEAD_base(
req, _('Account'), node_iter, partition,
req.swift_entity_path.rstrip('/'), concurrency)
@ -86,8 +89,9 @@ class AccountController(Controller):
# creates the account if necessary. If we feed it a perfect
# lie, it'll just try to create the container without
# creating the account, and that'll fail.
resp = account_listing_response(self.account_name, req,
get_listing_content_type(req))
resp = account_listing_response(
self.account_name, req,
listing_formats.get_listing_content_type(req))
resp.headers['X-Backend-Fake-Account-Listing'] = 'yes'
# Cache this. We just made a request to a storage node and got

View File

@ -787,14 +787,16 @@ class ResumingGetter(object):
this request. This will change the Range header
so that the next req will start where it left off.
:raises ValueError: if invalid range header
:raises HTTPRequestedRangeNotSatisfiable: if begin + num_bytes
> end of range + 1
:raises RangeAlreadyComplete: if begin + num_bytes == end of range + 1
"""
if 'Range' in self.backend_headers:
req_range = Range(self.backend_headers['Range'])
try:
req_range = Range(self.backend_headers.get('Range'))
except ValueError:
req_range = None
if req_range:
begin, end = req_range.ranges[0]
if begin is None:
# this is a -50 range req (last 50 bytes of file)
@ -818,6 +820,9 @@ class ResumingGetter(object):
else:
self.backend_headers['Range'] = 'bytes=%d-' % num_bytes
# Reset so if we need to do this more than once, we don't double-up
self.bytes_used_from_backend = 0
def pop_range(self):
"""
Remove the first byterange from our Range header.

View File

@ -100,6 +100,9 @@ class ContainerController(Controller):
concurrency = self.app.container_ring.replica_count \
if self.app.concurrent_gets else 1
node_iter = self.app.iter_nodes(self.app.container_ring, part)
params = req.params
params['format'] = 'json'
req.params = params
resp = self.GETorHEAD_base(
req, _('Container'), node_iter, part,
req.swift_entity_path, concurrency)
@ -177,11 +180,11 @@ class ContainerController(Controller):
headers = self._backend_requests(req, len(containers),
account_partition, accounts,
policy_index)
clear_info_cache(self.app, req.environ,
self.account_name, self.container_name)
resp = self.make_requests(
req, self.app.container_ring,
container_partition, 'PUT', req.swift_entity_path, headers)
clear_info_cache(self.app, req.environ,
self.account_name, self.container_name)
return resp
@public

View File

@ -66,16 +66,19 @@ required_filters = [
'after_fn': lambda pipe: (['catch_errors']
if pipe.startswith('catch_errors')
else [])},
{'name': 'listing_formats', 'after_fn': lambda _junk: [
'catch_errors', 'gatekeeper', 'proxy_logging', 'memcache']},
# Put copy before dlo, slo and versioned_writes
{'name': 'copy', 'after_fn': lambda _junk: [
'staticweb', 'tempauth', 'keystoneauth',
'catch_errors', 'gatekeeper', 'proxy_logging']},
{'name': 'dlo', 'after_fn': lambda _junk: [
'copy', 'staticweb', 'tempauth', 'keystoneauth',
'catch_errors', 'gatekeeper', 'proxy_logging']},
{'name': 'versioned_writes', 'after_fn': lambda _junk: [
'slo', 'dlo', 'copy', 'staticweb', 'tempauth',
'keystoneauth', 'catch_errors', 'gatekeeper', 'proxy_logging']},
# Put copy before dlo, slo and versioned_writes
{'name': 'copy', 'after_fn': lambda _junk: [
'staticweb', 'tempauth', 'keystoneauth',
'catch_errors', 'gatekeeper', 'proxy_logging']}]
]
def _label_for_policy(policy):

View File

@ -505,15 +505,6 @@ def in_process_setup(the_object_server=object_server):
'password6': 'testing6'
})
# If an env var explicitly specifies the proxy-server object_post_as_copy
# option then use its value, otherwise leave default config unchanged.
object_post_as_copy = os.environ.get(
'SWIFT_TEST_IN_PROCESS_OBJECT_POST_AS_COPY')
if object_post_as_copy is not None:
object_post_as_copy = config_true_value(object_post_as_copy)
config['object_post_as_copy'] = str(object_post_as_copy)
_debug('Setting object_post_as_copy to %r' % object_post_as_copy)
acc1lis = listen_zero()
acc2lis = listen_zero()
con1lis = listen_zero()

View File

@ -122,8 +122,7 @@ class Connection(object):
self.username = config['username']
self.password = config['password']
self.storage_host = None
self.storage_port = None
self.storage_netloc = None
self.storage_url = None
self.conn_class = None
@ -134,9 +133,8 @@ class Connection(object):
def authenticate(self, clone_conn=None):
if clone_conn:
self.conn_class = clone_conn.conn_class
self.storage_host = clone_conn.storage_host
self.storage_netloc = clone_conn.storage_netloc
self.storage_url = clone_conn.storage_url
self.storage_port = clone_conn.storage_port
self.storage_token = clone_conn.storage_token
return
@ -162,26 +160,23 @@ class Connection(object):
if not (storage_url and storage_token):
raise AuthenticationFailed()
x = storage_url.split('/')
url = urllib.parse.urlparse(storage_url)
if x[0] == 'http:':
if url.scheme == 'http':
self.conn_class = http_client.HTTPConnection
self.storage_port = 80
elif x[0] == 'https:':
elif url.scheme == 'https':
self.conn_class = http_client.HTTPSConnection
self.storage_port = 443
else:
raise ValueError('unexpected protocol %s' % (x[0]))
raise ValueError('unexpected protocol %s' % (url.scheme))
self.storage_host = x[2].split(':')[0]
if ':' in x[2]:
self.storage_port = int(x[2].split(':')[1])
self.storage_netloc = url.netloc
# Make sure storage_url is a string and not unicode, since
# keystoneclient (called by swiftclient) returns them in
# unicode and this would cause trouble when doing a
# no_safe_quote query.
self.storage_url = str('/%s/%s' % (x[3], x[4]))
self.account_name = str(x[4])
x = url.path.split('/')
self.storage_url = str('/%s/%s' % (x[1], x[2]))
self.account_name = str(x[2])
self.auth_user = auth_user
# With v2 keystone, storage_token is unicode.
# We want it to be string otherwise this would cause
@ -206,8 +201,7 @@ class Connection(object):
return json.loads(self.response.read())
def http_connect(self):
self.connection = self.conn_class(self.storage_host,
port=self.storage_port)
self.connection = self.conn_class(self.storage_netloc)
# self.connection.set_debuglevel(3)
def make_path(self, path=None, cfg=None):
@ -335,8 +329,7 @@ class Connection(object):
for (x, y) in parms.items()]
path = '%s?%s' % (path, '&'.join(query_args))
self.connection = self.conn_class(self.storage_host,
port=self.storage_port)
self.connection = self.conn_class(self.storage_netloc)
# self.connection.set_debuglevel(3)
self.connection.putrequest('PUT', path)
for key, value in headers.items():

View File

@ -379,7 +379,7 @@ class TestObject(unittest2.TestCase):
'x_delete_after'),
'', {'X-Auth-Token': token,
'Content-Length': '0',
'X-Delete-After': '1'})
'X-Delete-After': '2'})
return check_response(conn)
resp = retry(put)
resp.read()
@ -400,7 +400,7 @@ class TestObject(unittest2.TestCase):
resp = retry(get)
resp.read()
count += 1
time.sleep(1)
time.sleep(0.5)
self.assertEqual(resp.status, 404)

View File

@ -2531,6 +2531,7 @@ class TestFile(Base):
self.assertEqual(1024, f_dict['bytes'])
self.assertEqual('text/foobar', f_dict['content_type'])
self.assertEqual(etag, f_dict['hash'])
put_last_modified = f_dict['last_modified']
# now POST updated content-type to each file
file_item = self.env.container.file(file_name)
@ -2555,6 +2556,7 @@ class TestFile(Base):
self.fail('Failed to find file %r in listing' % file_name)
self.assertEqual(1024, f_dict['bytes'])
self.assertEqual('image/foobarbaz', f_dict['content_type'])
self.assertLess(put_last_modified, f_dict['last_modified'])
self.assertEqual(etag, f_dict['hash'])

View File

@ -448,7 +448,7 @@ class ProbeTest(unittest.TestCase):
else:
os.system('sudo mount %s' % device)
def make_internal_client(self, object_post_as_copy=True):
def make_internal_client(self):
tempdir = mkdtemp()
try:
conf_path = os.path.join(tempdir, 'internal_client.conf')
@ -464,14 +464,13 @@ class ProbeTest(unittest.TestCase):
[filter:copy]
use = egg:swift#copy
object_post_as_copy = %s
[filter:cache]
use = egg:swift#memcache
[filter:catch_errors]
use = egg:swift#catch_errors
""" % object_post_as_copy
"""
with open(conf_path, 'w') as f:
f.write(dedent(conf_body))
return internal_client.InternalClient(conf_path, 'test', 1)

View File

@ -93,7 +93,7 @@ class TestContainerSync(ReplProbeTest):
return source['name'], dest['name']
def _test_sync(self, object_post_as_copy):
def test_sync(self):
source_container, dest_container = self._setup_synced_containers()
# upload to source
@ -111,12 +111,10 @@ class TestContainerSync(ReplProbeTest):
self.assertIn('x-object-meta-test', resp_headers)
self.assertEqual('put_value', resp_headers['x-object-meta-test'])
# update metadata with a POST, using an internal client so we can
# vary the object_post_as_copy setting - first use post-as-copy
# update metadata with a POST
post_headers = {'Content-Type': 'image/jpeg',
'X-Object-Meta-Test': 'post_value'}
int_client = self.make_internal_client(
object_post_as_copy=object_post_as_copy)
int_client = self.make_internal_client()
int_client.set_object_metadata(self.account, source_container,
object_name, post_headers)
# sanity checks...
@ -154,12 +152,6 @@ class TestContainerSync(ReplProbeTest):
self.url, self.token, dest_container, object_name)
self.assertEqual(404, cm.exception.http_status) # sanity check
def test_sync_with_post_as_copy(self):
self._test_sync(True)
def test_sync_with_fast_post(self):
self._test_sync(False)
def test_sync_slo_manifest(self):
# Verify that SLO manifests are sync'd even if their segments can not
# be found in the destination account at time of sync'ing.

View File

@ -237,8 +237,7 @@ class TestUpdateOverridesEC(ECProbeTest):
self.assertFalse(direct_client.direct_get_container(
cnodes[0], cpart, self.account, 'c1')[1])
# use internal client for POST so we can force fast-post mode
int_client = self.make_internal_client(object_post_as_copy=False)
int_client = self.make_internal_client()
int_client.set_object_metadata(
self.account, 'c1', 'o1', {'X-Object-Meta-Fruit': 'Tomato'})
self.assertEqual(
@ -296,8 +295,7 @@ class TestUpdateOverridesEC(ECProbeTest):
content_type='test/ctype')
meta = client.head_object(self.url, self.token, 'c1', 'o1')
# use internal client for POST so we can force fast-post mode
int_client = self.make_internal_client(object_post_as_copy=False)
int_client = self.make_internal_client()
int_client.set_object_metadata(
self.account, 'c1', 'o1', {'X-Object-Meta-Fruit': 'Tomato'})
self.assertEqual(

View File

@ -47,7 +47,7 @@ class Test(ReplProbeTest):
policy=self.policy)
self.container_brain = BrainSplitter(self.url, self.token,
self.container_name)
self.int_client = self.make_internal_client(object_post_as_copy=False)
self.int_client = self.make_internal_client()
def _get_object_info(self, account, container, obj, number):
obj_conf = self.configs['object-server']

View File

@ -709,49 +709,28 @@ def quiet_eventlet_exceptions():
eventlet_debug.hub_exceptions(orig_state)
class MockTrue(object):
@contextmanager
def mock_check_drive(isdir=False, ismount=False):
"""
Instances of MockTrue evaluate like True
Any attr accessed on an instance of MockTrue will return a MockTrue
instance. Any method called on an instance of MockTrue will return
a MockTrue instance.
All device/drive/mount checking should be done through the constraints
module; if we keep the mocking consistently within that module we can keep our
tests robust to further rework on that interface.
>>> thing = MockTrue()
>>> thing
True
>>> thing == True # True == True
True
>>> thing == False # True == False
False
>>> thing != True # True != True
False
>>> thing != False # True != False
True
>>> thing.attribute
True
>>> thing.method()
True
>>> thing.attribute.method()
True
>>> thing.method().attribute
True
Replace the constraint modules underlying os calls with mocks.
:param isdir: return value of constraints isdir calls, default False
:param ismount: return value of constraints ismount calls, default False
:returns: a dict of constraint module mocks
"""
def __getattribute__(self, *args, **kwargs):
return self
def __call__(self, *args, **kwargs):
return self
def __repr__(*args, **kwargs):
return repr(True)
def __eq__(self, other):
return other is True
def __ne__(self, other):
return other is not True
mock_base = 'swift.common.constraints.'
with mocklib.patch(mock_base + 'isdir') as mock_isdir, \
mocklib.patch(mock_base + 'utils.ismount') as mock_ismount:
mock_isdir.return_value = isdir
mock_ismount.return_value = ismount
yield {
'isdir': mock_isdir,
'ismount': mock_ismount,
}
@contextmanager

View File

@@ -36,7 +36,7 @@ from swift.account.server import AccountController
from swift.common.utils import (normalize_timestamp, replication, public,
mkdirs, storage_directory, Timestamp)
from swift.common.request_helpers import get_sys_meta_prefix
from test.unit import patch_policies, debug_logger
from test.unit import patch_policies, debug_logger, mock_check_drive
from swift.common.storage_policy import StoragePolicy, POLICIES
@@ -47,6 +47,7 @@ class TestAccountController(unittest.TestCase):
"""Set up for testing swift.account.server.AccountController"""
self.testdir_base = mkdtemp()
self.testdir = os.path.join(self.testdir_base, 'account_server')
mkdirs(os.path.join(self.testdir, 'sda1'))
self.controller = AccountController(
{'devices': self.testdir, 'mount_check': 'false'})
@@ -71,6 +72,50 @@ class TestAccountController(unittest.TestCase):
self.assertEqual(resp.headers['Server'],
(server_handler.server_type + '/' + swift_version))
def test_insufficient_storage_mount_check_true(self):
conf = {'devices': self.testdir, 'mount_check': 'true'}
account_controller = AccountController(conf)
self.assertTrue(account_controller.mount_check)
for method in account_controller.allowed_methods:
if method == 'OPTIONS':
continue
req = Request.blank('/sda1/p/a-or-suff', method=method,
headers={'x-timestamp': '1'})
with mock_check_drive() as mocks:
try:
resp = req.get_response(account_controller)
self.assertEqual(resp.status_int, 507)
mocks['ismount'].return_value = True
resp = req.get_response(account_controller)
self.assertNotEqual(resp.status_int, 507)
# feel free to rip out this last assertion...
expected = 2 if method == 'PUT' else 4
self.assertEqual(resp.status_int // 100, expected)
except AssertionError as e:
self.fail('%s for %s' % (e, method))
def test_insufficient_storage_mount_check_false(self):
conf = {'devices': self.testdir, 'mount_check': 'false'}
account_controller = AccountController(conf)
self.assertFalse(account_controller.mount_check)
for method in account_controller.allowed_methods:
if method == 'OPTIONS':
continue
req = Request.blank('/sda1/p/a-or-suff', method=method,
headers={'x-timestamp': '1'})
with mock_check_drive() as mocks:
try:
resp = req.get_response(account_controller)
self.assertEqual(resp.status_int, 507)
mocks['isdir'].return_value = True
resp = req.get_response(account_controller)
self.assertNotEqual(resp.status_int, 507)
# feel free to rip out this last assertion...
expected = 2 if method == 'PUT' else 4
self.assertEqual(resp.status_int // 100, expected)
except AssertionError as e:
self.fail('%s for %s' % (e, method))
def test_DELETE_not_found(self):
req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'DELETE',
'HTTP_X_TIMESTAMP': '0'})
@@ -147,29 +192,6 @@ class TestAccountController(unittest.TestCase):
resp = req.get_response(self.controller)
self.assertEqual(resp.status_int, 400)
def test_DELETE_insufficient_storage(self):
self.controller = AccountController({'devices': self.testdir})
req = Request.blank(
'/sda-null/p/a', environ={'REQUEST_METHOD': 'DELETE',
'HTTP_X_TIMESTAMP': '1'})
resp = req.get_response(self.controller)
self.assertEqual(resp.status_int, 507)
def test_REPLICATE_insufficient_storage(self):
conf = {'devices': self.testdir, 'mount_check': 'true'}
self.account_controller = AccountController(conf)
def fake_check_mount(*args, **kwargs):
return False
with mock.patch("swift.common.constraints.check_mount",
fake_check_mount):
req = Request.blank('/sda1/p/suff',
environ={'REQUEST_METHOD': 'REPLICATE'},
headers={})
resp = req.get_response(self.account_controller)
self.assertEqual(resp.status_int, 507)
def test_REPLICATE_rsync_then_merge_works(self):
def fake_rsync_then_merge(self, drive, db_file, args):
return HTTPNoContent()
@@ -331,13 +353,6 @@ class TestAccountController(unittest.TestCase):
resp = req.get_response(self.controller)
self.assertEqual(resp.status_int, 406)
def test_HEAD_insufficient_storage(self):
self.controller = AccountController({'devices': self.testdir})
req = Request.blank('/sda-null/p/a', environ={'REQUEST_METHOD': 'HEAD',
'HTTP_X_TIMESTAMP': '1'})
resp = req.get_response(self.controller)
self.assertEqual(resp.status_int, 507)
def test_HEAD_invalid_format(self):
format = '%D1%BD%8A9' # invalid UTF-8; should be %E1%BD%8A9 (E -> D)
req = Request.blank('/sda1/p/a?format=' + format,
@@ -569,13 +584,6 @@ class TestAccountController(unittest.TestCase):
resp = req.get_response(self.controller)
self.assertEqual(resp.status_int, 400)
def test_PUT_insufficient_storage(self):
self.controller = AccountController({'devices': self.testdir})
req = Request.blank('/sda-null/p/a', environ={'REQUEST_METHOD': 'PUT',
'HTTP_X_TIMESTAMP': '1'})
resp = req.get_response(self.controller)
self.assertEqual(resp.status_int, 507)
def test_POST_HEAD_metadata(self):
req = Request.blank(
'/sda1/p/a', environ={'REQUEST_METHOD': 'PUT'},
@@ -693,13 +701,6 @@ class TestAccountController(unittest.TestCase):
resp = req.get_response(self.controller)
self.assertEqual(resp.status_int, 400)
def test_POST_insufficient_storage(self):
self.controller = AccountController({'devices': self.testdir})
req = Request.blank('/sda-null/p/a', environ={'REQUEST_METHOD': 'POST',
'HTTP_X_TIMESTAMP': '1'})
resp = req.get_response(self.controller)
self.assertEqual(resp.status_int, 507)
def test_POST_after_DELETE_not_found(self):
req = Request.blank('/sda1/p/a', environ={'REQUEST_METHOD': 'PUT',
'HTTP_X_TIMESTAMP': '0'})
@@ -1502,13 +1503,6 @@ class TestAccountController(unittest.TestCase):
listing.append(node2.firstChild.nodeValue)
self.assertEqual(listing, ['sub.1.0', 'sub.1.1', 'sub.1.2'])
def test_GET_insufficient_storage(self):
self.controller = AccountController({'devices': self.testdir})
req = Request.blank('/sda-null/p/a', environ={'REQUEST_METHOD': 'GET',
'HTTP_X_TIMESTAMP': '1'})
resp = req.get_response(self.controller)
self.assertEqual(resp.status_int, 507)
def test_through_call(self):
inbuf = BytesIO()
errbuf = StringIO()

View File

@@ -0,0 +1,21 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may not
# use this file except in compliance with the License. You may obtain a copy
# of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import unittest
from swift.cli import dispersion_report
class TestDispersionReport(unittest.TestCase):
def test_placeholder(self):
self.assertTrue(callable(dispersion_report.main))

View File

@@ -804,6 +804,21 @@ class TestCommands(unittest.TestCase, RunSwiftRingBuilderMixin):
ring.rebalance()
self.assertTrue(ring.validate())
def test_set_weight_old_format_two_devices(self):
# Would block without the 'yes' argument
self.create_sample_ring()
argv = ["", self.tmpfile, "set_weight",
"d2", "3.14", "d1", "6.28", "--yes"]
self.assertSystemExit(EXIT_SUCCESS, ringbuilder.main, argv)
ring = RingBuilder.load(self.tmpfile)
# Check that weight was changed
self.assertEqual(ring.devs[2]['weight'], 3.14)
self.assertEqual(ring.devs[1]['weight'], 6.28)
# Check that other devices in ring are not affected
self.assertEqual(ring.devs[0]['weight'], 100)
self.assertEqual(ring.devs[3]['weight'], 100)
def test_set_weight_ipv4_old_format(self):
self.create_sample_ring()
# Test ipv4(old format)

View File

@@ -16,7 +16,6 @@ import base64
import json
import os
import unittest
from xml.dom import minidom
import mock
@@ -961,138 +960,6 @@ class TestDecrypterContainerRequests(unittest.TestCase):
self.assertIn("Cipher must be AES_CTR_256",
self.decrypter.logger.get_lines_for_level('error')[0])
def _assert_element(self, name, expected, element):
self.assertEqual(element.tagName, name)
self._assert_element_contains_dict(expected, element)
def _assert_element_contains_dict(self, expected, element):
for k, v in expected.items():
entry = element.getElementsByTagName(k)
self.assertIsNotNone(entry, 'Key %s not found' % k)
actual = entry[0].childNodes[0].nodeValue
self.assertEqual(v, actual,
"Expected %s but got %s for key %s"
% (v, actual, k))
def test_GET_container_xml(self):
content_type_1 = u'\uF10F\uD20D\uB30B\u9409'
content_type_2 = 'text/plain; param=foo'
pt_etag1 = 'c6e8196d7f0fff6444b90861fe8d609d'
pt_etag2 = 'ac0374ed4d43635f803c82469d0b5a10'
key = fetch_crypto_keys()['container']
fake_body = '''<?xml version="1.0" encoding="UTF-8"?>
<container name="testc">\
<subdir name="test-subdir"><name>test-subdir</name></subdir>\
<object><hash>\
''' + encrypt_and_append_meta(pt_etag1.encode('utf8'), key) + '''\
</hash><content_type>\
''' + content_type_1 + '''\
</content_type><name>testfile</name><bytes>16</bytes>\
<last_modified>2015-04-19T02:37:39.601660</last_modified></object>\
<object><hash>\
''' + encrypt_and_append_meta(pt_etag2.encode('utf8'), key) + '''\
</hash><content_type>\
''' + content_type_2 + '''\
</content_type><name>testfile2</name><bytes>24</bytes>\
<last_modified>2015-04-19T02:37:39.684740</last_modified></object>\
</container>'''
resp = self._make_cont_get_req(fake_body, 'xml')
self.assertEqual('200 OK', resp.status)
body = resp.body
self.assertEqual(len(body), int(resp.headers['Content-Length']))
tree = minidom.parseString(body)
containers = tree.getElementsByTagName('container')
self.assertEqual(1, len(containers))
self.assertEqual('testc',
containers[0].attributes.getNamedItem("name").value)
results = containers[0].childNodes
self.assertEqual(3, len(results))
self._assert_element('subdir', {"name": "test-subdir"}, results[0])
obj_dict_1 = {"bytes": "16",
"last_modified": "2015-04-19T02:37:39.601660",
"hash": pt_etag1,
"name": "testfile",
"content_type": content_type_1}
self._assert_element('object', obj_dict_1, results[1])
obj_dict_2 = {"bytes": "24",
"last_modified": "2015-04-19T02:37:39.684740",
"hash": pt_etag2,
"name": "testfile2",
"content_type": content_type_2}
self._assert_element('object', obj_dict_2, results[2])
def test_GET_container_xml_with_crypto_override(self):
content_type_1 = 'image/jpeg'
content_type_2 = 'text/plain; param=foo'
fake_body = '''<?xml version="1.0" encoding="UTF-8"?>
<container name="testc">\
<object><hash>c6e8196d7f0fff6444b90861fe8d609d</hash>\
<content_type>''' + content_type_1 + '''\
</content_type><name>testfile</name><bytes>16</bytes>\
<last_modified>2015-04-19T02:37:39.601660</last_modified></object>\
<object><hash>ac0374ed4d43635f803c82469d0b5a10</hash>\
<content_type>''' + content_type_2 + '''\
</content_type><name>testfile2</name><bytes>24</bytes>\
<last_modified>2015-04-19T02:37:39.684740</last_modified></object>\
</container>'''
resp = self._make_cont_get_req(fake_body, 'xml', override=True)
self.assertEqual('200 OK', resp.status)
body = resp.body
self.assertEqual(len(body), int(resp.headers['Content-Length']))
tree = minidom.parseString(body)
containers = tree.getElementsByTagName('container')
self.assertEqual(1, len(containers))
self.assertEqual('testc',
containers[0].attributes.getNamedItem("name").value)
objs = tree.getElementsByTagName('object')
self.assertEqual(2, len(objs))
obj_dict_1 = {"bytes": "16",
"last_modified": "2015-04-19T02:37:39.601660",
"hash": "c6e8196d7f0fff6444b90861fe8d609d",
"name": "testfile",
"content_type": content_type_1}
self._assert_element_contains_dict(obj_dict_1, objs[0])
obj_dict_2 = {"bytes": "24",
"last_modified": "2015-04-19T02:37:39.684740",
"hash": "ac0374ed4d43635f803c82469d0b5a10",
"name": "testfile2",
"content_type": content_type_2}
self._assert_element_contains_dict(obj_dict_2, objs[1])
def test_cont_get_xml_req_with_cipher_mismatch(self):
bad_crypto_meta = fake_get_crypto_meta()
bad_crypto_meta['cipher'] = 'unknown_cipher'
fake_body = '''<?xml version="1.0" encoding="UTF-8"?>
<container name="testc"><object>\
<hash>''' + encrypt_and_append_meta('c6e8196d7f0fff6444b90861fe8d609d',
fetch_crypto_keys()['container'],
crypto_meta=bad_crypto_meta) + '''\
</hash>\
<content_type>image/jpeg</content_type>\
<name>testfile</name><bytes>16</bytes>\
<last_modified>2015-04-19T02:37:39.601660</last_modified></object>\
</container>'''
resp = self._make_cont_get_req(fake_body, 'xml')
self.assertEqual('500 Internal Error', resp.status)
self.assertEqual('Error decrypting container listing', resp.body)
self.assertIn("Cipher must be AES_CTR_256",
self.decrypter.logger.get_lines_for_level('error')[0])
class TestModuleMethods(unittest.TestCase):
def test_purge_crypto_sysmeta_headers(self):

View File

@@ -618,14 +618,5 @@ class TestCryptoPipelineChanges(unittest.TestCase):
self._check_listing(self.crypto_app)
class TestCryptoPipelineChangesFastPost(TestCryptoPipelineChanges):
@classmethod
def setUpClass(cls):
# set proxy config to use fast post
extra_conf = {'object_post_as_copy': 'False'}
cls._test_context = setup_servers(extra_conf=extra_conf)
cls.proxy_app = cls._test_context["test_servers"][0]
if __name__ == '__main__':
unittest.main()

View File

@@ -14,7 +14,6 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import time
import mock
import shutil
import tempfile
@@ -93,9 +92,7 @@ class TestCopyConstraints(unittest.TestCase):
class TestServerSideCopyMiddleware(unittest.TestCase):
def setUp(self):
self.app = FakeSwift()
self.ssc = copy.filter_factory({
'object_post_as_copy': 'yes',
})(self.app)
self.ssc = copy.filter_factory({})(self.app)
self.ssc.logger = self.app.logger
def tearDown(self):
@@ -166,92 +163,6 @@ class TestServerSideCopyMiddleware(unittest.TestCase):
self.assertRequestEqual(req, self.authorized[0])
self.assertNotIn('swift.orig_req_method', req.environ)
def test_POST_as_COPY_simple(self):
self.app.register('GET', '/v1/a/c/o', swob.HTTPOk, {}, 'passed')
self.app.register('PUT', '/v1/a/c/o', swob.HTTPAccepted, {})
req = Request.blank('/v1/a/c/o', method='POST')
status, headers, body = self.call_ssc(req)
self.assertEqual(status, '202 Accepted')
self.assertEqual(len(self.authorized), 1)
self.assertRequestEqual(req, self.authorized[0])
# For basic test cases, assert orig_req_method behavior
self.assertEqual(req.environ['swift.orig_req_method'], 'POST')
def test_POST_as_COPY_201_return_202(self):
self.app.register('GET', '/v1/a/c/o', swob.HTTPOk, {}, 'passed')
self.app.register('PUT', '/v1/a/c/o', swob.HTTPCreated, {})
req = Request.blank('/v1/a/c/o', method='POST')
status, headers, body = self.call_ssc(req)
self.assertEqual(status, '202 Accepted')
self.assertEqual(len(self.authorized), 1)
self.assertRequestEqual(req, self.authorized[0])
def test_POST_delete_at(self):
self.app.register('GET', '/v1/a/c/o', swob.HTTPOk, {}, 'passed')
self.app.register('PUT', '/v1/a/c/o', swob.HTTPAccepted, {})
t = str(int(time.time() + 100))
req = Request.blank('/v1/a/c/o', method='POST',
headers={'Content-Type': 'foo/bar',
'X-Delete-At': t})
status, headers, body = self.call_ssc(req)
self.assertEqual(status, '202 Accepted')
calls = self.app.calls_with_headers
method, path, req_headers = calls[1]
self.assertEqual('PUT', method)
self.assertTrue('X-Delete-At' in req_headers)
self.assertEqual(req_headers['X-Delete-At'], str(t))
self.assertEqual(len(self.authorized), 1)
self.assertRequestEqual(req, self.authorized[0])
def test_POST_as_COPY_static_large_object(self):
self.app.register('GET', '/v1/a/c/o', swob.HTTPOk,
{'X-Static-Large-Object': True}, 'passed')
self.app.register('PUT', '/v1/a/c/o', swob.HTTPAccepted, {})
req = Request.blank('/v1/a/c/o', method='POST',
headers={})
status, headers, body = self.call_ssc(req)
self.assertEqual(status, '202 Accepted')
calls = self.app.calls_with_headers
method, path, req_headers = calls[1]
self.assertEqual('PUT', method)
self.assertNotIn('X-Static-Large-Object', req_headers)
self.assertEqual(len(self.authorized), 1)
self.assertRequestEqual(req, self.authorized[0])
def test_POST_as_COPY_dynamic_large_object_manifest(self):
self.app.register('GET', '/v1/a/c/o', swob.HTTPOk,
{'X-Object-Manifest': 'orig_manifest'}, 'passed')
self.app.register('PUT', '/v1/a/c/o', swob.HTTPCreated, {})
req = Request.blank('/v1/a/c/o', method='POST',
headers={'X-Object-Manifest': 'new_manifest'})
status, headers, body = self.call_ssc(req)
self.assertEqual(status, '202 Accepted')
calls = self.app.calls_with_headers
method, path, req_headers = calls[1]
self.assertEqual('PUT', method)
self.assertEqual('new_manifest', req_headers['x-object-manifest'])
self.assertEqual(len(self.authorized), 1)
self.assertRequestEqual(req, self.authorized[0])
def test_POST_as_COPY_dynamic_large_object_no_manifest(self):
self.app.register('GET', '/v1/a/c/o', swob.HTTPOk,
{'X-Object-Manifest': 'orig_manifest'}, 'passed')
self.app.register('PUT', '/v1/a/c/o', swob.HTTPCreated, {})
req = Request.blank('/v1/a/c/o', method='POST',
headers={})
status, headers, body = self.call_ssc(req)
self.assertEqual(status, '202 Accepted')
calls = self.app.calls_with_headers
method, path, req_headers = calls[1]
self.assertEqual('PUT', method)
self.assertNotIn('X-Object-Manifest', req_headers)
self.assertEqual(len(self.authorized), 1)
self.assertRequestEqual(req, self.authorized[0])
def test_basic_put_with_x_copy_from(self):
self.app.register('GET', '/v1/a/c/o', swob.HTTPOk, {}, 'passed')
self.app.register('PUT', '/v1/a/c/o2', swob.HTTPCreated, {})
@@ -1345,100 +1256,6 @@ class TestServerSideCopyMiddleware(unittest.TestCase):
req_headers.get('X-Object-Transient-Sysmeta-Test'))
self.assertEqual('Not Bar', req_headers.get('X-Foo'))
def _test_POST_source_headers(self, extra_post_headers):
# helper method to perform a POST with metadata headers that should
# always be sent to the destination
post_headers = {'X-Object-Meta-Test2': 'added',
'X-Object-Sysmeta-Test2': 'added',
'X-Object-Transient-Sysmeta-Test2': 'added'}
post_headers.update(extra_post_headers)
get_resp_headers = {
'X-Timestamp': '1234567890.12345',
'X-Backend-Timestamp': '1234567890.12345',
'Content-Type': 'text/original',
'Content-Encoding': 'gzip',
'Content-Disposition': 'attachment; filename=myfile',
'X-Object-Meta-Test': 'original',
'X-Object-Sysmeta-Test': 'original',
'X-Object-Transient-Sysmeta-Test': 'original',
'X-Foo': 'Bar'}
self.app.register(
'GET', '/v1/a/c/o', swob.HTTPOk, headers=get_resp_headers)
self.app.register('PUT', '/v1/a/c/o', swob.HTTPCreated, {})
req = Request.blank('/v1/a/c/o', method='POST', headers=post_headers)
status, headers, body = self.call_ssc(req)
self.assertEqual(status, '202 Accepted')
calls = self.app.calls_with_headers
self.assertEqual(2, len(calls))
method, path, req_headers = calls[1]
self.assertEqual('PUT', method)
# these headers should always be applied to the destination
self.assertEqual('added', req_headers.get('X-Object-Meta-Test2'))
self.assertEqual('added',
req_headers.get('X-Object-Transient-Sysmeta-Test2'))
# POSTed sysmeta should never be applied to the destination
self.assertNotIn('X-Object-Sysmeta-Test2', req_headers)
# existing sysmeta should always be preserved
self.assertEqual('original',
req_headers.get('X-Object-Sysmeta-Test'))
return req_headers
def test_POST_no_updates(self):
post_headers = {}
req_headers = self._test_POST_source_headers(post_headers)
self.assertEqual('text/original', req_headers.get('Content-Type'))
self.assertNotIn('X-Object-Meta-Test', req_headers)
self.assertNotIn('X-Object-Transient-Sysmeta-Test', req_headers)
self.assertNotIn('X-Timestamp', req_headers)
self.assertNotIn('X-Backend-Timestamp', req_headers)
self.assertNotIn('Content-Encoding', req_headers)
self.assertNotIn('Content-Disposition', req_headers)
self.assertNotIn('X-Foo', req_headers)
def test_POST_with_updates(self):
post_headers = {
'Content-Type': 'text/not_original',
'Content-Encoding': 'not_gzip',
'Content-Disposition': 'attachment; filename=notmyfile',
'X-Object-Meta-Test': 'not_original',
'X-Object-Sysmeta-Test': 'not_original',
'X-Object-Transient-Sysmeta-Test': 'not_original',
'X-Foo': 'Not Bar',
}
req_headers = self._test_POST_source_headers(post_headers)
self.assertEqual('text/not_original', req_headers.get('Content-Type'))
self.assertEqual('not_gzip', req_headers.get('Content-Encoding'))
self.assertEqual('attachment; filename=notmyfile',
req_headers.get('Content-Disposition'))
self.assertEqual('not_original', req_headers.get('X-Object-Meta-Test'))
self.assertEqual('not_original',
req_headers.get('X-Object-Transient-Sysmeta-Test'))
self.assertEqual('Not Bar', req_headers.get('X-Foo'))
def test_POST_x_fresh_metadata_with_updates(self):
# post-as-copy trumps x-fresh-metadata i.e. existing user metadata
# should not be copied, sysmeta is copied *and not updated with new*
post_headers = {
'X-Fresh-Metadata': 'true',
'Content-Type': 'text/not_original',
'Content-Encoding': 'not_gzip',
'Content-Disposition': 'attachment; filename=notmyfile',
'X-Object-Meta-Test': 'not_original',
'X-Object-Sysmeta-Test': 'not_original',
'X-Object-Transient-Sysmeta-Test': 'not_original',
'X-Foo': 'Not Bar',
}
req_headers = self._test_POST_source_headers(post_headers)
self.assertEqual('text/not_original', req_headers.get('Content-Type'))
self.assertEqual('not_gzip', req_headers.get('Content-Encoding'))
self.assertEqual('attachment; filename=notmyfile',
req_headers.get('Content-Disposition'))
self.assertEqual('not_original', req_headers.get('X-Object-Meta-Test'))
self.assertEqual('not_original',
req_headers.get('X-Object-Transient-Sysmeta-Test'))
self.assertEqual('Not Bar', req_headers.get('X-Foo'))
self.assertIn('X-Fresh-Metadata', req_headers)
def test_COPY_with_single_range(self):
# verify that source etag is not copied when copying a range
self.app.register('GET', '/v1/a/c/o', swob.HTTPOk,
@@ -1472,67 +1289,6 @@ class TestServerSideCopyConfiguration(unittest.TestCase):
def tearDown(self):
shutil.rmtree(self.tmpdir)
def test_post_as_copy_defaults_to_false(self):
ssc = copy.filter_factory({})("no app here")
self.assertEqual(ssc.object_post_as_copy, False)
def test_reading_proxy_conf_when_no_middleware_conf_present(self):
proxy_conf = dedent("""
[DEFAULT]
bind_ip = 10.4.5.6
[pipeline:main]
pipeline = catch_errors copy ye-olde-proxy-server
[filter:copy]
use = egg:swift#copy
[app:ye-olde-proxy-server]
use = egg:swift#proxy
object_post_as_copy = no
""")
conffile = tempfile.NamedTemporaryFile()
conffile.write(proxy_conf)
conffile.flush()
ssc = copy.filter_factory({
'__file__': conffile.name
})("no app here")
self.assertEqual(ssc.object_post_as_copy, False)
def test_middleware_conf_precedence(self):
proxy_conf = dedent("""
[DEFAULT]
bind_ip = 10.4.5.6
[pipeline:main]
pipeline = catch_errors copy ye-olde-proxy-server
[filter:copy]
use = egg:swift#copy
object_post_as_copy = no
[app:ye-olde-proxy-server]
use = egg:swift#proxy
object_post_as_copy = yes
""")
conffile = tempfile.NamedTemporaryFile()
conffile.write(proxy_conf)
conffile.flush()
with mock.patch('swift.common.middleware.copy.get_logger',
return_value=debug_logger('copy')):
ssc = copy.filter_factory({
'object_post_as_copy': 'no',
'__file__': conffile.name
})("no app here")
self.assertEqual(ssc.object_post_as_copy, False)
self.assertFalse(ssc.logger.get_lines_for_level('warning'))
def _test_post_as_copy_emits_warning(self, conf):
with mock.patch('swift.common.middleware.copy.get_logger',
return_value=debug_logger('copy')):
@@ -1585,9 +1341,7 @@ class TestServerSideCopyMiddlewareWithEC(unittest.TestCase):
self.app = PatchedObjControllerApp(
None, FakeMemcache(), account_ring=FakeRing(),
container_ring=FakeRing(), logger=self.logger)
self.ssc = copy.filter_factory({
'object_post_as_copy': 'yes',
})(self.app)
self.ssc = copy.filter_factory({})(self.app)
self.ssc.logger = self.app.logger
self.policy = POLICIES.default
self.app.container_info = dict(self.container_info)

View File

@@ -129,11 +129,11 @@ class DloTestCase(unittest.TestCase):
"last_modified": lm,
"content_type": "application/png"}]
self.app.register(
'GET', '/v1/AUTH_test/c?format=json',
'GET', '/v1/AUTH_test/c',
swob.HTTPOk, {'Content-Type': 'application/json; charset=utf-8'},
json.dumps(full_container_listing))
self.app.register(
'GET', '/v1/AUTH_test/c?format=json&prefix=seg',
'GET', '/v1/AUTH_test/c?prefix=seg',
swob.HTTPOk, {'Content-Type': 'application/json; charset=utf-8'},
json.dumps(segs))
@@ -148,11 +148,11 @@ class DloTestCase(unittest.TestCase):
'X-Object-Manifest': 'c/seg_'},
'manyseg')
self.app.register(
'GET', '/v1/AUTH_test/c?format=json&prefix=seg_',
'GET', '/v1/AUTH_test/c?prefix=seg_',
swob.HTTPOk, {'Content-Type': 'application/json; charset=utf-8'},
json.dumps(segs[:3]))
self.app.register(
'GET', '/v1/AUTH_test/c?format=json&prefix=seg_&marker=seg_03',
'GET', '/v1/AUTH_test/c?prefix=seg_&marker=seg_03',
swob.HTTPOk, {'Content-Type': 'application/json; charset=utf-8'},
json.dumps(segs[3:]))
@@ -163,7 +163,7 @@ class DloTestCase(unittest.TestCase):
'X-Object-Manifest': 'c/noseg_'},
'noseg')
self.app.register(
'GET', '/v1/AUTH_test/c?format=json&prefix=noseg_',
'GET', '/v1/AUTH_test/c?prefix=noseg_',
swob.HTTPOk, {'Content-Type': 'application/json; charset=utf-8'},
json.dumps([]))
@@ -278,7 +278,7 @@ class TestDloHeadManifest(DloTestCase):
self.assertEqual(
self.app.calls,
[('HEAD', '/v1/AUTH_test/mancon/manifest-no-segments'),
('GET', '/v1/AUTH_test/c?format=json&prefix=noseg_')])
('GET', '/v1/AUTH_test/c?prefix=noseg_')])
class TestDloGetManifest(DloTestCase):
@@ -444,7 +444,7 @@ class TestDloGetManifest(DloTestCase):
self.assertEqual(
self.app.calls,
[('GET', '/v1/AUTH_test/mancon/manifest-many-segments'),
('GET', '/v1/AUTH_test/c?format=json&prefix=seg_'),
('GET', '/v1/AUTH_test/c?prefix=seg_'),
('GET', '/v1/AUTH_test/c/seg_01?multipart-manifest=get'),
('GET', '/v1/AUTH_test/c/seg_02?multipart-manifest=get'),
('GET', '/v1/AUTH_test/c/seg_03?multipart-manifest=get')])
@@ -601,7 +601,7 @@ class TestDloGetManifest(DloTestCase):
def test_error_listing_container_first_listing_request(self):
self.app.register(
'GET', '/v1/AUTH_test/c?format=json&prefix=seg_',
'GET', '/v1/AUTH_test/c?prefix=seg_',
swob.HTTPNotFound, {}, None)
req = swob.Request.blank('/v1/AUTH_test/mancon/manifest-many-segments',
@@ -613,7 +613,7 @@ class TestDloGetManifest(DloTestCase):
def test_error_listing_container_second_listing_request(self):
self.app.register(
'GET', '/v1/AUTH_test/c?format=json&prefix=seg_&marker=seg_03',
'GET', '/v1/AUTH_test/c?prefix=seg_&marker=seg_03',
swob.HTTPNotFound, {}, None)
req = swob.Request.blank('/v1/AUTH_test/mancon/manifest-many-segments',
@@ -648,7 +648,7 @@ class TestDloGetManifest(DloTestCase):
swob.HTTPOk, {'Content-Length': '0', 'Etag': 'blah',
'X-Object-Manifest': 'c/quotetags'}, None)
self.app.register(
'GET', '/v1/AUTH_test/c?format=json&prefix=quotetags',
'GET', '/v1/AUTH_test/c?prefix=quotetags',
swob.HTTPOk, {'Content-Type': 'application/json; charset=utf-8'},
json.dumps([{"hash": "\"abc\"", "bytes": 5, "name": "quotetags1",
"last_modified": "2013-11-22T02:42:14.261620",
@@ -673,7 +673,7 @@ class TestDloGetManifest(DloTestCase):
segs = [{"hash": md5hex("AAAAA"), "bytes": 5, "name": u"é1"},
{"hash": md5hex("AAAAA"), "bytes": 5, "name": u"é2"}]
self.app.register(
'GET', '/v1/AUTH_test/c?format=json&prefix=%C3%A9',
'GET', '/v1/AUTH_test/c?prefix=%C3%A9',
swob.HTTPOk, {'Content-Type': 'application/json'},
json.dumps(segs))
@@ -745,7 +745,7 @@ class TestDloGetManifest(DloTestCase):
self.assertEqual(
self.app.calls,
[('GET', '/v1/AUTH_test/mancon/manifest'),
('GET', '/v1/AUTH_test/c?format=json&prefix=seg'),
('GET', '/v1/AUTH_test/c?prefix=seg'),
('GET', '/v1/AUTH_test/c/seg_01?multipart-manifest=get'),
('GET', '/v1/AUTH_test/c/seg_02?multipart-manifest=get'),
('GET', '/v1/AUTH_test/c/seg_03?multipart-manifest=get')])

View File

@@ -85,17 +85,17 @@ class TestDomainRemap(unittest.TestCase):
resp = self.app(req.environ, start_response)
self.assertEqual(resp, ['Bad domain in host header'])
def test_domain_remap_account_with_path_root(self):
def test_domain_remap_account_with_path_root_container(self):
req = Request.blank('/v1', environ={'REQUEST_METHOD': 'GET'},
headers={'Host': 'AUTH_a.example.com'})
resp = self.app(req.environ, start_response)
self.assertEqual(resp, ['/v1/AUTH_a/'])
self.assertEqual(resp, ['/v1/AUTH_a/v1'])
def test_domain_remap_account_container_with_path_root(self):
def test_domain_remap_account_container_with_path_root_obj(self):
req = Request.blank('/v1', environ={'REQUEST_METHOD': 'GET'},
headers={'Host': 'c.AUTH_a.example.com'})
resp = self.app(req.environ, start_response)
self.assertEqual(resp, ['/v1/AUTH_a/c/'])
self.assertEqual(resp, ['/v1/AUTH_a/c/v1'])
def test_domain_remap_account_container_with_path_obj_slash_v1(self):
# Include http://localhost because urlparse used in Request.__init__
@@ -111,7 +111,7 @@ class TestDomainRemap(unittest.TestCase):
environ={'REQUEST_METHOD': 'GET'},
headers={'Host': 'c.AUTH_a.example.com'})
resp = self.app(req.environ, start_response)
self.assertEqual(resp, ['/v1/AUTH_a/c//v1'])
self.assertEqual(resp, ['/v1/AUTH_a/c/v1//v1'])
def test_domain_remap_account_container_with_path_trailing_slash(self):
req = Request.blank('/obj/', environ={'REQUEST_METHOD': 'GET'},
@@ -129,7 +129,13 @@ class TestDomainRemap(unittest.TestCase):
req = Request.blank('/v1/obj', environ={'REQUEST_METHOD': 'GET'},
headers={'Host': 'c.AUTH_a.example.com'})
resp = self.app(req.environ, start_response)
self.assertEqual(resp, ['/v1/AUTH_a/c/obj'])
self.assertEqual(resp, ['/v1/AUTH_a/c/v1/obj'])
def test_domain_remap_with_path_root_and_path_no_slash(self):
req = Request.blank('/v1obj', environ={'REQUEST_METHOD': 'GET'},
headers={'Host': 'c.AUTH_a.example.com'})
resp = self.app(req.environ, start_response)
self.assertEqual(resp, ['/v1/AUTH_a/c/v1obj'])
def test_domain_remap_account_matching_ending_not_domain(self):
req = Request.blank('/dontchange', environ={'REQUEST_METHOD': 'GET'},
@@ -249,6 +255,58 @@ class TestDomainRemap(unittest.TestCase):
'http://cont.auth-uuid.example.com/test/')
class TestDomainRemapClientMangling(unittest.TestCase):
def setUp(self):
self.app = domain_remap.DomainRemapMiddleware(FakeApp(), {
'mangle_client_paths': True})
def test_domain_remap_account_with_path_root_container(self):
req = Request.blank('/v1', environ={'REQUEST_METHOD': 'GET'},
headers={'Host': 'AUTH_a.example.com'})
resp = self.app(req.environ, start_response)
self.assertEqual(resp, ['/v1/AUTH_a/'])
def test_domain_remap_account_container_with_path_root_obj(self):
req = Request.blank('/v1', environ={'REQUEST_METHOD': 'GET'},
headers={'Host': 'c.AUTH_a.example.com'})
resp = self.app(req.environ, start_response)
self.assertEqual(resp, ['/v1/AUTH_a/c/'])
def test_domain_remap_account_container_with_path_obj_slash_v1(self):
# Include http://localhost because the urlparse call in Request.__init__
# parses //v1 as http://v1
req = Request.blank('http://localhost//v1',
environ={'REQUEST_METHOD': 'GET'},
headers={'Host': 'c.AUTH_a.example.com'})
resp = self.app(req.environ, start_response)
self.assertEqual(resp, ['/v1/AUTH_a/c//v1'])
def test_domain_remap_account_container_with_root_path_obj_slash_v1(self):
req = Request.blank('/v1//v1',
environ={'REQUEST_METHOD': 'GET'},
headers={'Host': 'c.AUTH_a.example.com'})
resp = self.app(req.environ, start_response)
self.assertEqual(resp, ['/v1/AUTH_a/c//v1'])
def test_domain_remap_account_container_with_path_trailing_slash(self):
req = Request.blank('/obj/', environ={'REQUEST_METHOD': 'GET'},
headers={'Host': 'c.AUTH_a.example.com'})
resp = self.app(req.environ, start_response)
self.assertEqual(resp, ['/v1/AUTH_a/c/obj/'])
def test_domain_remap_account_container_with_path_root_and_path(self):
req = Request.blank('/v1/obj', environ={'REQUEST_METHOD': 'GET'},
headers={'Host': 'c.AUTH_a.example.com'})
resp = self.app(req.environ, start_response)
self.assertEqual(resp, ['/v1/AUTH_a/c/obj'])
def test_domain_remap_with_path_root_and_path_no_slash(self):
req = Request.blank('/v1obj', environ={'REQUEST_METHOD': 'GET'},
headers={'Host': 'c.AUTH_a.example.com'})
resp = self.app(req.environ, start_response)
self.assertEqual(resp, ['/v1/AUTH_a/c/v1obj'])
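The `mangle_client_paths` cases above can be summarized with a small sketch of the remapping rule. This is an illustrative simplification, not the real `DomainRemapMiddleware` (which also handles reseller prefixes, default reseller prefixes, and configurable storage domains); the function name and defaults here are assumptions chosen to mirror the test fixtures:

```python
def remap(host, path, storage_domain='example.com', path_root='/v1',
          mangle_client_paths=False):
    """Map '<container>.<account>.<domain>' plus a client path onto the
    internal '/v1/<account>/<container>/...' form, as the tests expect."""
    # Strip '.<storage_domain>' and split the remainder into
    # container/account (or just an account).
    prefix = host[:-(len(storage_domain) + 1)]
    parts = prefix.split('.')
    if len(parts) == 2:
        container, account = parts
        new_path = '%s/%s/%s' % (path_root, account, container)
    else:
        new_path = '%s/%s' % (path_root, parts[0])
    if mangle_client_paths and (path + '/').startswith(path_root + '/'):
        # Drop a client-supplied '/v1' so it is not duplicated; a bare
        # '/v1' collapses to '/'.
        path = path[len(path_root):] or '/'
    return new_path + path
```

Note that `/v1obj` is left alone even when mangling: only a whole `/v1` path component is stripped, which is exactly what `test_domain_remap_with_path_root_and_path_no_slash` checks.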
class TestSwiftInfo(unittest.TestCase):
def setUp(self):
utils._swift_info = {}
@@ -257,17 +315,17 @@ class TestSwiftInfo(unittest.TestCase):
def test_registered_defaults(self):
domain_remap.filter_factory({})
swift_info = utils.get_swift_info()
self.assertTrue('domain_remap' in swift_info)
self.assertTrue(
swift_info['domain_remap'].get('default_reseller_prefix') is None)
self.assertIn('domain_remap', swift_info)
self.assertEqual(swift_info['domain_remap'], {
'default_reseller_prefix': None})
def test_registered_nondefaults(self):
domain_remap.filter_factory({'default_reseller_prefix': 'cupcake'})
domain_remap.filter_factory({'default_reseller_prefix': 'cupcake',
'mangle_client_paths': 'yes'})
swift_info = utils.get_swift_info()
self.assertTrue('domain_remap' in swift_info)
self.assertEqual(
swift_info['domain_remap'].get('default_reseller_prefix'),
'cupcake')
self.assertIn('domain_remap', swift_info)
self.assertEqual(swift_info['domain_remap'], {
'default_reseller_prefix': 'cupcake'})
if __name__ == '__main__':


@@ -215,5 +215,36 @@ class TestGatekeeper(unittest.TestCase):
for app_hdrs in ({}, self.forbidden_headers_out):
self._test_duplicate_headers_not_removed(method, app_hdrs)
def _test_location_header(self, location_path):
headers = {'Location': location_path}
req = Request.blank(
'/v/a/c', environ={'REQUEST_METHOD': 'GET',
'swift.leave_relative_location': True})
class SelfishApp(FakeApp):
def __call__(self, env, start_response):
self.req = Request(env)
resp = Response(request=self.req, body='FAKE APP',
headers=self.headers)
# like webob, middlewares in the pipeline may rewrite
# location header from relative to absolute
resp.location = resp.absolute_location()
return resp(env, start_response)
selfish_app = SelfishApp(headers=headers)
app = self.get_app(selfish_app, {})
resp = req.get_response(app)
self.assertEqual('200 OK', resp.status)
self.assertIn('Location', resp.headers)
self.assertEqual(resp.headers['Location'], location_path)
def test_location_header_fixed(self):
self._test_location_header('/v/a/c/o2')
self._test_location_header('/v/a/c/o2?query=path&query2=doit')
self._test_location_header('/v/a/c/o2?query=path#test')
self._test_location_header('/v/a/c/o2;whatisparam?query=path#test')
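The behavior these tests pin down can be sketched as a small header filter: when `swift.leave_relative_location` is set in the environ, an absolute `Location` header (possibly rewritten by a webob-style middleware further down the pipeline) is reduced back to its relative form. The function name is illustrative, not the gatekeeper's actual API:

```python
from urllib.parse import urlsplit, urlunsplit


def relativize_location(headers, environ):
    """Rewrite an absolute Location header back to its relative form when
    the environ flag is set; all other headers pass through untouched."""
    if not environ.get('swift.leave_relative_location'):
        return headers
    out = []
    for name, value in headers:
        if name.lower() == 'location':
            parts = urlsplit(value)
            # Keep path, query and fragment; drop scheme and netloc.
            value = urlunsplit(('', '', parts.path, parts.query,
                                parts.fragment))
        out.append((name, value))
    return out
```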
if __name__ == '__main__':
unittest.main()


@@ -0,0 +1,345 @@
# Copyright (c) 2017 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import unittest
from swift.common.swob import Request, HTTPOk
from swift.common.middleware import listing_formats
from test.unit.common.middleware.helpers import FakeSwift
class TestListingFormats(unittest.TestCase):
def setUp(self):
self.fake_swift = FakeSwift()
self.app = listing_formats.ListingFilter(self.fake_swift)
self.fake_account_listing = json.dumps([
{'name': 'bar', 'bytes': 0, 'count': 0,
'last_modified': '1970-01-01T00:00:00.000000'},
{'subdir': 'foo_'},
])
self.fake_container_listing = json.dumps([
{'name': 'bar', 'hash': 'etag', 'bytes': 0,
'content_type': 'text/plain',
'last_modified': '1970-01-01T00:00:00.000000'},
{'subdir': 'foo/'},
])
def test_valid_account(self):
self.fake_swift.register('GET', '/v1/a', HTTPOk, {
'Content-Length': str(len(self.fake_account_listing)),
'Content-Type': 'application/json'}, self.fake_account_listing)
req = Request.blank('/v1/a')
resp = req.get_response(self.app)
self.assertEqual(resp.body, 'bar\nfoo_\n')
self.assertEqual(resp.headers['Content-Type'],
'text/plain; charset=utf-8')
self.assertEqual(self.fake_swift.calls[-1], (
'GET', '/v1/a?format=json'))
req = Request.blank('/v1/a?format=txt')
resp = req.get_response(self.app)
self.assertEqual(resp.body, 'bar\nfoo_\n')
self.assertEqual(resp.headers['Content-Type'],
'text/plain; charset=utf-8')
self.assertEqual(self.fake_swift.calls[-1], (
'GET', '/v1/a?format=json'))
req = Request.blank('/v1/a?format=json')
resp = req.get_response(self.app)
self.assertEqual(resp.body, self.fake_account_listing)
self.assertEqual(resp.headers['Content-Type'],
'application/json; charset=utf-8')
self.assertEqual(self.fake_swift.calls[-1], (
'GET', '/v1/a?format=json'))
req = Request.blank('/v1/a?format=xml')
resp = req.get_response(self.app)
self.assertEqual(resp.body.split('\n'), [
'<?xml version="1.0" encoding="UTF-8"?>',
'<account name="a">',
'<container><name>bar</name><count>0</count><bytes>0</bytes>'
'<last_modified>1970-01-01T00:00:00.000000</last_modified>'
'</container>',
'<subdir name="foo_" />',
'</account>',
])
self.assertEqual(resp.headers['Content-Type'],
'application/xml; charset=utf-8')
self.assertEqual(self.fake_swift.calls[-1], (
'GET', '/v1/a?format=json'))
def test_valid_container(self):
self.fake_swift.register('GET', '/v1/a/c', HTTPOk, {
'Content-Length': str(len(self.fake_container_listing)),
'Content-Type': 'application/json'}, self.fake_container_listing)
req = Request.blank('/v1/a/c')
resp = req.get_response(self.app)
self.assertEqual(resp.body, 'bar\nfoo/\n')
self.assertEqual(resp.headers['Content-Type'],
'text/plain; charset=utf-8')
self.assertEqual(self.fake_swift.calls[-1], (
'GET', '/v1/a/c?format=json'))
req = Request.blank('/v1/a/c?format=txt')
resp = req.get_response(self.app)
self.assertEqual(resp.body, 'bar\nfoo/\n')
self.assertEqual(resp.headers['Content-Type'],
'text/plain; charset=utf-8')
self.assertEqual(self.fake_swift.calls[-1], (
'GET', '/v1/a/c?format=json'))
req = Request.blank('/v1/a/c?format=json')
resp = req.get_response(self.app)
self.assertEqual(resp.body, self.fake_container_listing)
self.assertEqual(resp.headers['Content-Type'],
'application/json; charset=utf-8')
self.assertEqual(self.fake_swift.calls[-1], (
'GET', '/v1/a/c?format=json'))
req = Request.blank('/v1/a/c?format=xml')
resp = req.get_response(self.app)
self.assertEqual(
resp.body,
'<?xml version="1.0" encoding="UTF-8"?>\n'
'<container name="c">'
'<object><name>bar</name><hash>etag</hash><bytes>0</bytes>'
'<content_type>text/plain</content_type>'
'<last_modified>1970-01-01T00:00:00.000000</last_modified>'
'</object>'
'<subdir name="foo/"><name>foo/</name></subdir>'
'</container>'
)
self.assertEqual(resp.headers['Content-Type'],
'application/xml; charset=utf-8')
self.assertEqual(self.fake_swift.calls[-1], (
'GET', '/v1/a/c?format=json'))
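As the two tests above show, the filter always issues `?format=json` to the backend and renders the client's requested format itself. The plain-text branch amounts to emitting each `name` (or `subdir`) on its own line; a minimal sketch, with an assumed helper name:

```python
import json


def listing_to_text(json_body):
    """Render a JSON account/container listing as the plain-text form:
    one name (or subdir) per entry, each followed by a newline."""
    names = [item.get('name', item.get('subdir'))
             for item in json.loads(json_body)]
    return ''.join(name + '\n' for name in names)
```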
def test_blank_account(self):
self.fake_swift.register('GET', '/v1/a', HTTPOk, {
'Content-Length': '2', 'Content-Type': 'application/json'}, '[]')
req = Request.blank('/v1/a')
resp = req.get_response(self.app)
self.assertEqual(resp.status, '204 No Content')
self.assertEqual(resp.body, '')
self.assertEqual(resp.headers['Content-Type'],
'text/plain; charset=utf-8')
self.assertEqual(self.fake_swift.calls[-1], (
'GET', '/v1/a?format=json'))
req = Request.blank('/v1/a?format=txt')
resp = req.get_response(self.app)
self.assertEqual(resp.status, '204 No Content')
self.assertEqual(resp.body, '')
self.assertEqual(resp.headers['Content-Type'],
'text/plain; charset=utf-8')
self.assertEqual(self.fake_swift.calls[-1], (
'GET', '/v1/a?format=json'))
req = Request.blank('/v1/a?format=json')
resp = req.get_response(self.app)
self.assertEqual(resp.status, '200 OK')
self.assertEqual(resp.body, '[]')
self.assertEqual(resp.headers['Content-Type'],
'application/json; charset=utf-8')
self.assertEqual(self.fake_swift.calls[-1], (
'GET', '/v1/a?format=json'))
req = Request.blank('/v1/a?format=xml')
resp = req.get_response(self.app)
self.assertEqual(resp.status, '200 OK')
self.assertEqual(resp.body.split('\n'), [
'<?xml version="1.0" encoding="UTF-8"?>',
'<account name="a">',
'</account>',
])
self.assertEqual(resp.headers['Content-Type'],
'application/xml; charset=utf-8')
self.assertEqual(self.fake_swift.calls[-1], (
'GET', '/v1/a?format=json'))
def test_blank_container(self):
self.fake_swift.register('GET', '/v1/a/c', HTTPOk, {
'Content-Length': '2', 'Content-Type': 'application/json'}, '[]')
req = Request.blank('/v1/a/c')
resp = req.get_response(self.app)
self.assertEqual(resp.status, '204 No Content')
self.assertEqual(resp.body, '')
self.assertEqual(resp.headers['Content-Type'],
'text/plain; charset=utf-8')
self.assertEqual(self.fake_swift.calls[-1], (
'GET', '/v1/a/c?format=json'))
req = Request.blank('/v1/a/c?format=txt')
resp = req.get_response(self.app)
self.assertEqual(resp.status, '204 No Content')
self.assertEqual(resp.body, '')
self.assertEqual(resp.headers['Content-Type'],
'text/plain; charset=utf-8')
self.assertEqual(self.fake_swift.calls[-1], (
'GET', '/v1/a/c?format=json'))
req = Request.blank('/v1/a/c?format=json')
resp = req.get_response(self.app)
self.assertEqual(resp.status, '200 OK')
self.assertEqual(resp.body, '[]')
self.assertEqual(resp.headers['Content-Type'],
'application/json; charset=utf-8')
self.assertEqual(self.fake_swift.calls[-1], (
'GET', '/v1/a/c?format=json'))
req = Request.blank('/v1/a/c?format=xml')
resp = req.get_response(self.app)
self.assertEqual(resp.status, '200 OK')
self.assertEqual(resp.body.split('\n'), [
'<?xml version="1.0" encoding="UTF-8"?>',
'<container name="c" />',
])
self.assertEqual(resp.headers['Content-Type'],
'application/xml; charset=utf-8')
self.assertEqual(self.fake_swift.calls[-1], (
'GET', '/v1/a/c?format=json'))
def test_pass_through(self):
def do_test(path):
self.fake_swift.register(
'GET', path, HTTPOk, {
'Content-Length': str(len(self.fake_container_listing)),
'Content-Type': 'application/json'},
self.fake_container_listing)
req = Request.blank(path + '?format=xml')
resp = req.get_response(self.app)
self.assertEqual(resp.body, self.fake_container_listing)
self.assertEqual(resp.headers['Content-Type'], 'application/json')
self.assertEqual(self.fake_swift.calls[-1], (
'GET', path + '?format=xml')) # query param is unchanged
do_test('/')
do_test('/v1')
do_test('/auth/v1.0')
do_test('/v1/a/c/o')
def test_static_web_not_json(self):
body = 'doesnt matter'
self.fake_swift.register(
'GET', '/v1/staticweb/not-json', HTTPOk,
{'Content-Length': str(len(body)),
'Content-Type': 'text/plain'},
body)
resp = Request.blank('/v1/staticweb/not-json').get_response(self.app)
self.assertEqual(resp.body, body)
self.assertEqual(resp.headers['Content-Type'], 'text/plain')
# We *did* try, though
self.assertEqual(self.fake_swift.calls[-1], (
'GET', '/v1/staticweb/not-json?format=json'))
# TODO: add a similar test that has *no* content-type
# FakeSwift seems to make this hard to do
def test_static_web_not_really_json(self):
body = 'raises ValueError'
self.fake_swift.register(
'GET', '/v1/staticweb/not-json', HTTPOk,
{'Content-Length': str(len(body)),
'Content-Type': 'application/json'},
body)
resp = Request.blank('/v1/staticweb/not-json').get_response(self.app)
self.assertEqual(resp.body, body)
self.assertEqual(resp.headers['Content-Type'], 'application/json')
self.assertEqual(self.fake_swift.calls[-1], (
'GET', '/v1/staticweb/not-json?format=json'))
def test_static_web_pretend_to_be_giant_json(self):
body = json.dumps(self.fake_container_listing * 1000000)
self.assertGreater( # sanity
len(body), listing_formats.MAX_CONTAINER_LISTING_CONTENT_LENGTH)
self.fake_swift.register(
'GET', '/v1/staticweb/not-json', HTTPOk,
{'Content-Type': 'application/json'},
body)
resp = Request.blank('/v1/staticweb/not-json').get_response(self.app)
self.assertEqual(resp.body, body)
self.assertEqual(resp.headers['Content-Type'], 'application/json')
self.assertEqual(self.fake_swift.calls[-1], (
'GET', '/v1/staticweb/not-json?format=json'))
# TODO: add a similar test for chunked transfers
# (staticweb referencing a DLO that doesn't fit in a single listing?)
def test_static_web_bad_json(self):
def do_test(body_obj):
body = json.dumps(body_obj)
self.fake_swift.register(
'GET', '/v1/staticweb/bad-json', HTTPOk,
{'Content-Length': str(len(body)),
'Content-Type': 'application/json'},
body)
def do_sub_test(path):
resp = Request.blank(path).get_response(self.app)
self.assertEqual(resp.body, body)
# NB: no charset is added; we pass through whatever we got
self.assertEqual(resp.headers['Content-Type'],
'application/json')
self.assertEqual(self.fake_swift.calls[-1], (
'GET', '/v1/staticweb/bad-json?format=json'))
do_sub_test('/v1/staticweb/bad-json')
do_sub_test('/v1/staticweb/bad-json?format=txt')
do_sub_test('/v1/staticweb/bad-json?format=xml')
do_sub_test('/v1/staticweb/bad-json?format=json')
do_test({})
do_test({'non-empty': 'hash'})
do_test(None)
do_test(0)
do_test('some string')
do_test([None])
do_test([0])
do_test(['some string'])
def test_static_web_bad_but_not_terrible_json(self):
body = json.dumps([{'no name': 'nor subdir'}])
self.fake_swift.register(
'GET', '/v1/staticweb/bad-json', HTTPOk,
{'Content-Length': str(len(body)),
'Content-Type': 'application/json'},
body)
def do_test(path, expect_charset=False):
resp = Request.blank(path).get_response(self.app)
self.assertEqual(resp.body, body)
if expect_charset:
self.assertEqual(resp.headers['Content-Type'],
'application/json; charset=utf-8')
else:
self.assertEqual(resp.headers['Content-Type'],
'application/json')
self.assertEqual(self.fake_swift.calls[-1], (
'GET', '/v1/staticweb/bad-json?format=json'))
do_test('/v1/staticweb/bad-json')
do_test('/v1/staticweb/bad-json?format=txt')
do_test('/v1/staticweb/bad-json?format=xml')
# The response we get is *just close enough* to being valid that we
# assume it is and slap on the missing charset. If you set up staticweb
# to serve back such responses, your clients are already hosed.
do_test('/v1/staticweb/bad-json?format=json', expect_charset=True)
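The staticweb tests above rely on the filter refusing to parse responses that merely look like listings. A sketch of the pre-parse guards they imply (the cap value here is an assumption standing in for `MAX_CONTAINER_LISTING_CONTENT_LENGTH`; actual parse failures, as in `test_static_web_not_really_json`, are caught later):

```python
# Assumed cap standing in for MAX_CONTAINER_LISTING_CONTENT_LENGTH.
MAX_LISTING_LENGTH = 10 * 1024 * 1024


def should_translate(content_type, content_length):
    """Decide whether a backend response looks like a JSON listing that
    is small enough to buffer, parse, and re-render."""
    if content_type != 'application/json':
        return False  # e.g. staticweb serving text/plain: pass through
    if content_length is None or content_length > MAX_LISTING_LENGTH:
        return False  # missing length or oversized: pass through untouched
    return True
```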


@@ -389,37 +389,35 @@ class TestSloPutManifest(SloTestCase):
'PUT', '/v1/AUTH_test/checktest/man_3', swob.HTTPCreated, {}, None)
def test_put_manifest_too_quick_fail(self):
req = Request.blank('/v1/a/c/o')
req = Request.blank('/v1/a/c/o?multipart-manifest=put', method='PUT')
req.content_length = self.slo.max_manifest_size + 1
try:
self.slo.handle_multipart_put(req, fake_start_response)
except HTTPException as e:
pass
self.assertEqual(e.status_int, 413)
status, headers, body = self.call_slo(req)
self.assertEqual(status, '413 Request Entity Too Large')
with patch.object(self.slo, 'max_manifest_segments', 0):
req = Request.blank('/v1/a/c/o', body=test_json_data)
e = None
try:
self.slo.handle_multipart_put(req, fake_start_response)
except HTTPException as e:
pass
self.assertEqual(e.status_int, 413)
req = Request.blank('/v1/a/c/o?multipart-manifest=put',
method='PUT', body=test_json_data)
status, headers, body = self.call_slo(req)
self.assertEqual(status, '413 Request Entity Too Large')
req = Request.blank('/v1/a/c/o', headers={'X-Copy-From': 'lala'})
try:
self.slo.handle_multipart_put(req, fake_start_response)
except HTTPException as e:
pass
self.assertEqual(e.status_int, 405)
req = Request.blank('/v1/a/c/o?multipart-manifest=put', method='PUT',
headers={'X-Copy-From': 'lala'})
status, headers, body = self.call_slo(req)
self.assertEqual(status, '405 Method Not Allowed')
# ignores requests to /
req = Request.blank(
'/?multipart-manifest=put',
environ={'REQUEST_METHOD': 'PUT'}, body=test_json_data)
self.assertEqual(
list(self.slo.handle_multipart_put(req, fake_start_response)),
['passed'])
# we already validated that there are enough path segments in __call__
for path in ('/', '/v1/', '/v1/a/', '/v1/a/c/'):
req = Request.blank(
path + '?multipart-manifest=put',
environ={'REQUEST_METHOD': 'PUT'}, body=test_json_data)
with self.assertRaises(ValueError):
list(self.slo.handle_multipart_put(req, fake_start_response))
req = Request.blank(
path.rstrip('/') + '?multipart-manifest=put',
environ={'REQUEST_METHOD': 'PUT'}, body=test_json_data)
with self.assertRaises(ValueError):
list(self.slo.handle_multipart_put(req, fake_start_response))
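The statuses this test asserts come from checks that run before the manifest body is even parsed. A sketch of those pre-checks, returning the expected status lines instead of raising swob `HTTPException`s; the default size limit here is an assumption (the real value comes from the `max_manifest_size` config option):

```python
def precheck_manifest_put(content_length, has_copy_from,
                          max_manifest_size=2 * 1024 * 1024):
    """Return the error status the tests above expect, or None if the
    PUT may proceed to manifest parsing."""
    if has_copy_from:
        # Server-side copy of a ?multipart-manifest=put is not allowed.
        return '405 Method Not Allowed'
    if content_length is not None and content_length > max_manifest_size:
        return '413 Request Entity Too Large'
    return None
```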
def test_handle_multipart_put_success(self):
req = Request.blank(
@@ -430,11 +428,9 @@ class TestSloPutManifest(SloTestCase):
'X-Object-Sysmeta-Slo-Size'):
self.assertNotIn(h, req.headers)
def my_fake_start_response(*args, **kwargs):
gen_etag = '"' + md5hex('etagoftheobjectsegment') + '"'
self.assertIn(('Etag', gen_etag), args[1])
self.slo(req.environ, my_fake_start_response)
status, headers, body = self.call_slo(req)
gen_etag = '"' + md5hex('etagoftheobjectsegment') + '"'
self.assertIn(('Etag', gen_etag), headers)
self.assertIn('X-Static-Large-Object', req.headers)
self.assertEqual(req.headers['X-Static-Large-Object'], 'True')
self.assertIn('X-Object-Sysmeta-Slo-Etag', req.headers)
@@ -486,10 +482,10 @@ class TestSloPutManifest(SloTestCase):
{'path': '/cont/small_object',
'etag': 'etagoftheobjectsegment',
'size_bytes': 100}])
req = Request.blank('/v1/a/c/o', body=test_json_data)
with self.assertRaises(HTTPException) as catcher:
self.slo.handle_multipart_put(req, fake_start_response)
self.assertEqual(catcher.exception.status_int, 400)
req = Request.blank('/v1/a/c/o?multipart-manifest=put',
method='PUT', body=test_json_data)
status, headers, body = self.call_slo(req)
self.assertEqual(status, '400 Bad Request')
def test_handle_multipart_put_disallow_empty_last_segment(self):
test_json_data = json.dumps([{'path': '/cont/object',
@@ -498,10 +494,10 @@ class TestSloPutManifest(SloTestCase):
{'path': '/cont/small_object',
'etag': 'etagoftheobjectsegment',
'size_bytes': 0}])
req = Request.blank('/v1/a/c/o', body=test_json_data)
with self.assertRaises(HTTPException) as catcher:
self.slo.handle_multipart_put(req, fake_start_response)
self.assertEqual(catcher.exception.status_int, 400)
req = Request.blank('/v1/a/c/o?multipart-manifest=put',
method='PUT', body=test_json_data)
status, headers, body = self.call_slo(req)
self.assertEqual(status, '400 Bad Request')
def test_handle_multipart_put_success_unicode(self):
test_json_data = json.dumps([{'path': u'/cont/object\u2661',
@@ -512,7 +508,7 @@ class TestSloPutManifest(SloTestCase):
environ={'REQUEST_METHOD': 'PUT'}, headers={'Accept': 'test'},
body=test_json_data)
self.assertNotIn('X-Static-Large-Object', req.headers)
self.slo(req.environ, fake_start_response)
self.call_slo(req)
self.assertIn('X-Static-Large-Object', req.headers)
self.assertEqual(req.environ['PATH_INFO'], '/v1/AUTH_test/c/man')
self.assertIn(('HEAD', '/v1/AUTH_test/cont/object\xe2\x99\xa1'),
@@ -523,7 +519,7 @@ class TestSloPutManifest(SloTestCase):
'/test_good/AUTH_test/c/man?multipart-manifest=put',
environ={'REQUEST_METHOD': 'PUT'}, headers={'Accept': 'test'},
body=test_xml_data)
no_xml = self.slo(req.environ, fake_start_response)
no_xml = list(self.slo(req.environ, fake_start_response))
self.assertEqual(no_xml, ['Manifest must be valid JSON.\n'])
def test_handle_multipart_put_bad_data(self):
@@ -533,14 +529,15 @@ class TestSloPutManifest(SloTestCase):
req = Request.blank(
'/test_good/AUTH_test/c/man?multipart-manifest=put',
environ={'REQUEST_METHOD': 'PUT'}, body=bad_data)
self.assertRaises(HTTPException, self.slo.handle_multipart_put, req,
fake_start_response)
status, headers, body = self.call_slo(req)
self.assertEqual(status, '400 Bad Request')
self.assertIn('invalid size_bytes', body)
for bad_data in [
json.dumps([{'path': '/cont', 'etag': 'etagoftheobj',
'size_bytes': 100}]),
json.dumps('asdf'), json.dumps(None), json.dumps(5),
'not json', '1234', None, '', json.dumps({'path': None}),
'not json', '1234', '', json.dumps({'path': None}),
json.dumps([{'path': '/cont/object', 'etag': None,
'size_bytes': 12}]),
json.dumps([{'path': '/cont/object', 'etag': 'asdf',
@@ -557,8 +554,14 @@ class TestSloPutManifest(SloTestCase):
req = Request.blank(
'/v1/AUTH_test/c/man?multipart-manifest=put',
environ={'REQUEST_METHOD': 'PUT'}, body=bad_data)
self.assertRaises(HTTPException, self.slo.handle_multipart_put,
req, fake_start_response)
status, headers, body = self.call_slo(req)
self.assertEqual(status, '400 Bad Request')
req = Request.blank(
'/v1/AUTH_test/c/man?multipart-manifest=put',
environ={'REQUEST_METHOD': 'PUT'}, body=None)
status, headers, body = self.call_slo(req)
self.assertEqual(status, '411 Length Required')
def test_handle_multipart_put_check_data(self):
good_data = json.dumps(
@@ -642,10 +645,11 @@ class TestSloPutManifest(SloTestCase):
{'path': '/cont/small_object',
'etag': 'etagoftheobjectsegment',
'size_bytes': 100}])
req = Request.blank('/v1/AUTH_test/c/o', body=test_json_data)
with self.assertRaises(HTTPException) as cm:
self.slo.handle_multipart_put(req, fake_start_response)
self.assertEqual(cm.exception.status_int, 400)
req = Request.blank('/v1/AUTH_test/c/o?multipart-manifest=put',
method='PUT', body=test_json_data)
status, headers, body = self.call_slo(req)
self.assertEqual(status, '400 Bad Request')
self.assertIn('Too small; each segment must be at least 1 byte', body)
def test_handle_multipart_put_skip_size_check_no_early_bailout(self):
# The first is too small (it's 0 bytes), and
@@ -657,12 +661,12 @@ class TestSloPutManifest(SloTestCase):
{'path': '/cont/object2',
'etag': 'wrong wrong wrong',
'size_bytes': 100}])
req = Request.blank('/v1/AUTH_test/c/o', body=test_json_data)
with self.assertRaises(HTTPException) as cm:
self.slo.handle_multipart_put(req, fake_start_response)
self.assertEqual(cm.exception.status_int, 400)
self.assertIn('at least 1 byte', cm.exception.body)
self.assertIn('Etag Mismatch', cm.exception.body)
req = Request.blank('/v1/AUTH_test/c/o?multipart-manifest=put',
method='PUT', body=test_json_data)
status, headers, body = self.call_slo(req)
self.assertEqual(status, '400 Bad Request')
self.assertIn('at least 1 byte', body)
self.assertIn('Etag Mismatch', body)
def test_handle_multipart_put_skip_etag_check(self):
good_data = json.dumps([
@@ -694,10 +698,9 @@ class TestSloPutManifest(SloTestCase):
req = Request.blank(
'/v1/AUTH_test/checktest/man_3?multipart-manifest=put',
environ={'REQUEST_METHOD': 'PUT'}, body=bad_data)
with self.assertRaises(HTTPException) as catcher:
self.slo.handle_multipart_put(req, fake_start_response)
self.assertEqual(400, catcher.exception.status_int)
self.assertIn("Unsatisfiable Range", catcher.exception.body)
status, headers, body = self.call_slo(req)
self.assertEqual('400 Bad Request', status)
self.assertIn("Unsatisfiable Range", body)
def test_handle_multipart_put_success_conditional(self):
test_json_data = json.dumps([{'path': u'/cont/object',
@@ -2771,29 +2774,25 @@ class TestSloGetManifest(SloTestCase):
self.assertTrue(error_lines[0].startswith(
'ERROR: An error occurred while retrieving segments'))
def test_download_takes_too_long(self):
the_time = [time.time()]
def mock_time():
return the_time[0]
# this is just a convenient place to hang a time jump; there's nothing
# special about the choice of is_success().
def mock_is_success(status_int):
the_time[0] += 7 * 3600
return status_int // 100 == 2
@patch('swift.common.request_helpers.time')
def test_download_takes_too_long(self, mock_time):
mock_time.time.side_effect = [
0, # start time
1, # just building the first segment request; purely local
2, # build the second segment request object, too, so we know we
# can't coalesce and should instead go fetch the first segment
7 * 3600, # that takes a while, but gets serviced; we build the
# third request and service the second
21 * 3600, # which takes *even longer* (ostensibly something to
# do with submanifests), but we build the fourth...
28 * 3600, # and before we go to service it we time out
]
req = Request.blank(
'/v1/AUTH_test/gettest/manifest-abcd',
environ={'REQUEST_METHOD': 'GET'})
with patch.object(slo, 'is_success', mock_is_success), \
patch('swift.common.request_helpers.time.time',
mock_time), \
patch('swift.common.request_helpers.is_success',
mock_is_success):
status, headers, body, exc = self.call_slo(
req, expect_exception=True)
status, headers, body, exc = self.call_slo(
req, expect_exception=True)
self.assertIsInstance(exc, SegmentError)
self.assertEqual(status, '200 OK')
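The rewritten test drives the timeout path by feeding `time.time()` a fixed sequence via `side_effect`, which returns the next list element on each call. The same pattern in isolation (using `unittest.mock`; the helper name is illustrative):

```python
from unittest import mock


def elapsed_under(clock):
    """Measure elapsed 'time' across an intermediate tick, as the
    patched test above does on a larger scale."""
    start = clock()
    clock()  # an intermediate step consumes one tick
    return clock() - start


# Each call to fake_time() pops the next value off the sequence.
fake_time = mock.Mock(side_effect=[0, 2, 7 * 3600])
```

Once the sequence is exhausted, further calls raise `StopIteration`, so the list must cover exactly the calls the code under test makes.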


@@ -279,7 +279,7 @@ class FakeApp(object):
if ((env['PATH_INFO'] in (
'/v1/a/c3', '/v1/a/c4', '/v1/a/c8', '/v1/a/c9'))
and (env['QUERY_STRING'] ==
'delimiter=/&format=json&prefix=subdir/')):
'delimiter=/&prefix=subdir/')):
headers.update({'X-Container-Object-Count': '12',
'X-Container-Bytes-Used': '73763',
'X-Container-Read': '.r:*',
@@ -296,14 +296,14 @@ class FakeApp(object):
{"subdir":"subdir3/subsubdir/"}]
'''.strip()
elif env['PATH_INFO'] == '/v1/a/c3' and env['QUERY_STRING'] == \
'delimiter=/&format=json&prefix=subdiry/':
'delimiter=/&prefix=subdiry/':
headers.update({'X-Container-Object-Count': '12',
'X-Container-Bytes-Used': '73763',
'X-Container-Read': '.r:*',
'Content-Type': 'application/json; charset=utf-8'})
body = '[]'
elif env['PATH_INFO'] == '/v1/a/c3' and env['QUERY_STRING'] == \
'limit=1&format=json&delimiter=/&limit=1&prefix=subdirz/':
'limit=1&delimiter=/&prefix=subdirz/':
headers.update({'X-Container-Object-Count': '12',
'X-Container-Bytes-Used': '73763',
'X-Container-Read': '.r:*',
@@ -315,7 +315,7 @@ class FakeApp(object):
"last_modified":"2011-03-24T04:27:52.709100"}]
'''.strip()
elif env['PATH_INFO'] == '/v1/a/c6' and env['QUERY_STRING'] == \
'limit=1&format=json&delimiter=/&limit=1&prefix=subdir/':
'limit=1&delimiter=/&prefix=subdir/':
headers.update({'X-Container-Object-Count': '12',
'X-Container-Bytes-Used': '73763',
'X-Container-Read': '.r:*',
@@ -329,9 +329,9 @@ class FakeApp(object):
'''.strip()
elif env['PATH_INFO'] == '/v1/a/c10' and (
env['QUERY_STRING'] ==
'delimiter=/&format=json&prefix=%E2%98%83/' or
'delimiter=/&prefix=%E2%98%83/' or
env['QUERY_STRING'] ==
'delimiter=/&format=json&prefix=%E2%98%83/%E2%98%83/'):
'delimiter=/&prefix=%E2%98%83/%E2%98%83/'):
headers.update({'X-Container-Object-Count': '12',
'X-Container-Bytes-Used': '73763',
'X-Container-Read': '.r:*',
@@ -346,7 +346,7 @@ class FakeApp(object):
'''.strip()
elif 'prefix=' in env['QUERY_STRING']:
return Response(status='204 No Content')(env, start_response)
elif 'format=json' in env['QUERY_STRING']:
else:
headers.update({'X-Container-Object-Count': '12',
'X-Container-Bytes-Used': '73763',
'Content-Type': 'application/json; charset=utf-8'})
@@ -397,15 +397,6 @@ class FakeApp(object):
"content_type":"text/plain",
"last_modified":"2011-03-24T04:27:52.935560"}]
'''.strip()
else:
headers.update({'X-Container-Object-Count': '12',
'X-Container-Bytes-Used': '73763',
'Content-Type': 'text/plain; charset=utf-8'})
body = '\n'.join(['401error.html', '404error.html', 'index.html',
'listing.css', 'one.txt', 'subdir/1.txt',
'subdir/2.txt', u'subdir/\u2603.txt', 'subdir2',
'subdir3/subsubdir/index.html', 'two.txt',
u'\u2603/\u2603/one.txt'])
return Response(status='200 Ok', headers=headers,
body=body)(env, start_response)
@@ -481,8 +472,8 @@ class TestStaticWeb(unittest.TestCase):
def test_container2(self):
resp = Request.blank('/v1/a/c2').get_response(self.test_staticweb)
self.assertEqual(resp.status_int, 200)
self.assertEqual(resp.content_type, 'text/plain')
self.assertEqual(len(resp.body.split('\n')),
self.assertEqual(resp.content_type, 'application/json')
self.assertEqual(len(json.loads(resp.body)),
int(resp.headers['x-container-object-count']))
def test_container2_web_mode_explicitly_off(self):
@@ -490,8 +481,8 @@ class TestStaticWeb(unittest.TestCase):
'/v1/a/c2',
headers={'x-web-mode': 'false'}).get_response(self.test_staticweb)
self.assertEqual(resp.status_int, 200)
self.assertEqual(resp.content_type, 'text/plain')
self.assertEqual(len(resp.body.split('\n')),
self.assertEqual(resp.content_type, 'application/json')
self.assertEqual(len(json.loads(resp.body)),
int(resp.headers['x-container-object-count']))
def test_container2_web_mode_explicitly_on(self):
@@ -507,7 +498,7 @@ class TestStaticWeb(unittest.TestCase):
def test_container2json(self):
resp = Request.blank(
'/v1/a/c2?format=json').get_response(self.test_staticweb)
'/v1/a/c2').get_response(self.test_staticweb)
self.assertEqual(resp.status_int, 200)
self.assertEqual(resp.content_type, 'application/json')
self.assertEqual(len(json.loads(resp.body)),
@@ -515,7 +506,7 @@ class TestStaticWeb(unittest.TestCase):
def test_container2json_web_mode_explicitly_off(self):
resp = Request.blank(
'/v1/a/c2?format=json',
'/v1/a/c2',
headers={'x-web-mode': 'false'}).get_response(self.test_staticweb)
self.assertEqual(resp.status_int, 200)
self.assertEqual(resp.content_type, 'application/json')
@@ -524,7 +515,7 @@ class TestStaticWeb(unittest.TestCase):
def test_container2json_web_mode_explicitly_on(self):
resp = Request.blank(
'/v1/a/c2?format=json',
'/v1/a/c2',
headers={'x-web-mode': 'true'}).get_response(self.test_staticweb)
self.assertEqual(resp.status_int, 404)


@@ -117,23 +117,14 @@ class TestSubRequestLogging(unittest.TestCase):
self._test_subrequest_logged('PUT')
self._test_subrequest_logged('DELETE')
def _test_subrequest_logged_POST(self, subrequest_type,
post_as_copy=False):
# Test that subrequests made downstream from Copy POST will be logged
# with the request type of the subrequest as opposed to the GET/PUT.
app = FakeApp({'subrequest_type': subrequest_type,
'object_post_as_copy': post_as_copy})
def _test_subrequest_logged_POST(self, subrequest_type):
app = FakeApp({'subrequest_type': subrequest_type})
hdrs = {'content-type': 'text/plain'}
req = Request.blank(self.path, method='POST', headers=hdrs)
app.register('POST', self.path, HTTPOk, headers=hdrs)
expect_lines = 2
if post_as_copy:
app.register('PUT', self.path, HTTPOk, headers=hdrs)
app.register('GET', '/v1/a/c/o', HTTPOk, headers=hdrs)
expect_lines = 4
req.get_response(app)
info_log_lines = app.fake_logger.get_lines_for_level('info')
@@ -142,33 +133,17 @@ class TestSubRequestLogging(unittest.TestCase):
subreq_put_post = '%s %s' % (subrequest_type, SUB_PUT_POST_PATH)
origpost = 'POST %s' % self.path
copyget = 'GET %s' % self.path
if post_as_copy:
# post_as_copy expect GET subreq, copy GET, PUT subreq, orig POST
subreq_get = '%s %s' % (subrequest_type, SUB_GET_PATH)
self.assertTrue(subreq_get in info_log_lines[0])
self.assertTrue(copyget in info_log_lines[1])
self.assertTrue(subreq_put_post in info_log_lines[2])
self.assertTrue(origpost in info_log_lines[3])
else:
# fast post expect POST subreq, original POST
self.assertTrue(subreq_put_post in info_log_lines[0])
self.assertTrue(origpost in info_log_lines[1])
# fast post expect POST subreq, original POST
self.assertTrue(subreq_put_post in info_log_lines[0])
self.assertTrue(origpost in info_log_lines[1])
def test_subrequest_logged_post_as_copy_with_POST_fast_post(self):
self._test_subrequest_logged_POST('HEAD', post_as_copy=False)
self._test_subrequest_logged_POST('GET', post_as_copy=False)
self._test_subrequest_logged_POST('POST', post_as_copy=False)
self._test_subrequest_logged_POST('PUT', post_as_copy=False)
self._test_subrequest_logged_POST('DELETE', post_as_copy=False)
def test_subrequest_logged_post_as_copy_with_POST(self):
self._test_subrequest_logged_POST('HEAD', post_as_copy=True)
self._test_subrequest_logged_POST('GET', post_as_copy=True)
self._test_subrequest_logged_POST('POST', post_as_copy=True)
self._test_subrequest_logged_POST('PUT', post_as_copy=True)
self._test_subrequest_logged_POST('DELETE', post_as_copy=True)
def test_subrequest_logged_with_POST(self):
self._test_subrequest_logged_POST('HEAD')
self._test_subrequest_logged_POST('GET')
self._test_subrequest_logged_POST('POST')
self._test_subrequest_logged_POST('PUT')
self._test_subrequest_logged_POST('DELETE')
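The fast-POST logging expectation asserted above can be sketched outside the middleware. This is a minimal stand-in (hypothetical `FakeLogger`/`handle_post` names, not swift's actual proxy-logging code): with `object_post_as_copy` gone, a POST produces exactly one downstream subrequest, logged with its own verb, followed by the original client POST.

```python
# A toy sketch of the two-line log ordering the tests above assert.
# FakeLogger and handle_post are hypothetical names for illustration.

class FakeLogger:
    """Collects info-level log lines, mimicking the test's fake_logger."""
    def __init__(self):
        self.lines = []

    def info(self, msg):
        self.lines.append(msg)

def handle_post(logger, path, subrequest_method, subrequest_path):
    # With post-as-copy removed, a POST makes exactly one subrequest;
    # it is logged with its own method, then the client request is logged.
    logger.info('%s %s' % (subrequest_method, subrequest_path))
    logger.info('POST %s' % path)

logger = FakeLogger()
handle_post(logger, '/v1/a/c/o', 'HEAD', '/v1/a/c/o_subrequest')
assert logger.lines[0].startswith('HEAD ')
assert logger.lines[1].startswith('POST ')
```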
if __name__ == '__main__':

View File

@ -19,7 +19,6 @@ import unittest
from contextlib import contextmanager
from base64 import b64encode
from time import time
import mock
from swift.common.middleware import tempauth as auth
from swift.common.middleware.acl import format_acl
@ -265,27 +264,58 @@ class TestAuth(unittest.TestCase):
self.assertEqual(req.environ['swift.authorize'],
local_auth.denied_response)
def test_auth_with_s3_authorization(self):
def test_auth_with_s3_authorization_good(self):
local_app = FakeApp()
local_auth = auth.filter_factory(
{'user_s3_s3': 'secret .admin'})(local_app)
req = self._make_request('/v1/AUTH_s3', environ={
req = self._make_request('/v1/s3:s3', environ={
'swift3.auth_details': {
'access_key': 's3:s3',
'signature': b64encode('sig'),
'string_to_sign': 't',
'check_signature': lambda secret: True}})
resp = req.get_response(local_auth)
self.assertEqual(resp.status_int, 404)
self.assertEqual(local_app.calls, 1)
self.assertEqual(req.environ['PATH_INFO'], '/v1/AUTH_s3')
self.assertEqual(req.environ['swift.authorize'],
local_auth.authorize)
def test_auth_with_s3_authorization_invalid(self):
local_app = FakeApp()
local_auth = auth.filter_factory(
{'user_s3_s3': 'secret .admin'})(local_app)
req = self._make_request('/v1/s3:s3', environ={
'swift3.auth_details': {
'access_key': 's3:s3',
'signature': b64encode('sig'),
'string_to_sign': 't',
'check_signature': lambda secret: False}})
resp = req.get_response(local_auth)
self.assertEqual(resp.status_int, 401)
self.assertEqual(local_app.calls, 1)
self.assertEqual(req.environ['PATH_INFO'], '/v1/s3:s3')
self.assertEqual(req.environ['swift.authorize'],
local_auth.denied_response)
def test_auth_with_old_s3_details(self):
local_app = FakeApp()
local_auth = auth.filter_factory(
{'user_s3_s3': 'secret .admin'})(local_app)
req = self._make_request('/v1/s3:s3', environ={
'swift3.auth_details': {
'access_key': 's3:s3',
'signature': b64encode('sig'),
'string_to_sign': 't'}})
resp = req.get_response(local_auth)
with mock.patch('hmac.new') as hmac:
hmac.return_value.digest.return_value = 'sig'
resp = req.get_response(local_auth)
self.assertEqual(hmac.mock_calls, [
mock.call('secret', 't', mock.ANY),
mock.call().digest()])
self.assertEqual(resp.status_int, 404)
self.assertEqual(resp.status_int, 401)
self.assertEqual(local_app.calls, 1)
self.assertEqual(req.environ['PATH_INFO'], '/v1/s3:s3')
self.assertEqual(req.environ['swift.authorize'],
local_auth.authorize)
local_auth.denied_response)
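The `check_signature` callback exercised in the tests above can be sketched as follows. This is a hedged stand-in, not swift3's real implementation: the S3 layer closes over the client-supplied signature and hands the auth middleware a callable, so tempauth can test its stored secret without computing HMACs itself.

```python
# Hypothetical sketch of the check_signature flow: the s3 layer builds
# auth_details with a closure; the auth middleware calls it with the
# user's stored secret and gets back True (authorize) or False (401).
import hashlib
import hmac
from base64 import b64encode

def make_auth_details(client_secret, string_to_sign):
    # Signature as the client would have computed it (SigV2-style HMAC).
    signature = b64encode(hmac.new(
        client_secret.encode(), string_to_sign.encode(),
        hashlib.sha1).digest())

    def check_signature(stored_secret):
        candidate = b64encode(hmac.new(
            stored_secret.encode(), string_to_sign.encode(),
            hashlib.sha1).digest())
        return hmac.compare_digest(signature, candidate)

    return {
        'access_key': 's3:s3',
        'string_to_sign': string_to_sign,
        'check_signature': check_signature,
    }

details = make_auth_details('secret', 't')
assert details['check_signature']('secret') is True   # matching secret
assert details['check_signature']('wrong') is False   # mismatch -> 401
```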
def test_auth_no_reseller_prefix_no_token(self):
# Check that normally we set up a call back to our authorize.

View File

@ -330,23 +330,6 @@ class VersionedWritesTestCase(VersionedWritesBaseTestCase):
self.assertEqual(len(self.authorized), 1)
self.assertRequestEqual(req, self.authorized[0])
def test_put_object_post_as_copy(self):
# PUTs due to a post-as-copy should NOT cause a versioning op
self.app.register(
'PUT', '/v1/a/c/o', swob.HTTPCreated, {}, 'passed')
cache = FakeCache({'sysmeta': {'versions-location': 'ver_cont'}})
req = Request.blank(
'/v1/a/c/o',
environ={'REQUEST_METHOD': 'PUT', 'swift.cache': cache,
'CONTENT_LENGTH': '100',
'swift.post_as_copy': True})
status, headers, body = self.call_vw(req)
self.assertEqual(status, '201 Created')
self.assertEqual(len(self.authorized), 1)
self.assertRequestEqual(req, self.authorized[0])
self.assertEqual(1, self.app.call_count)
def test_put_first_object_success(self):
self.app.register(
'PUT', '/v1/a/c/o', swob.HTTPOk, {}, 'passed')
@ -584,7 +567,7 @@ class VersionedWritesTestCase(VersionedWritesBaseTestCase):
'DELETE', '/v1/a/c/o', swob.HTTPOk, {}, 'passed')
self.app.register(
'GET',
'/v1/a/ver_cont?format=json&prefix=001o/&marker=&reverse=on',
'/v1/a/ver_cont?prefix=001o/&marker=&reverse=on',
swob.HTTPNotFound, {}, None)
cache = FakeCache({'sysmeta': {'versions-location': 'ver_cont'}})
@ -600,7 +583,7 @@ class VersionedWritesTestCase(VersionedWritesBaseTestCase):
self.assertEqual(['VW', None], self.app.swift_sources)
self.assertEqual({'fake_trans_id'}, set(self.app.txn_ids))
prefix_listing_prefix = '/v1/a/ver_cont?format=json&prefix=001o/&'
prefix_listing_prefix = '/v1/a/ver_cont?prefix=001o/&'
self.assertEqual(self.app.calls, [
('GET', prefix_listing_prefix + 'marker=&reverse=on'),
('DELETE', '/v1/a/c/o'),
@ -611,7 +594,7 @@ class VersionedWritesTestCase(VersionedWritesBaseTestCase):
'DELETE', '/v1/a/c/o', swob.HTTPOk, {}, 'passed')
self.app.register(
'GET',
'/v1/a/ver_cont?format=json&prefix=001o/&marker=&reverse=on',
'/v1/a/ver_cont?prefix=001o/&marker=&reverse=on',
swob.HTTPOk, {}, '[]')
cache = FakeCache({'sysmeta': {'versions-location': 'ver_cont'}})
@ -624,7 +607,7 @@ class VersionedWritesTestCase(VersionedWritesBaseTestCase):
self.assertEqual(len(self.authorized), 1)
self.assertRequestEqual(req, self.authorized[0])
prefix_listing_prefix = '/v1/a/ver_cont?format=json&prefix=001o/&'
prefix_listing_prefix = '/v1/a/ver_cont?prefix=001o/&'
self.assertEqual(self.app.calls, [
('GET', prefix_listing_prefix + 'marker=&reverse=on'),
('DELETE', '/v1/a/c/o'),
@ -633,7 +616,7 @@ class VersionedWritesTestCase(VersionedWritesBaseTestCase):
def test_delete_latest_version_no_marker_success(self):
self.app.register(
'GET',
'/v1/a/ver_cont?format=json&prefix=001o/&marker=&reverse=on',
'/v1/a/ver_cont?prefix=001o/&marker=&reverse=on',
swob.HTTPOk, {},
'[{"hash": "y", '
'"last_modified": "2014-11-21T14:23:02.206740", '
@ -672,7 +655,7 @@ class VersionedWritesTestCase(VersionedWritesBaseTestCase):
req_headers = self.app.headers[-1]
self.assertNotIn('x-if-delete-at', [h.lower() for h in req_headers])
prefix_listing_prefix = '/v1/a/ver_cont?format=json&prefix=001o/&'
prefix_listing_prefix = '/v1/a/ver_cont?prefix=001o/&'
self.assertEqual(self.app.calls, [
('GET', prefix_listing_prefix + 'marker=&reverse=on'),
('GET', '/v1/a/ver_cont/001o/2'),
@ -683,7 +666,7 @@ class VersionedWritesTestCase(VersionedWritesBaseTestCase):
def test_delete_latest_version_restores_marker_success(self):
self.app.register(
'GET',
'/v1/a/ver_cont?format=json&prefix=001o/&marker=&reverse=on',
'/v1/a/ver_cont?prefix=001o/&marker=&reverse=on',
swob.HTTPOk, {},
'[{"hash": "x", '
'"last_modified": "2014-11-21T14:23:02.206740", '
@ -731,7 +714,7 @@ class VersionedWritesTestCase(VersionedWritesBaseTestCase):
# in the base versioned container.
self.app.register(
'GET',
'/v1/a/ver_cont?format=json&prefix=001o/&marker=&reverse=on',
'/v1/a/ver_cont?prefix=001o/&marker=&reverse=on',
swob.HTTPOk, {},
'[{"hash": "y", '
'"last_modified": "2014-11-21T14:23:02.206740", '
@ -766,7 +749,7 @@ class VersionedWritesTestCase(VersionedWritesBaseTestCase):
self.assertEqual(len(self.authorized), 1)
self.assertRequestEqual(req, self.authorized[0])
prefix_listing_prefix = '/v1/a/ver_cont?format=json&prefix=001o/&'
prefix_listing_prefix = '/v1/a/ver_cont?prefix=001o/&'
self.assertEqual(self.app.calls, [
('GET', prefix_listing_prefix + 'marker=&reverse=on'),
('HEAD', '/v1/a/c/o'),
@ -787,7 +770,7 @@ class VersionedWritesTestCase(VersionedWritesBaseTestCase):
def test_delete_latest_version_doubled_up_markers_success(self):
self.app.register(
'GET', '/v1/a/ver_cont?format=json&prefix=001o/'
'GET', '/v1/a/ver_cont?prefix=001o/'
'&marker=&reverse=on',
swob.HTTPOk, {},
'[{"hash": "x", '
@ -905,7 +888,7 @@ class VersionedWritesTestCase(VersionedWritesBaseTestCase):
'DELETE', '/v1/a/c/o', swob.HTTPOk, {}, 'passed')
self.app.register(
'GET',
'/v1/a/ver_cont?format=json&prefix=001o/&marker=&reverse=on',
'/v1/a/ver_cont?prefix=001o/&marker=&reverse=on',
swob.HTTPOk, {},
'[{"hash": "y", '
'"last_modified": "2014-11-21T14:23:02.206740", '
@ -931,7 +914,7 @@ class VersionedWritesTestCase(VersionedWritesBaseTestCase):
self.assertEqual(len(self.authorized), 1)
self.assertRequestEqual(req, self.authorized[0])
prefix_listing_prefix = '/v1/a/ver_cont?format=json&prefix=001o/&'
prefix_listing_prefix = '/v1/a/ver_cont?prefix=001o/&'
self.assertEqual(self.app.calls, [
('GET', prefix_listing_prefix + 'marker=&reverse=on'),
('GET', '/v1/a/ver_cont/001o/1'),
@ -942,7 +925,7 @@ class VersionedWritesTestCase(VersionedWritesBaseTestCase):
def test_DELETE_on_expired_versioned_object(self):
self.app.register(
'GET',
'/v1/a/ver_cont?format=json&prefix=001o/&marker=&reverse=on',
'/v1/a/ver_cont?prefix=001o/&marker=&reverse=on',
swob.HTTPOk, {},
'[{"hash": "y", '
'"last_modified": "2014-11-21T14:23:02.206740", '
@ -979,7 +962,7 @@ class VersionedWritesTestCase(VersionedWritesBaseTestCase):
self.assertRequestEqual(req, self.authorized[0])
self.assertEqual(5, self.app.call_count)
prefix_listing_prefix = '/v1/a/ver_cont?format=json&prefix=001o/&'
prefix_listing_prefix = '/v1/a/ver_cont?prefix=001o/&'
self.assertEqual(self.app.calls, [
('GET', prefix_listing_prefix + 'marker=&reverse=on'),
('GET', '/v1/a/ver_cont/001o/2'),
@ -992,7 +975,7 @@ class VersionedWritesTestCase(VersionedWritesBaseTestCase):
authorize_call = []
self.app.register(
'GET',
'/v1/a/ver_cont?format=json&prefix=001o/&marker=&reverse=on',
'/v1/a/ver_cont?prefix=001o/&marker=&reverse=on',
swob.HTTPOk, {},
'[{"hash": "y", '
'"last_modified": "2014-11-21T14:23:02.206740", '
@ -1021,7 +1004,7 @@ class VersionedWritesTestCase(VersionedWritesBaseTestCase):
self.assertEqual(len(authorize_call), 1)
self.assertRequestEqual(req, authorize_call[0])
prefix_listing_prefix = '/v1/a/ver_cont?format=json&prefix=001o/&'
prefix_listing_prefix = '/v1/a/ver_cont?prefix=001o/&'
self.assertEqual(self.app.calls, [
('GET', prefix_listing_prefix + 'marker=&reverse=on'),
])
@ -1058,7 +1041,7 @@ class VersionedWritesOldContainersTestCase(VersionedWritesBaseTestCase):
self.app.register(
'DELETE', '/v1/a/c/o', swob.HTTPOk, {}, 'passed')
self.app.register(
'GET', '/v1/a/ver_cont?format=json&prefix=001o/&'
'GET', '/v1/a/ver_cont?prefix=001o/&'
'marker=&reverse=on',
swob.HTTPOk, {},
'[{"hash": "x", '
@ -1072,7 +1055,7 @@ class VersionedWritesOldContainersTestCase(VersionedWritesBaseTestCase):
'"name": "001o/2", '
'"content_type": "text/plain"}]')
self.app.register(
'GET', '/v1/a/ver_cont?format=json&prefix=001o/'
'GET', '/v1/a/ver_cont?prefix=001o/'
'&marker=001o/2',
swob.HTTPNotFound, {}, None)
self.app.register(
@ -1103,7 +1086,7 @@ class VersionedWritesOldContainersTestCase(VersionedWritesBaseTestCase):
req_headers = self.app.headers[-1]
self.assertNotIn('x-if-delete-at', [h.lower() for h in req_headers])
prefix_listing_prefix = '/v1/a/ver_cont?format=json&prefix=001o/&'
prefix_listing_prefix = '/v1/a/ver_cont?prefix=001o/&'
self.assertEqual(self.app.calls, [
('GET', prefix_listing_prefix + 'marker=&reverse=on'),
('GET', prefix_listing_prefix + 'marker=001o/2'),
@ -1114,7 +1097,7 @@ class VersionedWritesOldContainersTestCase(VersionedWritesBaseTestCase):
def test_DELETE_on_expired_versioned_object(self):
self.app.register(
'GET', '/v1/a/ver_cont?format=json&prefix=001o/&'
'GET', '/v1/a/ver_cont?prefix=001o/&'
'marker=&reverse=on',
swob.HTTPOk, {},
'[{"hash": "x", '
@ -1128,7 +1111,7 @@ class VersionedWritesOldContainersTestCase(VersionedWritesBaseTestCase):
'"name": "001o/2", '
'"content_type": "text/plain"}]')
self.app.register(
'GET', '/v1/a/ver_cont?format=json&prefix=001o/'
'GET', '/v1/a/ver_cont?prefix=001o/'
'&marker=001o/2',
swob.HTTPNotFound, {}, None)
@ -1156,7 +1139,7 @@ class VersionedWritesOldContainersTestCase(VersionedWritesBaseTestCase):
self.assertRequestEqual(req, self.authorized[0])
self.assertEqual(6, self.app.call_count)
prefix_listing_prefix = '/v1/a/ver_cont?format=json&prefix=001o/&'
prefix_listing_prefix = '/v1/a/ver_cont?prefix=001o/&'
self.assertEqual(self.app.calls, [
('GET', prefix_listing_prefix + 'marker=&reverse=on'),
('GET', prefix_listing_prefix + 'marker=001o/2'),
@ -1171,7 +1154,7 @@ class VersionedWritesOldContainersTestCase(VersionedWritesBaseTestCase):
self.app.register(
'DELETE', '/v1/a/c/o', swob.HTTPOk, {}, 'passed')
self.app.register(
'GET', '/v1/a/ver_cont?format=json&prefix=001o/&'
'GET', '/v1/a/ver_cont?prefix=001o/&'
'marker=&reverse=on',
swob.HTTPOk, {},
'[{"hash": "x", '
@ -1185,7 +1168,7 @@ class VersionedWritesOldContainersTestCase(VersionedWritesBaseTestCase):
'"name": "001o/2", '
'"content_type": "text/plain"}]')
self.app.register(
'GET', '/v1/a/ver_cont?format=json&prefix=001o/'
'GET', '/v1/a/ver_cont?prefix=001o/'
'&marker=001o/2',
swob.HTTPNotFound, {}, None)
self.app.register(
@ -1206,7 +1189,7 @@ class VersionedWritesOldContainersTestCase(VersionedWritesBaseTestCase):
self.assertEqual(status, '403 Forbidden')
self.assertEqual(len(authorize_call), 1)
self.assertRequestEqual(req, authorize_call[0])
prefix_listing_prefix = '/v1/a/ver_cont?format=json&prefix=001o/&'
prefix_listing_prefix = '/v1/a/ver_cont?prefix=001o/&'
self.assertEqual(self.app.calls, [
('GET', prefix_listing_prefix + 'marker=&reverse=on'),
('GET', prefix_listing_prefix + 'marker=001o/2'),
@ -1223,7 +1206,7 @@ class VersionedWritesOldContainersTestCase(VersionedWritesBaseTestCase):
# first container server can reverse
self.app.register(
'GET', '/v1/a/ver_cont?format=json&prefix=001o/&'
'GET', '/v1/a/ver_cont?prefix=001o/&'
'marker=&reverse=on',
swob.HTTPOk, {}, json.dumps(list(reversed(old_versions[2:]))))
# but all objects are already gone
@ -1239,21 +1222,21 @@ class VersionedWritesOldContainersTestCase(VersionedWritesBaseTestCase):
# second container server can't reverse
self.app.register(
'GET', '/v1/a/ver_cont?format=json&prefix=001o/&'
'GET', '/v1/a/ver_cont?prefix=001o/&'
'marker=001o/2&reverse=on',
swob.HTTPOk, {}, json.dumps(old_versions[3:]))
# subsequent requests shouldn't reverse
self.app.register(
'GET', '/v1/a/ver_cont?format=json&prefix=001o/&'
'GET', '/v1/a/ver_cont?prefix=001o/&'
'marker=&end_marker=001o/2',
swob.HTTPOk, {}, json.dumps(old_versions[:1]))
self.app.register(
'GET', '/v1/a/ver_cont?format=json&prefix=001o/&'
'GET', '/v1/a/ver_cont?prefix=001o/&'
'marker=001o/0&end_marker=001o/2',
swob.HTTPOk, {}, json.dumps(old_versions[1:2]))
self.app.register(
'GET', '/v1/a/ver_cont?format=json&prefix=001o/&'
'GET', '/v1/a/ver_cont?prefix=001o/&'
'marker=001o/1&end_marker=001o/2',
swob.HTTPOk, {}, '[]')
self.app.register(
@ -1272,7 +1255,7 @@ class VersionedWritesOldContainersTestCase(VersionedWritesBaseTestCase):
'CONTENT_LENGTH': '0'})
status, headers, body = self.call_vw(req)
self.assertEqual(status, '204 No Content')
prefix_listing_prefix = '/v1/a/ver_cont?format=json&prefix=001o/&'
prefix_listing_prefix = '/v1/a/ver_cont?prefix=001o/&'
self.assertEqual(self.app.calls, [
('GET', prefix_listing_prefix + 'marker=&reverse=on'),
('GET', '/v1/a/ver_cont/001o/4'),
@ -1298,7 +1281,7 @@ class VersionedWritesOldContainersTestCase(VersionedWritesBaseTestCase):
# first container server can reverse
self.app.register(
'GET', '/v1/a/ver_cont?format=json&prefix=001o/&'
'GET', '/v1/a/ver_cont?prefix=001o/&'
'marker=&reverse=on',
swob.HTTPOk, {}, json.dumps(list(reversed(old_versions[-2:]))))
# but both objects are already gone
@ -1311,21 +1294,21 @@ class VersionedWritesOldContainersTestCase(VersionedWritesBaseTestCase):
# second container server can't reverse
self.app.register(
'GET', '/v1/a/ver_cont?format=json&prefix=001o/&'
'GET', '/v1/a/ver_cont?prefix=001o/&'
'marker=001o/3&reverse=on',
swob.HTTPOk, {}, json.dumps(old_versions[4:]))
# subsequent requests shouldn't reverse
self.app.register(
'GET', '/v1/a/ver_cont?format=json&prefix=001o/&'
'GET', '/v1/a/ver_cont?prefix=001o/&'
'marker=&end_marker=001o/3',
swob.HTTPOk, {}, json.dumps(old_versions[:2]))
self.app.register(
'GET', '/v1/a/ver_cont?format=json&prefix=001o/&'
'GET', '/v1/a/ver_cont?prefix=001o/&'
'marker=001o/1&end_marker=001o/3',
swob.HTTPOk, {}, json.dumps(old_versions[2:3]))
self.app.register(
'GET', '/v1/a/ver_cont?format=json&prefix=001o/&'
'GET', '/v1/a/ver_cont?prefix=001o/&'
'marker=001o/2&end_marker=001o/3',
swob.HTTPOk, {}, '[]')
self.app.register(
@ -1344,7 +1327,7 @@ class VersionedWritesOldContainersTestCase(VersionedWritesBaseTestCase):
'CONTENT_LENGTH': '0'})
status, headers, body = self.call_vw(req)
self.assertEqual(status, '204 No Content')
prefix_listing_prefix = '/v1/a/ver_cont?format=json&prefix=001o/&'
prefix_listing_prefix = '/v1/a/ver_cont?prefix=001o/&'
self.assertEqual(self.app.calls, [
('GET', prefix_listing_prefix + 'marker=&reverse=on'),
('GET', '/v1/a/ver_cont/001o/4'),
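The listing paths asserted throughout this file follow one fixed shape. A minimal sketch (assumed helper name, mirroring the `001o/` length-prefixed object convention visible in the fixtures) of how such a path is built once the explicit `format=json` parameter is dropped; the middleware can request JSON via the Accept header instead:

```python
# Hypothetical helper illustrating the query strings checked above;
# the 3-hex-digit length prefix ('001' for a 1-char name) matches the
# '001o/' prefixes in the test fixtures.

def versions_listing_path(account, versions_container, object_name,
                          marker='', reverse=True, end_marker=None):
    prefix = '%03x%s/' % (len(object_name), object_name)
    query = 'prefix=%s&marker=%s' % (prefix, marker)
    if end_marker is not None:
        query += '&end_marker=%s' % end_marker
    if reverse:
        query += '&reverse=on'
    return '/v1/%s/%s?%s' % (account, versions_container, query)

assert versions_listing_path('a', 'ver_cont', 'o') == \
    '/v1/a/ver_cont?prefix=001o/&marker=&reverse=on'
assert versions_listing_path('a', 'ver_cont', 'o', marker='001o/2',
                             reverse=False) == \
    '/v1/a/ver_cont?prefix=001o/&marker=001o/2'
```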

View File

@ -400,7 +400,7 @@ class TestRingBuilder(unittest.TestCase):
for dev in rb._iter_devs():
dev['tiers'] = utils.tiers_for_dev(dev)
assign_parts = defaultdict(list)
rb._gather_parts_for_balance(assign_parts, replica_plan)
rb._gather_parts_for_balance(assign_parts, replica_plan, False)
max_run = 0
run = 0
last_part = 0
@ -1621,9 +1621,7 @@ class TestRingBuilder(unittest.TestCase):
rb.rebalance(seed=12345)
part_counts = self._partition_counts(rb, key='zone')
self.assertEqual(part_counts[0], 212)
self.assertEqual(part_counts[1], 211)
self.assertEqual(part_counts[2], 345)
self.assertEqual({0: 212, 1: 211, 2: 345}, part_counts)
# Now, devices 0 and 1 take 50% more than their fair shares by
# weight.
@ -1633,9 +1631,7 @@ class TestRingBuilder(unittest.TestCase):
rb.rebalance(seed=12345)
part_counts = self._partition_counts(rb, key='zone')
self.assertEqual(part_counts[0], 256)
self.assertEqual(part_counts[1], 256)
self.assertEqual(part_counts[2], 256)
self.assertEqual({0: 256, 1: 256, 2: 256}, part_counts)
# Devices 0 and 1 may take up to 75% over their fair share, but the
# placement algorithm only wants to spread things out evenly between
@ -1698,9 +1694,12 @@ class TestRingBuilder(unittest.TestCase):
rb.rebalance(seed=12345)
part_counts = self._partition_counts(rb, key='ip')
self.assertEqual(part_counts['127.0.0.1'], 238)
self.assertEqual(part_counts['127.0.0.2'], 237)
self.assertEqual(part_counts['127.0.0.3'], 293)
self.assertEqual({
'127.0.0.1': 237,
'127.0.0.2': 237,
'127.0.0.3': 294,
}, part_counts)
# Even out the weights: balance becomes perfect
for dev in rb.devs:
@ -2451,6 +2450,105 @@ class TestRingBuilder(unittest.TestCase):
(0, 0, '127.0.0.1', 3): [0, 256, 0, 0],
})
def test_undispersable_zone_converge_on_balance(self):
rb = ring.RingBuilder(8, 6, 0)
dev_id = 0
# 3 regions, 2 zones for each region, 1 server with only *one* device in
# each zone (this is an absolutely pathological case)
for r in range(3):
for z in range(2):
ip = '127.%s.%s.1' % (r, z)
dev_id += 1
rb.add_dev({'id': dev_id, 'region': r, 'zone': z,
'weight': 1000, 'ip': ip, 'port': 10000,
'device': 'd%s' % dev_id})
rb.rebalance(seed=7)
# sanity, all balanced and 0 dispersion
self.assertEqual(rb.get_balance(), 0)
self.assertEqual(rb.dispersion, 0)
# add one device to the server in z1 for each region, N.B. when we
# *balance* this topology we will have very bad dispersion (too much
# weight in z1 compared to z2!)
for r in range(3):
z = 0
ip = '127.%s.%s.1' % (r, z)
dev_id += 1
rb.add_dev({'id': dev_id, 'region': r, 'zone': z,
'weight': 1000, 'ip': ip, 'port': 10000,
'device': 'd%s' % dev_id})
changed_part, _, _ = rb.rebalance(seed=7)
# sanity, all parts, but only one replica of each, moved to new devices
self.assertEqual(changed_part, 2 ** 8)
# so the first time, rings are still unbalanced because we'll only move
# one replica of each part.
self.assertEqual(rb.get_balance(), 50.1953125)
self.assertEqual(rb.dispersion, 99.609375)
# N.B. since we mostly end up grabbing parts by "weight forced", some
# seeds, given some specific ring state, will randomly pick bad
# part-replicas that end up going back down onto the same devices
changed_part, _, _ = rb.rebalance(seed=7)
self.assertEqual(changed_part, 14)
# ... this isn't really a "desirable" behavior, but even with bad luck,
# things do get better
self.assertEqual(rb.get_balance(), 47.265625)
self.assertEqual(rb.dispersion, 99.609375)
# but if you stick with it, eventually the next rebalance will get to
# move "the right" part-replicas, resulting in near optimal balance
changed_part, _, _ = rb.rebalance(seed=7)
self.assertEqual(changed_part, 240)
self.assertEqual(rb.get_balance(), 0.390625)
self.assertEqual(rb.dispersion, 99.609375)
def test_undispersable_server_converge_on_balance(self):
rb = ring.RingBuilder(8, 6, 0)
dev_id = 0
# 3 zones, 2 servers for each zone, 2 devices for each server
for z in range(3):
for i in range(2):
ip = '127.0.%s.%s' % (z, i + 1)
for d in range(2):
dev_id += 1
rb.add_dev({'id': dev_id, 'region': 1, 'zone': z,
'weight': 1000, 'ip': ip, 'port': 10000,
'device': 'd%s' % dev_id})
rb.rebalance(seed=7)
# sanity, all balanced and 0 dispersion
self.assertEqual(rb.get_balance(), 0)
self.assertEqual(rb.dispersion, 0)
# add one device to the first server in each zone
for z in range(3):
ip = '127.0.%s.1' % z
dev_id += 1
rb.add_dev({'id': dev_id, 'region': 1, 'zone': z,
'weight': 1000, 'ip': ip, 'port': 10000,
'device': 'd%s' % dev_id})
changed_part, _, _ = rb.rebalance(seed=7)
# sanity, all parts, but only one replica of each, moved to new devices
self.assertEqual(changed_part, 2 ** 8)
# but the first time, those are still unbalanced because the ring builder
# can move only one replica of each part
self.assertEqual(rb.get_balance(), 16.9921875)
self.assertEqual(rb.dispersion, 59.765625)
rb.rebalance(seed=7)
# balance converges to around 0~1
self.assertGreaterEqual(rb.get_balance(), 0)
self.assertLess(rb.get_balance(), 1)
# dispersion doesn't get any worse
self.assertEqual(rb.dispersion, 59.765625)
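The convergence behavior these two tests assert can be illustrated with a toy model (not swift's actual placement algorithm): because each rebalance pass may move at most one replica of any partition, a badly unbalanced ring needs several passes to settle on its target.

```python
# Toy rebalancer: moves at most one replica of each part per pass toward
# under-target devices, so convergence takes multiple passes.

def toy_rebalance(assignment, target, max_moves_per_part=1):
    counts = {d: 0 for d in target}
    for replicas in assignment:
        for d in replicas:
            counts[d] += 1
    moved = 0
    for replicas in assignment:
        moves_left = max_moves_per_part
        for i, d in enumerate(replicas):
            if moves_left == 0:
                break
            if counts[d] > target[d]:
                # find an under-target device not already holding this part
                for cand in target:
                    if counts[cand] < target[cand] and cand not in replicas:
                        counts[d] -= 1
                        counts[cand] += 1
                        replicas[i] = cand
                        moved += 1
                        moves_left -= 1
                        break
    return moved

# 4 parts x 3 replicas all crowded onto devices a/b/c; target spreads
# weight onto the newly added devices d and e.
assignment = [['a', 'b', 'c'] for _ in range(4)]
target = {'a': 2, 'b': 2, 'c': 2, 'd': 3, 'e': 3}
passes = 0
while toy_rebalance(assignment, target) > 0:
    passes += 1
assert passes > 1  # needed more than one rebalance to converge
final = {d: 0 for d in target}
for replicas in assignment:
    for d in replicas:
        final[d] += 1
assert final == target  # final placement matches the target exactly
```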
def test_effective_overload(self):
rb = ring.RingBuilder(8, 3, 1)
# z0
@ -3595,12 +3693,12 @@ class TestGetRequiredOverload(unittest.TestCase):
rb.rebalance(seed=17)
self.assertEqual(rb.get_balance(), 1581.6406249999998)
# but despite the overall trend toward imbalance, in the tier
# with the huge device, the small device is trying to shed parts
# as effectively as it can (which would be useful if it was the
# only small device isolated in a tier with other huge devices
# trying to gobble up all the replicanths in the tier - see
# `test_one_small_guy_does_not_spoil_his_buddy`!)
# but despite the overall trend toward imbalance, in the tier with the
# huge device, we want to see the small device (d4) try to shed parts
# as effectively as it can to the huge device in the same tier (d5)
# this is a useful behavior any time, for whatever reason, a device
# within a tier wants parts from another device already in the same tier
# another example is `test_one_small_guy_does_not_spoil_his_buddy`
expected = {
0: 123,
1: 123,
@ -3691,6 +3789,45 @@ class TestGetRequiredOverload(unittest.TestCase):
self.assertEqual(rb.get_balance(), 30.46875)
# increasing overload moves towards one replica in each tier
rb.set_overload(0.3)
expected = {
0: 0.553443113772455,
1: 0.553443113772455,
2: 0.553443113772455,
3: 0.553443113772455,
4: 0.778443113772455,
5: 0.007784431137724551,
}
target_replicas = rb._build_target_replicas_by_tier()
self.assertEqual(expected, {t[-1]: r for (t, r) in
target_replicas.items()
if len(t) == 4})
# ... and as always increasing overload makes balance *worse*
rb.rebalance(seed=12)
self.assertEqual(rb.get_balance(), 30.46875)
# the little guy is really struggling to take his share, though
expected = {
0: 142,
1: 141,
2: 142,
3: 141,
4: 200,
5: 2,
}
self.assertEqual(expected, {
d['id']: d['parts'] for d in rb._iter_devs()})
# ... and you can see it in the balance!
expected = {
0: -7.367187499999986,
1: -8.019531249999986,
2: -7.367187499999986,
3: -8.019531249999986,
4: 30.46875,
5: 30.46875,
}
self.assertEqual(expected, rb._build_balance_per_dev())
rb.set_overload(0.5)
expected = {
0: 0.5232035928143712,
@ -3705,7 +3842,7 @@ class TestGetRequiredOverload(unittest.TestCase):
target_replicas.items()
if len(t) == 4})
# ... and as always increasing overload makes balance *worse*
# because the device is so small, balance gets bad quickly
rb.rebalance(seed=17)
self.assertEqual(rb.get_balance(), 95.703125)
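The per-device balance figures asserted above can be sketched with the assumed definition behind them: a device's balance is its percentage deviation from the part count it "wants" given its share of the total weight. This is a hedged illustration, not swift's `_build_balance_per_dev` itself.

```python
# Hypothetical balance computation: 100 * (assigned / wanted - 1),
# where wanted is the device's weight-proportional share of replicas.

def balance_per_dev(parts, replicas, devices):
    """devices: {dev_id: (weight, assigned_part_count)}."""
    total_weight = sum(w for w, _ in devices.values())
    balances = {}
    for dev_id, (weight, assigned) in devices.items():
        wanted = parts * replicas * weight / total_weight
        balances[dev_id] = 100.0 * (assigned / wanted - 1.0)
    return balances

# two equal-weight devices, one hoarding parts: fair share is 128 each
b = balance_per_dev(parts=256, replicas=1, devices={
    0: (100, 192),  # 50% over its fair share
    1: (100, 64),   # 50% under
})
assert b[0] == 50.0 and b[1] == -50.0
```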

View File

@ -1115,7 +1115,7 @@ class TestCooperativeRingBuilder(BaseTestCompositeBuilder):
rb1.rebalance()
self.assertEqual([rb1], update_calls)
self.assertEqual([rb1], can_part_move_calls.keys())
self.assertEqual(512, len(can_part_move_calls[rb1]))
self.assertEqual(768, len(can_part_move_calls[rb1]))
# two component builders with same parent builder
cb = CompositeRingBuilder()
@ -1139,8 +1139,8 @@ class TestCooperativeRingBuilder(BaseTestCompositeBuilder):
# rb1 is being rebalanced so gets checked, and rb2 also gets checked
self.assertEqual(sorted([rb1, rb2]), sorted(can_part_move_calls))
self.assertEqual(512, len(can_part_move_calls[rb1]))
self.assertEqual(512, len(can_part_move_calls[rb2]))
self.assertEqual(768, len(can_part_move_calls[rb1]))
self.assertEqual(768, len(can_part_move_calls[rb2]))
def test_save_then_load(self):
cb = CompositeRingBuilder()

View File

@ -619,10 +619,10 @@ class TestUtils(unittest.TestCase):
rb.rebalance(seed=100)
rb.validate()
self.assertEqual(rb.dispersion, 39.84375)
self.assertEqual(rb.dispersion, 55.46875)
report = dispersion_report(rb)
self.assertEqual(report['worst_tier'], 'r1z1')
self.assertEqual(report['max_dispersion'], 39.84375)
self.assertEqual(report['max_dispersion'], 44.921875)
def build_tier_report(max_replicas, placed_parts, dispersion,
replicas):
@ -633,16 +633,17 @@ class TestUtils(unittest.TestCase):
'replicas': replicas,
}
# Each node should store 256 partitions to avoid multiple replicas
# Each node should store less than or equal to 256 partitions to
# avoid multiple replicas.
# 2/5 of total weight * 768 ~= 307, so 307 - 256 = 51 partitions on each
# node in zone 1 are stored at least twice on the nodes
expected = [
['r1z1', build_tier_report(
2, 256, 39.84375, [0, 0, 154, 102])],
2, 256, 44.921875, [0, 0, 141, 115])],
['r1z1-127.0.0.1', build_tier_report(
1, 256, 19.921875, [0, 205, 51, 0])],
1, 242, 29.33884297520661, [14, 171, 71, 0])],
['r1z1-127.0.0.2', build_tier_report(
1, 256, 19.921875, [0, 205, 51, 0])],
1, 243, 29.218106995884774, [13, 172, 71, 0])],
]
report = dispersion_report(rb, 'r1z1[^/]*$', verbose=True)
graph = report['graph']
@ -667,9 +668,9 @@ class TestUtils(unittest.TestCase):
# can't move all the part-replicas in one rebalance
rb.rebalance(seed=100)
report = dispersion_report(rb, verbose=True)
self.assertEqual(rb.dispersion, 9.375)
self.assertEqual(report['worst_tier'], 'r1z1-127.0.0.1')
self.assertEqual(report['max_dispersion'], 7.18562874251497)
self.assertEqual(rb.dispersion, 11.71875)
self.assertEqual(report['worst_tier'], 'r1z1-127.0.0.2')
self.assertEqual(report['max_dispersion'], 8.875739644970414)
# do a second rebalance
rb.rebalance(seed=100)
report = dispersion_report(rb, verbose=True)
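The dispersion percentages these tests pin down can be illustrated with a toy metric (an assumed simplification of swift's definition): the share of part-replicas placed beyond one-per-tier, i.e. replicas that double up in a tier with another replica of the same part.

```python
# Toy dispersion: percentage of replicas that share a tier with another
# replica of the same part (assuming a fair max of one replica per tier).
from collections import Counter

def dispersion_pct(assignments, tier_of):
    """assignments: list of per-part replica device lists."""
    total = misplaced = 0
    for replicas in assignments:
        tiers = Counter(tier_of[d] for d in replicas)
        for count in tiers.values():
            total += count
            if count > 1:
                misplaced += count - 1  # extras beyond one per tier
    return 100.0 * misplaced / total

tier_of = {'d0': 'z0', 'd1': 'z0', 'd2': 'z1'}
# part 0 keeps its replicas apart; part 1 doubles up in z0
parts = [['d0', 'd2'], ['d0', 'd1']]
assert dispersion_pct(parts, tier_of) == 25.0
```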

View File

@ -20,7 +20,7 @@ import time
from six.moves import range
from test import safe_repr
from test.unit import MockTrue
from test.unit import mock_check_drive
from swift.common.swob import Request, HTTPException
from swift.common.http import HTTP_REQUEST_ENTITY_TOO_LARGE, \
@ -372,21 +372,49 @@ class TestConstraints(unittest.TestCase):
self.assertTrue('X-Delete-At' in req.headers)
self.assertEqual(req.headers['X-Delete-At'], expected)
def test_check_dir(self):
self.assertFalse(constraints.check_dir('', ''))
with mock.patch("os.path.isdir", MockTrue()):
self.assertTrue(constraints.check_dir('/srv', 'foo/bar'))
def test_check_drive_invalid_path(self):
root = '/srv/'
with mock_check_drive() as mocks:
self.assertIsNone(constraints.check_dir(root, 'foo?bar'))
self.assertIsNone(constraints.check_mount(root, 'foo bar'))
self.assertIsNone(constraints.check_drive(root, 'foo/bar', True))
self.assertIsNone(constraints.check_drive(root, 'foo%bar', False))
self.assertEqual([], mocks['isdir'].call_args_list)
self.assertEqual([], mocks['ismount'].call_args_list)
def test_check_mount(self):
self.assertFalse(constraints.check_mount('', ''))
with mock.patch("swift.common.utils.ismount", MockTrue()):
self.assertTrue(constraints.check_mount('/srv', '1'))
self.assertTrue(constraints.check_mount('/srv', 'foo-bar'))
self.assertTrue(constraints.check_mount(
'/srv', '003ed03c-242a-4b2f-bee9-395f801d1699'))
self.assertFalse(constraints.check_mount('/srv', 'foo bar'))
self.assertFalse(constraints.check_mount('/srv', 'foo/bar'))
self.assertFalse(constraints.check_mount('/srv', 'foo?bar'))
def test_check_drive_ismount(self):
root = '/srv'
path = 'sdb1'
with mock_check_drive(ismount=True) as mocks:
self.assertIsNone(constraints.check_dir(root, path))
self.assertIsNone(constraints.check_drive(root, path, False))
self.assertEqual([mock.call('/srv/sdb1'), mock.call('/srv/sdb1')],
mocks['isdir'].call_args_list)
self.assertEqual([], mocks['ismount'].call_args_list)
with mock_check_drive(ismount=True) as mocks:
self.assertEqual('/srv/sdb1', constraints.check_mount(root, path))
self.assertEqual('/srv/sdb1', constraints.check_drive(
root, path, True))
self.assertEqual([], mocks['isdir'].call_args_list)
self.assertEqual([mock.call('/srv/sdb1'), mock.call('/srv/sdb1')],
mocks['ismount'].call_args_list)
def test_check_drive_isdir(self):
root = '/srv'
path = 'sdb2'
with mock_check_drive(isdir=True) as mocks:
self.assertEqual('/srv/sdb2', constraints.check_dir(root, path))
self.assertEqual('/srv/sdb2', constraints.check_drive(
root, path, False))
self.assertEqual([mock.call('/srv/sdb2'), mock.call('/srv/sdb2')],
mocks['isdir'].call_args_list)
self.assertEqual([], mocks['ismount'].call_args_list)
with mock_check_drive(isdir=True) as mocks:
self.assertIsNone(constraints.check_mount(root, path))
self.assertIsNone(constraints.check_drive(root, path, True))
self.assertEqual([], mocks['isdir'].call_args_list)
self.assertEqual([mock.call('/srv/sdb2'), mock.call('/srv/sdb2')],
mocks['ismount'].call_args_list)
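The consolidated `check_drive()` contract these tests exercise can be sketched as follows (assumed behavior, with a hypothetical device-name pattern standing in for swift's actual validation): reject suspicious device names before touching the filesystem, then return the full path if the mount or directory check passes, else None.

```python
# Hedged sketch of the check_drive() contract: name validation first,
# then ismount (mount_check=True) or isdir (mount_check=False).
import os
import re

def check_drive(root, drive, mount_check):
    # hypothetical validation: letters, digits, dot, underscore, hyphen
    if not re.match(r'^[a-zA-Z0-9._-]+$', drive):
        return None  # rejects 'foo/bar', 'foo bar', 'foo?bar', 'foo%bar'
    path = os.path.join(root, drive)
    check = os.path.ismount if mount_check else os.path.isdir
    return path if check(path) else None

# invalid device names are rejected before any filesystem call,
# matching the "no isdir/ismount calls" assertions above
assert check_drive('/srv', 'foo/bar', True) is None
assert check_drive('/srv', 'foo bar', False) is None
assert check_drive('/srv', 'foo%bar', False) is None
```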
def test_check_float(self):
self.assertFalse(constraints.check_float(''))

Some files were not shown because too many files have changed in this diff