Merge master into feature/crypto
Change-Id: I9b3d8d4eb008ead3ce70b77fe84ee00d558814c0
This commit is contained in:
commit 24d3f51386
@@ -14,3 +14,5 @@ ChangeLog
 pycscope.*
 .idea
 MANIFEST
+
+test/probe/.noseids
AUTHORS (13 lines changed)
@@ -13,6 +13,16 @@ Jay Payne (letterj@gmail.com)
 Will Reese (wreese@gmail.com)
 Chuck Thier (cthier@gmail.com)
+
+CORE Emeritus
+-------------
+Chmouel Boudjnah (chmouel@enovance.com)
+Florian Hines (syn@ronin.io)
+Greg Holt (gholt@rackspace.com)
+Jay Payne (letterj@gmail.com)
+Peter Portante (peter.portante@redhat.com)
+Will Reese (wreese@gmail.com)
+Chuck Thier (cthier@gmail.com)
 
 Contributors
 ------------
 Mehdi Abaakouk (mehdi.abaakouk@enovance.com)
@@ -26,7 +36,6 @@ Yummy Bian (yummy.bian@gmail.com)
 Darrell Bishop (darrell@swiftstack.com)
 James E. Blair (jeblair@openstack.org)
 Fabien Boucher (fabien.boucher@enovance.com)
-Chmouel Boudjnah (chmouel@enovance.com)
 Clark Boylan (clark.boylan@gmail.com)
 Pádraig Brady (pbrady@redhat.com)
 Lorcan Browne (lorcan.browne@hp.com)
@@ -72,7 +81,6 @@ Gregory Haynes (greg@greghaynes.net)
 Doug Hellmann (doug.hellmann@dreamhost.com)
 Dan Hersam (dan.hersam@hp.com)
 Derek Higgins (derekh@redhat.com)
-Florian Hines (syn@ronin.io)
 Alex Holden (alex@alexjonasholden.com)
 Edward Hope-Morley (opentastic@gmail.com)
 Kun Huang (gareth@unitedstack.com)
@@ -145,7 +153,6 @@ Alex Pecoraro (alex.pecoraro@emc.com)
 Sascha Peilicke (saschpe@gmx.de)
 Constantine Peresypkin (constantine.peresypk@rackspace.com)
 Dieter Plaetinck (dieter@vimeo.com)
-Peter Portante (peter.portante@redhat.com)
 Dan Prince (dprince@redhat.com)
 Felipe Reyes (freyes@tty.cl)
 Matt Riedemann (mriedem@us.ibm.com)
@@ -1,3 +1,5 @@
+.. _formpost:
+
 ====================
 Form POST middleware
 ====================
@@ -8,12 +10,13 @@ path.
 
 You can upload objects directly to the Object Storage system from a
 browser by using the form **POST** middleware. This middleware uses
-account secret keys to generate a cryptographic signature for the
+account or container secret keys to generate a cryptographic signature for the
 request. This means that you do not need to send an authentication token
 in the ``X-Auth-Token`` header to perform the request.
 
 The form **POST** middleware uses the same secret keys as the temporary
-URL middleware uses. For information about how to set these keys, see account secret keys.
+URL middleware uses. For information about how to set these keys, see
+:ref:`secret_keys`.
 
 For information about the form **POST** middleware configuration
 options, see `Form
@@ -162,7 +165,8 @@ signature includes these elements from the form:
 is set to ``600`` seconds into the future.
 
 - The secret key. Set as the ``X-Account-Meta-Temp-URL-Key`` header
-  value.
+  value for accounts or ``X-Container-Meta-Temp-URL-Key`` header
+  value for containers. See :ref:`secret_keys` for more information.
 
 The following example code generates a signature for use with form
 **POST**:
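The form **POST** elements described in this hunk can be combined as in the following sketch. It assumes the conventional formpost layout of path, redirect, max_file_size, max_file_count and expires joined by newlines; the key value `MYKEY` and the helper name are illustrative, not part of the commit:

```python
import hmac
from hashlib import sha1
from time import time


def form_post_signature(path, redirect, max_file_size, max_file_count,
                        key, seconds=600):
    # HMAC-SHA1 over the newline-joined form elements, with the expiry
    # set the given number of seconds into the future.
    expires = int(time() + seconds)
    hmac_body = '%s\n%s\n%s\n%s\n%s' % (path, redirect, max_file_size,
                                        max_file_count, expires)
    sig = hmac.new(key.encode('utf-8'), hmac_body.encode('utf-8'),
                   sha1).hexdigest()
    return sig, expires


sig, expires = form_post_signature(
    '/v1/my_account/container', 'https://example.com/done.html',
    104857600, 10, 'MYKEY')
```

The resulting hex digest is what goes into the form's ``signature`` field.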
@@ -211,4 +215,3 @@ This example uses the **swift-form-signature** script to compute the
    -F signature=35129416ebda2f1a21b3c2b8939850dfc63d8f43 \
    -F redirect=https://example.com/done.html \
    -F file=@flower.jpg
-
@@ -23,9 +23,7 @@ Note
 ~~~~
 
 To use **POST** requests to upload objects to specific Object Storage
-locations, use form **POST** instead of temporary URL middleware. See
-`Form POST <http://docs.openstack.org/havana/config-reference/content/object-storage-form-post.html>`__
-in the *OpenStack Configuration Reference*.
+locations, use :doc:`form_post_middleware` instead of temporary URL middleware.
 
 Temporary URL format
 ~~~~~~~~~~~~~~~~~~~~
@@ -38,7 +36,7 @@ parameters:
 .. code::
 
    https://swift-cluster.example.com/v1/my_account/container/object
-   ?temp_url_sig=da39a3ee5e6b4b0d3255bfef95601890afd80709
+   ?temp_url_sig=da39a3ee5e6b4b0d3255bfef95601890afd80709
    &temp_url_expires=1323479485
    &filename=My+Test+File.pdf
 
@@ -64,23 +62,33 @@ object name. Object Storage returns this value in the ``Content-Disposition``
 response header. Browsers can interpret this file name value as a file
 attachment to be saved.
 
-Account secret keys
-~~~~~~~~~~~~~~~~~~~
+.. _secret_keys:
 
-Object Storage supports up to two secret keys. You set secret keys at
-the account level.
+Secret Keys
+~~~~~~~~~~~
 
-To set these keys, set one or both of the following request headers to
-arbitrary values:
+The cryptographic signature used in Temporary URLs and also in
+:doc:`form_post_middleware` uses a secret key. Object Storage allows you to
+store two secret key values per account, and two per container. When validating
+a request, Object Storage checks signatures against all keys. Using two keys at
+each level enables key rotation without invalidating existing temporary URLs.
+
+To set the keys at the account level, set one or both of the following
+request headers to arbitrary values on a **POST** request to the account:
 
 - ``X-Account-Meta-Temp-URL-Key``
 
 - ``X-Account-Meta-Temp-URL-Key-2``
 
-The arbitrary values serve as the secret keys.
+To set the keys at the container level, set one or both of the following
+request headers to arbitrary values on a **POST** or **PUT** request to the
+container:
 
-Object Storage checks signatures against both keys, if present, to
-enable key rotation without invalidating existing temporary URLs.
+- ``X-Container-Meta-Temp-URL-Key``
+
+- ``X-Container-Meta-Temp-URL-Key-2``
+
+The arbitrary values serve as the secret keys.
 
 For example, use the **swift post** command to set the secret key to
 *``MYKEY``*:
@@ -104,16 +112,16 @@ signature includes these elements:
 - The allowed method. Typically, **GET** or **PUT**.
 
 - Expiry time. In the example for the HMAC-SHA1 signature for temporary
-  URLs below, the expiry time is set to ``86400`` seconds (or 1 day)
+  URLs below, the expiry time is set to ``86400`` seconds (or 1 day)
   into the future.
 
 - The path. Starting with ``/v1/`` onwards and including a container
-  name and object. In the example below, the path is
+  name and object. In the example below, the path is
   ``/v1/my_account/container/object``. Do not URL-encode the path at
   this stage.
 
-- The secret key. Set as the ``X-Account-Meta-Temp-URL-Key`` header
-  value.
+- The secret key. Use one of the key values as described
+  in :ref:`secret_keys`.
 
 This sample Python code shows how to compute a signature for use with
 temporary URLs:
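The diff above elides the sample code itself; a minimal sketch of the temporary-URL signature described by these elements (method, expiry, un-encoded path, secret key) follows. The host name and the key `MYKEY` are the illustrative values used elsewhere in this doc:

```python
import hmac
from hashlib import sha1
from time import time


def temp_url_signature(method, path, key, seconds=86400):
    # HMAC-SHA1 over "METHOD\nEXPIRES\nPATH"; the path is deliberately
    # not URL-encoded at this stage, as the docs require.
    expires = int(time() + seconds)
    hmac_body = '%s\n%s\n%s' % (method, expires, path)
    sig = hmac.new(key.encode('utf-8'), hmac_body.encode('utf-8'),
                   sha1).hexdigest()
    return sig, expires


sig, expires = temp_url_signature(
    'GET', '/v1/my_account/container/object', 'MYKEY')
url = ('https://swift-cluster.example.com/v1/my_account/container/object'
       '?temp_url_sig=%s&temp_url_expires=%d' % (sig, expires))
```

When making the actual request, the URL (but not the signed path) should be properly URL-encoded.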
@@ -138,8 +146,8 @@ Do not URL-encode the path when you generate the HMAC-SHA1 signature.
 However, when you make the actual HTTP request, you should properly
 URL-encode the URL.
 
-The *``MYKEY``* value is the value you set in the
-``X-Account-Meta-Temp-URL-Key`` request header on the account.
+The *``MYKEY``* value is one of the key values as described
+in :ref:`secret_keys`.
 
 For more information, see `RFC 2104: HMAC: Keyed-Hashing for Message
 Authentication <http://www.ietf.org/rfc/rfc2104.txt>`__.
@@ -80,11 +80,22 @@ Custom Logger Hooks
 
 Storage Backends (DiskFile API implementations)
 -----------------------------------------------
-* `SwiftOnFile <https://github.com/swiftonfile/swiftonfile>`_ - Enables objects created using Swift API to be accessed as files on a POSIX filesystem and vice versa.
+* `Swift-on-File <https://github.com/stackforge/swiftonfile>`_ - Enables objects created using Swift API to be accessed as files on a POSIX filesystem and vice versa.
 * `swift-ceph-backend <https://github.com/stackforge/swift-ceph-backend>`_ - Ceph RADOS object server implementation for Swift.
 * `kinetic-swift <https://github.com/swiftstack/kinetic-swift>`_ - Seagate Kinetic Drive as a backend for Swift.
 * `swift-scality-backend <https://github.com/scality/ScalitySproxydSwift>`_ - Scality sproxyd object server implementation for Swift.
 
+Developer Tools
+---------------
+* `vagrant-swift-all-in-one
+  <https://github.com/swiftstack/vagrant-swift-all-in-one>`_ - Quickly set up a
+  standard development environment using Vagrant and Chef cookbooks in an
+  Ubuntu virtual machine.
+* `SAIO Ansible playbook <https://github.com/thiagodasilva/swift-aio>`_ -
+  Quickly set up a standard development environment using Vagrant and Ansible
+  in a Fedora virtual machine (with built-in `Swift-on-File
+  <https://github.com/stackforge/swiftonfile>`_ support).
+
 Other
 -----
 
@@ -10,15 +10,11 @@ This section documents setting up a virtual machine for doing Swift
 development. The virtual machine will emulate running a four node Swift
 cluster.
 
-* Get an Ubuntu 12.04 LTS (Precise Pangolin) server image or try something
+* Get an Ubuntu 14.04 LTS server image or try something
   like Fedora/CentOS.
 
 * Create guest virtual machine from the image.
 
-Additional information about setting up a Swift development snapshot on other
-distributions is available on the wiki at
-http://wiki.openstack.org/SAIOInstructions.
-
 ----------------------------
 What's in a <your-user-name>
 ----------------------------
@@ -7,38 +7,34 @@ System Requirements
 -------------------
 
 Swift development currently targets Ubuntu Server 14.04, but should work on
-most Linux platforms with the following software:
+most Linux platforms.
 
-* Python 2.6 or 2.7
+Swift is written in Python and has these dependencies:
+
+* Python 2.7
 * rsync 3.0
+* The Python packages listed in `the requirements file <https://github.com/openstack/swift/blob/master/requirements.txt>`_
+* Testing additionally requires `the test dependencies <https://github.com/openstack/swift/blob/master/test-requirements.txt>`_
 
-And the following python libraries:
-
-* Eventlet 0.9.15
-* Setuptools
-* Simplejson
-* Xattr
-* Nose
-* Sphinx
-* Netifaces
-* Dnspython
-* Pastedeploy
+Python 2.6 should work, but it's not actively tested. There is no current
+support for Python 3.
 
 -------------
 Getting Swift
 -------------
 
-Swift's source code is hosted on github and managed with git. The current trunk can be checked out like this:
+Swift's source code is hosted on github and managed with git. The current
+trunk can be checked out like this:
 
 ``git clone https://github.com/openstack/swift.git``
 
-A source tarball for the latest release of Swift is available on the `launchpad project page <https://launchpad.net/swift>`_.
+A source tarball for the latest release of Swift is available on the
+`launchpad project page <https://launchpad.net/swift>`_.
 
-Prebuilt packages for Ubuntu are available starting with Natty, or from PPAs for earlier releases.
+Prebuilt packages for Ubuntu and RHEL variants are available.
 
 * `Swift Ubuntu Packages <https://launchpad.net/ubuntu/+source/swift>`_
 * `Swift PPA Archive <https://launchpad.net/~swift-core/+archive/release>`_
+* `Swift RDO Packages <https://openstack.redhat.com/Repositories>`_
 
 -----------
 Development
@@ -47,13 +43,26 @@ Development
 To get started with development with Swift, or to just play around, the
 following docs will be useful:
 
-* :doc:`Swift All in One <development_saio>` - Set up a VM with Swift installed
+* :doc:`Swift All in One <development_saio>` - Set up a VM with Swift
+  installed
 * :doc:`Development Guidelines <development_guidelines>`
+* `Associated Projects <http://docs.openstack.org/developer/swift/associated_projects.html>`_
 
+--------------------------
+CLI client and SDK library
+--------------------------
+
+There are many clients in the `ecosystem <http://docs.openstack.org/developer/swift/associated_projects.html#application-bindings>`_. The official CLI
+and SDK is python-swiftclient.
+
+* `Source code <https://github.com/openstack/python-swiftclient>`_
+* `Python Package Index <https://pypi.python.org/pypi/python-swiftclient>`_
+
 ----------
 Production
 ----------
 
-If you want to set up and configure Swift for a production cluster, the following doc should be useful:
+If you want to set up and configure Swift for a production cluster, the
+following doc should be useful:
 
 * :doc:`Multiple Server Swift Installation <howto_installmultinode>`
@@ -32,7 +32,7 @@ is::
     client_ip remote_addr datetime request_method request_path protocol
         status_int referer user_agent auth_token bytes_recvd bytes_sent
         client_etag transaction_id headers request_time source log_info
-        request_start_time request_end_time
+        request_start_time request_end_time policy_index
 
 =================== ==========================================================
 **Log Field**       **Value**
@@ -66,6 +66,7 @@ log_info            Various info that may be useful for diagnostics, e.g. the
                     value of any x-delete-at header.
 request_start_time  High-resolution timestamp from the start of the request.
 request_end_time    High-resolution timestamp from the end of the request.
+policy_index        The value of the storage policy index.
 =================== ==========================================================
 
 In one log line, all of the above fields are space-separated and url-encoded.
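Because each field is individually url-encoded and the fields are space-separated, a proxy access log line can be split naively and decoded field by field. A minimal sketch (the helper name and sample values are illustrative, not from the commit):

```python
try:
    from urllib.parse import quote, unquote   # Python 3
except ImportError:
    from urllib import quote, unquote         # Python 2

# Field order as documented for proxy-server access log lines.
PROXY_LOG_FIELDS = [
    'client_ip', 'remote_addr', 'datetime', 'request_method',
    'request_path', 'protocol', 'status_int', 'referer', 'user_agent',
    'auth_token', 'bytes_recvd', 'bytes_sent', 'client_etag',
    'transaction_id', 'headers', 'request_time', 'source', 'log_info',
    'request_start_time', 'request_end_time', 'policy_index',
]


def parse_proxy_log_line(line):
    # Each value is url-encoded, so a plain split on spaces is safe
    # even when a decoded value would contain spaces.
    return dict(zip(PROXY_LOG_FIELDS,
                    (unquote(v) for v in line.split(' '))))


# Build a synthetic line the same way the proxy would emit one.
line = ' '.join(quote(v, safe='') for v in [
    '1.2.3.4', '1.2.3.4', '26/Apr/2015/12/00/00', 'GET', '/v1/a/c/o',
    'HTTP/1.0', '200', '-', 'curl', '-', '-', '42', '-', 'tx123', '-',
    '0.0200', '-', '-', '1430049600.1', '1430049600.2', '0'])
parsed = parse_proxy_log_line(line)
```

`parsed['policy_index']` then carries the new field this commit documents.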
@@ -100,6 +101,7 @@ TA                      :ref:`common_tempauth`
 DLO                     :ref:`dynamic-large-objects`
 LE                      :ref:`list_endpoints`
+KS                      :ref:`keystoneauth`
 RL                      :ref:`ratelimit`
 ======================= =============================
 
@@ -114,7 +116,7 @@ these log lines is::
 
     remote_addr - - [datetime] "request_method request_path" status_int
         content_length "referer" "transaction_id" "user_agent" request_time
-        additional_info
+        additional_info server_pid policy_index
 
 =================== ==========================================================
 **Log Field**       **Value**
@@ -136,4 +138,5 @@ user_agent          The value of the HTTP User-Agent header. Swift services
 request_time        The duration of the request.
 additional_info     Additional useful information.
 server_pid          The process id of the server
+policy_index        The value of the storage policy index.
 =================== ==========================================================
@@ -75,7 +75,7 @@ weight  float   The relative weight of the device in comparison to other
                 back into balance a device that has ended up with more or less
                 data than desired over time. A good average weight of 100.0
                 allows flexibility in lowering the weight later if necessary.
-ip      string  The IP address of the server containing the device.
+ip      string  The IP address or hostname of the server containing the device.
 port    int     The TCP port the listening server process uses that serves
                 requests for the device.
 device  string  The on disk name of the device on the server.
@@ -1,3 +1,5 @@
+.. _ratelimit:
+
 =============
 Rate Limiting
 =============
@@ -203,7 +203,7 @@ use = egg:swift#proxy
 # These are the headers whose values will only be shown to swift_owners. The
 # exact definition of a swift_owner is up to the auth system in use, but
 # usually indicates administrative responsibilities.
-# swift_owner_headers = x-container-read, x-container-write, x-container-sync-key, x-container-sync-to, x-account-meta-temp-url-key, x-account-meta-temp-url-key-2, x-account-access-control
+# swift_owner_headers = x-container-read, x-container-write, x-container-sync-key, x-container-sync-to, x-account-meta-temp-url-key, x-account-meta-temp-url-key-2, x-container-meta-temp-url-key, x-container-meta-temp-url-key-2, x-account-access-control
 
 [filter:tempauth]
 use = egg:swift#tempauth
@@ -28,10 +28,11 @@ from swift.common.direct_client import direct_delete_container, \
     direct_delete_object, direct_get_container
 from swift.common.exceptions import ClientException
 from swift.common.ring import Ring
+from swift.common.ring.utils import is_local_device
 from swift.common.utils import get_logger, whataremyips, ismount, \
     config_true_value, Timestamp
 from swift.common.daemon import Daemon
-from swift.common.storage_policy import POLICIES
+from swift.common.storage_policy import POLICIES, PolicyError
 
 
 class AccountReaper(Daemon):
@@ -159,8 +160,8 @@ class AccountReaper(Daemon):
                 if not partition.isdigit():
                     continue
                 nodes = self.get_account_ring().get_part_nodes(int(partition))
-                if nodes[0]['ip'] not in self.myips or \
-                        not os.path.isdir(partition_path):
+                if (not is_local_device(self.myips, None, nodes[0]['ip'], None)
+                        or not os.path.isdir(partition_path)):
                     continue
                 for suffix in os.listdir(partition_path):
                     suffix_path = os.path.join(partition_path, suffix)
@@ -353,6 +354,10 @@ class AccountReaper(Daemon):
                     break
                 try:
                     policy_index = headers.get('X-Backend-Storage-Policy-Index', 0)
+                    policy = POLICIES.get_by_index(policy_index)
+                    if not policy:
+                        self.logger.error('ERROR: invalid storage policy index: %r'
+                                          % policy_index)
                     for obj in objects:
                         if isinstance(obj['name'], unicode):
                             obj['name'] = obj['name'].encode('utf8')
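The hunk above validates the backend policy index before acting on the listing. The lookup-and-validate pattern can be sketched with a minimal stand-in registry; in Swift the real object is `swift.common.storage_policy.POLICIES`, and the class and policy names below are hypothetical:

```python
class StoragePolicy(object):
    def __init__(self, idx, name):
        self.idx = idx
        self.name = name


class StoragePolicyCollection(object):
    def __init__(self, policies):
        self.by_index = dict((int(p.idx), p) for p in policies)

    def get_by_index(self, index):
        # Header values arrive as strings; anything that does not
        # resolve to a known integer index means "no such policy".
        try:
            return self.by_index[int(index)]
        except (KeyError, TypeError, ValueError):
            return None


POLICIES = StoragePolicyCollection(
    [StoragePolicy(0, 'default'), StoragePolicy(1, 'extra')])

policy = POLICIES.get_by_index('1')    # header value as a string
missing = POLICIES.get_by_index('999')  # unknown index -> None
```

A falsy result is what triggers the reaper's "invalid storage policy index" error path.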
@@ -428,7 +433,12 @@ class AccountReaper(Daemon):
         of the container node dicts.
         """
         container_nodes = list(container_nodes)
-        ring = self.get_object_ring(policy_index)
+        try:
+            ring = self.get_object_ring(policy_index)
+        except PolicyError:
+            self.stats_objects_remaining += 1
+            self.logger.increment('objects_remaining')
+            return
         part, nodes = ring.get_nodes(account, container, obj)
         successes = 0
         failures = 0
@@ -692,6 +692,7 @@ class SwiftRecon(object):
         objq = {}
         conq = {}
         acctq = {}
+        stats = {}
         recon = Scout("quarantined", self.verbose, self.suppress_errors,
                       self.timeout)
         print("[%s] Checking quarantine" % self._ptime())
@@ -700,7 +701,12 @@ class SwiftRecon(object):
             objq[url] = response['objects']
             conq[url] = response['containers']
             acctq[url] = response['accounts']
-            stats = {"objects": objq, "containers": conq, "accounts": acctq}
+            if response['policies']:
+                for key in response['policies']:
+                    pkey = "objects_%s" % key
+                    stats.setdefault(pkey, {})
+                    stats[pkey][url] = response['policies'][key]['objects']
+        stats.update({"objects": objq, "containers": conq, "accounts": acctq})
         for item in stats:
             if len(stats[item]) > 0:
                 computed = self._gen_stats(stats[item].values(),
@@ -874,8 +880,8 @@ class SwiftRecon(object):
         args.add_option('--top', type='int', metavar='COUNT', default=0,
                         help='Also show the top COUNT entries in rank order.')
         args.add_option('--all', action="store_true",
-                        help="Perform all checks. Equal to -arudlq --md5 "
-                             "--sockstat")
+                        help="Perform all checks. Equal to \t\t\t-arudlq "
+                             "--md5 --sockstat --auditor --updater --expirer")
        args.add_option('--region', type="int",
                        help="Only query servers in specified region")
        args.add_option('--zone', '-z', type="int",
@@ -16,6 +16,7 @@
 
 from errno import EEXIST
 from itertools import islice, izip
+from operator import itemgetter
 from os import mkdir
 from os.path import basename, abspath, dirname, exists, join as pathjoin
 from sys import argv as sys_argv, exit, stderr
@@ -27,10 +28,12 @@ import math
 from swift.common import exceptions
 from swift.common.ring import RingBuilder, Ring
 from swift.common.ring.builder import MAX_BALANCE
-from swift.common.utils import lock_parent_directory
-from swift.common.ring.utils import parse_search_value, parse_args, \
-    build_dev_from_opts, parse_builder_ring_filename_args, find_parts, \
+from swift.common.ring.utils import validate_args, \
+    validate_and_normalize_ip, build_dev_from_opts, \
+    parse_builder_ring_filename_args, parse_search_value, \
+    parse_search_values_from_opts, parse_change_values_from_opts, \
     dispersion_report
+from swift.common.utils import lock_parent_directory
 
 MAJOR_VERSION = 1
 MINOR_VERSION = 3
@@ -55,6 +58,106 @@ def format_device(dev):
            '"%(meta)s"' % copy_dev)
 
 
+def _parse_search_values(argvish):
+
+    new_cmd_format, opts, args = validate_args(argvish)
+
+    # We'll either parse the all-in-one-string format or the
+    # --options format,
+    # but not both. If both are specified, raise an error.
+    try:
+        search_values = {}
+        if len(args) > 0:
+            if new_cmd_format or len(args) != 1:
+                print Commands.search.__doc__.strip()
+                exit(EXIT_ERROR)
+            search_values = parse_search_value(args[0])
+        else:
+            search_values = parse_search_values_from_opts(opts)
+        return search_values
+    except ValueError as e:
+        print e
+        exit(EXIT_ERROR)
+
+
+def _find_parts(devs):
+    devs = [d['id'] for d in devs]
+    if not devs or not builder._replica2part2dev:
+        return None
+
+    partition_count = {}
+    for replica in builder._replica2part2dev:
+        for partition, device in enumerate(replica):
+            if device in devs:
+                if partition not in partition_count:
+                    partition_count[partition] = 0
+                partition_count[partition] += 1
+
+    # Sort by number of found replicas to keep the output format
+    sorted_partition_count = sorted(
+        partition_count.iteritems(), key=itemgetter(1), reverse=True)
+
+    return sorted_partition_count
+
+
+def _parse_list_parts_values(argvish):
+
+    new_cmd_format, opts, args = validate_args(argvish)
+
+    # We'll either parse the all-in-one-string format or the
+    # --options format,
+    # but not both. If both are specified, raise an error.
+    try:
+        devs = []
+        if len(args) > 0:
+            if new_cmd_format:
+                print Commands.list_parts.__doc__.strip()
+                exit(EXIT_ERROR)
+
+            for arg in args:
+                devs.extend(
+                    builder.search_devs(parse_search_value(arg)) or [])
+        else:
+            devs.extend(builder.search_devs(
+                parse_search_values_from_opts(opts)) or [])
+
+        return devs
+    except ValueError as e:
+        print e
+        exit(EXIT_ERROR)
+
+
+def _parse_address(rest):
+    if rest.startswith('['):
+        # remove first [] for ip
+        rest = rest.replace('[', '', 1).replace(']', '', 1)
+
+    pos = 0
+    while (pos < len(rest) and
+           not (rest[pos] == 'R' or rest[pos] == '/')):
+        pos += 1
+    address = rest[:pos]
+    rest = rest[pos:]
+
+    port_start = address.rfind(':')
+    if port_start == -1:
+        raise ValueError('Invalid port in add value')
+
+    ip = address[:port_start]
+    try:
+        port = int(address[(port_start + 1):])
+    except (TypeError, ValueError):
+        raise ValueError(
+            'Invalid port %s in add value' % address[port_start:])
+
+    # if this is an ipv6 address then we want to convert it
+    # to all lowercase and use its fully expanded representation
+    # to make searches easier
+    ip = validate_and_normalize_ip(ip)
+
+    return (ip, port, rest)
+
+
 def _parse_add_values(argvish):
     """
     Parse devices to add as specified on the command line.
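`_parse_address` above splits an add-value tail of the form `<ip>:<port>` or `[<ipv6>]:<port>`, stopping at the optional replication marker `R` or the device separator `/`. A simplified re-implementation (omitting the `validate_and_normalize_ip` step, which needs the Swift ring utilities) illustrates the behavior:

```python
def parse_address(rest):
    # Strip the first [] pair so IPv6 literals parse like IPv4 ones.
    if rest.startswith('['):
        rest = rest.replace('[', '', 1).replace(']', '', 1)

    # The address runs until the replication marker 'R' or device '/'.
    pos = 0
    while pos < len(rest) and rest[pos] not in ('R', '/'):
        pos += 1
    address, rest = rest[:pos], rest[pos:]

    # The port is everything after the last colon, so IPv6 colons
    # inside the address do not confuse the split.
    ip, sep, port = address.rpartition(':')
    if not sep:
        raise ValueError('Invalid port in add value')
    return ip, int(port), rest


ip, port, rest = parse_address('1.2.3.4:6010R1.2.3.5:6110/sdb1')
```

Here the leftover `rest` still carries the replication section, which the caller feeds back through the same parser.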
@@ -63,23 +166,17 @@ def _parse_add_values(argvish):
 
     :returns: array of device dicts
     """
+    new_cmd_format, opts, args = validate_args(argvish)
 
-    opts, args = parse_args(argvish)
-
-    # We'll either parse the all-in-one-string format or the --options format,
+    # We'll either parse the all-in-one-string format or the
+    # --options format,
     # but not both. If both are specified, raise an error.
-    opts_used = opts.region or opts.zone or opts.ip or opts.port or \
-        opts.device or opts.weight or opts.meta
-
-    if len(args) > 0 and opts_used:
-        print Commands.add.__doc__.strip()
-        exit(EXIT_ERROR)
-    elif len(args) > 0:
-        if len(args) % 2 != 0:
+    parsed_devs = []
+    if len(args) > 0:
+        if new_cmd_format or len(args) % 2 != 0:
             print Commands.add.__doc__.strip()
             exit(EXIT_ERROR)
 
-        parsed_devs = []
         devs_and_weights = izip(islice(args, 0, len(args), 2),
                                 islice(args, 1, len(args), 2))
 
@@ -93,12 +190,11 @@ def _parse_add_values(argvish):
             region = int(devstr[1:i])
             rest = devstr[i:]
         else:
-            stderr.write("WARNING: No region specified for %s. "
-                         "Defaulting to region 1.\n" % devstr)
+            stderr.write('WARNING: No region specified for %s. '
+                         'Defaulting to region 1.\n' % devstr)
 
         if not rest.startswith('z'):
-            print 'Invalid add value: %s' % devstr
-            exit(EXIT_ERROR)
+            raise ValueError('Invalid add value: %s' % devstr)
         i = 1
         while i < len(rest) and rest[i].isdigit():
             i += 1
@@ -106,64 +202,18 @@ def _parse_add_values(argvish):
         rest = rest[i:]
 
         if not rest.startswith('-'):
-            print 'Invalid add value: %s' % devstr
-            print "The on-disk ring builder is unchanged.\n"
-            exit(EXIT_ERROR)
-        i = 1
-        if rest[i] == '[':
-            i += 1
-            while i < len(rest) and rest[i] != ']':
-                i += 1
-            i += 1
-            ip = rest[1:i].lstrip('[').rstrip(']')
-            rest = rest[i:]
-        else:
-            while i < len(rest) and rest[i] in '0123456789.':
-                i += 1
-            ip = rest[1:i]
-            rest = rest[i:]
+            raise ValueError('Invalid add value: %s' % devstr)
 
-        if not rest.startswith(':'):
-            print 'Invalid add value: %s' % devstr
-            print "The on-disk ring builder is unchanged.\n"
-            exit(EXIT_ERROR)
-        i = 1
-        while i < len(rest) and rest[i].isdigit():
-            i += 1
-        port = int(rest[1:i])
-        rest = rest[i:]
+        ip, port, rest = _parse_address(rest[1:])
 
         replication_ip = ip
         replication_port = port
         if rest.startswith('R'):
-            i = 1
-            if rest[i] == '[':
-                i += 1
-                while i < len(rest) and rest[i] != ']':
-                    i += 1
-                i += 1
-                replication_ip = rest[1:i].lstrip('[').rstrip(']')
-                rest = rest[i:]
-            else:
-                while i < len(rest) and rest[i] in '0123456789.':
-                    i += 1
-                replication_ip = rest[1:i]
-                rest = rest[i:]
-
-            if not rest.startswith(':'):
-                print 'Invalid add value: %s' % devstr
-                print "The on-disk ring builder is unchanged.\n"
-                exit(EXIT_ERROR)
-            i = 1
-            while i < len(rest) and rest[i].isdigit():
-                i += 1
-            replication_port = int(rest[1:i])
-            rest = rest[i:]
-
+            replication_ip, replication_port, rest = \
+                _parse_address(rest[1:])
         if not rest.startswith('/'):
-            print 'Invalid add value: %s' % devstr
-            print "The on-disk ring builder is unchanged.\n"
-            exit(EXIT_ERROR)
+            raise ValueError(
+                'Invalid add value: %s' % devstr)
         i = 1
         while i < len(rest) and rest[i] != '_':
             i += 1
@ -174,32 +224,227 @@ def _parse_add_values(argvish):
|
|||
if rest.startswith('_'):
|
||||
meta = rest[1:]
|
||||
|
||||
try:
|
||||
weight = float(weightstr)
|
||||
except ValueError:
|
||||
print 'Invalid weight value: %s' % weightstr
|
||||
print "The on-disk ring builder is unchanged.\n"
|
||||
exit(EXIT_ERROR)
|
||||
weight = float(weightstr)
|
||||
|
||||
if weight < 0:
|
||||
print 'Invalid weight value (must be positive): %s' % weightstr
|
||||
print "The on-disk ring builder is unchanged.\n"
|
||||
exit(EXIT_ERROR)
|
||||
raise ValueError('Invalid weight value: %s' % devstr)
|
||||
|
||||
parsed_devs.append({'region': region, 'zone': zone, 'ip': ip,
|
||||
'port': port, 'device': device_name,
|
||||
'replication_ip': replication_ip,
|
||||
'replication_port': replication_port,
|
||||
'weight': weight, 'meta': meta})
|
||||
return parsed_devs
|
||||
else:
|
||||
try:
|
||||
dev = build_dev_from_opts(opts)
|
||||
except ValueError as e:
|
||||
print e
|
||||
print "The on-disk ring builder is unchanged.\n"
|
||||
parsed_devs.append(build_dev_from_opts(opts))
|
||||
|
||||
return parsed_devs
|
||||
|
||||
|
||||
def _set_weight_values(devs, weight):
|
||||
if not devs:
|
||||
print('Search value matched 0 devices.\n'
|
||||
'The on-disk ring builder is unchanged.')
|
||||
exit(EXIT_ERROR)
|
||||
|
||||
if len(devs) > 1:
|
||||
print 'Matched more than one device:'
|
||||
for dev in devs:
|
||||
print ' %s' % format_device(dev)
|
||||
if raw_input('Are you sure you want to update the weight for '
|
||||
'these %s devices? (y/N) ' % len(devs)) != 'y':
|
||||
print 'Aborting device modifications'
|
||||
exit(EXIT_ERROR)
|
||||
return [dev]
|
||||
|
||||
for dev in devs:
|
||||
builder.set_dev_weight(dev['id'], weight)
|
||||
print '%s weight set to %s' % (format_device(dev),
|
||||
dev['weight'])
|
||||
|
||||
|
||||
def _parse_set_weight_values(argvish):
    new_cmd_format, opts, args = validate_args(argvish)

    # We'll either parse the all-in-one-string format or the
    # --options format,
    # but not both. If both are specified, raise an error.
    try:
        devs = []
        if not new_cmd_format:
            if len(args) % 2 != 0:
                print Commands.set_weight.__doc__.strip()
                exit(EXIT_ERROR)

            devs_and_weights = izip(islice(argvish, 0, len(argvish), 2),
                                    islice(argvish, 1, len(argvish), 2))
            for devstr, weightstr in devs_and_weights:
                devs.extend(builder.search_devs(
                    parse_search_value(devstr)) or [])
                weight = float(weightstr)
                _set_weight_values(devs, weight)
        else:
            if len(args) != 1:
                print Commands.set_weight.__doc__.strip()
                exit(EXIT_ERROR)

            devs.extend(builder.search_devs(
                parse_search_values_from_opts(opts)) or [])
            weight = float(args[0])
            _set_weight_values(devs, weight)
    except ValueError as e:
        print e
        exit(EXIT_ERROR)


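The `izip(islice(...), islice(...))` idiom above walks the positional arguments two at a time, pairing each search value with the weight that follows it. The pairing can be tried standalone; this sketch simply uses Python 3's built-in `zip` in place of `itertools.izip`:

```python
from itertools import islice

def pair_devs_and_weights(argvish):
    # Pair argvish[0] with argvish[1], argvish[2] with argvish[3], and so on,
    # mirroring the izip(islice(...), islice(...)) pairing used by
    # _parse_set_weight_values (izip is spelled zip in Python 3).
    return list(zip(islice(argvish, 0, None, 2),
                    islice(argvish, 1, None, 2)))

print(pair_devs_and_weights(['d100', '50.0', 'z2-1.2.3.4', '75.5']))
```

Note that, like `izip`, `zip` silently drops a trailing unpaired element, which is why the command validates `len(args) % 2 != 0` up front.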
def _set_info_values(devs, change):
    if not devs:
        print("Search value matched 0 devices.\n"
              "The on-disk ring builder is unchanged.")
        exit(EXIT_ERROR)

    if len(devs) > 1:
        print 'Matched more than one device:'
        for dev in devs:
            print '    %s' % format_device(dev)
        if raw_input('Are you sure you want to update the info for '
                     'these %s devices? (y/N) ' % len(devs)) != 'y':
            print 'Aborting device modifications'
            exit(EXIT_ERROR)

    for dev in devs:
        orig_dev_string = format_device(dev)
        test_dev = dict(dev)
        for key in change:
            test_dev[key] = change[key]
        for check_dev in builder.devs:
            if not check_dev or check_dev['id'] == test_dev['id']:
                continue
            if check_dev['ip'] == test_dev['ip'] and \
                    check_dev['port'] == test_dev['port'] and \
                    check_dev['device'] == test_dev['device']:
                print 'Device %d already uses %s:%d/%s.' % \
                    (check_dev['id'], check_dev['ip'],
                     check_dev['port'], check_dev['device'])
                exit(EXIT_ERROR)
        for key in change:
            dev[key] = change[key]
        print 'Device %s is now %s' % (orig_dev_string,
                                       format_device(dev))


def _parse_set_info_values(argvish):
    new_cmd_format, opts, args = validate_args(argvish)

    # We'll either parse the all-in-one-string format or the
    # --options format,
    # but not both. If both are specified, raise an error.
    if not new_cmd_format:
        if len(args) % 2 != 0:
            print Commands.search.__doc__.strip()
            exit(EXIT_ERROR)

        searches_and_changes = izip(islice(argvish, 0, len(argvish), 2),
                                    islice(argvish, 1, len(argvish), 2))

        for search_value, change_value in searches_and_changes:
            devs = builder.search_devs(parse_search_value(search_value))
            change = {}
            ip = ''
            if len(change_value) and change_value[0].isdigit():
                i = 1
                while (i < len(change_value) and
                       change_value[i] in '0123456789.'):
                    i += 1
                ip = change_value[:i]
                change_value = change_value[i:]
            elif len(change_value) and change_value[0] == '[':
                i = 1
                while i < len(change_value) and change_value[i] != ']':
                    i += 1
                i += 1
                ip = change_value[:i].lstrip('[').rstrip(']')
                change_value = change_value[i:]
            if ip:
                change['ip'] = validate_and_normalize_ip(ip)
            if change_value.startswith(':'):
                i = 1
                while i < len(change_value) and change_value[i].isdigit():
                    i += 1
                change['port'] = int(change_value[1:i])
                change_value = change_value[i:]
            if change_value.startswith('R'):
                change_value = change_value[1:]
                replication_ip = ''
                if len(change_value) and change_value[0].isdigit():
                    i = 1
                    while (i < len(change_value) and
                           change_value[i] in '0123456789.'):
                        i += 1
                    replication_ip = change_value[:i]
                    change_value = change_value[i:]
                elif len(change_value) and change_value[0] == '[':
                    i = 1
                    while i < len(change_value) and change_value[i] != ']':
                        i += 1
                    i += 1
                    replication_ip = \
                        change_value[:i].lstrip('[').rstrip(']')
                    change_value = change_value[i:]
                if replication_ip:
                    change['replication_ip'] = \
                        validate_and_normalize_ip(replication_ip)
                if change_value.startswith(':'):
                    i = 1
                    while i < len(change_value) and change_value[i].isdigit():
                        i += 1
                    change['replication_port'] = int(change_value[1:i])
                    change_value = change_value[i:]
            if change_value.startswith('/'):
                i = 1
                while i < len(change_value) and change_value[i] != '_':
                    i += 1
                change['device'] = change_value[1:i]
                change_value = change_value[i:]
            if change_value.startswith('_'):
                change['meta'] = change_value[1:]
                change_value = ''
            if change_value or not change:
                raise ValueError('Invalid set info change value: %s' %
                                 repr(argvish[1]))
            _set_info_values(devs, change)
    else:
        devs = builder.search_devs(parse_search_values_from_opts(opts))
        change = parse_change_values_from_opts(opts)
        _set_info_values(devs, change)


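The change-string grammar scanned above (`<ip>:<port>R<r_ip>:<r_port>/<device>_<meta>`, every component optional) can be exercised in isolation. This is a simplified re-implementation for illustration only, not the module's own parser; it handles IPv4 addresses but omits the `[...]` bracket handling the real code uses for IPv6:

```python
def parse_change(change_value):
    # Simplified sketch of the set_info change-value grammar.
    change = {}
    # IPv4 address: a leading run of digits and dots.
    if change_value and change_value[0].isdigit():
        i = 1
        while i < len(change_value) and change_value[i] in '0123456789.':
            i += 1
        change['ip'] = change_value[:i]
        change_value = change_value[i:]
    if change_value.startswith(':'):
        i = 1
        while i < len(change_value) and change_value[i].isdigit():
            i += 1
        change['port'] = int(change_value[1:i])
        change_value = change_value[i:]
    if change_value.startswith('R'):
        # Replication ip/port use the same ip:port syntax after an 'R'.
        change_value = change_value[1:]
        i = 0
        while i < len(change_value) and change_value[i] in '0123456789.':
            i += 1
        if i:
            change['replication_ip'] = change_value[:i]
        change_value = change_value[i:]
        if change_value.startswith(':'):
            i = 1
            while i < len(change_value) and change_value[i].isdigit():
                i += 1
            change['replication_port'] = int(change_value[1:i])
            change_value = change_value[i:]
    if change_value.startswith('/'):
        # Device name runs from '/' up to the '_' that starts the meta.
        i = 1
        while i < len(change_value) and change_value[i] != '_':
            i += 1
        change['device'] = change_value[1:i]
        change_value = change_value[i:]
    if change_value.startswith('_'):
        change['meta'] = change_value[1:]
        change_value = ''
    if change_value or not change:
        raise ValueError('Invalid set info change value')
    return change

print(parse_change('10.1.1.1:6000R10.2.2.2:6001/sdb_snet'))
```

A meta-only change such as `_"snet: 5.6.7.8"` (the example from the set_info docstring) parses to just a `meta` key, which is why `set_info d74 _"..."` updates only the metadata.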
def _parse_remove_values(argvish):
    new_cmd_format, opts, args = validate_args(argvish)

    # We'll either parse the all-in-one-string format or the
    # --options format,
    # but not both. If both are specified, raise an error.
    try:
        devs = []
        if len(args) > 0:
            if new_cmd_format:
                print Commands.remove.__doc__.strip()
                exit(EXIT_ERROR)

            for arg in args:
                devs.extend(builder.search_devs(
                    parse_search_value(arg)) or [])
        else:
            devs.extend(builder.search_devs(
                parse_search_values_from_opts(opts)))

        return devs
    except ValueError as e:
        print e
        exit(EXIT_ERROR)


class Commands(object):

@@ -286,6 +531,18 @@ swift-ring-builder <builder_file>

    def search():
        """
swift-ring-builder <builder_file> search <search-value>

or

swift-ring-builder <builder_file> search
    --region <region> --zone <zone> --ip <ip or hostname> --port <port>
    --replication-ip <r_ip or r_hostname> --replication-port <r_port>
    --device <device_name> --meta <meta> --weight <weight>

    Where <r_ip>, <r_hostname> and <r_port> are replication ip, hostname
    and port.
    Any of the options are optional in both cases.

    Shows information about matching devices.
        """
        if len(argv) < 4:

@@ -293,7 +550,9 @@ swift-ring-builder <builder_file> search <search-value>

            print
            print parse_search_value.__doc__.strip()
            exit(EXIT_ERROR)
        devs = builder.search_devs(parse_search_value(argv[3]))

        devs = builder.search_devs(_parse_search_values(argv[3:]))

        if not devs:
            print 'No matching devices found'
            exit(EXIT_ERROR)
@@ -322,6 +581,18 @@ swift-ring-builder <builder_file> search <search-value>

    def list_parts():
        """
swift-ring-builder <builder_file> list_parts <search-value> [<search-value>] ..

or

swift-ring-builder <builder_file> list_parts
    --region <region> --zone <zone> --ip <ip or hostname> --port <port>
    --replication-ip <r_ip or r_hostname> --replication-port <r_port>
    --device <device_name> --meta <meta> --weight <weight>

    Where <r_ip>, <r_hostname> and <r_port> are replication ip, hostname
    and port.
    Any of the options are optional in both cases.

    Returns a 2 column list of all the partitions that are assigned to any of
    the devices matching the search values given. The first column is the
    assigned partition number and the second column is the number of device

@@ -340,7 +611,12 @@ swift-ring-builder <builder_file> list_parts <search-value> [<search-value>] ..

                  'Please rebalance first.' % argv[1])
            exit(EXIT_ERROR)

        sorted_partition_count = find_parts(builder, argv)
        devs = _parse_list_parts_values(argv[3:])
        if not devs:
            print 'No matching devices found'
            exit(EXIT_ERROR)

        sorted_partition_count = _find_parts(devs)

        if not sorted_partition_count:
            print 'No matching devices found'
@@ -364,8 +640,8 @@ swift-ring-builder <builder_file> add

or

swift-ring-builder <builder_file> add
    --region <region> --zone <zone> --ip <ip> --port <port>
    [--replication-ip <r_ip> --replication-port <r_port>]
    --region <region> --zone <zone> --ip <ip or hostname> --port <port>
    [--replication-ip <r_ip or r_hostname>] [--replication-port <r_port>]
    --device <device_name> --weight <weight>
    [--meta <meta>]

@@ -373,24 +649,30 @@ swift-ring-builder <builder_file> add

    assigned to the new device until after running 'rebalance'. This is so you
    can make multiple device changes and rebalance them all just once.
        """
        if len(argv) < 5 or len(argv) % 2 != 1:
        if len(argv) < 5:
            print Commands.add.__doc__.strip()
            exit(EXIT_ERROR)

        for new_dev in _parse_add_values(argv[3:]):
            for dev in builder.devs:
                if dev is None:
                    continue
                if dev['ip'] == new_dev['ip'] and \
                        dev['port'] == new_dev['port'] and \
                        dev['device'] == new_dev['device']:
                    print 'Device %d already uses %s:%d/%s.' % \
                        (dev['id'], dev['ip'], dev['port'], dev['device'])
                    print "The on-disk ring builder is unchanged.\n"
                    exit(EXIT_ERROR)
            builder.add_dev(new_dev)
            print('Device %s weight %s' %
                  (format_device(new_dev), new_dev['weight']))
        try:
            for new_dev in _parse_add_values(argv[3:]):
                for dev in builder.devs:
                    if dev is None:
                        continue
                    if dev['ip'] == new_dev['ip'] and \
                            dev['port'] == new_dev['port'] and \
                            dev['device'] == new_dev['device']:
                        print 'Device %d already uses %s:%d/%s.' % \
                            (dev['id'], dev['ip'],
                             dev['port'], dev['device'])
                        print "The on-disk ring builder is unchanged.\n"
                        exit(EXIT_ERROR)
                dev_id = builder.add_dev(new_dev)
                print('Device %s with %s weight got id %s' %
                      (format_device(new_dev), new_dev['weight'], dev_id))
        except ValueError as err:
            print err
            print 'The on-disk ring builder is unchanged.'
            exit(EXIT_ERROR)

        builder.save(argv[1])
        exit(EXIT_SUCCESS)
@@ -400,38 +682,30 @@ swift-ring-builder <builder_file> add

swift-ring-builder <builder_file> set_weight <search-value> <weight>
    [<search-value> <weight>] ...

or

swift-ring-builder <builder_file> set_weight
    --region <region> --zone <zone> --ip <ip or hostname> --port <port>
    --replication-ip <r_ip or r_hostname> --replication-port <r_port>
    --device <device_name> --meta <meta> --weight <weight>

    Where <r_ip>, <r_hostname> and <r_port> are replication ip, hostname
    and port.
    Any of the options are optional in both cases.

    Resets the devices' weights. No partitions will be reassigned to or from
    the device until after running 'rebalance'. This is so you can make
    multiple device changes and rebalance them all just once.
        """
        if len(argv) < 5 or len(argv) % 2 != 1:
        # if len(argv) < 5 or len(argv) % 2 != 1:
        if len(argv) < 5:
            print Commands.set_weight.__doc__.strip()
            print
            print parse_search_value.__doc__.strip()
            exit(EXIT_ERROR)

        devs_and_weights = izip(islice(argv, 3, len(argv), 2),
                                islice(argv, 4, len(argv), 2))
        for devstr, weightstr in devs_and_weights:
            devs = builder.search_devs(parse_search_value(devstr))
            weight = float(weightstr)
            if not devs:
                print("Search value \"%s\" matched 0 devices.\n"
                      "The on-disk ring builder is unchanged.\n"
                      % devstr)
                exit(EXIT_ERROR)
            if len(devs) > 1:
                print 'Matched more than one device:'
                for dev in devs:
                    print '    %s' % format_device(dev)
                if raw_input('Are you sure you want to update the weight for '
                             'these %s devices? (y/N) ' % len(devs)) != 'y':
                    print 'Aborting device modifications'
                    exit(EXIT_ERROR)
            for dev in devs:
                builder.set_dev_weight(dev['id'], weight)
                print '%s weight set to %s' % (format_device(dev),
                                               dev['weight'])
        _parse_set_weight_values(argv[3:])

        builder.save(argv[1])
        exit(EXIT_SUCCESS)
@@ -441,7 +715,21 @@ swift-ring-builder <builder_file> set_info

    <search-value> <ip>:<port>[R<r_ip>:<r_port>]/<device_name>_<meta>
    [<search-value> <ip>:<port>[R<r_ip>:<r_port>]/<device_name>_<meta>] ...

    Where <r_ip> and <r_port> are replication ip and port.
or

swift-ring-builder <builder_file> set_info
    --ip <ip or hostname> --port <port>
    --replication-ip <r_ip or r_hostname> --replication-port <r_port>
    --device <device_name> --meta <meta>
    --change-ip <ip or hostname> --change-port <port>
    --change-replication-ip <r_ip or r_hostname>
    --change-replication-port <r_port>
    --change-device <device_name>
    --change-meta <meta>

    Where <r_ip>, <r_hostname> and <r_port> are replication ip, hostname
    and port.
    Any of the options are optional in both cases.

    For each search-value, resets the matched device's information.
    This information isn't used to assign partitions, so you can use

@@ -451,111 +739,36 @@ swift-ring-builder <builder_file> set_info

    want to change. For instance set_info d74 _"snet: 5.6.7.8" would
    just update the meta data for device id 74.
        """
        if len(argv) < 5 or len(argv) % 2 != 1:
        if len(argv) < 5:
            print Commands.set_info.__doc__.strip()
            print
            print parse_search_value.__doc__.strip()
            exit(EXIT_ERROR)

        searches_and_changes = izip(islice(argv, 3, len(argv), 2),
                                    islice(argv, 4, len(argv), 2))
        try:
            _parse_set_info_values(argv[3:])
        except ValueError as err:
            print err
            exit(EXIT_ERROR)

        for search_value, change_value in searches_and_changes:
            devs = builder.search_devs(parse_search_value(search_value))
            change = []
            if len(change_value) and change_value[0].isdigit():
                i = 1
                while (i < len(change_value) and
                       change_value[i] in '0123456789.'):
                    i += 1
                change.append(('ip', change_value[:i]))
                change_value = change_value[i:]
            elif len(change_value) and change_value[0] == '[':
                i = 1
                while i < len(change_value) and change_value[i] != ']':
                    i += 1
                i += 1
                change.append(('ip', change_value[:i].lstrip('[').rstrip(']')))
                change_value = change_value[i:]
            if change_value.startswith(':'):
                i = 1
                while i < len(change_value) and change_value[i].isdigit():
                    i += 1
                change.append(('port', int(change_value[1:i])))
                change_value = change_value[i:]
            if change_value.startswith('R'):
                change_value = change_value[1:]
                if len(change_value) and change_value[0].isdigit():
                    i = 1
                    while (i < len(change_value) and
                           change_value[i] in '0123456789.'):
                        i += 1
                    change.append(('replication_ip', change_value[:i]))
                    change_value = change_value[i:]
                elif len(change_value) and change_value[0] == '[':
                    i = 1
                    while i < len(change_value) and change_value[i] != ']':
                        i += 1
                    i += 1
                    change.append(('replication_ip',
                                   change_value[:i].lstrip('[').rstrip(']')))
                    change_value = change_value[i:]
                if change_value.startswith(':'):
                    i = 1
                    while i < len(change_value) and change_value[i].isdigit():
                        i += 1
                    change.append(('replication_port', int(change_value[1:i])))
                    change_value = change_value[i:]
            if change_value.startswith('/'):
                i = 1
                while i < len(change_value) and change_value[i] != '_':
                    i += 1
                change.append(('device', change_value[1:i]))
                change_value = change_value[i:]
            if change_value.startswith('_'):
                change.append(('meta', change_value[1:]))
                change_value = ''
            if change_value or not change:
                raise ValueError('Invalid set info change value: %s' %
                                 repr(argv[4]))
            if not devs:
                print("Search value \"%s\" matched 0 devices.\n"
                      "The on-disk ring builder is unchanged.\n"
                      % search_value)
                exit(EXIT_ERROR)
            if len(devs) > 1:
                print 'Matched more than one device:'
                for dev in devs:
                    print '    %s' % format_device(dev)
                if raw_input('Are you sure you want to update the info for '
                             'these %s devices? (y/N) ' % len(devs)) != 'y':
                    print 'Aborting device modifications'
                    exit(EXIT_ERROR)
            for dev in devs:
                orig_dev_string = format_device(dev)
                test_dev = dict(dev)
                for key, value in change:
                    test_dev[key] = value
                for check_dev in builder.devs:
                    if not check_dev or check_dev['id'] == test_dev['id']:
                        continue
                    if check_dev['ip'] == test_dev['ip'] and \
                            check_dev['port'] == test_dev['port'] and \
                            check_dev['device'] == test_dev['device']:
                        print 'Device %d already uses %s:%d/%s.' % \
                            (check_dev['id'], check_dev['ip'],
                             check_dev['port'], check_dev['device'])
                        exit(EXIT_ERROR)
                for key, value in change:
                    dev[key] = value
                print 'Device %s is now %s' % (orig_dev_string,
                                               format_device(dev))
        builder.save(argv[1])
        exit(EXIT_SUCCESS)

    def remove():
        """
swift-ring-builder <builder_file> remove <search-value> [search-value ...]

or

swift-ring-builder <builder_file> search
    --region <region> --zone <zone> --ip <ip or hostname> --port <port>
    --replication-ip <r_ip or r_hostname> --replication-port <r_port>
    --device <device_name> --meta <meta> --weight <weight>

    Where <r_ip>, <r_hostname> and <r_port> are replication ip, hostname
    and port.
    Any of the options are optional in both cases.

    Removes the device(s) from the ring. This should normally just be used for
    a device that has failed. For a device you wish to decommission, it's best
    to set its weight to 0, wait for it to drain all its data, then use this
@@ -569,36 +782,37 @@ swift-ring-builder <builder_file> remove <search-value> [search-value ...]

            print parse_search_value.__doc__.strip()
            exit(EXIT_ERROR)

        for search_value in argv[3:]:
            devs = builder.search_devs(parse_search_value(search_value))
            if not devs:
                print("Search value \"%s\" matched 0 devices.\n"
                      "The on-disk ring builder is unchanged." % search_value)
                exit(EXIT_ERROR)
            if len(devs) > 1:
                print 'Matched more than one device:'
                for dev in devs:
                    print '    %s' % format_device(dev)
                if raw_input('Are you sure you want to remove these %s '
                             'devices? (y/N) ' % len(devs)) != 'y':
                    print 'Aborting device removals'
                    exit(EXIT_ERROR)
        devs = _parse_remove_values(argv[3:])

        if not devs:
            print('Search value matched 0 devices.\n'
                  'The on-disk ring builder is unchanged.')
            exit(EXIT_ERROR)

        if len(devs) > 1:
            print 'Matched more than one device:'
            for dev in devs:
            try:
                builder.remove_dev(dev['id'])
            except exceptions.RingBuilderError as e:
                print '-' * 79
                print(
                    "An error occurred while removing device with id %d\n"
                    "This usually means that you attempted to remove\n"
                    "the last device in a ring. If this is the case,\n"
                    "consider creating a new ring instead.\n"
                    "The on-disk ring builder is unchanged.\n"
                    "Original exception message: %s" %
                    (dev['id'], e)
                )
                print '-' * 79
                exit(EXIT_ERROR)
                print '    %s' % format_device(dev)
            if raw_input('Are you sure you want to remove these %s '
                         'devices? (y/N) ' % len(devs)) != 'y':
                print 'Aborting device removals'
                exit(EXIT_ERROR)

        for dev in devs:
            try:
                builder.remove_dev(dev['id'])
            except exceptions.RingBuilderError as e:
                print '-' * 79
                print(
                    'An error occurred while removing device with id %d\n'
                    'This usually means that you attempted to remove\n'
                    'the last device in a ring. If this is the case,\n'
                    'consider creating a new ring instead.\n'
                    'The on-disk ring builder is unchanged.\n'
                    'Original exception message: %s' %
                    (dev['id'], e))
                print '-' * 79
                exit(EXIT_ERROR)

            print '%s marked for removal and will ' \
                'be removed next rebalance.' % format_device(dev)

@@ -388,6 +388,10 @@ def check_account_format(req, account):

    :raise: HTTPPreconditionFailed if account header
            is not well formatted.
    """
    if not account:
        raise HTTPPreconditionFailed(
            request=req,
            body='Account name cannot be empty')
    if isinstance(account, unicode):
        account = account.encode('utf-8')
    if '/' in account:

@@ -33,6 +33,7 @@ from swift.common.utils import get_logger, whataremyips, storage_directory, \

    renamer, mkdirs, lock_parent_directory, config_true_value, \
    unlink_older_than, dump_recon_cache, rsync_ip, ismount, json, Timestamp
from swift.common import ring
from swift.common.ring.utils import is_local_device
from swift.common.http import HTTP_NOT_FOUND, HTTP_INSUFFICIENT_STORAGE
from swift.common.bufferedhttp import BufferedHTTPConnection
from swift.common.exceptions import DriveNotMounted

@@ -543,8 +544,9 @@ class Replicator(Daemon):

            return
        self._local_device_ids = set()
        for node in self.ring.devs:
            if (node and node['replication_ip'] in ips and
                    node['replication_port'] == self.port):
            if node and is_local_device(ips, self.port,
                                        node['replication_ip'],
                                        node['replication_port']):
                if self.mount_check and not ismount(
                        os.path.join(self.root, node['device'])):
                    self.logger.warn(

@@ -90,8 +90,9 @@ sample code for computing the signature::

        max_file_size, max_file_count, expires)
    signature = hmac.new(key, hmac_body, sha1).hexdigest()

The key is the value of either the X-Account-Meta-Temp-URL-Key or the
X-Account-Meta-Temp-Url-Key-2 header on the account.
The key is the value of either the account (X-Account-Meta-Temp-URL-Key,
X-Account-Meta-Temp-Url-Key-2) or the container
(X-Container-Meta-Temp-URL-Key, X-Container-Meta-Temp-Url-Key-2) TempURL keys.

Be certain to use the full path, from the /v1/ onward.
Note that x_delete_at and x_delete_after are not used in signature generation

@@ -123,7 +124,7 @@ from swift.common.utils import streq_const_time, register_swift_info, \

    parse_content_disposition, iter_multipart_mime_documents
from swift.common.wsgi import make_pre_authed_env
from swift.common.swob import HTTPUnauthorized
from swift.proxy.controllers.base import get_account_info
from swift.proxy.controllers.base import get_account_info, get_container_info


#: The size of data to read from the form at any given time.

@@ -393,7 +394,13 @@ class FormPost(object):

    def _get_keys(self, env):
        """
        Fetch the tempurl keys for the account. Also validate that the request
        Returns the X-[Account|Container]-Meta-Temp-URL-Key[-2] header values
        for the account or container, or an empty list if none are set.

        Returns 0-4 elements depending on how many keys are set in the
        account's or container's metadata.

        Also validate that the request
        path indicates a valid container; if not, no keys will be returned.

        :param env: The WSGI environment for the request.

@@ -405,12 +412,20 @@ class FormPost(object):

            return []

        account_info = get_account_info(env, self.app, swift_source='FP')
        return get_tempurl_keys_from_metadata(account_info['meta'])
        account_keys = get_tempurl_keys_from_metadata(account_info['meta'])

        container_info = get_container_info(env, self.app, swift_source='FP')
        container_keys = get_tempurl_keys_from_metadata(
            container_info.get('meta', []))

        return account_keys + container_keys


def filter_factory(global_conf, **local_conf):
    """Returns the WSGI filter for use with paste.deploy."""
    conf = global_conf.copy()
    conf.update(local_conf)

    register_swift_info('formpost')

    return lambda app: FormPost(app, conf)

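The signing scheme the FormPost/TempURL documentation above describes (`hmac.new(key, hmac_body, sha1).hexdigest()`) can be exercised standalone. A minimal sketch for the TempURL GET case; the key, expiry, and object path below are made-up examples, and the real middleware checks a presented signature against every configured account and container key:

```python
import hmac
from hashlib import sha1

def tempurl_signature(key, method, expires, path):
    # hmac_body is "<method>\n<expires>\n<path>", signed with a
    # Temp-URL-Key taken from account or container metadata.
    hmac_body = '%s\n%s\n%s' % (method, expires, path)
    return hmac.new(key.encode(), hmac_body.encode(), sha1).hexdigest()

# Hypothetical key and object path, for illustration only.
sig = tempurl_signature('mykey', 'GET', 1400003600, '/v1/AUTH_test/container/object')
print(sig)
```

As the doc text stresses, the path must be the full `/v1/...` object path; signing any shorter prefix produces a signature the server will reject.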
@@ -77,7 +77,7 @@ from urllib import quote, unquote

from swift.common.swob import Request
from swift.common.utils import (get_logger, get_remote_client,
                                get_valid_utf8_str, config_true_value,
                                InputProxy, list_from_csv)
                                InputProxy, list_from_csv, get_policy_index)

QUOTE_SAFE = '/:'

@@ -135,7 +135,7 @@ class ProxyLoggingMiddleware(object):

        return value

    def log_request(self, req, status_int, bytes_received, bytes_sent,
                    start_time, end_time):
                    start_time, end_time, resp_headers=None):
        """
        Log a request.

@@ -145,7 +145,9 @@ class ProxyLoggingMiddleware(object):

        :param bytes_sent: bytes yielded to the WSGI server
        :param start_time: timestamp request started
        :param end_time: timestamp request completed
        :param resp_headers: dict of the response headers
        """
        resp_headers = resp_headers or {}
        req_path = get_valid_utf8_str(req.path)
        the_request = quote(unquote(req_path), QUOTE_SAFE)
        if req.query_string:

@@ -166,6 +168,7 @@ class ProxyLoggingMiddleware(object):

        duration_time_str = "%.4f" % (end_time - start_time)
        start_time_str = "%.9f" % start_time
        end_time_str = "%.9f" % end_time
        policy_index = get_policy_index(req.headers, resp_headers)
        self.access_logger.info(' '.join(
            quote(str(x) if x else '-', QUOTE_SAFE)
            for x in (

@@ -188,7 +191,8 @@ class ProxyLoggingMiddleware(object):

                req.environ.get('swift.source'),
                ','.join(req.environ.get('swift.log_info') or ''),
                start_time_str,
                end_time_str
                end_time_str,
                policy_index
            )))
        # Log timing and bytes-transferred data to StatsD
        metric_name = self.statsd_metric_name(req, status_int, method)

@@ -257,6 +261,7 @@ class ProxyLoggingMiddleware(object):

            elif isinstance(iterable, list):
                start_response_args[0][1].append(
                    ('Content-Length', str(sum(len(i) for i in iterable))))
            resp_headers = dict(start_response_args[0][1])
            start_response(*start_response_args[0])
            req = Request(env)

@@ -283,7 +288,10 @@ class ProxyLoggingMiddleware(object):

            status_int = status_int_for_logging(client_disconnect)
            self.log_request(
                req, status_int, input_proxy.bytes_received, bytes_sent,
                start_time, time.time())
                start_time, time.time(), resp_headers=resp_headers)
            close_method = getattr(iterable, 'close', None)
            if callable(close_method):
                close_method()

        try:
            iterable = self.app(env, my_start_response)

@@ -232,7 +232,8 @@ class RateLimitMiddleware(object):

            return None

        try:
            account_info = get_account_info(req.environ, self.app)
            account_info = get_account_info(req.environ, self.app,
                                            swift_source='RL')
            account_global_ratelimit = \
                account_info.get('sysmeta', {}).get('global-write-ratelimit')
        except ValueError:

@@ -266,15 +266,28 @@ class ReconMiddleware(object):

    def get_quarantine_count(self):
        """get obj/container/account quarantine counts"""
        qcounts = {"objects": 0, "containers": 0, "accounts": 0}
        qcounts = {"objects": 0, "containers": 0, "accounts": 0,
                   "policies": {}}
        qdir = "quarantined"
        for device in os.listdir(self.devices):
            for qtype in qcounts:
                qtgt = os.path.join(self.devices, device, qdir, qtype)
                if os.path.exists(qtgt):
            qpath = os.path.join(self.devices, device, qdir)
            if os.path.exists(qpath):
                for qtype in os.listdir(qpath):
                    qtgt = os.path.join(qpath, qtype)
                    linkcount = os.lstat(qtgt).st_nlink
                    if linkcount > 2:
                        qcounts[qtype] += linkcount - 2
                        if qtype.startswith('objects'):
                            if '-' in qtype:
                                pkey = qtype.split('-', 1)[1]
                            else:
                                pkey = '0'
                            qcounts['policies'].setdefault(pkey,
                                                           {'objects': 0})
                            qcounts['policies'][pkey]['objects'] \
                                += linkcount - 2
                            qcounts['objects'] += linkcount - 2
                        else:
                            qcounts[qtype] += linkcount - 2
        return qcounts

    def get_socket_info(self, openr=open):

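The quarantine-directory naming handled in the hunk above maps the legacy `objects` directory to storage policy 0 and `objects-<N>` directories to policy `<N>`. That mapping can be isolated as a small helper (a sketch for illustration, not part of the middleware's API):

```python
def quarantine_policy_key(qtype):
    # "objects" is the legacy policy-0 directory; "objects-<N>" carries an
    # explicit policy index, exactly as get_quarantine_count splits it above
    # with qtype.split('-', 1)[1].
    if '-' in qtype:
        return qtype.split('-', 1)[1]
    return '0'

print(quarantine_policy_key('objects'), quarantine_policy_key('objects-2'))
```

Splitting with `maxsplit=1` keeps the whole remainder as the policy index, so multi-digit indexes such as `objects-10` map correctly.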
@@ -81,10 +81,13 @@ Using this in combination with browser form post translation

middleware could also allow direct-from-browser uploads to specific
locations in Swift.

TempURL supports up to two keys, specified by X-Account-Meta-Temp-URL-Key and
X-Account-Meta-Temp-URL-Key-2. Signatures are checked against both keys, if
present. This is to allow for key rotation without invalidating all existing
temporary URLs.
TempURL supports both account and container level keys. Each allows up to two
keys to be set, allowing key rotation without invalidating all existing
temporary URLs. Account keys are specified by X-Account-Meta-Temp-URL-Key and
X-Account-Meta-Temp-URL-Key-2, while container keys are specified by
X-Container-Meta-Temp-URL-Key and X-Container-Meta-Temp-URL-Key-2.
Signatures are checked against account and container keys, if
present.

With GET TempURLs, a Content-Disposition header will be set on the
response so that browsers will interpret this as a file attachment to

@@ -118,7 +121,7 @@ from time import time

from urllib import urlencode
from urlparse import parse_qs

from swift.proxy.controllers.base import get_account_info
from swift.proxy.controllers.base import get_account_info, get_container_info
from swift.common.swob import HeaderKeyDict, HTTPUnauthorized
from swift.common.utils import split_path, get_valid_utf8_str, \
    register_swift_info, get_hmac, streq_const_time, quote

@@ -409,11 +412,11 @@ class TempURL(object):

    def _get_keys(self, env, account):
        """
        Returns the X-Account-Meta-Temp-URL-Key[-2] header values for the
        account, or an empty list if none is set.
        Returns the X-[Account|Container]-Meta-Temp-URL-Key[-2] header values
        for the account or container, or an empty list if none are set.

        Returns 0, 1, or 2 elements depending on how many keys are set
        in the account's metadata.
        Returns 0-4 elements depending on how many keys are set in the
        account's or container's metadata.

        :param env: The WSGI environment for the request.
        :param account: Account str.

@@ -421,7 +424,13 @@ class TempURL(object):

                  X-Account-Meta-Temp-URL-Key-2 str value if set]
        """
        account_info = get_account_info(env, self.app, swift_source='TU')
        return get_tempurl_keys_from_metadata(account_info['meta'])
        account_keys = get_tempurl_keys_from_metadata(account_info['meta'])

        container_info = get_container_info(env, self.app, swift_source='TU')
        container_keys = get_tempurl_keys_from_metadata(
            container_info.get('meta', []))

        return account_keys + container_keys

    def _get_hmacs(self, env, expires, keys, request_method=None):
        """

@@ -27,7 +27,8 @@ from time import time

from swift.common import exceptions
from swift.common.ring import RingData
from swift.common.ring.utils import tiers_for_dev, build_tier_tree
from swift.common.ring.utils import tiers_for_dev, build_tier_tree, \
    validate_and_normalize_address

MAX_BALANCE = 999.99

@@ -1305,6 +1306,15 @@ class RingBuilder(object):

                    if key == 'meta':
                        if value not in dev.get(key):
                            matched = False
                    elif key == 'ip' or key == 'replication_ip':
                        cdev = ''
                        try:
                            cdev = validate_and_normalize_address(
                                dev.get(key, ''))
                        except ValueError:
                            pass
                        if cdev != value:
                            matched = False
                    elif dev.get(key) != value:
                        matched = False
                if matched:

@@ -13,9 +13,11 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from collections import defaultdict
from operator import itemgetter
import optparse
import re
import socket

from swift.common.utils import expand_ipv6


def tiers_for_dev(dev):

@@ -128,10 +130,136 @@ def build_tier_tree(devices):
    return tier2children


def validate_and_normalize_ip(ip):
    """
    Return normalized ip if the ip is a valid ip.
    Otherwise raise a ValueError. The hostname is
    normalized to all lower case. IPv6-addresses are converted to
    lowercase and fully expanded.
    """
    # first convert to lower case
    new_ip = ip.lower()
    if is_valid_ipv4(new_ip):
        return new_ip
    elif is_valid_ipv6(new_ip):
        return expand_ipv6(new_ip)
    else:
        raise ValueError('Invalid ip %s' % ip)


def validate_and_normalize_address(address):
    """
    Return normalized address if the address is a valid ip or hostname.
    Otherwise raise a ValueError. The hostname is
    normalized to all lower case. IPv6-addresses are converted to
    lowercase and fully expanded.

    RFC1123 2.1 Host Names and Numbers
    DISCUSSION
        This last requirement is not intended to specify the complete
        syntactic form for entering a dotted-decimal host number;
        that is considered to be a user-interface issue. For
        example, a dotted-decimal number must be enclosed within
        "[ ]" brackets for SMTP mail (see Section 5.2.17). This
        notation could be made universal within a host system,
        simplifying the syntactic checking for a dotted-decimal
        number.

        If a dotted-decimal number can be entered without such
        identifying delimiters, then a full syntactic check must be
        made, because a segment of a host domain name is now allowed
        to begin with a digit and could legally be entirely numeric
        (see Section 6.1.2.4). However, a valid host name can never
        have the dotted-decimal form #.#.#.#, since at least the
        highest-level component label will be alphabetic.
    """
    new_address = address.lstrip('[').rstrip(']')
    if address.startswith('[') and address.endswith(']'):
        return validate_and_normalize_ip(new_address)

    new_address = new_address.lower()
    if is_valid_ipv4(new_address):
        return new_address
    elif is_valid_ipv6(new_address):
        return expand_ipv6(new_address)
    elif is_valid_hostname(new_address):
        return new_address
    else:
        raise ValueError('Invalid address %s' % address)
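The bracket handling above applies the RFC 1123 discussion: a bracketed address must be an IP literal, while a bare token is tried as IPv4, then IPv6, then hostname. A self-contained sketch of that same decision order, using only the stdlib (`classify` is an illustrative name, not Swift's API):

```python
import re
import socket


def classify(address):
    # Mirror the decision order above: bracketed -> must be an IPv6
    # literal; otherwise try IPv4, then IPv6, then hostname labels.
    bare = address.lstrip('[').rstrip(']').lower()
    if address.startswith('[') and address.endswith(']'):
        socket.inet_pton(socket.AF_INET6, bare)  # raises OSError if invalid
        return 'ip'
    for family in (socket.AF_INET, socket.AF_INET6):
        try:
            socket.inet_pton(family, bare)
            return 'ip'
        except OSError:
            pass
    labels = bare.split('.')
    if all(re.match(r"(?!-)[A-Za-z\d-]{1,63}(?<!-)$", l) for l in labels):
        return 'hostname'
    raise ValueError('Invalid address %s' % address)
```

This sketch deliberately skips the 255-character total-length cap that the real `is_valid_hostname` below enforces; it only shows the ordering of the checks.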


def is_valid_ip(ip):
    """
    Return True if the provided ip is a valid IP-address
    """
    return is_valid_ipv4(ip) or is_valid_ipv6(ip)


def is_valid_ipv4(ip):
    """
    Return True if the provided ip is a valid IPv4-address
    """
    try:
        socket.inet_pton(socket.AF_INET, ip)
    except socket.error:
        return False
    return True


def is_valid_ipv6(ip):
    """
    Return True if the provided ip is a valid IPv6-address
    """
    try:
        socket.inet_pton(socket.AF_INET6, ip)
    except socket.error:  # not a valid address
        return False
    return True


def is_valid_hostname(hostname):
    """
    Return True if the provided hostname is a valid hostname
    """
    if len(hostname) < 1 or len(hostname) > 255:
        return False
    if hostname[-1] == ".":
        # strip exactly one dot from the right, if present
        hostname = hostname[:-1]
    allowed = re.compile("(?!-)[A-Z\d-]{1,63}(?<!-)$", re.IGNORECASE)
    return all(allowed.match(x) for x in hostname.split("."))
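The regex above encodes the RFC 1123 label rules: 1-63 characters of letters, digits, and hyphens per label, no leading or trailing hyphen, with the whole name capped at 255 characters and one trailing dot tolerated. A runnable copy with a few accept/reject cases:

```python
import re


def is_valid_hostname(hostname):
    # Same shape as the function above: overall length limits, optional
    # trailing dot, then per-label validation against the RFC 1123 rules.
    if len(hostname) < 1 or len(hostname) > 255:
        return False
    if hostname.endswith("."):
        # strip exactly one dot from the right, if present
        hostname = hostname[:-1]
    allowed = re.compile(r"(?!-)[A-Z\d-]{1,63}(?<!-)$", re.IGNORECASE)
    return all(allowed.match(x) for x in hostname.split("."))
```

The lookahead `(?!-)` and lookbehind `(?<!-)` are what reject hyphens at the edges of a label while still allowing them inside.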


def is_local_device(my_ips, my_port, dev_ip, dev_port):
    """
    Return True if the provided dev_ip and dev_port are among the IP
    addresses specified in my_ips and my_port respectively.

    If dev_ip is a hostname then it is first translated to an IP
    address before checking it against my_ips.
    """
    if not is_valid_ip(dev_ip) and is_valid_hostname(dev_ip):
        try:
            # get the ip for this host; use getaddrinfo so that
            # it works for both ipv4 and ipv6 addresses
            addrinfo = socket.getaddrinfo(dev_ip, dev_port)
            for addr in addrinfo:
                family = addr[0]
                dev_ip = addr[4][0]  # get the ip-address
                if family == socket.AF_INET6:
                    dev_ip = expand_ipv6(dev_ip)
                if dev_ip in my_ips and dev_port == my_port:
                    return True
            return False
        except socket.gaierror:
            return False
    return dev_ip in my_ips and dev_port == my_port
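With an IP literal the function never touches DNS: the `getaddrinfo` branch above only runs when `dev_ip` fails IP validation but looks like a hostname. A sketch of the literal-IP fast path, kept DNS-free so it is deterministic:

```python
def is_local_device(my_ips, my_port, dev_ip, dev_port):
    # IP-literal fast path of the function above: no getaddrinfo() call,
    # just set membership on the address plus port equality.
    return dev_ip in my_ips and dev_port == my_port


# A server typically builds my_ips from whataremyips(); here it is a
# hand-written stand-in set.
my_ips = {'127.0.0.1', '::1'}
```

Because membership is an exact string comparison, both sides must already be normalized (hence the `expand_ipv6` calls elsewhere in this diff) or equal addresses spelled differently would not match.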


def parse_search_value(search_value):
    """The <search-value> can be of the form::

        d<device_id>r<region>z<zone>-<ip>:<port>[R<r_ip>:<r_port>]/
        d<device_id>r<region>z<zone>-<ip>:<port>R<r_ip>:<r_port>/
            <device_name>_<meta>

    Where <r_ip> and <r_port> are replication ip and port.

@@ -201,6 +329,12 @@ def parse_search_value(search_value):
            i += 1
        match['ip'] = search_value[:i].lstrip('[').rstrip(']')
        search_value = search_value[i:]

    if 'ip' in match:
        # ipv6 addresses are converted to all lowercase
        # and use the fully expanded representation
        match['ip'] = validate_and_normalize_ip(match['ip'])

    if search_value.startswith(':'):
        i = 1
        while i < len(search_value) and search_value[i].isdigit():

@@ -224,6 +358,13 @@ def parse_search_value(search_value):
            i += 1
        match['replication_ip'] = search_value[:i].lstrip('[').rstrip(']')
        search_value = search_value[i:]

    if 'replication_ip' in match:
        # ipv6 addresses are converted to all lowercase
        # and use the fully expanded representation
        match['replication_ip'] = \
            validate_and_normalize_ip(match['replication_ip'])

    if search_value.startswith(':'):
        i = 1
        while i < len(search_value) and search_value[i].isdigit():

@@ -245,11 +386,68 @@ def parse_search_value(search_value):
    return match


def parse_search_values_from_opts(opts):
    """
    Convert optparse style options into a dictionary for searching.

    :param opts: optparse style options
    :returns: a dictionary with search values to filter devices,
              supported parameters are id, region, zone, ip, port,
              replication_ip, replication_port, device, weight, meta
    """

    search_values = {}
    for key in ('id', 'region', 'zone', 'ip', 'port', 'replication_ip',
                'replication_port', 'device', 'weight', 'meta'):
        value = getattr(opts, key, None)
        if value:
            if key == 'ip' or key == 'replication_ip':
                value = validate_and_normalize_address(value)
            search_values[key] = value
    return search_values


def parse_change_values_from_opts(opts):
    """
    Convert optparse style options into a dictionary for changing.

    :param opts: optparse style options
    :returns: a dictionary with change values to filter devices,
              supported parameters are ip, port, replication_ip,
              replication_port
    """

    change_values = {}
    for key in ('change_ip', 'change_port', 'change_replication_ip',
                'change_replication_port', 'change_device', 'change_meta'):
        value = getattr(opts, key, None)
        if value:
            if key == 'change_ip' or key == 'change_replication_ip':
                value = validate_and_normalize_address(value)
            change_values[key.replace('change_', '')] = value
    return change_values
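Both helpers treat the optparse result as a plain attribute bag, reading each known option with `getattr` and skipping unset ones. A sketch exercising the same pattern with a trimmed-down parser (only three of the options shown later in this diff, and no address normalization):

```python
import optparse

parser = optparse.OptionParser()
parser.add_option('-i', '--ip', type='string')
parser.add_option('-p', '--port', type='int')
parser.add_option('-I', '--change-ip', type='string')

# optparse stores '--change-ip' under the attribute name 'change_ip'.
opts, _ = parser.parse_args(['-i', '127.0.0.1', '-p', '6200',
                             '-I', '10.0.0.2'])

# Search values keep their key names; change values drop the 'change_'
# prefix so they can be merged straight onto a device dict.
search_values = {key: getattr(opts, key)
                 for key in ('ip', 'port') if getattr(opts, key, None)}
change_values = {key.replace('change_', ''): getattr(opts, key)
                 for key in ('change_ip',) if getattr(opts, key, None)}
```

Note the truthiness test (`if value:`) means an explicit `0` would be skipped; that matches the helpers above, where all meaningful values are non-empty strings or positive numbers.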


def validate_args(argvish):
    """
    Build an OptionParser and check whether the arguments are in the new
    command-line format.
    """
    opts, args = parse_args(argvish)
    new_cmd_format = opts.id or opts.region or opts.zone or \
        opts.ip or opts.port or \
        opts.replication_ip or opts.replication_port or \
        opts.device or opts.weight or opts.meta
    return (new_cmd_format, opts, args)


def parse_args(argvish):
    """
    Build OptionParser and evaluate command line arguments.
    """
    parser = optparse.OptionParser()
    parser.add_option('-u', '--id', type="int",
                      help="Device ID")
    parser.add_option('-r', '--region', type="int",
                      help="Region")
    parser.add_option('-z', '--zone', type="int",

@@ -268,6 +466,18 @@ def parse_args(argvish):
                      help="Device weight")
    parser.add_option('-m', '--meta', type="string", default="",
                      help="Extra device info (just a string)")
    parser.add_option('-I', '--change-ip', type="string",
                      help="IP address for change")
    parser.add_option('-P', '--change-port', type="int",
                      help="Port number for change")
    parser.add_option('-J', '--change-replication-ip', type="string",
                      help="Replication IP address for change")
    parser.add_option('-Q', '--change-replication-port', type="int",
                      help="Replication port number for change")
    parser.add_option('-D', '--change-device', type="string",
                      help="Device name (e.g. md0, sdb1) for change")
    parser.add_option('-M', '--change-meta', type="string", default="",
                      help="Extra device info (just a string) for change")
    return parser.parse_args(argvish)

@@ -300,40 +510,17 @@ def build_dev_from_opts(opts):
            raise ValueError('Required argument %s/%s not specified.' %
                             (shortopt, longopt))

    replication_ip = opts.replication_ip or opts.ip
    ip = validate_and_normalize_address(opts.ip)
    replication_ip = validate_and_normalize_address(
        (opts.replication_ip or opts.ip))
    replication_port = opts.replication_port or opts.port

    return {'region': opts.region, 'zone': opts.zone, 'ip': opts.ip,
    return {'region': opts.region, 'zone': opts.zone, 'ip': ip,
            'port': opts.port, 'device': opts.device, 'meta': opts.meta,
            'replication_ip': replication_ip,
            'replication_port': replication_port, 'weight': opts.weight}


def find_parts(builder, argv):
    devs = []
    for arg in argv[3:]:
        devs.extend(builder.search_devs(parse_search_value(arg)) or [])

    devs = [d['id'] for d in devs]

    if not devs:
        return None

    partition_count = {}
    for replica in builder._replica2part2dev:
        for partition, device in enumerate(replica):
            if device in devs:
                if partition not in partition_count:
                    partition_count[partition] = 0
                partition_count[partition] += 1

    # Sort by number of found replicas to keep the output format
    sorted_partition_count = sorted(
        partition_count.iteritems(), key=itemgetter(1), reverse=True)

    return sorted_partition_count


def dispersion_report(builder, search_filter=None, verbose=False):
    if not builder._dispersion_graph:
        builder._build_dispersion_graph()
@@ -329,6 +329,21 @@ def generate_trans_id(trans_id_suffix):
        uuid.uuid4().hex[:21], time.time(), quote(trans_id_suffix))


def get_policy_index(req_headers, res_headers):
    """
    Returns the appropriate index of the storage policy for the request from
    a proxy server

    :param req_headers: dict of the request headers.
    :param res_headers: dict of the response headers.

    :returns: string index of storage policy, or None
    """
    header = 'X-Backend-Storage-Policy-Index'
    policy_index = res_headers.get(header, req_headers.get(header))
    return str(policy_index) if policy_index is not None else None
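In the lookup above the response header wins over the request header, and the result is always a string or `None`. A runnable copy showing the precedence:

```python
def get_policy_index(req_headers, res_headers):
    # Prefer the policy index the backend reported in the response;
    # fall back to the one the client sent on the request.
    header = 'X-Backend-Storage-Policy-Index'
    policy_index = res_headers.get(header, req_headers.get(header))
    return str(policy_index) if policy_index is not None else None
```

The `is not None` check matters: policy index `0` (the default policy) is falsy, so a plain truthiness test would wrongly collapse it to `None`; as a string, `'0'` survives into the log line unchanged.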


def get_log_line(req, res, trans_time, additional_info):
    """
    Make a line for logging that matches the documented log line format

@@ -342,14 +357,15 @@ def get_log_line(req, res, trans_time, additional_info):
    :returns: a properly formatted line for logging.
    """

    return '%s - - [%s] "%s %s" %s %s "%s" "%s" "%s" %.4f "%s" %d' % (
    policy_index = get_policy_index(req.headers, res.headers)
    return '%s - - [%s] "%s %s" %s %s "%s" "%s" "%s" %.4f "%s" %d %s' % (
        req.remote_addr,
        time.strftime('%d/%b/%Y:%H:%M:%S +0000', time.gmtime()),
        req.method, req.path, res.status.split()[0],
        res.content_length or '-', req.referer or '-',
        req.headers.get('x-trans-id', '-'),
        req.user_agent or '-', trans_time, additional_info or '-',
        os.getpid())
        os.getpid(), policy_index or '-')


def get_trans_id_time(trans_id):
@@ -907,7 +923,7 @@ class NullLogger(object):
    """A no-op logger for eventlet wsgi."""

    def write(self, *args):
        #"Logs" the args to nowhere
        # "Logs" the args to nowhere
        pass

@@ -1069,6 +1085,7 @@ class LoggingHandlerWeakRef(weakref.ref):
    Like a weak reference, but passes through a couple methods that logging
    handlers need.
    """

    def close(self):
        referent = self()
        try:

@@ -1542,6 +1559,17 @@ def parse_options(parser=None, once=False, test_args=None):
    return config, options


def expand_ipv6(address):
    """
    Expand ipv6 address.
    :param address: a string indicating valid ipv6 address
    :returns: a string indicating fully expanded ipv6 address

    """
    packed_ip = socket.inet_pton(socket.AF_INET6, address)
    return socket.inet_ntop(socket.AF_INET6, packed_ip)
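The `inet_pton`/`inet_ntop` round-trip turns any valid IPv6 spelling into one canonical lowercase form (on most platforms the compressed form, despite the docstring's "fully expanded"), so differently written but equal addresses compare equal afterwards. A quick demonstration:

```python
import socket


def expand_ipv6(address):
    # Round-trip through the packed 16-byte form; inet_ntop emits a
    # single canonical spelling for every equivalent input.
    packed_ip = socket.inet_pton(socket.AF_INET6, address)
    return socket.inet_ntop(socket.AF_INET6, packed_ip)
```

This is why the rest of this diff normalizes both sides (device addresses and search values) before string comparison: `'2001:0DB8:0:0:0:0:0:1'` and `'2001:db8::1'` are the same address but different strings.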


def whataremyips():
    """
    Get the machine's ip addresses

@@ -1561,7 +1589,7 @@ def whataremyips():
                # If we have an ipv6 address remove the
                # %ether_interface at the end
                if family == netifaces.AF_INET6:
                    addr = addr.split('%')[0]
                    addr = expand_ipv6(addr.split('%')[0])
                addresses.append(addr)
        except ValueError:
            pass

@@ -2388,7 +2416,7 @@ def dump_recon_cache(cache_dict, cache_file, logger, lock_timeout=2):
            if existing_entry:
                cache_entry = json.loads(existing_entry)
        except ValueError:
            #file doesn't have a valid entry, we'll recreate it
            # file doesn't have a valid entry, we'll recreate it
            pass
        for cache_key, cache_value in cache_dict.items():
            put_recon_cache_entry(cache_entry, cache_key, cache_value)
@@ -2724,14 +2752,15 @@ def tpool_reraise(func, *args, **kwargs):


class ThreadPool(object):
    BYTE = 'a'.encode('utf-8')

    """
    Perform blocking operations in background threads.

    Call its methods from within greenlets to green-wait for results without
    blocking the eventlet reactor (hopefully).
    """

    BYTE = 'a'.encode('utf-8')

    def __init__(self, nthreads=2):
        self.nthreads = nthreads
        self._run_queue = Queue()
@@ -308,7 +308,9 @@ class ContainerController(BaseStorageServer):
            elif requested_policy_index is not None:
                # validate requested policy with existing container
                if requested_policy_index != broker.storage_policy_index:
                    raise HTTPConflict(request=req)
                    raise HTTPConflict(request=req,
                                       headers={'x-backend-storage-policy-index':
                                                broker.storage_policy_index})
        broker.update_put_timestamp(timestamp)
        if broker.is_deleted():
            raise HTTPConflict(request=req)

@@ -378,9 +380,13 @@ class ContainerController(BaseStorageServer):
            if resp:
                return resp
            if created:
                return HTTPCreated(request=req)
                return HTTPCreated(request=req,
                                   headers={'x-backend-storage-policy-index':
                                            broker.storage_policy_index})
            else:
                return HTTPAccepted(request=req)
                return HTTPAccepted(request=req,
                                    headers={'x-backend-storage-policy-index':
                                             broker.storage_policy_index})

    @public
    @timing_stats(sample_rate=0.1)
@@ -29,6 +29,7 @@ from swift.common.direct_client import direct_get_object
from swift.common.internal_client import delete_object, put_object
from swift.common.exceptions import ClientException
from swift.common.ring import Ring
from swift.common.ring.utils import is_local_device
from swift.common.utils import (
    audit_location_generator, clean_content_type, config_true_value,
    FileLikeIter, get_logger, hash_path, quote, urlparse, validate_sync_to,

@@ -239,7 +240,8 @@ class ContainerSync(Daemon):
            x, nodes = self.container_ring.get_nodes(info['account'],
                                                     info['container'])
            for ordinal, node in enumerate(nodes):
                if node['ip'] in self._myips and node['port'] == self._myport:
                if is_local_device(self._myips, self._myport,
                                   node['ip'], node['port']):
                    break
            else:
                return
@@ -6,9 +6,9 @@
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: swift 2.2.2.post26\n"
"Project-Id-Version: swift 2.2.2.post63\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2015-02-06 06:10+0000\n"
"POT-Creation-Date: 2015-02-16 06:30+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
@@ -63,97 +63,97 @@ msgstr ""
msgid "ERROR Could not get account info %s"
msgstr ""

#: swift/account/reaper.py:132 swift/common/utils.py:1964
#: swift/account/reaper.py:133 swift/common/utils.py:1992
#: swift/obj/diskfile.py:468 swift/obj/updater.py:87 swift/obj/updater.py:130
#, python-format
msgid "Skipping %s as it is not mounted"
msgstr ""

#: swift/account/reaper.py:136
#: swift/account/reaper.py:137
msgid "Exception in top-level account reaper loop"
msgstr ""

#: swift/account/reaper.py:139
#: swift/account/reaper.py:140
#, python-format
msgid "Devices pass completed: %.02fs"
msgstr ""

#: swift/account/reaper.py:236
#: swift/account/reaper.py:237
#, python-format
msgid "Beginning pass on account %s"
msgstr ""

#: swift/account/reaper.py:253
#: swift/account/reaper.py:254
#, python-format
msgid "Exception with containers for account %s"
msgstr ""

#: swift/account/reaper.py:260
#: swift/account/reaper.py:261
#, python-format
msgid "Exception with account %s"
msgstr ""

#: swift/account/reaper.py:261
#: swift/account/reaper.py:262
#, python-format
msgid "Incomplete pass on account %s"
msgstr ""

#: swift/account/reaper.py:263
#: swift/account/reaper.py:264
#, python-format
msgid ", %s containers deleted"
msgstr ""

#: swift/account/reaper.py:265
#: swift/account/reaper.py:266
#, python-format
msgid ", %s objects deleted"
msgstr ""

#: swift/account/reaper.py:267
#: swift/account/reaper.py:268
#, python-format
msgid ", %s containers remaining"
msgstr ""

#: swift/account/reaper.py:270
#: swift/account/reaper.py:271
#, python-format
msgid ", %s objects remaining"
msgstr ""

#: swift/account/reaper.py:272
#: swift/account/reaper.py:273
#, python-format
msgid ", %s containers possibly remaining"
msgstr ""

#: swift/account/reaper.py:275
#: swift/account/reaper.py:276
#, python-format
msgid ", %s objects possibly remaining"
msgstr ""

#: swift/account/reaper.py:278
#: swift/account/reaper.py:279
msgid ", return codes: "
msgstr ""

#: swift/account/reaper.py:282
#: swift/account/reaper.py:283
#, python-format
msgid ", elapsed: %.02fs"
msgstr ""

#: swift/account/reaper.py:288
#: swift/account/reaper.py:289
#, python-format
msgid "Account %s has not been reaped since %s"
msgstr ""

#: swift/account/reaper.py:347 swift/account/reaper.py:391
#: swift/account/reaper.py:453 swift/container/updater.py:306
#: swift/account/reaper.py:348 swift/account/reaper.py:392
#: swift/account/reaper.py:454 swift/container/updater.py:306
#, python-format
msgid "Exception with %(ip)s:%(port)s/%(device)s"
msgstr ""

#: swift/account/reaper.py:363
#: swift/account/reaper.py:364
#, python-format
msgid "Exception with objects for container %(container)s for account %(account)s"
msgstr ""

#: swift/account/server.py:275 swift/container/server.py:576
#: swift/account/server.py:275 swift/container/server.py:582
#: swift/obj/server.py:723
#, python-format
msgid "ERROR __call__ error with %(method)s %(path)s "
@@ -189,79 +189,79 @@ msgstr ""
msgid "Invalid pending entry %(file)s: %(entry)s"
msgstr ""

#: swift/common/db_replicator.py:142
#: swift/common/db_replicator.py:143
#, python-format
msgid "ERROR reading HTTP response from %s"
msgstr ""

#: swift/common/db_replicator.py:192
#: swift/common/db_replicator.py:193
#, python-format
msgid "Attempted to replicate %(count)d dbs in %(time).5f seconds (%(rate).5f/s)"
msgstr ""

#: swift/common/db_replicator.py:198
#: swift/common/db_replicator.py:199
#, python-format
msgid "Removed %(remove)d dbs"
msgstr ""

#: swift/common/db_replicator.py:199
#: swift/common/db_replicator.py:200
#, python-format
msgid "%(success)s successes, %(failure)s failures"
msgstr ""

#: swift/common/db_replicator.py:230
#: swift/common/db_replicator.py:231
#, python-format
msgid "ERROR rsync failed with %(code)s: %(args)s"
msgstr ""

#: swift/common/db_replicator.py:293
#: swift/common/db_replicator.py:294
#, python-format
msgid "ERROR Bad response %(status)s from %(host)s"
msgstr ""

#: swift/common/db_replicator.py:452 swift/common/db_replicator.py:676
#: swift/common/db_replicator.py:453 swift/common/db_replicator.py:678
#, python-format
msgid "Quarantining DB %s"
msgstr ""

#: swift/common/db_replicator.py:455
#: swift/common/db_replicator.py:456
#, python-format
msgid "ERROR reading db %s"
msgstr ""

#: swift/common/db_replicator.py:486
#: swift/common/db_replicator.py:487
#, python-format
msgid "ERROR Remote drive not mounted %s"
msgstr ""

#: swift/common/db_replicator.py:488
#: swift/common/db_replicator.py:489
#, python-format
msgid "ERROR syncing %(file)s with node %(node)s"
msgstr ""

#: swift/common/db_replicator.py:516
#: swift/common/db_replicator.py:517
#, python-format
msgid "ERROR while trying to clean up %s"
msgstr ""

#: swift/common/db_replicator.py:542
#: swift/common/db_replicator.py:543
msgid "ERROR Failed to get my own IPs?"
msgstr ""

#: swift/common/db_replicator.py:551
#: swift/common/db_replicator.py:553
#, python-format
msgid "Skipping %(device)s as it is not mounted"
msgstr ""

#: swift/common/db_replicator.py:560
#: swift/common/db_replicator.py:562
msgid "Beginning replication run"
msgstr ""

#: swift/common/db_replicator.py:565
#: swift/common/db_replicator.py:567
msgid "Replication run OVER"
msgstr ""

#: swift/common/db_replicator.py:578
#: swift/common/db_replicator.py:580
msgid "ERROR trying to replicate"
msgstr ""
@@ -382,90 +382,90 @@ msgstr ""
msgid "Unable to locate %s in libc. Leaving as a no-op."
msgstr ""

#: swift/common/utils.py:496
#: swift/common/utils.py:512
msgid "Unable to locate fallocate, posix_fallocate in libc. Leaving as a no-op."
msgstr ""

#: swift/common/utils.py:923
#: swift/common/utils.py:939
msgid "STDOUT: Connection reset by peer"
msgstr ""

#: swift/common/utils.py:925 swift/common/utils.py:928
#: swift/common/utils.py:941 swift/common/utils.py:944
#, python-format
msgid "STDOUT: %s"
msgstr ""

#: swift/common/utils.py:1162
#: swift/common/utils.py:1179
msgid "Connection refused"
msgstr ""

#: swift/common/utils.py:1164
#: swift/common/utils.py:1181
msgid "Host unreachable"
msgstr ""

#: swift/common/utils.py:1166
#: swift/common/utils.py:1183
msgid "Connection timeout"
msgstr ""

#: swift/common/utils.py:1468
#: swift/common/utils.py:1485
msgid "UNCAUGHT EXCEPTION"
msgstr ""

#: swift/common/utils.py:1523
#: swift/common/utils.py:1540
msgid "Error: missing config path argument"
msgstr ""

#: swift/common/utils.py:1528
#: swift/common/utils.py:1545
#, python-format
msgid "Error: unable to locate %s"
msgstr ""

#: swift/common/utils.py:1825
#: swift/common/utils.py:1853
#, python-format
msgid "Unable to read config from %s"
msgstr ""

#: swift/common/utils.py:1831
#: swift/common/utils.py:1859
#, python-format
msgid "Unable to find %s config section in %s"
msgstr ""

#: swift/common/utils.py:2185
#: swift/common/utils.py:2213
#, python-format
msgid "Invalid X-Container-Sync-To format %r"
msgstr ""

#: swift/common/utils.py:2190
#: swift/common/utils.py:2218
#, python-format
msgid "No realm key for %r"
msgstr ""

#: swift/common/utils.py:2194
#: swift/common/utils.py:2222
#, python-format
msgid "No cluster endpoint for %r %r"
msgstr ""

#: swift/common/utils.py:2203
#: swift/common/utils.py:2231
#, python-format
msgid ""
"Invalid scheme %r in X-Container-Sync-To, must be \"//\", \"http\", or "
"\"https\"."
msgstr ""

#: swift/common/utils.py:2207
#: swift/common/utils.py:2235
msgid "Path required in X-Container-Sync-To"
msgstr ""

#: swift/common/utils.py:2210
#: swift/common/utils.py:2238
msgid "Params, queries, and fragments not allowed in X-Container-Sync-To"
msgstr ""

#: swift/common/utils.py:2215
#: swift/common/utils.py:2243
#, python-format
msgid "Invalid host %r in X-Container-Sync-To"
msgstr ""

#: swift/common/utils.py:2407
#: swift/common/utils.py:2435
msgid "Exception dumping recon cache"
msgstr ""
@@ -494,24 +494,24 @@ msgstr ""
msgid "Following CNAME chain for %(given_domain)s to %(found_domain)s"
msgstr ""

#: swift/common/middleware/ratelimit.py:247
#: swift/common/middleware/ratelimit.py:248
#, python-format
msgid "Returning 497 because of blacklisting: %s"
msgstr ""

#: swift/common/middleware/ratelimit.py:262
#: swift/common/middleware/ratelimit.py:263
#, python-format
msgid "Ratelimit sleep log: %(sleep)s for %(account)s/%(container)s/%(object)s"
msgstr ""

#: swift/common/middleware/ratelimit.py:270
#: swift/common/middleware/ratelimit.py:271
#, python-format
msgid ""
"Returning 498 for %(meth)s to %(acc)s/%(cont)s/%(obj)s . Ratelimit (Max "
"Sleep) %(e)s"
msgstr ""

#: swift/common/middleware/ratelimit.py:292
#: swift/common/middleware/ratelimit.py:293
msgid "Warning: Cannot ratelimit without a memcached client"
msgstr ""
@@ -642,52 +642,52 @@ msgid ""
"later)"
msgstr ""

#: swift/container/sync.py:192
#: swift/container/sync.py:193
msgid "Begin container sync \"once\" mode"
msgstr ""

#: swift/container/sync.py:204
#: swift/container/sync.py:205
#, python-format
msgid "Container sync \"once\" mode completed: %.02fs"
msgstr ""

#: swift/container/sync.py:212
#: swift/container/sync.py:213
#, python-format
msgid ""
"Since %(time)s: %(sync)s synced [%(delete)s deletes, %(put)s puts], "
"%(skip)s skipped, %(fail)s failed"
msgstr ""

#: swift/container/sync.py:264
#: swift/container/sync.py:266
#, python-format
msgid "ERROR %(db_file)s: %(validate_sync_to_err)s"
msgstr ""

#: swift/container/sync.py:320
#: swift/container/sync.py:322
#, python-format
msgid "ERROR Syncing %s"
msgstr ""

#: swift/container/sync.py:408
#: swift/container/sync.py:410
#, python-format
msgid ""
"Unknown exception trying to GET: %(node)r %(account)r %(container)r "
"%(object)r"
msgstr ""

#: swift/container/sync.py:442
#: swift/container/sync.py:444
#, python-format
msgid "Unauth %(sync_from)r => %(sync_to)r"
msgstr ""

#: swift/container/sync.py:448
#: swift/container/sync.py:450
#, python-format
msgid ""
"Not found %(sync_from)r => %(sync_to)r - object "
"%(obj_name)r"
msgstr ""

#: swift/container/sync.py:455 swift/container/sync.py:462
#: swift/container/sync.py:457 swift/container/sync.py:464
#, python-format
msgid "ERROR Syncing %(db_file)s %(row)s"
msgstr ""

@@ -697,8 +697,8 @@ msgstr ""
msgid "ERROR: Failed to get paths to drive partitions: %s"
msgstr ""

#: swift/container/updater.py:91 swift/obj/replicator.py:428
#: swift/obj/replicator.py:512
#: swift/container/updater.py:91 swift/obj/replicator.py:479
#: swift/obj/replicator.py:565
#, python-format
msgid "%s is not mounted"
msgstr ""
@@ -887,103 +887,108 @@ msgstr ""
msgid "ERROR container update failed with %(ip)s:%(port)s/%(dev)s"
msgstr ""

#: swift/obj/replicator.py:135
#: swift/obj/replicator.py:138
#, python-format
msgid "Killing long-running rsync: %s"
msgstr ""

#: swift/obj/replicator.py:149
#: swift/obj/replicator.py:152
#, python-format
msgid "Bad rsync return code: %(ret)d <- %(args)s"
msgstr ""

#: swift/obj/replicator.py:156 swift/obj/replicator.py:160
#: swift/obj/replicator.py:159 swift/obj/replicator.py:163
#, python-format
msgid "Successful rsync of %(src)s at %(dst)s (%(time).03f)"
msgstr ""

#: swift/obj/replicator.py:256
#: swift/obj/replicator.py:277
#, python-format
msgid "Removing %s objects"
msgstr ""

#: swift/obj/replicator.py:281
#, python-format
msgid "Removing partition: %s"
msgstr ""

#: swift/obj/replicator.py:259
#: swift/obj/replicator.py:285
msgid "Error syncing handoff partition"
msgstr ""

#: swift/obj/replicator.py:296
#: swift/obj/replicator.py:342
#, python-format
msgid "%(ip)s/%(device)s responded as unmounted"
msgstr ""

#: swift/obj/replicator.py:301
#: swift/obj/replicator.py:347
#, python-format
msgid "Invalid response %(resp)s from %(ip)s"
msgstr ""

#: swift/obj/replicator.py:333
#: swift/obj/replicator.py:382
#, python-format
msgid "Error syncing with node: %s"
msgstr ""

#: swift/obj/replicator.py:337
#: swift/obj/replicator.py:386
msgid "Error syncing partition"
msgstr ""

#: swift/obj/replicator.py:350
#: swift/obj/replicator.py:399
#, python-format
msgid ""
"%(replicated)d/%(total)d (%(percentage).2f%%) partitions replicated in "
"%(time).2fs (%(rate).2f/sec, %(remaining)s remaining)"
msgstr ""

#: swift/obj/replicator.py:361
#: swift/obj/replicator.py:410
#, python-format
msgid ""
"%(checked)d suffixes checked - %(hashed).2f%% hashed, %(synced).2f%% "
"synced"
msgstr ""

#: swift/obj/replicator.py:368
#: swift/obj/replicator.py:417
#, python-format
msgid "Partition times: max %(max).4fs, min %(min).4fs, med %(med).4fs"
msgstr ""

#: swift/obj/replicator.py:376
#: swift/obj/replicator.py:425
#, python-format
msgid "Nothing replicated for %s seconds."
msgstr ""

#: swift/obj/replicator.py:405
#: swift/obj/replicator.py:454
msgid "Lockup detected.. killing live coros."
msgstr ""

#: swift/obj/replicator.py:515
#: swift/obj/replicator.py:568
msgid "Ring change detected. Aborting current replication pass."
msgstr ""

#: swift/obj/replicator.py:536
#: swift/obj/replicator.py:589
msgid "Exception in top-level replication loop"
msgstr ""

#: swift/obj/replicator.py:545
#: swift/obj/replicator.py:598
msgid "Running object replicator in script mode."
msgstr ""

#: swift/obj/replicator.py:563
#: swift/obj/replicator.py:616
#, python-format
msgid "Object replication complete (once). (%.02f minutes)"
msgstr ""

#: swift/obj/replicator.py:570
#: swift/obj/replicator.py:623
msgid "Starting object replicator in daemon mode."
msgstr ""

#: swift/obj/replicator.py:574
#: swift/obj/replicator.py:627
msgid "Starting object replication pass."
msgstr ""

#: swift/obj/replicator.py:579
#: swift/obj/replicator.py:632
#, python-format
msgid "Object replication complete. (%.02f minutes)"
msgstr ""
@@ -1056,21 +1061,21 @@ msgstr ""
msgid "ERROR with remote server %(ip)s:%(port)s/%(device)s"
msgstr ""

#: swift/proxy/server.py:379
#: swift/proxy/server.py:380
msgid "ERROR Unhandled exception in request"
msgstr ""

#: swift/proxy/server.py:434
#: swift/proxy/server.py:435
#, python-format
msgid "Node error limited %(ip)s:%(port)s (%(device)s)"
msgstr ""

#: swift/proxy/server.py:451 swift/proxy/server.py:469
#: swift/proxy/server.py:452 swift/proxy/server.py:470
#, python-format
msgid "%(msg)s %(ip)s:%(port)s/%(device)s"
msgstr ""

#: swift/proxy/server.py:539
#: swift/proxy/server.py:540
#, python-format
msgid "ERROR with %(type)s server %(ip)s:%(port)s/%(device)s re: %(info)s"
msgstr ""
@@ -1079,60 +1084,60 @@ msgstr ""
msgid "Account"
msgstr ""

#: swift/proxy/controllers/base.py:697 swift/proxy/controllers/base.py:730
#: swift/proxy/controllers/base.py:698 swift/proxy/controllers/base.py:731
#: swift/proxy/controllers/obj.py:191 swift/proxy/controllers/obj.py:318
#: swift/proxy/controllers/obj.py:358 swift/proxy/controllers/obj.py:376
#: swift/proxy/controllers/obj.py:502
msgid "Object"
msgstr ""

#: swift/proxy/controllers/base.py:698
#: swift/proxy/controllers/base.py:699
msgid "Trying to read during GET (retrying)"
msgstr ""

#: swift/proxy/controllers/base.py:731
#: swift/proxy/controllers/base.py:732
msgid "Trying to read during GET"
msgstr ""

#: swift/proxy/controllers/base.py:735
#: swift/proxy/controllers/base.py:736
#, python-format
msgid "Client did not read from proxy within %ss"
msgstr ""

#: swift/proxy/controllers/base.py:740
#: swift/proxy/controllers/base.py:741
msgid "Client disconnected on read"
msgstr ""

#: swift/proxy/controllers/base.py:742
#: swift/proxy/controllers/base.py:743
msgid "Trying to send to client"
msgstr ""

#: swift/proxy/controllers/base.py:779 swift/proxy/controllers/base.py:1049
#: swift/proxy/controllers/base.py:780 swift/proxy/controllers/base.py:1050
#, python-format
msgid "Trying to %(method)s %(path)s"
msgstr ""

#: swift/proxy/controllers/base.py:816 swift/proxy/controllers/base.py:1037
#: swift/proxy/controllers/base.py:817 swift/proxy/controllers/base.py:1038
#: swift/proxy/controllers/obj.py:350 swift/proxy/controllers/obj.py:390
msgid "ERROR Insufficient Storage"
msgstr ""

#: swift/proxy/controllers/base.py:819
#: swift/proxy/controllers/base.py:820
#, python-format
msgid "ERROR %(status)d %(body)s From %(type)s Server"
msgstr ""

#: swift/proxy/controllers/base.py:1040
#: swift/proxy/controllers/base.py:1041
#, python-format
msgid "ERROR %(status)d Trying to %(method)s %(path)sFrom Container Server"
msgstr ""

#: swift/proxy/controllers/base.py:1152
#: swift/proxy/controllers/base.py:1153
#, python-format
msgid "%(type)s returning 503 for %(statuses)s"
msgstr ""

#: swift/proxy/controllers/container.py:91 swift/proxy/controllers/obj.py:117
#: swift/proxy/controllers/container.py:95 swift/proxy/controllers/obj.py:117
msgid "Container"
msgstr ""
@@ -8,8 +8,8 @@ msgid ""
msgstr ""
"Project-Id-Version: Swift\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2015-02-06 06:10+0000\n"
"PO-Revision-Date: 2015-02-05 16:52+0000\n"
"POT-Creation-Date: 2015-02-16 06:30+0000\n"
"PO-Revision-Date: 2015-02-13 19:15+0000\n"
"Last-Translator: openstackjenkins <jenkins@openstack.org>\n"
"Language-Team: Chinese (China) "
"(http://www.transifex.com/projects/p/swift/language/zh_CN/)\n"
@@ -65,97 +65,97 @@ msgstr "审计失败%s: %s"
msgid "ERROR Could not get account info %s"
msgstr "错误:无法获取账号信息%s"

#: swift/account/reaper.py:132 swift/common/utils.py:1964
#: swift/account/reaper.py:133 swift/common/utils.py:1992
#: swift/obj/diskfile.py:468 swift/obj/updater.py:87 swift/obj/updater.py:130
#, python-format
msgid "Skipping %s as it is not mounted"
msgstr "挂载失败 跳过%s"

#: swift/account/reaper.py:136
#: swift/account/reaper.py:137
msgid "Exception in top-level account reaper loop"
msgstr "异常出现在top-level账号reaper环"

#: swift/account/reaper.py:139
#: swift/account/reaper.py:140
#, python-format
msgid "Devices pass completed: %.02fs"
msgstr "设备通过完成: %.02fs"

#: swift/account/reaper.py:236
#: swift/account/reaper.py:237
#, python-format
msgid "Beginning pass on account %s"
msgstr "账号%s开始通过"

#: swift/account/reaper.py:253
#: swift/account/reaper.py:254
#, python-format
msgid "Exception with containers for account %s"
msgstr "账号%s内容器出现异常"

#: swift/account/reaper.py:260
#: swift/account/reaper.py:261
#, python-format
msgid "Exception with account %s"
msgstr "账号%s出现异常"

#: swift/account/reaper.py:261
#: swift/account/reaper.py:262
#, python-format
msgid "Incomplete pass on account %s"
msgstr "账号%s未完成通过"

#: swift/account/reaper.py:263
#: swift/account/reaper.py:264
#, python-format
msgid ", %s containers deleted"
msgstr ",删除容器%s"

#: swift/account/reaper.py:265
#: swift/account/reaper.py:266
#, python-format
msgid ", %s objects deleted"
msgstr ",删除对象%s"

#: swift/account/reaper.py:267
#: swift/account/reaper.py:268
#, python-format
msgid ", %s containers remaining"
msgstr ",剩余容器%s"

#: swift/account/reaper.py:270
#: swift/account/reaper.py:271
#, python-format
msgid ", %s objects remaining"
msgstr ",剩余对象%s"

#: swift/account/reaper.py:272
#: swift/account/reaper.py:273
#, python-format
msgid ", %s containers possibly remaining"
msgstr ",可能剩余容器%s"

#: swift/account/reaper.py:275
#: swift/account/reaper.py:276
#, python-format
msgid ", %s objects possibly remaining"
msgstr ",可能剩余对象%s"

#: swift/account/reaper.py:278
#: swift/account/reaper.py:279
msgid ", return codes: "
msgstr ",返回代码:"

#: swift/account/reaper.py:282
#: swift/account/reaper.py:283
#, python-format
msgid ", elapsed: %.02fs"
msgstr ",耗时:%.02fs"

#: swift/account/reaper.py:288
#: swift/account/reaper.py:289
#, python-format
msgid "Account %s has not been reaped since %s"
msgstr "账号%s自%s起未被reaped"

#: swift/account/reaper.py:347 swift/account/reaper.py:391
#: swift/account/reaper.py:453 swift/container/updater.py:306
#: swift/account/reaper.py:348 swift/account/reaper.py:392
#: swift/account/reaper.py:454 swift/container/updater.py:306
#, python-format
msgid "Exception with %(ip)s:%(port)s/%(device)s"
msgstr "%(ip)s:%(port)s/%(device)s出现异常"

#: swift/account/reaper.py:363
#: swift/account/reaper.py:364
#, python-format
msgid "Exception with objects for container %(container)s for account %(account)s"
msgstr "账号%(account)s容器%(container)s的对象出现异常"

#: swift/account/server.py:275 swift/container/server.py:576
#: swift/account/server.py:275 swift/container/server.py:582
#: swift/obj/server.py:723
#, python-format
msgid "ERROR __call__ error with %(method)s %(path)s "
@@ -191,79 +191,79 @@ msgstr "服务器错误并尝试去回滚已经锁住的链接"
msgid "Invalid pending entry %(file)s: %(entry)s"
msgstr "不可用的等待输入%(file)s: %(entry)s"

#: swift/common/db_replicator.py:142
#: swift/common/db_replicator.py:143
#, python-format
msgid "ERROR reading HTTP response from %s"
msgstr "读取HTTP错误 响应来源%s"

#: swift/common/db_replicator.py:192
#: swift/common/db_replicator.py:193
#, python-format
msgid "Attempted to replicate %(count)d dbs in %(time).5f seconds (%(rate).5f/s)"
msgstr "%(time).5f seconds (%(rate).5f/s)尝试复制%(count)d dbs"

#: swift/common/db_replicator.py:198
#: swift/common/db_replicator.py:199
#, python-format
msgid "Removed %(remove)d dbs"
msgstr "删除%(remove)d dbs"

#: swift/common/db_replicator.py:199
#: swift/common/db_replicator.py:200
#, python-format
msgid "%(success)s successes, %(failure)s failures"
msgstr "%(success)s成功,%(failure)s失败"

#: swift/common/db_replicator.py:230
#: swift/common/db_replicator.py:231
#, python-format
msgid "ERROR rsync failed with %(code)s: %(args)s"
msgstr "错误 rsync失败 %(code)s: %(args)s"

#: swift/common/db_replicator.py:293
#: swift/common/db_replicator.py:294
#, python-format
msgid "ERROR Bad response %(status)s from %(host)s"
msgstr "失败响应错误%(status)s来自%(host)s"

#: swift/common/db_replicator.py:452 swift/common/db_replicator.py:676
#: swift/common/db_replicator.py:453 swift/common/db_replicator.py:678
#, python-format
msgid "Quarantining DB %s"
msgstr "隔离DB%s"

#: swift/common/db_replicator.py:455
#: swift/common/db_replicator.py:456
#, python-format
msgid "ERROR reading db %s"
msgstr "错误 读取db %s"

#: swift/common/db_replicator.py:486
#: swift/common/db_replicator.py:487
#, python-format
msgid "ERROR Remote drive not mounted %s"
msgstr "错误 远程驱动器无法挂载 %s"

#: swift/common/db_replicator.py:488
#: swift/common/db_replicator.py:489
#, python-format
msgid "ERROR syncing %(file)s with node %(node)s"
msgstr "错误 同步 %(file)s 和 节点%(node)s"

#: swift/common/db_replicator.py:516
#: swift/common/db_replicator.py:517
#, python-format
msgid "ERROR while trying to clean up %s"
msgstr "清理时出现错误%s"

#: swift/common/db_replicator.py:542
#: swift/common/db_replicator.py:543
msgid "ERROR Failed to get my own IPs?"
msgstr "错误 无法获得我方IPs?"

#: swift/common/db_replicator.py:551
#: swift/common/db_replicator.py:553
#, python-format
msgid "Skipping %(device)s as it is not mounted"
msgstr "因无法挂载跳过%(device)s"

#: swift/common/db_replicator.py:560
#: swift/common/db_replicator.py:562
msgid "Beginning replication run"
msgstr "开始运行复制"

#: swift/common/db_replicator.py:565
#: swift/common/db_replicator.py:567
msgid "Replication run OVER"
msgstr "复制运行结束"

#: swift/common/db_replicator.py:578
#: swift/common/db_replicator.py:580
msgid "ERROR trying to replicate"
msgstr "尝试复制时发生错误"
@@ -386,90 +386,90 @@ msgstr ""
msgid "Unable to locate %s in libc. Leaving as a no-op."
msgstr "无法查询到%s 保留为no-op"

#: swift/common/utils.py:496
#: swift/common/utils.py:512
msgid "Unable to locate fallocate, posix_fallocate in libc. Leaving as a no-op."
msgstr "无法查询到fallocate, posix_fallocate。保存为no-op"

#: swift/common/utils.py:923
#: swift/common/utils.py:939
msgid "STDOUT: Connection reset by peer"
msgstr "STDOUT:连接被peer重新设置"

#: swift/common/utils.py:925 swift/common/utils.py:928
#: swift/common/utils.py:941 swift/common/utils.py:944
#, python-format
msgid "STDOUT: %s"
msgstr "STDOUT: %s"

#: swift/common/utils.py:1162
#: swift/common/utils.py:1179
msgid "Connection refused"
msgstr "连接被拒绝"

#: swift/common/utils.py:1164
#: swift/common/utils.py:1181
msgid "Host unreachable"
msgstr "无法连接到主机"

#: swift/common/utils.py:1166
#: swift/common/utils.py:1183
msgid "Connection timeout"
msgstr "连接超时"

#: swift/common/utils.py:1468
#: swift/common/utils.py:1485
msgid "UNCAUGHT EXCEPTION"
msgstr "未捕获的异常"

#: swift/common/utils.py:1523
#: swift/common/utils.py:1540
msgid "Error: missing config path argument"
msgstr "错误:设置路径信息丢失"

#: swift/common/utils.py:1528
#: swift/common/utils.py:1545
#, python-format
msgid "Error: unable to locate %s"
msgstr "错误:无法查询到 %s"

#: swift/common/utils.py:1825
#: swift/common/utils.py:1853
#, python-format
msgid "Unable to read config from %s"
msgstr "无法从%s读取设置"

#: swift/common/utils.py:1831
#: swift/common/utils.py:1859
#, python-format
msgid "Unable to find %s config section in %s"
msgstr "无法在%s中查找到%s设置部分"

#: swift/common/utils.py:2185
#: swift/common/utils.py:2213
#, python-format
msgid "Invalid X-Container-Sync-To format %r"
msgstr "无效的X-Container-Sync-To格式%r"

#: swift/common/utils.py:2190
#: swift/common/utils.py:2218
#, python-format
msgid "No realm key for %r"
msgstr "%r权限key不存在"

#: swift/common/utils.py:2194
#: swift/common/utils.py:2222
#, python-format
msgid "No cluster endpoint for %r %r"
msgstr "%r %r的集群节点不存在"

#: swift/common/utils.py:2203
#: swift/common/utils.py:2231
#, python-format
msgid ""
"Invalid scheme %r in X-Container-Sync-To, must be \"//\", \"http\", or "
"\"https\"."
msgstr "在X-Container-Sync-To中%r是无效的方案,须为\"//\", \"http\", or \"https\"。"

#: swift/common/utils.py:2207
#: swift/common/utils.py:2235
msgid "Path required in X-Container-Sync-To"
msgstr "在X-Container-Sync-To中路径是必须的"

#: swift/common/utils.py:2210
#: swift/common/utils.py:2238
msgid "Params, queries, and fragments not allowed in X-Container-Sync-To"
msgstr "在X-Container-Sync-To中,变量,查询和碎片不被允许"

#: swift/common/utils.py:2215
#: swift/common/utils.py:2243
#, python-format
msgid "Invalid host %r in X-Container-Sync-To"
msgstr "X-Container-Sync-To中无效主机%r"

#: swift/common/utils.py:2407
#: swift/common/utils.py:2435
msgid "Exception dumping recon cache"
msgstr "执行dump recon的时候出现异常"
@@ -498,17 +498,17 @@ msgstr "集合%(given_domain)s到%(found_domain)s"
msgid "Following CNAME chain for %(given_domain)s to %(found_domain)s"
msgstr "跟随CNAME链从%(given_domain)s到%(found_domain)s"

#: swift/common/middleware/ratelimit.py:247
#: swift/common/middleware/ratelimit.py:248
#, python-format
msgid "Returning 497 because of blacklisting: %s"
msgstr "返回497因为黑名单:%s"

#: swift/common/middleware/ratelimit.py:262
#: swift/common/middleware/ratelimit.py:263
#, python-format
msgid "Ratelimit sleep log: %(sleep)s for %(account)s/%(container)s/%(object)s"
msgstr "流量控制休眠日志:%(sleep)s for %(account)s/%(container)s/%(object)s"

#: swift/common/middleware/ratelimit.py:270
#: swift/common/middleware/ratelimit.py:271
#, python-format
msgid ""
"Returning 498 for %(meth)s to %(acc)s/%(cont)s/%(obj)s . Ratelimit (Max "
@@ -517,7 +517,7 @@ msgstr ""
"返还498从%(meth)s到%(acc)s/%(cont)s/%(obj)s,流量控制(Max \"\n"
"\"Sleep) %(e)s"

#: swift/common/middleware/ratelimit.py:292
#: swift/common/middleware/ratelimit.py:293
msgid "Warning: Cannot ratelimit without a memcached client"
msgstr "警告:缺失缓存客户端 无法控制流量 "
@@ -648,16 +648,16 @@ msgid ""
"later)"
msgstr "错误 账号更新失败 %(ip)s:%(port)s/%(device)s (稍后尝试)"

#: swift/container/sync.py:192
#: swift/container/sync.py:193
msgid "Begin container sync \"once\" mode"
msgstr "开始容器同步\"once\"模式"

#: swift/container/sync.py:204
#: swift/container/sync.py:205
#, python-format
msgid "Container sync \"once\" mode completed: %.02fs"
msgstr "容器同步\"once\"模式完成:%.02fs"

#: swift/container/sync.py:212
#: swift/container/sync.py:213
#, python-format
msgid ""
"Since %(time)s: %(sync)s synced [%(delete)s deletes, %(put)s puts], "
@@ -666,36 +666,36 @@ msgstr ""
"自%(time)s起:%(sync)s完成同步 [%(delete)s 删除, %(put)s 上传], \"\n"
"\"%(skip)s 跳过, %(fail)s 失败"

#: swift/container/sync.py:264
#: swift/container/sync.py:266
#, python-format
msgid "ERROR %(db_file)s: %(validate_sync_to_err)s"
msgstr "错误 %(db_file)s: %(validate_sync_to_err)s"

#: swift/container/sync.py:320
#: swift/container/sync.py:322
#, python-format
msgid "ERROR Syncing %s"
msgstr "同步时发生错误%s"

#: swift/container/sync.py:408
#: swift/container/sync.py:410
#, python-format
msgid ""
"Unknown exception trying to GET: %(node)r %(account)r %(container)r "
"%(object)r"
msgstr "尝试获取时发生未知的异常%(node)r %(account)r %(container)r %(object)r"

#: swift/container/sync.py:442
#: swift/container/sync.py:444
#, python-format
msgid "Unauth %(sync_from)r => %(sync_to)r"
msgstr "未授权%(sync_from)r => %(sync_to)r"

#: swift/container/sync.py:448
#: swift/container/sync.py:450
#, python-format
msgid ""
"Not found %(sync_from)r => %(sync_to)r - object "
"%(obj_name)r"
msgstr "未找到: %(sync_from)r => %(sync_to)r - object %(obj_name)r"

#: swift/container/sync.py:455 swift/container/sync.py:462
#: swift/container/sync.py:457 swift/container/sync.py:464
#, python-format
msgid "ERROR Syncing %(db_file)s %(row)s"
msgstr "同步错误 %(db_file)s %(row)s"
@@ -705,8 +705,8 @@ msgstr "同步错误 %(db_file)s %(row)s"
msgid "ERROR: Failed to get paths to drive partitions: %s"
msgstr "%s未挂载"

#: swift/container/updater.py:91 swift/obj/replicator.py:428
#: swift/obj/replicator.py:512
#: swift/container/updater.py:91 swift/obj/replicator.py:479
#: swift/obj/replicator.py:565
#, python-format
msgid "%s is not mounted"
msgstr "%s未挂载"
@@ -905,50 +905,55 @@ msgstr "错误 容器更新失败:%(status)d 从%(ip)s:%(port)s/%(dev)s得到
msgid "ERROR container update failed with %(ip)s:%(port)s/%(dev)s"
msgstr "错误 容器更新失败%(ip)s:%(port)s/%(dev)s"

#: swift/obj/replicator.py:135
#: swift/obj/replicator.py:138
#, python-format
msgid "Killing long-running rsync: %s"
msgstr "终止long-running同步: %s"

#: swift/obj/replicator.py:149
#: swift/obj/replicator.py:152
#, python-format
msgid "Bad rsync return code: %(ret)d <- %(args)s"
msgstr "Bad rsync返还代码:%(ret)d <- %(args)s"

#: swift/obj/replicator.py:156 swift/obj/replicator.py:160
#: swift/obj/replicator.py:159 swift/obj/replicator.py:163
#, python-format
msgid "Successful rsync of %(src)s at %(dst)s (%(time).03f)"
msgstr "成功的rsync %(src)s at %(dst)s (%(time).03f)"

#: swift/obj/replicator.py:256
#: swift/obj/replicator.py:277
#, python-format
msgid "Removing %s objects"
msgstr ""

#: swift/obj/replicator.py:281
#, python-format
msgid "Removing partition: %s"
msgstr "移除分区:%s"

#: swift/obj/replicator.py:259
#: swift/obj/replicator.py:285
msgid "Error syncing handoff partition"
msgstr "执行同步切换分区时发生错误"

#: swift/obj/replicator.py:296
#: swift/obj/replicator.py:342
#, python-format
msgid "%(ip)s/%(device)s responded as unmounted"
msgstr "%(ip)s/%(device)s的回应为未挂载"

#: swift/obj/replicator.py:301
#: swift/obj/replicator.py:347
#, python-format
msgid "Invalid response %(resp)s from %(ip)s"
msgstr "无效的回应%(resp)s来自%(ip)s"

#: swift/obj/replicator.py:333
#: swift/obj/replicator.py:382
#, python-format
msgid "Error syncing with node: %s"
msgstr "执行同步时节点%s发生错误"

#: swift/obj/replicator.py:337
#: swift/obj/replicator.py:386
msgid "Error syncing partition"
msgstr "执行同步分区时发生错误"

#: swift/obj/replicator.py:350
#: swift/obj/replicator.py:399
#, python-format
msgid ""
"%(replicated)d/%(total)d (%(percentage).2f%%) partitions replicated in "
@@ -957,53 +962,53 @@ msgstr ""
"%(replicated)d/%(total)d (%(percentage).2f%%) 分区被复制 持续时间为 \"\n"
"\"%(time).2fs (%(rate).2f/sec, %(remaining)s remaining)"

#: swift/obj/replicator.py:361
#: swift/obj/replicator.py:410
#, python-format
msgid ""
"%(checked)d suffixes checked - %(hashed).2f%% hashed, %(synced).2f%% "
"synced"
msgstr "%(checked)d后缀已被检查 %(hashed).2f%% hashed, %(synced).2f%% synced"

#: swift/obj/replicator.py:368
#: swift/obj/replicator.py:417
#, python-format
msgid "Partition times: max %(max).4fs, min %(min).4fs, med %(med).4fs"
msgstr "分区时间: max %(max).4fs, min %(min).4fs, med %(med).4fs"

#: swift/obj/replicator.py:376
#: swift/obj/replicator.py:425
#, python-format
msgid "Nothing replicated for %s seconds."
msgstr "%s秒无复制"

#: swift/obj/replicator.py:405
#: swift/obj/replicator.py:454
msgid "Lockup detected.. killing live coros."
msgstr "检测到lockup。终止正在执行的coros"

#: swift/obj/replicator.py:515
#: swift/obj/replicator.py:568
msgid "Ring change detected. Aborting current replication pass."
msgstr "Ring改变被检测到。退出现有的复制通过"

#: swift/obj/replicator.py:536
#: swift/obj/replicator.py:589
msgid "Exception in top-level replication loop"
msgstr "top-level复制圈出现异常"

#: swift/obj/replicator.py:545
#: swift/obj/replicator.py:598
msgid "Running object replicator in script mode."
msgstr "在加密模式下执行对象复制"

#: swift/obj/replicator.py:563
#: swift/obj/replicator.py:616
#, python-format
msgid "Object replication complete (once). (%.02f minutes)"
msgstr "对象复制完成(一次)。(%.02f minutes)"

#: swift/obj/replicator.py:570
#: swift/obj/replicator.py:623
msgid "Starting object replicator in daemon mode."
msgstr "在守护模式下开始对象复制"

#: swift/obj/replicator.py:574
#: swift/obj/replicator.py:627
msgid "Starting object replication pass."
msgstr "开始通过对象复制"

#: swift/obj/replicator.py:579
#: swift/obj/replicator.py:632
#, python-format
msgid "Object replication complete. (%.02f minutes)"
msgstr "对象复制完成。(%.02f minutes)"
@@ -1076,21 +1081,21 @@ msgstr "错误 Pickle问题 隔离%s"
msgid "ERROR with remote server %(ip)s:%(port)s/%(device)s"
msgstr "远程服务器发生错误 %(ip)s:%(port)s/%(device)s"

#: swift/proxy/server.py:379
#: swift/proxy/server.py:380
msgid "ERROR Unhandled exception in request"
msgstr "错误 未处理的异常发出请求"

#: swift/proxy/server.py:434
#: swift/proxy/server.py:435
#, python-format
msgid "Node error limited %(ip)s:%(port)s (%(device)s)"
msgstr "节点错误极限 %(ip)s:%(port)s (%(device)s)"

#: swift/proxy/server.py:451 swift/proxy/server.py:469
#: swift/proxy/server.py:452 swift/proxy/server.py:470
#, python-format
msgid "%(msg)s %(ip)s:%(port)s/%(device)s"
msgstr "%(msg)s %(ip)s:%(port)s/%(device)s"

#: swift/proxy/server.py:539
#: swift/proxy/server.py:540
#, python-format
msgid "ERROR with %(type)s server %(ip)s:%(port)s/%(device)s re: %(info)s"
msgstr "%(type)s服务器发生错误 %(ip)s:%(port)s/%(device)s re: %(info)s"
@@ -1099,60 +1104,60 @@ msgstr "%(type)s服务器发生错误 %(ip)s:%(port)s/%(device)s re: %(info)s"
msgid "Account"
msgstr "账号"

#: swift/proxy/controllers/base.py:697 swift/proxy/controllers/base.py:730
#: swift/proxy/controllers/base.py:698 swift/proxy/controllers/base.py:731
#: swift/proxy/controllers/obj.py:191 swift/proxy/controllers/obj.py:318
#: swift/proxy/controllers/obj.py:358 swift/proxy/controllers/obj.py:376
#: swift/proxy/controllers/obj.py:502
msgid "Object"
msgstr "对象"

#: swift/proxy/controllers/base.py:698
#: swift/proxy/controllers/base.py:699
msgid "Trying to read during GET (retrying)"
msgstr "执行GET时尝试读取(重新尝试)"

#: swift/proxy/controllers/base.py:731
#: swift/proxy/controllers/base.py:732
msgid "Trying to read during GET"
msgstr "执行GET时尝试读取"

#: swift/proxy/controllers/base.py:735
#: swift/proxy/controllers/base.py:736
#, python-format
msgid "Client did not read from proxy within %ss"
msgstr "客户尚未从代理处读取%ss"

#: swift/proxy/controllers/base.py:740
#: swift/proxy/controllers/base.py:741
msgid "Client disconnected on read"
msgstr "客户读取时中断"

#: swift/proxy/controllers/base.py:742
#: swift/proxy/controllers/base.py:743
msgid "Trying to send to client"
msgstr "尝试发送到客户端"

#: swift/proxy/controllers/base.py:779 swift/proxy/controllers/base.py:1049
#: swift/proxy/controllers/base.py:780 swift/proxy/controllers/base.py:1050
#, python-format
msgid "Trying to %(method)s %(path)s"
msgstr "尝试执行%(method)s %(path)s"

#: swift/proxy/controllers/base.py:816 swift/proxy/controllers/base.py:1037
#: swift/proxy/controllers/base.py:817 swift/proxy/controllers/base.py:1038
#: swift/proxy/controllers/obj.py:350 swift/proxy/controllers/obj.py:390
msgid "ERROR Insufficient Storage"
msgstr "错误 存储空间不足"

#: swift/proxy/controllers/base.py:819
#: swift/proxy/controllers/base.py:820
#, python-format
msgid "ERROR %(status)d %(body)s From %(type)s Server"
msgstr "错误 %(status)d %(body)s 来自 %(type)s 服务器"

#: swift/proxy/controllers/base.py:1040
#: swift/proxy/controllers/base.py:1041
#, python-format
msgid "ERROR %(status)d Trying to %(method)s %(path)sFrom Container Server"
msgstr ""

#: swift/proxy/controllers/base.py:1152
#: swift/proxy/controllers/base.py:1153
#, python-format
msgid "%(type)s returning 503 for %(statuses)s"
msgstr "%(type)s 返回 503 在 %(statuses)s"

#: swift/proxy/controllers/container.py:91 swift/proxy/controllers/obj.py:117
#: swift/proxy/controllers/container.py:95 swift/proxy/controllers/obj.py:117
msgid "Container"
msgstr "容器"
@@ -14,7 +14,8 @@
# limitations under the License.

import os
from os.path import isdir, isfile, join
import errno
from os.path import isdir, isfile, join, dirname
import random
import shutil
import time
@ -27,10 +28,11 @@ from eventlet import GreenPool, tpool, Timeout, sleep, hubs
|
|||
from eventlet.green import subprocess
|
||||
from eventlet.support.greenlets import GreenletExit
|
||||
|
||||
from swift.common.ring.utils import is_local_device
|
||||
from swift.common.utils import whataremyips, unlink_older_than, \
|
||||
compute_eta, get_logger, dump_recon_cache, ismount, \
|
||||
rsync_ip, mkdirs, config_true_value, list_from_csv, get_hub, \
|
||||
tpool_reraise, config_auto_int_value
|
||||
tpool_reraise, config_auto_int_value, storage_directory
|
||||
from swift.common.bufferedhttp import http_connect
|
||||
from swift.common.daemon import Daemon
|
||||
from swift.common.http import HTTP_OK, HTTP_INSUFFICIENT_STORAGE
|
||||
|
@ -94,7 +96,8 @@ class ObjectReplicator(Daemon):
|
|||
conf.get('handoff_delete', 'auto'), 0)
|
||||
self._diskfile_mgr = DiskFileManager(conf, self.logger)
|
||||
|
||||
def sync(self, node, job, suffixes): # Just exists for doc anchor point
|
||||
# Just exists for doc anchor point
|
||||
def sync(self, node, job, suffixes, *args, **kwargs):
|
||||
"""
|
||||
Synchronize local suffix directories from a partition with a remote
|
||||
node.
|
||||
|
@ -105,7 +108,7 @@ class ObjectReplicator(Daemon):
|
|||
|
||||
:returns: boolean indicating success or failure
|
||||
"""
|
||||
return self.sync_method(node, job, suffixes)
|
||||
return self.sync_method(node, job, suffixes, *args, **kwargs)
|
||||
|
||||
def get_object_ring(self, policy_idx):
|
||||
"""
|
||||
|
@ -167,7 +170,7 @@ class ObjectReplicator(Daemon):
|
|||
sync method in Swift.
|
||||
"""
|
||||
if not os.path.exists(job['path']):
|
||||
return False
|
||||
return False, set()
|
||||
args = [
|
||||
'rsync',
|
||||
'--recursive',
|
||||
|
@ -192,14 +195,15 @@ class ObjectReplicator(Daemon):
|
|||
args.append(spath)
|
||||
had_any = True
|
||||
if not had_any:
|
||||
return False
|
||||
return False, set()
|
||||
data_dir = get_data_dir(job['policy_idx'])
|
||||
args.append(join(rsync_module, node['device'],
|
||||
data_dir, job['partition']))
|
||||
return self._rsync(args) == 0
|
||||
return self._rsync(args) == 0, set()
|
||||
|
||||
def ssync(self, node, job, suffixes):
|
||||
return ssync_sender.Sender(self, node, job, suffixes)()
|
||||
def ssync(self, node, job, suffixes, remote_check_objs=None):
|
||||
return ssync_sender.Sender(
|
||||
self, node, job, suffixes, remote_check_objs)()
|
||||
|
||||
def check_ring(self, object_ring):
|
||||
"""
|
||||
|
@@ -232,9 +236,18 @@ class ObjectReplicator(Daemon):
         try:
             responses = []
             suffixes = tpool.execute(tpool_get_suffixes, job['path'])
+            synced_remote_regions = {}
+            delete_objs = None
             if suffixes:
                 for node in job['nodes']:
-                    success = self.sync(node, job, suffixes)
+                    kwargs = {}
+                    if node['region'] in synced_remote_regions and \
+                            self.conf.get('sync_method') == 'ssync':
+                        kwargs['remote_check_objs'] = \
+                            synced_remote_regions[node['region']]
+                    # cand_objs is a list of objects for deletion
+                    success, cand_objs = self.sync(
+                        node, job, suffixes, **kwargs)
                     if success:
                         with Timeout(self.http_timeout):
                             conn = http_connect(
@@ -243,7 +256,14 @@ class ObjectReplicator(Daemon):
                                 node['device'], job['partition'], 'REPLICATE',
                                 '/' + '-'.join(suffixes), headers=self.headers)
                             conn.getresponse().read()
+                        if node['region'] != job['region'] and cand_objs:
+                            synced_remote_regions[node['region']] = cand_objs
                     responses.append(success)
+            for region, cand_objs in synced_remote_regions.iteritems():
+                if delete_objs is None:
+                    delete_objs = cand_objs
+                else:
+                    delete_objs = delete_objs.intersection(cand_objs)
             if self.handoff_delete:
                 # delete handoff if we have had handoff_delete successes
                 delete_handoff = len([resp for resp in responses if resp]) >= \
@@ -253,14 +273,34 @@ class ObjectReplicator(Daemon):
                 delete_handoff = len(responses) == len(job['nodes']) and \
                     all(responses)
             if not suffixes or delete_handoff:
-                self.logger.info(_("Removing partition: %s"), job['path'])
-                tpool.execute(shutil.rmtree, job['path'], ignore_errors=True)
+                if delete_objs:
+                    self.logger.info(_("Removing %s objects"),
+                                     len(delete_objs))
+                    self.delete_handoff_objs(job, delete_objs)
+                else:
+                    self.logger.info(_("Removing partition: %s"), job['path'])
+                    tpool.execute(
+                        shutil.rmtree, job['path'], ignore_errors=True)
         except (Exception, Timeout):
             self.logger.exception(_("Error syncing handoff partition"))
         finally:
             self.partition_times.append(time.time() - begin)
             self.logger.timing_since('partition.delete.timing', begin)

+    def delete_handoff_objs(self, job, delete_objs):
+        for object_hash in delete_objs:
+            object_path = storage_directory(job['obj_path'], job['partition'],
+                                            object_hash)
+            tpool.execute(shutil.rmtree, object_path, ignore_errors=True)
+            suffix_dir = dirname(object_path)
+            try:
+                os.rmdir(suffix_dir)
+            except OSError as e:
+                if e.errno not in (errno.ENOENT, errno.ENOTEMPTY):
+                    self.logger.exception(
+                        "Unexpected error trying to cleanup suffix dir:%r",
+                        suffix_dir)
+
     def update(self, job):
         """
         High-level method that replicates a single partition.
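The handoff-deletion hunks above accumulate, per remote region, the set of object hashes confirmed in sync, and delete only the hashes present in every region's candidate set. A standalone sketch of that intersection step (the function name is hypothetical; only the set logic mirrors the diff):

```python
def objs_safe_to_delete(synced_remote_regions):
    """synced_remote_regions maps region id -> set of in-sync hashes;
    an object may leave the handoff only if every region reported it."""
    delete_objs = None
    for cand_objs in synced_remote_regions.values():
        if delete_objs is None:
            delete_objs = set(cand_objs)
        else:
            delete_objs = delete_objs.intersection(cand_objs)
    return delete_objs or set()

regions = {1: {'a1', 'b2', 'c3'}, 2: {'b2', 'c3', 'd4'}}
print(sorted(objs_safe_to_delete(regions)))  # ['b2', 'c3']
```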
@@ -279,6 +319,8 @@ class ObjectReplicator(Daemon):
             self.suffix_hash += hashed
             self.logger.update_stats('suffix.hashes', hashed)
             attempts_left = len(job['nodes'])
+            synced_remote_regions = set()
+            random.shuffle(job['nodes'])
             nodes = itertools.chain(
                 job['nodes'],
                 job['object_ring'].get_more_nodes(int(job['partition'])))
@@ -286,6 +328,10 @@ class ObjectReplicator(Daemon):
                 # If this throws StopIteration it will be caught way below
                 node = next(nodes)
                 attempts_left -= 1
+                # if we have already synced to this remote region,
+                # don't sync again on this replication pass
+                if node['region'] in synced_remote_regions:
+                    continue
                 try:
                     with Timeout(self.http_timeout):
                         resp = http_connect(
@@ -319,7 +365,7 @@ class ObjectReplicator(Daemon):
                     suffixes = [suffix for suffix in local_hash if
                                 local_hash[suffix] !=
                                 remote_hash.get(suffix, -1)]
-                    self.sync(node, job, suffixes)
+                    success, _junk = self.sync(node, job, suffixes)
                     with Timeout(self.http_timeout):
                         conn = http_connect(
                             node['replication_ip'], node['replication_port'],
@@ -327,6 +373,9 @@ class ObjectReplicator(Daemon):
                             '/' + '-'.join(suffixes),
                             headers=self.headers)
                         conn.getresponse().read()
+                    # add only remote region when replicate succeeded
+                    if success and node['region'] != job['region']:
+                        synced_remote_regions.add(node['region'])
                     self.suffix_sync += len(suffixes)
                     self.logger.update_stats('suffix.syncs', len(suffixes))
                 except (Exception, Timeout):
@@ -417,8 +466,10 @@ class ObjectReplicator(Daemon):
             data_dir = get_data_dir(policy.idx)
             for local_dev in [dev for dev in obj_ring.devs
                               if (dev
-                                  and dev['replication_ip'] in ips
-                                  and dev['replication_port'] == self.port
+                                  and is_local_device(ips,
+                                                      self.port,
+                                                      dev['replication_ip'],
+                                                      dev['replication_port'])
                                   and (override_devices is None
                                        or dev['device'] in override_devices))]:
                 dev_path = join(self.devices_dir, local_dev['device'])
@@ -447,11 +498,13 @@ class ObjectReplicator(Daemon):
                         jobs.append(
                             dict(path=job_path,
                                  device=local_dev['device'],
+                                 obj_path=obj_path,
                                  nodes=nodes,
                                  delete=len(nodes) > len(part_nodes) - 1,
                                  policy_idx=policy.idx,
                                  partition=partition,
-                                 object_ring=obj_ring))
+                                 object_ring=obj_ring,
+                                 region=local_dev['region']))
                 except ValueError:
                     continue
         return jobs
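The job-building loop above now defers the locality test to `is_local_device()` from `swift.common.ring.utils`. Roughly, such a predicate compares a ring device's replication endpoint against the node's own addresses; this is a simplified sketch under that assumption (the real helper may do more, e.g. name resolution):

```python
def is_local_device(my_ips, my_port, dev_ip, dev_port):
    # A ring device is "ours" when its replication IP is one of this
    # node's IPs and it is served on the port this daemon binds to.
    return dev_ip in my_ips and dev_port == my_port

print(is_local_device(['10.0.0.1', '127.0.0.1'], 6000, '10.0.0.1', 6000))  # True
```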
@@ -14,6 +14,7 @@
 # limitations under the License.

 import urllib
+from itertools import ifilter
 from swift.common import bufferedhttp
 from swift.common import exceptions
 from swift.common import http
@@ -28,7 +29,7 @@ class Sender(object):
     process is there.
     """

-    def __init__(self, daemon, node, job, suffixes):
+    def __init__(self, daemon, node, job, suffixes, remote_check_objs=None):
         self.daemon = daemon
         self.node = node
         self.job = job
@@ -37,7 +38,11 @@ class Sender(object):
         self.response = None
         self.response_buffer = ''
         self.response_chunk_left = 0
-        self.send_list = None
+        self.available_set = set()
+        # When remote_check_objs is given in job, ssync_sender trys only to
+        # make sure those objects exist or not in remote.
+        self.remote_check_objs = remote_check_objs
+        self.send_list = []
         self.failures = 0

     @property
@@ -45,8 +50,16 @@ class Sender(object):
         return int(self.job.get('policy_idx', 0))

     def __call__(self):
+        """
+        Perform ssync with remote node.
+
+        :returns: a 2-tuple, in the form (success, can_delete_objs).
+
+        Success is a boolean, and can_delete_objs is an iterable of strings
+        representing the hashes which are in sync with the remote node.
+        """
         if not self.suffixes:
-            return True
+            return True, set()
         try:
             # Double try blocks in case our main error handler fails.
             try:
@@ -57,9 +70,20 @@ class Sender(object):
                 # other exceptions will be logged with a full stack trace.
                 self.connect()
                 self.missing_check()
-                self.updates()
+                if not self.remote_check_objs:
+                    self.updates()
+                    can_delete_obj = self.available_set
+                else:
+                    # when we are initialized with remote_check_objs we don't
+                    # *send* any requested updates; instead we only collect
+                    # what's already in sync and safe for deletion
+                    can_delete_obj = self.available_set.difference(
+                        self.send_list)
                 self.disconnect()
-                return self.failures == 0
+                if not self.failures:
+                    return True, can_delete_obj
+                else:
+                    return False, set()
         except (exceptions.MessageTimeout,
                 exceptions.ReplicationException) as err:
             self.daemon.logger.error(
@@ -85,7 +109,7 @@ class Sender(object):
             # would only get called if the above except Exception handler
             # failed (bad node or job data).
             self.daemon.logger.exception('EXCEPTION in replication.Sender')
-            return False
+            return False, set()

     def connect(self):
         """
@@ -96,7 +120,7 @@ class Sender(object):
                 self.daemon.conn_timeout, 'connect send'):
             self.connection = bufferedhttp.BufferedHTTPConnection(
                 '%s:%s' % (self.node['replication_ip'],
-                          self.node['replication_port']))
+                           self.node['replication_port']))
             self.connection.putrequest('REPLICATION', '/%s/%s' % (
                 self.node['device'], self.job['partition']))
             self.connection.putheader('Transfer-Encoding', 'chunked')
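`Sender.__call__` now reports a `(success, can_delete_objs)` pair: in the normal path every offered hash is a candidate, while in check-only mode the hashes the remote still asked to receive (`send_list`) are subtracted from the hashes we have (`available_set`). A minimal sketch of that bookkeeping (function name hypothetical):

```python
def can_delete_objs(available_set, send_list, check_only):
    # In check-only mode, anything the remote did NOT ask us to send is
    # already in sync; otherwise updates were sent, so everything is.
    if check_only:
        return set(available_set).difference(send_list)
    return set(available_set)

print(sorted(can_delete_objs({'h1', 'h2', 'h3'}, ['h2'], True)))  # ['h1', 'h3']
```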
@@ -169,10 +193,14 @@ class Sender(object):
                 self.daemon.node_timeout, 'missing_check start'):
             msg = ':MISSING_CHECK: START\r\n'
             self.connection.send('%x\r\n%s\r\n' % (len(msg), msg))
-        for path, object_hash, timestamp in \
-                self.daemon._diskfile_mgr.yield_hashes(
-                    self.job['device'], self.job['partition'],
-                    self.policy_idx, self.suffixes):
+        hash_gen = self.daemon._diskfile_mgr.yield_hashes(
+            self.job['device'], self.job['partition'],
+            self.policy_idx, self.suffixes)
+        if self.remote_check_objs:
+            hash_gen = ifilter(lambda (path, object_hash, timestamp):
+                               object_hash in self.remote_check_objs, hash_gen)
+        for path, object_hash, timestamp in hash_gen:
+            self.available_set.add(object_hash)
             with exceptions.MessageTimeout(
                     self.daemon.node_timeout,
                     'missing_check send line'):
@@ -197,7 +225,6 @@ class Sender(object):
             elif line:
                 raise exceptions.ReplicationException(
                     'Unexpected response: %r' % line[:1024])
-        self.send_list = []
         while True:
             with exceptions.MessageTimeout(
                     self.daemon.http_timeout, 'missing_check line wait'):
@@ -274,7 +301,7 @@ class Sender(object):
         """
         Sends a DELETE subrequest with the given information.
         """
-        msg = ['DELETE ' + url_path, 'X-Timestamp: ' + timestamp]
+        msg = ['DELETE ' + url_path, 'X-Timestamp: ' + timestamp.internal]
         msg = '\r\n'.join(msg) + '\r\n\r\n'
         with exceptions.MessageTimeout(
                 self.daemon.node_timeout, 'send_delete'):
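The `missing_check` change wraps `yield_hashes()` in an `ifilter` when `remote_check_objs` is set, so only the hashes of interest are offered to the remote. In Python 3 terms (the Python 2 tuple-unpacking lambda in the diff does the same thing; row contents here are illustrative):

```python
def filter_hashes(hash_gen, remote_check_objs=None):
    # Yield (path, object_hash, timestamp) rows, keeping only the hashes
    # the caller wants verified when remote_check_objs is given.
    for path, object_hash, timestamp in hash_gen:
        if remote_check_objs is None or object_hash in remote_check_objs:
            yield path, object_hash, timestamp

rows = [('/sda/objs/o1', 'aaa', 1), ('/sda/objs/o2', 'bbb', 2)]
print([h for _, h, _ in filter_hashes(iter(rows), {'bbb'})])  # ['bbb']
```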
@@ -185,7 +185,8 @@ def headers_to_object_info(headers, status_int=HTTP_OK):
         'length': headers.get('content-length'),
         'type': headers.get('content-type'),
         'etag': headers.get('etag'),
-        'meta': meta
+        'meta': meta,
+        'sysmeta': sysmeta
     }
     return info

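`headers_to_object_info` now carries system metadata alongside user metadata. A sketch of splitting response headers by Swift's `x-object-meta-` / `x-object-sysmeta-` prefix convention (the helper name and sample headers are illustrative, not the proxy's actual parsing code):

```python
def split_object_metadata(headers):
    # Partition headers into user metadata and system metadata by prefix.
    meta, sysmeta = {}, {}
    for key, value in headers.items():
        k = key.lower()
        if k.startswith('x-object-meta-'):
            meta[k[len('x-object-meta-'):]] = value
        elif k.startswith('x-object-sysmeta-'):
            sysmeta[k[len('x-object-sysmeta-'):]] = value
    return meta, sysmeta

hdrs = {'X-Object-Meta-Color': 'blue',
        'X-Object-Sysmeta-Ec-Etag': 'abc123',
        'Content-Length': '12'}
print(split_object_metadata(hdrs))  # ({'color': 'blue'}, {'ec-etag': 'abc123'})
```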
@@ -46,7 +46,9 @@ class ContainerController(Controller):
         st = self.server_type.lower()
         return ['x-remove-%s-read' % st,
                 'x-remove-%s-write' % st,
-                'x-remove-versions-location']
+                'x-remove-versions-location',
+                'x-remove-%s-sync-key' % st,
+                'x-remove-%s-sync-to' % st]

     def _convert_policy_to_index(self, req):
         """
@@ -84,6 +86,10 @@ class ContainerController(Controller):
     def GETorHEAD(self, req):
         """Handler for HTTP GET/HEAD requests."""
         if not self.account_info(self.account_name, req)[1]:
+            if 'swift.authorize' in req.environ:
+                aresp = req.environ['swift.authorize'](req)
+                if aresp:
+                    return aresp
             return HTTPNotFound(request=req)
         part = self.app.container_ring.get_part(
             self.account_name, self.container_name)
@@ -186,6 +186,7 @@ class Application(object):
             'x-container-read, x-container-write, '
             'x-container-sync-key, x-container-sync-to, '
             'x-account-meta-temp-url-key, x-account-meta-temp-url-key-2, '
+            'x-container-meta-temp-url-key, x-container-meta-temp-url-key-2, '
             'x-account-access-control')
         self.swift_owner_headers = [
             name.strip().title()
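The owner-header list above is parsed from a comma-separated string, each name stripped and title-cased before comparison against request headers. For example:

```python
raw = ('x-container-read, x-container-write, '
       'x-container-meta-temp-url-key, x-container-meta-temp-url-key-2, '
       'x-account-access-control')
# Same .strip().title() normalization as in the proxy Application above.
swift_owner_headers = [name.strip().title() for name in raw.split(',')]
print(swift_owner_headers[:2])  # ['X-Container-Read', 'X-Container-Write']
```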
@@ -582,7 +582,10 @@ class Container(Base):
         if self.conn.response.status == 204:
             required_fields = [['bytes_used', 'x-container-bytes-used'],
                                ['object_count', 'x-container-object-count']]
-            optional_fields = [['versions', 'x-versions-location']]
+            optional_fields = [
+                ['versions', 'x-versions-location'],
+                ['tempurl_key', 'x-container-meta-temp-url-key'],
+                ['tempurl_key2', 'x-container-meta-temp-url-key-2']]

             return self.header_fields(required_fields, optional_fields)

@@ -2746,6 +2746,177 @@ class TestTempurlUTF8(Base2, TestTempurl):
     set_up = False


+class TestContainerTempurlEnv(object):
+    tempurl_enabled = None  # tri-state: None initially, then True/False
+
+    @classmethod
+    def setUp(cls):
+        cls.conn = Connection(tf.config)
+        cls.conn.authenticate()
+
+        if cls.tempurl_enabled is None:
+            cls.tempurl_enabled = 'tempurl' in cluster_info
+            if not cls.tempurl_enabled:
+                return
+
+        cls.tempurl_key = Utils.create_name()
+        cls.tempurl_key2 = Utils.create_name()
+
+        cls.account = Account(
+            cls.conn, tf.config.get('account', tf.config['username']))
+        cls.account.delete_containers()
+
+        cls.container = cls.account.container(Utils.create_name())
+        if not cls.container.create({
+                'x-container-meta-temp-url-key': cls.tempurl_key,
+                'x-container-meta-temp-url-key-2': cls.tempurl_key2}):
+            raise ResponseError(cls.conn.response)
+
+        cls.obj = cls.container.file(Utils.create_name())
+        cls.obj.write("obj contents")
+        cls.other_obj = cls.container.file(Utils.create_name())
+        cls.other_obj.write("other obj contents")
+
+
+class TestContainerTempurl(Base):
+    env = TestContainerTempurlEnv
+    set_up = False
+
+    def setUp(self):
+        super(TestContainerTempurl, self).setUp()
+        if self.env.tempurl_enabled is False:
+            raise SkipTest("TempURL not enabled")
+        elif self.env.tempurl_enabled is not True:
+            # just some sanity checking
+            raise Exception(
+                "Expected tempurl_enabled to be True/False, got %r" %
+                (self.env.tempurl_enabled,))
+
+        expires = int(time.time()) + 86400
+        sig = self.tempurl_sig(
+            'GET', expires, self.env.conn.make_path(self.env.obj.path),
+            self.env.tempurl_key)
+        self.obj_tempurl_parms = {'temp_url_sig': sig,
+                                  'temp_url_expires': str(expires)}
+
+    def tempurl_sig(self, method, expires, path, key):
+        return hmac.new(
+            key,
+            '%s\n%s\n%s' % (method, expires, urllib.unquote(path)),
+            hashlib.sha1).hexdigest()
+
+    def test_GET(self):
+        contents = self.env.obj.read(
+            parms=self.obj_tempurl_parms,
+            cfg={'no_auth_token': True})
+        self.assertEqual(contents, "obj contents")
+
+        # GET tempurls also allow HEAD requests
+        self.assert_(self.env.obj.info(parms=self.obj_tempurl_parms,
+                                       cfg={'no_auth_token': True}))
+
+    def test_GET_with_key_2(self):
+        expires = int(time.time()) + 86400
+        sig = self.tempurl_sig(
+            'GET', expires, self.env.conn.make_path(self.env.obj.path),
+            self.env.tempurl_key2)
+        parms = {'temp_url_sig': sig,
+                 'temp_url_expires': str(expires)}
+
+        contents = self.env.obj.read(parms=parms, cfg={'no_auth_token': True})
+        self.assertEqual(contents, "obj contents")
+
+    def test_PUT(self):
+        new_obj = self.env.container.file(Utils.create_name())
+
+        expires = int(time.time()) + 86400
+        sig = self.tempurl_sig(
+            'PUT', expires, self.env.conn.make_path(new_obj.path),
+            self.env.tempurl_key)
+        put_parms = {'temp_url_sig': sig,
+                     'temp_url_expires': str(expires)}
+
+        new_obj.write('new obj contents',
+                      parms=put_parms, cfg={'no_auth_token': True})
+        self.assertEqual(new_obj.read(), "new obj contents")
+
+        # PUT tempurls also allow HEAD requests
+        self.assert_(new_obj.info(parms=put_parms,
+                                  cfg={'no_auth_token': True}))
+
+    def test_HEAD(self):
+        expires = int(time.time()) + 86400
+        sig = self.tempurl_sig(
+            'HEAD', expires, self.env.conn.make_path(self.env.obj.path),
+            self.env.tempurl_key)
+        head_parms = {'temp_url_sig': sig,
+                      'temp_url_expires': str(expires)}
+
+        self.assert_(self.env.obj.info(parms=head_parms,
+                                       cfg={'no_auth_token': True}))
+        # HEAD tempurls don't allow PUT or GET requests, despite the fact that
+        # PUT and GET tempurls both allow HEAD requests
+        self.assertRaises(ResponseError, self.env.other_obj.read,
+                          cfg={'no_auth_token': True},
+                          parms=self.obj_tempurl_parms)
+        self.assert_status([401])
+
+        self.assertRaises(ResponseError, self.env.other_obj.write,
+                          'new contents',
+                          cfg={'no_auth_token': True},
+                          parms=self.obj_tempurl_parms)
+        self.assert_status([401])
+
+    def test_different_object(self):
+        contents = self.env.obj.read(
+            parms=self.obj_tempurl_parms,
+            cfg={'no_auth_token': True})
+        self.assertEqual(contents, "obj contents")
+
+        self.assertRaises(ResponseError, self.env.other_obj.read,
+                          cfg={'no_auth_token': True},
+                          parms=self.obj_tempurl_parms)
+        self.assert_status([401])
+
+    def test_changing_sig(self):
+        contents = self.env.obj.read(
+            parms=self.obj_tempurl_parms,
+            cfg={'no_auth_token': True})
+        self.assertEqual(contents, "obj contents")
+
+        parms = self.obj_tempurl_parms.copy()
+        if parms['temp_url_sig'][0] == 'a':
+            parms['temp_url_sig'] = 'b' + parms['temp_url_sig'][1:]
+        else:
+            parms['temp_url_sig'] = 'a' + parms['temp_url_sig'][1:]
+
+        self.assertRaises(ResponseError, self.env.obj.read,
+                          cfg={'no_auth_token': True},
+                          parms=parms)
+        self.assert_status([401])
+
+    def test_changing_expires(self):
+        contents = self.env.obj.read(
+            parms=self.obj_tempurl_parms,
+            cfg={'no_auth_token': True})
+        self.assertEqual(contents, "obj contents")
+
+        parms = self.obj_tempurl_parms.copy()
+        if parms['temp_url_expires'][-1] == '0':
+            parms['temp_url_expires'] = parms['temp_url_expires'][:-1] + '1'
+        else:
+            parms['temp_url_expires'] = parms['temp_url_expires'][:-1] + '0'
+
+        self.assertRaises(ResponseError, self.env.obj.read,
+                          cfg={'no_auth_token': True},
+                          parms=parms)
+        self.assert_status([401])
+
+
+class TestContainerTempurlUTF8(Base2, TestContainerTempurl):
+    set_up = False
+
+
 class TestSloTempurlEnv(object):
     enabled = None  # tri-state: None initially, then True/False

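The `tempurl_sig` helper in the tests above is an HMAC-SHA1 over `"METHOD\nEXPIRES\nPATH"`, hex-encoded. A self-contained Python 3 equivalent (the sample key, path, and expiry are illustrative):

```python
import hashlib
import hmac

def tempurl_sig(method, expires, path, key):
    # HMAC-SHA1 over "METHOD\nEXPIRES\nPATH", hex-encoded, mirroring the
    # functional tests' tempurl_sig helper.
    msg = '%s\n%s\n%s' % (method, expires, path)
    return hmac.new(key.encode(), msg.encode(), hashlib.sha1).hexdigest()

sig = tempurl_sig('GET', 1400000000, '/v1/AUTH_test/c/o', 'mykey')
print(len(sig))  # 40 hex characters
```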
@@ -19,6 +19,7 @@ from subprocess import Popen, PIPE
 import sys
 from time import sleep, time
 from collections import defaultdict
+import unittest
 from nose import SkipTest

 from swiftclient import get_auth, head_account
@@ -123,10 +124,6 @@ def kill_server(port, port2server, pids):
             sleep(0.1)


-def kill_servers(port2server, pids):
-    Manager(['all']).kill()
-
-
 def kill_nonprimary_server(primary_nodes, port2server, pids):
     primary_ports = [n['port'] for n in primary_nodes]
     for port, server in port2server.iteritems():
@@ -141,18 +138,20 @@ def kill_nonprimary_server(primary_nodes, port2server, pids):
             return port


-def get_ring(ring_name, server=None, force_validate=None):
+def get_ring(ring_name, required_replicas, required_devices,
+             server=None, force_validate=None):
     if not server:
         server = ring_name
     ring = Ring('/etc/swift', ring_name=ring_name)
     if not VALIDATE_RSYNC and not force_validate:
         return ring
     # easy sanity checks
-    if ring.replica_count != 3:
-        print 'WARNING: %s has %s replicas instead of 3' % (
-            ring.serialized_path, ring.replica_count)
-    assert 4 == len(ring.devs), '%s has %s devices instead of 4' % (
-        ring.serialized_path, len(ring.devs))
+    if ring.replica_count != required_replicas:
+        raise SkipTest('%s has %s replicas instead of %s' % (
+            ring.serialized_path, ring.replica_count, required_replicas))
+    if len(ring.devs) != required_devices:
+        raise SkipTest('%s has %s devices instead of %s' % (
+            ring.serialized_path, len(ring.devs), required_devices))
     # map server to config by port
     port_to_config = {}
     for server_ in Manager([server]):
@@ -167,12 +166,13 @@ def get_ring(ring_name, server=None, force_validate=None):
             if device == dev['device']:
                 dev_path = os.path.join(conf['devices'], device)
                 full_path = os.path.realpath(dev_path)
-                assert os.path.exists(full_path), \
-                    'device %s in %s was not found (%s)' % (
-                        device, conf['devices'], full_path)
+                if not os.path.exists(full_path):
+                    raise SkipTest(
+                        'device %s in %s was not found (%s)' %
+                        (device, conf['devices'], full_path))
                 break
         else:
-            raise AssertionError(
+            raise SkipTest(
                 "unable to find ring device %s under %s's devices (%s)" % (
                     dev['device'], server, conf['devices']))
     # verify server is exposing rsync device
@@ -184,81 +184,127 @@ def get_ring(ring_name, server=None, force_validate=None):
     p = Popen(cmd, shell=True, stdout=PIPE)
     stdout, _stderr = p.communicate()
     if p.returncode:
-        raise AssertionError('unable to connect to rsync '
-                             'export %s (%s)' % (rsync_export, cmd))
+        raise SkipTest('unable to connect to rsync '
+                       'export %s (%s)' % (rsync_export, cmd))
     for line in stdout.splitlines():
         if line.rsplit(None, 1)[-1] == dev['device']:
             break
     else:
-        raise AssertionError("unable to find ring device %s under rsync's "
-                             "exported devices for %s (%s)" % (
-                                 dev['device'], rsync_export, cmd))
+        raise SkipTest("unable to find ring device %s under rsync's "
+                       "exported devices for %s (%s)" %
+                       (dev['device'], rsync_export, cmd))
     return ring


-def reset_environment():
-    p = Popen("resetswift 2>&1", shell=True, stdout=PIPE)
-    stdout, _stderr = p.communicate()
-    print stdout
-    Manager(['all']).stop()
-    pids = {}
-    try:
-        account_ring = get_ring('account')
-        container_ring = get_ring('container')
-        policy = POLICIES.default
-        object_ring = get_ring(policy.ring_name, 'object')
-        Manager(['main']).start(wait=False)
-        port2server = {}
-        for server, port in [('account', 6002), ('container', 6001),
-                             ('object', 6000)]:
-            for number in xrange(1, 9):
-                port2server[port + (number * 10)] = '%s%d' % (server, number)
-        for port in port2server:
-            check_server(port, port2server, pids)
-        port2server[8080] = 'proxy'
-        url, token, account = check_server(8080, port2server, pids)
-        config_dict = defaultdict(dict)
-        for name in ('account', 'container', 'object'):
-            for server_name in (name, '%s-replicator' % name):
-                for server in Manager([server_name]):
-                    for i, conf in enumerate(server.conf_files(), 1):
-                        config_dict[server.server][i] = conf
-    except BaseException:
-        try:
-            raise
-        except AssertionError as e:
-            raise SkipTest(e)
-    finally:
-        try:
-            kill_servers(port2server, pids)
-        except Exception:
-            pass
-    return pids, port2server, account_ring, container_ring, object_ring, \
-        policy, url, token, account, config_dict
+def get_policy(**kwargs):
+    kwargs.setdefault('is_deprecated', False)
+    # go thru the policies and make sure they match the requirements of kwargs
+    for policy in POLICIES:
+        # TODO: for EC, pop policy type here and check it first
+        matches = True
+        for key, value in kwargs.items():
+            try:
+                if getattr(policy, key) != value:
+                    matches = False
+            except AttributeError:
+                matches = False
+        if matches:
+            return policy
+    raise SkipTest('No policy matching %s' % kwargs)


-def get_to_final_state():
-    replicators = Manager(['account-replicator', 'container-replicator',
-                           'object-replicator'])
-    replicators.stop()
-    updaters = Manager(['container-updater', 'object-updater'])
-    updaters.stop()
-
-    replicators.once()
-    updaters.once()
-    replicators.once()
+class ProbeTest(unittest.TestCase):
+    """
+    Don't instantiate this directly, use a child class instead.
+    """
+
+    def setUp(self):
+        p = Popen("resetswift 2>&1", shell=True, stdout=PIPE)
+        stdout, _stderr = p.communicate()
+        print stdout
+        Manager(['all']).stop()
+        self.pids = {}
+        try:
+            self.account_ring = get_ring(
+                'account',
+                self.acct_cont_required_replicas,
+                self.acct_cont_required_devices)
+            self.container_ring = get_ring(
+                'container',
+                self.acct_cont_required_replicas,
+                self.acct_cont_required_devices)
+            self.policy = get_policy(**self.policy_requirements)
+            self.object_ring = get_ring(
+                self.policy.ring_name,
+                self.obj_required_replicas,
+                self.obj_required_devices,
+                server='object')
+            Manager(['main']).start(wait=False)
+            self.port2server = {}
+            for server, port in [('account', 6002), ('container', 6001),
+                                 ('object', 6000)]:
+                for number in xrange(1, 9):
+                    self.port2server[port + (number * 10)] = \
+                        '%s%d' % (server, number)
+            for port in self.port2server:
+                check_server(port, self.port2server, self.pids)
+            self.port2server[8080] = 'proxy'
+            self.url, self.token, self.account = \
+                check_server(8080, self.port2server, self.pids)
+            self.configs = defaultdict(dict)
+            for name in ('account', 'container', 'object'):
+                for server_name in (name, '%s-replicator' % name):
+                    for server in Manager([server_name]):
+                        for i, conf in enumerate(server.conf_files(), 1):
+                            self.configs[server.server][i] = conf
+            self.replicators = Manager(
+                ['account-replicator', 'container-replicator',
+                 'object-replicator'])
+            self.updaters = Manager(['container-updater', 'object-updater'])
+        except BaseException:
+            try:
+                raise
+            finally:
+                try:
+                    Manager(['all']).kill()
+                except Exception:
+                    pass
+
+    def tearDown(self):
+        Manager(['all']).kill()
+
+    def get_to_final_state(self):
+        # these .stop()s are probably not strictly necessary,
+        # but may prevent race conditions
+        self.replicators.stop()
+        self.updaters.stop()
+
+        self.replicators.once()
+        self.updaters.once()
+        self.replicators.once()
+
+
+class ReplProbeTest(ProbeTest):
+
+    acct_cont_required_replicas = 3
+    acct_cont_required_devices = 4
+    obj_required_replicas = 3
+    obj_required_devices = 4
+    policy_requirements = {'is_default': True}


 if __name__ == "__main__":
     for server in ('account', 'container'):
         try:
-            get_ring(server, force_validate=True)
-        except AssertionError as err:
+            get_ring(server, 3, 4,
+                     force_validate=True)
+        except SkipTest as err:
             sys.exit('%s ERROR: %s' % (server, err))
         print '%s OK' % server
     for policy in POLICIES:
         try:
-            get_ring(policy.ring_name, server='object', force_validate=True)
-        except AssertionError as err:
+            get_ring(policy.ring_name, 3, 4,
+                     server='object', force_validate=True)
+        except SkipTest as err:
             sys.exit('object ERROR (%s): %s' % (policy.name, err))
         print 'object OK (%s)' % policy.name
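`get_policy()` above returns the first storage policy whose attributes satisfy every keyword requirement, skipping the test otherwise. A minimal standalone sketch (the `Policy` class and `POLICIES` list are stand-ins for Swift's real storage-policy objects, and `LookupError` replaces the nose `SkipTest`):

```python
class Policy(object):
    def __init__(self, name, is_default, is_deprecated=False):
        self.name = name
        self.is_default = is_default
        self.is_deprecated = is_deprecated

POLICIES = [Policy('gold', True), Policy('silver', False)]

def get_policy(**kwargs):
    # non-deprecated policies only, unless the caller says otherwise
    kwargs.setdefault('is_deprecated', False)
    for policy in POLICIES:
        if all(getattr(policy, key, None) == value
               for key, value in kwargs.items()):
            return policy
    raise LookupError('No policy matching %s' % kwargs)

print(get_policy(is_default=True).name)  # gold
```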
@@ -14,50 +14,26 @@
# See the License for the specific language governing permissions and
# limitations under the License.

from unittest import main, TestCase
from unittest import main

from swiftclient import client

from swift.common import direct_client
from swift.common.manager import Manager
from test.probe.common import get_to_final_state, kill_nonprimary_server, \
    kill_server, kill_servers, reset_environment, start_server
from test.probe.common import kill_nonprimary_server, \
    kill_server, ReplProbeTest, start_server


class TestAccountFailures(TestCase):

    def setUp(self):
        (self.pids, self.port2server, self.account_ring, self.container_ring,
         self.object_ring, self.policy, self.url, self.token,
         self.account, self.configs) = reset_environment()

    def tearDown(self):
        kill_servers(self.port2server, self.pids)
class TestAccountFailures(ReplProbeTest):

    def test_main(self):
        # Create container1 and container2
        # Assert account level sees them
        # Create container2/object1
        # Assert account level doesn't see it yet
        # Get to final state
        # Assert account level now sees the container2/object1
        # Kill account servers excepting two of the primaries
        # Delete container1
        # Assert account level knows container1 is gone but doesn't know about
        # container2/object2 yet
        # Put container2/object2
        # Run container updaters
        # Assert account level now knows about container2/object2
        # Restart other primary account server
        # Assert that server doesn't know about container1's deletion or the
        # new container2/object2 yet
        # Get to final state
        # Assert that server is now up to date

        container1 = 'container1'
        client.put_container(self.url, self.token, container1)
        container2 = 'container2'
        client.put_container(self.url, self.token, container2)

        # Assert account level sees them
        headers, containers = client.get_account(self.url, self.token)
        self.assertEquals(headers['x-account-container-count'], '2')
        self.assertEquals(headers['x-account-object-count'], '0')

@@ -76,7 +52,10 @@ class TestAccountFailures(TestCase):
        self.assert_(found1)
        self.assert_(found2)

        # Create container2/object1
        client.put_object(self.url, self.token, container2, 'object1', '1234')

        # Assert account level doesn't see it yet
        headers, containers = client.get_account(self.url, self.token)
        self.assertEquals(headers['x-account-container-count'], '2')
        self.assertEquals(headers['x-account-object-count'], '0')

@@ -95,7 +74,10 @@ class TestAccountFailures(TestCase):
        self.assert_(found1)
        self.assert_(found2)

        get_to_final_state()
        # Get to final state
        self.get_to_final_state()

        # Assert account level now sees the container2/object1
        headers, containers = client.get_account(self.url, self.token)
        self.assertEquals(headers['x-account-container-count'], '2')
        self.assertEquals(headers['x-account-object-count'], '1')

@@ -117,9 +99,16 @@ class TestAccountFailures(TestCase):
        apart, anodes = self.account_ring.get_nodes(self.account)
        kill_nonprimary_server(anodes, self.port2server, self.pids)
        kill_server(anodes[0]['port'], self.port2server, self.pids)
        # Kill account servers excepting two of the primaries

        # Delete container1
        client.delete_container(self.url, self.token, container1)

        # Put container2/object2
        client.put_object(self.url, self.token, container2, 'object2', '12345')

        # Assert account level knows container1 is gone but doesn't know about
        # container2/object2 yet
        headers, containers = client.get_account(self.url, self.token)
        self.assertEquals(headers['x-account-container-count'], '1')
        self.assertEquals(headers['x-account-object-count'], '1')

@@ -136,7 +125,10 @@ class TestAccountFailures(TestCase):
        self.assert_(not found1)
        self.assert_(found2)

        # Run container updaters
        Manager(['container-updater']).once()

        # Assert account level now knows about container2/object2
        headers, containers = client.get_account(self.url, self.token)
        self.assertEquals(headers['x-account-container-count'], '1')
        self.assertEquals(headers['x-account-object-count'], '2')

@@ -153,8 +145,11 @@ class TestAccountFailures(TestCase):
        self.assert_(not found1)
        self.assert_(found2)

        # Restart other primary account server
        start_server(anodes[0]['port'], self.port2server, self.pids)

        # Assert that server doesn't know about container1's deletion or the
        # new container2/object2 yet
        headers, containers = \
            direct_client.direct_get_account(anodes[0], apart, self.account)
        self.assertEquals(headers['x-account-container-count'], '2')

@@ -172,7 +167,10 @@ class TestAccountFailures(TestCase):
        self.assert_(found1)
        self.assert_(found2)

        get_to_final_state()
        # Get to final state
        self.get_to_final_state()

        # Assert that server is now up to date
        headers, containers = \
            direct_client.direct_get_account(anodes[0], apart, self.account)
        self.assertEquals(headers['x-account-container-count'], '1')

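The recurring edit across these files replaces each test's hand-rolled `setUp`/`tearDown` pair (which called `reset_environment()` and `kill_servers()`) with a shared `ReplProbeTest` base class. A minimal sketch of that consolidation pattern, using hypothetical stand-ins for Swift's actual probe helpers (the real ones manage server processes and rings):

```python
import unittest


# Hypothetical stand-ins for test.probe.common's reset_environment() and
# kill_servers(); the real helpers start/stop actual Swift servers.
def reset_environment():
    return {'pids': {}, 'url': 'http://127.0.0.1:8080/v1/AUTH_test'}


def kill_servers(pids):
    pids.clear()


class ReplProbeTest(unittest.TestCase):
    # Base class: every probe test inherits this setup/teardown instead of
    # duplicating the reset_environment()/kill_servers() boilerplate.
    def setUp(self):
        env = reset_environment()
        self.pids = env['pids']
        self.url = env['url']

    def tearDown(self):
        kill_servers(self.pids)


class TestAccountFailures(ReplProbeTest):
    # Subclasses now add only test methods; setUp/tearDown come from the
    # base class (or call super().setUp() when they need extra preparation).
    def test_env_ready(self):
        self.assertTrue(self.url.startswith('http'))
```

Tests needing extra setup (e.g. `TestContainerSync` below) keep a `setUp` that first calls `super(...).setUp()`, which is exactly the shape visible in these hunks.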
@@ -19,22 +19,17 @@ import re
import unittest

from swiftclient import get_auth
from test.probe.common import kill_servers, reset_environment
from test.probe.common import ReplProbeTest
from urlparse import urlparse


class TestAccountGetFakeResponsesMatch(unittest.TestCase):
class TestAccountGetFakeResponsesMatch(ReplProbeTest):

    def setUp(self):
        (self.pids, self.port2server, self.account_ring, self.container_ring,
         self.object_ring, self.policy, self.url, self.token,
         self.account, self.configs) = reset_environment()
        super(TestAccountGetFakeResponsesMatch, self).setUp()
        self.url, self.token = get_auth(
            'http://127.0.0.1:8080/auth/v1.0', 'admin:admin', 'admin')

    def tearDown(self):
        kill_servers(self.port2server, self.pids)

    def _account_path(self, account):
        _, _, path, _, _, _ = urlparse(self.url)

@@ -12,8 +12,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.

import unittest
import uuid
import unittest

from swiftclient import client

@@ -21,19 +21,10 @@ from swift.common.storage_policy import POLICIES
from swift.common.manager import Manager
from swift.common.direct_client import direct_delete_account, \
    direct_get_object, direct_head_container, ClientException
from test.probe.common import kill_servers, reset_environment, \
    get_to_final_state, ENABLED_POLICIES
from test.probe.common import ReplProbeTest, ENABLED_POLICIES


class TestAccountReaper(unittest.TestCase):

    def setUp(self):
        (self.pids, self.port2server, self.account_ring, self.container_ring,
         self.object_ring, self.policy, self.url, self.token,
         self.account, self.configs) = reset_environment()

    def tearDown(self):
        kill_servers(self.port2server, self.pids)
class TestAccountReaper(ReplProbeTest):

    def test_sync(self):
        all_objects = []

@@ -64,7 +55,7 @@ class TestAccountReaper(unittest.TestCase):

        Manager(['account-reaper']).once()

        get_to_final_state()
        self.get_to_final_state()

        for policy, container, obj in all_objects:
            cpart, cnodes = self.container_ring.get_nodes(

@@ -16,7 +16,7 @@

from os import listdir
from os.path import join as path_join
from unittest import main, TestCase
from unittest import main
from uuid import uuid4

from eventlet import GreenPool, Timeout

@@ -27,8 +27,8 @@ from swiftclient import client
from swift.common import direct_client
from swift.common.exceptions import ClientException
from swift.common.utils import hash_path, readconf
from test.probe.common import get_to_final_state, kill_nonprimary_server, \
    kill_server, kill_servers, reset_environment, start_server
from test.probe.common import kill_nonprimary_server, \
    kill_server, ReplProbeTest, start_server

eventlet.monkey_patch(all=False, socket=True)

@@ -40,42 +40,41 @@ def get_db_file_path(obj_dir):
    return path_join(obj_dir, filename)


class TestContainerFailures(TestCase):

    def setUp(self):
        (self.pids, self.port2server, self.account_ring, self.container_ring,
         self.object_ring, self.policy, self.url, self.token,
         self.account, self.configs) = reset_environment()

    def tearDown(self):
        kill_servers(self.port2server, self.pids)
class TestContainerFailures(ReplProbeTest):

    def test_one_node_fails(self):
        # Create container1
        # Kill container1 servers excepting two of the primaries
        # Delete container1
        # Restart other container1 primary server
        # Create container1/object1 (allowed because at least server thinks the
        # container exists)
        # Get to a final state
        # Assert all container1 servers indicate container1 is alive and
        # well with object1
        # Assert account level also indicates container1 is alive and
        # well with object1
        container1 = 'container-%s' % uuid4()
        cpart, cnodes = self.container_ring.get_nodes(self.account, container1)
        client.put_container(self.url, self.token, container1)

        # Kill container1 servers excepting two of the primaries
        kill_nonprimary_server(cnodes, self.port2server, self.pids)
        kill_server(cnodes[0]['port'], self.port2server, self.pids)

        # Delete container1
        client.delete_container(self.url, self.token, container1)

        # Restart other container1 primary server
        start_server(cnodes[0]['port'], self.port2server, self.pids)

        # Create container1/object1 (allowed because at least server thinks the
        # container exists)
        client.put_object(self.url, self.token, container1, 'object1', '123')
        get_to_final_state()

        # Get to a final state
        self.get_to_final_state()

        # Assert all container1 servers indicate container1 is alive and
        # well with object1
        for cnode in cnodes:
            self.assertEquals(
                [o['name'] for o in direct_client.direct_get_container(
                    cnode, cpart, self.account, container1)[1]],
                ['object1'])

        # Assert account level also indicates container1 is alive and
        # well with object1
        headers, containers = client.get_account(self.url, self.token)
        self.assertEquals(headers['x-account-container-count'], '1')
        self.assertEquals(headers['x-account-object-count'], '1')

@@ -83,26 +82,30 @@ class TestContainerFailures(TestCase):

    def test_two_nodes_fail(self):
        # Create container1
        # Kill container1 servers excepting one of the primaries
        # Delete container1 directly to the one primary still up
        # Restart other container1 servers
        # Get to a final state
        # Assert all container1 servers indicate container1 is gone (happens
        # because the one node that knew about the delete replicated to the
        # others.)
        # Assert account level also indicates container1 is gone
        container1 = 'container-%s' % uuid4()
        cpart, cnodes = self.container_ring.get_nodes(self.account, container1)
        client.put_container(self.url, self.token, container1)

        # Kill container1 servers excepting one of the primaries
        cnp_port = kill_nonprimary_server(cnodes, self.port2server, self.pids)
        kill_server(cnodes[0]['port'], self.port2server, self.pids)
        kill_server(cnodes[1]['port'], self.port2server, self.pids)

        # Delete container1 directly to the one primary still up
        direct_client.direct_delete_container(cnodes[2], cpart, self.account,
                                              container1)

        # Restart other container1 servers
        start_server(cnodes[0]['port'], self.port2server, self.pids)
        start_server(cnodes[1]['port'], self.port2server, self.pids)
        start_server(cnp_port, self.port2server, self.pids)
        get_to_final_state()

        # Get to a final state
        self.get_to_final_state()

        # Assert all container1 servers indicate container1 is gone (happens
        # because the one node that knew about the delete replicated to the
        # others.)
        for cnode in cnodes:
            exc = None
            try:

@@ -111,6 +114,8 @@ class TestContainerFailures(TestCase):
            except ClientException as err:
                exc = err
            self.assertEquals(exc.http_status, 404)

        # Assert account level also indicates container1 is gone
        headers, containers = client.get_account(self.url, self.token)
        self.assertEquals(headers['x-account-container-count'], '0')
        self.assertEquals(headers['x-account-object-count'], '0')

@@ -14,9 +14,9 @@

from hashlib import md5
import time
import unittest
import uuid
import random
import unittest

from nose import SkipTest

@@ -26,22 +26,19 @@ from swift.common import utils, direct_client
from swift.common.storage_policy import POLICIES
from swift.common.http import HTTP_NOT_FOUND
from test.probe.brain import BrainSplitter
from test.probe.common import reset_environment, get_to_final_state, \
    ENABLED_POLICIES
from test.probe.common import ReplProbeTest, ENABLED_POLICIES

from swiftclient import client, ClientException

TIMEOUT = 60


class TestContainerMergePolicyIndex(unittest.TestCase):
class TestContainerMergePolicyIndex(ReplProbeTest):

    def setUp(self):
        if len(ENABLED_POLICIES) < 2:
            raise SkipTest()
        (self.pids, self.port2server, self.account_ring, self.container_ring,
         self.object_ring, self.policy, self.url, self.token,
         self.account, self.configs) = reset_environment()
            raise SkipTest('Need more than one policy')
        super(TestContainerMergePolicyIndex, self).setUp()
        self.container_name = 'container-%s' % uuid.uuid4()
        self.object_name = 'object-%s' % uuid.uuid4()
        self.brain = BrainSplitter(self.url, self.token, self.container_name,

@@ -93,7 +90,7 @@ class TestContainerMergePolicyIndex(unittest.TestCase):
            self.fail('Unable to find /%s/%s/%s in %r' % (
                self.account, self.container_name, self.object_name,
                found_policy_indexes))
        get_to_final_state()
        self.get_to_final_state()
        Manager(['container-reconciler']).once()
        # validate containers
        head_responses = []

@@ -198,7 +195,7 @@ class TestContainerMergePolicyIndex(unittest.TestCase):
            self.fail('Unable to find tombstone /%s/%s/%s in %r' % (
                self.account, self.container_name, self.object_name,
                found_policy_indexes))
        get_to_final_state()
        self.get_to_final_state()
        Manager(['container-reconciler']).once()
        # validate containers
        head_responses = []

@@ -315,7 +312,7 @@ class TestContainerMergePolicyIndex(unittest.TestCase):
            break  # one should do it...

        self.brain.start_handoff_half()
        get_to_final_state()
        self.get_to_final_state()
        Manager(['container-reconciler']).once()
        # clear proxy cache
        client.post_container(self.url, self.token, self.container_name, {})

@@ -426,7 +423,7 @@ class TestContainerMergePolicyIndex(unittest.TestCase):
            acceptable_statuses=(4,),
            headers={'X-Backend-Storage-Policy-Index': int(new_policy)})

        get_to_final_state()
        self.get_to_final_state()

        # verify entry in the queue
        client = InternalClient(conf_file, 'probe-test', 3)

@@ -450,7 +447,7 @@ class TestContainerMergePolicyIndex(unittest.TestCase):
            headers={'X-Backend-Storage-Policy-Index': int(old_policy)})

        # make sure the queue is settled
        get_to_final_state()
        self.get_to_final_state()
        for container in client.iter_containers('.misplaced_objects'):
            for obj in client.iter_objects('.misplaced_objects',
                                           container['name']):

@@ -12,16 +12,16 @@
# See the License for the specific language governing permissions and
# limitations under the License.

import unittest
import uuid
from urlparse import urlparse
import random
from nose import SkipTest
import unittest

from swiftclient import client

from swift.common.manager import Manager
from test.probe.common import kill_servers, reset_environment, ENABLED_POLICIES
from test.probe.common import ReplProbeTest, ENABLED_POLICIES


def get_current_realm_cluster(url):

@@ -43,17 +43,12 @@ def get_current_realm_cluster(url):
    raise SkipTest('Unable find current realm cluster')


class TestContainerSync(unittest.TestCase):
class TestContainerSync(ReplProbeTest):

    def setUp(self):
        (self.pids, self.port2server, self.account_ring, self.container_ring,
         self.object_ring, self.policy, self.url, self.token,
         self.account, self.configs) = reset_environment()
        super(TestContainerSync, self).setUp()
        self.realm, self.cluster = get_current_realm_cluster(self.url)

    def tearDown(self):
        kill_servers(self.port2server, self.pids)

    def test_sync(self):
        base_headers = {'X-Container-Sync-Key': 'secret'}

@@ -95,5 +90,4 @@ class TestContainerSync(unittest.TestCase):


if __name__ == "__main__":
    get_current_realm_cluster('http://localhost:8080')
    unittest.main()

@@ -18,7 +18,7 @@ import os
import shutil
import time

from unittest import main, TestCase
from unittest import main
from uuid import uuid4

from swiftclient import client

@@ -26,21 +26,12 @@ from swiftclient import client
from swift.common import direct_client
from swift.obj.diskfile import get_data_dir
from swift.common.exceptions import ClientException
from test.probe.common import kill_server, kill_servers, reset_environment,\
    start_server
from test.probe.common import kill_server, ReplProbeTest, start_server
from swift.common.utils import readconf
from swift.common.manager import Manager


class TestEmptyDevice(TestCase):

    def setUp(self):
        (self.pids, self.port2server, self.account_ring, self.container_ring,
         self.object_ring, self.policy, self.url, self.token,
         self.account, self.configs) = reset_environment()

    def tearDown(self):
        kill_servers(self.port2server, self.pids)
class TestEmptyDevice(ReplProbeTest):

    def _get_objects_dir(self, onode):
        device = onode['device']

@@ -52,21 +43,6 @@ class TestEmptyDevice(TestCase):

    def test_main(self):
        # Create container
        # Kill one container/obj primary server
        # Delete the default data directory for objects on the primary server
        # Create container/obj (goes to two primary servers and one handoff)
        # Kill other two container/obj primary servers
        # Indirectly through proxy assert we can get container/obj
        # Restart those other two container/obj primary servers
        # Directly to handoff server assert we can get container/obj
        # Assert container listing (via proxy and directly) has container/obj
        # Bring the first container/obj primary server back up
        # Assert that it doesn't have container/obj yet
        # Run object replication for first container/obj primary server
        # Run object replication for handoff node
        # Assert the first container/obj primary server now has container/obj
        # Assert the handoff server no longer has container/obj

        container = 'container-%s' % uuid4()
        client.put_container(self.url, self.token, container)

@@ -76,28 +52,41 @@ class TestEmptyDevice(TestCase):
        opart, onodes = self.object_ring.get_nodes(
            self.account, container, obj)
        onode = onodes[0]

        # Kill one container/obj primary server
        kill_server(onode['port'], self.port2server, self.pids)

        # Delete the default data directory for objects on the primary server
        obj_dir = '%s/%s' % (self._get_objects_dir(onode),
                             get_data_dir(self.policy.idx))
        shutil.rmtree(obj_dir, True)
        self.assertFalse(os.path.exists(obj_dir))

        # Create container/obj (goes to two primary servers and one handoff)
        client.put_object(self.url, self.token, container, obj, 'VERIFY')
        odata = client.get_object(self.url, self.token, container, obj)[-1]
        if odata != 'VERIFY':
            raise Exception('Object GET did not return VERIFY, instead it '
                            'returned: %s' % repr(odata))
        # Kill all primaries to ensure GET handoff works

        # Kill other two container/obj primary servers
        # to ensure GET handoff works
        for node in onodes[1:]:
            kill_server(node['port'], self.port2server, self.pids)

        # Indirectly through proxy assert we can get container/obj
        odata = client.get_object(self.url, self.token, container, obj)[-1]
        if odata != 'VERIFY':
            raise Exception('Object GET did not return VERIFY, instead it '
                            'returned: %s' % repr(odata))
        # Restart those other two container/obj primary servers
        for node in onodes[1:]:
            start_server(node['port'], self.port2server, self.pids)
        self.assertFalse(os.path.exists(obj_dir))
        # We've indirectly verified the handoff node has the object, but
        # let's directly verify it.

        # Directly to handoff server assert we can get container/obj
        another_onode = self.object_ring.get_more_nodes(opart).next()
        odata = direct_client.direct_get_object(
            another_onode, opart, self.account, container, obj,

@@ -105,6 +94,8 @@ class TestEmptyDevice(TestCase):
        if odata != 'VERIFY':
            raise Exception('Direct object GET did not return VERIFY, instead '
                            'it returned: %s' % repr(odata))

        # Assert container listing (via proxy and directly) has container/obj
        objs = [o['name'] for o in
                client.get_container(self.url, self.token, container)[1]]
        if obj not in objs:

@@ -127,7 +118,11 @@ class TestEmptyDevice(TestCase):
                       cnodes if cnode not in found_objs_on_cnode]
            raise Exception('Container servers %r did not know about object' %
                            missing)

        # Bring the first container/obj primary server back up
        start_server(onode['port'], self.port2server, self.pids)

        # Assert that it doesn't have container/obj yet
        self.assertFalse(os.path.exists(obj_dir))
        exc = None
        try:

@@ -148,18 +143,23 @@ class TestEmptyDevice(TestCase):
        except KeyError:
            another_port_num = another_onode['port']

        # Run object replication for first container/obj primary server
        num = (port_num - 6000) / 10
        Manager(['object-replicator']).once(number=num)

        # Run object replication for handoff node
        another_num = (another_port_num - 6000) / 10
        Manager(['object-replicator']).once(number=another_num)

        # Assert the first container/obj primary server now has container/obj
        odata = direct_client.direct_get_object(
            onode, opart, self.account, container, obj, headers={
                'X-Backend-Storage-Policy-Index': self.policy.idx})[-1]
        if odata != 'VERIFY':
            raise Exception('Direct object GET did not return VERIFY, instead '
                            'it returned: %s' % repr(odata))

        # Assert the handoff server no longer has container/obj
        exc = None
        try:
            direct_client.direct_get_object(

@@ -14,47 +14,45 @@
# See the License for the specific language governing permissions and
# limitations under the License.

from unittest import main, TestCase
from unittest import main
from uuid import uuid4

from swiftclient import client

from swift.common import direct_client
from swift.common.manager import Manager
from test.probe.common import kill_nonprimary_server, kill_server, \
    kill_servers, reset_environment, start_server
from test.probe.common import kill_nonprimary_server, \
    kill_server, ReplProbeTest, start_server


class TestObjectAsyncUpdate(TestCase):

    def setUp(self):
        (self.pids, self.port2server, self.account_ring, self.container_ring,
         self.object_ring, self.policy, self.url, self.token,
         self.account, self.configs) = reset_environment()

    def tearDown(self):
        kill_servers(self.port2server, self.pids)
class TestObjectAsyncUpdate(ReplProbeTest):

    def test_main(self):
        # Create container
        # Kill container servers excepting two of the primaries
        # Create container/obj
        # Restart other primary server
        # Assert it does not know about container/obj
        # Run the object-updaters
        # Assert the other primary server now knows about container/obj
        container = 'container-%s' % uuid4()
        client.put_container(self.url, self.token, container)

        # Kill container servers excepting two of the primaries
        cpart, cnodes = self.container_ring.get_nodes(self.account, container)
        cnode = cnodes[0]
        kill_nonprimary_server(cnodes, self.port2server, self.pids)
        kill_server(cnode['port'], self.port2server, self.pids)

        # Create container/obj
        obj = 'object-%s' % uuid4()
        client.put_object(self.url, self.token, container, obj, '')

        # Restart other primary server
        start_server(cnode['port'], self.port2server, self.pids)

        # Assert it does not know about container/obj
        self.assert_(not direct_client.direct_get_container(
            cnode, cpart, self.account, container)[1])

        # Run the object-updaters
        Manager(['object-updater']).once()

        # Assert the other primary server now knows about container/obj
        objs = [o['name'] for o in direct_client.direct_get_container(
            cnode, cpart, self.account, container)[1]]
        self.assert_(obj in objs)

@@ -13,8 +13,8 @@
# limitations under the License.

import random
import unittest
import uuid
import unittest

from nose import SkipTest

@@ -22,14 +22,13 @@ from swift.common.internal_client import InternalClient
from swift.common.manager import Manager
from swift.common.utils import Timestamp

from test.probe.common import reset_environment, get_to_final_state, \
    ENABLED_POLICIES
from test.probe.common import ReplProbeTest, ENABLED_POLICIES
from test.probe.test_container_merge_policy_index import BrainSplitter

from swiftclient import client


class TestObjectExpirer(unittest.TestCase):
class TestObjectExpirer(ReplProbeTest):

    def setUp(self):
        if len(ENABLED_POLICIES) < 2:

@@ -47,9 +46,7 @@ class TestObjectExpirer(unittest.TestCase):
        conf_file = conf_files[0]
        self.client = InternalClient(conf_file, 'probe-test', 3)

        (self.pids, self.port2server, self.account_ring, self.container_ring,
         self.object_ring, self.policy, self.url, self.token,
         self.account, self.configs) = reset_environment()
        super(TestObjectExpirer, self).setUp()
        self.container_name = 'container-%s' % uuid.uuid4()
        self.object_name = 'object-%s' % uuid.uuid4()
        self.brain = BrainSplitter(self.url, self.token, self.container_name,

@@ -82,7 +79,7 @@ class TestObjectExpirer(unittest.TestCase):
        self.expirer.once()

        self.brain.start_handoff_half()
        get_to_final_state()
        self.get_to_final_state()

        # validate object is expired
        found_in_policy = None

@@ -17,7 +17,7 @@
import time
from os import listdir, unlink
from os.path import join as path_join
from unittest import main, TestCase
from unittest import main
from uuid import uuid4

from swiftclient import client

@@ -26,7 +26,7 @@ from swift.common import direct_client
from swift.common.exceptions import ClientException
from swift.common.utils import hash_path, readconf
from swift.obj.diskfile import write_metadata, read_metadata, get_data_dir
from test.probe.common import kill_servers, reset_environment
from test.probe.common import ReplProbeTest


RETRIES = 5

@@ -49,15 +49,7 @@ def get_data_file_path(obj_dir):
    return path_join(obj_dir, filename)


class TestObjectFailures(TestCase):

    def setUp(self):
        (self.pids, self.port2server, self.account_ring, self.container_ring,
         self.object_ring, self.policy, self.url, self.token,
         self.account, self.configs) = reset_environment()

    def tearDown(self):
        kill_servers(self.port2server, self.pids)
class TestObjectFailures(ReplProbeTest):

    def _setup_data_file(self, container, obj, data):
        client.put_container(self.url, self.token, container)

@@ -14,7 +14,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.

from unittest import main, TestCase
from unittest import main
from uuid import uuid4

from swiftclient import client

@@ -22,49 +22,17 @@ from swiftclient import client
from swift.common import direct_client
from swift.common.exceptions import ClientException
from swift.common.manager import Manager
from test.probe.common import kill_server, kill_servers, reset_environment, \
    start_server
from test.probe.common import kill_server, ReplProbeTest, start_server


class TestObjectHandoff(TestCase):

    def setUp(self):
        (self.pids, self.port2server, self.account_ring, self.container_ring,
         self.object_ring, self.policy, self.url, self.token,
         self.account, self.configs) = reset_environment()

    def tearDown(self):
        kill_servers(self.port2server, self.pids)
class TestObjectHandoff(ReplProbeTest):

    def test_main(self):
        # Create container
        # Kill one container/obj primary server
        # Create container/obj (goes to two primary servers and one handoff)
        # Kill other two container/obj primary servers
        # Indirectly through proxy assert we can get container/obj
        # Restart those other two container/obj primary servers
        # Directly to handoff server assert we can get container/obj
        # Assert container listing (via proxy and directly) has container/obj
        # Bring the first container/obj primary server back up
        # Assert that it doesn't have container/obj yet
        # Run object replication, ensuring we run the handoff node last so it
        # should remove its extra handoff partition
        # Assert the first container/obj primary server now has container/obj
        # Assert the handoff server no longer has container/obj
        # Kill the first container/obj primary server again (we have two
        # primaries and the handoff up now)
        # Delete container/obj
        # Assert we can't head container/obj
        # Assert container/obj is not in the container listing, both indirectly
        # and directly
        # Restart the first container/obj primary server again
        # Assert it still has container/obj
        # Run object replication, ensuring we run the handoff node last so it
        # should remove its extra handoff partition
        # Assert primary node no longer has container/obj
        container = 'container-%s' % uuid4()
        client.put_container(self.url, self.token, container)

        # Kill one container/obj primary server
        cpart, cnodes = self.container_ring.get_nodes(self.account, container)
        cnode = cnodes[0]
        obj = 'object-%s' % uuid4()

@@ -72,22 +40,31 @@ class TestObjectHandoff(TestCase):
            self.account, container, obj)
        onode = onodes[0]
        kill_server(onode['port'], self.port2server, self.pids)

        # Create container/obj (goes to two primary servers and one handoff)
        client.put_object(self.url, self.token, container, obj, 'VERIFY')
        odata = client.get_object(self.url, self.token, container, obj)[-1]
        if odata != 'VERIFY':
            raise Exception('Object GET did not return VERIFY, instead it '
                            'returned: %s' % repr(odata))
        # Kill all primaries to ensure GET handoff works

        # Kill other two container/obj primary servers
        # to ensure GET handoff works
        for node in onodes[1:]:
            kill_server(node['port'], self.port2server, self.pids)

        # Indirectly through proxy assert we can get container/obj
        odata = client.get_object(self.url, self.token, container, obj)[-1]
        if odata != 'VERIFY':
            raise Exception('Object GET did not return VERIFY, instead it '
                            'returned: %s' % repr(odata))

        # Restart those other two container/obj primary servers
        for node in onodes[1:]:
            start_server(node['port'], self.port2server, self.pids)
        # We've indirectly verified the handoff node has the object, but let's
        # directly verify it.

        # We've indirectly verified the handoff node has the container/object,
        # but let's directly verify it.
        another_onode = self.object_ring.get_more_nodes(opart).next()
        odata = direct_client.direct_get_object(
            another_onode, opart, self.account, container, obj, headers={

@@ -95,6 +72,8 @@ class TestObjectHandoff(TestCase):
        if odata != 'VERIFY':
            raise Exception('Direct object GET did not return VERIFY, instead '
|
||||
'it returned: %s' % repr(odata))
|
||||
|
||||
# Assert container listing (via proxy and directly) has container/obj
|
||||
objs = [o['name'] for o in
|
||||
client.get_container(self.url, self.token, container)[1]]
|
||||
if obj not in objs:
|
||||
|
@ -107,7 +86,11 @@ class TestObjectHandoff(TestCase):
|
|||
raise Exception(
|
||||
'Container server %s:%s did not know about object' %
|
||||
(cnode['ip'], cnode['port']))
|
||||
|
||||
# Bring the first container/obj primary server back up
|
||||
start_server(onode['port'], self.port2server, self.pids)
|
||||
|
||||
# Assert that it doesn't have container/obj yet
|
||||
exc = None
|
||||
try:
|
||||
direct_client.direct_get_object(
|
||||
|
@ -116,7 +99,9 @@ class TestObjectHandoff(TestCase):
|
|||
except ClientException as err:
|
||||
exc = err
|
||||
self.assertEquals(exc.http_status, 404)
|
||||
# Run the extra server last so it'll remove its extra partition
|
||||
|
||||
# Run object replication, ensuring we run the handoff node last so it
|
||||
# will remove its extra handoff partition
|
||||
for node in onodes:
|
||||
try:
|
||||
port_num = node['replication_port']
|
||||
|
@ -130,12 +115,16 @@ class TestObjectHandoff(TestCase):
|
|||
another_port_num = another_onode['port']
|
||||
another_num = (another_port_num - 6000) / 10
|
||||
Manager(['object-replicator']).once(number=another_num)
|
||||
|
||||
# Assert the first container/obj primary server now has container/obj
|
||||
odata = direct_client.direct_get_object(
|
||||
onode, opart, self.account, container, obj, headers={
|
||||
'X-Backend-Storage-Policy-Index': self.policy.idx})[-1]
|
||||
if odata != 'VERIFY':
|
||||
raise Exception('Direct object GET did not return VERIFY, instead '
|
||||
'it returned: %s' % repr(odata))
|
||||
|
||||
# Assert the handoff server no longer has container/obj
|
||||
exc = None
|
||||
try:
|
||||
direct_client.direct_get_object(
|
||||
|
@ -145,7 +134,11 @@ class TestObjectHandoff(TestCase):
|
|||
exc = err
|
||||
self.assertEquals(exc.http_status, 404)
|
||||
|
||||
# Kill the first container/obj primary server again (we have two
|
||||
# primaries and the handoff up now)
|
||||
kill_server(onode['port'], self.port2server, self.pids)
|
||||
|
||||
# Delete container/obj
|
||||
try:
|
||||
client.delete_object(self.url, self.token, container, obj)
|
||||
except client.ClientException as err:
|
||||
|
@ -155,12 +148,17 @@ class TestObjectHandoff(TestCase):
|
|||
# remove this with fix for
|
||||
# https://bugs.launchpad.net/swift/+bug/1318375
|
||||
self.assertEqual(503, err.http_status)
|
||||
|
||||
# Assert we can't head container/obj
|
||||
exc = None
|
||||
try:
|
||||
client.head_object(self.url, self.token, container, obj)
|
||||
except client.ClientException as err:
|
||||
exc = err
|
||||
self.assertEquals(exc.http_status, 404)
|
||||
|
||||
# Assert container/obj is not in the container listing, both indirectly
|
||||
# and directly
|
||||
objs = [o['name'] for o in
|
||||
client.get_container(self.url, self.token, container)[1]]
|
||||
if obj in objs:
|
||||
|
@ -173,11 +171,17 @@ class TestObjectHandoff(TestCase):
|
|||
raise Exception(
|
||||
'Container server %s:%s still knew about object' %
|
||||
(cnode['ip'], cnode['port']))
|
||||
|
||||
# Restart the first container/obj primary server again
|
||||
start_server(onode['port'], self.port2server, self.pids)
|
||||
|
||||
# Assert it still has container/obj
|
||||
direct_client.direct_get_object(
|
||||
onode, opart, self.account, container, obj, headers={
|
||||
'X-Backend-Storage-Policy-Index': self.policy.idx})
|
||||
# Run the extra server last so it'll remove its extra partition
|
||||
|
||||
# Run object replication, ensuring we run the handoff node last so it
|
||||
# will remove its extra handoff partition
|
||||
for node in onodes:
|
||||
try:
|
||||
port_num = node['replication_port']
|
||||
|
@ -187,6 +191,8 @@ class TestObjectHandoff(TestCase):
|
|||
Manager(['object-replicator']).once(number=node_id)
|
||||
another_node_id = (another_port_num - 6000) / 10
|
||||
Manager(['object-replicator']).once(number=another_node_id)
|
||||
|
||||
# Assert primary node no longer has container/obj
|
||||
exc = None
|
||||
try:
|
||||
direct_client.direct_get_object(
|
||||
|
|
|
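The replication runs above derive a node number from a server port with `(port - 6000) / 10`. A minimal standalone sketch of that mapping, assuming the SAIO-style convention where each storage node's servers listen on ports 6010, 6020, and so on (the helper name is hypothetical):

```python
def node_number(port):
    """Map an SAIO-style server port (e.g. 6010, 6040) to its node number.

    Hypothetical helper mirroring the ``(port - 6000) / 10`` arithmetic the
    test uses to pick which node ``swift-object-replicator`` runs against.
    Uses floor division so the result is an int on Python 3 as well.
    """
    return (port - 6000) // 10


# e.g. the object server listening on 6040 belongs to node 4
print(node_number(6040))  # 4
```

Running the handoff node's replicator last, as the comments note, lets it push the extra partition to the restored primary and then delete its own copy.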
@@ -17,17 +17,16 @@ from io import StringIO
from tempfile import mkdtemp
from textwrap import dedent
import functools
import unittest

import os
import shutil
import unittest
import uuid

from swift.common import internal_client, utils

from test.probe.brain import BrainSplitter
from test.probe.common import kill_servers, reset_environment, \
    get_to_final_state
from test.probe.common import ReplProbeTest


def _sync_methods(object_server_config_paths):

@@ -65,14 +64,12 @@ def expected_failure_with_ssync(m):
    return wrapper


class Test(unittest.TestCase):
class Test(ReplProbeTest):
    def setUp(self):
        """
        Reset all environment and start all servers.
        """
        (self.pids, self.port2server, self.account_ring, self.container_ring,
         self.object_ring, self.policy, self.url, self.token,
         self.account, self.configs) = reset_environment()
        super(Test, self).setUp()
        self.container_name = 'container-%s' % uuid.uuid4()
        self.object_name = 'object-%s' % uuid.uuid4()
        self.brain = BrainSplitter(self.url, self.token, self.container_name,

@@ -101,10 +98,7 @@ class Test(unittest.TestCase):
        self.int_client = internal_client.InternalClient(conf_path, 'test', 1)

    def tearDown(self):
        """
        Stop all servers.
        """
        kill_servers(self.port2server, self.pids)
        super(Test, self).tearDown()
        shutil.rmtree(self.tempdir)

    def _put_object(self, headers=None):

@@ -117,11 +111,65 @@ class Test(unittest.TestCase):
        self.int_client.set_object_metadata(self.account, self.container_name,
                                            self.object_name, headers)

    def _delete_object(self):
        self.int_client.delete_object(self.account, self.container_name,
                                      self.object_name)

    def _get_object(self, headers=None, expect_statuses=(2,)):
        return self.int_client.get_object(self.account,
                                          self.container_name,
                                          self.object_name,
                                          headers,
                                          acceptable_statuses=expect_statuses)

    def _get_object_metadata(self):
        return self.int_client.get_object_metadata(self.account,
                                                   self.container_name,
                                                   self.object_name)

    def test_object_delete_is_replicated(self):
        self.brain.put_container(policy_index=0)
        # put object
        self._put_object()

        # put newer object with sysmeta to first server subset
        self.brain.stop_primary_half()
        self._put_object()
        self.brain.start_primary_half()

        # delete object on second server subset
        self.brain.stop_handoff_half()
        self._delete_object()
        self.brain.start_handoff_half()

        # run replicator
        self.get_to_final_state()

        # check object deletion has been replicated on first server set
        self.brain.stop_primary_half()
        self._get_object(expect_statuses=(4,))
        self.brain.start_primary_half()

        # check object deletion persists on second server set
        self.brain.stop_handoff_half()
        self._get_object(expect_statuses=(4,))

        # put newer object to second server set
        self._put_object()
        self.brain.start_handoff_half()

        # run replicator
        self.get_to_final_state()

        # check new object has been replicated on first server set
        self.brain.stop_primary_half()
        self._get_object()
        self.brain.start_primary_half()

        # check new object persists on second server set
        self.brain.stop_handoff_half()
        self._get_object()

    @expected_failure_with_ssync
    def test_sysmeta_after_replication_with_subsequent_post(self):
        sysmeta = {'x-object-sysmeta-foo': 'sysmeta-foo'}

@@ -150,7 +198,7 @@ class Test(unittest.TestCase):
        self.brain.start_handoff_half()

        # run replicator
        get_to_final_state()
        self.get_to_final_state()

        # check user metadata has been replicated to first server subset
        # and sysmeta is unchanged

@@ -196,7 +244,7 @@ class Test(unittest.TestCase):
        self.brain.start_primary_half()

        # run replicator
        get_to_final_state()
        self.get_to_final_state()

        # check stale user metadata is not replicated to first server subset
        # and sysmeta is unchanged
@@ -14,7 +14,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.

from unittest import main, TestCase
from unittest import main
from uuid import uuid4
import os
import time

@@ -24,9 +24,8 @@ from swiftclient import client
from swift.common.storage_policy import POLICIES
from swift.obj.diskfile import get_data_dir

from test.probe.common import kill_servers, reset_environment
from test.probe.common import ReplProbeTest
from swift.common.utils import readconf
from swift.common.manager import Manager


def collect_info(path_list):

@@ -68,7 +67,7 @@ def find_max_occupancy_node(dir_list):
    return number


class TestReplicatorFunctions(TestCase):
class TestReplicatorFunctions(ReplProbeTest):
    """
    Class for testing replicators and replication servers.

@@ -77,19 +76,6 @@ class TestReplicatorFunctions(TestCase):
    ring's files using set_info command or new ring's files with
    different port values.
    """
    def setUp(self):
        """
        Reset all environment and start all servers.
        """
        (self.pids, self.port2server, self.account_ring, self.container_ring,
         self.object_ring, self.policy, self.url, self.token,
         self.account, self.configs) = reset_environment()

    def tearDown(self):
        """
        Stop all servers.
        """
        kill_servers(self.port2server, self.pids)

    def test_main(self):
        # Create one account, container and object file.

@@ -133,8 +119,7 @@ class TestReplicatorFunctions(TestCase):
            test_node_dir_list.append(d)
        # Run all replicators
        try:
            Manager(['object-replicator', 'container-replicator',
                     'account-replicator']).start()
            self.replicators.start()

            # Delete some files
            for directory in os.listdir(test_node):

@@ -208,8 +193,7 @@ class TestReplicatorFunctions(TestCase):
                    raise
                time.sleep(1)
        finally:
            Manager(['object-replicator', 'container-replicator',
                     'account-replicator']).stop()
            self.replicators.stop()


if __name__ == '__main__':
@@ -50,6 +50,9 @@ class FakeLogger(object):
    def info(self, msg, *args):
        self.msg = msg

    def error(self, msg, *args):
        self.msg = msg

    def timing_since(*args, **kwargs):
        pass

@@ -95,15 +98,15 @@ class FakeRing(object):
    def __init__(self):
        self.nodes = [{'id': '1',
                       'ip': '10.10.10.1',
                       'port': None,
                       'port': 6002,
                       'device': None},
                      {'id': '2',
                       'ip': '10.10.10.1',
                       'port': None,
                       'port': 6002,
                       'device': None},
                      {'id': '3',
                       'ip': '10.10.10.1',
                       'port': None,
                       'port': 6002,
                       'device': None},
                      ]

@@ -313,6 +316,13 @@ class TestReaper(unittest.TestCase):
        self.assertEqual(r.stats_objects_remaining, 1)
        self.assertEqual(r.stats_objects_possibly_remaining, 1)

    def test_reap_object_non_exist_policy_index(self):
        r = self.init_reaper({}, fakelogger=True)
        r.reap_object('a', 'c', 'partition', cont_nodes, 'o', 2)
        self.assertEqual(r.stats_objects_deleted, 0)
        self.assertEqual(r.stats_objects_remaining, 1)
        self.assertEqual(r.stats_objects_possibly_remaining, 0)

    @patch('swift.account.reaper.Ring',
           lambda *args, **kwargs: unit.FakeRing())
    def test_reap_container(self):

@@ -404,6 +414,31 @@ class TestReaper(unittest.TestCase):
        self.assertEqual(r.logger.inc['return_codes.4'], 3)
        self.assertEqual(r.stats_containers_remaining, 1)

    @patch('swift.account.reaper.Ring',
           lambda *args, **kwargs: unit.FakeRing())
    def test_reap_container_non_exist_policy_index(self):
        r = self.init_reaper({}, fakelogger=True)
        with patch.multiple('swift.account.reaper',
                            direct_get_container=DEFAULT,
                            direct_delete_object=DEFAULT,
                            direct_delete_container=DEFAULT) as mocks:
            headers = {'X-Backend-Storage-Policy-Index': 2}
            obj_listing = [{'name': 'o'}]

            def fake_get_container(*args, **kwargs):
                try:
                    obj = obj_listing.pop(0)
                except IndexError:
                    obj_list = []
                else:
                    obj_list = [obj]
                return headers, obj_list

            mocks['direct_get_container'].side_effect = fake_get_container
            r.reap_container('a', 'partition', acc_nodes, 'c')
        self.assertEqual(r.logger.msg,
                         'ERROR: invalid storage policy index: 2')

    def fake_reap_container(self, *args, **kwargs):
        self.called_amount += 1
        self.r.stats_containers_deleted = 1
@@ -1722,7 +1722,7 @@ class TestAccountController(unittest.TestCase):
        self.assertEqual(
            self.controller.logger.log_dict['info'],
            [(('1.2.3.4 - - [01/Jan/1970:02:46:41 +0000] "HEAD /sda1/p/a" 404 '
               '- "-" "-" "-" 2.0000 "-" 1234',), {})])
               '- "-" "-" "-" 2.0000 "-" 1234 -',), {})])

    def test_policy_stats_with_legacy(self):
        ts = itertools.count()
@@ -18,6 +18,7 @@ import json
import mock
import os
import random
import re
import string
from StringIO import StringIO
import tempfile

@@ -211,6 +212,55 @@ class TestRecon(unittest.TestCase):
        for ring in ('account', 'container', 'object', 'object-1'):
            os.remove(os.path.join(self.swift_dir, "%s.ring.gz" % ring))

    def test_quarantine_check(self):
        hosts = [('127.0.0.1', 6010), ('127.0.0.1', 6020),
                 ('127.0.0.1', 6030), ('127.0.0.1', 6040)]
        # sample json response from http://<host>:<port>/recon/quarantined
        responses = {6010: {'accounts': 0, 'containers': 0, 'objects': 1,
                            'policies': {'0': {'objects': 0},
                                         '1': {'objects': 1}}},
                     6020: {'accounts': 1, 'containers': 1, 'objects': 3,
                            'policies': {'0': {'objects': 1},
                                         '1': {'objects': 2}}},
                     6030: {'accounts': 2, 'containers': 2, 'objects': 5,
                            'policies': {'0': {'objects': 2},
                                         '1': {'objects': 3}}},
                     6040: {'accounts': 3, 'containers': 3, 'objects': 7,
                            'policies': {'0': {'objects': 3},
                                         '1': {'objects': 4}}}}
        # <low> <high> <avg> <total> <Failed> <no_result> <reported>
        expected = {'objects_0': (0, 3, 1.5, 6, 0.0, 0, 4),
                    'objects_1': (1, 4, 2.5, 10, 0.0, 0, 4),
                    'objects': (1, 7, 4.0, 16, 0.0, 0, 4),
                    'accounts': (0, 3, 1.5, 6, 0.0, 0, 4),
                    'containers': (0, 3, 1.5, 6, 0.0, 0, 4)}

        def mock_scout_quarantine(app, host):
            url = 'http://%s:%s/recon/quarantined' % host
            response = responses[host[1]]
            status = 200
            return url, response, status

        stdout = StringIO()
        patches = [
            mock.patch('swift.cli.recon.Scout.scout', mock_scout_quarantine),
            mock.patch('sys.stdout', new=stdout),
        ]
        with nested(*patches):
            self.recon_instance.quarantine_check(hosts)

        output = stdout.getvalue()
        r = re.compile("\[quarantined_(.*)\](.*)")
        for line in output.splitlines():
            m = r.match(line)
            if m:
                ex = expected.pop(m.group(1))
                self.assertEquals(m.group(2),
                                  " low: %s, high: %s, avg: %s, total: %s,"
                                  " Failed: %s%%, no_result: %s, reported: %s"
                                  % ex)
        self.assertFalse(expected)


class TestReconCommands(unittest.TestCase):
    def setUp(self):

File diff suppressed because it is too large
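The expected tuples in `test_quarantine_check` summarize the per-host counts as `(low, high, avg, total, failed%, no_result, reported)`. A small sketch of that aggregation, assuming all hosts respond (so `failed` and `no_result` stay zero; the function name is hypothetical):

```python
def summarize(values):
    """Aggregate per-host quarantine counts into the recon-style tuple
    (low, high, avg, total, failed%, no_result, reported), assuming every
    polled host reported successfully."""
    reported = len(values)
    total = sum(values)
    return (min(values), max(values), float(total) / reported,
            total, 0.0, 0, reported)


# per-host 'objects' counts from the sample responses: 1, 3, 5, 7
print(summarize([1, 3, 5, 7]))  # (1, 7, 4.0, 16, 0.0, 0, 4)
```

This reproduces the `'objects': (1, 7, 4.0, 16, 0.0, 0, 4)` entry in the test's `expected` dict.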
@@ -345,6 +345,7 @@ class TestFormPost(unittest.TestCase):
            'SERVER_PROTOCOL': 'HTTP/1.0',
            'swift.account/AUTH_test': self._fake_cache_env(
                'AUTH_test', [key]),
            'swift.container/AUTH_test/container': {'meta': {}},
            'wsgi.errors': wsgi_errors,
            'wsgi.input': wsgi_input,
            'wsgi.multiprocess': False,

@@ -457,6 +458,7 @@ class TestFormPost(unittest.TestCase):
            'SERVER_PROTOCOL': 'HTTP/1.0',
            'swift.account/AUTH_test': self._fake_cache_env(
                'AUTH_test', [key]),
            'swift.container/AUTH_test/container': {'meta': {}},
            'wsgi.errors': wsgi_errors,
            'wsgi.input': wsgi_input,
            'wsgi.multiprocess': False,

@@ -572,6 +574,7 @@ class TestFormPost(unittest.TestCase):
            'SERVER_PROTOCOL': 'HTTP/1.0',
            'swift.account/AUTH_test': self._fake_cache_env(
                'AUTH_test', [key]),
            'swift.container/AUTH_test/container': {'meta': {}},
            'wsgi.errors': wsgi_errors,
            'wsgi.input': wsgi_input,
            'wsgi.multiprocess': False,

@@ -683,6 +686,7 @@ class TestFormPost(unittest.TestCase):
            'SERVER_PROTOCOL': 'HTTP/1.0',
            'swift.account/AUTH_test': self._fake_cache_env(
                'AUTH_test', [key]),
            'swift.container/AUTH_test/container': {'meta': {}},
            'wsgi.errors': wsgi_errors,
            'wsgi.input': wsgi_input,
            'wsgi.multiprocess': False,

@@ -728,6 +732,7 @@ class TestFormPost(unittest.TestCase):
        env['wsgi.input'] = StringIO('XX' + '\r\n'.join(body))
        env['swift.account/AUTH_test'] = self._fake_cache_env(
            'AUTH_test', [key])
        env['swift.container/AUTH_test/container'] = {'meta': {}}
        self.app = FakeApp(iter([('201 Created', {}, ''),
                                 ('201 Created', {}, '')]))
        self.auth = tempauth.filter_factory({})(self.app)

@@ -763,6 +768,7 @@ class TestFormPost(unittest.TestCase):
        env['wsgi.input'] = StringIO('\r\n'.join(body))
        env['swift.account/AUTH_test'] = self._fake_cache_env(
            'AUTH_test', [key])
        env['swift.container/AUTH_test/container'] = {'meta': {}}
        self.app = FakeApp(iter([('201 Created', {}, ''),
                                 ('201 Created', {}, '')]))
        self.auth = tempauth.filter_factory({})(self.app)

@@ -793,6 +799,7 @@ class TestFormPost(unittest.TestCase):
        env['wsgi.input'] = StringIO('\r\n'.join(body))
        env['swift.account/AUTH_test'] = self._fake_cache_env(
            'AUTH_test', [key])
        env['swift.container/AUTH_test/container'] = {'meta': {}}
        self.app = FakeApp(iter([('201 Created', {}, ''),
                                 ('201 Created', {}, '')]))
        self.auth = tempauth.filter_factory({})(self.app)

@@ -833,6 +840,7 @@ class TestFormPost(unittest.TestCase):
        env['wsgi.input'] = StringIO('\r\n'.join(body))
        env['swift.account/AUTH_test'] = self._fake_cache_env(
            'AUTH_test', [key])
        env['swift.container/AUTH_test/container'] = {'meta': {}}
        self.app = FakeApp(
            iter([('201 Created', {}, ''),
                  ('201 Created', {}, '')]),

@@ -867,6 +875,7 @@ class TestFormPost(unittest.TestCase):
        env['wsgi.input'] = StringIO('\r\n'.join(body))
        env['swift.account/AUTH_test'] = self._fake_cache_env(
            'AUTH_test', [key])
        env['swift.container/AUTH_test/container'] = {'meta': {}}
        self.app = FakeApp(iter([('404 Not Found', {}, ''),
                                 ('201 Created', {}, '')]))
        self.auth = tempauth.filter_factory({})(self.app)

@@ -949,6 +958,7 @@ class TestFormPost(unittest.TestCase):
        ]))
        env['swift.account/AUTH_test'] = self._fake_cache_env(
            'AUTH_test', [key])
        env['swift.container/AUTH_test/container'] = {'meta': {}}
        self.app = FakeApp(iter([('201 Created', {}, ''),
                                 ('201 Created', {}, '')]))
        self.auth = tempauth.filter_factory({})(self.app)

@@ -1016,6 +1026,7 @@ class TestFormPost(unittest.TestCase):
        ]))
        env['swift.account/AUTH_test'] = self._fake_cache_env(
            'AUTH_test', [key])
        env['swift.container/AUTH_test/container'] = {'meta': {}}
        self.app = FakeApp(iter([('201 Created', {}, ''),
                                 ('201 Created', {}, '')]))
        self.auth = tempauth.filter_factory({})(self.app)

@@ -1055,6 +1066,7 @@ class TestFormPost(unittest.TestCase):
        env['wsgi.input'] = StringIO('\r\n'.join(body))
        env['swift.account/AUTH_test'] = self._fake_cache_env(
            'AUTH_test', [key])
        env['swift.container/AUTH_test/container'] = {'meta': {}}
        self.app = FakeApp(iter([('201 Created', {}, ''),
                                 ('201 Created', {}, '')]))
        self.auth = tempauth.filter_factory({})(self.app)

@@ -1075,6 +1087,7 @@ class TestFormPost(unittest.TestCase):
        env['wsgi.input'] = StringIO('\r\n'.join(body))
        env['swift.account/AUTH_test'] = self._fake_cache_env(
            'AUTH_test', [key])
        env['swift.container/AUTH_test/container'] = {'meta': {}}
        env['HTTP_ORIGIN'] = 'http://localhost:5000'
        self.app = FakeApp(iter([('201 Created', {}, ''),
                                 ('201 Created',

@@ -1103,6 +1116,7 @@ class TestFormPost(unittest.TestCase):
        # Stick it in X-Account-Meta-Temp-URL-Key-2 and make sure we get it
        env['swift.account/AUTH_test'] = self._fake_cache_env(
            'AUTH_test', ['bert', key])
        env['swift.container/AUTH_test/container'] = {'meta': {}}
        self.app = FakeApp(iter([('201 Created', {}, ''),
                                 ('201 Created', {}, '')]))
        self.auth = tempauth.filter_factory({})(self.app)

@@ -1120,6 +1134,42 @@ class TestFormPost(unittest.TestCase):
            'http://redirect?status=201&message=',
            dict(headers[0]).get('Location'))

    def test_formpost_with_multiple_container_keys(self):
        first_key = 'ernie'
        second_key = 'bert'
        keys = [first_key, second_key]

        meta = {}
        for idx, key in enumerate(keys):
            meta_name = 'temp-url-key' + ("-%d" % (idx + 1) if idx else "")
            if key:
                meta[meta_name] = key

        for key in keys:
            sig, env, body = self._make_sig_env_body(
                '/v1/AUTH_test/container', 'http://redirect', 1024, 10,
                int(time() + 86400), key)
            env['wsgi.input'] = StringIO('\r\n'.join(body))
            env['swift.account/AUTH_test'] = self._fake_cache_env('AUTH_test')
            # Stick it in X-Container-Meta-Temp-URL-Key-2 and ensure we get it
            env['swift.container/AUTH_test/container'] = {'meta': meta}
            self.app = FakeApp(iter([('201 Created', {}, ''),
                                     ('201 Created', {}, '')]))
            self.auth = tempauth.filter_factory({})(self.app)
            self.formpost = formpost.filter_factory({})(self.auth)

            status = [None]
            headers = [None]

            def start_response(s, h, e=None):
                status[0] = s
                headers[0] = h
            body = ''.join(self.formpost(env, start_response))
            self.assertEqual('303 See Other', status[0])
            self.assertEqual(
                'http://redirect?status=201&message=',
                dict(headers[0]).get('Location'))

    def test_redirect(self):
        key = 'abc'
        sig, env, body = self._make_sig_env_body(

@@ -1128,6 +1178,7 @@ class TestFormPost(unittest.TestCase):
        env['wsgi.input'] = StringIO('\r\n'.join(body))
        env['swift.account/AUTH_test'] = self._fake_cache_env(
            'AUTH_test', [key])
        env['swift.container/AUTH_test/container'] = {'meta': {}}
        self.app = FakeApp(iter([('201 Created', {}, ''),
                                 ('201 Created', {}, '')]))
        self.auth = tempauth.filter_factory({})(self.app)

@@ -1165,6 +1216,7 @@ class TestFormPost(unittest.TestCase):
        env['wsgi.input'] = StringIO('\r\n'.join(body))
        env['swift.account/AUTH_test'] = self._fake_cache_env(
            'AUTH_test', [key])
        env['swift.container/AUTH_test/container'] = {'meta': {}}
        self.app = FakeApp(iter([('201 Created', {}, ''),
                                 ('201 Created', {}, '')]))
        self.auth = tempauth.filter_factory({})(self.app)

@@ -1202,6 +1254,7 @@ class TestFormPost(unittest.TestCase):
        env['wsgi.input'] = StringIO('\r\n'.join(body))
        env['swift.account/AUTH_test'] = self._fake_cache_env(
            'AUTH_test', [key])
        env['swift.container/AUTH_test/container'] = {'meta': {}}
        self.app = FakeApp(iter([('201 Created', {}, ''),
                                 ('201 Created', {}, '')]))
        self.auth = tempauth.filter_factory({})(self.app)

@@ -1550,6 +1603,7 @@ class TestFormPost(unittest.TestCase):
        env['wsgi.input'] = StringIO('\r\n'.join(x_delete_body_part + body))
        env['swift.account/AUTH_test'] = self._fake_cache_env(
            'AUTH_test', [key])
        env['swift.container/AUTH_test/container'] = {'meta': {}}
        self.app = FakeApp(iter([('201 Created', {}, ''),
                                 ('201 Created', {}, '')]))
        self.auth = tempauth.filter_factory({})(self.app)

@@ -1625,6 +1679,7 @@ class TestFormPost(unittest.TestCase):
        env['wsgi.input'] = StringIO('\r\n'.join(x_delete_body_part + body))
        env['swift.account/AUTH_test'] = self._fake_cache_env(
            'AUTH_test', [key])
        env['swift.container/AUTH_test/container'] = {'meta': {}}
        self.app = FakeApp(iter([('201 Created', {}, ''),
                                 ('201 Created', {}, '')]))
        self.auth = tempauth.filter_factory({})(self.app)
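The `_make_sig_env_body('/v1/AUTH_test/container', 'http://redirect', 1024, 10, expires, key)` calls above build a formpost signature. A hedged sketch of that computation, assuming the formpost convention of HMAC-SHA1 over the newline-joined path, redirect, max file size, max file count, and expiry (the helper name is hypothetical):

```python
import hmac
from hashlib import sha1


def formpost_signature(path, redirect, max_file_size, max_file_count,
                       expires, key):
    """Sketch of a formpost-style signature: HMAC-SHA1 of the
    newline-joined form parameters, keyed with the temp-url key."""
    hmac_body = '%s\n%s\n%s\n%s\n%s' % (
        path, redirect, max_file_size, max_file_count, expires)
    return hmac.new(key.encode(), hmac_body.encode(), sha1).hexdigest()


sig = formpost_signature('/v1/AUTH_test/container', 'http://redirect',
                         1024, 10, 1400000000, 'ernie')
print(len(sig))  # 40-character hex digest
```

With container keys (as in `test_formpost_with_multiple_container_keys`), the middleware accepts a signature made with either the current or the previous key, which is what allows key rotation without breaking in-flight forms.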
@@ -430,6 +430,27 @@ class TestProxyLogging(unittest.TestCase):
        self.assertEquals(log_parts[0], '1.2.3.4')  # client ip
        self.assertEquals(log_parts[1], '1.2.3.4')  # remote addr

    def test_iterator_closing(self):

        class CloseableBody(object):
            def __init__(self):
                self.closed = False

            def close(self):
                self.closed = True

            def __iter__(self):
                return iter(["CloseableBody"])

        body = CloseableBody()
        app = proxy_logging.ProxyLoggingMiddleware(FakeApp(body), {})
        req = Request.blank('/', environ={'REQUEST_METHOD': 'GET',
                                          'REMOTE_ADDR': '1.2.3.4'})
        resp = app(req.environ, start_response)
        # exhaust generator
        [x for x in resp]
        self.assertTrue(body.closed)

    def test_proxy_client_logging(self):
        app = proxy_logging.ProxyLoggingMiddleware(FakeApp(), {})
        app.access_logger = FakeLogger()

@@ -752,7 +773,7 @@ class TestProxyLogging(unittest.TestCase):
        resp = app(req.environ, start_response)
        resp_body = ''.join(resp)
        log_parts = self._log_parts(app)
        self.assertEquals(len(log_parts), 20)
        self.assertEquals(len(log_parts), 21)
        self.assertEquals(log_parts[0], '-')
        self.assertEquals(log_parts[1], '-')
        self.assertEquals(log_parts[2], '26/Apr/1970/17/46/41')

@@ -774,6 +795,7 @@ class TestProxyLogging(unittest.TestCase):
        self.assertEquals(log_parts[17], '-')
        self.assertEquals(log_parts[18], '10000000.000000000')
        self.assertEquals(log_parts[19], '10000001.000000000')
        self.assertEquals(log_parts[20], '-')

    def test_dual_logging_middlewares(self):
        # Since no internal request is being made, outer most proxy logging

@@ -856,6 +878,39 @@ class TestProxyLogging(unittest.TestCase):
        self.assertEquals(resp_body, 'FAKE MIDDLEWARE')
        self.assertEquals(log_parts[11], str(len(resp_body)))

    def test_policy_index(self):
        # Policy index can be specified by X-Backend-Storage-Policy-Index
        # in the request header for object API
        app = proxy_logging.ProxyLoggingMiddleware(FakeApp(), {})
        app.access_logger = FakeLogger()
        req = Request.blank('/v1/a/c/o', environ={'REQUEST_METHOD': 'PUT'},
                            headers={'X-Backend-Storage-Policy-Index': '1'})
        resp = app(req.environ, start_response)
        ''.join(resp)
        log_parts = self._log_parts(app)
        self.assertEquals(log_parts[20], '1')

        # Policy index can be specified by X-Backend-Storage-Policy-Index
        # in the response header for container API
        app = proxy_logging.ProxyLoggingMiddleware(FakeApp(), {})
        app.access_logger = FakeLogger()
        req = Request.blank('/v1/a/c', environ={'REQUEST_METHOD': 'GET'})

        def fake_call(app, env, start_response):
            start_response(app.response_str,
                           [('Content-Type', 'text/plain'),
                            ('Content-Length', str(sum(map(len, app.body)))),
                            ('X-Backend-Storage-Policy-Index', '1')])
            while env['wsgi.input'].read(5):
                pass
            return app.body

        with mock.patch.object(FakeApp, '__call__', fake_call):
            resp = app(req.environ, start_response)
            ''.join(resp)
            log_parts = self._log_parts(app)
            self.assertEquals(log_parts[20], '1')


if __name__ == '__main__':
    unittest.main()
@@ -794,7 +794,7 @@ class TestReconSuccess(TestCase):
        self.assertEquals(rv, du_resp)

    def test_get_quarantine_count(self):
        self.mockos.ls_output = ['sda']
        dirs = [['sda'], ['accounts', 'containers', 'objects', 'objects-1']]
        self.mockos.ismount_output = True

        def fake_lstat(*args, **kwargs):

@@ -806,10 +806,16 @@ class TestReconSuccess(TestCase):
        def fake_exists(*args, **kwargs):
            return True

        def fake_listdir(*args, **kwargs):
            return dirs.pop(0)

        with mock.patch("os.lstat", fake_lstat):
            with mock.patch("os.path.exists", fake_exists):
                rv = self.app.get_quarantine_count()
        self.assertEquals(rv, {'objects': 2, 'accounts': 2, 'containers': 2})
                with mock.patch("os.listdir", fake_listdir):
                    rv = self.app.get_quarantine_count()
        self.assertEquals(rv, {'objects': 4, 'accounts': 2, 'policies':
                               {'1': {'objects': 2}, '0': {'objects': 2}},
                           'containers': 2})

    def test_get_socket_info(self):
        sockstat_content = ['sockets: used 271',
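The expected dictionary above splits object quarantine counts per policy, driven by the directory names `objects` versus `objects-1`. A hedged sketch of mapping a quarantine directory name to its policy index (the naming convention is taken from the test fixture; the helper name is hypothetical):

```python
def policy_index_from_dirname(name):
    """Map a quarantine directory name to a policy index string.

    Assumed layout from the fixture above: 'objects' is policy 0,
    'objects-N' is policy N; account/container dirs have no policy.
    """
    if name == 'objects':
        return '0'
    if name.startswith('objects-'):
        return name.split('-', 1)[1]
    return None  # 'accounts' / 'containers' are not per-policy
```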
@@ -97,6 +97,9 @@ class TestTempURL(unittest.TestCase):
                'bytes': '0',
                'meta': meta}

        container_cache_key = 'swift.container/' + account + '/c'
        environ.setdefault(container_cache_key, {'meta': {}})

    def test_passthrough(self):
        resp = self._make_request('/v1/a/c/o').get_response(self.tempurl)
        self.assertEquals(resp.status_int, 401)

@@ -109,11 +112,12 @@ class TestTempURL(unittest.TestCase):
            environ={'REQUEST_METHOD': 'OPTIONS'}).get_response(self.tempurl)
        self.assertEquals(resp.status_int, 200)

    def assert_valid_sig(self, expires, path, keys, sig):
        req = self._make_request(
            path, keys=keys,
            environ={'QUERY_STRING':
                     'temp_url_sig=%s&temp_url_expires=%s' % (sig, expires)})
    def assert_valid_sig(self, expires, path, keys, sig, environ=None):
        if not environ:
            environ = {}
        environ['QUERY_STRING'] = 'temp_url_sig=%s&temp_url_expires=%s' % (
            sig, expires)
        req = self._make_request(path, keys=keys, environ=environ)
        self.tempurl.app = FakeApp(iter([('200 Ok', (), '123')]))
        resp = req.get_response(self.tempurl)
        self.assertEquals(resp.status_int, 200)

@@ -143,6 +147,29 @@ class TestTempURL(unittest.TestCase):
        for sig in (sig1, sig2):
            self.assert_valid_sig(expires, path, [key1, key2], sig)

    def test_get_valid_container_keys(self):
        environ = {}
        # Add two static container keys
        container_keys = ['me', 'other']
        meta = {}
        for idx, key in enumerate(container_keys):
            meta_name = 'Temp-URL-key' + (("-%d" % (idx + 1) if idx else ""))
            if key:
                meta[meta_name] = key
        environ['swift.container/a/c'] = {'meta': meta}

        method = 'GET'
        expires = int(time() + 86400)
        path = '/v1/a/c/o'
        key1 = 'me'
        key2 = 'other'
        hmac_body = '%s\n%s\n%s' % (method, expires, path)
        sig1 = hmac.new(key1, hmac_body, sha1).hexdigest()
        sig2 = hmac.new(key2, hmac_body, sha1).hexdigest()
        account_keys = []
        for sig in (sig1, sig2):
            self.assert_valid_sig(expires, path, account_keys, sig, environ)

    def test_get_valid_with_filename(self):
        method = 'GET'
        expires = int(time() + 86400)
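The signature scheme these tests rely on is visible in the fixtures: the signed string is exactly `METHOD\nEXPIRES\nPATH`, keyed with HMAC-SHA1. A self-contained (Python 3 friendly) sketch of computing such a temp URL signature:

```python
import hmac
from hashlib import sha1


def tempurl_signature(key, method, expires, path):
    # Signed string format taken from the tests above:
    # three lines - method, expiry timestamp, object path.
    hmac_body = '%s\n%s\n%s' % (method, expires, path)
    return hmac.new(key.encode('utf-8'), hmac_body.encode('utf-8'),
                    sha1).hexdigest()
```

Any of the configured account or container keys that reproduces the client's signature validates the request, which is why the tests loop over `(sig1, sig2)`.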
@@ -16,11 +16,17 @@
import unittest

from swift.common import ring
from swift.common.ring.utils import (build_tier_tree, tiers_for_dev,
                                     parse_search_value, parse_args,
                                     build_dev_from_opts, find_parts,
from swift.common.ring.utils import (tiers_for_dev, build_tier_tree,
                                     validate_and_normalize_ip,
                                     validate_and_normalize_address,
                                     is_valid_ip, is_valid_ipv4,
                                     is_valid_ipv6, is_valid_hostname,
                                     is_local_device, parse_search_value,
                                     parse_search_values_from_opts,
                                     parse_change_values_from_opts,
                                     validate_args, parse_args,
                                     parse_builder_ring_filename_args,
                                     dispersion_report)
                                     build_dev_from_opts, dispersion_report)


class TestUtils(unittest.TestCase):

@@ -95,6 +101,130 @@ class TestUtils(unittest.TestCase):
                          (1, 2, '192.168.2.2:6000', 10),
                          (1, 2, '192.168.2.2:6000', 11)]))

    def test_is_valid_ip(self):
        self.assertTrue(is_valid_ip("127.0.0.1"))
        self.assertTrue(is_valid_ip("10.0.0.1"))
        ipv6 = "fe80:0000:0000:0000:0204:61ff:fe9d:f156"
        self.assertTrue(is_valid_ip(ipv6))
        ipv6 = "fe80:0:0:0:204:61ff:fe9d:f156"
        self.assertTrue(is_valid_ip(ipv6))
        ipv6 = "fe80::204:61ff:fe9d:f156"
        self.assertTrue(is_valid_ip(ipv6))
        ipv6 = "fe80:0000:0000:0000:0204:61ff:254.157.241.86"
        self.assertTrue(is_valid_ip(ipv6))
        ipv6 = "fe80:0:0:0:0204:61ff:254.157.241.86"
        self.assertTrue(is_valid_ip(ipv6))
        ipv6 = "fe80::204:61ff:254.157.241.86"
        self.assertTrue(is_valid_ip(ipv6))
        ipv6 = "fe80::"
        self.assertTrue(is_valid_ip(ipv6))
        ipv6 = "::1"
        self.assertTrue(is_valid_ip(ipv6))
        not_ipv6 = "3ffe:0b00:0000:0001:0000:0000:000a"
        self.assertFalse(is_valid_ip(not_ipv6))
        not_ipv6 = "1:2:3:4:5:6::7:8"
        self.assertFalse(is_valid_ip(not_ipv6))

    def test_is_valid_ipv4(self):
        self.assertTrue(is_valid_ipv4("127.0.0.1"))
        self.assertTrue(is_valid_ipv4("10.0.0.1"))
        ipv6 = "fe80:0000:0000:0000:0204:61ff:fe9d:f156"
        self.assertFalse(is_valid_ipv4(ipv6))
        ipv6 = "fe80:0:0:0:204:61ff:fe9d:f156"
        self.assertFalse(is_valid_ipv4(ipv6))
        ipv6 = "fe80::204:61ff:fe9d:f156"
        self.assertFalse(is_valid_ipv4(ipv6))
        ipv6 = "fe80:0000:0000:0000:0204:61ff:254.157.241.86"
        self.assertFalse(is_valid_ipv4(ipv6))
        ipv6 = "fe80:0:0:0:0204:61ff:254.157.241.86"
        self.assertFalse(is_valid_ipv4(ipv6))
        ipv6 = "fe80::204:61ff:254.157.241.86"
        self.assertFalse(is_valid_ipv4(ipv6))
        ipv6 = "fe80::"
        self.assertFalse(is_valid_ipv4(ipv6))
        ipv6 = "::1"
        self.assertFalse(is_valid_ipv4(ipv6))
        not_ipv6 = "3ffe:0b00:0000:0001:0000:0000:000a"
        self.assertFalse(is_valid_ipv4(not_ipv6))
        not_ipv6 = "1:2:3:4:5:6::7:8"
        self.assertFalse(is_valid_ipv4(not_ipv6))

    def test_is_valid_ipv6(self):
        self.assertFalse(is_valid_ipv6("127.0.0.1"))
        self.assertFalse(is_valid_ipv6("10.0.0.1"))
        ipv6 = "fe80:0000:0000:0000:0204:61ff:fe9d:f156"
        self.assertTrue(is_valid_ipv6(ipv6))
        ipv6 = "fe80:0:0:0:204:61ff:fe9d:f156"
        self.assertTrue(is_valid_ipv6(ipv6))
        ipv6 = "fe80::204:61ff:fe9d:f156"
        self.assertTrue(is_valid_ipv6(ipv6))
        ipv6 = "fe80:0000:0000:0000:0204:61ff:254.157.241.86"
        self.assertTrue(is_valid_ipv6(ipv6))
        ipv6 = "fe80:0:0:0:0204:61ff:254.157.241.86"
        self.assertTrue(is_valid_ipv6(ipv6))
        ipv6 = "fe80::204:61ff:254.157.241.86"
        self.assertTrue(is_valid_ipv6(ipv6))
        ipv6 = "fe80::"
        self.assertTrue(is_valid_ipv6(ipv6))
        ipv6 = "::1"
        self.assertTrue(is_valid_ipv6(ipv6))
        not_ipv6 = "3ffe:0b00:0000:0001:0000:0000:000a"
        self.assertFalse(is_valid_ipv6(not_ipv6))
        not_ipv6 = "1:2:3:4:5:6::7:8"
        self.assertFalse(is_valid_ipv6(not_ipv6))

    def test_is_valid_hostname(self):
        self.assertTrue(is_valid_hostname("local"))
        self.assertTrue(is_valid_hostname("test.test.com"))
        hostname = "test." * 51
        self.assertTrue(is_valid_hostname(hostname))
        hostname = hostname.rstrip('.')
        self.assertTrue(is_valid_hostname(hostname))
        hostname = hostname + "00"
        self.assertFalse(is_valid_hostname(hostname))
        self.assertFalse(is_valid_hostname("$blah#"))

    def test_is_local_device(self):
        my_ips = ["127.0.0.1",
                  "0000:0000:0000:0000:0000:0000:0000:0001"]
        my_port = 6000
        self.assertTrue(is_local_device(my_ips, my_port,
                                        "localhost",
                                        my_port))
        self.assertFalse(is_local_device(my_ips, my_port,
                                         "localhost",
                                         my_port + 1))
        self.assertFalse(is_local_device(my_ips, my_port,
                                         "127.0.0.2",
                                         my_port))
        # for those that don't have a local port
        self.assertTrue(is_local_device(my_ips, None,
                                        my_ips[0], None))

    def test_validate_and_normalize_ip(self):
        ipv4 = "10.0.0.1"
        self.assertEqual(ipv4, validate_and_normalize_ip(ipv4))
        ipv6 = "fe80::204:61ff:fe9d:f156"
        self.assertEqual(ipv6, validate_and_normalize_ip(ipv6.upper()))
        hostname = "test.test.com"
        self.assertRaises(ValueError,
                          validate_and_normalize_ip, hostname)
        hostname = "$blah#"
        self.assertRaises(ValueError,
                          validate_and_normalize_ip, hostname)

    def test_validate_and_normalize_address(self):
        ipv4 = "10.0.0.1"
        self.assertEqual(ipv4, validate_and_normalize_address(ipv4))
        ipv6 = "fe80::204:61ff:fe9d:f156"
        self.assertEqual(ipv6, validate_and_normalize_address(ipv6.upper()))
        hostname = "test.test.com"
        self.assertEqual(hostname,
                         validate_and_normalize_address(hostname.upper()))
        hostname = "$blah#"
        self.assertRaises(ValueError,
                          validate_and_normalize_address, hostname)

    def test_parse_search_value(self):
        res = parse_search_value('r0')
        self.assertEqual(res, {'region': 0})

@@ -108,6 +238,8 @@ class TestUtils(unittest.TestCase):
        self.assertEqual(res, {'zone': 1})
        res = parse_search_value('-127.0.0.1')
        self.assertEqual(res, {'ip': '127.0.0.1'})
        res = parse_search_value('127.0.0.1')
        self.assertEqual(res, {'ip': '127.0.0.1'})
        res = parse_search_value('-[127.0.0.1]:10001')
        self.assertEqual(res, {'ip': '127.0.0.1', 'port': 10001})
        res = parse_search_value(':10001')

@@ -125,22 +257,268 @@ class TestUtils(unittest.TestCase):
        self.assertEqual(res, {'meta': 'meta1'})
        self.assertRaises(ValueError, parse_search_value, 'OMGPONIES')

    def test_replication_defaults(self):
        args = '-r 1 -z 1 -i 127.0.0.1 -p 6010 -d d1 -w 100'.split()
        opts, _ = parse_args(args)
        device = build_dev_from_opts(opts)
    def test_parse_search_values_from_opts(self):
        argv = \
            ["--id", "1", "--region", "2", "--zone", "3",
             "--ip", "test.test.com",
             "--port", "6000",
             "--replication-ip", "r.test.com",
             "--replication-port", "7000",
             "--device", "sda3",
             "--meta", "some meta data",
             "--weight", "3.14159265359",
             "--change-ip", "change.test.test.com",
             "--change-port", "6001",
             "--change-replication-ip", "change.r.test.com",
             "--change-replication-port", "7001",
             "--change-device", "sdb3",
             "--change-meta", "some meta data for change"]
        expected = {
            'device': 'd1',
            'ip': '127.0.0.1',
            'meta': '',
            'port': 6010,
            'region': 1,
            'replication_ip': '127.0.0.1',
            'replication_port': 6010,
            'weight': 100.0,
            'zone': 1,
            'id': 1,
            'region': 2,
            'zone': 3,
            'ip': "test.test.com",
            'port': 6000,
            'replication_ip': "r.test.com",
            'replication_port': 7000,
            'device': "sda3",
            'meta': "some meta data",
            'weight': 3.14159265359,
        }
        self.assertEquals(device, expected)
        new_cmd_format, opts, args = validate_args(argv)
        search_values = parse_search_values_from_opts(opts)
        self.assertEquals(search_values, expected)

        argv = \
            ["--id", "1", "--region", "2", "--zone", "3",
             "--ip", "127.0.0.1",
             "--port", "6000",
             "--replication-ip", "127.0.0.10",
             "--replication-port", "7000",
             "--device", "sda3",
             "--meta", "some meta data",
             "--weight", "3.14159265359",
             "--change-ip", "127.0.0.2",
             "--change-port", "6001",
             "--change-replication-ip", "127.0.0.20",
             "--change-replication-port", "7001",
             "--change-device", "sdb3",
             "--change-meta", "some meta data for change"]
        expected = {
            'id': 1,
            'region': 2,
            'zone': 3,
            'ip': "127.0.0.1",
            'port': 6000,
            'replication_ip': "127.0.0.10",
            'replication_port': 7000,
            'device': "sda3",
            'meta': "some meta data",
            'weight': 3.14159265359,
        }
        new_cmd_format, opts, args = validate_args(argv)
        search_values = parse_search_values_from_opts(opts)
        self.assertEquals(search_values, expected)

        argv = \
            ["--id", "1", "--region", "2", "--zone", "3",
             "--ip", "[127.0.0.1]",
             "--port", "6000",
             "--replication-ip", "[127.0.0.10]",
             "--replication-port", "7000",
             "--device", "sda3",
             "--meta", "some meta data",
             "--weight", "3.14159265359",
             "--change-ip", "[127.0.0.2]",
             "--change-port", "6001",
             "--change-replication-ip", "[127.0.0.20]",
             "--change-replication-port", "7001",
             "--change-device", "sdb3",
             "--change-meta", "some meta data for change"]
        new_cmd_format, opts, args = validate_args(argv)
        search_values = parse_search_values_from_opts(opts)
        self.assertEquals(search_values, expected)

    def test_parse_change_values_from_opts(self):
        argv = \
            ["--id", "1", "--region", "2", "--zone", "3",
             "--ip", "test.test.com",
             "--port", "6000",
             "--replication-ip", "r.test.com",
             "--replication-port", "7000",
             "--device", "sda3",
             "--meta", "some meta data",
             "--weight", "3.14159265359",
             "--change-ip", "change.test.test.com",
             "--change-port", "6001",
             "--change-replication-ip", "change.r.test.com",
             "--change-replication-port", "7001",
             "--change-device", "sdb3",
             "--change-meta", "some meta data for change"]
        expected = {
            'ip': "change.test.test.com",
            'port': 6001,
            'replication_ip': "change.r.test.com",
            'replication_port': 7001,
            'device': "sdb3",
            'meta': "some meta data for change",
        }
        new_cmd_format, opts, args = validate_args(argv)
        search_values = parse_change_values_from_opts(opts)
        self.assertEquals(search_values, expected)

        argv = \
            ["--id", "1", "--region", "2", "--zone", "3",
             "--ip", "127.0.0.1",
             "--port", "6000",
             "--replication-ip", "127.0.0.10",
             "--replication-port", "7000",
             "--device", "sda3",
             "--meta", "some meta data",
             "--weight", "3.14159265359",
             "--change-ip", "127.0.0.2",
             "--change-port", "6001",
             "--change-replication-ip", "127.0.0.20",
             "--change-replication-port", "7001",
             "--change-device", "sdb3",
             "--change-meta", "some meta data for change"]
        expected = {
            'ip': "127.0.0.2",
            'port': 6001,
            'replication_ip': "127.0.0.20",
            'replication_port': 7001,
            'device': "sdb3",
            'meta': "some meta data for change",
        }
        new_cmd_format, opts, args = validate_args(argv)
        search_values = parse_change_values_from_opts(opts)
        self.assertEquals(search_values, expected)

        argv = \
            ["--id", "1", "--region", "2", "--zone", "3",
             "--ip", "[127.0.0.1]",
             "--port", "6000",
             "--replication-ip", "[127.0.0.10]",
             "--replication-port", "7000",
             "--device", "sda3",
             "--meta", "some meta data",
             "--weight", "3.14159265359",
             "--change-ip", "[127.0.0.2]",
             "--change-port", "6001",
             "--change-replication-ip", "[127.0.0.20]",
             "--change-replication-port", "7001",
             "--change-device", "sdb3",
             "--change-meta", "some meta data for change"]
        new_cmd_format, opts, args = validate_args(argv)
        search_values = parse_change_values_from_opts(opts)
        self.assertEquals(search_values, expected)

    def test_validate_args(self):
        argv = \
            ["--id", "1", "--region", "2", "--zone", "3",
             "--ip", "test.test.com",
             "--port", "6000",
             "--replication-ip", "r.test.com",
             "--replication-port", "7000",
             "--device", "sda3",
             "--meta", "some meta data",
             "--weight", "3.14159265359",
             "--change-ip", "change.test.test.com",
             "--change-port", "6001",
             "--change-replication-ip", "change.r.test.com",
             "--change-replication-port", "7001",
             "--change-device", "sdb3",
             "--change-meta", "some meta data for change"]
        new_cmd_format, opts, args = validate_args(argv)
        self.assertTrue(new_cmd_format)
        self.assertEqual(opts.id, 1)
        self.assertEqual(opts.region, 2)
        self.assertEqual(opts.zone, 3)
        self.assertEqual(opts.ip, "test.test.com")
        self.assertEqual(opts.port, 6000)
        self.assertEqual(opts.replication_ip, "r.test.com")
        self.assertEqual(opts.replication_port, 7000)
        self.assertEqual(opts.device, "sda3")
        self.assertEqual(opts.meta, "some meta data")
        self.assertEqual(opts.weight, 3.14159265359)
        self.assertEqual(opts.change_ip, "change.test.test.com")
        self.assertEqual(opts.change_port, 6001)
        self.assertEqual(opts.change_replication_ip, "change.r.test.com")
        self.assertEqual(opts.change_replication_port, 7001)
        self.assertEqual(opts.change_device, "sdb3")
        self.assertEqual(opts.change_meta, "some meta data for change")

        argv = \
            ["--id", "0", "--region", "0", "--zone", "0",
             "--ip", "",
             "--port", "0",
             "--replication-ip", "",
             "--replication-port", "0",
             "--device", "",
             "--meta", "",
             "--weight", "0",
             "--change-ip", "",
             "--change-port", "0",
             "--change-replication-ip", "",
             "--change-replication-port", "0",
             "--change-device", "",
             "--change-meta", ""]
        new_cmd_format, opts, args = validate_args(argv)
        self.assertFalse(new_cmd_format)

        argv = \
            ["--id", "0", "--region", "0", "--zone", "0",
             "--ip", "",
             "--port", "0",
             "--replication-ip", "",
             "--replication-port", "0",
             "--device", "",
             "--meta", "",
             "--weight", "0",
             "--change-ip", "change.test.test.com",
             "--change-port", "6001",
             "--change-replication-ip", "change.r.test.com",
             "--change-replication-port", "7001",
             "--change-device", "sdb3",
             "--change-meta", "some meta data for change"]
        new_cmd_format, opts, args = validate_args(argv)
        self.assertFalse(new_cmd_format)

    def test_parse_args(self):
        argv = \
            ["--id", "1", "--region", "2", "--zone", "3",
             "--ip", "test.test.com",
             "--port", "6000",
             "--replication-ip", "r.test.com",
             "--replication-port", "7000",
             "--device", "sda3",
             "--meta", "some meta data",
             "--weight", "3.14159265359",
             "--change-ip", "change.test.test.com",
             "--change-port", "6001",
             "--change-replication-ip", "change.r.test.com",
             "--change-replication-port", "7001",
             "--change-device", "sdb3",
             "--change-meta", "some meta data for change"]

        opts, args = parse_args(argv)
        self.assertEqual(opts.id, 1)
        self.assertEqual(opts.region, 2)
        self.assertEqual(opts.zone, 3)
        self.assertEqual(opts.ip, "test.test.com")
        self.assertEqual(opts.port, 6000)
        self.assertEqual(opts.replication_ip, "r.test.com")
        self.assertEqual(opts.replication_port, 7000)
        self.assertEqual(opts.device, "sda3")
        self.assertEqual(opts.meta, "some meta data")
        self.assertEqual(opts.weight, 3.14159265359)
        self.assertEqual(opts.change_ip, "change.test.test.com")
        self.assertEqual(opts.change_port, 6001)
        self.assertEqual(opts.change_replication_ip, "change.r.test.com")
        self.assertEqual(opts.change_replication_port, 7001)
        self.assertEqual(opts.change_device, "sdb3")
        self.assertEqual(opts.change_meta, "some meta data for change")
        self.assertEqual(len(args), 0)

    def test_parse_builder_ring_filename_args(self):
        args = 'swift-ring-builder object.builder write_ring'

@@ -161,33 +539,86 @@ class TestUtils(unittest.TestCase):
            'my.file.name', 'my.file.name.ring.gz'
        ), parse_builder_ring_filename_args(args.split()))

    def test_find_parts(self):
        rb = ring.RingBuilder(8, 3, 0)
        rb.add_dev({'id': 0, 'region': 1, 'zone': 0, 'weight': 100,
                    'ip': '127.0.0.1', 'port': 10000, 'device': 'sda1'})
        rb.add_dev({'id': 1, 'region': 1, 'zone': 1, 'weight': 100,
                    'ip': '127.0.0.1', 'port': 10001, 'device': 'sda1'})
        rb.add_dev({'id': 2, 'region': 1, 'zone': 2, 'weight': 100,
                    'ip': '127.0.0.1', 'port': 10002, 'device': 'sda1'})
        rb.rebalance()
    def test_build_dev_from_opts(self):
        argv = \
            ["--region", "2", "--zone", "3",
             "--ip", "test.test.com",
             "--port", "6000",
             "--replication-ip", "r.test.com",
             "--replication-port", "7000",
             "--device", "sda3",
             "--meta", "some meta data",
             "--weight", "3.14159265359"]
        expected = {
            'region': 2,
            'zone': 3,
            'ip': "test.test.com",
            'port': 6000,
            'replication_ip': "r.test.com",
            'replication_port': 7000,
            'device': "sda3",
            'meta': "some meta data",
            'weight': 3.14159265359,
        }
        opts, args = parse_args(argv)
        device = build_dev_from_opts(opts)
        self.assertEquals(device, expected)

        rb.add_dev({'id': 3, 'region': 2, 'zone': 1, 'weight': 10,
                    'ip': '127.0.0.1', 'port': 10004, 'device': 'sda1'})
        rb.pretend_min_part_hours_passed()
        rb.rebalance()
        argv = \
            ["--region", "2", "--zone", "3",
             "--ip", "[test.test.com]",
             "--port", "6000",
             "--replication-ip", "[r.test.com]",
             "--replication-port", "7000",
             "--device", "sda3",
             "--meta", "some meta data",
             "--weight", "3.14159265359"]
        opts, args = parse_args(argv)
        self.assertRaises(ValueError, build_dev_from_opts, opts)

        argv = ['swift-ring-builder', 'object.builder',
                'list_parts', '127.0.0.1']
        sorted_partition_count = find_parts(rb, argv)
        argv = \
            ["--region", "2", "--zone", "3",
             "--ip", "[test.test.com]",
             "--port", "6000",
             "--replication-ip", "[r.test.com]",
             "--replication-port", "7000",
             "--meta", "some meta data",
             "--weight", "3.14159265359"]
        opts, args = parse_args(argv)
        self.assertRaises(ValueError, build_dev_from_opts, opts)

        # Expect 256 partitions in the output
        self.assertEqual(256, len(sorted_partition_count))
    def test_replication_defaults(self):
        args = '-r 1 -z 1 -i 127.0.0.1 -p 6010 -d d1 -w 100'.split()
        opts, _ = parse_args(args)
        device = build_dev_from_opts(opts)
        expected = {
            'device': 'd1',
            'ip': '127.0.0.1',
            'meta': '',
            'port': 6010,
            'region': 1,
            'replication_ip': '127.0.0.1',
            'replication_port': 6010,
            'weight': 100.0,
            'zone': 1,
        }
        self.assertEquals(device, expected)

        # Each partitions should have 3 replicas
        for partition, count in sorted_partition_count:
            self.assertEqual(
                3, count, "Partition %d has only %d replicas" %
                (partition, count))
        args = '-r 1 -z 1 -i test.com -p 6010 -d d1 -w 100'.split()
        opts, _ = parse_args(args)
        device = build_dev_from_opts(opts)
        expected = {
            'device': 'd1',
            'ip': 'test.com',
            'meta': '',
            'port': 6010,
            'region': 1,
            'replication_ip': 'test.com',
            'replication_port': 6010,
            'weight': 100.0,
            'zone': 1,
        }
        self.assertEquals(device, expected)

    def test_dispersion_report(self):
        rb = ring.RingBuilder(8, 3, 0)
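The IP validation tests above pin down concrete behaviour: compressed `::` notation and mixed IPv4-in-IPv6 forms are accepted, while malformed group counts are rejected. One way to obtain exactly that behaviour is to delegate to the platform's `inet_pton` parser; this is a hedged sketch, not necessarily how `swift.common.ring.utils` implements these helpers:

```python
import socket


def is_valid_ipv4(ip):
    # inet_pton(AF_INET, ...) accepts only full dotted-quad strings
    try:
        socket.inet_pton(socket.AF_INET, ip)
    except (socket.error, ValueError):
        return False
    return True


def is_valid_ipv6(ip):
    # handles compressed (::) and IPv4-mapped forms, rejects bad group counts
    try:
        socket.inet_pton(socket.AF_INET6, ip)
    except (socket.error, ValueError):
        return False
    return True


def is_valid_ip(ip):
    # an address is valid if either family parses it
    return is_valid_ipv4(ip) or is_valid_ipv6(ip)
```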
@@ -502,6 +502,12 @@ class TestConstraints(unittest.TestCase):
        self.assertRaises(HTTPException,
                          constraints.check_account_format,
                          req, req.headers['X-Copy-From-Account'])
        req = Request.blank(
            '/v/a/c/o',
            headers={'X-Copy-From-Account': ''})
        self.assertRaises(HTTPException,
                          constraints.check_account_format,
                          req, req.headers['X-Copy-From-Account'])


class TestConstraintsConfig(unittest.TestCase):
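For context, `check_account_format` rejects malformed account names supplied via the `X-Copy-From-Account` header. A minimal sketch of the kind of check involved (a hypothetical simplification: the real helper takes the request as well and raises an `HTTPException` response, not `ValueError`):

```python
def check_account_format(account):
    """Reject unusable account names; return the name if acceptable.

    Sketch only: empty names and names containing a path separator
    cannot address a valid account.
    """
    if not account:
        raise ValueError('Account name cannot be empty')
    if '/' in account:
        raise ValueError('Account name cannot contain slashes')
    return account
```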
@ -21,6 +21,7 @@ import ctypes
|
|||
import errno
|
||||
import eventlet
|
||||
import eventlet.event
|
||||
import functools
|
||||
import grp
|
||||
import logging
|
||||
import os
|
||||
|
@ -144,14 +145,28 @@ class MockSys(object):
|
|||
def reset_loggers():
|
||||
if hasattr(utils.get_logger, 'handler4logger'):
|
||||
for logger, handler in utils.get_logger.handler4logger.items():
|
||||
logger.thread_locals = (None, None)
|
||||
logger.removeHandler(handler)
|
||||
delattr(utils.get_logger, 'handler4logger')
|
||||
if hasattr(utils.get_logger, 'console_handler4logger'):
|
||||
for logger, h in utils.get_logger.console_handler4logger.items():
|
||||
logger.thread_locals = (None, None)
|
||||
logger.removeHandler(h)
|
||||
delattr(utils.get_logger, 'console_handler4logger')
|
||||
# Reset the LogAdapter class thread local state. Use get_logger() here
|
||||
# to fetch a LogAdapter instance because the items from
|
||||
# get_logger.handler4logger above are the underlying logger instances,
|
||||
# not the LogAdapter.
|
||||
utils.get_logger(None).thread_locals = (None, None)
|
||||
|
||||
|
||||
def reset_logger_state(f):
|
||||
@functools.wraps(f)
|
||||
def wrapper(self, *args, **kwargs):
|
||||
reset_loggers()
|
||||
try:
|
||||
return f(self, *args, **kwargs)
|
||||
finally:
|
||||
reset_loggers()
|
||||
return wrapper
|
||||
|
||||
|
||||
class TestTimestamp(unittest.TestCase):
|
||||
|
@ -1241,6 +1256,7 @@ class TestUtils(unittest.TestCase):
|
|||
finally:
|
||||
utils.SysLogHandler = orig_sysloghandler
|
||||
|
||||
@reset_logger_state
|
||||
def test_clean_logger_exception(self):
|
||||
# setup stream logging
|
||||
sio = StringIO()
|
||||
|
@ -1330,8 +1346,8 @@ class TestUtils(unittest.TestCase):
|
|||
|
||||
finally:
|
||||
logger.logger.removeHandler(handler)
|
||||
reset_loggers()
|
||||
|
||||
@reset_logger_state
|
||||
def test_swift_log_formatter_max_line_length(self):
|
||||
# setup stream logging
|
||||
sio = StringIO()
|
||||
|
@ -1385,8 +1401,8 @@ class TestUtils(unittest.TestCase):
|
|||
self.assertEqual(strip_value(sio), '1234567890abcde\n')
|
||||
finally:
|
||||
logger.logger.removeHandler(handler)
|
||||
reset_loggers()
|
||||
|
||||
@reset_logger_state
|
||||
def test_swift_log_formatter(self):
|
||||
# setup stream logging
|
||||
sio = StringIO()
|
||||
|
@ -1449,12 +1465,20 @@ class TestUtils(unittest.TestCase):
|
|||
self.assertEquals(strip_value(sio), 'test 1.2.3.4 test 12345\n')
|
||||
finally:
|
||||
logger.logger.removeHandler(handler)
|
||||
reset_loggers()
|
||||
|
||||
def test_storage_directory(self):
|
||||
self.assertEquals(utils.storage_directory('objects', '1', 'ABCDEF'),
|
||||
'objects/1/DEF/ABCDEF')
|
||||
|
||||
def test_expand_ipv6(self):
|
||||
expanded_ipv6 = "fe80::204:61ff:fe9d:f156"
|
||||
upper_ipv6 = "fe80:0000:0000:0000:0204:61ff:fe9d:f156"
|
||||
self.assertEqual(expanded_ipv6, utils.expand_ipv6(upper_ipv6))
|
||||
omit_ipv6 = "fe80:0000:0000::0204:61ff:fe9d:f156"
|
||||
self.assertEqual(expanded_ipv6, utils.expand_ipv6(omit_ipv6))
|
||||
less_num_ipv6 = "fe80:0:00:000:0204:61ff:fe9d:f156"
|
||||
self.assertEqual(expanded_ipv6, utils.expand_ipv6(less_num_ipv6))
|
||||
|
||||
def test_whataremyips(self):
|
||||
myips = utils.whataremyips()
|
||||
self.assert_(len(myips) > 1)
|
||||
|
@ -1697,6 +1721,7 @@ log_name = %(yarr)s'''
|
|||
for func in required_func_calls:
|
||||
self.assert_(utils.os.called_funcs[func])
|
||||
|
||||
@reset_logger_state
|
||||
def test_capture_stdio(self):
|
||||
# stubs
|
||||
logger = utils.get_logger(None, 'dummy')
|
||||
|
@ -1744,13 +1769,12 @@ log_name = %(yarr)s'''
|
|||
utils.LoggerFileObject))
|
||||
self.assertFalse(isinstance(utils.sys.stderr,
|
||||
utils.LoggerFileObject))
|
||||
reset_loggers()
|
||||
finally:
|
||||
utils.sys = _orig_sys
|
||||
utils.os = _orig_os
|
||||
|
||||
@reset_logger_state
|
||||
def test_get_logger_console(self):
|
||||
reset_loggers()
|
||||
logger = utils.get_logger(None)
|
||||
console_handlers = [h for h in logger.logger.handlers if
|
||||
isinstance(h, logging.StreamHandler)]
|
||||
|
@ -1768,7 +1792,6 @@ log_name = %(yarr)s'''
|
|||
self.assertEquals(len(console_handlers), 1)
|
||||
new_handler = console_handlers[0]
|
||||
self.assertNotEquals(new_handler, old_handler)
|
||||
reset_loggers()
|
||||
|
||||
def verify_under_pseudo_time(
|
||||
self, func, target_runtime_ms=1, *args, **kwargs):
|
||||
|
@ -2704,6 +2727,33 @@ cluster_dfw1 = http://dfw1.host/v1/
|
|||
utils.get_hmac('GET', '/path', 1, 'abc'),
|
||||
'b17f6ff8da0e251737aa9e3ee69a881e3e092e2f')
|
||||
|
||||
def test_get_policy_index(self):
|
||||
# Account has no information about a policy
|
||||
req = Request.blank(
|
||||
'/sda1/p/a',
|
||||
environ={'REQUEST_METHOD': 'GET'})
|
||||
res = Response()
|
||||
self.assertEquals(None, utils.get_policy_index(req.headers,
|
||||
res.headers))
|
||||
|
||||
# The policy of a container can be specified by the response header
|
||||
req = Request.blank(
|
||||
'/sda1/p/a/c',
|
||||
environ={'REQUEST_METHOD': 'GET'})
|
||||
res = Response(headers={'X-Backend-Storage-Policy-Index': '1'})
|
||||
self.assertEquals('1', utils.get_policy_index(req.headers,
|
||||
res.headers))
|
||||
|
||||
# The policy of an object to be created can be specified by the request
|
||||
# header
|
||||
req = Request.blank(
|
||||
'/sda1/p/a/c/o',
|
||||
environ={'REQUEST_METHOD': 'PUT'},
|
||||
headers={'X-Backend-Storage-Policy-Index': '2'})
|
||||
res = Response()
|
||||
self.assertEquals('2', utils.get_policy_index(req.headers,
|
||||
res.headers))
|
||||
|
||||
def test_get_log_line(self):
|
||||
req = Request.blank(
|
||||
'/sda1/p/a/c/o',
|
||||
|
@ -2713,7 +2763,7 @@ cluster_dfw1 = http://dfw1.host/v1/
|
|||
additional_info = 'some information'
|
||||
server_pid = 1234
|
||||
exp_line = '1.2.3.4 - - [01/Jan/1970:02:46:41 +0000] "HEAD ' \
|
||||
'/sda1/p/a/c/o" 200 - "-" "-" "-" 1.2000 "some information" 1234'
|
||||
'/sda1/p/a/c/o" 200 - "-" "-" "-" 1.2000 "some information" 1234 -'
|
||||
with mock.patch(
|
||||
'time.gmtime',
|
||||
mock.MagicMock(side_effect=[time.gmtime(10001.0)])):
|
||||
|
@ -2935,9 +2985,9 @@ class TestSwiftInfo(unittest.TestCase):
|
|||
utils._swift_info = {'swift': {'foo': 'bar'},
|
||||
'cap1': cap1}
|
||||
# expect no exceptions
|
||||
info = utils.get_swift_info(disallowed_sections=
|
||||
['cap2.cap1_foo', 'cap1.no_match',
|
||||
'cap1.cap1_foo.no_match.no_match'])
|
||||
info = utils.get_swift_info(
|
||||
disallowed_sections=['cap2.cap1_foo', 'cap1.no_match',
|
||||
'cap1.cap1_foo.no_match.no_match'])
|
||||
self.assertEquals(info['cap1'], cap1)
|
||||
|
||||
|
||||
|
@@ -3666,19 +3716,21 @@ class TestStatsdLoggingDelegation(unittest.TestCase):
self.assertEquals('\xef\xbf\xbd\xef\xbf\xbd\xec\xbc\x9d\xef\xbf\xbd',
                  utils.get_valid_utf8_str(invalid_utf8_str))

@reset_logger_state
def test_thread_locals(self):
logger = utils.get_logger(None)
orig_thread_locals = logger.thread_locals
try:
self.assertEquals(logger.thread_locals, (None, None))
logger.txn_id = '1234'
logger.client_ip = '1.2.3.4'
self.assertEquals(logger.thread_locals, ('1234', '1.2.3.4'))
logger.txn_id = '5678'
logger.client_ip = '5.6.7.8'
self.assertEquals(logger.thread_locals, ('5678', '5.6.7.8'))
finally:
logger.thread_locals = orig_thread_locals
# test the setter
logger.thread_locals = ('id', 'ip')
self.assertEquals(logger.thread_locals, ('id', 'ip'))
# reset
logger.thread_locals = (None, None)
self.assertEquals(logger.thread_locals, (None, None))
logger.txn_id = '1234'
logger.client_ip = '1.2.3.4'
self.assertEquals(logger.thread_locals, ('1234', '1.2.3.4'))
logger.txn_id = '5678'
logger.client_ip = '5.6.7.8'
self.assertEquals(logger.thread_locals, ('5678', '5.6.7.8'))

def test_no_fdatasync(self):
called = []
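The getter/setter behaviour this test exercises can be sketched with `threading.local`; the class name below is hypothetical and Swift's real logger adapter differs in detail:

```python
import threading


class LoggerWithThreadLocals:
    """Sketch: expose per-thread (txn_id, client_ip) as one tuple property."""

    def __init__(self):
        self._local = threading.local()

    @property
    def thread_locals(self):
        # missing attributes read as None, matching the (None, None) default
        return (getattr(self._local, 'txn_id', None),
                getattr(self._local, 'client_ip', None))

    @thread_locals.setter
    def thread_locals(self, value):
        self._local.txn_id, self._local.client_ip = value


logger = LoggerWithThreadLocals()
assert logger.thread_locals == (None, None)
logger.thread_locals = ('id', 'ip')      # the setter the test covers
assert logger.thread_locals == ('id', 'ip')
logger.thread_locals = (None, None)      # reset
assert logger.thread_locals == (None, None)
```

Each thread that sets `txn_id`/`client_ip` sees only its own values, which is why the test can mutate and reset them freely.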
@@ -386,6 +386,8 @@ class TestContainerController(unittest.TestCase):
policy.idx})
resp = req.get_response(self.controller)
self.assertEquals(resp.status_int, 201)
self.assertEquals(resp.headers.get('X-Backend-Storage-Policy-Index'),
                  str(policy.idx))

# now make sure we read it back
req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'GET'})

@@ -399,6 +401,8 @@ class TestContainerController(unittest.TestCase):
headers={'X-Timestamp': Timestamp(1).internal})
resp = req.get_response(self.controller)
self.assertEquals(resp.status_int, 201)
self.assertEquals(resp.headers.get('X-Backend-Storage-Policy-Index'),
                  str(POLICIES.default.idx))

# now make sure the default was used (pol 1)
req = Request.blank('/sda1/p/a/c', environ={'REQUEST_METHOD': 'GET'})

@@ -414,6 +418,7 @@ class TestContainerController(unittest.TestCase):
resp = req.get_response(self.controller)
# make sure we get a bad response
self.assertEquals(resp.status_int, 400)
self.assertNotIn('X-Backend-Storage-Policy-Index', resp.headers)

def test_PUT_no_policy_change(self):
ts = (Timestamp(t).internal for t in itertools.count(time.time()))

@@ -471,6 +476,9 @@ class TestContainerController(unittest.TestCase):
})
resp = req.get_response(self.controller)
self.assertEquals(resp.status_int, 409)
self.assertEquals(
    resp.headers.get('X-Backend-Storage-Policy-Index'),
    str(policy.idx))

# and make sure there is no change!
req = Request.blank('/sda1/p/a/c')

@@ -2595,7 +2603,7 @@ class TestContainerController(unittest.TestCase):
self.assertEqual(
    self.controller.logger.log_dict['info'],
    [(('1.2.3.4 - - [01/Jan/1970:02:46:41 +0000] "HEAD /sda1/p/a/c" '
'404 - "-" "-" "-" 2.0000 "-" 1234',), {})])
'404 - "-" "-" "-" 2.0000 "-" 1234 0',), {})])

@patch_policies([
@@ -22,13 +22,15 @@ import cPickle as pickle
import time
import tempfile
from contextlib import contextmanager, closing
from errno import ENOENT, ENOTEMPTY, ENOTDIR

from eventlet.green import subprocess
from eventlet import Timeout, tpool

from test.unit import FakeLogger, patch_policies
from swift.common import utils
from swift.common.utils import hash_path, mkdirs, normalize_timestamp
from swift.common.utils import hash_path, mkdirs, normalize_timestamp, \
    storage_directory
from swift.common import ring
from swift.obj import diskfile, replicator as object_replicator
from swift.common.storage_policy import StoragePolicy, POLICIES
@@ -84,9 +86,20 @@ class MockProcess(object):
def __init__(self, *args, **kwargs):
targs = MockProcess.check_args.next()
for targ in targs:
if targ not in args[0]:
process_errors.append("Invalid: %s not in %s" % (targ,
                                                 args))
# Allow more than 2 candidate targs
# (e.g. a case where either node is fine when nodes are shuffled)
if isinstance(targ, tuple):
    allowed = False
    for target in targ:
        if target in args[0]:
            allowed = True
    if not allowed:
        process_errors.append("Invalid: %s not in %s" % (targ,
                                                         args))
else:
    if targ not in args[0]:
        process_errors.append("Invalid: %s not in %s" % (targ,
                                                         args))
self.stdout = self.Stream()

def wait(self):
@@ -112,14 +125,19 @@ def _create_test_rings(path):
[2, 3, 0, 1, 6, 4, 5],
]
intended_devs = [
{'id': 0, 'device': 'sda', 'zone': 0, 'ip': '127.0.0.0', 'port': 6000},
{'id': 1, 'device': 'sda', 'zone': 1, 'ip': '127.0.0.1', 'port': 6000},
{'id': 2, 'device': 'sda', 'zone': 2, 'ip': '127.0.0.2', 'port': 6000},
{'id': 3, 'device': 'sda', 'zone': 4, 'ip': '127.0.0.3', 'port': 6000},
{'id': 4, 'device': 'sda', 'zone': 5, 'ip': '127.0.0.4', 'port': 6000},
{'id': 0, 'device': 'sda', 'zone': 0,
 'region': 1, 'ip': '127.0.0.0', 'port': 6000},
{'id': 1, 'device': 'sda', 'zone': 1,
 'region': 2, 'ip': '127.0.0.1', 'port': 6000},
{'id': 2, 'device': 'sda', 'zone': 2,
 'region': 1, 'ip': '127.0.0.2', 'port': 6000},
{'id': 3, 'device': 'sda', 'zone': 4,
 'region': 2, 'ip': '127.0.0.3', 'port': 6000},
{'id': 4, 'device': 'sda', 'zone': 5,
 'region': 1, 'ip': '127.0.0.4', 'port': 6000},
{'id': 5, 'device': 'sda', 'zone': 6,
 'ip': 'fe80::202:b3ff:fe1e:8329', 'port': 6000},
{'id': 6, 'device': 'sda', 'zone': 7,
 'region': 2, 'ip': 'fe80::202:b3ff:fe1e:8329', 'port': 6000},
{'id': 6, 'device': 'sda', 'zone': 7, 'region': 1,
 'ip': '2001:0db8:85a3:0000:0000:8a2e:0370:7334', 'port': 6000},
]
intended_part_shift = 30
@@ -200,10 +218,11 @@ class TestObjectReplicator(unittest.TestCase):
nodes = [node for node in
         ring.get_part_nodes(int(cur_part))
         if node['ip'] not in _ips()]
rsync_mods = tuple(['%s::object/sda/objects/%s' %
                    (node['ip'], cur_part) for node in nodes])
for node in nodes:
    rsync_mod = '%s::object/sda/objects/%s' % (node['ip'], cur_part)
    process_arg_checker.append(
        (0, '', ['rsync', whole_path_from, rsync_mod]))
        (0, '', ['rsync', whole_path_from, rsync_mods]))
with _mock_process(process_arg_checker):
    replicator.run_once()
self.assertFalse(process_errors)

@@ -233,10 +252,11 @@ class TestObjectReplicator(unittest.TestCase):
nodes = [node for node in
         ring.get_part_nodes(int(cur_part))
         if node['ip'] not in _ips()]
rsync_mods = tuple(['%s::object/sda/objects-1/%s' %
                    (node['ip'], cur_part) for node in nodes])
for node in nodes:
    rsync_mod = '%s::object/sda/objects-1/%s' % (node['ip'], cur_part)
    process_arg_checker.append(
        (0, '', ['rsync', whole_path_from, rsync_mod]))
        (0, '', ['rsync', whole_path_from, rsync_mods]))
with _mock_process(process_arg_checker):
    replicator.run_once()
self.assertFalse(process_errors)
@@ -530,6 +550,40 @@ class TestObjectReplicator(unittest.TestCase):
# The file should still exist
self.assertTrue(os.access(part_path, os.F_OK))

def test_delete_partition_with_handoff_delete_fail_in_other_region(self):
with mock.patch('swift.obj.replicator.http_connect',
                mock_http_connect(200)):
df = self.df_mgr.get_diskfile('sda', '1', 'a', 'c', 'o')
mkdirs(df._datadir)
f = open(os.path.join(df._datadir,
                      normalize_timestamp(time.time()) + '.data'),
         'wb')
f.write('1234567890')
f.close()
ohash = hash_path('a', 'c', 'o')
data_dir = ohash[-3:]
whole_path_from = os.path.join(self.objects, '1', data_dir)
part_path = os.path.join(self.objects, '1')
self.assertTrue(os.access(part_path, os.F_OK))
ring = self.replicator.get_object_ring(0)
nodes = [node for node in
         ring.get_part_nodes(1)
         if node['ip'] not in _ips()]
process_arg_checker = []
for node in nodes:
    rsync_mod = '%s::object/sda/objects/%s' % (node['ip'], 1)
    if node['region'] != 1:
        # make the rsync calls for the other region fail
        ret_code = 1
    else:
        ret_code = 0
    process_arg_checker.append(
        (ret_code, '', ['rsync', whole_path_from, rsync_mod]))
with _mock_process(process_arg_checker):
    self.replicator.replicate()
# The file should still exist
self.assertTrue(os.access(part_path, os.F_OK))

def test_delete_partition_override_params(self):
df = self.df_mgr.get_diskfile('sda', '0', 'a', 'c', 'o')
mkdirs(df._datadir)
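The rule this test asserts can be condensed into a small predicate (a hypothetical helper, not the replicator's actual code): a handoff partition may only be reclaimed once every peer, including peers in remote regions, has been synced successfully.

```python
def can_delete_handoff(sync_results):
    """sync_results: list of (succeeded, node_label) pairs, one per peer.

    Sketch of the rule the test asserts: if the rsync/ssync to any peer
    failed (here, the peer in the other region), the local handoff copy
    must be kept for a later replication pass.
    """
    return all(succeeded for succeeded, _node in sync_results)


# a failing other-region peer keeps the partition alive
assert can_delete_handoff([(True, 'r1-node'), (False, 'r2-node')]) is False
assert can_delete_handoff([(True, 'r1-node'), (True, 'r2-node')]) is True
```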
@@ -564,6 +618,190 @@ class TestObjectReplicator(unittest.TestCase):
self.assertFalse(os.access(pol1_part_path, os.F_OK))
self.assertTrue(os.access(pol0_part_path, os.F_OK))

def test_delete_partition_ssync(self):
with mock.patch('swift.obj.replicator.http_connect',
                mock_http_connect(200)):
df = self.df_mgr.get_diskfile('sda', '1', 'a', 'c', 'o')
mkdirs(df._datadir)
f = open(os.path.join(df._datadir,
                      normalize_timestamp(time.time()) + '.data'),
         'wb')
f.write('0')
f.close()
ohash = hash_path('a', 'c', 'o')
whole_path_from = storage_directory(self.objects, 1, ohash)
suffix_dir_path = os.path.dirname(whole_path_from)
part_path = os.path.join(self.objects, '1')
self.assertTrue(os.access(part_path, os.F_OK))

self.call_nums = 0
self.conf['sync_method'] = 'ssync'

def _fake_ssync(node, job, suffixes, **kwargs):
    success = True
    ret_val = [whole_path_from]
    if self.call_nums == 2:
        # ssync should return (True, []) only when the second
        # candidate node has not yet got the replica
        success = False
        ret_val = []
    self.call_nums += 1
    return success, set(ret_val)

self.replicator.sync_method = _fake_ssync
self.replicator.replicate()
# The file should still exist
self.assertTrue(os.access(whole_path_from, os.F_OK))
self.assertTrue(os.access(suffix_dir_path, os.F_OK))
self.assertTrue(os.access(part_path, os.F_OK))
self.replicator.replicate()
# The file should be deleted at the second replicate call
self.assertFalse(os.access(whole_path_from, os.F_OK))
self.assertFalse(os.access(suffix_dir_path, os.F_OK))
self.assertTrue(os.access(part_path, os.F_OK))
self.replicator.replicate()
# The partition should be deleted at the third replicate call
self.assertFalse(os.access(whole_path_from, os.F_OK))
self.assertFalse(os.access(suffix_dir_path, os.F_OK))
self.assertFalse(os.access(part_path, os.F_OK))
del self.call_nums

def test_delete_partition_ssync_with_sync_failure(self):
with mock.patch('swift.obj.replicator.http_connect',
                mock_http_connect(200)):
df = self.df_mgr.get_diskfile('sda', '1', 'a', 'c', 'o')
mkdirs(df._datadir)
f = open(os.path.join(df._datadir,
                      normalize_timestamp(time.time()) + '.data'),
         'wb')
f.write('0')
f.close()
ohash = hash_path('a', 'c', 'o')
whole_path_from = storage_directory(self.objects, 1, ohash)
suffix_dir_path = os.path.dirname(whole_path_from)
part_path = os.path.join(self.objects, '1')
self.assertTrue(os.access(part_path, os.F_OK))
self.call_nums = 0
self.conf['sync_method'] = 'ssync'

def _fake_ssync(node, job, suffixes, **kwargs):
    success = False
    ret_val = []
    if self.call_nums == 2:
        # ssync should return (True, []) only when the second
        # candidate node has not yet got the replica
        success = True
        ret_val = [whole_path_from]
    self.call_nums += 1
    return success, set(ret_val)

self.replicator.sync_method = _fake_ssync
self.replicator.replicate()
# The file should still exist
self.assertTrue(os.access(whole_path_from, os.F_OK))
self.assertTrue(os.access(suffix_dir_path, os.F_OK))
self.assertTrue(os.access(part_path, os.F_OK))
self.replicator.replicate()
# The file should still exist
self.assertTrue(os.access(whole_path_from, os.F_OK))
self.assertTrue(os.access(suffix_dir_path, os.F_OK))
self.assertTrue(os.access(part_path, os.F_OK))
self.replicator.replicate()
# The file should still exist
self.assertTrue(os.access(whole_path_from, os.F_OK))
self.assertTrue(os.access(suffix_dir_path, os.F_OK))
self.assertTrue(os.access(part_path, os.F_OK))
del self.call_nums

def test_delete_partition_ssync_with_cleanup_failure(self):
with mock.patch('swift.obj.replicator.http_connect',
                mock_http_connect(200)):
self.replicator.logger = mock_logger = mock.MagicMock()
df = self.df_mgr.get_diskfile('sda', '1', 'a', 'c', 'o')
mkdirs(df._datadir)
f = open(os.path.join(df._datadir,
                      normalize_timestamp(time.time()) + '.data'),
         'wb')
f.write('0')
f.close()
ohash = hash_path('a', 'c', 'o')
whole_path_from = storage_directory(self.objects, 1, ohash)
suffix_dir_path = os.path.dirname(whole_path_from)
part_path = os.path.join(self.objects, '1')
self.assertTrue(os.access(part_path, os.F_OK))

self.call_nums = 0
self.conf['sync_method'] = 'ssync'

def _fake_ssync(node, job, suffixes, **kwargs):
    success = True
    ret_val = [whole_path_from]
    if self.call_nums == 2:
        # ssync should return (True, []) only when the second
        # candidate node has not yet got the replica
        success = False
        ret_val = []
    self.call_nums += 1
    return success, set(ret_val)

rmdir_func = os.rmdir

def raise_exception_rmdir(exception_class, error_no):
    instance = exception_class()
    instance.errno = error_no

    def func(directory):
        if directory == suffix_dir_path:
            raise instance
        else:
            rmdir_func(directory)

    return func

self.replicator.sync_method = _fake_ssync
self.replicator.replicate()
# The file should still exist
self.assertTrue(os.access(whole_path_from, os.F_OK))
self.assertTrue(os.access(suffix_dir_path, os.F_OK))
self.assertTrue(os.access(part_path, os.F_OK))

# Fail with ENOENT
with mock.patch('os.rmdir',
                raise_exception_rmdir(OSError, ENOENT)):
    self.replicator.replicate()
self.assertEquals(mock_logger.exception.call_count, 0)
self.assertFalse(os.access(whole_path_from, os.F_OK))
self.assertTrue(os.access(suffix_dir_path, os.F_OK))
self.assertTrue(os.access(part_path, os.F_OK))

# Fail with ENOTEMPTY
with mock.patch('os.rmdir',
                raise_exception_rmdir(OSError, ENOTEMPTY)):
    self.replicator.replicate()
self.assertEquals(mock_logger.exception.call_count, 0)
self.assertFalse(os.access(whole_path_from, os.F_OK))
self.assertTrue(os.access(suffix_dir_path, os.F_OK))
self.assertTrue(os.access(part_path, os.F_OK))

# Fail with ENOTDIR
with mock.patch('os.rmdir',
                raise_exception_rmdir(OSError, ENOTDIR)):
    self.replicator.replicate()
self.assertEquals(mock_logger.exception.call_count, 1)
self.assertFalse(os.access(whole_path_from, os.F_OK))
self.assertTrue(os.access(suffix_dir_path, os.F_OK))
self.assertTrue(os.access(part_path, os.F_OK))

# Finally we can clean up everything
self.replicator.replicate()
self.assertFalse(os.access(whole_path_from, os.F_OK))
self.assertFalse(os.access(suffix_dir_path, os.F_OK))
self.assertTrue(os.access(part_path, os.F_OK))
self.replicator.replicate()
self.assertFalse(os.access(whole_path_from, os.F_OK))
self.assertFalse(os.access(suffix_dir_path, os.F_OK))
self.assertFalse(os.access(part_path, os.F_OK))

def test_run_once_recover_from_failure(self):
conf = dict(swift_dir=self.testdir, devices=self.devices,
            mount_check='false', timeout='300', stats_interval='1')
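The errno-specific expectations above (no logged exception for ENOENT and ENOTEMPTY, one for ENOTDIR) amount to a cleanup policy: "already gone" and "still has entries" are expected races when removing suffix directories, while anything else is a real problem. The helper below is a sketch of that policy under assumed names, not the replicator's code:

```python
import errno


def remove_suffix_dir(path, rmdir, log_exception):
    """Try to remove an empty suffix directory; swallow the expected
    races (ENOENT, ENOTEMPTY) and log any other OSError."""
    try:
        rmdir(path)
    except OSError as err:
        if err.errno not in (errno.ENOENT, errno.ENOTEMPTY):
            log_exception(err)


def raising_rmdir(error_no):
    # build an rmdir stand-in that always fails with the given errno
    def rmdir(_path):
        raise OSError(error_no, 'boom')
    return rmdir


logged = []
remove_suffix_dir('suffix', raising_rmdir(errno.ENOENT), logged.append)
remove_suffix_dir('suffix', raising_rmdir(errno.ENOTEMPTY), logged.append)
assert len(logged) == 0    # mirrors exception.call_count == 0 above
remove_suffix_dir('suffix', raising_rmdir(errno.ENOTDIR), logged.append)
assert len(logged) == 1    # mirrors exception.call_count == 1 above
```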
@@ -781,7 +1019,8 @@ class TestObjectReplicator(unittest.TestCase):
resp.read.return_value = pickle.dumps({'a83': 'c130a2c17ed45102a'
                                              'ada0f4eee69494ff'})
set_default(self)
self.replicator.sync = fake_func = mock.MagicMock()
self.replicator.sync = fake_func = \
    mock.MagicMock(return_value=(True, []))
self.replicator.update(local_job)
reqs = []
for node in local_job['nodes']:

@@ -792,6 +1031,26 @@ class TestObjectReplicator(unittest.TestCase):
self.assertEquals(self.replicator.suffix_sync, 2)
self.assertEquals(self.replicator.suffix_hash, 1)
self.assertEquals(self.replicator.suffix_count, 1)

# Efficient Replication Case
set_default(self)
self.replicator.sync = fake_func = \
    mock.MagicMock(return_value=(True, []))
all_jobs = self.replicator.collect_jobs()
job = None
for tmp in all_jobs:
    if tmp['partition'] == '3':
        job = tmp
        break
# The candidate nodes to replicate (i.e. dev1 and dev3)
# belong to another region
self.replicator.update(job)
self.assertEquals(fake_func.call_count, 1)
self.assertEquals(self.replicator.replication_count, 1)
self.assertEquals(self.replicator.suffix_sync, 1)
self.assertEquals(self.replicator.suffix_hash, 1)
self.assertEquals(self.replicator.suffix_count, 1)

mock_http.reset_mock()
mock_logger.reset_mock()
@@ -4168,7 +4168,7 @@ class TestObjectController(unittest.TestCase):
self.object_controller.logger.log_dict['info'],
[(('None - - [01/Jan/1970:02:46:41 +0000] "PUT'
' /sda1/p/a/c/o" 405 - "-" "-" "-" 1.0000 "-"'
' 1234',),
' 1234 -',),
{})])

def test_call_incorrect_replication_method(self):

@@ -4350,7 +4350,7 @@ class TestObjectController(unittest.TestCase):
self.assertEqual(
    self.object_controller.logger.log_dict['info'],
    [(('1.2.3.4 - - [01/Jan/1970:02:46:41 +0000] "HEAD /sda1/p/a/c/o" '
'404 - "-" "-" "-" 2.0000 "-" 1234',), {})])
'404 - "-" "-" "-" 2.0000 "-" 1234 -',), {})])

@patch_policies([storage_policy.StoragePolicy(0, 'zero', True),
                 storage_policy.StoragePolicy(1, 'one', False)])
@@ -137,7 +137,9 @@ class TestSender(unittest.TestCase):
job = dict(partition='9')
self.sender = ssync_sender.Sender(self.replicator, node, job, None)
self.sender.suffixes = ['abc']
self.assertFalse(self.sender())
success, candidates = self.sender()
self.assertFalse(success)
self.assertEquals(candidates, set())
call = self.replicator.logger.error.mock_calls[0]
self.assertEqual(
    call[1][:-1], ('%s:%s/%s/%s %s', '1.2.3.4', 5678, 'sda1', '9'))

@@ -154,7 +156,9 @@ class TestSender(unittest.TestCase):
job = dict(partition='9')
self.sender = ssync_sender.Sender(self.replicator, node, job, None)
self.sender.suffixes = ['abc']
self.assertFalse(self.sender())
success, candidates = self.sender()
self.assertFalse(success)
self.assertEquals(candidates, set())
call = self.replicator.logger.error.mock_calls[0]
self.assertEqual(
    call[1][:-1], ('%s:%s/%s/%s %s', '1.2.3.4', 5678, 'sda1', '9'))

@@ -167,7 +171,9 @@ class TestSender(unittest.TestCase):
self.sender = ssync_sender.Sender(self.replicator, node, job, None)
self.sender.suffixes = ['abc']
self.sender.connect = 'cause exception'
self.assertFalse(self.sender())
success, candidates = self.sender()
self.assertFalse(success)
self.assertEquals(candidates, set())
call = self.replicator.logger.exception.mock_calls[0]
self.assertEqual(
    call[1],

@@ -181,7 +187,9 @@ class TestSender(unittest.TestCase):
self.sender = ssync_sender.Sender(self.replicator, node, job, None)
self.sender.suffixes = ['abc']
self.sender.connect = 'cause exception'
self.assertFalse(self.sender())
success, candidates = self.sender()
self.assertFalse(success)
self.assertEquals(candidates, set())
self.replicator.logger.exception.assert_called_once_with(
    'EXCEPTION in replication.Sender')

@@ -191,7 +199,9 @@ class TestSender(unittest.TestCase):
self.sender.missing_check = mock.MagicMock()
self.sender.updates = mock.MagicMock()
self.sender.disconnect = mock.MagicMock()
self.assertTrue(self.sender())
success, candidates = self.sender()
self.assertTrue(success)
self.assertEquals(candidates, set())
self.sender.connect.assert_called_once_with()
self.sender.missing_check.assert_called_once_with()
self.sender.updates.assert_called_once_with()

@@ -204,7 +214,9 @@ class TestSender(unittest.TestCase):
self.sender.updates = mock.MagicMock()
self.sender.disconnect = mock.MagicMock()
self.sender.failures = 1
self.assertFalse(self.sender())
success, candidates = self.sender()
self.assertFalse(success)
self.assertEquals(candidates, set())
self.sender.connect.assert_called_once_with()
self.sender.missing_check.assert_called_once_with()
self.sender.updates.assert_called_once_with()
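The pattern repeated through these hunks reflects one interface change: `Sender.__call__` now returns a `(success, candidates)` pair instead of a bare boolean, where `candidates` is the set of object hashes believed to be in sync on the remote node after the exchange. A minimal sketch of that contract (hypothetical function, not the real `Sender`):

```python
def sender_call(available_set, exchange_ok):
    """On success, report the hashes now in sync so the caller can decide
    which suffixes (or a whole handoff partition) are safe to clean up;
    on any failure, report success=False with an empty candidate set."""
    if not exchange_ok:
        return False, set()
    return True, set(available_set)


ok, candidates = sender_call(['9d41d8cd98f00b204e9800998ecf0abc'], True)
assert ok and candidates == {'9d41d8cd98f00b204e9800998ecf0abc'}
assert sender_call(['9d41d8cd98f00b204e9800998ecf0abc'], False) == (False, set())
```

Returning an empty set on failure (rather than a partial one) keeps the caller conservative: nothing is reclaimed unless the whole exchange succeeded.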
@@ -243,6 +255,94 @@ class TestSender(unittest.TestCase):
method_name, mock_method.mock_calls,
expected_calls))

def test_call_and_missing_check(self):
def yield_hashes(device, partition, policy_index, suffixes=None):
    if device == 'dev' and partition == '9' and suffixes == ['abc'] \
            and policy_index == 0:
        yield (
            '/srv/node/dev/objects/9/abc/'
            '9d41d8cd98f00b204e9800998ecf0abc',
            '9d41d8cd98f00b204e9800998ecf0abc',
            '1380144470.00000')
    else:
        raise Exception(
            'No match for %r %r %r' % (device, partition, suffixes))

self.sender.connection = FakeConnection()
self.sender.job = {'device': 'dev', 'partition': '9'}
self.sender.suffixes = ['abc']
self.sender.response = FakeResponse(
    chunk_body=(
        ':MISSING_CHECK: START\r\n'
        '9d41d8cd98f00b204e9800998ecf0abc\r\n'
        ':MISSING_CHECK: END\r\n'))
self.sender.daemon._diskfile_mgr.yield_hashes = yield_hashes
self.sender.connect = mock.MagicMock()
self.sender.updates = mock.MagicMock()
self.sender.disconnect = mock.MagicMock()
success, candidates = self.sender()
self.assertTrue(success)
self.assertEqual(candidates, set(['9d41d8cd98f00b204e9800998ecf0abc']))
self.assertEqual(self.sender.failures, 0)

def test_call_and_missing_check_with_obj_list(self):
def yield_hashes(device, partition, policy_index, suffixes=None):
    if device == 'dev' and partition == '9' and suffixes == ['abc'] \
            and policy_index == 0:
        yield (
            '/srv/node/dev/objects/9/abc/'
            '9d41d8cd98f00b204e9800998ecf0abc',
            '9d41d8cd98f00b204e9800998ecf0abc',
            '1380144470.00000')
    else:
        raise Exception(
            'No match for %r %r %r' % (device, partition, suffixes))
job = {'device': 'dev', 'partition': '9'}
self.sender = ssync_sender.Sender(self.replicator, None, job, ['abc'],
                                  ['9d41d8cd98f00b204e9800998ecf0abc'])
self.sender.connection = FakeConnection()
self.sender.response = FakeResponse(
    chunk_body=(
        ':MISSING_CHECK: START\r\n'
        ':MISSING_CHECK: END\r\n'))
self.sender.daemon._diskfile_mgr.yield_hashes = yield_hashes
self.sender.connect = mock.MagicMock()
self.sender.updates = mock.MagicMock()
self.sender.disconnect = mock.MagicMock()
success, candidates = self.sender()
self.assertTrue(success)
self.assertEqual(candidates, set(['9d41d8cd98f00b204e9800998ecf0abc']))
self.assertEqual(self.sender.failures, 0)

def test_call_and_missing_check_with_obj_list_but_required(self):
def yield_hashes(device, partition, policy_index, suffixes=None):
    if device == 'dev' and partition == '9' and suffixes == ['abc'] \
            and policy_index == 0:
        yield (
            '/srv/node/dev/objects/9/abc/'
            '9d41d8cd98f00b204e9800998ecf0abc',
            '9d41d8cd98f00b204e9800998ecf0abc',
            '1380144470.00000')
    else:
        raise Exception(
            'No match for %r %r %r' % (device, partition, suffixes))
job = {'device': 'dev', 'partition': '9'}
self.sender = ssync_sender.Sender(self.replicator, None, job, ['abc'],
                                  ['9d41d8cd98f00b204e9800998ecf0abc'])
self.sender.connection = FakeConnection()
self.sender.response = FakeResponse(
    chunk_body=(
        ':MISSING_CHECK: START\r\n'
        '9d41d8cd98f00b204e9800998ecf0abc\r\n'
        ':MISSING_CHECK: END\r\n'))
self.sender.daemon._diskfile_mgr.yield_hashes = yield_hashes
self.sender.connect = mock.MagicMock()
self.sender.updates = mock.MagicMock()
self.sender.disconnect = mock.MagicMock()
success, candidates = self.sender()
self.assertTrue(success)
self.assertEqual(candidates, set())

def test_connect_send_timeout(self):
self.replicator.conn_timeout = 0.01
node = dict(replication_ip='1.2.3.4', replication_port=5678,
@@ -257,7 +357,9 @@ class TestSender(unittest.TestCase):
with mock.patch.object(
        ssync_sender.bufferedhttp.BufferedHTTPConnection,
        'putrequest', putrequest):
    self.assertFalse(self.sender())
    success, candidates = self.sender()
    self.assertFalse(success)
    self.assertEquals(candidates, set())
call = self.replicator.logger.error.mock_calls[0]
self.assertEqual(
    call[1][:-1], ('%s:%s/%s/%s %s', '1.2.3.4', 5678, 'sda1', '9'))

@@ -279,7 +381,9 @@ class TestSender(unittest.TestCase):
with mock.patch.object(
        ssync_sender.bufferedhttp, 'BufferedHTTPConnection',
        FakeBufferedHTTPConnection):
    self.assertFalse(self.sender())
    success, candidates = self.sender()
    self.assertFalse(success)
    self.assertEquals(candidates, set())
call = self.replicator.logger.error.mock_calls[0]
self.assertEqual(
    call[1][:-1], ('%s:%s/%s/%s %s', '1.2.3.4', 5678, 'sda1', '9'))

@@ -302,7 +406,9 @@ class TestSender(unittest.TestCase):
with mock.patch.object(
        ssync_sender.bufferedhttp, 'BufferedHTTPConnection',
        FakeBufferedHTTPConnection):
    self.assertFalse(self.sender())
    success, candidates = self.sender()
    self.assertFalse(success)
    self.assertEquals(candidates, set())
call = self.replicator.logger.error.mock_calls[0]
self.assertEqual(
    call[1][:-1], ('%s:%s/%s/%s %s', '1.2.3.4', 5678, 'sda1', '9'))

@@ -389,6 +495,7 @@ class TestSender(unittest.TestCase):
'17\r\n:MISSING_CHECK: START\r\n\r\n'
'15\r\n:MISSING_CHECK: END\r\n\r\n')
self.assertEqual(self.sender.send_list, [])
self.assertEqual(self.sender.available_set, set())

def test_missing_check_has_suffixes(self):
def yield_hashes(device, partition, policy_idx, suffixes=None):

@@ -431,6 +538,10 @@ class TestSender(unittest.TestCase):
'33\r\n9d41d8cd98f00b204e9800998ecf1def 1380144474.44444\r\n\r\n'
'15\r\n:MISSING_CHECK: END\r\n\r\n')
self.assertEqual(self.sender.send_list, [])
candidates = ['9d41d8cd98f00b204e9800998ecf0abc',
              '9d41d8cd98f00b204e9800998ecf0def',
              '9d41d8cd98f00b204e9800998ecf1def']
self.assertEqual(self.sender.available_set, set(candidates))

def test_missing_check_far_end_disconnect(self):
def yield_hashes(device, partition, policy_idx, suffixes=None):

@@ -462,6 +573,8 @@ class TestSender(unittest.TestCase):
'17\r\n:MISSING_CHECK: START\r\n\r\n'
'33\r\n9d41d8cd98f00b204e9800998ecf0abc 1380144470.00000\r\n\r\n'
'15\r\n:MISSING_CHECK: END\r\n\r\n')
self.assertEqual(self.sender.available_set,
                 set(['9d41d8cd98f00b204e9800998ecf0abc']))

def test_missing_check_far_end_disconnect2(self):
def yield_hashes(device, partition, policy_idx, suffixes=None):

@@ -494,6 +607,8 @@ class TestSender(unittest.TestCase):
'17\r\n:MISSING_CHECK: START\r\n\r\n'
'33\r\n9d41d8cd98f00b204e9800998ecf0abc 1380144470.00000\r\n\r\n'
'15\r\n:MISSING_CHECK: END\r\n\r\n')
self.assertEqual(self.sender.available_set,
                 set(['9d41d8cd98f00b204e9800998ecf0abc']))

def test_missing_check_far_end_unexpected(self):
def yield_hashes(device, partition, policy_idx, suffixes=None):

@@ -525,6 +640,8 @@ class TestSender(unittest.TestCase):
'17\r\n:MISSING_CHECK: START\r\n\r\n'
'33\r\n9d41d8cd98f00b204e9800998ecf0abc 1380144470.00000\r\n\r\n'
'15\r\n:MISSING_CHECK: END\r\n\r\n')
self.assertEqual(self.sender.available_set,
                 set(['9d41d8cd98f00b204e9800998ecf0abc']))

def test_missing_check_send_list(self):
def yield_hashes(device, partition, policy_idx, suffixes=None):

@@ -556,6 +673,8 @@ class TestSender(unittest.TestCase):
'33\r\n9d41d8cd98f00b204e9800998ecf0abc 1380144470.00000\r\n\r\n'
'15\r\n:MISSING_CHECK: END\r\n\r\n')
self.assertEqual(self.sender.send_list, ['0123abc'])
self.assertEqual(self.sender.available_set,
                 set(['9d41d8cd98f00b204e9800998ecf0abc']))

def test_updates_timeout(self):
self.sender.connection = FakeConnection()
@@ -643,6 +762,33 @@ class TestSender(unittest.TestCase):
'11\r\n:UPDATES: START\r\n\r\n'
'f\r\n:UPDATES: END\r\n\r\n')

def test_update_send_delete(self):
device = 'dev'
part = '9'
object_parts = ('a', 'c', 'o')
df = self._make_open_diskfile(device, part, *object_parts)
object_hash = utils.hash_path(*object_parts)
delete_timestamp = utils.normalize_timestamp(time.time())
df.delete(delete_timestamp)
self.sender.connection = FakeConnection()
self.sender.job = {'device': device, 'partition': part}
self.sender.node = {}
self.sender.send_list = [object_hash]
self.sender.response = FakeResponse(
    chunk_body=(
        ':UPDATES: START\r\n'
        ':UPDATES: END\r\n'))
self.sender.updates()
self.assertEqual(
    ''.join(self.sender.connection.sent),
    '11\r\n:UPDATES: START\r\n\r\n'
    '30\r\n'
    'DELETE /a/c/o\r\n'
    'X-Timestamp: %s\r\n\r\n\r\n'
    'f\r\n:UPDATES: END\r\n\r\n'
    % delete_timestamp
)

def test_updates_put(self):
device = 'dev'
part = '9'

@@ -818,14 +964,16 @@ class TestSender(unittest.TestCase):
self.sender.daemon.node_timeout = 0.01
exc = None
try:
self.sender.send_delete('/a/c/o', '1381679759.90941')
self.sender.send_delete('/a/c/o',
                        utils.Timestamp('1381679759.90941'))
except exceptions.MessageTimeout as err:
    exc = err
self.assertEqual(str(exc), '0.01 seconds: send_delete')

def test_send_delete(self):
self.sender.connection = FakeConnection()
self.sender.send_delete('/a/c/o', '1381679759.90941')
self.sender.send_delete('/a/c/o',
                        utils.Timestamp('1381679759.90941'))
self.assertEqual(
    ''.join(self.sender.connection.sent),
    '30\r\n'
@@ -523,6 +523,15 @@ class TestFuncs(unittest.TestCase):
        self.assertEquals(resp['meta']['whatevs'], 14)
        self.assertEquals(resp['meta']['somethingelse'], 0)

+    def test_headers_to_object_info_sys_meta(self):
+        prefix = get_sys_meta_prefix('object')
+        headers = {'%sWhatevs' % prefix: 14,
+                   '%ssomethingelse' % prefix: 0}
+        resp = headers_to_object_info(headers.items(), 200)
+        self.assertEquals(len(resp['sysmeta']), 2)
+        self.assertEquals(resp['sysmeta']['whatevs'], 14)
+        self.assertEquals(resp['sysmeta']['somethingelse'], 0)
+
    def test_headers_to_object_info_values(self):
        headers = {
            'content-length': '1024',
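The new test checks that system metadata headers are grouped under a `sysmeta` sub-dict. A toy version of that grouping, assuming the object sysmeta prefix is `x-object-sysmeta-` (the suffix after the prefix becomes the key, lower-cased); this is illustrative, not the proxy's full `headers_to_object_info`:

```python
def headers_to_object_info(headers, status_int=200,
                           prefix='x-object-sysmeta-'):
    # collect '<prefix><name>' headers under info['sysmeta'][<name>]
    info = {'status': status_int, 'sysmeta': {}}
    for key, value in headers:
        key = key.lower()
        if key.startswith(prefix):
            info['sysmeta'][key[len(prefix):]] = value
    return info

resp = headers_to_object_info(
    [('X-Object-Sysmeta-Whatevs', 14),
     ('x-object-sysmeta-somethingelse', 0)])
print(resp['sysmeta'])  # {'whatevs': 14, 'somethingelse': 0}
```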
@@ -5345,13 +5345,15 @@ class TestContainerController(unittest.TestCase):

    def test_transfer_headers(self):
        src_headers = {'x-remove-versions-location': 'x',
-                      'x-container-read': '*:user'}
+                      'x-container-read': '*:user',
+                      'x-remove-container-sync-key': 'x'}
        dst_headers = {'x-versions-location': 'backup'}
        controller = swift.proxy.controllers.ContainerController(self.app,
                                                                 'a', 'c')
        controller.transfer_headers(src_headers, dst_headers)
        expected_headers = {'x-versions-location': '',
-                           'x-container-read': '*:user'}
+                           'x-container-read': '*:user',
+                           'x-container-sync-key': ''}
        self.assertEqual(dst_headers, expected_headers)

    def assert_status_map(self, method, statuses, expected,
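The convention this test exercises: an `x-remove-<name>` header in the source clears `x-<name>` in the destination by sending it with an empty value. A hedged sketch of that behavior (the real `transfer_headers` also filters against a pass-through list, which is omitted here):

```python
def transfer_headers(src_headers, dst_headers):
    for key, value in src_headers.items():
        if key.startswith('x-remove-'):
            # 'x-remove-<name>' clears 'x-<name>' via an empty value
            dst_headers[key.replace('-remove-', '-', 1)] = ''
        else:
            dst_headers[key] = value

dst = {'x-versions-location': 'backup'}
transfer_headers({'x-remove-versions-location': 'x',
                  'x-container-read': '*:user',
                  'x-remove-container-sync-key': 'x'}, dst)
print(dst)
# {'x-versions-location': '', 'x-container-read': '*:user',
#  'x-container-sync-key': ''}
```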
@@ -6078,6 +6080,78 @@ class TestContainerController(unittest.TestCase):
            controller.HEAD(req)
        self.assert_(called[0])

+    def test_unauthorized_requests_when_account_not_found(self):
+        # verify unauthorized container requests always return response
+        # from swift.authorize
+        called = [0, 0]
+
+        def authorize(req):
+            called[0] += 1
+            return HTTPUnauthorized(request=req)
+
+        def account_info(*args):
+            called[1] += 1
+            return None, None, None
+
+        def _do_test(method):
+            with save_globals():
+                swift.proxy.controllers.Controller.account_info = account_info
+                app = proxy_server.Application(None, FakeMemcache(),
+                                               account_ring=FakeRing(),
+                                               container_ring=FakeRing())
+                set_http_connect(201, 201, 201)
+                req = Request.blank('/v1/a/c', {'REQUEST_METHOD': method})
+                req.environ['swift.authorize'] = authorize
+                self.app.update_request(req)
+                res = app.handle_request(req)
+                return res
+
+        for method in ('PUT', 'POST', 'DELETE'):
+            # no delay_denial on method, expect one call to authorize
+            called = [0, 0]
+            res = _do_test(method)
+            self.assertEqual(401, res.status_int)
+            self.assertEqual([1, 0], called)
+
+        for method in ('HEAD', 'GET'):
+            # delay_denial on method, expect two calls to authorize
+            called = [0, 0]
+            res = _do_test(method)
+            self.assertEqual(401, res.status_int)
+            self.assertEqual([2, 1], called)
+
+    def test_authorized_requests_when_account_not_found(self):
+        # verify authorized container requests always return 404 when
+        # account not found
+        called = [0, 0]
+
+        def authorize(req):
+            called[0] += 1
+
+        def account_info(*args):
+            called[1] += 1
+            return None, None, None
+
+        def _do_test(method):
+            with save_globals():
+                swift.proxy.controllers.Controller.account_info = account_info
+                app = proxy_server.Application(None, FakeMemcache(),
+                                               account_ring=FakeRing(),
+                                               container_ring=FakeRing())
+                set_http_connect(201, 201, 201)
+                req = Request.blank('/v1/a/c', {'REQUEST_METHOD': method})
+                req.environ['swift.authorize'] = authorize
+                self.app.update_request(req)
+                res = app.handle_request(req)
+                return res
+
+        for method in ('PUT', 'POST', 'DELETE', 'HEAD', 'GET'):
+            # expect one call to authorize
+            called = [0, 0]
+            res = _do_test(method)
+            self.assertEqual(404, res.status_int)
+            self.assertEqual([1, 1], called)
+
    def test_OPTIONS_get_info_drops_origin(self):
        with save_globals():
            controller = proxy_server.ContainerController(self.app, 'a', 'c')
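The call counts these tests assert come from the proxy's delay_denial behavior: GET/HEAD defer the denial from `swift.authorize` until backend info has been fetched, then authorize again, while PUT/POST/DELETE deny immediately. A rough model of that control flow (assumed structure for illustration, not the proxy's actual code):

```python
def handle_request(method):
    called = [0, 0]  # [authorize calls, account_info calls]

    def authorize():
        called[0] += 1
        return 'denied'

    def account_info():
        called[1] += 1
        return None  # account not found

    if method in ('GET', 'HEAD'):
        # delay_denial: note the denial but fetch backend info first,
        # then re-run authorize before building the final response
        authorize()
        account_info()
        response = authorize()
    else:
        # PUT/POST/DELETE deny up front, before touching the account
        response = authorize()
    return response, called

print(handle_request('PUT')[1])  # [1, 0]
print(handle_request('GET')[1])  # [2, 1]
```

This is why the unauthorized-request test expects `[1, 0]` for write verbs and `[2, 1]` for reads.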