Merge remote-tracking branch 'origin/master' into feature/hummingbird

Change-Id: Ib57213c11cbe7b844681bc2e15fa96b647f2030d
Clay Gerrard 2016-08-30 09:59:45 -07:00
commit 1ab2a296f5
265 changed files with 20008 additions and 3407 deletions


@@ -1,5 +1,8 @@
#!/bin/bash
# How-To debug functional tests:
# SWIFT_TEST_IN_PROCESS=1 tox -e func -- --pdb test.functional.tests.TestFile.testCopy
SRC_DIR=$(python -c "import os; print(os.path.dirname(os.path.realpath('$0')))")
set -e


@@ -106,3 +106,6 @@ Brian Cline <bcline@softlayer.com> <brian.cline@gmail.com>
Dharmendra Kushwaha <dharmendra.kushwaha@nectechnologies.in>
Zhang Guoqing <zhang.guoqing@99cloud.net>
Kato Tomoyuki <kato.tomoyuki@jp.fujitsu.com>
Liang Jingtao <liang.jingtao@zte.com.cn>
Yu Yafei <yu.yafei@zte.com.cn>
Zheng Yao <zheng.yao1@zte.com.cn>

AUTHORS

@@ -123,6 +123,7 @@ Andreas Jaeger (aj@suse.de)
Shri Javadekar (shrinand@maginatics.com)
Iryoung Jeong (iryoung@gmail.com)
Paul Jimenez (pj@place.org)
Liang Jingtao (liang.jingtao@zte.com.cn)
Zhang Jinnan (ben.os@99cloud.net)
Jason Johnson (jajohnson@softlayer.com)
Brian K. Jones (bkjones@gmail.com)
@@ -133,19 +134,21 @@ Takashi Kajinami (kajinamit@nttdata.co.jp)
Matt Kassawara (mkassawara@gmail.com)
Morita Kazutaka (morita.kazutaka@gmail.com)
Josh Kearney (josh@jk0.org)
Ben Keller (bjkeller@us.ibm.com)
Bryan Keller (kellerbr@us.ibm.com)
Ilya Kharin (ikharin@mirantis.com)
Dae S. Kim (dae@velatum.com)
Nathan Kinder (nkinder@redhat.com)
Eugene Kirpichov (ekirpichov@gmail.com)
Leah Klearman (lklrmn@gmail.com)
Martin Kletzander (mkletzan@redhat.com)
Jaivish Kothari (jaivish.kothari@nectechnologies.in)
Petr Kovar (pkovar@redhat.com)
Steve Kowalik (steven@wedontsleep.org)
Sergey Kraynev (skraynev@mirantis.com)
Sushil Kumar (sushil.kumar2@globallogic.com)
Madhuri Kumari (madhuri.rai07@gmail.com)
Yatin Kumbhare (yatinkumbhare@gmail.com)
Dharmendra Kushwaha (dharmendra.kushwaha@nectechnologies.in)
Hugo Kuo (tonytkdk@gmail.com)
Tin Lam (tl3438@att.com)
@@ -172,6 +175,7 @@ Zhongyue Luo (zhongyue.nah@intel.com)
Paul Luse (paul.e.luse@intel.com)
Christopher MacGown (chris@pistoncloud.com)
Ganesh Maharaj Mahalingam (ganesh.mahalingam@intel.com)
Maria Malyarova (savoreux69@gmail.com)
Dragos Manolescu (dragosm@hp.com)
Ben Martin (blmartin@us.ibm.com)
Steve Martinelli (stevemar@ca.ibm.com)
@@ -193,6 +197,7 @@ Jola Mirecka (jola.mirecka@hp.com)
Kazuhiro Miyahara (miyahara.kazuhiro@lab.ntt.co.jp)
Alfredo Moralejo (amoralej@redhat.com)
Daisuke Morita (morita.daisuke@ntti3.com)
Mohit Motiani (mohit.motiani@intel.com)
Dirk Mueller (dirk@dmllr.de)
Takashi Natsume (natsume.takashi@lab.ntt.co.jp)
Russ Nelson (russ@crynwr.com)
@@ -207,6 +212,7 @@ Timothy Okwii (tokwii@cisco.com)
Matthew Oliver (matt@oliver.net.au)
Hisashi Osanai (osanai.hisashi@jp.fujitsu.com)
Eamonn O'Toole (eamonn.otoole@hpe.com)
Or Ozeri (oro@il.ibm.com)
James Page (james.page@ubuntu.com)
Prashanth Pai (ppai@redhat.com)
Venkateswarlu Pallamala (p.venkatesh551@gmail.com)
@@ -263,6 +269,7 @@ Tobias Stevenson (tstevenson@vbridges.com)
Victor Stinner (vstinner@redhat.com)
Akihito Takai (takaiak@nttdata.co.jp)
Pearl Yajing Tan (pearl.y.tan@seagate.com)
Nandini Tata (nandini.tata.15@gmail.com)
Yuriy Taraday (yorik.sar@gmail.com)
Monty Taylor (mordred@inaugust.com)
Caleb Tennis (caleb.tennis@gmail.com)
@@ -294,6 +301,8 @@ Andrew Welleck (awellec@us.ibm.com)
Wu Wenxiang (wu.wenxiang@99cloud.net)
Cory Wright (cory.wright@rackspace.com)
Ye Jia Xu (xyj.asmy@gmail.com)
Yu Yafei (yu.yafei@zte.com.cn)
Zheng Yao (zheng.yao1@zte.com.cn)
Alex Yang (alex890714@gmail.com)
Lin Yang (lin.a.yang@intel.com)
Yee (mail.zhang.yee@gmail.com)


@@ -1,3 +1,34 @@
swift (2.9.0)
* Swift now supports at-rest encryption. This feature encrypts all
object data and user-set object metadata as it is sent to the cluster.
This feature is designed to prevent information leaks if a hard drive
leaves the cluster. The encryption is transparent to the end-user.
At-rest encryption in Swift is enabled on the proxy server by
adding two middlewares to the pipeline. The `keymaster` middleware
is responsible for managing the encryption keys and the `encryption`
middleware does the actual encryption and decryption.
Existing clusters will continue to work without enabling
encryption. Although enabling this feature on existing clusters
is supported, best practice is to enable this feature on new
clusters when the cluster is created.
For more information on the details of the at-rest encryption
feature, please see the docs at
http://docs.openstack.org/developer/swift/overview_encryption.html.
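Enabling the feature described above amounts to adding the two middlewares to the proxy pipeline; a minimal `proxy-server.conf` sketch (pipeline contents beyond the two encryption filters are illustrative and will vary by deployment):

```ini
[pipeline:main]
# keymaster must appear before encryption in the pipeline
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache keymaster encryption proxy-logging proxy-server

[filter:keymaster]
use = egg:swift#keymaster
# root secret from which per-object keys are derived (deployment-specific value)
encryption_root_secret = your_secret_here

[filter:encryption]
use = egg:swift#encryption
```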
* `swift-recon` can now be called with more than one server type.
* Fixed a bug where non-ASCII names could cause an error in logging,
resulting in a 5xx response to the client.
* The install guide and API reference have been moved into Swift's
source code repository.
* Various other minor bug fixes and improvements.
swift (2.8.0)
* Allow concurrent bulk deletes for server-side deletes of static


@@ -111,7 +111,7 @@ For Deployers
Deployer docs are also available at
http://docs.openstack.org/developer/swift/. A good starting point is at
http://docs.openstack.org/developer/swift/deployment\_guide.html
http://docs.openstack.org/developer/swift/deployment_guide.html
There is an `ops runbook <http://docs.openstack.org/developer/swift/ops_runbook/>`__
that gives information about how to diagnose and troubleshoot common issues

api-ref/source/conf.py Normal file

@@ -0,0 +1,246 @@
# -*- coding: utf-8 -*-
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# swift documentation build configuration file
#
# This file is execfile()d with the current directory set to
# its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import os
from swift import __version__
import subprocess
import sys
import warnings
# TODO(Graham Hayes): Remove the following block of code when os-api-ref is
# using openstackdocstheme
import os_api_ref
if getattr(os_api_ref, 'THEME', 'olsosphinx') == 'openstackdocstheme':
    # We are on the new version with openstackdocstheme support
    extensions = [
        'os_api_ref',
    ]
    import openstackdocstheme  # noqa
    html_theme = 'openstackdocs'
    html_theme_path = [openstackdocstheme.get_html_theme_path()]
    html_theme_options = {
        "sidebar_mode": "toc",
    }
else:
    # We are on the old version without openstackdocstheme support
    extensions = [
        'os_api_ref',
        'oslosphinx',
    ]
# End temporary block
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath('../../'))
sys.path.insert(0, os.path.abspath('../'))
sys.path.insert(0, os.path.abspath('./'))
# -- General configuration ----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#
# source_encoding = 'utf-8'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Object Storage API Reference'
copyright = u'2010-present, OpenStack Foundation'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = __version__.rsplit('.', 1)[0]
# The full version, including alpha/beta/rc tags.
release = __version__
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
# Else, today_fmt is used as the format for a strftime call.
# today_fmt = '%B %d, %Y'
# The reST default role (used for this markup: `text`) to use
# for all documents.
# default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
# add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = False
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# -- Options for man page output ----------------------------------------------
# Grouping the document tree for man pages.
# List of tuples 'sourcefile', 'target', u'title', u'Authors name', 'manual'
# -- Options for HTML output --------------------------------------------------
# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
# html_theme_path = ["."]
# html_theme = '_theme'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
# html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
# html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
# html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
# html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
# html_static_path = ['_static']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
# html_last_updated_fmt = '%b %d, %Y'
git_cmd = ["git", "log", "--pretty=format:'%ad, commit %h'", "--date=local",
           "-n1"]
try:
    html_last_updated_fmt = subprocess.Popen(
        git_cmd, stdout=subprocess.PIPE).communicate()[0]
except OSError:
    warnings.warn('Cannot get last updated time from git repository. '
                  'Not setting "html_last_updated_fmt".')
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}
# If false, no module index is generated.
# html_use_modindex = True
# If false, no index is generated.
# html_use_index = True
# If true, the index is split into individual pages for each letter.
# html_split_index = False
# If true, links to the reST sources are added to the pages.
# html_show_sourcelink = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''
# If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = ''
# Output file base name for HTML help builder.
htmlhelp_basename = 'swiftdoc'
# -- Options for LaTeX output -------------------------------------------------
# The paper size ('letter' or 'a4').
# latex_paper_size = 'letter'
# The font size ('10pt', '11pt' or '12pt').
# latex_font_size = '10pt'
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
    ('index', 'swift.tex', u'OpenStack Object Storage API Documentation',
     u'OpenStack Foundation', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False
# Additional stuff for the LaTeX preamble.
# latex_preamble = ''
# Documents to append as an appendix to all manuals.
# latex_appendices = []
# If false, no module index is generated.
# latex_use_modindex = True

api-ref/source/index.rst Normal file

@@ -0,0 +1,13 @@
:tocdepth: 2

===================
Object Storage API
===================

.. rest_expand_all::

.. include:: storage-account-services.inc
.. include:: storage_endpoints.inc
.. include:: storage-object-services.inc
.. include:: storage-container-services.inc
.. include:: storage_info.inc


@@ -0,0 +1,973 @@
# variables in header
Accept:
description: |
Instead of using the ``format`` query parameter,
set this header to ``application/json``, ``application/xml``, or
``text/xml``.
in: header
required: false
type: string
Accept-Ranges:
description: |
The type of ranges that the object accepts.
in: header
required: true
type: string
Content-Disposition:
description: |
If set, specifies the override behavior for the
browser. For example, this header might specify that the browser
use a download program to save this file rather than show the
file, which is the default.
in: header
required: false
type: string
Content-Disposition_1:
description: |
If set, specifies the override behavior for the
browser. For example, this header might specify that the browser
use a download program to save this file rather than show the
file, which is the default. If not set, this header is not
returned by this operation.
in: header
required: false
type: string
Content-Encoding:
description: |
If set, the value of the ``Content-Encoding``
metadata.
in: header
required: false
type: string
Content-Encoding_1:
description: |
If set, the value of the ``Content-Encoding``
metadata. If not set, the operation does not return this header.
in: header
required: false
type: string
Content-Length:
description: |
If the operation succeeds, this value is zero
(0). If the operation fails, this value is the length of the error
text in the response body.
in: header
required: true
type: string
Content-Length_1:
description: |
Set to the length of the object content. Do not
set if chunked transfer encoding is being used.
in: header
required: false
type: integer
Content-Length_2:
description: |
The length of the response body that contains the
list of names. If the operation fails, this value is the length of
the error text in the response body.
in: header
required: true
type: string
Content-Length_3:
description: |
HEAD operations do not return content. The
``Content-Length`` header value is not the size of the response
body but is the size of the object, in bytes.
in: header
required: true
type: string
Content-Length_4:
description: |
The length of the object content in the response
body, in bytes.
in: header
required: true
type: string
Content-Type:
description: |
Changes the MIME type for the object.
in: header
required: false
type: string
Content-Type_1:
description: |
If the operation fails, this value is the MIME
type of the error text in the response body.
in: header
required: true
type: string
Content-Type_2:
description: |
The MIME type of the object.
in: header
required: true
type: string
Content-Type_3:
description: |
The MIME type of the list of names. If the
operation fails, this value is the MIME type of the error text in
the response body.
in: header
required: true
type: string
Date:
description: |
The transaction date and time.
The date and time stamp format is `ISO 8601
<https://en.wikipedia.org/wiki/ISO_8601>`_:
::
CCYY-MM-DDThh:mm:ss±hh:mm
For example, ``2015-08-27T09:49:58-05:00``.
The ``±hh:mm`` value, if included, is the time zone as an offset
from UTC. In the previous example, the offset value is ``-05:00``.
in: header
required: true
type: string
Destination:
description: |
The container and object name of the destination
object in the form of ``/container/object``. You must UTF-8-encode
and then URL-encode the names of the destination container and
object before you include them in this header.
in: header
required: true
type: string
ETag:
description: |
The MD5 checksum of the copied object content.
The value is not quoted.
in: header
required: true
type: string
ETag_1:
description: |
The MD5 checksum value of the request body. For
example, the MD5 checksum value of the object content. You are
strongly recommended to compute the MD5 checksum value of object
content and include it in the request. This enables the Object
Storage API to check the integrity of the upload. The value is not
quoted.
in: header
required: false
type: string
ETag_2:
description: |
For objects smaller than 5 GB, this value is the MD5 checksum of
the object content, and the value is not quoted. For manifest
objects, this value is the MD5 checksum of the concatenated string
of MD5 checksums and ETags for each of the segments in the
manifest, not the MD5 checksum of the content that was downloaded;
in this case the value is enclosed in double-quote characters. You
are strongly recommended to compute the MD5 checksum of the
response body as it is received and compare this value with the
one in the ETag header. If they differ, the content was corrupted,
so retry the operation.
in: header
required: true
type: string
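The integrity check recommended above can be sketched in Python (the etag value shown is just the known MD5 of the sample body, not a real response header):

```python
import hashlib

def check_etag(body, etag_header):
    """Compare the MD5 of a downloaded body with the ETag header value.

    Manifest-object ETags are quoted, so strip surrounding quotes first.
    """
    expected = etag_header.strip('"')
    actual = hashlib.md5(body).hexdigest()
    return actual == expected

# e.g. a body whose MD5 checksum is known:
print(check_etag(b"hello", "5d41402abc4b2a76b9719d911017c592"))  # True
```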
If-Match:
description: |
See `Request for Comments: 2616
<http://www.ietf.org/rfc/rfc2616.txt>`_.
in: header
required: false
type: string
If-Modified-Since:
description: |
See `Request for Comments: 2616
<http://www.ietf.org/rfc/rfc2616.txt>`_.
in: header
required: false
type: string
If-None-Match:
description: |
In combination with ``Expect: 100-Continue``, specify an
``If-None-Match: *`` header to query whether the server already
has a copy of the object before any data is sent.
in: header
required: false
type: string
If-Unmodified-Since:
description: |
See `Request for Comments: 2616
<http://www.ietf.org/rfc/rfc2616.txt>`_.
in: header
required: false
type: string
Last-Modified:
description: |
The date and time when the object was created or its metadata was
changed.
The date and time stamp format is `ISO 8601
<https://en.wikipedia.org/wiki/ISO_8601>`_:
::
CCYY-MM-DDThh:mm:ss±hh:mm
For example, ``2015-08-27T09:49:58-05:00``.
The ``±hh:mm`` value, if included, is the time zone as an offset
from UTC. In the previous example, the offset value is ``-05:00``.
in: header
required: true
type: string
Range:
description: |
The ranges of content to get. You can use the ``Range`` header to
get portions of data by using one or more range specifications. To
specify many ranges, separate the range specifications with a
comma. The types of range specifications are:
- **Byte range specification**. Use FIRST_BYTE_OFFSET to specify
the start of the data range, and LAST_BYTE_OFFSET to specify the
end. If you omit the LAST_BYTE_OFFSET, it defaults to the offset
of the last byte of data.
- **Suffix byte range specification**. Use LENGTH to specify the
length of the data range, counted from the end of the data.
The following forms of the header specify the following ranges of
data:
- ``Range: bytes=-5``. The last five bytes.
- ``Range: bytes=10-15``. The six bytes of data after a 10-byte
offset.
- ``Range: bytes=10-15,-5``. A multi-part response that contains
the last five bytes and the six bytes of data after a 10-byte
offset. The ``Content-Type`` response header contains
``multipart/byteranges``.
- ``Range: bytes=4-6``. Bytes 4 to 6 inclusive.
- ``Range: bytes=2-2``. Byte 2, the third byte of the data.
- ``Range: bytes=6-``. Byte 6 and after.
- ``Range: bytes=1-3,2-5``. A multi-part response that contains
bytes 1 to 3 inclusive, and bytes 2 to 5 inclusive. The
``Content-Type`` response header contains ``multipart/byteranges``.
in: header
required: false
type: string
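The inclusive semantics of the range forms listed above can be checked with a quick sketch (illustrative Python, not Swift code; multi-part ranges are omitted):

```python
def byte_range(data, first=None, last=None, suffix=None):
    """Evaluate a single HTTP byte-range spec against a bytes object.

    first-last is inclusive on both ends; a suffix range returns the
    last `suffix` bytes; omitting `last` means "to the end".
    """
    if suffix is not None:           # Range: bytes=-N
        return data[-suffix:]
    if last is None:                 # Range: bytes=N-
        return data[first:]
    return data[first:last + 1]      # Range: bytes=N-M (inclusive)

data = b"0123456789ABCDEF"
print(byte_range(data, suffix=5))    # b'BCDEF'       (bytes=-5)
print(byte_range(data, 10, 15))      # b'ABCDEF'      (bytes=10-15)
print(byte_range(data, 4, 6))        # b'456'         (bytes=4-6)
print(byte_range(data, 2, 2))        # b'2'           (bytes=2-2)
print(byte_range(data, 6))           # b'6789ABCDEF'  (bytes=6-)
```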
Transfer-Encoding:
description: |
Set to ``chunked`` to enable chunked transfer
encoding. If used, do not set the ``Content-Length`` header to a
non-zero value.
in: header
required: false
type: string
X-Account-Bytes-Used:
description: |
The total number of bytes that are stored in
Object Storage for the account.
in: header
required: true
type: integer
X-Account-Container-Count:
description: |
The number of containers.
in: header
required: true
type: integer
X-Account-Meta-name:
description: |
The custom account metadata item, where ``{name}`` is the name of
the metadata item. One ``X-Account-Meta-{name}`` response header
appears for each metadata item (for each ``{name}``).
in: header
required: false
type: string
X-Account-Meta-name_1:
description: |
The account metadata. The ``{name}`` is the name of the metadata
item that you want to add, update, or delete. To delete this item,
send an empty value in this header. You must specify an
``X-Account-Meta-{name}`` header for each metadata item (for each
``{name}``) that you want to add, update, or delete.
in: header
required: false
type: string
X-Account-Meta-Temp-URL-Key:
description: |
The secret key value for temporary URLs. If not
set, this header is not returned in the response.
in: header
required: false
type: string
X-Account-Meta-Temp-URL-Key-2:
description: |
A second secret key value for temporary URLs. If
not set, this header is not returned in the response.
The second key enables you to rotate keys by having
two active keys at the same time.
in: header
required: false
type: string
X-Account-Object-Count:
description: |
The number of objects in the account.
in: header
required: true
type: integer
X-Auth-Token:
description: |
Authentication token. If you omit this header,
your request fails unless the account owner has granted you access
through an access control list (ACL).
in: header
required: false
type: string
X-Auth-Token_1:
description: |
Authentication token.
in: header
required: true
type: string
X-Container-Bytes-Used:
description: |
The total number of bytes used.
in: header
required: true
type: integer
X-Container-Meta-Access-Control-Allow-Origin:
description: |
Originating URLs allowed to make cross-origin
requests (CORS), separated by spaces. This heading applies to the
container only, and all objects within the container with this
header applied are CORS-enabled for the allowed origin URLs. A
browser (user-agent) typically issues a `preflighted request
<https://developer.mozilla.org/en-US/docs/HTTP/Access_control_CORS>`_,
which is an OPTIONS call
that verifies the origin is allowed to make the request. The
Object Storage service returns 200 if the originating URL is
listed in this header parameter, and issues a 401 if the
originating URL is not allowed to make a cross-origin request.
Once a 200 is returned, the browser makes a second request to the
Object Storage service to retrieve the CORS-enabled object.
in: header
required: false
type: string
X-Container-Meta-Access-Control-Expose-Headers:
description: |
Headers the Object Storage service exposes to the
browser (technically, through the ``user-agent`` setting), in the
request response, separated by spaces. By default the Object
Storage service returns the following values for this header: all
"simple response headers" as listed on
`http://www.w3.org/TR/cors/#simple-response-header
<http://www.w3.org/TR/cors/#simple-response-header>`_; the headers
``etag``, ``x-timestamp``, ``x-trans-id``; and all metadata headers
(``X-Container-Meta-*`` for containers and ``X-Object-Meta-*`` for
objects) listed in ``X-Container-Meta-Access-Control-Expose-Headers``.
in: header
required: false
type: string
X-Container-Meta-Access-Control-Max-Age:
description: |
Maximum time for the origin to hold the preflight
results. A browser may make an OPTIONS call to verify the origin
is allowed to make the request. Set the value to an integer number
of seconds after the time that the request was received.
in: header
required: false
type: string
X-Container-Meta-name:
description: |
The container metadata, where ``{name}`` is the name of the
metadata item. You must specify an ``X-Container-Meta-{name}``
header for each metadata item (for each ``{name}``) that you want
to add or update.
in: header
required: false
type: string
X-Container-Meta-name_1:
description: |
The custom container metadata item, where ``{name}`` is the name
of the metadata item. One ``X-Container-Meta-{name}`` response
header appears for each metadata item (for each ``{name}``).
in: header
required: true
type: string
X-Container-Meta-Quota-Bytes:
description: |
Sets maximum size of the container, in bytes.
Typically these values are set by an administrator. Returns a 413
response (request entity too large) when an object PUT operation
exceeds this quota value.
in: header
required: false
type: string
X-Container-Meta-Quota-Count:
description: |
Sets maximum object count of the container.
Typically these values are set by an administrator. Returns a 413
response (request entity too large) when an object PUT operation
exceeds this quota value.
in: header
required: false
type: string
X-Container-Meta-Temp-URL-Key:
description: |
The secret key value for temporary URLs.
in: header
required: false
type: string
X-Container-Meta-Temp-URL-Key-2:
description: |
A second secret key value for temporary URLs. The
second key enables you to rotate keys by having two active keys at
the same time.
in: header
required: false
type: string
X-Container-Meta-Web-Directory-Type:
description: |
Sets the content-type of directory marker
objects. If the header is not set, default is
``application/directory``. Directory marker objects are 0-byte
objects that represent directories to create a simulated
hierarchical structure. For example, if you set
``X-Container-Meta-Web-Directory-Type: text/directory``, Object
Storage treats 0-byte objects with a content-type of
``text/directory`` as directories rather than objects.
in: header
required: false
type: string
X-Container-Object-Count:
description: |
The number of objects.
in: header
required: true
type: integer
X-Container-Read:
description: |
Sets a container access control list (ACL) that grants read access.
Container ACLs are available on any Object Storage cluster, and are
enabled by container rather than by cluster.
To set the container read ACL:

.. code-block:: bash

   $ curl -X {PUT|POST} -i -H "X-Auth-Token: TOKEN" -H \
     "X-Container-Read: ACL" STORAGE_URL/CONTAINER

For example:

.. code-block:: bash

   $ curl -X PUT -i \
     -H "X-Auth-Token: 0101010101" \
     -H "X-Container-Read: .r:*" \
     http://swift.example.com/v1/AUTH_bob/read_container

In the command, specify the ACL in the ``X-Container-Read`` header,
as follows:

- ``.r:*`` All referrers.
- ``.r:example.com,swift.example.com`` Comma-separated list of
  referrers.
- ``.rlistings`` Container listing access.
- ``AUTH_username`` Access to a user who authenticates through a
  legacy or non-OpenStack-Identity-based authentication system.
- ``LDAP_`` Access to all users who authenticate through an
  LDAP-based legacy or non-OpenStack-Identity-based authentication
  system.
in: header
required: false
type: string
X-Container-Read_1:
description: |
The ACL that grants read access. If not set, this
header is not returned by this operation.
in: header
required: false
type: string
X-Container-Sync-Key:
description: |
Sets the secret key for container
synchronization. If you remove the secret key, synchronization is
halted.
in: header
required: false
type: string
X-Container-Sync-Key_1:
description: |
The secret key for container synchronization. If
not set, this header is not returned by this operation.
in: header
required: false
type: string
X-Container-Sync-To:
description: |
Sets the destination for container
synchronization. Used with the secret key indicated in the ``X
-Container-Sync-Key`` header. If you want to stop a container from
synchronizing, send a blank value for the ``X-Container-Sync-Key``
header.
in: header
required: false
type: string
X-Container-Sync-To_1:
description: |
The destination for container synchronization. If
not set, this header is not returned by this operation.
in: header
required: false
type: string
X-Container-Write:
description: |
Sets an ACL that grants write access.
in: header
required: false
type: string
X-Container-Write_1:
description: |
The ACL that grants write access. If not set,
this header is not returned by this operation.
in: header
required: false
type: string
X-Copied-From:
description: |
For a copied object, shows the container and
object name from which the new object was copied. The value is in
the ``{container}/{object}`` format.
in: header
required: false
type: string
X-Copied-From-Last-Modified:
description: |
For a copied object, the date and time in `UNIX
Epoch time stamp format
<https://en.wikipedia.org/wiki/Unix_time>`_ when the container and
object name from which the new object was copied was last
modified. For example, ``1440619048`` is equivalent to ``Wed, 26
Aug 2015 19:57:28 GMT``.
in: header
required: false
type: integer
X-Copy-From:
description: |
If set, this is the name of an object used to
create the new object by copying the ``X-Copy-From`` object. The
value is in the form ``{container}/{object}``. You must UTF-8-encode
and then URL-encode the names of the container and object before
you include them in the header. Using PUT with ``X-Copy-From``
has the same effect as using the COPY operation to copy an object.
Using ``Range`` header with ``X-Copy-From`` will create a new
partial copied object with bytes set by ``Range``.
in: header
required: false
type: string
X-Delete-After:
description: |
The number of seconds after which the system
removes the object. Internally, the Object Storage system stores
this value in the ``X-Delete-At`` metadata item.
in: header
required: false
type: integer
X-Delete-At:
description: |
The date and time in `UNIX Epoch time stamp
format <https://en.wikipedia.org/wiki/Unix_time>`_ when the system
removes the object. For example, ``1440619048`` is equivalent to
``Wed, 26 Aug 2015 19:57:28 GMT``.
in: header
required: false
type: integer
X-Delete-At_1:
description: |
If set, the date and time in `UNIX Epoch time
stamp format <https://en.wikipedia.org/wiki/Unix_time>`_ when the
system deletes the object. For example, ``1440619048`` is
equivalent to ``Wed, 26 Aug 2015 19:57:28 GMT``. If not set,
this operation does not return this header.
in: header
required: false
type: integer
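The epoch-to-date equivalence used in these descriptions can be checked directly (illustrative Python):

```python
from datetime import datetime, timezone

# 1440619048 is the example timestamp used in these descriptions
ts = datetime.fromtimestamp(1440619048, tz=timezone.utc)
print(ts.strftime('%a, %d %b %Y %H:%M:%S GMT'))
# Wed, 26 Aug 2015 19:57:28 GMT
```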
X-Detect-Content-Type:
description: |
If set to ``true``, Object Storage guesses the
content type based on the file extension and ignores the value
sent in the ``Content-Type`` header, if present.
in: header
required: false
type: boolean
X-Fresh-Metadata:
description: |
Enables object creation that omits existing user
metadata. If set to ``true``, the COPY request creates an object
without existing user metadata. Default value is ``false``.
in: header
required: false
type: boolean
X-Newest:
description: |
If set to ``true``, Object Storage queries all
replicas to return the most recent one. If you omit this header,
Object Storage responds faster after it finds one valid replica.
Because setting this header to true is more expensive for the back
end, use it only when it is absolutely needed.
in: header
required: false
type: boolean
X-Object-Manifest:
description: |
Set to specify that this is a dynamic large
object manifest object. The value is the container and object name
prefix of the segment objects in the form ``container/prefix``.
You must UTF-8-encode and then URL-encode the names of the
container and prefix before you include them in this header.
in: header
required: false
type: string
X-Object-Manifest_1:
description: |
If set, this is a dynamic large object
manifest object. The value is the container and object name prefix
of the segment objects in the form ``container/prefix``.
in: header
required: false
type: string
X-Object-Meta-name:
description: |
The object metadata, where ``{name}`` is the name
of the metadata item. You must specify an
``X-Object-Meta-{name}`` header for each metadata ``{name}`` item
that you want to add or update.
in: header
required: false
type: string
X-Object-Meta-name_1:
description: |
The custom object metadata item, where ``{name}``
is the name of the metadata item. One ``X-Object-Meta-{name}``
response header appears for each metadata ``{name}`` item.
in: header
required: true
type: string
X-Remove-Container-name:
description: |
Removes the metadata item named ``{name}``. For
example, ``X-Remove-Container-Read`` removes the
``X-Container-Read`` metadata item.
in: header
required: false
type: string
X-Remove-Versions-Location:
description: |
Set to any value to disable versioning.
in: header
required: false
type: string
X-Static-Large-Object:
description: |
Set to ``true`` if this object is a static large
object manifest object.
in: header
required: true
type: boolean
X-Timestamp:
description: |
The date and time in `UNIX Epoch time stamp
format <https://en.wikipedia.org/wiki/Unix_time>`_ when the
account, container, or object was initially created as a current
version. For example, ``1440619048`` is equivalent to
``Wed, 26 Aug 2015 19:57:28 GMT``.
in: header
required: true
type: integer
X-Trans-Id:
description: |
A unique transaction ID for this request. Your
service provider might need this value if you report a problem.
in: header
required: true
type: string
X-Trans-Id-Extra:
description: |
Extra transaction information. Use the
``X-Trans-Id-Extra`` request header to include extra information to help
you debug any errors that might occur with large object upload and
other Object Storage transactions. Object Storage appends the
first 32 characters of the ``X-Trans-Id-Extra`` request header
value to the transaction ID value in the generated ``X-Trans-Id``
response header. You must UTF-8-encode and then URL-encode the
extra transaction information before you include it in the
``X-Trans-Id-Extra`` request header. For example, you can include
extra transaction information when you upload `large objects
<http://docs.openstack.org/user-guide/cli_swift_large_object_creation.html>`_
such as images. When you upload each segment and the manifest,
include the same value in the ``X-Trans-Id-Extra`` request header.
If an error occurs, you can find all requests that are related to
the large object upload in the Object Storage logs. You can also
use ``X-Trans-Id-Extra`` strings to help operators debug requests
that fail to receive responses. The operator can search for the
extra information in the logs.
in: header
required: false
type: string
X-Versions-Location:
description: |
The URL-encoded UTF-8 representation of the container that stores
previous versions of objects. If not set, versioning is disabled
for this container. For more information about object versioning,
see `Object versioning
<http://docs.openstack.org/developer/swift/api/object_versioning.html>`_.
in: header
required: false
type: string
X-Versions-Mode:
description: |
The versioning mode for this container. The value must be either
``stack`` or ``history``. If not set, ``stack`` mode will be used.
This setting has no impact unless ``X-Versions-Location`` is set
for the container. For more information about object versioning,
see `Object versioning
<http://docs.openstack.org/developer/swift/api/object_versioning.html>`_.
in: header
required: false
type: string
# variables in path
account:
description: |
The unique name for the account. An account is
also known as the project or tenant.
in: path
required: false
type: string
container:
description: |
The unique name for the container. The container
name must be from 1 to 256 characters long and can start with any
character and contain any pattern. Character set must be UTF-8.
The container name cannot contain a slash (``/``) character
because this character delimits the container and object name. For
example, ``/account/container/object``.
in: path
required: false
type: string
object:
description: |
The unique name for the object.
in: path
required: false
type: string
# variables in query
delimiter:
description: |
Delimiter value, which returns the object names
that are nested in the container. If you do not set a prefix and
set the delimiter to ``/``, you may get unexpected results: all
the objects are returned instead of only those grouped by the
delimiter.
in: query
required: false
type: string
end_marker:
description: |
For a string value, ``x``, returns container names
that are less than the marker value.
in: query
required: false
type: string
filename:
description: |
Overrides the default file name. Object Storage
generates a default file name for GET temporary URLs that is based
on the object name. Object Storage returns this value in the
``Content-Disposition`` response header. Browsers can interpret
this file name value as a file attachment to save. For more
information about temporary URLs, see `Temporary URL middleware
<http://docs.openstack.org/developer/swift/api/temporary_url_middleware.html>`_.
in: query
required: false
type: string
format:
description: |
The response format. Valid values are ``json``,
``xml``, or ``plain``. The default is ``plain``. If you append
the ``format=xml`` or ``format=json`` query parameter to the
storage account URL, the response shows extended container
information serialized in that format. If you append the
``format=plain`` query parameter, the response lists the container
names separated by newlines.
in: query
required: false
type: string
limit:
description: |
For an integer value ``n``, limits the number of
results to ``n``.
in: query
required: false
type: integer
marker:
description: |
For a string value, ``x``, returns container names
that are greater than the marker value.
in: query
required: false
type: string
multipart-manifest:
description: |
If ``?multipart-manifest=put``, the object is a
static large object manifest and the body contains the manifest.
in: query
required: false
type: string
multipart-manifest_1:
description: |
If you include the ``multipart-manifest=delete``
query parameter and the object is a static large object, the
segment objects and manifest object are deleted. If you omit the
``multipart-manifest=delete`` query parameter and the object is a
static large object, the manifest object is deleted but the
segment objects are not deleted. For a bulk delete, the response
body looks the same as it does for a normal bulk delete. In
contrast, a plain object DELETE response has an empty body.
in: query
required: false
type: string
multipart-manifest_get:
description: |
If you include the ``multipart-manifest=get``
query parameter and the object is a large object, the object
contents are not returned. Instead, the manifest is returned in
the ``X-Object-Manifest`` response header for dynamic large
objects or in the response body for static large objects.
in: query
required: false
type: string
multipart-manifest_head:
description: |
If you include the ``multipart-manifest=get`` query parameter and the
object is a large object, the object metadata is not returned. Instead, the
response headers will include the manifest metadata and for dynamic large
objects the ``X-Object-Manifest`` response header.
in: query
required: false
type: string
path:
description: |
For a string value, returns the object names that
are nested in the pseudo path.
in: query
required: false
type: string
prefix:
description: |
Prefix value. Named items in the response begin
with this value.
in: query
required: false
type: string
swiftinfo_expires:
description: |
Filters the response by the expiration date and
time in `UNIX Epoch time stamp format
<https://en.wikipedia.org/wiki/Unix_time>`_. For example,
``1440619048`` is equivalent to ``Wed, 26 Aug 2015 19:57:28
GMT``.
in: query
required: false
type: integer
swiftinfo_sig:
description: |
A hash-based message authentication code (HMAC)
that enables access to administrator-only information. To use this
parameter, the ``swiftinfo_expires`` parameter is also required.
in: query
required: false
type: string
temp_url_expires:
description: |
The date and time in `UNIX Epoch time stamp
format <https://en.wikipedia.org/wiki/Unix_time>`_ when the
signature for temporary URLs expires. For example, ``1440619048``
is equivalent to ``Wed, 26 Aug 2015 19:57:28 GMT``. For more
information about temporary URLs, see `Temporary URL middleware
<http://docs.openstack.org/developer/swift/api/temporary_url_middleware.html>`_.
in: query
required: true
type: integer
temp_url_sig:
description: |
Used with temporary URLs to sign the request with
an HMAC-SHA1 cryptographic signature that defines the allowed HTTP
method, expiration date, full path to the object, and the secret
key for the temporary URL. For more information about temporary
URLs, see `Temporary URL middleware
<http://docs.openstack.org/developer/swift/api/temporary_url_middleware.html>`_.
in: query
required: true
type: string
# variables in body
bytes:
description: |
The total number of bytes that are stored in
Object Storage for the account.
in: body
required: true
type: integer
content_type:
description: |
The content type of the object.
in: body
required: true
type: string
count:
description: |
The number of objects in the container.
in: body
required: true
type: integer
hash:
description: |
The MD5 checksum value of the object content.
in: body
required: true
type: string
last_modified:
description: |
The date and time when the object was last modified.
The date and time stamp format is `ISO 8601
<https://en.wikipedia.org/wiki/ISO_8601>`_:
::
CCYY-MM-DDThh:mm:ss±hh:mm
For example, ``2015-08-27T09:49:58-05:00``.
The ``±hh:mm`` value, if included, is the time zone as an offset
from UTC. In the previous example, the offset value is ``-05:00``.
in: body
required: true
type: string
name:
description: |
The name of the container.
in: body
required: true
type: string

@ -0,0 +1 @@
curl -i https://23.253.72.207/v1/$account?format=json -X GET -H "X-Auth-Token: $token"

@ -0,0 +1,2 @@
curl -i https://23.253.72.207/v1/$account?format=xml \
-X GET -H "X-Auth-Token: $token"

@ -0,0 +1,11 @@
HTTP/1.1 200 OK
Content-Length: 96
X-Account-Object-Count: 1
X-Timestamp: 1389453423.35964
X-Account-Meta-Subject: Literature
X-Account-Bytes-Used: 14
X-Account-Container-Count: 2
Content-Type: application/json; charset=utf-8
Accept-Ranges: bytes
X-Trans-Id: tx274a77a8975c4a66aeb24-0052d95365
Date: Fri, 17 Jan 2014 15:59:33 GMT

@ -0,0 +1,11 @@
HTTP/1.1 200 OK
Content-Length: 262
X-Account-Object-Count: 1
X-Timestamp: 1389453423.35964
X-Account-Meta-Subject: Literature
X-Account-Bytes-Used: 14
X-Account-Container-Count: 2
Content-Type: application/xml; charset=utf-8
Accept-Ranges: bytes
X-Trans-Id: tx69f60bc9f7634a01988e6-0052d9544b
Date: Fri, 17 Jan 2014 16:03:23 GMT

@ -0,0 +1,12 @@
[
{
"count": 0,
"bytes": 0,
"name": "janeausten"
},
{
"count": 1,
"bytes": 14,
"name": "marktwain"
}
]

@ -0,0 +1,13 @@
<?xml version="1.0" encoding="UTF-8"?>
<account name="my_account">
<container>
<name>janeausten</name>
<count>0</count>
<bytes>0</bytes>
</container>
<container>
<name>marktwain</name>
<count>1</count>
<bytes>14</bytes>
</container>
</account>

@ -0,0 +1,7 @@
{
"swift": {
"version": "1.11.0"
},
"staticweb": {},
"tempurl": {}
}

@ -0,0 +1,3 @@
GET /{api_version}/{account} HTTP/1.1
Host: storage.swiftdrive.com
X-Auth-Token: eaaafd18-0fed-4b3a-81b4-663c99ec1cbb

@ -0,0 +1,9 @@
HTTP/1.1 200 OK
Date: Thu, 07 Jun 2010 18:57:07 GMT
Content-Type: text/plain; charset=UTF-8
Content-Length: 32
images
movies
documents
backups

@ -0,0 +1,14 @@
{
"endpoints": [
"http://storage01.swiftdrive.com:6008/d8/583/AUTH_dev/EC_cont1/obj",
"http://storage02.swiftdrive.com:6008/d2/583/AUTH_dev/EC_cont1/obj",
"http://storage02.swiftdrive.com:6006/d3/583/AUTH_dev/EC_cont1/obj",
"http://storage02.swiftdrive.com:6008/d5/583/AUTH_dev/EC_cont1/obj",
"http://storage01.swiftdrive.com:6007/d7/583/AUTH_dev/EC_cont1/obj",
"http://storage02.swiftdrive.com:6007/d4/583/AUTH_dev/EC_cont1/obj",
"http://storage01.swiftdrive.com:6006/d6/583/AUTH_dev/EC_cont1/obj"
],
"headers": {
"X-Backend-Storage-Policy-Index": "2"
}
}

@ -0,0 +1,8 @@
{
"endpoints": [
"http://storage02.swiftdrive:6002/d2/617/AUTH_dev",
"http://storage01.swiftdrive:6002/d8/617/AUTH_dev",
"http://storage01.swiftdrive:6002/d11/617/AUTH_dev"
],
"headers": {}
}

@ -0,0 +1 @@
Goodbye World!

@ -0,0 +1 @@
Hello World Again!

@ -0,0 +1,10 @@
HTTP/1.1 200 OK
Content-Length: 341
X-Container-Object-Count: 2
Accept-Ranges: bytes
X-Container-Meta-Book: TomSawyer
X-Timestamp: 1389727543.65372
X-Container-Bytes-Used: 26
Content-Type: application/json; charset=utf-8
X-Trans-Id: tx26377fe5fab74869825d1-0052d6bdff
Date: Wed, 15 Jan 2014 16:57:35 GMT

@ -0,0 +1,10 @@
HTTP/1.1 200 OK
Content-Length: 500
X-Container-Object-Count: 2
Accept-Ranges: bytes
X-Container-Meta-Book: TomSawyer
X-Timestamp: 1389727543.65372
X-Container-Bytes-Used: 26
Content-Type: application/xml; charset=utf-8
X-Trans-Id: txc75ea9a6e66f47d79e0c5-0052d6be76
Date: Wed, 15 Jan 2014 16:59:35 GMT

@ -0,0 +1,16 @@
[
{
"hash": "451e372e48e0f6b1114fa0724aa79fa1",
"last_modified": "2014-01-15T16:41:49.390270",
"bytes": 14,
"name": "goodbye",
"content_type": "application/octet-stream"
},
{
"hash": "ed076287532e86365e841e92bfc50d8c",
"last_modified": "2014-01-15T16:37:43.427570",
"bytes": 12,
"name": "helloworld",
"content_type": "application/octet-stream"
}
]

@ -0,0 +1,17 @@
<?xml version="1.0" encoding="UTF-8"?>
<container name="marktwain">
<object>
<name>goodbye</name>
<hash>451e372e48e0f6b1114fa0724aa79fa1</hash>
<bytes>14</bytes>
<content_type>application/octet-stream</content_type>
<last_modified>2014-01-15T16:41:49.390270</last_modified>
</object>
<object>
<name>helloworld</name>
<hash>ed076287532e86365e841e92bfc50d8c</hash>
<bytes>12</bytes>
<content_type>application/octet-stream</content_type>
<last_modified>2014-01-15T16:37:43.427570</last_modified>
</object>
</container>

@ -0,0 +1,380 @@
.. -*- rst -*-
========
Accounts
========
Lists containers for an account. Creates, updates, shows, and
deletes account metadata.
Account metadata operations work differently than container and
object metadata operations. Depending on the contents of your
POST account metadata request, the Object Storage API updates the
metadata in one of these ways:
**Account metadata operations**
+----------------------------------------------------------+---------------------------------------------------------------+
| POST request body contains | Description |
+----------------------------------------------------------+---------------------------------------------------------------+
| A metadata key without a value. | The API removes the metadata item from the account. |
| | |
| The metadata key already exists for the account. | |
+----------------------------------------------------------+---------------------------------------------------------------+
| A metadata key without a value. | The API ignores the metadata key. |
| | |
| The metadata key does not already exist for the account. | |
+----------------------------------------------------------+---------------------------------------------------------------+
| A metadata key value. | The API updates the metadata key value for the account. |
| | |
| The metadata key already exists for the account. | |
+----------------------------------------------------------+---------------------------------------------------------------+
| A metadata key value. | The API adds the metadata key and value pair, or item, to the |
| | account. |
| The metadata key does not already exist for the account. | |
+----------------------------------------------------------+---------------------------------------------------------------+
| One or more account metadata items are omitted. | The API does not change the existing metadata items. |
| | |
| The metadata items already exist for the account. | |
+----------------------------------------------------------+---------------------------------------------------------------+
For these requests, specifying the ``X-Remove-Account-Meta-*``
request header for the key with any value is equivalent to
specifying the ``X-Account-Meta-*`` request header with an empty
value.
Metadata keys must be treated as case-insensitive at all times.
These keys can contain ASCII 7-bit characters that are not control
(0-31) characters, DEL, or a separator character, according to
`HTTP/1.1 <http://www.w3.org/Protocols/rfc2616/rfc2616.html>`_.
Also, Object Storage does not support the underscore character,
which it silently converts to a hyphen.
The metadata values in Object Storage do not follow HTTP/1.1 rules
for character encodings. You must use a UTF-8 encoding to get a
byte array for any string that contains characters that are not in
the 7-bit ASCII 0-127 range. Otherwise, Object Storage returns the
404 response code for ISO-8859-1 characters in the 128-255 range,
which is a direct violation of the HTTP/1.1 `basic rules
<http://www.w3.org/Protocols/rfc2616/rfc2616-sec2.html#sec2.2>`_.
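The encoding rule above can be checked directly; a minimal Python sketch (the metadata value is a made-up example):

```python
# A metadata value containing a character outside the 7-bit ASCII range
# must be UTF-8 encoded before it is sent in a request header.
value = "Caf\u00e9"                 # "Café"; U+00E9 is outside 0-127

encoded = value.encode("utf-8")     # the byte array to put on the wire
assert encoded == b"Caf\xc3\xa9"    # "é" becomes two bytes under UTF-8

# Sending the single ISO-8859-1 byte 0xE9 instead is what triggers the
# 404 behavior described above.
```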
Show account details and list containers
========================================
.. rest_method:: GET /v1/{account}
Shows details for an account and lists containers, sorted by name, in the account.
The sort order for the name is based on a binary comparison, a
single built-in collating sequence that compares string data by
using the SQLite memcmp() function, regardless of text encoding.
See `Collating Sequences
<http://www.sqlite.org/datatype3.html#collation>`_.
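The binary collation can be reproduced client-side; a minimal Python sketch (byte-wise comparison of UTF-8 data, as ``memcmp()`` does, with made-up container names):

```python
names = ["movies", "Backups", "documents", "images"]

# Compare UTF-8 byte strings, not locale-aware text: uppercase ASCII
# sorts before all lowercase names, exactly as the listing returns them.
listing_order = sorted(names, key=lambda n: n.encode("utf-8"))

print(listing_order)  # ['Backups', 'documents', 'images', 'movies']
```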
Example requests and responses:
- Show account details and list containers and ask for a JSON
response:
::
curl -i $publicURL?format=json -X GET -H "X-Auth-Token: $token"
- List containers and ask for an XML response:
::
curl -i $publicURL?format=xml -X GET -H "X-Auth-Token: $token"
The response body returns a list of containers. The default
response (``text/plain``) returns one container per line.
If you use query parameters to page through a long list of
containers, you have reached the end of the list if the number of
items in the returned list is less than the request ``limit``
value. The list contains more items if the number of items in the
returned list equals the ``limit`` value.
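That end-of-list rule can drive a paging loop; a minimal Python sketch in which ``fetch_page`` is a hypothetical stand-in for a GET on the account URL with the ``marker`` and ``limit`` query parameters:

```python
def list_all_containers(fetch_page, limit=2):
    """Collect every container name by paging with marker/limit."""
    names, marker = [], None
    while True:
        page = fetch_page(marker, limit)
        names.extend(page)
        if len(page) < limit:   # short (or empty) page: end of the list
            return names
        marker = page[-1]       # continue after the last name returned

# Stand-in for the server side, for illustration only:
ALL = ["backups", "documents", "images", "movies"]

def fake_fetch(marker, limit):
    start = 0 if marker is None else ALL.index(marker) + 1
    return ALL[start:start + limit]

print(list_all_containers(fake_fetch))
# ['backups', 'documents', 'images', 'movies']
```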
When asking for a list of containers and there are none, the
response behavior changes depending on whether the request format
is text, JSON, or XML. For a text response, you get a ``204``, because
there is no content. However, for a JSON or XML response, you get a
``200`` with content indicating an empty array.
If the request succeeds, the operation returns one of these status
codes:
- ``OK (200)``. Success. The response body lists the containers.
- ``No Content (204)``. Success. The response body shows no
containers. Either the account has no containers or you are
paging through a long list of names by using the ``marker``,
``limit``, or ``end_marker`` query parameter and you have reached
the end of the list.
Normal response codes: 200
Error response codes: 204
Request
-------
.. rest_parameters:: parameters.yaml
- account: account
- limit: limit
- marker: marker
- end_marker: end_marker
- format: format
- prefix: prefix
- delimiter: delimiter
- X-Auth-Token: X-Auth-Token
- X-Newest: X-Newest
- Accept: Accept
- X-Trans-Id-Extra: X-Trans-Id-Extra
Response Parameters
-------------------
.. rest_parameters:: parameters.yaml
- Content-Length: Content-Length
- X-Account-Meta-name: X-Account-Meta-name
- X-Account-Object-Count: X-Account-Object-Count
- X-Account-Meta-Temp-URL-Key-2: X-Account-Meta-Temp-URL-Key-2
- X-Timestamp: X-Timestamp
- X-Account-Meta-Temp-URL-Key: X-Account-Meta-Temp-URL-Key
- X-Trans-Id: X-Trans-Id
- Date: Date
- X-Account-Bytes-Used: X-Account-Bytes-Used
- X-Account-Container-Count: X-Account-Container-Count
- Content-Type: Content-Type
- count: count
- bytes: bytes
- name: name
Response Example
----------------
.. literalinclude:: samples/account-containers-list-http-response-xml.txt
:language: javascript
Create, update, or delete account metadata
==========================================
.. rest_method:: POST /v1/{account}
Creates, updates, or deletes account metadata.
To create, update, or delete metadata, use the
``X-Account-Meta-{name}`` request header, where ``{name}`` is the name of the
metadata item.
Subsequent requests for the same key and value pair overwrite the
existing value.
To delete a metadata header, send an empty value for that header,
such as for the ``X-Account-Meta-Book`` header. If the tool you use
to communicate with Object Storage, such as an older version of
cURL, does not support empty headers, send the
``X-Remove-Account-Meta-{name}`` header with an arbitrary value. For example,
``X-Remove-Account-Meta-Book: x``. The operation ignores the arbitrary
value.
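The equivalence between the two header forms can be sketched as a normalization step (a hypothetical helper for illustration, not Swift's actual implementation):

```python
def normalize_remove_headers(headers):
    """Turn each X-Remove-Account-Meta-{name} header into the matching
    X-Account-Meta-{name} header with an empty value, which deletes
    the metadata item; the arbitrary value is discarded."""
    prefix = "x-remove-account-meta-"
    out = {}
    for name, value in headers.items():
        if name.lower().startswith(prefix):
            out["X-Account-Meta-" + name[len(prefix):]] = ""
        else:
            out[name] = value
    return out

print(normalize_remove_headers({"X-Remove-Account-Meta-Book": "x"}))
# {'X-Account-Meta-Book': ''}
```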
If the account already has other custom metadata items, a request
to create, update, or delete metadata does not affect those items.
This operation does not accept a request body.
Example requests and responses:
- Create account metadata:
::
curl -i $publicURL -X POST -H "X-Auth-Token: $token" -H "X-Account-Meta-Book: MobyDick" -H "X-Account-Meta-Subject: Literature"
::
HTTP/1.1 204 No Content
Content-Length: 0
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx8c2dd6aee35442a4a5646-0052d954fb
Date: Fri, 17 Jan 2014 16:06:19 GMT
- Update account metadata:
::
curl -i $publicURL -X POST -H "X-Auth-Token: $token" -H "X-Account-Meta-Subject: AmericanLiterature"
::
HTTP/1.1 204 No Content
Content-Length: 0
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx1439b96137364ab581156-0052d95532
Date: Fri, 17 Jan 2014 16:07:14 GMT
- Delete account metadata:
::
curl -i $publicURL -X POST -H "X-Auth-Token: $token" -H "X-Remove-Account-Meta-Subject: x"
::
HTTP/1.1 204 No Content
Content-Length: 0
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx411cf57701424da99948a-0052d9556f
Date: Fri, 17 Jan 2014 16:08:15 GMT
If the request succeeds, the operation returns the ``No Content
(204)`` response code.
To confirm your changes, issue a show account metadata request.
Error response codes: 204
Request
-------
.. rest_parameters:: parameters.yaml
- account: account
- X-Auth-Token: X-Auth-Token
- X-Account-Meta-Temp-URL-Key: X-Account-Meta-Temp-URL-Key
- X-Account-Meta-Temp-URL-Key-2: X-Account-Meta-Temp-URL-Key-2
- X-Account-Meta-name: X-Account-Meta-name
- Content-Type: Content-Type
- X-Detect-Content-Type: X-Detect-Content-Type
- X-Trans-Id-Extra: X-Trans-Id-Extra
Response Parameters
-------------------
.. rest_parameters:: parameters.yaml
- Date: Date
- X-Timestamp: X-Timestamp
- Content-Length: Content-Length
- Content-Type: Content-Type
- X-Trans-Id: X-Trans-Id
Show account metadata
=====================
.. rest_method:: HEAD /v1/{account}
Shows metadata for an account.
Metadata for the account includes:
- Number of containers
- Number of objects
- Total number of bytes that are stored in Object Storage for the
account
Because the storage system can store large amounts of data, take
care when you represent the total bytes response as an integer;
when possible, convert it to a 64-bit unsigned integer if your
platform supports that primitive type.
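In Python the caution above costs nothing, because integers are arbitrary precision, but the point is worth checking once the account holds more than 4 GiB (the header value below is a made-up example):

```python
headers = {"X-Account-Bytes-Used": "5368709120"}  # 5 GiB, example value

bytes_used = int(headers["X-Account-Bytes-Used"])

# A 32-bit unsigned counter tops out just under 4 GiB and would
# overflow here; a 64-bit unsigned integer holds it comfortably.
assert bytes_used > 2**32 - 1
assert bytes_used < 2**64
```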
Do not include metadata headers in this request.
Show account metadata request:
::
curl -i $publicURL -X HEAD -H "X-Auth-Token: $token"
::
HTTP/1.1 204 No Content
Content-Length: 0
X-Account-Object-Count: 1
X-Account-Meta-Book: MobyDick
X-Timestamp: 1389453423.35964
X-Account-Bytes-Used: 14
X-Account-Container-Count: 2
Content-Type: text/plain; charset=utf-8
Accept-Ranges: bytes
X-Trans-Id: txafb3504870144b8ca40f7-0052d955d4
Date: Fri, 17 Jan 2014 16:09:56 GMT
If the account or authentication token is not valid, the operation
returns the ``Unauthorized (401)`` response code.
Error response codes: 204, 401
Request
-------
.. rest_parameters:: parameters.yaml
- account: account
- X-Auth-Token: X-Auth-Token
- X-Newest: X-Newest
- X-Trans-Id-Extra: X-Trans-Id-Extra
Response Parameters
-------------------
.. rest_parameters:: parameters.yaml
- Content-Length: Content-Length
- X-Account-Meta-name: X-Account-Meta-name
- X-Account-Object-Count: X-Account-Object-Count
- X-Account-Meta-Temp-URL-Key-2: X-Account-Meta-Temp-URL-Key-2
- X-Timestamp: X-Timestamp
- X-Account-Meta-Temp-URL-Key: X-Account-Meta-Temp-URL-Key
- X-Trans-Id: X-Trans-Id
- Date: Date
- X-Account-Bytes-Used: X-Account-Bytes-Used
- X-Account-Container-Count: X-Account-Container-Count
- Content-Type: Content-Type

@ -0,0 +1,506 @@
.. -*- rst -*-
==========
Containers
==========
Lists objects in a container. Creates, shows details for, and
deletes containers. Creates, updates, shows, and deletes container
metadata.
Show container details and list objects
=======================================
.. rest_method:: GET /v1/{account}/{container}
Shows details for a container and lists objects, sorted by name, in the container.
Specify query parameters in the request to filter the list and
return a subset of object names. Omit query parameters to return
the complete list of object names that are stored in the container,
up to 10,000 names. The 10,000 maximum value is configurable. To
view the value for the cluster, issue a GET ``/info`` request.
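The listing limit can be read from the ``/info`` document; a minimal Python sketch (the JSON body is hand-written, and ``container_listing_limit`` is assumed to be the key a typical cluster exposes):

```python
import json

# Hand-written stand-in for the body returned by GET /info:
info_body = '{"swift": {"version": "1.11.0", "container_listing_limit": 10000}}'

info = json.loads(info_body)
# Fall back to the documented default if the cluster omits the key.
limit = info["swift"].get("container_listing_limit", 10000)

print(limit)  # 10000
```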
If the request succeeds, the operation returns one of these status
codes:
- ``OK (200)``. Success. The response body lists the objects.
- ``No Content (204)``. Success. The response body shows no objects.
Either the container has no objects or you are paging through a
long list of names by using the ``marker``, ``limit``, or
``end_marker`` query parameter and you have reached the end of
the list.
If the container does not exist, the call returns the ``Not Found
(404)`` response code.
The operation returns the ``Range Not Satisfiable (416)`` response
code for any ranged GET requests that specify more than:
- Fifty ranges.
- Three overlapping ranges.
- Eight non-increasing ranges.
Normal response codes: 200
Error response codes: 416, 404, 204
Request
-------
.. rest_parameters:: parameters.yaml
- account: account
- container: container
- limit: limit
- marker: marker
- end_marker: end_marker
- prefix: prefix
- format: format
- delimiter: delimiter
- path: path
- X-Auth-Token: X-Auth-Token
- X-Newest: X-Newest
- Accept: Accept
- X-Container-Meta-Temp-URL-Key: X-Container-Meta-Temp-URL-Key
- X-Container-Meta-Temp-URL-Key-2: X-Container-Meta-Temp-URL-Key-2
- X-Trans-Id-Extra: X-Trans-Id-Extra
Response Parameters
-------------------
.. rest_parameters:: parameters.yaml
- X-Container-Meta-name: X-Container-Meta-name
- Content-Length: Content-Length
- X-Container-Object-Count: X-Container-Object-Count
- Accept-Ranges: Accept-Ranges
- X-Container-Meta-Temp-URL-Key: X-Container-Meta-Temp-URL-Key
- X-Container-Bytes-Used: X-Container-Bytes-Used
- X-Container-Meta-Temp-URL-Key-2: X-Container-Meta-Temp-URL-Key-2
- X-Timestamp: X-Timestamp
- X-Trans-Id: X-Trans-Id
- Date: Date
- Content-Type: Content-Type
- hash: hash
- last_modified: last_modified
- bytes: bytes
- name: name
- content_type: content_type
Response Example
----------------
.. literalinclude:: samples/objects-list-http-response-xml.txt
:language: javascript
Create container
================
.. rest_method:: PUT /v1/{account}/{container}
Creates a container.
You do not need to check whether a container already exists before
issuing a PUT operation because the operation is idempotent: It
creates a container or updates an existing container, as
appropriate.
Example requests and responses:
- Create a container with no metadata:
::
curl -i $publicURL/steven -X PUT -H "Content-Length: 0" -H "X-Auth-Token: $token"
::
HTTP/1.1 201 Created
Content-Length: 0
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx7f6b7fa09bc2443a94df0-0052d58b56
Date: Tue, 14 Jan 2014 19:09:10 GMT
- Create a container with metadata:
::
curl -i $publicURL/marktwain -X PUT -H "X-Auth-Token: $token" -H "X-Container-Meta-Book: TomSawyer"
::
HTTP/1.1 201 Created
Content-Length: 0
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx06021f10fc8642b2901e7-0052d58f37
Date: Tue, 14 Jan 2014 19:25:43 GMT
Error response codes: 201, 204
Request
-------
.. rest_parameters:: parameters.yaml
- account: account
- container: container
- X-Auth-Token: X-Auth-Token
- X-Container-Read: X-Container-Read
- X-Container-Write: X-Container-Write
- X-Container-Sync-To: X-Container-Sync-To
- X-Container-Sync-Key: X-Container-Sync-Key
- X-Versions-Location: X-Versions-Location
- X-Versions-Mode: X-Versions-Mode
- X-Container-Meta-name: X-Container-Meta-name
- X-Container-Meta-Access-Control-Allow-Origin: X-Container-Meta-Access-Control-Allow-Origin
- X-Container-Meta-Access-Control-Max-Age: X-Container-Meta-Access-Control-Max-Age
- X-Container-Meta-Access-Control-Expose-Headers: X-Container-Meta-Access-Control-Expose-Headers
- Content-Type: Content-Type
- X-Detect-Content-Type: X-Detect-Content-Type
- X-Container-Meta-Temp-URL-Key: X-Container-Meta-Temp-URL-Key
- X-Container-Meta-Temp-URL-Key-2: X-Container-Meta-Temp-URL-Key-2
- X-Trans-Id-Extra: X-Trans-Id-Extra
Response Parameters
-------------------
.. rest_parameters:: parameters.yaml
- Date: Date
- X-Timestamp: X-Timestamp
- Content-Length: Content-Length
- Content-Type: Content-Type
- X-Trans-Id: X-Trans-Id
Create, update, or delete container metadata
============================================
.. rest_method:: POST /v1/{account}/{container}
Creates, updates, or deletes custom metadata for a container.
To create, update, or delete a custom metadata item, use the
``X-Container-Meta-{name}`` header, where ``{name}`` is the name of
the metadata item.
Subsequent requests for the same key and value pair overwrite the
previous value.
To delete container metadata, send an empty value for that header,
such as for the ``X-Container-Meta-Book`` header. If the tool you
use to communicate with Object Storage, such as an older version of
cURL, does not support empty headers, send the
``X-Remove-Container-Meta-{name}`` header with an arbitrary value. For
example, ``X-Remove-Container-Meta-Book: x``. The operation ignores
the arbitrary value.
If the container already has other custom metadata items, a request
to create, update, or delete metadata does not affect those items.
Example requests and responses:
- Create container metadata:
::
curl -i $publicURL/marktwain -X POST -H "X-Auth-Token: $token" -H "X-Container-Meta-Author: MarkTwain" -H "X-Container-Meta-Web-Directory-Type: text/directory" -H "X-Container-Meta-Century: Nineteenth"
::
HTTP/1.1 204 No Content
Content-Length: 0
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx05dbd434c651429193139-0052d82635
Date: Thu, 16 Jan 2014 18:34:29 GMT
- Update container metadata:
::
curl -i $publicURL/marktwain -X POST -H "X-Auth-Token: $token" -H "X-Container-Meta-Author: SamuelClemens"
::
HTTP/1.1 204 No Content
Content-Length: 0
Content-Type: text/html; charset=UTF-8
X-Trans-Id: txe60c7314bf614bb39dfe4-0052d82653
Date: Thu, 16 Jan 2014 18:34:59 GMT
- Delete container metadata:
::
curl -i $publicURL/marktwain -X POST -H "X-Auth-Token: $token" -H "X-Remove-Container-Meta-Century: x"
::
HTTP/1.1 204 No Content
Content-Length: 0
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx7997e18da2a34a9e84ceb-0052d826d0
Date: Thu, 16 Jan 2014 18:37:04 GMT
If the request succeeds, the operation returns the ``No Content
(204)`` response code.
To confirm your changes, issue a show container metadata request.
Normal response codes: 204
Request
-------
.. rest_parameters:: parameters.yaml
- account: account
- container: container
- X-Auth-Token: X-Auth-Token
- X-Container-Read: X-Container-Read
- X-Remove-Container-name: X-Remove-Container-name
- X-Container-Write: X-Container-Write
- X-Container-Sync-To: X-Container-Sync-To
- X-Container-Sync-Key: X-Container-Sync-Key
- X-Versions-Location: X-Versions-Location
- X-Versions-Mode: X-Versions-Mode
- X-Remove-Versions-Location: X-Remove-Versions-Location
- X-Container-Meta-name: X-Container-Meta-name
- X-Container-Meta-Access-Control-Allow-Origin: X-Container-Meta-Access-Control-Allow-Origin
- X-Container-Meta-Access-Control-Max-Age: X-Container-Meta-Access-Control-Max-Age
- X-Container-Meta-Access-Control-Expose-Headers: X-Container-Meta-Access-Control-Expose-Headers
- X-Container-Meta-Quota-Bytes: X-Container-Meta-Quota-Bytes
- X-Container-Meta-Quota-Count: X-Container-Meta-Quota-Count
- X-Container-Meta-Web-Directory-Type: X-Container-Meta-Web-Directory-Type
- Content-Type: Content-Type
- X-Detect-Content-Type: X-Detect-Content-Type
- X-Container-Meta-Temp-URL-Key: X-Container-Meta-Temp-URL-Key
- X-Container-Meta-Temp-URL-Key-2: X-Container-Meta-Temp-URL-Key-2
- X-Trans-Id-Extra: X-Trans-Id-Extra
Response Parameters
-------------------
.. rest_parameters:: parameters.yaml
- Date: Date
- X-Timestamp: X-Timestamp
- Content-Length: Content-Length
- Content-Type: Content-Type
- X-Trans-Id: X-Trans-Id
Show container metadata
=======================
.. rest_method:: HEAD /v1/{account}/{container}
Shows container metadata, including the number of objects and the total bytes of all objects stored in the container.
Show container metadata request:
::
curl -i $publicURL/marktwain -X HEAD -H "X-Auth-Token: $token"
::
HTTP/1.1 204 No Content
Content-Length: 0
X-Container-Object-Count: 1
Accept-Ranges: bytes
X-Container-Meta-Book: TomSawyer
X-Timestamp: 1389727543.65372
X-Container-Meta-Author: SamuelClemens
X-Container-Bytes-Used: 14
Content-Type: text/plain; charset=utf-8
X-Trans-Id: tx0287b982a268461b9ec14-0052d826e2
Date: Thu, 16 Jan 2014 18:37:22 GMT
If the request succeeds, the operation returns the ``No Content
(204)`` response code.
Normal response codes: 204
Request
-------
.. rest_parameters:: parameters.yaml
- account: account
- container: container
- X-Auth-Token: X-Auth-Token
- X-Newest: X-Newest
- X-Container-Meta-Temp-URL-Key: X-Container-Meta-Temp-URL-Key
- X-Container-Meta-Temp-URL-Key-2: X-Container-Meta-Temp-URL-Key-2
- X-Trans-Id-Extra: X-Trans-Id-Extra
Response Parameters
-------------------
.. rest_parameters:: parameters.yaml
- X-Container-Sync-Key: X-Container-Sync-Key
- X-Container-Meta-name: X-Container-Meta-name
- Content-Length: Content-Length
- X-Container-Object-Count: X-Container-Object-Count
- X-Container-Write: X-Container-Write
- X-Container-Meta-Quota-Count: X-Container-Meta-Quota-Count
- Accept-Ranges: Accept-Ranges
- X-Container-Read: X-Container-Read
- X-Container-Meta-Access-Control-Expose-Headers: X-Container-Meta-Access-Control-Expose-Headers
- X-Container-Meta-Temp-URL-Key: X-Container-Meta-Temp-URL-Key
- X-Container-Bytes-Used: X-Container-Bytes-Used
- X-Container-Meta-Temp-URL-Key-2: X-Container-Meta-Temp-URL-Key-2
- X-Timestamp: X-Timestamp
- X-Container-Meta-Access-Control-Allow-Origin: X-Container-Meta-Access-Control-Allow-Origin
- X-Container-Meta-Access-Control-Max-Age: X-Container-Meta-Access-Control-Max-Age
- Date: Date
- X-Trans-Id: X-Trans-Id
- X-Container-Sync-To: X-Container-Sync-To
- Content-Type: Content-Type
- X-Container-Meta-Quota-Bytes: X-Container-Meta-Quota-Bytes
- X-Versions-Location: X-Versions-Location
- X-Versions-Mode: X-Versions-Mode
Delete container
================
.. rest_method:: DELETE /v1/{account}/{container}
Deletes an empty container.
This operation fails unless the container is empty. An empty
container has no objects.
Delete the ``steven`` container:
::
curl -i $publicURL/steven -X DELETE -H "X-Auth-Token: $token"
If the container does not exist, the response is:
::
HTTP/1.1 404 Not Found
Content-Length: 70
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx4d728126b17b43b598bf7-0052d81e34
Date: Thu, 16 Jan 2014 18:00:20 GMT
If the container exists and the deletion succeeds, the response is:
::
HTTP/1.1 204 No Content
Content-Length: 0
Content-Type: text/html; charset=UTF-8
X-Trans-Id: txf76c375ebece4df19c84c-0052d81f14
Date: Thu, 16 Jan 2014 18:04:04 GMT
If the container exists but is not empty, the response is:
::
HTTP/1.1 409 Conflict
Content-Length: 95
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx7782dc6a97b94a46956b5-0052d81f6b
Date: Thu, 16 Jan 2014 18:05:31 GMT
<html>
<h1>Conflict
</h1>
<p>There was a conflict when trying to complete your request.
</p>
</html>
Normal response codes: 204
Error response codes: 404, 409
Request
-------
.. rest_parameters:: parameters.yaml
- account: account
- container: container
- X-Auth-Token: X-Auth-Token
- X-Container-Meta-Temp-URL-Key: X-Container-Meta-Temp-URL-Key
- X-Container-Meta-Temp-URL-Key-2: X-Container-Meta-Temp-URL-Key-2
- X-Trans-Id-Extra: X-Trans-Id-Extra
Response Parameters
-------------------
.. rest_parameters:: parameters.yaml
- Date: Date
- X-Timestamp: X-Timestamp
- Content-Length: Content-Length
- Content-Type: Content-Type
- X-Trans-Id: X-Trans-Id

@ -0,0 +1,688 @@
.. -*- rst -*-
=======
Objects
=======
Creates, replaces, shows details for, and deletes objects. Copies
an object to another object with the same or a different name. Updates
object metadata.
Get object content and metadata
===============================
.. rest_method:: GET /v1/{account}/{container}/{object}
Downloads the object content and gets the object metadata.
This operation returns the object metadata in the response headers
and the object content in the response body.
If this is a large object, the response body contains the
concatenated content of the segment objects. To get the manifest
instead of concatenated segment objects for a static large object,
use the ``multipart-manifest`` query parameter.
Example requests and responses:
- Show object details for the ``goodbye`` object in the
``marktwain`` container:
::
curl -i $publicURL/marktwain/goodbye -X GET -H "X-Auth-Token: $token"
::
HTTP/1.1 200 OK
Content-Length: 14
Accept-Ranges: bytes
Last-Modified: Wed, 15 Jan 2014 16:41:49 GMT
Etag: 451e372e48e0f6b1114fa0724aa79fa1
X-Timestamp: 1389804109.39027
X-Object-Meta-Orig-Filename: goodbyeworld.txt
Content-Type: application/octet-stream
X-Trans-Id: tx8145a190241f4cf6b05f5-0052d82a34
Date: Thu, 16 Jan 2014 18:51:32 GMT
Goodbye World!
- Show object details for the ``goodbye`` object, which does not
exist, in the ``janeausten`` container:
::
curl -i $publicURL/janeausten/goodbye -X GET -H "X-Auth-Token: $token"
::
HTTP/1.1 404 Not Found
Content-Length: 70
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx073f7cbb850c4c99934b9-0052d82b04
Date: Thu, 16 Jan 2014 18:55:00 GMT
<html>
<h1>Not Found
</h1>
<p>The resource could not be found.
</p>
</html>
The operation returns the ``Range Not Satisfiable (416)`` response
code for any ranged GET requests that specify more than:
- Fifty ranges.
- Three overlapping ranges.
- Eight non-increasing ranges.
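These three limits can be checked client-side before sending a ranged GET. A simplified sketch, assuming fully specified closed ranges such as ``bytes=0-5,10-20`` (real ``Range`` headers also allow suffix and open-ended forms, and Swift's internal counting may differ slightly from this approximation):

```python
def within_range_limits(range_header,
                        max_ranges=50, max_overlapping=3,
                        max_nonincreasing=8):
    """Return True if the Range header stays inside the documented limits."""
    parts = range_header.split('=', 1)[1].split(',')
    ranges = [tuple(int(n) for n in p.strip().split('-')) for p in parts]
    if len(ranges) > max_ranges:
        return False
    # Count pairs of ranges whose byte intervals intersect.
    overlapping = sum(1 for i, (s1, e1) in enumerate(ranges)
                      for s2, e2 in ranges[i + 1:]
                      if s1 <= e2 and s2 <= e1)
    if overlapping > max_overlapping:
        return False
    # Count adjacent ranges whose start position does not advance.
    nonincreasing = sum(1 for (s1, _), (s2, _) in zip(ranges, ranges[1:])
                        if s2 <= s1)
    return nonincreasing <= max_nonincreasing
```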
Normal response codes: 200
Error response codes: 416, 404
Request
-------
.. rest_parameters:: parameters.yaml
- account: account
- object: object
- container: container
- X-Auth-Token: X-Auth-Token
- X-Newest: X-Newest
- temp_url_sig: temp_url_sig
- temp_url_expires: temp_url_expires
- filename: filename
- multipart-manifest: multipart-manifest_get
- Range: Range
- If-Match: If-Match
- If-None-Match: If-None-Match
- If-Modified-Since: If-Modified-Since
- If-Unmodified-Since: If-Unmodified-Since
- X-Trans-Id-Extra: X-Trans-Id-Extra
Response Parameters
-------------------
.. rest_parameters:: parameters.yaml
- Content-Length: Content-Length
- X-Object-Meta-name: X-Object-Meta-name
- Content-Disposition: Content-Disposition
- Content-Encoding: Content-Encoding
- X-Delete-At: X-Delete-At
- Accept-Ranges: Accept-Ranges
- X-Object-Manifest: X-Object-Manifest
- Last-Modified: Last-Modified
- ETag: ETag
- X-Timestamp: X-Timestamp
- X-Trans-Id: X-Trans-Id
- Date: Date
- X-Static-Large-Object: X-Static-Large-Object
- Content-Type: Content-Type
Response Example
----------------
See examples above.
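The ``temp_url_sig`` and ``temp_url_expires`` query parameters listed above come from the temporary URL middleware: the signature is an HMAC-SHA1 of the request method, the expiry time, and the object path, keyed with the temp URL key set on the account or container. A sketch (the account name and key here are illustrative):

```python
import hmac
from hashlib import sha1
from time import time

def temp_url_signature(key, method, path, expires):
    # The signed message is "METHOD\nexpires\npath", and the
    # signature is the hex-encoded HMAC-SHA1 digest.
    message = '%s\n%s\n%s' % (method, expires, path)
    return hmac.new(key.encode(), message.encode(), sha1).hexdigest()

expires = int(time()) + 600  # link valid for ten minutes
path = '/v1/AUTH_account/container/object'
sig = temp_url_signature('mysecretkey', 'GET', path, expires)
url = '%s?temp_url_sig=%s&temp_url_expires=%d' % (path, sig, expires)
```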
Create or replace object
========================
.. rest_method:: PUT /v1/{account}/{container}/{object}
Creates an object with data content and metadata, or replaces an existing object with data content and metadata.
The PUT operation always creates an object. If you use this
operation on an existing object, you replace the existing object
and metadata rather than modifying the object. Consequently, this
operation returns the ``Created (201)`` response code.
If you use this operation to copy a manifest object, the new object
is a normal object and not a copy of the manifest. Instead it is a
concatenation of all the segment objects. This means that you
cannot copy objects larger than 5 GB.
Example requests and responses:
- Create object:
::
curl -i $publicURL/janeausten/helloworld.txt -X PUT -H "Content-Length: 1" -H "Content-Type: text/html; charset=UTF-8" -H "X-Auth-Token: $token"
::
HTTP/1.1 201 Created
Last-Modified: Fri, 17 Jan 2014 17:28:35 GMT
Content-Length: 116
Etag: d41d8cd98f00b204e9800998ecf8427e
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx4d5e4f06d357462bb732f-0052d96843
Date: Fri, 17 Jan 2014 17:28:35 GMT
- Replace object:
::
curl -i $publicURL/janeausten/helloworld -X PUT -H "Content-Length: 0" -H "X-Auth-Token: $token"
::
HTTP/1.1 201 Created
Last-Modified: Fri, 17 Jan 2014 17:28:35 GMT
Content-Length: 116
Etag: d41d8cd98f00b204e9800998ecf8427e
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx4d5e4f06d357462bb732f-0052d96843
Date: Fri, 17 Jan 2014 17:28:35 GMT
The ``Created (201)`` response code indicates a successful write.
If the request times out, the operation returns the ``Request
Timeout (408)`` response code.
The ``Length Required (411)`` response code indicates a missing
``Transfer-Encoding`` or ``Content-Length`` request header.
If the MD5 checksum of the data that is written to the object store
does not match the optional ``ETag`` value, the operation returns
the ``Unprocessable Entity (422)`` response code.
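The ``ETag`` comparison above is an MD5 check: the cluster hashes the bytes it stores and rejects the PUT with ``422`` when that hash differs from the header you sent. The value is easy to precompute client-side; for instance, the zero-length uploads shown earlier always hash to the same digest:

```python
from hashlib import md5

body = b''  # the zero-length object from the replace example above
etag = md5(body).hexdigest()
# d41d8cd98f00b204e9800998ecf8427e, matching the Etag in the
# 201 responses above.
```

Sending this value in the ``ETag`` request header lets the cluster verify the upload end to end.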
Normal response codes: 201
Error response codes: 422, 411, 408
Request
-------
.. rest_parameters:: parameters.yaml
- account: account
- object: object
- container: container
- multipart-manifest: multipart-manifest
- temp_url_sig: temp_url_sig
- temp_url_expires: temp_url_expires
- filename: filename
- X-Object-Manifest: X-Object-Manifest
- X-Auth-Token: X-Auth-Token
- Content-Length: Content-Length
- Transfer-Encoding: Transfer-Encoding
- Content-Type: Content-Type
- X-Detect-Content-Type: X-Detect-Content-Type
- X-Copy-From: X-Copy-From
- ETag: ETag
- Content-Disposition: Content-Disposition
- Content-Encoding: Content-Encoding
- X-Delete-At: X-Delete-At
- X-Delete-After: X-Delete-After
- X-Object-Meta-name: X-Object-Meta-name
- If-None-Match: If-None-Match
- X-Trans-Id-Extra: X-Trans-Id-Extra
Response Parameters
-------------------
.. rest_parameters:: parameters.yaml
- Content-Length: Content-Length
- ETag: ETag
- X-Timestamp: X-Timestamp
- X-Trans-Id: X-Trans-Id
- Date: Date
- Content-Type: Content-Type
- last_modified: last_modified
Copy object
===========
.. rest_method:: COPY /v1/{account}/{container}/{object}
Copies an object to another object in the object store.
You can copy an object to a new object with the same name. Copying
to the same name is an alternative to using POST to add metadata to
an object. With POST, you must specify all the metadata. With COPY,
you can add additional metadata to the object.
With COPY, you can set the ``X-Fresh-Metadata`` header to ``true``
to copy the object without any existing metadata.
Alternatively, you can use PUT with the ``X-Copy-From`` request
header to accomplish the same operation as the COPY object
operation.
The PUT operation always creates an object. If you use this
operation on an existing object, you replace the existing object
and metadata rather than modifying the object. Consequently, this
operation returns the ``Created (201)`` response code.
If you use this operation to copy a manifest object, the new object
is a normal object and not a copy of the manifest. Instead it is a
concatenation of all the segment objects. This means that you
cannot copy objects larger than 5 GB. All metadata is
preserved during the object copy. If you specify metadata on the
request to copy the object, either PUT or COPY, the metadata
overwrites any conflicting keys on the target (new) object.
Example requests and responses:
- Copy the ``goodbye`` object from the ``marktwain`` container to
the ``janeausten`` container:
::
curl -i $publicURL/marktwain/goodbye -X COPY -H "X-Auth-Token: $token" -H "Destination: janeausten/goodbye"
::
HTTP/1.1 201 Created
Content-Length: 0
X-Copied-From-Last-Modified: Thu, 16 Jan 2014 21:19:45 GMT
X-Copied-From: marktwain/goodbye
Last-Modified: Fri, 17 Jan 2014 18:22:57 GMT
Etag: 451e372e48e0f6b1114fa0724aa79fa1
Content-Type: text/html; charset=UTF-8
X-Object-Meta-Movie: AmericanPie
X-Trans-Id: txdcb481ad49d24e9a81107-0052d97501
Date: Fri, 17 Jan 2014 18:22:57 GMT
- Alternatively, you can use PUT to copy the ``goodbye`` object from
the ``marktwain`` container to the ``janeausten`` container. This
request requires a ``Content-Length`` header, even if it is set
to zero (0).
::
curl -i $publicURL/janeausten/goodbye -X PUT -H "X-Auth-Token: $token" -H "X-Copy-From: /marktwain/goodbye" -H "Content-Length: 0"
::
HTTP/1.1 201 Created
Content-Length: 0
X-Copied-From-Last-Modified: Thu, 16 Jan 2014 21:19:45 GMT
X-Copied-From: marktwain/goodbye
Last-Modified: Fri, 17 Jan 2014 18:22:57 GMT
Etag: 451e372e48e0f6b1114fa0724aa79fa1
Content-Type: text/html; charset=UTF-8
X-Object-Meta-Movie: AmericanPie
X-Trans-Id: txdcb481ad49d24e9a81107-0052d97501
Date: Fri, 17 Jan 2014 18:22:57 GMT
When several replicas exist, the system copies from the most recent
replica. That is, the COPY operation behaves as though the
``X-Newest`` header is in the request.
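The equivalence between COPY with ``Destination`` and PUT with ``X-Copy-From`` can be summarized as two request shapes (illustrative helper; the ``AUTH_test`` account is an assumption for the example):

```python
def copy_request_pair(account, src, dst):
    """Two equivalent ways to copy src ('cont/obj') to dst ('cont/obj').

    Returns (method, path, headers) tuples: a COPY of the source with a
    Destination header, and a zero-length PUT of the target with an
    X-Copy-From header.
    """
    copy = ('COPY', '/v1/%s/%s' % (account, src),
            {'Destination': dst})
    put = ('PUT', '/v1/%s/%s' % (account, dst),
           {'X-Copy-From': '/' + src, 'Content-Length': '0'})
    return copy, put

copy, put = copy_request_pair('AUTH_test', 'marktwain/goodbye',
                              'janeausten/goodbye')
```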
Normal response codes: 201
Request
-------
.. rest_parameters:: parameters.yaml
- account: account
- object: object
- container: container
- X-Auth-Token: X-Auth-Token
- Destination: Destination
- Content-Type: Content-Type
- Content-Encoding: Content-Encoding
- Content-Disposition: Content-Disposition
- X-Object-Meta-name: X-Object-Meta-name
- X-Fresh-Metadata: X-Fresh-Metadata
- X-Trans-Id-Extra: X-Trans-Id-Extra
Response Parameters
-------------------
.. rest_parameters:: parameters.yaml
- Content-Length: Content-Length
- X-Object-Meta-name: X-Object-Meta-name
- X-Copied-From-Last-Modified: X-Copied-From-Last-Modified
- X-Copied-From: X-Copied-From
- Last-Modified: Last-Modified
- ETag: ETag
- X-Timestamp: X-Timestamp
- X-Trans-Id: X-Trans-Id
- Date: Date
- Content-Type: Content-Type
Delete object
=============
.. rest_method:: DELETE /v1/{account}/{container}/{object}
Permanently deletes an object from the object store.
You can use the COPY method to copy the object to a new location.
Then, use the DELETE method to delete the original object.
Object deletion occurs immediately at request time. Any subsequent
GET, HEAD, POST, or DELETE operations return a ``404 Not Found``
error code.
For static large object manifests, you can add the
``?multipart-manifest=delete`` query parameter. This operation deletes
the segment objects and, if all deletions succeed, deletes the
manifest object.
Example request and response:
- Delete the ``helloworld`` object from the ``marktwain`` container:
::
curl -i $publicURL/marktwain/helloworld -X DELETE -H "X-Auth-Token: $token"
::
HTTP/1.1 204 No Content
Content-Length: 0
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx36c7606fcd1843f59167c-0052d6fdac
Date: Wed, 15 Jan 2014 21:29:16 GMT
Typically, the DELETE operation does not return a response body.
However, with the ``multipart-manifest=delete`` query parameter,
the response body contains a list of manifest and segment objects
and the status of their DELETE operations.
Normal response codes: 204
Request
-------
.. rest_parameters:: parameters.yaml
- account: account
- object: object
- container: container
- multipart-manifest: multipart-manifest
- X-Auth-Token: X-Auth-Token
- X-Trans-Id-Extra: X-Trans-Id-Extra
Response Parameters
-------------------
.. rest_parameters:: parameters.yaml
- Date: Date
- X-Timestamp: X-Timestamp
- Content-Length: Content-Length
- Content-Type: Content-Type
- X-Trans-Id: X-Trans-Id
Show object metadata
====================
.. rest_method:: HEAD /v1/{account}/{container}/{object}
Shows object metadata.
If the ``Content-Length`` response header is non-zero, the example
cURL command stalls after it prints the response headers because it
is waiting for a response body. However, the Object Storage system
does not return a response body for the HEAD operation.
Example requests and responses:
- Show object metadata:
::
curl -i $publicURL/marktwain/goodbye -X HEAD -H "X-Auth-Token: $token"
::
HTTP/1.1 200 OK
Content-Length: 14
Accept-Ranges: bytes
Last-Modified: Thu, 16 Jan 2014 21:12:31 GMT
Etag: 451e372e48e0f6b1114fa0724aa79fa1
X-Timestamp: 1389906751.73463
X-Object-Meta-Book: GoodbyeColumbus
Content-Type: application/octet-stream
X-Trans-Id: tx37ea34dcd1ed48ca9bc7d-0052d84b6f
Date: Thu, 16 Jan 2014 21:13:19 GMT
If the request succeeds, the operation returns the ``200`` response
code.
Normal response codes: 200
Error response codes: 204
Request
-------
.. rest_parameters:: parameters.yaml
- account: account
- object: object
- container: container
- X-Auth-Token: X-Auth-Token
- temp_url_sig: temp_url_sig
- temp_url_expires: temp_url_expires
- filename: filename
- multipart-manifest: multipart-manifest_head
- X-Newest: X-Newest
- X-Trans-Id-Extra: X-Trans-Id-Extra
Response Parameters
-------------------
.. rest_parameters:: parameters.yaml
- Last-Modified: Last-Modified
- Content-Length: Content-Length
- X-Object-Meta-name: X-Object-Meta-name
- Content-Disposition: Content-Disposition
- Content-Encoding: Content-Encoding
- X-Delete-At: X-Delete-At
- X-Object-Manifest: X-Object-Manifest
- ETag: ETag
- X-Timestamp: X-Timestamp
- X-Trans-Id: X-Trans-Id
- Date: Date
- X-Static-Large-Object: X-Static-Large-Object
- Content-Type: Content-Type
Response Example
----------------
See examples above.
Create or update object metadata
================================
.. rest_method:: POST /v1/{account}/{container}/{object}
Creates or updates object metadata.
To create or update custom metadata, use the
``X-Object-Meta-{name}`` header, where ``{name}`` is the name of the metadata
item.
In addition to the custom metadata, you can update the
``Content-Type``, ``Content-Encoding``, ``Content-Disposition``, and
``X-Delete-At`` system metadata items. However, you cannot update other
system metadata, such as ``Content-Length`` or ``Last-Modified``.
You can use COPY as an alternative to the POST operation by copying
to the same object. With the POST operation, you must specify all
metadata items, whereas with the COPY operation, you need to
specify only changed or additional items.
All metadata is preserved during the object copy. If you specify
metadata on the request to copy the object, either PUT or COPY,
the metadata overwrites any conflicting keys on the target (new)
object.
A POST request deletes any existing custom metadata that you added
with a previous PUT or POST request. Consequently, you must specify
all custom metadata in the request. However, system metadata is
unchanged by the POST request unless you explicitly supply it in a
request header.
You can also set the ``X-Delete-At`` or ``X-Delete-After`` header
to define when to expire the object.
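``X-Delete-After`` is the relative form: the cluster converts it to an absolute ``X-Delete-At`` epoch timestamp. Computing the equivalent absolute header yourself:

```python
import time

def delete_at(seconds_from_now):
    """X-Delete-At value equivalent to X-Delete-After: seconds_from_now."""
    return str(int(time.time()) + seconds_from_now)

headers = {'X-Delete-At': delete_at(86400)}  # expire the object in one day
```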
When used as described in this section, the POST operation creates
or replaces metadata. This form of the operation has no request
body.
You can also use the `form POST feature
<http://docs.openstack.org/liberty/config-reference/content/object-storage-form-post.html>`_
to upload objects.
Example requests and responses:
- Create object metadata:
::
curl -i $publicURL/marktwain/goodbye -X POST -H "X-Auth-Token: $token" -H "X-Object-Meta-Book: GoodbyeColumbus"
::
HTTP/1.1 202 Accepted
Content-Length: 76
Content-Type: text/html; charset=UTF-8
X-Trans-Id: txb5fb5c91ba1f4f37bb648-0052d84b3f
Date: Thu, 16 Jan 2014 21:12:31 GMT
<html>
<h1>Accepted
</h1>
<p>The request is accepted for processing.
</p>
</html>
- Update object metadata:
::
curl -i $publicURL/marktwain/goodbye -X POST -H "X-Auth-Token: $token" -H "X-Object-Meta-Book: GoodbyeOldFriend"
::
HTTP/1.1 202 Accepted
Content-Length: 76
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx5ec7ab81cdb34ced887c8-0052d84ca4
Date: Thu, 16 Jan 2014 21:18:28 GMT
<html>
<h1>Accepted
</h1>
<p>The request is accepted for processing.
</p>
</html>
Normal response codes: 202
Request
-------
.. rest_parameters:: parameters.yaml
- account: account
- object: object
- container: container
- X-Auth-Token: X-Auth-Token
- X-Object-Meta-name: X-Object-Meta-name
- X-Delete-At: X-Delete-At
- Content-Disposition: Content-Disposition
- Content-Encoding: Content-Encoding
- X-Delete-After: X-Delete-After
- Content-Type: Content-Type
- X-Detect-Content-Type: X-Detect-Content-Type
- X-Trans-Id-Extra: X-Trans-Id-Extra
Response Parameters
-------------------
.. rest_parameters:: parameters.yaml
- Date: Date
- X-Timestamp: X-Timestamp
- Content-Length: Content-Length
- Content-Type: Content-Type
- X-Trans-Id: X-Trans-Id

@ -0,0 +1,37 @@
.. -*- rst -*-
=========
Endpoints
=========
If configured, lists endpoints for an account.
List endpoints
==============
.. rest_method:: GET /v1/endpoints
Lists endpoints for an object, account, or container.
When the cloud provider enables middleware to list the
``/endpoints/`` path, software that needs data location information
can use this call to avoid network overhead. The cloud provider can
map the ``/endpoints/`` path to another resource, so this exact
resource might vary from provider to provider. Because it goes
straight to the middleware, the call is not authenticated, so be
sure you have tightly secured the environment and network when
using this call.
Error response codes: 201
Request
-------
This operation does not accept a request body.

@ -0,0 +1,41 @@
.. -*- rst -*-
===============
Discoverability
===============
If configured, lists the activated capabilities for this version of
the OpenStack Object Storage API.
List activated capabilities
===========================
.. rest_method:: GET /info
Lists the activated capabilities for this version of the OpenStack Object Storage API.
Normal response codes: 200
Request
-------
.. rest_parameters:: parameters.yaml
- swiftinfo_sig: swiftinfo_sig
- swiftinfo_expires: swiftinfo_expires
Response Example
----------------
.. literalinclude:: samples/capabilities-list-response.json
:language: javascript

@ -14,6 +14,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import print_function
import os
import sys
from hashlib import md5
@ -78,7 +79,7 @@ class Auditor(object):
container_listing = self.audit_container(account, container)
consistent = True
if name not in container_listing:
print " Object %s missing in container listing!" % path
print(" Object %s missing in container listing!" % path)
consistent = False
hash = None
else:
@ -99,14 +100,14 @@ class Auditor(object):
if resp.status // 100 != 2:
self.object_not_found += 1
consistent = False
print ' Bad status GETting object "%s" on %s/%s' \
% (path, node['ip'], node['device'])
print(' Bad status GETting object "%s" on %s/%s'
% (path, node['ip'], node['device']))
continue
if resp.getheader('ETag').strip('"') != calc_hash:
self.object_checksum_mismatch += 1
consistent = False
print ' MD5 does not match etag for "%s" on %s/%s' \
% (path, node['ip'], node['device'])
print(' MD5 does not match etag for "%s" on %s/%s'
% (path, node['ip'], node['device']))
etags.append(resp.getheader('ETag'))
else:
conn = http_connect(node['ip'], node['port'],
@ -116,28 +117,29 @@ class Auditor(object):
if resp.status // 100 != 2:
self.object_not_found += 1
consistent = False
print ' Bad status HEADing object "%s" on %s/%s' \
% (path, node['ip'], node['device'])
print(' Bad status HEADing object "%s" on %s/%s'
% (path, node['ip'], node['device']))
continue
etags.append(resp.getheader('ETag'))
except Exception:
self.object_exceptions += 1
consistent = False
print ' Exception fetching object "%s" on %s/%s' \
% (path, node['ip'], node['device'])
print(' Exception fetching object "%s" on %s/%s'
% (path, node['ip'], node['device']))
continue
if not etags:
consistent = False
print " Failed fo fetch object %s at all!" % path
print(" Failed to fetch object %s at all!" % path)
elif hash:
for etag in etags:
if resp.getheader('ETag').strip('"') != hash:
consistent = False
self.object_checksum_mismatch += 1
print ' ETag mismatch for "%s" on %s/%s' \
% (path, node['ip'], node['device'])
print(' ETag mismatch for "%s" on %s/%s'
% (path, node['ip'], node['device']))
if not consistent and self.error_file:
print >>open(self.error_file, 'a'), path
with open(self.error_file, 'a') as err_file:
print(path, file=err_file)
self.objects_checked += 1
def audit_container(self, account, name, recurse=False):
@ -146,13 +148,13 @@ class Auditor(object):
if (account, name) in self.list_cache:
return self.list_cache[(account, name)]
self.in_progress[(account, name)] = Event()
print 'Auditing container "%s"' % name
print('Auditing container "%s"' % name)
path = '/%s/%s' % (account, name)
account_listing = self.audit_account(account)
consistent = True
if name not in account_listing:
consistent = False
print " Container %s not in account listing!" % path
print(" Container %s not in account listing!" % path)
part, nodes = \
self.container_ring.get_nodes(account, name.encode('utf-8'))
rec_d = {}
@ -180,8 +182,8 @@ class Auditor(object):
except Exception:
self.container_exceptions += 1
consistent = False
print ' Exception GETting container "%s" on %s/%s' % \
(path, node['ip'], node['device'])
print(' Exception GETting container "%s" on %s/%s' %
(path, node['ip'], node['device']))
break
if results:
marker = results[-1]['name']
@ -202,13 +204,15 @@ class Auditor(object):
for header in responses.values()]
if not obj_counts:
consistent = False
print " Failed to fetch container %s at all!" % path
print(" Failed to fetch container %s at all!" % path)
else:
if len(set(obj_counts)) != 1:
self.container_count_mismatch += 1
consistent = False
print " Container databases don't agree on number of objects."
print " Max: %s, Min: %s" % (max(obj_counts), min(obj_counts))
print(
" Container databases don't agree on number of objects.")
print(
" Max: %s, Min: %s" % (max(obj_counts), min(obj_counts)))
self.containers_checked += 1
self.list_cache[(account, name)] = rec_d
self.in_progress[(account, name)].send(True)
@ -217,7 +221,8 @@ class Auditor(object):
for obj in rec_d.keys():
self.pool.spawn_n(self.audit_object, account, name, obj)
if not consistent and self.error_file:
print >>open(self.error_file, 'a'), path
with open(self.error_file, 'a') as error_file:
print(path, file=error_file)
return rec_d
def audit_account(self, account, recurse=False):
@ -226,7 +231,7 @@ class Auditor(object):
if account in self.list_cache:
return self.list_cache[account]
self.in_progress[account] = Event()
print 'Auditing account "%s"' % account
print('Auditing account "%s"' % account)
consistent = True
path = '/%s' % account
part, nodes = self.account_ring.get_nodes(account)
@ -270,8 +275,8 @@ class Auditor(object):
print(" Account databases for '%s' don't agree on"
" number of containers." % account)
if cont_counts:
print " Max: %s, Min: %s" % (max(cont_counts),
min(cont_counts))
print(" Max: %s, Min: %s" % (max(cont_counts),
min(cont_counts)))
obj_counts = [int(header['x-account-object-count'])
for header in headers]
if len(set(obj_counts)) != 1:
@ -280,8 +285,8 @@ class Auditor(object):
print(" Account databases for '%s' don't agree on"
" number of objects." % account)
if obj_counts:
print " Max: %s, Min: %s" % (max(obj_counts),
min(obj_counts))
print(" Max: %s, Min: %s" % (max(obj_counts),
min(obj_counts)))
containers = set()
for resp in responses.values():
containers.update(container['name'] for container in resp[1])
@ -294,7 +299,8 @@ class Auditor(object):
self.pool.spawn_n(self.audit_container, account,
container, True)
if not consistent and self.error_file:
print >>open(self.error_file, 'a'), path
with open(self.error_file, 'a') as error_file:
print(path, file=error_file)
return containers
def audit(self, account, container=None, obj=None):
@ -312,9 +318,9 @@ class Auditor(object):
def _print_stat(name, stat):
# Right align stat name in a field of 18 characters
print "{0:>18}: {1}".format(name, stat)
print("{0:>18}: {1}".format(name, stat))
print
print()
_print_stat("Accounts checked", self.accounts_checked)
if self.account_not_found:
_print_stat("Missing Replicas", self.account_not_found)
@ -324,7 +330,7 @@ class Auditor(object):
_print_stat("Container mismatch", self.account_container_mismatch)
if self.account_object_mismatch:
_print_stat("Object mismatch", self.account_object_mismatch)
print
print()
_print_stat("Containers checked", self.containers_checked)
if self.container_not_found:
_print_stat("Missing Replicas", self.container_not_found)
@ -334,7 +340,7 @@ class Auditor(object):
_print_stat("Count mismatch", self.container_count_mismatch)
if self.container_obj_mismatch:
_print_stat("Object mismatch", self.container_obj_mismatch)
print
print()
_print_stat("Objects checked", self.objects_checked)
if self.object_not_found:
_print_stat("Missing Replicas", self.object_not_found)
@ -348,11 +354,11 @@ if __name__ == '__main__':
try:
optlist, args = getopt.getopt(sys.argv[1:], 'c:r:e:d')
except getopt.GetoptError as err:
print str(err)
print usage
print(str(err))
print(usage)
sys.exit(2)
if not args and os.isatty(sys.stdin.fileno()):
print usage
print(usage)
sys.exit()
opts = dict(optlist)
options = {

@ -12,6 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import print_function
import optparse
import os
import sys
@ -64,7 +65,7 @@ def main():
else:
conf_files += Server(arg).conf_files(**options)
for conf_file in conf_files:
print '# %s' % conf_file
print('# %s' % conf_file)
if options['wsgi']:
app_config = appconfig(conf_file)
conf = inspect_app_config(app_config)
@ -77,13 +78,13 @@ def main():
if not isinstance(v, dict):
flat_vars[k] = v
continue
print '[%s]' % k
print('[%s]' % k)
for opt, value in v.items():
print '%s = %s' % (opt, value)
print
print('%s = %s' % (opt, value))
print()
for k, v in flat_vars.items():
print '# %s = %s' % (k, v)
print
print('# %s = %s' % (k, v))
print()
if __name__ == "__main__":
sys.exit(main())

@ -13,6 +13,7 @@
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import print_function
import traceback
from optparse import OptionParser
@ -34,7 +35,6 @@ from swift.common.ring import Ring
from swift.common.utils import compute_eta, get_time_units, config_true_value
from swift.common.storage_policy import POLICIES
insecure = False
@ -77,9 +77,9 @@ def report(success):
return
next_report = time() + 5
eta, eta_unit = compute_eta(begun, created, need_to_create)
print ('\r\x1B[KCreating %s: %d of %d, %d%s left, %d retries'
% (item_type, created, need_to_create, round(eta), eta_unit,
retries_done)),
print('\r\x1B[KCreating %s: %d of %d, %d%s left, %d retries'
% (item_type, created, need_to_create, round(eta), eta_unit,
retries_done), end='')
stdout.flush()
@@ -105,9 +105,9 @@ Usage: %%prog [options] [conf_file]
help='Allow accessing insecure keystone server. '
'The keystone\'s certificate will not be verified.')
parser.add_option('--no-overlap', action='store_true', default=False,
help='No overlap of partitions if running populate \
help="No overlap of partitions if running populate \
more than once. Will increase coverage by amount shown \
in dispersion.conf file')
in dispersion.conf file")
parser.add_option('-P', '--policy-name', dest='policy_name',
help="Specify storage policy name")
@@ -127,7 +127,7 @@ Usage: %%prog [options] [conf_file]
policy = POLICIES.get_by_name(options.policy_name)
if policy is None:
exit('Unable to find policy: %s' % options.policy_name)
print 'Using storage policy: %s ' % policy.name
print('Using storage policy: %s ' % policy.name)
swift_dir = conf.get('swift_dir', '/etc/swift')
dispersion_coverage = float(conf.get('dispersion_coverage', 1))
@@ -213,15 +213,15 @@ Usage: %%prog [options] [conf_file]
suffix += 1
coropool.waitall()
elapsed, elapsed_unit = get_time_units(time() - begun)
print '\r\x1B[KCreated %d containers for dispersion reporting, ' \
'%d%s, %d retries' % \
print('\r\x1B[KCreated %d containers for dispersion reporting, '
'%d%s, %d retries' %
((need_to_create - need_to_queue), round(elapsed), elapsed_unit,
retries_done)
retries_done))
if options.no_overlap:
con_coverage = container_ring.partition_count - len(parts_left)
print '\r\x1B[KTotal container coverage is now %.2f%%.' % \
print('\r\x1B[KTotal container coverage is now %.2f%%.' %
((float(con_coverage) / container_ring.partition_count
* 100))
* 100)))
stdout.flush()
if object_populate:
@@ -269,12 +269,12 @@ Usage: %%prog [options] [conf_file]
suffix += 1
coropool.waitall()
elapsed, elapsed_unit = get_time_units(time() - begun)
print '\r\x1B[KCreated %d objects for dispersion reporting, ' \
'%d%s, %d retries' % \
print('\r\x1B[KCreated %d objects for dispersion reporting, '
'%d%s, %d retries' %
((need_to_create - need_to_queue), round(elapsed), elapsed_unit,
retries_done)
retries_done))
if options.no_overlap:
obj_coverage = object_ring.partition_count - len(parts_left)
print '\r\x1B[KTotal object coverage is now %.2f%%.' % \
((float(obj_coverage) / object_ring.partition_count * 100))
print('\r\x1B[KTotal object coverage is now %.2f%%.' %
((float(obj_coverage) / object_ring.partition_count * 100)))
stdout.flush()

View File

@@ -14,6 +14,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import print_function
import json
from collections import defaultdict
from six.moves.configparser import ConfigParser
@@ -54,18 +55,18 @@ def get_error_log(prefix):
if msg_or_exc.http_status == 507:
if identifier not in unmounted:
unmounted.append(identifier)
print >>stderr, 'ERROR: %s is unmounted -- This will ' \
'cause replicas designated for that device to be ' \
'considered missing until resolved or the ring is ' \
'updated.' % (identifier)
print('ERROR: %s is unmounted -- This will '
'cause replicas designated for that device to be '
'considered missing until resolved or the ring is '
'updated.' % (identifier), file=stderr)
stderr.flush()
if debug and identifier not in notfound:
notfound.append(identifier)
print >>stderr, 'ERROR: %s returned a 404' % (identifier)
print('ERROR: %s returned a 404' % (identifier), file=stderr)
stderr.flush()
if not hasattr(msg_or_exc, 'http_status') or \
msg_or_exc.http_status not in (404, 507):
print >>stderr, 'ERROR: %s: %s' % (prefix, msg_or_exc)
print('ERROR: %s: %s' % (prefix, msg_or_exc), file=stderr)
stderr.flush()
return error_log
@@ -77,8 +78,8 @@ def container_dispersion_report(coropool, connpool, account, container_ring,
prefix='dispersion_%d' % policy.idx, full_listing=True)[1]]
containers_listed = len(containers)
if not containers_listed:
print >>stderr, 'No containers to query. Has ' \
'swift-dispersion-populate been run?'
print('No containers to query. Has '
'swift-dispersion-populate been run?', file=stderr)
stderr.flush()
return
retries_done = [0]
@@ -109,10 +110,10 @@ def container_dispersion_report(coropool, connpool, account, container_ring,
if output_missing_partitions and \
found_count < len(nodes):
missing = len(nodes) - found_count
print '\r\x1B[K',
print('\r\x1B[K', end='')
stdout.flush()
print >>stderr, '# Container partition %s missing %s cop%s' % (
part, missing, 'y' if missing == 1 else 'ies')
print('# Container partition %s missing %s cop%s' % (
part, missing, 'y' if missing == 1 else 'ies'), file=stderr)
container_copies_found[0] += found_count
containers_queried[0] += 1
container_copies_missing[len(nodes) - found_count] += 1
@@ -121,9 +122,10 @@ def container_dispersion_report(coropool, connpool, account, container_ring,
eta, eta_unit = compute_eta(begun, containers_queried[0],
containers_listed)
if not json_output:
print '\r\x1B[KQuerying containers: %d of %d, %d%s left, %d ' \
print('\r\x1B[KQuerying containers: %d of %d, %d%s left, %d '
'retries' % (containers_queried[0], containers_listed,
round(eta), eta_unit, retries_done[0]),
end='')
stdout.flush()
container_parts = {}
for container in containers:
@@ -140,19 +142,19 @@ def container_dispersion_report(coropool, connpool, account, container_ring,
elapsed, elapsed_unit = get_time_units(time() - begun)
container_copies_missing.pop(0, None)
if not json_output:
print '\r\x1B[KQueried %d containers for dispersion reporting, ' \
print('\r\x1B[KQueried %d containers for dispersion reporting, '
'%d%s, %d retries' % (containers_listed, round(elapsed),
elapsed_unit, retries_done[0])
elapsed_unit, retries_done[0]))
if containers_listed - distinct_partitions:
print 'There were %d overlapping partitions' % (
containers_listed - distinct_partitions)
print('There were %d overlapping partitions' % (
containers_listed - distinct_partitions))
for missing_copies, num_parts in container_copies_missing.items():
print missing_string(num_parts, missing_copies,
container_ring.replica_count)
print '%.02f%% of container copies found (%d of %d)' % (
value, copies_found, copies_expected)
print 'Sample represents %.02f%% of the container partition space' % (
100.0 * distinct_partitions / container_ring.partition_count)
print(missing_string(num_parts, missing_copies,
container_ring.replica_count))
print('%.02f%% of container copies found (%d of %d)' % (
value, copies_found, copies_expected))
print('Sample represents %.02f%% of the container partition space' % (
100.0 * distinct_partitions / container_ring.partition_count))
stdout.flush()
return None
else:
@@ -177,14 +179,14 @@ def object_dispersion_report(coropool, connpool, account, object_ring,
if err.http_status != 404:
raise
print >>stderr, 'No objects to query. Has ' \
'swift-dispersion-populate been run?'
print('No objects to query. Has '
'swift-dispersion-populate been run?', file=stderr)
stderr.flush()
return
objects_listed = len(objects)
if not objects_listed:
print >>stderr, 'No objects to query. Has swift-dispersion-populate ' \
'been run?'
print('No objects to query. Has swift-dispersion-populate '
'been run?', file=stderr)
stderr.flush()
return
retries_done = [0]
@@ -221,10 +223,10 @@ def object_dispersion_report(coropool, connpool, account, object_ring,
if output_missing_partitions and \
found_count < len(nodes):
missing = len(nodes) - found_count
print '\r\x1B[K',
print('\r\x1B[K', end='')
stdout.flush()
print >>stderr, '# Object partition %s missing %s cop%s' % (
part, missing, 'y' if missing == 1 else 'ies')
print('# Object partition %s missing %s cop%s' % (
part, missing, 'y' if missing == 1 else 'ies'), file=stderr)
object_copies_found[0] += found_count
object_copies_missing[len(nodes) - found_count] += 1
objects_queried[0] += 1
@@ -233,9 +235,10 @@ def object_dispersion_report(coropool, connpool, account, object_ring,
eta, eta_unit = compute_eta(begun, objects_queried[0],
objects_listed)
if not json_output:
print '\r\x1B[KQuerying objects: %d of %d, %d%s left, %d ' \
print('\r\x1B[KQuerying objects: %d of %d, %d%s left, %d '
'retries' % (objects_queried[0], objects_listed,
round(eta), eta_unit, retries_done[0]),
end='')
stdout.flush()
object_parts = {}
for obj in objects:
@@ -251,21 +254,21 @@ def object_dispersion_report(coropool, connpool, account, object_ring,
value = 100.0 * copies_found / copies_expected
elapsed, elapsed_unit = get_time_units(time() - begun)
if not json_output:
print '\r\x1B[KQueried %d objects for dispersion reporting, ' \
print('\r\x1B[KQueried %d objects for dispersion reporting, '
'%d%s, %d retries' % (objects_listed, round(elapsed),
elapsed_unit, retries_done[0])
elapsed_unit, retries_done[0]))
if objects_listed - distinct_partitions:
print 'There were %d overlapping partitions' % (
objects_listed - distinct_partitions)
print('There were %d overlapping partitions' % (
objects_listed - distinct_partitions))
for missing_copies, num_parts in object_copies_missing.items():
print missing_string(num_parts, missing_copies,
object_ring.replica_count)
print(missing_string(num_parts, missing_copies,
object_ring.replica_count))
print '%.02f%% of object copies found (%d of %d)' % \
(value, copies_found, copies_expected)
print 'Sample represents %.02f%% of the object partition space' % (
100.0 * distinct_partitions / object_ring.partition_count)
print('%.02f%% of object copies found (%d of %d)' %
(value, copies_found, copies_expected))
print('Sample represents %.02f%% of the object partition space' % (
100.0 * distinct_partitions / object_ring.partition_count))
stdout.flush()
return None
else:
@@ -347,7 +350,7 @@ Usage: %%prog [options] [conf_file]
policy = POLICIES.get_by_name(options.policy_name)
if policy is None:
exit('Unable to find policy: %s' % options.policy_name)
print 'Using storage policy: %s ' % policy.name
print('Using storage policy: %s ' % policy.name)
swift_dir = conf.get('swift_dir', '/etc/swift')
retries = int(conf.get('retries', 5))
@@ -405,4 +408,4 @@ Usage: %%prog [options] [conf_file]
coropool, connpool, account, object_ring, retries,
options.partitions, policy)
if json_output:
print json.dumps(output)
print(json.dumps(output))
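A side note on the percentages this report prints: the coverage figure is simply the sampled distinct partitions over the ring's total partition count. A tiny standalone sketch, with invented numbers (a real ring supplies its own `partition_count`):

```python
# Made-up numbers illustrating the coverage percentage the dispersion
# report prints; they are not taken from any real ring.
partition_count = 262144      # e.g. a ring built with part_power = 18
distinct_partitions = 2621    # partitions the dispersion sample touched

coverage = 100.0 * distinct_partitions / partition_count
print('Sample represents %.02f%% of the partition space' % coverage)
```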

View File

@@ -142,10 +142,10 @@ if __name__ == '__main__':
try:
conf_path = sys.argv[1]
except Exception:
print "Usage: %s CONF_FILE" % sys.argv[0].split('/')[-1]
print("Usage: %s CONF_FILE" % sys.argv[0].split('/')[-1])
sys.exit(1)
if not c.read(conf_path):
print "Unable to read config file %s" % conf_path
print("Unable to read config file %s" % conf_path)
sys.exit(1)
conf = dict(c.items('drive-audit'))
device_dir = conf.get('device_dir', '/srv/node')

View File

@@ -74,7 +74,7 @@ if __name__ == '__main__':
ring_name = args[0].rsplit('/', 1)[-1].split('.', 1)[0]
ring = Ring(args[0])
else:
print 'Ring file does not exist'
print('Ring file does not exist')
args.pop(0)
try:

View File

@@ -26,6 +26,14 @@ USAGE = \
where:
<server> is the name of a swift service e.g. proxy-server.
The '-server' part of the name may be omitted.
'all', 'main' and 'rest' are reserved words that represent a
group of services.
all: Expands to all swift daemons.
main: Expands to main swift daemons.
(proxy, container, account, object)
rest: Expands to all remaining background daemons (beyond
"main").
(updater, replicator, auditor, etc)
<config> is an explicit configuration filename without the
.conf extension. If <config> is specified then <server> should
refer to a directory containing the configuration file, e.g.:
@@ -84,7 +92,7 @@ def main():
if len(args) < 2:
parser.print_help()
print 'ERROR: specify server(s) and command'
print('ERROR: specify server(s) and command')
return 1
command = args[-1]
@@ -101,7 +109,7 @@ def main():
status = manager.run_command(command, **options.__dict__)
except UnknownCommandError:
parser.print_help()
print 'ERROR: unknown command, %s' % command
print('ERROR: unknown command, %s' % command)
status = 1
return 1 if status else 0

View File

@@ -12,6 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import print_function
import optparse
import subprocess
import sys
@@ -30,16 +31,22 @@ Lists old Swift processes.
listing = []
for line in subprocess.Popen(
['ps', '-eo', 'etime,pid,args', '--no-headers'],
stdout=subprocess.PIPE).communicate()[0].split('\n'):
stdout=subprocess.PIPE).communicate()[0].split(b'\n'):
if not line:
continue
hours = 0
try:
etime, pid, args = line.split(None, 2)
etime, pid, args = line.decode('ascii').split(None, 2)
except ValueError:
# This covers both decoding and not-enough-values-to-unpack errors
sys.exit('Could not process ps line %r' % line)
if not args.startswith('/usr/bin/python /usr/bin/swift-') and \
not args.startswith('/usr/bin/python /usr/local/bin/swift-'):
if not args.startswith((
'/usr/bin/python /usr/bin/swift-',
'/usr/bin/python /usr/local/bin/swift-',
'/bin/python /usr/bin/swift-',
'/usr/bin/python3 /usr/bin/swift-',
'/usr/bin/python3 /usr/local/bin/swift-',
'/bin/python3 /usr/bin/swift-')):
continue
args = args.split('-', 1)[1]
etime = etime.split('-')
@@ -70,8 +77,6 @@ Lists old Swift processes.
args_len = max(args_len, len(args))
args_len = min(args_len, 78 - hours_len - pid_len)
print ('%%%ds %%%ds %%s' % (hours_len, pid_len)) % \
('Hours', 'PID', 'Command')
print('%*s %*s %s' % (hours_len, 'Hours', pid_len, 'PID', 'Command'))
for hours, pid, args in listing:
print ('%%%ds %%%ds %%s' % (hours_len, pid_len)) % \
(hours, pid, args[:args_len])
print('%*s %*s %s' % (hours_len, hours, pid_len, pid, args[:args_len]))
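The rewritten listing output above also swaps the old two-step trick of building a format string (`'%%%ds' % width`) for the `*` dynamic field width, which reads the width straight from the argument tuple. Both produce identical padding; a quick sketch with invented widths (swift-oldies computes them from the process listing):

```python
# Hypothetical column widths, standing in for the ones swift-oldies
# derives from the 'ps' listing.
hours_len, pid_len = 5, 7

# Old style: first build '%5s %7s %s', then apply it.
old = ('%%%ds %%%ds %%s' % (hours_len, pid_len)) % ('Hours', 'PID', 'Command')
# New style: '*' consumes the width from the argument tuple directly.
new = '%*s %*s %s' % (hours_len, 'Hours', pid_len, 'PID', 'Command')

assert old == new  # same padded header either way
print(new)
```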

View File

@@ -12,6 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import print_function
import optparse
import os
import signal
@@ -104,11 +105,11 @@ Example (sends SIGTERM to all orphaned Swift processes older than two hours):
args_len = max(args_len, len(args))
args_len = min(args_len, 78 - hours_len - pid_len)
print ('%%%ds %%%ds %%s' % (hours_len, pid_len)) % \
('Hours', 'PID', 'Command')
print(('%%%ds %%%ds %%s' % (hours_len, pid_len)) %
('Hours', 'PID', 'Command'))
for hours, pid, args in listing:
print ('%%%ds %%%ds %%s' % (hours_len, pid_len)) % \
(hours, pid, args[:args_len])
print(('%%%ds %%%ds %%s' % (hours_len, pid_len)) %
(hours, pid, args[:args_len]))
if options.signal:
try:
@@ -120,7 +121,8 @@ Example (sends SIGTERM to all orphaned Swift processes older than two hours):
if not signum:
sys.exit('Could not translate %r to a signal number.' %
options.signal)
print 'Sending processes %s (%d) signal...' % (options.signal, signum),
print('Sending processes %s (%d) signal...' % (options.signal, signum),
end='')
for hours, pid, args in listing:
os.kill(int(pid), signum)
print 'Done.'
print('Done.')

View File

@@ -50,11 +50,11 @@ def main():
try:
conf_path = sys.argv[1]
except Exception:
print "Usage: %s CONF_FILE" % sys.argv[0].split('/')[-1]
print "ex: swift-recon-cron /etc/swift/object-server.conf"
print("Usage: %s CONF_FILE" % sys.argv[0].split('/')[-1])
print("ex: swift-recon-cron /etc/swift/object-server.conf")
sys.exit(1)
if not c.read(conf_path):
print "Unable to read config file %s" % conf_path
print("Unable to read config file %s" % conf_path)
sys.exit(1)
conf = dict(c.items('filter:recon'))
device_dir = conf.get('devices', '/srv/node')
@@ -68,7 +68,7 @@ def main():
os.mkdir(lock_dir)
except OSError as e:
logger.critical(str(e))
print str(e)
print(str(e))
sys.exit(1)
try:
asyncs = get_async_count(device_dir, logger)

View File

@@ -11,6 +11,7 @@
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import print_function
import sys
from optparse import OptionParser
@@ -67,7 +68,7 @@ def main():
policy.idx, timestamp, options.op, force=options.force)
if not container_name:
return 'ERROR: unable to enqueue!'
print container_name
print(container_name)
if __name__ == "__main__":

View File

@@ -12,6 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import print_function
import hmac
from hashlib import sha1
from os.path import basename
@@ -24,28 +25,28 @@ from six.moves import urllib
if __name__ == '__main__':
if len(argv) < 5:
prog = basename(argv[0])
print 'Syntax: %s <method> <seconds> <path> <key>' % prog
print
print 'Where:'
print ' <method> The method to allow; GET for example.'
print ' <seconds> The number of seconds from now to allow requests.'
print ' <path> The full path to the resource.'
print ' Example: /v1/AUTH_account/c/o'
print ' <key> The X-Account-Meta-Temp-URL-Key for the account.'
print
print 'Example output:'
print ' /v1/AUTH_account/c/o?temp_url_sig=34d49efc32fe6e3082e411e' \
'eeb85bd8a&temp_url_expires=1323482948'
print
print 'This can be used to form a URL to give out for the access '
print 'allowed. For example:'
print ' echo https://swift-cluster.example.com`%s GET 60 ' \
'/v1/AUTH_account/c/o mykey`' % prog
print
print 'Might output:'
print ' https://swift-cluster.example.com/v1/AUTH_account/c/o?' \
'temp_url_sig=34d49efc32fe6e3082e411eeeb85bd8a&' \
'temp_url_expires=1323482948'
print('Syntax: %s <method> <seconds> <path> <key>' % prog)
print()
print('Where:')
print(' <method> The method to allow; GET for example.')
print(' <seconds> The number of seconds from now to allow requests.')
print(' <path> The full path to the resource.')
print(' Example: /v1/AUTH_account/c/o')
print(' <key> The X-Account-Meta-Temp-URL-Key for the account.')
print()
print('Example output:')
print(' /v1/AUTH_account/c/o?temp_url_sig=34d49efc32fe6e3082e411e'
'eeb85bd8a&temp_url_expires=1323482948')
print()
print('This can be used to form a URL to give out for the access ')
print('allowed. For example:')
print(' echo \\"https://swift-cluster.example.com`%s GET 60 '
'/v1/AUTH_account/c/o mykey`\\"' % prog)
print()
print('Might output:')
print(' "https://swift-cluster.example.com/v1/AUTH_account/c/o?'
'temp_url_sig=34d49efc32fe6e3082e411eeeb85bd8a&'
'temp_url_expires=1323482948"')
exit(1)
method, seconds, path, key = argv[1:5]
try:
@@ -53,7 +54,7 @@ if __name__ == '__main__':
except ValueError:
expires = 0
if expires < 1:
print 'Please use a positive <seconds> value.'
print('Please use a positive <seconds> value.')
exit(1)
parts = path.split('/', 4)
# Must be five parts, ['', 'v1', 'a', 'c', 'o'], must be a v1 request, have
@@ -72,4 +73,4 @@ if __name__ == '__main__':
real_path = path
sig = hmac.new(key, '%s\n%s\n%s' % (method, expires, real_path),
sha1).hexdigest()
print '%s?temp_url_sig=%s&temp_url_expires=%s' % (path, sig, expires)
print('%s?temp_url_sig=%s&temp_url_expires=%s' % (path, sig, expires))
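For context, the signature printed above is plain HMAC-SHA1 over `METHOD\nexpires\npath`, exactly as the `hmac.new(...)` call in the hunk shows. A hedged Python 3 sketch of the same computation (on Python 3, `hmac.new` requires bytes; the key and path here are invented, a real key comes from `X-Account-Meta-Temp-URL-Key`):

```python
import hmac
from hashlib import sha1

# Invented example inputs for illustration only.
method, expires, path = 'GET', 1323482948, '/v1/AUTH_account/c/o'
key = b'mykey'

# Sign "METHOD\nexpires\npath", matching the hmac_body in the script above.
hmac_body = ('%s\n%s\n%s' % (method, expires, path)).encode('utf-8')
sig = hmac.new(key, hmac_body, sha1).hexdigest()
print('%s?temp_url_sig=%s&temp_url_expires=%s' % (path, sig, expires))
```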

View File

@@ -13,3 +13,5 @@ python-dev [platform:dpkg]
python-devel [platform:rpm]
rsync
xfsprogs
libssl-dev [platform:dpkg]
openssl-devel [platform:rpm]

View File

@@ -20,7 +20,7 @@
.SH NAME
.LP
.B account-server.conf
\- configuration file for the openstack-swift account server
\- configuration file for the OpenStack Swift account server
@@ -77,7 +77,7 @@ The system user that the account server will run as. The default is swift.
.IP \fBswift_dir\fR
Swift configuration directory. The default is /etc/swift.
.IP \fBdevices\fR
Parent directory or where devices are mounted. Default is /srv/node.
Parent directory of where devices are mounted. Default is /srv/node.
.IP \fBmount_check\fR
Whether or not check if the devices are mounted to prevent accidentally writing to
the root device. The default is set to true.
@@ -125,6 +125,20 @@ You can set fallocate_reserve to the number of bytes or percentage of disk
space you'd like fallocate to reserve, whether there is space for the given
file size or not. Percentage will be used if the value ends with a '%'.
The default is 1%.
.IP \fBnice_priority\fR
Modify scheduling priority of server processes. Niceness values range from -20
(most favorable to the process) to 19 (least favorable to the process).
The default does not modify priority.
.IP \fBionice_class\fR
Modify I/O scheduling class of server processes. I/O niceness class values
are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and priority.
Work only with ionice_priority.
.IP \fBionice_priority\fR
Modify I/O scheduling priority of server processes. I/O niceness priority
is a number which goes from 0 to 7. The higher the value, the lower
the I/O priority of the process. Work only with ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
.RE
.PD
@@ -172,6 +186,20 @@ To handle all verbs, including replication verbs, do not specify
set to a true value (e.g. "true" or "1"). To handle only non-replication
verbs, set to "false". Unless you have a separate replication network, you
should not specify any value for "replication_server". The default is empty.
.IP \fBnice_priority\fR
Modify scheduling priority of server processes. Niceness values range from -20
(most favorable to the process) to 19 (least favorable to the process).
The default does not modify priority.
.IP \fBionice_class\fR
Modify I/O scheduling class of server processes. I/O niceness class values
are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and priority.
Work only with ionice_priority.
.IP \fBionice_priority\fR
Modify I/O scheduling priority of server processes. I/O niceness priority
is a number which goes from 0 to 7. The higher the value, the lower
the I/O priority of the process. Work only with ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
.RE
.PD
@@ -281,6 +309,20 @@ Format of the rysnc module where the replicator will send data. See
etc/rsyncd.conf-sample for some usage examples.
.IP \fBrecon_cache_path\fR
Path to recon cache directory. The default is /var/cache/swift.
.IP \fBnice_priority\fR
Modify scheduling priority of server processes. Niceness values range from -20
(most favorable to the process) to 19 (least favorable to the process).
The default does not modify priority.
.IP \fBionice_class\fR
Modify I/O scheduling class of server processes. I/O niceness class values
are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and priority.
Work only with ionice_priority.
.IP \fBionice_priority\fR
Modify I/O scheduling priority of server processes. I/O niceness priority
is a number which goes from 0 to 7. The higher the value, the lower
the I/O priority of the process. Work only with ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
.RE
@@ -303,6 +345,20 @@ Will audit, at most, 1 account per device per interval. The default is 1800 seco
Maximum accounts audited per second. Should be tuned according to individual system specs. 0 is unlimited. The default is 200.
.IP \fBrecon_cache_path\fR
Path to recon cache directory. The default is /var/cache/swift.
.IP \fBnice_priority\fR
Modify scheduling priority of server processes. Niceness values range from -20
(most favorable to the process) to 19 (least favorable to the process).
The default does not modify priority.
.IP \fBionice_class\fR
Modify I/O scheduling class of server processes. I/O niceness class values
are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and priority.
Work only with ionice_priority.
.IP \fBionice_priority\fR
Modify I/O scheduling priority of server processes. I/O niceness priority
is a number which goes from 0 to 7. The higher the value, the lower
the I/O priority of the process. Work only with ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
.RE
@@ -339,6 +395,20 @@ You can search logs for this message if space is not being reclaimed
after you delete account(s).
Default is 2592000 seconds (30 days). This is in addition to any time
requested by delay_reaping.
.IP \fBnice_priority\fR
Modify scheduling priority of server processes. Niceness values range from -20
(most favorable to the process) to 19 (least favorable to the process).
The default does not modify priority.
.IP \fBionice_class\fR
Modify I/O scheduling class of server processes. I/O niceness class values
are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and priority.
Work only with ionice_priority.
.IP \fBionice_priority\fR
Modify I/O scheduling priority of server processes. I/O niceness priority
is a number which goes from 0 to 7. The higher the value, the lower
the I/O priority of the process. Work only with ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
.RE
.PD
@@ -348,7 +418,7 @@ requested by delay_reaping.
.SH DOCUMENTATION
.LP
More in depth documentation about the swift-account-server and
also Openstack-Swift as a whole can be found at
also OpenStack Swift as a whole can be found at
.BI http://swift.openstack.org/admin_guide.html
and
.BI http://swift.openstack.org

View File

@@ -20,7 +20,7 @@
.SH NAME
.LP
.B container-server.conf
\- configuration file for the openstack-swift container server
\- configuration file for the OpenStack Swift container server
@@ -83,7 +83,7 @@ The system user that the container server will run as. The default is swift.
.IP \fBswift_dir\fR
Swift configuration directory. The default is /etc/swift.
.IP \fBdevices\fR
Parent directory or where devices are mounted. Default is /srv/node.
Parent directory of where devices are mounted. Default is /srv/node.
.IP \fBmount_check\fR
Whether or not check if the devices are mounted to prevent accidentally writing to
the root device. The default is set to true.
@@ -131,6 +131,20 @@ You can set fallocate_reserve to the number of bytes or percentage of disk
space you'd like fallocate to reserve, whether there is space for the given
file size or not. Percentage will be used if the value ends with a '%'.
The default is 1%.
.IP \fBnice_priority\fR
Modify scheduling priority of server processes. Niceness values range from -20
(most favorable to the process) to 19 (least favorable to the process).
The default does not modify priority.
.IP \fBionice_class\fR
Modify I/O scheduling class of server processes. I/O niceness class values
are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and priority.
Work only with ionice_priority.
.IP \fBionice_priority\fR
Modify I/O scheduling priority of server processes. I/O niceness priority
is a number which goes from 0 to 7. The higher the value, the lower
the I/O priority of the process. Work only with ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
.RE
.PD
@@ -184,6 +198,20 @@ To handle all verbs, including replication verbs, do not specify
set to a True value (e.g. "True" or "1"). To handle only non-replication
verbs, set to "False". Unless you have a separate replication network, you
should not specify any value for "replication_server".
.IP \fBnice_priority\fR
Modify scheduling priority of server processes. Niceness values range from -20
(most favorable to the process) to 19 (least favorable to the process).
The default does not modify priority.
.IP \fBionice_class\fR
Modify I/O scheduling class of server processes. I/O niceness class values
are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and priority.
Work only with ionice_priority.
.IP \fBionice_priority\fR
Modify I/O scheduling priority of server processes. I/O niceness priority
is a number which goes from 0 to 7. The higher the value, the lower
the I/O priority of the process. Work only with ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
.RE
.PD
@@ -293,6 +321,20 @@ Format of the rysnc module where the replicator will send data. See
etc/rsyncd.conf-sample for some usage examples.
.IP \fBrecon_cache_path\fR
Path to recon cache directory. The default is /var/cache/swift.
.IP \fBnice_priority\fR
Modify scheduling priority of server processes. Niceness values range from -20
(most favorable to the process) to 19 (least favorable to the process).
The default does not modify priority.
.IP \fBionice_class\fR
Modify I/O scheduling class of server processes. I/O niceness class values
are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and priority.
Work only with ionice_priority.
.IP \fBionice_priority\fR
Modify I/O scheduling priority of server processes. I/O niceness priority
is a number which goes from 0 to 7. The higher the value, the lower
the I/O priority of the process. Work only with ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
.RE
@@ -322,6 +364,20 @@ Slowdown will sleep that amount between containers. The default is 0.01 seconds.
Seconds to suppress updating an account that has generated an error. The default is 60 seconds.
.IP \fBrecon_cache_path\fR
Path to recon cache directory. The default is /var/cache/swift.
.IP \fBnice_priority\fR
Modify scheduling priority of server processes. Niceness values range from -20
(most favorable to the process) to 19 (least favorable to the process).
The default does not modify priority.
.IP \fBionice_class\fR
Modify I/O scheduling class of server processes. I/O niceness class values
are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and priority.
Work only with ionice_priority.
.IP \fBionice_priority\fR
Modify I/O scheduling priority of server processes. I/O niceness priority
is a number which goes from 0 to 7. The higher the value, the lower
the I/O priority of the process. Work only with ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
.RE
.PD
@@ -344,6 +400,20 @@ Will audit, at most, 1 container per device per interval. The default is 1800 se
Maximum containers audited per second. Should be tuned according to individual system specs. 0 is unlimited. The default is 200.
.IP \fBrecon_cache_path\fR
Path to recon cache directory. The default is /var/cache/swift.
.IP \fBnice_priority\fR
Modify scheduling priority of server processes. Niceness values range from -20
(most favorable to the process) to 19 (least favorable to the process).
The default does not modify priority.
.IP \fBionice_class\fR
Modify I/O scheduling class of server processes. I/O niceness class values
are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and priority.
Work only with ionice_priority.
.IP \fBionice_priority\fR
Modify I/O scheduling priority of server processes. I/O niceness priority
is a number which goes from 0 to 7. The higher the value, the lower
the I/O priority of the process. Work only with ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
.RE
@@ -372,6 +442,20 @@ Connection timeout to external services. The default is 5 seconds.
Server errors from requests will be retried by default. The default is 3.
.IP \fBinternal_client_conf_path\fR
Internal client config file path.
.IP \fBnice_priority\fR
Modify scheduling priority of server processes. Niceness values range from -20
(most favorable to the process) to 19 (least favorable to the process).
The default does not modify priority.
.IP \fBionice_class\fR
Modify I/O scheduling class of server processes. I/O niceness class values
are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and priority.
Work only with ionice_priority.
.IP \fBionice_priority\fR
Modify I/O scheduling priority of server processes. I/O niceness priority
is a number which goes from 0 to 7. The higher the value, the lower
the I/O priority of the process. Work only with ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
.RE
.PD
@ -381,7 +465,7 @@ Internal client config file path.
.SH DOCUMENTATION
.LP
More in depth documentation about the swift-container-server and
also OpenStack Swift as a whole can be found at
.BI http://swift.openstack.org/admin_guide.html
and
.BI http://swift.openstack.org
@ -14,33 +14,33 @@
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"
.TH dispersion.conf 5 "8/26/2011" "Linux" "OpenStack Swift"
.SH NAME
.LP
.B dispersion.conf
\- configuration file for the OpenStack Swift dispersion tools
.SH SYNOPSIS
.LP
.B dispersion.conf
.SH DESCRIPTION
.PP
This is the configuration file used by the dispersion populate and report tools.
The file format consists of the '[dispersion]' module as the header and available parameters.
Any line that begins with a '#' symbol is ignored.
.SH PARAMETERS
.PD 1
.RS 0
.IP "\fBauth_version\fR"
Authentication system API version. The default is 1.0.
.IP "\fBauth_url\fR"
Authentication system URL
.IP "\fBauth_user\fR"
Authentication system account/user name
.IP "\fBauth_key\fR"
Authentication system account/user password
@ -55,7 +55,7 @@ The default is 'publicURL'.
.IP "\fBkeystone_api_insecure\fR"
The default is false.
.IP "\fBswift_dir\fR"
Location of OpenStack Swift configuration and ring files
.IP "\fBdispersion_coverage\fR"
Percentage of partition coverage to use. The default is 1.0.
.IP "\fBretries\fR"
@ -76,7 +76,7 @@ Whether to run the object report. The default is yes.
.PD
.SH SAMPLE
.PD 0
.RS 0
.IP "[dispersion]"
.IP "auth_url = https://127.0.0.1:443/auth/v1.0"
@ -94,15 +94,15 @@ Whether to run the object report. The default is yes.
.IP "# container_report = yes"
.IP "# object_report = yes"
.RE
.PD
.SH DOCUMENTATION
.LP
More in depth documentation about the swift-dispersion utilities and
also OpenStack Swift as a whole can be found at
.BI http://swift.openstack.org/admin_guide.html#cluster-health
and
.BI http://swift.openstack.org
@ -14,13 +14,13 @@
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"
.TH object-expirer.conf 5 "03/15/2012" "Linux" "OpenStack Swift"
.SH NAME
.LP
.B object-expirer.conf
\- configuration file for the OpenStack Swift object expirer daemon
@ -30,38 +30,38 @@
.SH DESCRIPTION
.PP
This is the configuration file used by the object expirer daemon. The daemon's
function is to query the internal hidden expiring_objects_account to discover
objects that need to be deleted and to then delete them.
The configuration file follows the python-pastedeploy syntax. The file is divided
into sections, which are enclosed by square brackets. Each section will contain a
certain number of key/value parameters which are described later.
Any line that begins with a '#' symbol is ignored.
You can find more information about python-pastedeploy configuration format at
\fIhttp://pythonpaste.org/deploy/#config-format\fR
.SH GLOBAL SECTION
.PD 1
.RS 0
This is indicated by the section named [DEFAULT]. Below are the parameters that
are acceptable within this section.
.IP \fBswift_dir\fR
Swift configuration directory. The default is /etc/swift.
.IP \fBuser\fR
The system user that the object server will run as. The default is swift.
.IP \fBlog_name\fR
Label used when logging. The default is swift.
.IP \fBlog_facility\fR
Syslog log facility. The default is LOG_LOCAL0.
.IP \fBlog_level\fR
Logging level. The default is INFO.
.IP \fBlog_address\fR
Logging address. The default is /dev/log.
@ -88,19 +88,33 @@ The default is 1.
The default is 1.
.IP \fBlog_statsd_metric_prefix\fR
The default is empty.
.IP \fBnice_priority\fR
Modify scheduling priority of server processes. Niceness values range from -20
(most favorable to the process) to 19 (least favorable to the process).
The default does not modify priority.
.IP \fBionice_class\fR
Modify I/O scheduling class of server processes. I/O niceness class values
are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and priority.
Works only with ionice_priority.
.IP \fBionice_priority\fR
Modify I/O scheduling priority of server processes. I/O niceness priority
is a number ranging from 0 to 7. The higher the value, the lower
the I/O priority of the process. Works only with ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
.RE
.PD
.SH PIPELINE SECTION
.PD 1
.RS 0
This is indicated by the section named [pipeline:main]. Below are the parameters that
are acceptable within this section.
.IP "\fBpipeline\fR"
It is used when you need to apply a number of filters. It is a list of filters
ended by an application. The default should be \fB"catch_errors cache proxy-server"\fR
.RE
.PD
@ -108,24 +122,38 @@ ended by an application. The default should be \fB"catch_errors cache proxy-serv
.SH APP SECTION
.PD 1
.RS 0
This is indicated by the section named [app:object-server]. Below are the parameters
that are acceptable within this section.
.IP "\fBuse\fR"
Entry point for paste.deploy for the object server. This is the reference to the installed python egg.
The default is \fBegg:swift#proxy\fR. See proxy-server.conf-sample for options, or see the proxy-server.conf manpage.
.IP \fBnice_priority\fR
Modify scheduling priority of server processes. Niceness values range from -20
(most favorable to the process) to 19 (least favorable to the process).
The default does not modify priority.
.IP \fBionice_class\fR
Modify I/O scheduling class of server processes. I/O niceness class values
are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and priority.
Works only with ionice_priority.
.IP \fBionice_priority\fR
Modify I/O scheduling priority of server processes. I/O niceness priority
is a number ranging from 0 to 7. The higher the value, the lower
the I/O priority of the process. Works only with ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
.RE
.PD
.SH FILTER SECTION
.PD 1
.RS 0
Any section that has its name prefixed by "filter:" indicates a filter section.
Filters are used to specify configuration parameters for specific swift middlewares.
Below are the filters available and their respective acceptable parameters.
.RS 0
.IP "\fB[filter:cache]\fR"
@ -140,8 +168,8 @@ The default is \fBegg:swift#memcache\fR. See proxy-server.conf-sample for option
.RE
.RS 0
.IP "\fB[filter:catch_errors]\fR"
.RE
.RS 3
.IP \fBuse\fR
@ -206,9 +234,9 @@ Path to recon cache directory. The default is /var/cache/swift.
.SH DOCUMENTATION
.LP
More in depth documentation about the swift-object-expirer and
also OpenStack Swift as a whole can be found at
.BI http://swift.openstack.org/admin_guide.html
and
.BI http://swift.openstack.org
@ -20,7 +20,7 @@
.SH NAME
.LP
.B object-server.conf
\- configuration file for the OpenStack Swift object server
@ -77,7 +77,7 @@ The system user that the object server will run as. The default is swift.
.IP \fBswift_dir\fR
Swift configuration directory. The default is /etc/swift.
.IP \fBdevices\fR
Parent directory of where devices are mounted. Default is /srv/node.
.IP \fBmount_check\fR
Whether or not to check if the devices are mounted, to prevent accidentally writing to
the root device. The default is set to true.
@ -142,6 +142,20 @@ backend node. The default is 60.
The default is 65536.
.IP \fBdisk_chunk_size\fR
The default is 65536.
.IP \fBnice_priority\fR
Modify scheduling priority of server processes. Niceness values range from -20
(most favorable to the process) to 19 (least favorable to the process).
The default does not modify priority.
.IP \fBionice_class\fR
Modify I/O scheduling class of server processes. I/O niceness class values
are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and priority.
Works only with ionice_priority.
.IP \fBionice_priority\fR
Modify I/O scheduling priority of server processes. I/O niceness priority
is a number ranging from 0 to 7. The higher the value, the lower
the I/O priority of the process. Works only with ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
.RE
.PD
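The niceness semantics described above can be sketched with Python's os.nice(), which reads and raises the calling process's own niceness (raising it needs no privileges, lowering it does):

```python
import os

# os.nice(increment) adds `increment` to the process niceness and returns
# the new value; an increment of 0 simply reads the current niceness.
start = os.nice(0)
# Raise niceness by 5: the process becomes *less* favorable to the CPU
# scheduler, mirroring how a positive nice_priority de-prioritizes a
# Swift server process.
raised = os.nice(5)
assert raised == start + 5
```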
@ -233,6 +247,26 @@ version 3.0 or greater. If you set "splice = yes" but the kernel
does not support it, error messages will appear in the object server
logs at startup, but your object servers should continue to function.
The default is false.
.IP \fBnode_timeout\fR
Request timeout to external services. The default is 3 seconds.
.IP \fBconn_timeout\fR
Connection timeout to external services. The default is 0.5 seconds.
.IP \fBcontainer_update_timeout\fR
Time to wait while sending a container update on object update. The default is 1 second.
.IP \fBnice_priority\fR
Modify scheduling priority of server processes. Niceness values range from -20
(most favorable to the process) to 19 (least favorable to the process).
The default does not modify priority.
.IP \fBionice_class\fR
Modify I/O scheduling class of server processes. I/O niceness class values
are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and priority.
Works only with ionice_priority.
.IP \fBionice_priority\fR
Modify I/O scheduling priority of server processes. I/O niceness priority
is a number ranging from 0 to 7. The higher the value, the lower
the I/O priority of the process. Works only with ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
.RE
.PD
@ -386,6 +420,20 @@ The handoffs_first and handoff_delete are options for a special case
such as disk full in the cluster. These two options SHOULD NOT BE
CHANGED, except in such extreme situations (e.g. disks filled up
or are about to fill up; in any case, DO NOT let your drives fill up).
.IP \fBnice_priority\fR
Modify scheduling priority of server processes. Niceness values range from -20
(most favorable to the process) to 19 (least favorable to the process).
The default does not modify priority.
.IP \fBionice_class\fR
Modify I/O scheduling class of server processes. I/O niceness class values
are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and priority.
Works only with ionice_priority.
.IP \fBionice_priority\fR
Modify I/O scheduling priority of server processes. I/O niceness priority
is a number ranging from 0 to 7. The higher the value, the lower
the I/O priority of the process. Works only with ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
.RE
@ -461,6 +509,20 @@ Slowdown will sleep that amount between objects. The default is 0.01 seconds.
The recon_cache_path simply sets the directory where stats for a few items will be stored.
Depending on the method of deployment you may need to create this directory manually
and ensure that swift has read/write access. The default is /var/cache/swift.
.IP \fBnice_priority\fR
Modify scheduling priority of server processes. Niceness values range from -20
(most favorable to the process) to 19 (least favorable to the process).
The default does not modify priority.
.IP \fBionice_class\fR
Modify I/O scheduling class of server processes. I/O niceness class values
are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and priority.
Works only with ionice_priority.
.IP \fBionice_priority\fR
Modify I/O scheduling priority of server processes. I/O niceness priority
is a number ranging from 0 to 7. The higher the value, the lower
the I/O priority of the process. Works only with ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
.RE
.PD
@ -503,6 +565,20 @@ points and report the result after a full scan.
.IP \fBrsync_tempfile_timeout\fR
Time elapsed in seconds before rsync tempfiles will be unlinked. Config value of "auto"
will try to use object-replicator's rsync_timeout + 900 or fall back to 86400 (1 day).
.IP \fBnice_priority\fR
Modify scheduling priority of server processes. Niceness values range from -20
(most favorable to the process) to 19 (least favorable to the process).
The default does not modify priority.
.IP \fBionice_class\fR
Modify I/O scheduling class of server processes. I/O niceness class values
are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and priority.
Works only with ionice_priority.
.IP \fBionice_priority\fR
Modify I/O scheduling priority of server processes. I/O niceness priority
is a number ranging from 0 to 7. The higher the value, the lower
the I/O priority of the process. Works only with ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
.RE
@ -511,7 +587,7 @@ will try to use object-replicator's rsync_timeout + 900 or fall-back to 86400 (1
.SH DOCUMENTATION
.LP
More in depth documentation about the swift-object-server and
also OpenStack Swift as a whole can be found at
.BI http://swift.openstack.org/admin_guide.html
and
.BI http://swift.openstack.org
@ -20,7 +20,7 @@
.SH NAME
.LP
.B proxy-server.conf
\- configuration file for the OpenStack Swift proxy server
@ -143,6 +143,20 @@ This is very useful when one is managing more than one swift cluster.
Use a comma-separated list of full URLs (e.g. http://foo.bar:1234,https://foo.bar).
.IP \fBstrict_cors_mode\fR
The default is true.
.IP \fBnice_priority\fR
Modify scheduling priority of server processes. Niceness values range from -20
(most favorable to the process) to 19 (least favorable to the process).
The default does not modify priority.
.IP \fBionice_class\fR
Modify I/O scheduling class of server processes. I/O niceness class values
are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and priority.
Works only with ionice_priority.
.IP \fBionice_priority\fR
Modify I/O scheduling priority of server processes. I/O niceness priority
is a number ranging from 0 to 7. The higher the value, the lower
the I/O priority of the process. Works only with ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
.RE
.PD
@ -1030,13 +1044,33 @@ These are the headers whose values will only be shown to swift_owners. The
exact definition of a swift_owner is up to the auth system in use, but
usually indicates administrative responsibilities.
The default is 'x-container-read, x-container-write, x-container-sync-key, x-container-sync-to, x-account-meta-temp-url-key, x-account-meta-temp-url-key-2, x-container-meta-temp-url-key, x-container-meta-temp-url-key-2, x-account-access-control'.
.IP \fBrate_limit_after_segment\fR
Start rate-limiting object segments after the Nth segment of a segmented
object. The default is 10 segments.
.IP \fBrate_limit_segments_per_sec\fR
Once segment rate-limiting kicks in for an object, limit segments served to N
per second. The default is 1.
.IP \fBnice_priority\fR
Modify scheduling priority of server processes. Niceness values range from -20
(most favorable to the process) to 19 (least favorable to the process).
The default does not modify priority.
.IP \fBionice_class\fR
Modify I/O scheduling class of server processes. I/O niceness class values
are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and priority.
Works only with ionice_priority.
.IP \fBionice_priority\fR
Modify I/O scheduling priority of server processes. I/O niceness priority
is a number ranging from 0 to 7. The higher the value, the lower
the I/O priority of the process. Works only with ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
.RE
.PD
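For example, the segment rate-limiting options above could be combined in the proxy server configuration like this, shown here at their documented defaults:

```ini
[proxy-server]
# Begin throttling a segmented object's download after its 10th segment,
# then serve at most 1 segment per second for the rest of the object
rate_limit_after_segment = 10
rate_limit_segments_per_sec = 1
```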
.SH DOCUMENTATION
.LP
More in depth documentation about the swift-proxy-server and
also OpenStack Swift as a whole can be found at
.BI http://swift.openstack.org/admin_guide.html
and
.BI http://swift.openstack.org
@ -14,24 +14,24 @@
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"
.TH swift-account-auditor 1 "8/26/2011" "Linux" "OpenStack Swift"
.SH NAME
.LP
.B swift-account-auditor
\- OpenStack Swift account auditor
.SH SYNOPSIS
.LP
.B swift-account-auditor
[CONFIG] [-h|--help] [-v|--verbose] [-o|--once]
.SH DESCRIPTION
.PP
The account auditor crawls the local account system checking the integrity of account
objects. If corruption is found (in the case of bit rot, for example), the file is
quarantined, and replication will replace the bad file from another replica.
The options are as follows:
@ -46,16 +46,16 @@ The options are as follows:
.IP "-o"
.IP "--once"
.RS 4
.IP "only run one pass of daemon"
.RE
.PD
.RE
.SH DOCUMENTATION
.LP
More in depth documentation in regards to
.BI swift-account-auditor
and also about OpenStack Swift as a whole can be found at
.BI http://swift.openstack.org/index.html
.SH "SEE ALSO"
@ -1,5 +1,5 @@
.\"
.\" Author: Madhuri Kumari<madhuri.rai07@gmail.com>
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
@ -13,28 +13,28 @@
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"
.TH swift-account-info 1 "3/22/2014" "Linux" "OpenStack Swift"
.SH NAME
.LP
.B swift-account-info
\- OpenStack Swift account-info tool
.SH SYNOPSIS
.LP
.B swift-account-info
[ACCOUNT_DB_FILE] [SWIFT_DIR]
.SH DESCRIPTION
.PP
This is a very simple swift tool that allows a swiftop engineer to retrieve
information about an account that is located on the storage node. One calls
the tool with a given db file as it is stored on the storage node system.
It will then return several pieces of information about that account, such as:
.PD 0
.IP "- Account"
.IP "- Account hash "
.IP "- Created timestamp "
.IP "- Put timestamp "
@ -46,11 +46,11 @@ It will then return several information about that account such as;
.IP "- ID"
.IP "- User Metadata "
.IP "- Ring Location"
.PD
.SH DOCUMENTATION
.LP
More documentation about OpenStack Swift can be found at
.BI http://swift.openstack.org/index.html
.SH "SEE ALSO"
@ -14,24 +14,24 @@
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"
.TH swift-account-reaper 1 "8/26/2011" "Linux" "OpenStack Swift"
.SH NAME
.LP
.B swift-account-reaper
\- OpenStack Swift account reaper
.SH SYNOPSIS
.LP
.B swift-account-reaper
[CONFIG] [-h|--help] [-v|--verbose] [-o|--once]
.SH DESCRIPTION
.PP
Removes data from status=DELETED accounts. These are accounts that have
been asked to be removed by the reseller via the services remove_storage_account
XMLRPC call.
.PP
The account is not deleted immediately by the services call, but instead
the account is simply marked for deletion by setting the status column in
@ -51,17 +51,17 @@ The options are as follows:
.IP "-o"
.IP "--once"
.RS 4
.IP "only run one pass of daemon"
.RE
.PD
.RE
.SH DOCUMENTATION
.LP
More in depth documentation in regards to
.BI swift-account-reaper
and also about OpenStack Swift as a whole can be found at
.BI http://swift.openstack.org/index.html
@ -14,31 +14,31 @@
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"
.TH swift-account-replicator 1 "8/26/2011" "Linux" "OpenStack Swift"
.SH NAME
.LP
.B swift-account-replicator
\- OpenStack Swift account replicator
.SH SYNOPSIS
.LP
.B swift-account-replicator
[CONFIG] [-h|--help] [-v|--verbose] [-o|--once]
.SH DESCRIPTION
.PP
Replication is designed to keep the system in a consistent state in the face of
temporary error conditions like network outages or drive failures. The replication
processes compare local data with each remote copy to ensure they all contain the
latest version. Account replication uses a combination of hashes and shared high
water marks to quickly compare subsections of each partition.
.PP
Replication updates are push based. Account replication pushes missing records over
HTTP or rsyncs whole database files. The replicator also ensures that data is removed
from the system. When an account item is deleted, a tombstone is set as the latest
version of the item. The replicator will see the tombstone and ensure that the item
is removed from the entire system.
The options are as follows:
@ -53,17 +53,17 @@ The options are as follows:
.IP "-o"
.IP "--once"
.RS 4
.IP "only run one pass of daemon"
.RE
.PD
.RE
.SH DOCUMENTATION
.LP
More in depth documentation in regards to
.BI swift-account-replicator
and also about OpenStack Swift as a whole can be found at
.BI http://swift.openstack.org/index.html
@ -14,32 +14,32 @@
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"
.TH swift-account-server 1 "8/26/2011" "Linux" "OpenStack Swift"
.SH NAME
.LP
.B swift-account-server
\- OpenStack Swift account server
.SH SYNOPSIS
.LP
.B swift-account-server
[CONFIG] [-h|--help] [-v|--verbose]
.SH DESCRIPTION
.PP
The Account Server's primary job is to handle listings of containers. The listings
are stored as sqlite database files, and replicated across the cluster similar to how
objects are.
.SH DOCUMENTATION
.LP
More in depth documentation in regards to
.BI swift-account-server
and also about OpenStack Swift as a whole can be found at
.BI http://swift.openstack.org/index.html
and
.BI http://docs.openstack.org
@ -14,24 +14,24 @@
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"
.TH swift-container-auditor 1 "8/26/2011" "Linux" "OpenStack Swift"
.SH NAME
.LP
.B swift-container-auditor
\- OpenStack Swift container auditor
.SH SYNOPSIS
.LP
.B swift-container-auditor
[CONFIG] [-h|--help] [-v|--verbose] [-o|--once]
.SH DESCRIPTION
.PP
The container auditor crawls the local container system checking the integrity of container
objects. If corruption is found (in the case of bit rot, for example), the file is
quarantined, and replication will replace the bad file from another replica.
The options are as follows:
@ -46,17 +46,17 @@ The options are as follows:
.IP "-o"
.IP "--once"
.RS 4
.IP "only run one pass of daemon"
.RE
.PD
.RE
.SH DOCUMENTATION
.LP
More in depth documentation in regards to
.BI swift-container-auditor
and also about OpenStack Swift as a whole can be found at
.BI http://swift.openstack.org/index.html
@ -14,29 +14,29 @@
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"
.TH swift-container-info 1 "3/20/2013" "Linux" "OpenStack Swift"
.SH NAME
.LP
.B swift-container-info
\- OpenStack Swift container-info tool
.SH SYNOPSIS
.LP
.B swift-container-info
[CONTAINER_DB_FILE] [SWIFT_DIR]
.SH DESCRIPTION
.PP
This is a very simple swift tool that allows a swiftop engineer to retrieve
information about a container that is located on the storage node.
One calls the tool with a given container db file as
it is stored on the storage node system.
It will then return several pieces of information about that container, such as:
.PD 0
.IP "- Account it belongs to"
.IP "- Container "
.IP "- Created timestamp "
.IP "- Put timestamp "
@ -50,14 +50,14 @@ It will then return several information about that container such as;
.IP "- Hash "
.IP "- ID "
.IP "- User metadata "
.IP "- X-Container-Sync-Point 1 "
.IP "- X-Container-Sync-Point 2 "
.IP "- Location on the ring "
.PD
.SH DOCUMENTATION
.LP
More documentation about OpenStack Swift can be found at
.BI http://swift.openstack.org/index.html
.SH "SEE ALSO"
@ -14,31 +14,31 @@
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"
.TH swift-container-replicator 1 "8/26/2011" "Linux" "OpenStack Swift"
.SH NAME
.LP
.B swift-container-replicator
\- OpenStack Swift container replicator
.SH SYNOPSIS
.LP
.B swift-container-replicator
[CONFIG] [-h|--help] [-v|--verbose] [-o|--once]
.SH DESCRIPTION
.PP
Replication is designed to keep the system in a consistent state in the face of
temporary error conditions like network outages or drive failures. The replication
processes compare local data with each remote copy to ensure they all contain the
latest version. Container replication uses a combination of hashes and shared high
water marks to quickly compare subsections of each partition.
.PP
Replication updates are push based. Container replication pushes missing records over
HTTP or rsyncs whole database files. The replicator also ensures that data is removed
from the system. When a container item is deleted, a tombstone is set as the latest
version of the item. The replicator will see the tombstone and ensure that the item
is removed from the entire system.
The options are as follows:
@ -53,17 +53,17 @@ The options are as follows:
.IP "-o"
.IP "--once"
.RS 4
.IP "only run one pass of daemon"
.RE
.PD
.RE
.SH DOCUMENTATION
.LP
More in depth documentation in regards to
.BI swift-container-replicator
and also about OpenStack Swift as a whole can be found at
.BI http://swift.openstack.org/index.html
@ -14,37 +14,37 @@
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"
.TH swift-container-server 1 "8/26/2011" "Linux" "OpenStack Swift"
.SH NAME
.LP
.B swift-container-server
\- OpenStack Swift container server
.SH SYNOPSIS
.LP
.B swift-container-server
[CONFIG] [-h|--help] [-v|--verbose]
.SH DESCRIPTION
.PP
The Container Server's primary job is to handle listings of objects. It doesn't know
where those objects are, just what objects are in a specific container. The listings
are stored as sqlite database files, and replicated across the cluster similar to how
objects are. Statistics are also tracked that include the total number of objects, and
The Container Server's primary job is to handle listings of objects. It doesn't know
where those objects are, just what objects are in a specific container. The listings
are stored as sqlite database files, and replicated across the cluster similar to how
objects are. Statistics are also tracked that include the total number of objects, and
total storage usage for that container.
.SH DOCUMENTATION
.LP
More in depth documentation in regards to
More in depth documentation in regards to
.BI swift-container-server
and also about Openstack-Swift as a whole can be found at
and also about OpenStack Swift as a whole can be found at
.BI http://swift.openstack.org/index.html
and
and
.BI http://docs.openstack.org
.LP
.LP
.SH "SEE ALSO"
.BR container-server.conf(5)
@@ -14,25 +14,25 @@
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"
.\"
.TH swift-container-sync 1 "8/26/2011" "Linux" "OpenStack Swift"
.SH NAME
.SH NAME
.LP
.B swift-container-sync
\- Openstack-swift container sync
\- OpenStack Swift container sync
.SH SYNOPSIS
.LP
.B swift-container-sync
[CONFIG] [-h|--help] [-v|--verbose] [-o|--once]
.SH DESCRIPTION
.SH DESCRIPTION
.PP
Swift has a feature where all the contents of a container can be mirrored to
another container through background synchronization. Swift cluster operators
configure their cluster to allow/accept sync requests to/from other clusters,
and the user specifies where to sync their container to along with a secret
and the user specifies where to sync their container to along with a secret
synchronization key.
.PP
The swift-container-sync does the job of sending updates to the remote container.
@@ -42,14 +42,14 @@ newer rows since the last sync will trigger PUTs or DELETEs to the other contain
.SH DOCUMENTATION
.LP
More in depth documentation in regards to
More in depth documentation in regards to
.BI swift-container-sync
and also about Openstack-Swift as a whole can be found at
and also about OpenStack Swift as a whole can be found at
.BI http://swift.openstack.org/overview_container_sync.html
and
and
.BI http://docs.openstack.org
.LP
.LP
.SH "SEE ALSO"
.BR container-server.conf(5)
@@ -14,31 +14,31 @@
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"
.\"
.TH swift-container-updater 1 "8/26/2011" "Linux" "OpenStack Swift"
.SH NAME
.SH NAME
.LP
.B swift-container-updater
\- Openstack-swift container updater
\- OpenStack Swift container updater
.SH SYNOPSIS
.LP
.B swift-container-updater
.B swift-container-updater
[CONFIG] [-h|--help] [-v|--verbose] [-o|--once]
.SH DESCRIPTION
.SH DESCRIPTION
.PP
The container updater is responsible for updating container information in the account database.
The container updater is responsible for updating container information in the account database.
It will walk the container path in the system looking for container DBs and sending updates
to the account server as needed as it goes along.
to the account server as needed as it goes along.
There are times when account data cannot be immediately updated. This usually occurs
during failure scenarios or periods of high load. This is where an eventual consistency
window will most likely come into play.
There are times when account data cannot be immediately updated. This usually occurs
during failure scenarios or periods of high load. This is where an eventual consistency
window will most likely come into play.
In practice, the consistency window is only as large as the frequency at which
the updater runs and may not even be noticed as the proxy server will route
In practice, the consistency window is only as large as the frequency at which
the updater runs and may not even be noticed as the proxy server will route
listing requests to the first account server which responds. The server under
load may not be the one that serves subsequent listing requests; one of the other
two replicas may handle the listing.
@@ -55,16 +55,16 @@ The options are as follows:
.IP "-o"
.IP "--once"
.RS 4
.IP "only run one pass of daemon"
.IP "only run one pass of daemon"
.RE
.PD
.RE
.SH DOCUMENTATION
.LP
More in depth documentation in regards to
More in depth documentation in regards to
.BI swift-container-updater
and also about Openstack-Swift as a whole can be found at
and also about OpenStack Swift as a whole can be found at
.BI http://swift.openstack.org/index.html
@@ -14,26 +14,26 @@
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"
.\"
.TH swift-dispersion-populate 1 "8/26/2011" "Linux" "OpenStack Swift"
.SH NAME
.SH NAME
.LP
.B swift-dispersion-populate
\- Openstack-swift dispersion populate
\- OpenStack Swift dispersion populate
.SH SYNOPSIS
.LP
.B swift-dispersion-populate [--container-suffix-start] [--object-suffix-start] [--container-only|--object-only] [--insecure] [conf_file]
.SH DESCRIPTION
.SH DESCRIPTION
.PP
This is one of the swift-dispersion utilities that is used to evaluate the
overall cluster health. This is accomplished by checking if a set of
overall cluster health. This is accomplished by checking if a set of
deliberately distributed containers and objects are currently in their
proper places within the cluster.
.PP
.PP
For instance, a common deployment has three replicas of each object.
The health of that object can be measured by checking if each replica
is in its proper place. If only 2 of the 3 are in place, the object's health
@@ -48,13 +48,13 @@ we need to run the \fBswift-dispersion-report\fR tool to check the health of eac
of these containers and objects.
.PP
These tools need direct access to the entire cluster and to the ring files.
Installing them on a proxy server, or on a box used for swift administration
purposes that also contains the common swift packages and ring, will probably do.
Both \fBswift-dispersion-populate\fR and \fBswift-dispersion-report\fR use the
These tools need direct access to the entire cluster and to the ring files.
Installing them on a proxy server, or on a box used for swift administration
purposes that also contains the common swift packages and ring, will probably do.
Both \fBswift-dispersion-populate\fR and \fBswift-dispersion-report\fR use the
same configuration file, /etc/swift/dispersion.conf. The account used by these
tools should be a dedicated account for the dispersion stats and also have admin
privileges.
privileges.
.SH OPTIONS
.RS 0
@@ -70,14 +70,14 @@ Start object suffix at NUMBER and resume population at this point; default: 0
Only run object population
.IP "\fB--container-only\fR"
Only run container population
.IP "\fB--object-only\fR"
Only run object population
.IP "\fB--no-overlap\fR"
Increase coverage by amount in dispersion_coverage option with no overlap of existing partitions (if run more than once)
.IP "\fB-P, --policy-name\fR"
Specify storage policy name
.SH CONFIGURATION
.PD 0
Example \fI/etc/swift/dispersion.conf\fR:
.PD 0
Example \fI/etc/swift/dispersion.conf\fR:
.RS 3
.IP "[dispersion]"
@@ -93,10 +93,10 @@ Example \fI/etc/swift/dispersion.conf\fR:
.IP "# concurrency = 25"
.IP "# endpoint_type = publicURL"
.RE
.PD
.PD
.SH EXAMPLE
.PP
.PP
.PD 0
$ swift-dispersion-populate
.RS 1
@@ -105,17 +105,17 @@ $ swift-dispersion-populate
.RE
.PD
.SH DOCUMENTATION
.LP
More in depth documentation about the swift-dispersion utilities and
also Openstack-Swift as a whole can be found at
.BI http://swift.openstack.org/admin_guide.html#cluster-health
and
also OpenStack Swift as a whole can be found at
.BI http://swift.openstack.org/admin_guide.html#dispersion-report
and
.BI http://swift.openstack.org
.SH "SEE ALSO"
.BR swift-dispersion-report(1),
.BR dispersion.conf (5)
.BR dispersion.conf(5)
@@ -14,45 +14,45 @@
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"
.\"
.TH swift-dispersion-report 1 "8/26/2011" "Linux" "OpenStack Swift"
.SH NAME
.SH NAME
.LP
.B swift-dispersion-report
\- Openstack-swift dispersion report
\- OpenStack Swift dispersion report
.SH SYNOPSIS
.LP
.B swift-dispersion-report [-d|--debug] [-j|--dump-json] [-p|--partitions] [--container-only|--object-only] [--insecure] [conf_file]
.SH DESCRIPTION
.SH DESCRIPTION
.PP
This is one of the swift-dispersion utilities that is used to evaluate the
overall cluster health. This is accomplished by checking if a set of
overall cluster health. This is accomplished by checking if a set of
deliberately distributed containers and objects are currently in their
proper places within the cluster.
.PP
.PP
For instance, a common deployment has three replicas of each object.
The health of that object can be measured by checking if each replica
is in its proper place. If only 2 of the 3 are in place, the object's health
can be said to be at 66.66%, where 100% would be perfect.
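The arithmetic behind that figure is simply replicas found over replicas expected; a minimal shell sketch (values illustrative, not part of the tool):

```shell
# Health of a single object: replicas found in their proper place divided
# by replicas expected, as a percentage. 2 of 3 replicas prints 66.67%
# (the man page text rounds down to 66.66%).
found=2; expected=3
awk -v f="$found" -v e="$expected" 'BEGIN { printf "%.2f%%\n", 100 * f / e }'
```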
.PP
Once the \fBswift-dispersion-populate\fR has been used to populate the
dispersion account, one should run the \fBswift-dispersion-report\fR tool
Once the \fBswift-dispersion-populate\fR has been used to populate the
dispersion account, one should run the \fBswift-dispersion-report\fR tool
repeatedly for the life of the cluster, in order to check the health of each
of these containers and objects.
.PP
These tools need direct access to the entire cluster and to the ring files.
Installing them on a proxy server, or on a box used for swift administration
purposes that also contains the common swift packages and ring, will probably do.
Both \fBswift-dispersion-populate\fR and \fBswift-dispersion-report\fR use the
These tools need direct access to the entire cluster and to the ring files.
Installing them on a proxy server, or on a box used for swift administration
purposes that also contains the common swift packages and ring, will probably do.
Both \fBswift-dispersion-populate\fR and \fBswift-dispersion-report\fR use the
same configuration file, /etc/swift/dispersion.conf. The account used by these
tools should be a dedicated account for the dispersion stats and also have admin
privileges.
privileges.
.SH OPTIONS
.RS 0
@@ -60,40 +60,28 @@ privileges.
.IP "\fB-d, --debug\fR"
output any 404 responses to standard error
.SH OPTIONS
.RS 0
.PD 1
.IP "\fB-j, --dump-json\fR"
output dispersion report in json format
.SH OPTIONS
.RS 0
.PD 1
.IP "\fB-p, --partitions\fR"
output the partition numbers that have any missing replicas
.SH OPTIONS
.RS 0
.PD 1
.IP "\fB--container-only\fR"
Only run the container report
.SH OPTIONS
.RS 0
.PD 1
.IP "\fB--object-only\fR"
Only run the object report
.SH OPTIONS
.RS 0
.PD 1
.IP "\fB--insecure\fR"
Allow accessing insecure keystone server. The keystone's certificate will not
be verified.
.IP "\fB-P, --policy-name\fR"
Specify storage policy name
.SH CONFIGURATION
.PD 0
Example \fI/etc/swift/dispersion.conf\fR:
.PD 0
Example \fI/etc/swift/dispersion.conf\fR:
.RS 3
.IP "[dispersion]"
@@ -110,12 +98,12 @@ Example \fI/etc/swift/dispersion.conf\fR:
.IP "# dump_json = no"
.IP "# endpoint_type = publicURL"
.RE
.PD
.PD
.SH EXAMPLE
.PP
.PP
.PD 0
$ swift-dispersion-report
$ swift-dispersion-report
.RS 1
@@ -129,17 +117,17 @@ $ swift-dispersion-report
.RE
.PD
.SH DOCUMENTATION
.LP
More in depth documentation about the swift-dispersion utilities and
also Openstack-Swift as a whole can be found at
.BI http://swift.openstack.org/admin_guide.html#cluster-health
and
also OpenStack Swift as a whole can be found at
.BI http://swift.openstack.org/admin_guide.html#dispersion-report
and
.BI http://swift.openstack.org
.SH "SEE ALSO"
.BR swift-dispersion-populate(1),
.BR dispersion.conf (5)
.BR dispersion.conf(5)
@@ -14,25 +14,25 @@
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"
.\"
.TH swift-get-nodes 1 "8/26/2011" "Linux" "OpenStack Swift"
.SH NAME
.SH NAME
.LP
.B swift-get-nodes
\- Openstack-swift get-nodes tool
\- OpenStack Swift get-nodes tool
.SH SYNOPSIS
.LP
.B swift-get-nodes
.B swift-get-nodes
\ <ring.gz> <account> [<container> [<object>]]
.SH DESCRIPTION
.SH DESCRIPTION
.PP
The swift-get-nodes tool can be used to find out the location where
a particular account, container or object item is located within the
swift cluster nodes. For example, if you have the account hash and a container
name that belongs to that account, you can use swift-get-nodes to look up
a particular account, container or object item is located within the
swift cluster nodes. For example, if you have the account hash and a container
name that belongs to that account, you can use swift-get-nodes to look up
where the container resides by using the container ring.
.RS 0
@@ -40,7 +40,7 @@ where the container resides by using the container ring.
.RE
.RS 4
.PD 0
.PD 0
.IP "$ swift-get-nodes /etc/swift/account.ring.gz MyAccount-12ac01446be2"
.PD 0
@@ -67,12 +67,12 @@ where the container resides by using the container ring.
.IP "ssh 172.24.24.32 ls -lah /srv/node/sde/accounts/221082/cce/d7e6ba68cfdce0f0e4ca7890e46cacce/"
.IP "ssh 172.24.24.26 ls -lah /srv/node/sdv/accounts/221082/cce/d7e6ba68cfdce0f0e4ca7890e46cacce/ # [Handoff] "
.PD
.RE
.PD
.RE
.SH DOCUMENTATION
.LP
More documentation about Openstack-Swift can be found at
More documentation about OpenStack Swift can be found at
.BI http://swift.openstack.org/index.html
@@ -14,25 +14,25 @@
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"
.\"
.TH swift-init 1 "8/26/2011" "Linux" "OpenStack Swift"
.SH NAME
.SH NAME
.LP
.B swift-init
\- Openstack-swift swift-init tool
\- OpenStack Swift swift-init tool
.SH SYNOPSIS
.LP
.B swift-init
<server> [<server> ...] <command> [options]
.SH DESCRIPTION
.SH DESCRIPTION
.PP
The swift-init tool can be used to initialize all swift daemons available as part of
openstack-swift. Instead of calling individual init scripts for each
swift daemon, one can just use swift-init. With swift-init you can initialize
just one swift service, such as the "proxy", or a combination of them. The tool also
OpenStack Swift. Instead of calling individual init scripts for each
swift daemon, one can just use swift-init. With swift-init you can initialize
just one swift service, such as the "proxy", or a combination of them. The tool also
allows one to use the keywords such as "all", "main" and "rest" for the <server> argument.
@@ -41,7 +41,7 @@ allows one to use the keywords such as "all", "main" and "rest" for the <server>
.PD 0
.RS 4
.IP "\fIproxy\fR" "4"
.IP " - Initializes the swift proxy daemon"
.IP " - Initializes the swift proxy daemon"
.RE
.RS 4
@@ -75,7 +75,7 @@ allows one to use the keywords such as "all", "main" and "rest" for the <server>
.IP " - Initializes all the other \fBswift background daemons\fR"
.IP " (updater, replicator, auditor, reaper, etc)"
.RE
.PD
.PD
\fBCommands:\fR
@@ -92,14 +92,14 @@ allows one to use the keywords such as "all", "main" and "rest" for the <server>
.IP "\fIstart\fR: \t\t\t starts a server"
.IP "\fIstatus\fR: \t\t\t display status of tracked pids for server"
.IP "\fIstop\fR: \t\t\t stops a server"
.PD
.PD
.RE
\fBOptions:\fR
.RS 4
.PD 0
.PD 0
.IP "-h, --help \t\t\t show this help message and exit"
.IP "-v, --verbose \t\t\t display verbose output"
.IP "-w, --no-wait \t\t\t won't wait for server to start before returning
@@ -112,14 +112,14 @@ allows one to use the keywords such as "all", "main" and "rest" for the <server>
.IP "--strict return non-zero status code if some config is missing. Default mode if server is explicitly named."
.IP "--non-strict return zero status code even if some config is missing. Default mode if server is one of aliases `all`, `main` or `rest`."
.IP "--kill-after-timeout kill daemon and all children after kill-wait period."
.PD
.PD
.RE
.SH DOCUMENTATION
.LP
More documentation about Openstack-Swift can be found at
More documentation about OpenStack Swift can be found at
.BI http://swift.openstack.org/index.html
@@ -14,23 +14,23 @@
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"
.\"
.TH swift-object-auditor 1 "8/26/2011" "Linux" "OpenStack Swift"
.SH NAME
.SH NAME
.LP
.B swift-object-auditor
\- Openstack-swift object auditor
.B swift-object-auditor
\- OpenStack Swift object auditor
.SH SYNOPSIS
.LP
.B swift-object-auditor
.B swift-object-auditor
[CONFIG] [-h|--help] [-v|--verbose] [-o|--once] [-z|--zero_byte_fps]
.SH DESCRIPTION
.SH DESCRIPTION
.PP
The object auditor crawls the local object system checking the integrity of objects.
If corruption is found (in the case of bit rot, for example), the file is
The object auditor crawls the local object system checking the integrity of objects.
If corruption is found (in the case of bit rot, for example), the file is
quarantined, and replication will replace the bad file from another replica.
The options are as follows:
@@ -46,7 +46,7 @@ The options are as follows:
.IP "-o"
.IP "--once"
.RS 4
.IP "only run one pass of daemon"
.IP "only run one pass of daemon"
.RE
.IP "-z ZERO_BYTE_FPS"
@@ -56,13 +56,13 @@ The options are as follows:
.RE
.PD
.RE
.SH DOCUMENTATION
.LP
More in depth documentation in regards to
.BI swift-object-auditor
and also about Openstack-Swift as a whole can be found at
More in depth documentation in regards to
.BI swift-object-auditor
and also about OpenStack Swift as a whole can be found at
.BI http://swift.openstack.org/index.html
@@ -20,7 +20,7 @@
.SH NAME
.LP
.B swift-object-expirer
\- Openstack-swift object expirer
\- OpenStack Swift object expirer
.SH SYNOPSIS
.LP
@@ -65,7 +65,7 @@ More in depth documentation in regards to
.BI swift-object-expirer
can be found at
.BI http://swift.openstack.org/overview_expiring_objects.html
and also about Openstack-Swift as a whole can be found at
and also about OpenStack Swift as a whole can be found at
.BI http://swift.openstack.org/index.html
@@ -14,28 +14,28 @@
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"
.\"
.TH swift-object-info 1 "8/26/2011" "Linux" "OpenStack Swift"
.SH NAME
.SH NAME
.LP
.B swift-object-info
\- Openstack-swift object-info tool
\- OpenStack Swift object-info tool
.SH SYNOPSIS
.LP
.B swift-object-info
[OBJECT_FILE] [SWIFT_DIR]
[OBJECT_FILE] [SWIFT_DIR]
.SH DESCRIPTION
.SH DESCRIPTION
.PP
This is a very simple swift tool that allows a swiftop engineer to retrieve
information about an object that is located on the storage node. One calls
the tool with a given object file as it is stored on the storage node system.
It will then return several pieces of information about that object, such as:
This is a very simple swift tool that allows a swiftop engineer to retrieve
information about an object that is located on the storage node. One calls
the tool with a given object file as it is stored on the storage node system.
It will then return several pieces of information about that object, such as:
.PD 0
.IP "- Account it belongs to"
.IP "- Account it belongs to"
.IP "- Container "
.IP "- Object hash "
.IP "- Content Type "
@@ -44,11 +44,11 @@ It will then return several information about that object such as;
.IP "- Content Length "
.IP "- User Metadata "
.IP "- Location on the ring "
.PD
.PD
.SH DOCUMENTATION
.LP
More documentation about Openstack-Swift can be found at
More documentation about OpenStack Swift can be found at
.BI http://swift.openstack.org/index.html
.SH "SEE ALSO"
@@ -14,31 +14,31 @@
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"
.\"
.TH swift-object-replicator 1 "8/26/2011" "Linux" "OpenStack Swift"
.SH NAME
.SH NAME
.LP
.B swift-object-replicator
\- Openstack-swift object replicator
.B swift-object-replicator
\- OpenStack Swift object replicator
.SH SYNOPSIS
.LP
.B swift-object-replicator
.B swift-object-replicator
[CONFIG] [-h|--help] [-v|--verbose] [-o|--once]
.SH DESCRIPTION
.SH DESCRIPTION
.PP
Replication is designed to keep the system in a consistent state in the face of
temporary error conditions like network outages or drive failures. The replication
processes compare local data with each remote copy to ensure they all contain the
latest version. Object replication uses a hash list to quickly compare subsections
Replication is designed to keep the system in a consistent state in the face of
temporary error conditions like network outages or drive failures. The replication
processes compare local data with each remote copy to ensure they all contain the
latest version. Object replication uses a hash list to quickly compare subsections
of each partition.
.PP
Replication updates are push based. For object replication, updating is just a matter
Replication updates are push based. For object replication, updating is just a matter
of rsyncing files to the peer. The replicator also ensures that data is removed
from the system. When an object item is deleted, a tombstone is set as the latest
version of the item. The replicator will see the tombstone and ensure that the item
from the system. When an object item is deleted, a tombstone is set as the latest
version of the item. The replicator will see the tombstone and ensure that the item
is removed from the entire system.
The options are as follows:
@@ -53,17 +53,17 @@ The options are as follows:
.IP "-o"
.IP "--once"
.RS 4
.IP "only run one pass of daemon"
.IP "only run one pass of daemon"
.RE
.PD
.RE
.SH DOCUMENTATION
.LP
More in depth documentation in regards to
More in depth documentation in regards to
.BI swift-object-replicator
and also about Openstack-Swift as a whole can be found at
and also about OpenStack Swift as a whole can be found at
.BI http://swift.openstack.org/index.html
@@ -14,39 +14,39 @@
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"
.\"
.TH swift-object-server 1 "8/26/2011" "Linux" "OpenStack Swift"
.SH NAME
.SH NAME
.LP
.B swift-object-server
\- Openstack-swift object server.
\- OpenStack Swift object server.
.SH SYNOPSIS
.LP
.B swift-object-server
[CONFIG] [-h|--help] [-v|--verbose]
.SH DESCRIPTION
.SH DESCRIPTION
.PP
The Object Server is a very simple blob storage server that can store, retrieve
and delete objects stored on local devices. Objects are stored as binary files
and delete objects stored on local devices. Objects are stored as binary files
on the filesystem with metadata stored in the file's extended attributes (xattrs).
This requires that the underlying filesystem choice for object servers support
xattrs on files. Some filesystems, like ext3, have xattrs turned off by default.
This requires that the underlying filesystem choice for object servers support
xattrs on files. Some filesystems, like ext3, have xattrs turned off by default.
Each object is stored using a path derived from the object name's hash and the operation's
timestamp. Last write always wins, and ensures that the latest object version will be
served. A deletion is also treated as a version of the file (a 0 byte file ending with
".ts", which stands for tombstone). This ensures that deleted files are replicated
".ts", which stands for tombstone). This ensures that deleted files are replicated
correctly and older versions don't magically reappear due to failure scenarios.
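A rough shell sketch of the two mechanisms described above, using a hypothetical object path and made-up timestamps (the real on-disk layout also includes partition and suffix directories, and the server's hashing details differ):

```shell
# 1) The object's storage path is derived from a hash of its name
#    (MD5 here, purely as a stand-in for the server's hashing scheme).
hash=$(printf '%s' '/account/container/object' | md5sum | awk '{print $1}')
echo "objects/<partition>/<suffix>/$hash/"

# 2) Last write always wins: files are named by timestamp, so a newer
#    .ts tombstone sorts after the older .data file and the object
#    reads as deleted.
dir=$(mktemp -d)
touch "$dir/1472500000.00000.data" "$dir/1472500100.00000.ts"
ls "$dir" | sort | tail -n 1    # the tombstone is the latest version
```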
.SH DOCUMENTATION
.LP
More in depth documentation in regards to
More in depth documentation in regards to
.BI swift-object-server
and also about Openstack-Swift as a whole can be found at
and also about OpenStack Swift as a whole can be found at
.BI http://swift.openstack.org/index.html
and
and
.BI http://docs.openstack.org
@@ -14,36 +14,36 @@
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"
.\"
.TH swift-object-updater 1 "8/26/2011" "Linux" "OpenStack Swift"
.SH NAME
.SH NAME
.LP
.B swift-object-updater
\- Openstack-swift object updater
\- OpenStack Swift object updater
.SH SYNOPSIS
.LP
.B swift-object-updater
[CONFIG] [-h|--help] [-v|--verbose] [-o|--once]
.SH DESCRIPTION
.SH DESCRIPTION
.PP
The object updater is responsible for updating object information in container listings.
It will check to see if there are any locally queued updates on the filesystem of each
device, also known as async pending file(s), walk each one and update the
The object updater is responsible for updating object information in container listings.
It will check to see if there are any locally queued updates on the filesystem of each
device, also known as async pending file(s), walk each one and update the
container listing.
For example, suppose a container server is under load and a new object is put
into the system. The object will be immediately available for reads as soon as
the proxy server responds to the client with success. However, the object
server has not been able to update the object listing in the container server.
Therefore, the update would be queued locally for a later update. Container listings,
For example, suppose a container server is under load and a new object is put
into the system. The object will be immediately available for reads as soon as
the proxy server responds to the client with success. However, the object
server has not been able to update the object listing in the container server.
Therefore, the update would be queued locally for a later update. Container listings,
therefore, may not immediately contain the object. This is where an eventual consistency
window will most likely come into play.
window will most likely come into play.
In practice, the consistency window is only as large as the frequency at which
the updater runs and may not even be noticed as the proxy server will route
In practice, the consistency window is only as large as the frequency at which
the updater runs and may not even be noticed as the proxy server will route
listing requests to the first container server which responds. The server under
load may not be the one that serves subsequent listing requests; one of the other
two replicas may handle the listing.
@@ -60,17 +60,17 @@ The options are as follows:
.IP "-o"
.IP "--once"
.RS 4
.IP "only run one pass of daemon"
.IP "only run one pass of daemon"
.RE
.PD
.PD
.RE
.SH DOCUMENTATION
.LP
More in depth documentation in regards to
More in depth documentation in regards to
.BI swift-object-updater
and also about Openstack-Swift as a whole can be found at
and also about OpenStack Swift as a whole can be found at
.BI http://swift.openstack.org/index.html
@@ -0,0 +1,69 @@
.\"
.\" Author: Paul Dardeau <paul.dardeau@intel.com>
.\" Copyright (c) 2016 OpenStack Foundation.
.\"
.\" Licensed under the Apache License, Version 2.0 (the "License");
.\" you may not use this file except in compliance with the License.
.\" You may obtain a copy of the License at
.\"
.\" http://www.apache.org/licenses/LICENSE-2.0
.\"
.\" Unless required by applicable law or agreed to in writing, software
.\" distributed under the License is distributed on an "AS IS" BASIS,
.\" WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"
.TH swift-oldies 1 "8/04/2016" "Linux" "OpenStack Swift"
.SH NAME
.LP
.B swift-oldies
\- OpenStack Swift oldies tool
.SH SYNOPSIS
.LP
.B swift-oldies
[-h|--help] [-a|--age]
.SH DESCRIPTION
.PP
Lists Swift processes that have been running more than a specific length of
time (in hours). This is done by scanning the list of currently executing
processes (via ps command) and examining the execution time of those python
processes whose program names begin with 'swift-'.
Example (see all Swift processes older than two days):
swift-oldies \-a 48
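Conceptually the scan amounts to a pipeline like the following (a hypothetical one-liner, not the actual implementation; `etimes` is the elapsed running time in seconds as reported by procps ps):

```shell
# Print processes whose command name begins with swift- and whose
# elapsed running time exceeds a threshold given in hours.
threshold_hours=48
ps -eo etimes=,comm= | awk -v t="$threshold_hours" '$2 ~ /^swift-/ && $1 > t * 3600'
```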
The options are as follows:
.RS 4
.PD 0
.IP "-a HOURS"
.IP "--age=HOURS"
.RS 4
.IP "Look for processes at least HOURS old; default: 720 (30 days)"
.RE
.PD 0
.IP "-h"
.IP "--help"
.RS 4
.IP "Display program help and exit"
.PD
.RE
.SH DOCUMENTATION
.LP
More documentation about OpenStack Swift can be found at
.BI http://swift.openstack.org/index.html
.SH "SEE ALSO"
.BR swift-orphans(1)
@@ -20,7 +20,7 @@
.SH NAME
.LP
.B swift-orphans
\- Openstack-swift orphans tool
\- OpenStack Swift orphans tool
.SH SYNOPSIS
.LP
@@ -65,6 +65,6 @@ The options are as follows:
.SH DOCUMENTATION
.LP
More documentation about Openstack-Swift can be found at
More documentation about OpenStack Swift can be found at
.BI http://swift.openstack.org/index.html
@@ -14,35 +14,35 @@
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"
.\"
.TH swift-proxy-server 1 "8/26/2011" "Linux" "OpenStack Swift"
.SH NAME
.SH NAME
.LP
.B swift-proxy-server
\- Openstack-swift proxy server.
.B swift-proxy-server
\- OpenStack Swift proxy server.
.SH SYNOPSIS
.LP
.B swift-proxy-server
[CONFIG] [-h|--help] [-v|--verbose]
.SH DESCRIPTION
.SH DESCRIPTION
.PP
The Swift Proxy Server is responsible for tying together the rest of the Swift architecture.
For each request, it will look up the location of the account, container, or object in the
ring and route the request accordingly. The public API is also exposed through the Proxy
Server. A large number of failures are also handled in the Proxy Server. For example,
The Swift Proxy Server is responsible for tying together the rest of the Swift architecture.
For each request, it will look up the location of the account, container, or object in the
ring and route the request accordingly. The public API is also exposed through the Proxy
Server. A large number of failures are also handled in the Proxy Server. For example,
if a server is unavailable for an object PUT, it will ask the ring for a handoff server
and route there instead. When objects are streamed to or from an object server, they are
streamed directly through the proxy server to or from the user; the proxy server does
streamed directly through the proxy server to or from the user; the proxy server does
not spool them.
.SH DOCUMENTATION
.LP
More in depth documentation in regards to
More in depth documentation in regards to
.BI swift-proxy-server
and also about Openstack-Swift as a whole can be found at
and also about OpenStack Swift as a whole can be found at
.BI http://swift.openstack.org/index.html
@@ -20,7 +20,7 @@
.SH NAME
.LP
.B swift-recon
\- Openstack-swift recon middleware cli tool
\- OpenStack Swift recon middleware cli tool
.SH SYNOPSIS
.LP
@@ -124,7 +124,7 @@ cronjob to run the swift-recon-cron script periodically:
.SH DOCUMENTATION
.LP
More documentation about Openstack-Swift can be found at
More documentation about OpenStack Swift can be found at
.BI http://swift.openstack.org/index.html
Also more specific documentation about swift-recon can be found at
.BI http://swift.openstack.org/admin_guide.html#cluster-telemetry-and-monitoring
@@ -14,26 +14,26 @@
.\" implied.
.\" See the License for the specific language governing permissions and
.\" limitations under the License.
.\"
.\"
.TH swift-ring-builder 1 "8/26/2011" "Linux" "OpenStack Swift"
.SH NAME
.SH NAME
.LP
.B swift-ring-builder
\- Openstack-swift ring builder
\- OpenStack Swift ring builder
.SH SYNOPSIS
.LP
.B swift-ring-builder
<builder_file> <commands> <arguments> <...>
.SH DESCRIPTION
.SH DESCRIPTION
.PP
The swift-ring-builder utility is used to create, search and manipulate
the swift storage ring. The ring-builder assigns partitions to devices and
The swift-ring-builder utility is used to create, search and manipulate
the swift storage ring. The ring-builder assigns partitions to devices and
writes an optimized Python structure to a gzipped, pickled file on disk for
shipping out to the servers. The server processes just check the modification
time of the file occasionally and reload their in-memory copies of the ring
shipping out to the servers. The server processes just check the modification
time of the file occasionally and reload their in-memory copies of the ring
structure as needed. Because of how the ring-builder manages changes to the
ring, using a slightly older ring usually just means one of the three replicas
for a subset of the partitions will be incorrect, which can be easily worked around.
@ -59,12 +59,12 @@ needs to interact with the rings manually.
.SH SEARCH
.PD 0
.PD 0
.IP "\fB<search-value>\fR"
.RS 5
.IP "Can be of the form:"
.IP "d<device_id>z<zone>-<ip>:<port>/<device_name>_<meta>"
.IP "d<device_id>r<region>z<zone>-<ip>:<port>/<device_name>_<meta>"
.IP "Any part is optional, but you must include at least one, examples:"
@ -73,6 +73,7 @@ needs to interact with the rings manually.
.IP "z1 Matches devices in zone 1"
.IP "z1-1.2.3.4 Matches devices in zone 1 with the ip 1.2.3.4"
.IP "1.2.3.4 Matches devices in any zone with the ip 1.2.3.4"
.IP "r1z1:5678 Matches devices in zone 1 present in region 1 using port 5678"
.IP "z1:5678 Matches devices in zone 1 using port 5678"
.IP ":5678 Matches devices that use port 5678"
.IP "/sdb1 Matches devices with the device name sdb1"
@ -81,12 +82,12 @@ needs to interact with the rings manually.
.IP "[::1] Matches devices in any zone with the ip ::1"
.IP "z1-[::1]:5678 Matches devices in zone 1 with ip ::1 and port 5678"
.RE
Most specific example:
.RS 3
d74z1-1.2.3.4:5678/sdb1_"snet: 5.6.7.8"
.RE
d74z1-1.2.3.4:5678/sdb1_"snet: 5.6.7.8"
.RE
Nerd explanation:
@ -94,7 +95,7 @@ Nerd explanation:
.IP "All items require their single character prefix except the ip, in which case the - is optional unless the device id or zone is also included."
.RE
.RE
.PD
.PD
.SH OPTIONS
@ -104,12 +105,12 @@ Assume a yes response to all questions
.SH COMMANDS
.PD 0
.PD 0
.IP "\fB<builder_file>\fR"
.RS 5
Shows information about the ring and the devices within.
Shows information about the ring and the devices within.
.RE
@ -123,15 +124,15 @@ Shows information about matching devices.
.IP "\fBadd\fR r<region>z<zone>-<ip>:<port>/<device_name>_<meta> <weight>"
.IP "\fBadd\fR -r <region> -z <zone> -i <ip> -p <port> -d <device_name> -m <meta> -w <weight>"
.RS 5
Adds a device to the ring with the given information. No partitions will be
assigned to the new device until after running 'rebalance'. This is so you
Adds a device to the ring with the given information. No partitions will be
assigned to the new device until after running 'rebalance'. This is so you
can make multiple device changes and rebalance them all just once.
.RE
.IP "\fBcreate\fR <part_power> <replicas> <min_part_hours>"
.RS 5
Creates <builder_file> with 2^<part_power> partitions and <replicas>.
Creates <builder_file> with 2^<part_power> partitions and <replicas>.
<min_part_hours> is number of hours to restrict moving a partition more than once.
.RE
@ -143,7 +144,7 @@ the devices matching the search values given. The first column is the
assigned partition number and the second column is the number of device
matches for that partition. The list is ordered from most number of matches
to least. If there are a lot of devices to match against, this command
could take a while to run.
could take a while to run.
.RE
@ -155,37 +156,37 @@ Attempts to rebalance the ring by reassigning partitions that haven't been recen
.IP "\fBremove\fR <search-value> "
.RS 5
Removes the device(s) from the ring. This should normally just be used for
a device that has failed. For a device you wish to decommission, it's best
to set its weight to 0, wait for it to drain all its data, then use this
remove command. This will not take effect until after running 'rebalance'.
Removes the device(s) from the ring. This should normally just be used for
a device that has failed. For a device you wish to decommission, it's best
to set its weight to 0, wait for it to drain all its data, then use this
remove command. This will not take effect until after running 'rebalance'.
This is so you can make multiple device changes and rebalance them all just once.
.RE
.IP "\fBset_info\fR <search-value> <ip>:<port>/<device_name>_<meta>"
.RS 5
Resets the device's information. This information isn't used to assign
partitions, so you can use 'write_ring' afterward to rewrite the current
ring with the newer device information. Any of the parts are optional
in the final <ip>:<port>/<device_name>_<meta> parameter; just give what you
want to change. For instance set_info d74 _"snet: 5.6.7.8" would just
Resets the device's information. This information isn't used to assign
partitions, so you can use 'write_ring' afterward to rewrite the current
ring with the newer device information. Any of the parts are optional
in the final <ip>:<port>/<device_name>_<meta> parameter; just give what you
want to change. For instance set_info d74 _"snet: 5.6.7.8" would just
update the meta data for device id 74.
.RE
.IP "\fBset_min_part_hours\fR <hours>"
.RS 5
Changes the <min_part_hours> to the given <hours>. This should be set to
however long a full replication/update cycle takes. We're working on a way
Changes the <min_part_hours> to the given <hours>. This should be set to
however long a full replication/update cycle takes. We're working on a way
to determine this more easily than scanning logs.
.RE
.IP "\fBset_weight\fR <search-value> <weight>"
.RS 5
Resets the device's weight. No partitions will be reassigned to or from the
device until after running 'rebalance'. This is so you can make multiple
Resets the device's weight. No partitions will be reassigned to or from the
device until after running 'rebalance'. This is so you can make multiple
device changes and rebalance them all just once.
.RE
@ -198,8 +199,8 @@ Just runs the validation routines on the ring.
.IP "\fBwrite_ring\fR"
.RS 5
Just rewrites the distributable ring file. This is done automatically after
a successful rebalance, so really this is only useful after one or more 'set_info'
Just rewrites the distributable ring file. This is done automatically after
a successful rebalance, so really this is only useful after one or more 'set_info'
calls when no rebalance is needed but you want to send out the new device information.
.RE
@ -208,18 +209,18 @@ calls when no rebalance is needed but you want to send out the new device inform
set_min_part_hours set_weight validate write_ring
\fBExit codes:\fR 0 = ring changed, 1 = ring did not change, 2 = error
.PD
.PD
.SH DOCUMENTATION
.LP
More in depth documentation about the swift ring and also Openstack-Swift as a
whole can be found at
.BI http://swift.openstack.org/overview_ring.html,
.BI http://swift.openstack.org/admin_guide.html#managing-the-rings
and
More in depth documentation about the swift ring and also OpenStack Swift as a
whole can be found at
.BI http://swift.openstack.org/overview_ring.html,
.BI http://swift.openstack.org/admin_guide.html#managing-the-rings
and
.BI http://swift.openstack.org
View File
@ -286,6 +286,96 @@ using the format `regex_pattern_X = regex_expression`, where `X` is a number.
This script has been tested on Ubuntu 10.04 and Ubuntu 12.04, so if you are
using a different distro or OS, some care should be taken before using in production.
------------------------------
Preventing Disk Full Scenarios
------------------------------
Prevent disk full scenarios by ensuring that the ``proxy-server`` blocks PUT
requests and that rsync prevents replication to nearly full drives.
You can prevent ``proxy-server`` PUT requests to low-space disks by ensuring
``fallocate_reserve`` is set in ``object-server.conf``. By default,
``fallocate_reserve`` is set to 1%. This blocks PUT requests that would leave
the free disk space below 1% of the disk.
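For example, the reserve can be set explicitly in the ``[DEFAULT]`` section of
``object-server.conf`` (the value shown simply restates the default; an
absolute byte count is also accepted):

.. code::

    [DEFAULT]
    fallocate_reserve = 1%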
To prevent rsync replication to specific drives, first set
up ``rsync_module`` per disk in your ``object-replicator`` configuration.
Set this in ``object-server.conf``:
.. code::
[object-replicator]
rsync_module = {replication_ip}::object_{device}
Set the individual drives in ``rsync.conf``. For example:
.. code::
[object_sda]
max connections = 4
lock file = /var/lock/object_sda.lock
[object_sdb]
max connections = 4
lock file = /var/lock/object_sdb.lock
Finally, monitor the free space of each disk and set the rsync
``max connections`` for a full drive to ``-1``. We recommend utilising your
existing monitoring solution to achieve this. The following is an example script:
.. code-block:: python
#!/usr/bin/env python
import os
import errno
RESERVE = 500 * 2 ** 20 # 500 MiB
DEVICES = '/srv/node1'
path_template = '/etc/rsync.d/disable_%s.conf'
config_template = '''
[object_%s]
max connections = -1
'''
def disable_rsync(device):
with open(path_template % device, 'w') as f:
f.write(config_template.lstrip() % device)
def enable_rsync(device):
try:
os.unlink(path_template % device)
except OSError as e:
# ignore file does not exist
if e.errno != errno.ENOENT:
raise
for device in os.listdir(DEVICES):
path = os.path.join(DEVICES, device)
st = os.statvfs(path)
free = st.f_bavail * st.f_frsize
if free < RESERVE:
disable_rsync(device)
else:
enable_rsync(device)
For the above script to work, ensure that the conf files in ``/etc/rsync.d/``
are included by specifying ``&include`` in your ``rsync.conf`` file:
.. code::
&include /etc/rsync.d
Use this in conjunction with a cron job to periodically run the script, for example:
.. code::
# /etc/cron.d/devicecheck
* * * * * root /some/path/to/disable_rsync.py
.. _dispersion_report:
-----------------
@ -406,132 +496,141 @@ When you specify a policy the containers created also include the policy index,
thus even when running a container_only report, you will need to specify the
policy not using the default.
-----------------------------------
Geographically Distributed Clusters
-----------------------------------
-----------------------------------------------
Geographically Distributed Swift Considerations
-----------------------------------------------
Swift's default configuration is currently designed to work in a
single region, where a region is defined as a group of machines with
high-bandwidth, low-latency links between them. However, configuration
options exist that make running a performant multi-region Swift
cluster possible.
Swift provides two features that may be used to distribute replicas of objects
across multiple geographically distributed data-centers: with
:doc:`overview_global_cluster` object replicas may be dispersed across devices
from different data-centers by using `regions` in ring device descriptors; with
:doc:`overview_container_sync` objects may be copied between independent Swift
clusters in each data-center. The operation and configuration of each are
described in their respective documentation. The following points should be
considered when selecting the feature that is most appropriate for a particular
use case:
For the rest of this section, we will assume a two-region Swift
cluster: region 1 in San Francisco (SF), and region 2 in New York
(NY). Each region shall contain within it 3 zones, numbered 1, 2, and
3, for a total of 6 zones.
#. Global Clusters allows the distribution of object replicas across
data-centers to be controlled by the cluster operator on per-policy basis,
since the distribution is determined by the assignment of devices from
each data-center in each policy's ring file. With Container Sync the end
user controls the distribution of objects across clusters on a
per-container basis.
~~~~~~~~~~~~~
read_affinity
~~~~~~~~~~~~~
#. Global Clusters requires an operator to coordinate ring deployments across
multiple data-centers. Container Sync allows for independent management of
separate Swift clusters in each data-center, and for existing Swift
clusters to be used as peers in Container Sync relationships without
deploying new policies/rings.
This setting, combined with the sorting_method setting, makes the proxy server prefer local backend servers for
GET and HEAD requests over non-local ones. For example, it is
preferable for an SF proxy server to service object GET requests
by talking to SF object servers, as the client will receive lower
latency and higher throughput.
#. Global Clusters seamlessly supports features that may rely on
cross-container operations such as large objects and versioned writes.
Container Sync requires the end user to ensure that all required
containers are sync'd for these features to work in all data-centers.
By default, Swift randomly chooses one of the three replicas to give
to the client, thereby spreading the load evenly. In the case of a
geographically-distributed cluster, the administrator is likely to
prioritize keeping traffic local over even distribution of results.
This is where the read_affinity setting comes in.
#. Global Clusters makes objects available for GET or HEAD requests in both
data-centers even if a replica of the object has not yet been
asynchronously migrated between data-centers, by forwarding requests
between data-centers. Container Sync is unable to serve requests for an
object in a particular data-center until the asynchronous sync process has
copied the object to that data-center.
Example::
#. Global Clusters may require less storage capacity than Container Sync to
achieve equivalent durability of objects in each data-center. Global
Clusters can restore replicas that are lost or corrupted in one
data-center using replicas from other data-centers. Container Sync
requires each data-center to independently manage the durability of
objects, which may result in each data-center storing more replicas than
with Global Clusters.
[app:proxy-server]
sorting_method = affinity
read_affinity = r1=100
#. Global Clusters executes all account/container metadata updates
synchronously to account/container replicas in all data-centers, which may
incur delays when making updates across WANs. Container Sync only copies
objects between data-centers and all Swift internal traffic is
confined to each data-center.
This will make the proxy attempt to service GET and HEAD requests from
backends in region 1 before contacting any backends in region 2.
However, if no region 1 backends are available (due to replica
placement, failed hardware, or other reasons), then the proxy will
fall back to backend servers in other regions.
#. Global Clusters does not yet guarantee the availability of objects stored
in Erasure Coded policies when one data-center is offline. With Container
Sync the availability of objects in each data-center is independent of the
state of other data-centers once objects have been synced. Container Sync
also allows objects to be stored using different policy types in different
data-centers.
Example::
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Checking handoff partition distribution
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[app:proxy-server]
sorting_method = affinity
read_affinity = r1z1=100, r1=200
You can check if handoff partitions are piling up on a server by
comparing the expected number of partitions with the actual number on
your disks. First get the number of partitions that are currently
assigned to a server using the ``dispersion`` command from
``swift-ring-builder``::
This will make the proxy attempt to service GET and HEAD requests from
backends in region 1 zone 1, then backends in region 1, then any other
backends. If a proxy is physically close to a particular zone or
zones, this can provide bandwidth savings. For example, if a zone
corresponds to servers in a particular rack, and the proxy server is
in that same rack, then setting read_affinity to prefer reads from
within the rack will result in less traffic between the top-of-rack
switches.
swift-ring-builder sample.builder dispersion --verbose
Dispersion is 0.000000, Balance is 0.000000, Overload is 0.00%
Required overload is 0.000000%
--------------------------------------------------------------------------
Tier Parts % Max 0 1 2 3
--------------------------------------------------------------------------
r1 8192 0.00 2 0 0 8192 0
r1z1 4096 0.00 1 4096 4096 0 0
r1z1-172.16.10.1 4096 0.00 1 4096 4096 0 0
r1z1-172.16.10.1/sda1 4096 0.00 1 4096 4096 0 0
r1z2 4096 0.00 1 4096 4096 0 0
r1z2-172.16.10.2 4096 0.00 1 4096 4096 0 0
r1z2-172.16.10.2/sda1 4096 0.00 1 4096 4096 0 0
r1z3 4096 0.00 1 4096 4096 0 0
r1z3-172.16.10.3 4096 0.00 1 4096 4096 0 0
r1z3-172.16.10.3/sda1 4096 0.00 1 4096 4096 0 0
r1z4 4096 0.00 1 4096 4096 0 0
r1z4-172.16.20.4 4096 0.00 1 4096 4096 0 0
r1z4-172.16.20.4/sda1 4096 0.00 1 4096 4096 0 0
r2 8192 0.00 2 0 8192 0 0
r2z1 4096 0.00 1 4096 4096 0 0
r2z1-172.16.20.1 4096 0.00 1 4096 4096 0 0
r2z1-172.16.20.1/sda1 4096 0.00 1 4096 4096 0 0
r2z2 4096 0.00 1 4096 4096 0 0
r2z2-172.16.20.2 4096 0.00 1 4096 4096 0 0
r2z2-172.16.20.2/sda1 4096 0.00 1 4096 4096 0 0
The read_affinity setting may contain any number of region/zone
specifiers; the priority number (after the equals sign) determines the
ordering in which backend servers will be contacted. A lower number
means higher priority.
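The ordering rule above can be sketched in Python (an illustrative
simplification, not Swift's actual parser; the function names are
hypothetical):

```python
def parse_read_affinity(setting):
    """Parse a read_affinity value such as 'r1z1=100, r1=200' into
    (region, zone, priority) rules; zone is None when not given."""
    rules = []
    for part in setting.split(','):
        spec, _, prio = part.strip().partition('=')
        region_part, _, zone_part = spec.partition('z')
        rules.append((int(region_part[1:]),
                      int(zone_part) if zone_part else None,
                      int(prio)))
    return rules

def node_priority(node, rules, default=float('inf')):
    """Lower number = higher priority; nodes matching no rule sort last."""
    matching = [prio for region, zone, prio in rules
                if node['region'] == region and zone in (None, node['zone'])]
    return min(matching) if matching else default

rules = parse_read_affinity('r1z1=100, r1=200')
nodes = [{'region': 2, 'zone': 1}, {'region': 1, 'zone': 2},
         {'region': 1, 'zone': 1}]
ordered = sorted(nodes, key=lambda n: node_priority(n, rules))
# r1z1 is contacted first, then the rest of r1, then other regions
```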
As you can see from the output, each server should store 4096 partitions, and
each region should store 8192 partitions. This example used a partition power
of 13 and 3 replicas.
Note that read_affinity only affects the ordering of primary nodes
(see ring docs for definition of primary node), not the ordering of
handoff nodes.
With write_affinity enabled, expect a higher number of
partitions on disk than the value reported by the
swift-ring-builder dispersion command. The number of additional (handoff)
partitions in region r1 depends on your cluster size, the amount
of incoming data as well as the replication speed.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
write_affinity and write_affinity_node_count
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Let's use the example from above with 6 nodes in 2 regions, and write_affinity
configured to write to region r1 first. `swift-ring-builder` reported that
each node should store 4096 partitions::
This setting makes the proxy server prefer local backend servers for
object PUT requests over non-local ones. For example, it may be
preferable for an SF proxy server to service object PUT requests
by talking to SF object servers, as the client will receive lower
latency and higher throughput. However, if this setting is used, note
that a NY proxy server handling a GET request for an object that was
PUT using write affinity may have to fetch it across the WAN link, as
the object won't immediately have any replicas in NY. However,
replication will move the object's replicas to their proper homes in
both SF and NY.
Expected partitions for region r2: 8192
Handoffs stored across 4 nodes in region r1: 8192 / 4 = 2048
Maximum number of partitions on each server in region r1: 2048 + 4096 = 6144
Note that only object PUT requests are affected by the write_affinity
setting; POST, GET, HEAD, DELETE, OPTIONS, and account/container PUT
requests are not affected.
The worst case is that handoff partitions in region 1 are populated with new
object replicas faster than replication is able to move them to region 2.
In that case you will see ~ 6144 partitions per
server in region r1. Your actual number should be lower and
between 4096 and 6144 partitions (preferably on the lower side).
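The arithmetic in this example can be double-checked with a few lines
(the node counts come from the dispersion output above):

```python
# Example figures: partition power 13, 3 replicas,
# 4 nodes in region r1 and 2 nodes in region r2.
nodes_r1, nodes_r2 = 4, 2
part_power, replicas = 13, 3

parts = 2 ** part_power                                # 8192 partitions
total = parts * replicas                               # 24576 replica-partitions
assigned_per_node = total // (nodes_r1 + nodes_r2)     # 4096, as reported
expected_r2 = assigned_per_node * nodes_r2             # 8192 expected in region r2

# With write_affinity to r1, region r2's partitions may pile up as
# handoffs spread across the 4 nodes in region r1:
handoffs_per_r1_node = expected_r2 // nodes_r1         # 2048
worst_case = assigned_per_node + handoffs_per_r1_node  # 6144
```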
This setting lets you trade data distribution for throughput. If
write_affinity is enabled, then object replicas will initially be
stored all within a particular region or zone, thereby decreasing the
quality of the data distribution, but the replicas will be distributed
over fast WAN links, giving higher throughput to clients. Note that
the replicators will eventually move objects to their proper,
well-distributed homes.
Now count the number of object partitions on a given server in region 1,
for example on 172.16.10.1. Note that the pathnames might be
different; `/srv/node/` is the default mount location, and `objects`
applies only to storage policy 0 (storage policy 1 would use
`objects-1` and so on)::
The write_affinity setting is useful only when you don't typically
read objects immediately after writing them. For example, consider a
workload of mainly backups: if you have a bunch of machines in NY that
periodically write backups to Swift, then odds are that you don't then
immediately read those backups in SF. If your workload doesn't look
like that, then you probably shouldn't use write_affinity.
find -L /srv/node/ -maxdepth 3 -type d -wholename "*objects/*" | wc -l
The write_affinity_node_count setting is only useful in conjunction
with write_affinity; it governs how many local object servers will be
tried before falling back to non-local ones.
Example::
[app:proxy-server]
write_affinity = r1
write_affinity_node_count = 2 * replicas
Assuming 3 replicas, this configuration will make object PUTs try
storing the object's replicas on up to 6 disks ("2 * replicas") in
region 1 ("r1"). The proxy server tries to find 3 devices for storing the
object. If a device is unavailable, it queries the ring for the 4th
device, and so on, up to the 6th device. If the 6th device is still
unavailable, the last replica will be sent to another region. This does
not mean there will be 6 replicas in region 1.
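How a value such as ``2 * replicas`` resolves to a device count can be
sketched as follows (a simplified stand-in for Swift's actual config
parsing):

```python
def affinity_node_count(value, replicas):
    """Resolve a write_affinity_node_count setting such as '6' or
    '2 * replicas' to an integer number of local devices to try."""
    value = value.strip()
    if value.endswith('* replicas'):
        multiplier = value[:-len('* replicas')].strip()
        return int(multiplier) * replicas
    return int(value)

# With 3 replicas, '2 * replicas' means up to 6 devices in the local
# region are tried before a replica is sent to another region.
```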
You should be aware that, if you have data coming into SF faster than
your link to NY can transfer it, then your cluster's data distribution
will get worse and worse over time as objects pile up in SF. If this
happens, it is recommended to disable write_affinity and simply let
object PUTs traverse the WAN link, as that will naturally limit the
object growth rate to what your WAN link can handle.
If this number is always at the upper end of the expected partition
range (4096 to 6144), or is increasing, you should check your
replication speed and consider disabling write_affinity.
Please refer to the next section for how to collect metrics from Swift, and
especially to :ref:`swift-recon -r <recon-replication>` for how to check
replication stats.
--------------------------------
@ -658,6 +757,8 @@ This information can also be queried via the swift-recon command line utility::
Time to wait for a response from a server
--swiftdir=SWIFTDIR Default = /etc/swift
.. _recon-replication:
For example, to obtain container replication info from all hosts in zone "3"::
fhines@ubuntu:~$ swift-recon container -r --zone 3

View File

@ -6,19 +6,19 @@ You can store multiple versions of your content so that you can recover
from unintended overwrites. Object versioning is an easy way to
implement version control, which you can use with any type of content.
Note
~~~~
.. note::
You cannot version a large-object manifest file, but the large-object
manifest file can point to versioned segments.
You cannot version a large-object manifest file, but the large-object
manifest file can point to versioned segments.
.. note::
It is strongly recommended that you put non-current objects in a
different container than the container where current object versions
reside.
It is strongly recommended that you put non-current objects in a
different container than the container where current object versions
reside.
To enable object versioning, the cloud provider sets the
``allow_versions`` option to ``TRUE`` in the container configuration
file.
To allow object versioning within a cluster, the cloud provider should add the
``versioned_writes`` filter to the pipeline and set the
``allow_versioned_writes`` option to ``true`` in the
``[filter:versioned_writes]`` section of the proxy-server configuration file.
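For example (a minimal sketch of the relevant ``proxy-server.conf``
sections; adapt the pipeline to your deployment):

.. code::

    [pipeline:main]
    pipeline = ... versioned_writes proxy-server

    [filter:versioned_writes]
    use = egg:swift#versioned_writes
    allow_versioned_writes = true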
The ``X-Versions-Location`` header defines the
container that holds the non-current versions of your objects. You
@ -29,13 +29,21 @@ object versioning for all objects in the container. With a comparable
container automatically create non-current versions in the ``archive``
container.
Here's an example:
The ``X-Versions-Mode`` header defines the behavior of ``DELETE`` requests to
objects in the versioned container. In the default ``stack`` mode, deleting an
object will restore the most-recent version from the ``archive`` container,
overwriting the current version. Alternatively you may specify ``history``
mode, where deleting an object will copy the current version to the
``archive`` container and then remove it from the ``current`` container.
Example Using ``stack`` Mode
----------------------------
#. Create the ``current`` container:
.. code::
# curl -i $publicURL/current -X PUT -H "Content-Length: 0" -H "X-Auth-Token: $token" -H "X-Versions-Location: archive"
# curl -i $publicURL/current -X PUT -H "Content-Length: 0" -H "X-Auth-Token: $token" -H "X-Versions-Location: archive" -H "X-Versions-Mode: stack"
.. code::
@ -70,7 +78,7 @@ Here's an example:
.. code::
<length><object_name><timestamp>
<length><object_name>/<timestamp>
Where ``length`` is the 3-character, zero-padded hexadecimal
character length of the object, ``<object_name>`` is the object name,
@ -117,12 +125,10 @@ Here's an example:
009my_object/1390512682.92052
Note
~~~~
A **POST** request to a versioned object updates only the metadata
for the object and does not create a new version of the object. New
versions are created only when the content of the object changes.
.. note::
A **POST** request to a versioned object updates only the metadata
for the object and does not create a new version of the object. New
versions are created only when the content of the object changes.
#. Issue a **DELETE** request to a versioned object to remove the
current version of the object and replace it with the next-most
@ -163,21 +169,163 @@ Note
on it. If want to completely remove an object and you have five
versions of it, you must **DELETE** it five times.
#. To disable object versioning for the ``current`` container, remove
its ``X-Versions-Location`` metadata header by sending an empty key
value.
Example Using ``history`` Mode
------------------------------
#. Create the ``current`` container:
.. code::
# curl -i $publicURL/current -X PUT -H "Content-Length: 0" -H "X-Auth-Token: $token" -H "X-Versions-Location: "
# curl -i $publicURL/current -X PUT -H "Content-Length: 0" -H "X-Auth-Token: $token" -H "X-Versions-Location: archive" -H "X-Versions-Mode: history"
.. code::
HTTP/1.1 202 Accepted
Content-Length: 76
HTTP/1.1 201 Created
Content-Length: 0
Content-Type: text/html; charset=UTF-8
X-Trans-Id: txe2476de217134549996d0-0052e19038
Date: Thu, 23 Jan 2014 21:57:12 GMT
X-Trans-Id: txb91810fb717347d09eec8-0052e18997
Date: Thu, 23 Jan 2014 21:28:55 GMT
<html><h1>Accepted</h1><p>The request is accepted for processing.</p></html>
#. Create the first version of an object in the ``current`` container:
.. code::
# curl -i $publicURL/current/my_object --data-binary 1 -X PUT -H "Content-Length: 0" -H "X-Auth-Token: $token"
.. code::
HTTP/1.1 201 Created
Last-Modified: Thu, 23 Jan 2014 21:31:22 GMT
Content-Length: 0
Etag: d41d8cd98f00b204e9800998ecf8427e
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx5992d536a4bd4fec973aa-0052e18a2a
Date: Thu, 23 Jan 2014 21:31:22 GMT
Nothing is written to the non-current version container when you
initially **PUT** an object in the ``current`` container. However,
subsequent **PUT** requests that edit an object trigger the creation
of a version of that object in the ``archive`` container.
These non-current versions are named as follows:
.. code::
<length><object_name>/<timestamp>
Where ``length`` is the 3-character, zero-padded hexadecimal
character length of the object, ``<object_name>`` is the object name,
and ``<timestamp>`` is the time when the object was initially created
as a current version.
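The naming scheme can be reproduced directly in Python (assuming the
object name's character length fits in three hex digits):

```python
name = 'my_object'
timestamp = '1390512682.92052'  # creation time of the current version

# 3-character zero-padded hex length, then the name, '/', the timestamp:
version_name = '%03x%s/%s' % (len(name), name, timestamp)
# gives '009my_object/1390512682.92052'
```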
#. Create a second version of the object in the ``current`` container:
.. code::
# curl -i $publicURL/current/my_object --data-binary 2 -X PUT -H "Content-Length: 0" -H "X-Auth-Token: $token"
.. code::
HTTP/1.1 201 Created
Last-Modified: Thu, 23 Jan 2014 21:41:32 GMT
Content-Length: 0
Etag: d41d8cd98f00b204e9800998ecf8427e
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx468287ce4fc94eada96ec-0052e18c8c
Date: Thu, 23 Jan 2014 21:41:32 GMT
#. Issue a **GET** request to a versioned object to get the current
version of the object. You do not have to do any request redirects or
metadata lookups.
List older versions of the object in the ``archive`` container:
.. code::
# curl -i $publicURL/archive?prefix=009my_object -X GET -H "X-Auth-Token: $token"
.. code::
HTTP/1.1 200 OK
Content-Length: 30
X-Container-Object-Count: 1
Accept-Ranges: bytes
X-Timestamp: 1390513280.79684
X-Container-Bytes-Used: 0
Content-Type: text/plain; charset=utf-8
X-Trans-Id: tx9a441884997542d3a5868-0052e18d8e
Date: Thu, 23 Jan 2014 21:45:50 GMT
009my_object/1390512682.92052
.. note::
A **POST** request to a versioned object updates only the metadata
for the object and does not create a new version of the object. New
versions are created only when the content of the object changes.
#. Issue a **DELETE** request to a versioned object to copy the
current version of the object to the archive container and then delete it
from the current container. Subsequent **GET** requests to the object in
the current container will return 404 Not Found.
.. code::
# curl -i $publicURL/current/my_object -X DELETE -H "X-Auth-Token: $token"
.. code::
HTTP/1.1 204 No Content
Content-Length: 0
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx006d944e02494e229b8ee-0052e18edd
Date: Thu, 23 Jan 2014 21:51:25 GMT
List older versions of the object in the ``archive`` container:
.. code::
# curl -i $publicURL/archive?prefix=009my_object -X GET -H "X-Auth-Token: $token"
.. code::
HTTP/1.1 200 OK
Content-Length: 90
X-Container-Object-Count: 3
Accept-Ranges: bytes
X-Timestamp: 1390513280.79684
X-Container-Bytes-Used: 0
Content-Type: text/html; charset=UTF-8
X-Trans-Id: tx044f2a05f56f4997af737-0052e18eed
Date: Thu, 23 Jan 2014 21:51:41 GMT
009my_object/1390512682.92052
009my_object/1390512692.23062
009my_object/1390513885.67732
In addition to the two previous versions of the object, the archive
container has a "delete marker" to record when the object was deleted.
To permanently delete a previous version, issue a **DELETE** to the version
in the archive container.
Disabling Object Versioning
---------------------------
To disable object versioning for the ``current`` container, remove
its ``X-Versions-Location`` metadata header by sending an empty key
value.
.. code::
# curl -i $publicURL/current -X PUT -H "Content-Length: 0" -H "X-Auth-Token: $token" -H "X-Versions-Location: "
.. code::
HTTP/1.1 202 Accepted
Content-Length: 76
Content-Type: text/html; charset=UTF-8
X-Trans-Id: txe2476de217134549996d0-0052e19038
Date: Thu, 23 Jan 2014 21:57:12 GMT
<html><h1>Accepted</h1><p>The request is accepted for processing.</p></html>
View File
@ -178,3 +178,8 @@ storage host name. For example, prefix the path with
https://swift-cluster.example.com/v1/my_account/container/object
?temp_url_sig=5c4cc8886f36a9d0919d708ade98bf0cc71c9e91
&temp_url_expires=1374497657
Note that if the above example is copied exactly, and used in a command
shell, then the ampersand is interpreted as an operator and the URL
will be truncated. Enclose the URL in quotation marks to avoid this.
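For example, in a shell (quoting keeps the full query string, including
``temp_url_expires``, intact):

```shell
url="https://swift-cluster.example.com/v1/my_account/container/object?temp_url_sig=5c4cc8886f36a9d0919d708ade98bf0cc71c9e91&temp_url_expires=1374497657"
# Unquoted, the shell would treat '&' as an operator and truncate the URL:
#   curl $url         # wrong
# Quoted, the whole URL (including temp_url_expires) reaches curl:
#   curl "$url"
echo "$url"
```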
View File
@ -114,3 +114,4 @@ Other
* `Swift Browser <https://github.com/zerovm/swift-browser>`_ - JavaScript interface for Swift
* `swift-ui <https://github.com/fanatic/swift-ui>`_ - OpenStack Swift web browser
* `Swift Durability Calculator <https://github.com/enovance/swift-durability-calculator>`_ - Data Durability Calculation Tool for Swift
* `swiftbackmeup <https://github.com/redhat-cip/swiftbackmeup>`_ - Utility that allows one to create backups and upload them to OpenStack Swift
View File
@ -1,4 +1,4 @@
================
Deployment Guide
================
@ -17,8 +17,8 @@ or 6.
Deployment Options
------------------
The swift services run completely autonomously, which provides for a lot of
flexibility when architecting the hardware deployment for swift. The 4 main
The Swift services run completely autonomously, which provides for a lot of
flexibility when architecting the hardware deployment for Swift. The 4 main
services are:
#. Proxy Services
@ -101,8 +101,12 @@ into consideration can include physical location, power availability, and
network connectivity. For example, in a small cluster you might decide to
split the zones up by cabinet, with each cabinet having its own power and
network connectivity. The zone concept is very abstract, so feel free to use
it in whatever way best isolates your data from failure. Zones are referenced
by number, beginning with 1.
it in whatever way best isolates your data from failure. Each zone exists
in a region.
A region is also an abstract concept that may be used to distinguish between
geographically separated areas, or between distinct areas within the same
datacenter. Regions and zones are referenced by positive integers.
You can now start building the ring with::
@ -114,17 +118,18 @@ specific partition can be moved in succession (24 is a good value for this).
Devices can be added to the ring with::
swift-ring-builder <builder_file> add z<zone>-<ip>:<port>/<device_name>_<meta> <weight>
swift-ring-builder <builder_file> add r<region>z<zone>-<ip>:<port>/<device_name>_<meta> <weight>
This will add a device to the ring where <builder_file> is the name of the
builder file that was created previously, <zone> is the number of the zone
this device is in, <ip> is the ip address of the server the device is in,
<port> is the port number that the server is running on, <device_name> is
the name of the device on the server (for example: sdb1), <meta> is a string
of metadata for the device (optional), and <weight> is a float weight that
determines how many partitions are put on the device relative to the rest of
the devices in the cluster (a good starting point is 100.0 x TB on the drive).
Add each device that will be initially in the cluster.
builder file that was created previously, <region> is the number of the region
the zone is in, <zone> is the number of the zone this device is in, <ip> is
the ip address of the server the device is in, <port> is the port number that
the server is running on, <device_name> is the name of the device on the server
(for example: sdb1), <meta> is a string of metadata for the device (optional),
and <weight> is a float weight that determines how many partitions are put on
the device relative to the rest of the devices in the cluster (a good starting
point is 100.0 x TB on the drive). Add each device that will initially be in
the cluster.
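Putting these steps together, a session for a single device might look like
the following (the builder name, IP address, port, device name, and weight are
example values only; a 4 TB drive gets a weight of 400 under the
100.0 x TB guideline):

```shell
# Create a builder file: 2^18 partitions, 3 replicas,
# at most one move per partition per hour
swift-ring-builder object.builder create 18 3 1

# Add a 4 TB device in region 1, zone 1
swift-ring-builder object.builder add r1z1-10.0.0.1:6000/sdb1 400

# Distribute partitions across devices and write out the ring file
swift-ring-builder object.builder rebalance
```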
Once all of the devices are added to the ring, run::
@ -265,7 +270,7 @@ lexicographical order. Filenames starting with '.' are ignored. A mixture of
file and directory configuration paths is not supported - if the configuration
path is a file only that file will be parsed.
The swift service management tool ``swift-init`` has adopted the convention of
The Swift service management tool ``swift-init`` has adopted the convention of
looking for ``/etc/swift/{type}-server.conf.d/`` if the file
``/etc/swift/{type}-server.conf`` file does not exist.
@ -510,6 +515,27 @@ network_chunk_size 65536 Size of chunks to read/write over t
disk_chunk_size 65536 Size of chunks to read/write to disk
container_update_timeout 1 Time to wait while sending a container
update on object update.
nice_priority None Scheduling priority of server processes.
Niceness values range from -20 (most
favorable to the process) to 19 (least
favorable to the process). The default
does not modify priority.
ionice_class None I/O scheduling class of server processes.
I/O niceness class values are IOPRIO_CLASS_RT
(realtime), IOPRIO_CLASS_BE (best-effort),
and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and
priority. Linux supports io scheduling
priorities and classes since 2.6.13 with
the CFQ io scheduler.
Works only with ionice_priority.
ionice_priority None I/O scheduling priority of server
processes. I/O niceness priority is
a number which goes from 0 to 7.
The higher the value, the lower the I/O
priority of the process. Works only with
ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
================================ ========== ==========================================
.. _object-server-options:
@ -594,6 +620,27 @@ splice no Use splice() for zero-copy
will appear in the object server
logs at startup, but your object
servers should continue to function.
nice_priority None Scheduling priority of server processes.
Niceness values range from -20 (most
favorable to the process) to 19 (least
favorable to the process). The default
does not modify priority.
ionice_class None I/O scheduling class of server processes.
I/O niceness class values are IOPRIO_CLASS_RT
(realtime), IOPRIO_CLASS_BE (best-effort),
and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and
priority. Linux supports io scheduling
priorities and classes since 2.6.13 with
the CFQ io scheduler.
Works only with ionice_priority.
ionice_priority None I/O scheduling priority of server
processes. I/O niceness priority is
a number which goes from 0 to 7.
The higher the value, the lower the I/O
priority of the process. Works only with
ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
============================= ====================== ===============================================
[object-replicator]
@ -686,6 +733,33 @@ rsync_error_log_line_length 0 Limits how long rsync err
ring_check_interval 15 Interval for checking new ring
file
recon_cache_path /var/cache/swift Path to recon cache
nice_priority None Scheduling priority of server
processes. Niceness values
range from -20 (most favorable
to the process) to 19 (least
favorable to the process).
The default does not modify
priority.
ionice_class None I/O scheduling class of server
processes. I/O niceness class
values are IOPRIO_CLASS_RT (realtime),
IOPRIO_CLASS_BE (best-effort),
and IOPRIO_CLASS_IDLE (idle).
The default does not modify
class and priority.
Linux supports io scheduling
priorities and classes since
2.6.13 with the CFQ io scheduler.
Works only with ionice_priority.
ionice_priority None I/O scheduling priority of server
processes. I/O niceness priority
is a number which goes from
0 to 7. The higher the value,
the lower the I/O priority of
the process.
Works only with ionice_class.
Ignored if IOPRIO_CLASS_IDLE
is set.
=========================== ======================== ================================
[object-updater]
@ -705,6 +779,27 @@ node_timeout DEFAULT or 10 Request timeout to external services. Th
sections use 3 as the final default).
slowdown 0.01 Time in seconds to wait between objects
recon_cache_path /var/cache/swift Path to recon cache
nice_priority None Scheduling priority of server processes.
Niceness values range from -20 (most
favorable to the process) to 19 (least
favorable to the process). The default
does not modify priority.
ionice_class None I/O scheduling class of server processes.
I/O niceness class values are IOPRIO_CLASS_RT
(realtime), IOPRIO_CLASS_BE (best-effort),
and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and
priority. Linux supports io scheduling
priorities and classes since 2.6.13 with
the CFQ io scheduler.
Works only with ionice_priority.
ionice_priority None I/O scheduling priority of server
processes. I/O niceness priority is
a number which goes from 0 to 7.
The higher the value, the lower the I/O
priority of the process. Works only with
ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
================== =================== ==========================================
[object-auditor]
@ -736,6 +831,27 @@ rsync_tempfile_timeout auto Time elapsed in seconds before r
of "auto" try to use object-replicator's
rsync_timeout + 900 or fallback to 86400
(1 day).
nice_priority None Scheduling priority of server processes.
Niceness values range from -20 (most
favorable to the process) to 19 (least
favorable to the process). The default
does not modify priority.
ionice_class None I/O scheduling class of server processes.
I/O niceness class values are IOPRIO_CLASS_RT
(realtime), IOPRIO_CLASS_BE (best-effort),
and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and
priority. Linux supports io scheduling
priorities and classes since 2.6.13 with
the CFQ io scheduler.
Works only with ionice_priority.
ionice_priority None I/O scheduling priority of server
processes. I/O niceness priority is
a number which goes from 0 to 7.
The higher the value, the lower the I/O
priority of the process. Works only with
ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
=========================== =================== ==========================================
------------------------------
@ -816,6 +932,26 @@ db_preallocation off If you don't mind the extra disk sp
in overhead, you can turn this on to preallocate
disk space with SQLite databases to decrease
fragmentation.
nice_priority None Scheduling priority of server processes.
Niceness values range from -20 (most
favorable to the process) to 19 (least
favorable to the process). The default
does not modify priority.
ionice_class None I/O scheduling class of server processes.
I/O niceness class values are IOPRIO_CLASS_RT
(realtime), IOPRIO_CLASS_BE (best-effort),
and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and
priority. Linux supports io scheduling
priorities and classes since 2.6.13
with the CFQ io scheduler.
Works only with ionice_priority.
ionice_priority None I/O scheduling priority of server processes.
I/O niceness priority is a number which
goes from 0 to 7. The higher the value,
the lower the I/O priority of the process.
Works only with ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
=============================== ========== ============================================
[container-server]
@ -848,6 +984,28 @@ replication_server Configure parameter for creati
have a separate replication network, you
should not specify any value for
"replication_server".
nice_priority None Scheduling priority of server processes.
Niceness values range from -20 (most
favorable to the process) to 19 (least
favorable to the process). The default
does not modify priority.
ionice_class None I/O scheduling class of server processes.
I/O niceness class values are
IOPRIO_CLASS_RT (realtime),
IOPRIO_CLASS_BE (best-effort),
and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and
priority. Linux supports io scheduling
priorities and classes since 2.6.13 with
the CFQ io scheduler.
Works only with ionice_priority.
ionice_priority None I/O scheduling priority of server
processes. I/O niceness priority is
a number which goes from 0 to 7.
The higher the value, the lower the I/O
priority of the process. Works only with
ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
============================== ================ ========================================
[container-replicator]
@ -910,6 +1068,35 @@ rsync_compress no Allow rsync to compress data
example: .tar.gz, mp3) might
slow down the syncing process.
recon_cache_path /var/cache/swift Path to recon cache
nice_priority None Scheduling priority of server
processes. Niceness values
range from -20 (most favorable
to the process) to 19 (least
favorable to the process).
The default does not modify
priority.
ionice_class None I/O scheduling class of server
processes. I/O niceness class
values are
IOPRIO_CLASS_RT (realtime),
IOPRIO_CLASS_BE (best-effort),
and IOPRIO_CLASS_IDLE (idle).
The default does not modify
class and priority. Linux
supports io scheduling
priorities and classes since
2.6.13 with the CFQ io
scheduler.
Works only with ionice_priority.
ionice_priority None I/O scheduling priority of
server processes. I/O niceness
priority is a number which goes
from 0 to 7.
The higher the value, the lower
the I/O priority of the process.
Works only with ionice_class.
Ignored if IOPRIO_CLASS_IDLE
is set.
================== =========================== =============================
[container-updater]
@ -934,6 +1121,29 @@ account_suppression_time 60 Seconds to suppress updating an
error (timeout, not yet found,
etc.)
recon_cache_path /var/cache/swift Path to recon cache
nice_priority None Scheduling priority of server
processes. Niceness values range
from -20 (most favorable to the
process) to 19 (least favorable
to the process). The default does
not modify priority.
ionice_class None I/O scheduling class of server
processes. I/O niceness class
values are IOPRIO_CLASS_RT (realtime),
IOPRIO_CLASS_BE (best-effort),
and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and
priority. Linux supports io scheduling
priorities and classes since 2.6.13 with
the CFQ io scheduler.
Works only with ionice_priority.
ionice_priority None I/O scheduling priority of server
processes. I/O niceness priority is
a number which goes from 0 to 7.
The higher the value, the lower
the I/O priority of the process.
Works only with ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
======================== ================= ==================================
[container-auditor]
@ -950,6 +1160,28 @@ containers_per_second 200 Maximum containers audited per second.
Should be tuned according to individual
system specs. 0 is unlimited.
recon_cache_path /var/cache/swift Path to recon cache
nice_priority None Scheduling priority of server processes.
Niceness values range from -20 (most
favorable to the process) to 19 (least
favorable to the process). The default
does not modify priority.
ionice_class None I/O scheduling class of server processes.
I/O niceness class values are
IOPRIO_CLASS_RT (realtime),
IOPRIO_CLASS_BE (best-effort),
and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and
priority. Linux supports io scheduling
priorities and classes since 2.6.13 with
the CFQ io scheduler.
Works only with ionice_priority.
ionice_priority None I/O scheduling priority of server
processes. I/O niceness priority is
a number which goes from 0 to 7.
The higher the value, the lower the I/O
priority of the process. Works only with
ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
===================== ================= =======================================
----------------------------
@ -1030,6 +1262,26 @@ fallocate_reserve 1% You can set fallocate_reserve to th
they completely run out of space; you can
make the services pretend they're out of
space early.
nice_priority None Scheduling priority of server processes.
Niceness values range from -20 (most
favorable to the process) to 19 (least
favorable to the process). The default
does not modify priority.
ionice_class None I/O scheduling class of server processes.
I/O niceness class values are IOPRIO_CLASS_RT
(realtime), IOPRIO_CLASS_BE (best-effort),
and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and
priority. Linux supports io scheduling
priorities and classes since 2.6.13 with
the CFQ io scheduler.
Works only with ionice_priority.
ionice_priority None I/O scheduling priority of server processes.
I/O niceness priority is a number which
goes from 0 to 7. The higher the value,
the lower the I/O priority of the process.
Works only with ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
=============================== ========== =============================================
[account-server]
@ -1060,6 +1312,27 @@ replication_server Configure parameter for creating
have a separate replication network, you
should not specify any value for
"replication_server".
nice_priority None Scheduling priority of server processes.
Niceness values range from -20 (most
favorable to the process) to 19 (least
favorable to the process). The default
does not modify priority.
ionice_class None I/O scheduling class of server processes.
I/O niceness class values are IOPRIO_CLASS_RT
(realtime), IOPRIO_CLASS_BE (best-effort),
and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and
priority. Linux supports io scheduling
priorities and classes since 2.6.13 with
the CFQ io scheduler.
Works only with ionice_priority.
ionice_priority None I/O scheduling priority of server
processes. I/O niceness priority is
a number which goes from 0 to 7.
The higher the value, the lower the I/O
priority of the process. Works only with
ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
============================= ============== ==========================================
[account-replicator]
@ -1120,6 +1393,32 @@ rsync_compress no Allow rsync to compress data
.tar.gz, mp3) might slow down
the syncing process.
recon_cache_path /var/cache/swift Path to recon cache
nice_priority None Scheduling priority of server
processes. Niceness values
range from -20 (most favorable
to the process) to 19 (least
favorable to the process).
The default does not modify
priority.
ionice_class None I/O scheduling class of server
processes. I/O niceness class
values are IOPRIO_CLASS_RT
(realtime), IOPRIO_CLASS_BE
(best-effort), and IOPRIO_CLASS_IDLE
(idle).
The default does not modify
class and priority. Linux supports
io scheduling priorities and classes
since 2.6.13 with the CFQ io scheduler.
Works only with ionice_priority.
ionice_priority None I/O scheduling priority of server
processes. I/O niceness priority
is a number which goes from 0 to 7.
The higher the value, the lower
the I/O priority of the process.
Works only with ionice_class.
Ignored if IOPRIO_CLASS_IDLE
is set.
================== ========================= ===============================
[account-auditor]
@ -1136,6 +1435,28 @@ accounts_per_second 200 Maximum accounts audited per second.
Should be tuned according to individual
system specs. 0 is unlimited.
recon_cache_path /var/cache/swift Path to recon cache
nice_priority None Scheduling priority of server processes.
Niceness values range from -20 (most
favorable to the process) to 19 (least
favorable to the process). The default
does not modify priority.
ionice_class None I/O scheduling class of server processes.
I/O niceness class values are
IOPRIO_CLASS_RT (realtime),
IOPRIO_CLASS_BE (best-effort),
and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and
priority. Linux supports io scheduling
priorities and classes since 2.6.13 with
the CFQ io scheduler.
Works only with ionice_priority.
ionice_priority None I/O scheduling priority of server
processes. I/O niceness priority is
a number which goes from 0 to 7.
The higher the value, the lower the I/O
priority of the process. Works only with
ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
==================== ================ =======================================
[account-reaper]
reap_warn_after 2892000 If the account fails to be reaped due
space is not being reclaimed after you
delete account(s). This is in addition to
any time requested by delay_reaping.
nice_priority None Scheduling priority of server processes.
Niceness values range from -20 (most
favorable to the process) to 19 (least
favorable to the process). The default
does not modify priority.
ionice_class None I/O scheduling class of server processes.
I/O niceness class values are IOPRIO_CLASS_RT
(realtime), IOPRIO_CLASS_BE (best-effort),
and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and
priority. Linux supports io scheduling
priorities and classes since 2.6.13 with
the CFQ io scheduler.
Works only with ionice_priority.
ionice_priority None I/O scheduling priority of server
processes. I/O niceness priority is
a number which goes from 0 to 7.
The higher the value, the lower the I/O
priority of the process. Works only with
ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
================== =============== =========================================
.. _proxy-server-config:
@ -1271,6 +1613,30 @@ disallowed_sections swift.valid_api_versions Allows the abili
the dict level with a ".".
expiring_objects_container_divisor 86400
expiring_objects_account_name expiring_objects
nice_priority None Scheduling priority of server
processes.
Niceness values range from -20 (most
favorable to the process) to 19 (least
favorable to the process). The default
does not modify priority.
ionice_class None I/O scheduling class of server
processes. I/O niceness class values
are IOPRIO_CLASS_RT (realtime),
IOPRIO_CLASS_BE (best-effort) and
IOPRIO_CLASS_IDLE (idle).
The default does not
modify class and priority. Linux
supports io scheduling priorities
and classes since 2.6.13 with
the CFQ io scheduler.
Works only with ionice_priority.
ionice_priority None I/O scheduling priority of server
processes. I/O niceness priority is
a number which goes from 0 to 7.
The higher the value, the lower
the I/O priority of the process.
Work only with ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
==================================== ======================== ========================================
[proxy-server]
@ -1397,6 +1763,29 @@ concurrency_timeout conn_timeout This parameter controls how long
firing of the threads. This number
should be between 0 and node_timeout.
The default is conn_timeout (0.5).
nice_priority None Scheduling priority of server
processes.
Niceness values range from -20 (most
favorable to the process) to 19 (least
favorable to the process). The default
does not modify priority.
ionice_class None I/O scheduling class of server
processes. I/O niceness class values
are IOPRIO_CLASS_RT (realtime),
IOPRIO_CLASS_BE (best-effort),
and IOPRIO_CLASS_IDLE (idle).
The default does not modify class and
priority. Linux supports io scheduling
priorities and classes since 2.6.13
with the CFQ io scheduler.
Works only with ionice_priority.
ionice_priority None I/O scheduling priority of server
processes. I/O niceness priority is
a number which goes from 0 to 7.
The higher the value, the lower the
I/O priority of the process. Works
only with ionice_class.
Ignored if IOPRIO_CLASS_IDLE is set.
============================ =============== =====================================
[tempauth]
@ -1537,6 +1926,17 @@ more workers, raising the number of workers and lowering the maximum number of
clients serviced per worker can lessen the impact of CPU intensive or stalled
requests.
The `nice_priority` parameter can be used to set the program scheduling
priority. The `ionice_class` and `ionice_priority` parameters can be used to
set the I/O scheduling class and priority on systems that use an I/O scheduler
which supports I/O priorities. As of kernel 2.6.17 the only such scheduler is
the Completely Fair Queuing (CFQ) I/O scheduler. If you run all of your
storage services together on the same servers, you can use these parameters to
slow down the auditors or prioritize object-server I/O (you probably do not
need to change them on the proxy). This is a new feature and best practices
are still being developed. On some systems it may be necessary to run the
daemons as root. For more information see setpriority(2) and ioprio_set(2).
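As the man pages suggest, ``nice_priority`` corresponds to a setpriority(2)
call that the daemon makes on itself at startup. A minimal sketch of the
effect (the increment of 5 is an arbitrary example):

```python
import os

# Niceness of the current process before any adjustment
before = os.getpriority(os.PRIO_PROCESS, 0)

# Raise niceness by 5, i.e. lower the CPU scheduling priority,
# comparable in effect to configuring nice_priority for a daemon
os.nice(5)

after = os.getpriority(os.PRIO_PROCESS, 0)
```

``ionice_class`` and ``ionice_priority`` map to ioprio_set(2) in the same way,
but that syscall has no wrapper in the Python standard library.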
The above configuration settings should be taken as suggestions; test your
configuration to ensure the best utilization of CPU, network connectivity,
and disk I/O.
@ -1581,7 +1981,7 @@ We do not recommend running Swift on RAID, but if you are using
RAID it is also important to make sure that the proper sunit and swidth
settings get set so that XFS can make most efficient use of the RAID array.
For a standard swift install, all data drives are mounted directly under
For a standard Swift install, all data drives are mounted directly under
``/srv/node`` (as can be seen in the above example of mounting ``/dev/sda1`` as
``/srv/node/sda``). If you choose to mount the drives in another directory,
be sure to set the `devices` config option in all of the server configs to

@ -90,6 +90,11 @@ For example, this command would run the functional tests using policy
SWIFT_TEST_POLICY=silver tox -e func
To run a single functional test, use the ``--no-discover`` option together with
a path to a specific test method, for example::
tox -e func -- --no-discover test.functional.tests.TestFile.testCopy
In-process functional testing
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -106,9 +111,16 @@ set using environment variables:
- the optional in-memory object server may be selected by setting the
environment variable ``SWIFT_TEST_IN_MEMORY_OBJ`` to a true value.
- encryption may be added to the proxy pipeline by setting the
environment variable ``SWIFT_TEST_IN_PROCESS_CONF_LOADER`` to
``encryption``. Alternatively, when using tox, specify the tox environment
``func-in-process-encryption``.
- the proxy-server ``object_post_as_copy`` option may be set using the
environment variable ``SWIFT_TEST_IN_PROCESS_OBJECT_POST_AS_COPY``.
- logging to stdout may be enabled by setting ``SWIFT_TEST_DEBUG_LOGS``.
For example, this command would run the in-process mode functional tests with
the proxy-server using object_post_as_copy=False (the 'fast-POST' mode)::
@ -120,6 +132,12 @@ tox environment::
tox -e func-in-process-fast-post
To debug the functional tests, use the 'in-process test' mode and pass the
``--pdb`` flag to tox::
SWIFT_TEST_IN_PROCESS=1 tox -e func -- --pdb \
test.functional.tests.TestFile.testCopy
The 'in-process test' mode searches for ``proxy-server.conf`` and
``swift.conf`` config files from which it copies config options and overrides
some options to suit in process testing. The search will first look for config
@ -224,4 +242,3 @@ another year added, and date ranges are not needed.::
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

@ -196,10 +196,12 @@ headers)
All user resources in Swift (i.e. account, container, objects) can have
user metadata associated with them. Middleware may also persist custom
metadata to accounts and containers safely using System Metadata. Some
core swift features which predate sysmeta have added exceptions for
core Swift features which predate sysmeta have added exceptions for
custom non-user metadata headers (e.g. :ref:`acls`,
:ref:`large-objects`)
.. _usermeta:
^^^^^^^^^^^^^
User Metadata
^^^^^^^^^^^^^
@ -209,7 +211,7 @@ User metadata takes the form of ``X-<type>-Meta-<key>: <value>``, where
and ``<key>`` and ``<value>`` are set by the client.
User metadata should generally be reserved for use by the client or
client applications. An perfect example use-case for user metadata is
client applications. A perfect example use-case for user metadata is
`python-swiftclient`_'s ``X-Object-Meta-Mtime``, which it stores on
objects it uploads to implement its ``--changed`` option, which will only
upload files that have changed since the last upload.
@ -223,6 +225,20 @@ borrows the user metadata namespace is :ref:`tempurl`. An example of
middleware which uses custom non-user metadata to avoid the user
metadata namespace is :ref:`slo-doc`.
User metadata that is stored by a PUT or POST request to a container or account
resource persists until it is explicitly removed by a subsequent PUT or POST
request that includes a header ``X-<type>-Meta-<key>`` with no value or a
header ``X-Remove-<type>-Meta-<key>: <ignored-value>``. In the latter case the
``<ignored-value>`` is not stored. All user metadata stored with an account or
container resource is deleted when the account or container is deleted.
User metadata that is stored with an object resource has different semantics:
object user metadata persists until any subsequent PUT or POST request is made
to the same object, at which point all user metadata stored with that object is
deleted en masse and replaced with any user metadata included with the PUT or
POST request. As a result, it is not possible to update a subset of the user
metadata items stored with an object while leaving some items unchanged.
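The two update semantics can be modelled in a few lines of Python; this is
only an illustration of the rules above, not Swift's implementation:

```python
def post_account_or_container(stored, update):
    """Account/container POST: merge; empty values delete individual keys."""
    for key, value in update.items():
        if value:
            stored[key] = value
        else:
            stored.pop(key, None)
    return stored

def post_object(stored, update):
    """Object POST (or PUT): all previous user metadata is replaced en masse."""
    return dict(update)

acct = post_account_or_container({"color": "blue", "size": "big"},
                                 {"size": "", "shape": "round"})
# -> {"color": "blue", "shape": "round"}

obj = post_object({"color": "blue", "size": "big"}, {"shape": "round"})
# -> {"shape": "round"}
```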
.. _sysmeta:
^^^^^^^^^^^^^^^
@ -237,7 +253,7 @@ Swift WSGI Server.
All headers on client requests in the form of ``X-<type>-Sysmeta-<key>``
will be dropped from the request before being processed by any
middleware. All headers on responses from back-end systems in the form
of ``X-<type>-Sysmeta-<key>`` will be removed after all middleware has
of ``X-<type>-Sysmeta-<key>`` will be removed after all middlewares have
processed the response but before the response is sent to the client.
See :ref:`gatekeeper` middleware for more information.
@ -249,3 +265,52 @@ modified directly by client requests, and the outgoing filter ensures
that removing middleware that uses a specific system metadata key
renders it benign. New middleware should take advantage of system
metadata.
System metadata may be set on accounts and containers by including headers with
a PUT or POST request. Where a header name matches the name of an existing item
of system metadata, the value of the existing item will be updated. Otherwise
existing items are preserved. A system metadata header with an empty value will
cause any existing item with the same name to be deleted.
System metadata may be set on objects using only PUT requests. All items of
existing system metadata will be deleted and replaced en masse by any system
metadata headers included with the PUT request. System metadata is neither
updated nor deleted by a POST request: updating individual items of system
metadata with a POST request is not yet supported in the same way that updating
individual items of user metadata is not supported. In cases where middleware
needs to store its own metadata with a POST request, it may use Object Transient
Sysmeta.
.. _transient_sysmeta:
^^^^^^^^^^^^^^^^^^^^^^^^
Object Transient-Sysmeta
^^^^^^^^^^^^^^^^^^^^^^^^
If middleware needs to store object metadata with a POST request it may do so
using headers of the form ``X-Object-Transient-Sysmeta-<key>: <value>``.
All headers on client requests in the form of
``X-Object-Transient-Sysmeta-<key>`` will be dropped from the request before
being processed by any middleware. All headers on responses from back-end
systems in the form of ``X-Object-Transient-Sysmeta-<key>`` will be removed
after all middlewares have processed the response but before the response is
sent to the client. See :ref:`gatekeeper` middleware for more information.
Transient-sysmeta updates on an object have the same semantics as user
metadata updates on an object (see :ref:`usermeta`), i.e. whenever any PUT or
POST request is made to an object, all existing items of transient-sysmeta are
deleted en masse and replaced with any transient-sysmeta included with the PUT
or POST request. Transient-sysmeta set by a middleware is therefore prone to
deletion by a subsequent client-generated POST request unless the middleware is
careful to include its transient-sysmeta with every POST. Likewise, user
metadata set by a client is prone to deletion by a subsequent
middleware-generated POST request, and for that reason middleware should avoid
generating POST requests that are independent of any client request.
Transient-sysmeta deliberately uses a different header prefix from user
metadata so that middlewares can avoid potential conflicts with user metadata
keys. It likewise uses a different header prefix from system metadata to
emphasize the fact that the data is only persisted until a subsequent POST.

@ -38,7 +38,7 @@ Installing dependencies
sudo apt-get update
sudo apt-get install curl gcc memcached rsync sqlite3 xfsprogs \
git-core libffi-dev python-setuptools \
liberasurecode-dev
liberasurecode-dev libssl-dev
sudo apt-get install python-coverage python-dev python-nose \
python-xattr python-eventlet \
python-greenlet python-pastedeploy \
@ -50,7 +50,7 @@ Installing dependencies
sudo yum update
sudo yum install curl gcc memcached rsync sqlite xfsprogs git-core \
libffi-devel xinetd liberasurecode-devel \
python-setuptools \
openssl-devel python-setuptools \
python-coverage python-devel python-nose \
pyxattr python-eventlet \
python-greenlet python-paste-deploy \

@ -10,29 +10,29 @@ Object Storage installation guide for OpenStack Mitaka
------------------------------------------------------
* `openSUSE Leap 42.1 and SUSE Linux Enterprise Server 12 SP1 <http://docs.openstack.org/mitaka/install-guide-obs/swift.html>`_
* `RHEL 7, CentOS 7 <http://docs.openstack.org/mitaka/install-guide-rdo/swift.html>`_
* `Ubuntu 14.04 <http://docs.openstack.org/mitaka/install-guide-ubuntu/swift.html>`_
* `RHEL 7, CentOS 7 <http://docs.openstack.org/mitaka/install-guide-rdo/swift.html>`__
* `Ubuntu 14.04 <http://docs.openstack.org/mitaka/install-guide-ubuntu/swift.html>`__
Object Storage installation guide for OpenStack Liberty
-------------------------------------------------------
* `openSUSE 13.2 and SUSE Linux Enterprise Server 12 <http://docs.openstack.org/liberty/install-guide-obs/swift.html>`_
* `RHEL 7, CentOS 7 <http://docs.openstack.org/liberty/install-guide-rdo/swift.html>`_
* `Ubuntu 14.04 <http://docs.openstack.org/liberty/install-guide-ubuntu/swift.html>`_
* `openSUSE 13.2 and SUSE Linux Enterprise Server 12 <http://docs.openstack.org/liberty/install-guide-obs/swift.html>`__
* `RHEL 7, CentOS 7 <http://docs.openstack.org/liberty/install-guide-rdo/swift.html>`__
* `Ubuntu 14.04 <http://docs.openstack.org/liberty/install-guide-ubuntu/swift.html>`__
Object Storage installation guide for OpenStack Kilo
----------------------------------------------------
* `openSUSE 13.2 and SUSE Linux Enterprise Server 12 <http://docs.openstack.org/kilo/install-guide/install/zypper/content/ch_swift.html>`_
* `openSUSE 13.2 and SUSE Linux Enterprise Server 12 <http://docs.openstack.org/kilo/install-guide/install/zypper/content/ch_swift.html>`__
* `RHEL 7, CentOS 7, and Fedora 21 <http://docs.openstack.org/kilo/install-guide/install/yum/content/ch_swift.html>`_
* `Ubuntu 14.04 <http://docs.openstack.org/kilo/install-guide/install/apt/content/ch_swift.html>`_
* `Ubuntu 14.04 <http://docs.openstack.org/kilo/install-guide/install/apt/content/ch_swift.html>`__
Object Storage installation guide for OpenStack Juno
----------------------------------------------------
* `openSUSE 13.1 and SUSE Linux Enterprise Server 11 <http://docs.openstack.org/juno/install-guide/install/zypper/content/ch_swift.html>`_
* `RHEL 7, CentOS 7, and Fedora 20 <http://docs.openstack.org/juno/install-guide/install/yum/content/ch_swift.html>`_
* `Ubuntu 14.04 <http://docs.openstack.org/juno/install-guide/install/apt/content/ch_swift.html>`_
* `Ubuntu 14.04 <http://docs.openstack.org/juno/install-guide/install/apt/content/ch_swift.html>`__
Object Storage installation guide for OpenStack Icehouse
--------------------------------------------------------

View File

@ -52,11 +52,13 @@ Overview and Concepts
ratelimit
overview_large_objects
overview_object_versioning
overview_global_cluster
overview_container_sync
overview_expiring_objects
cors
crossdomain
overview_erasure_code
overview_encryption
overview_backing_store
ring_background
associated_projects

View File

@ -96,6 +96,26 @@ DLO support centers around a user specified filter that matches
segments and concatenates them together in object listing order. Please see
the DLO docs (:ref:`dlo-doc`) for further details.
.. _encryption:
Encryption
==========
Encryption middleware should be deployed in conjunction with the
:ref:`keymaster` middleware.
.. automodule:: swift.common.middleware.crypto
:members:
:show-inheritance:
.. automodule:: swift.common.middleware.crypto.encrypter
:members:
:show-inheritance:
.. automodule:: swift.common.middleware.crypto.decrypter
:members:
:show-inheritance:
.. _formpost:
FormPost
@ -108,7 +128,7 @@ FormPost
.. _gatekeeper:
GateKeeper
=============
==========
.. automodule:: swift.common.middleware.gatekeeper
:members:
@ -123,6 +143,18 @@ Healthcheck
:members:
:show-inheritance:
.. _keymaster:
Keymaster
=========
Keymaster middleware should be deployed in conjunction with the
:ref:`encryption` middleware.
.. automodule:: swift.common.middleware.crypto.keymaster
:members:
:show-inheritance:
.. _keystoneauth:
KeystoneAuth

View File

@ -102,7 +102,7 @@ reseller_request to True. This can be used by other middlewares.
TempAuth will now allow OPTIONS requests to go through without a token.
The user starts a session by sending a ReST request to the auth system to
The user starts a session by sending a REST request to the auth system to
receive the auth token and a URL to the Swift system.
-------------
@ -143,7 +143,7 @@ having this in your ``/etc/keystone/default_catalog.templates`` ::
catalog.RegionOne.object_store.adminURL = http://swiftproxy:8080/
catalog.RegionOne.object_store.internalURL = http://swiftproxy:8080/v1/AUTH_$(tenant_id)s
On your Swift Proxy server you will want to adjust your main pipeline
On your Swift proxy server you will want to adjust your main pipeline
and add auth_token and keystoneauth in your
``/etc/swift/proxy-server.conf`` like this ::
@ -326,7 +326,7 @@ Extending Auth
TempAuth is written as wsgi middleware, so implementing your own auth is as
easy as writing new wsgi middleware, and plugging it in to the proxy server.
The KeyStone project and the Swauth project are examples of additional auth
The Keystone project and the Swauth project are examples of additional auth
services.
Also, see :doc:`development_auth`.

View File

@ -14,9 +14,19 @@ synchronization key.
.. note::
If you are using the large objects feature you will need to ensure both
your manifest file and your segment files are synced if they happen to be
in different containers.
If you are using the large objects feature and syncing to another cluster
then you will need to ensure that manifest files and segment files are
synced. If segment files are in a different container than their manifest
then both the manifest's container and the segments' container must be
synced. The target container for synced segment files must always have the
same name as their source container in order for them to be resolved by
synced manifests.
.. note::
If you are using encryption middleware in the cluster from which objects
are being synced, then you should follow the instructions to configure
:ref:`container_sync_client_config` to be compatible with encryption.
--------------------------
Configuring Container Sync

View File

@ -0,0 +1,477 @@
=================
Object Encryption
=================
Swift supports the optional encryption of object data at rest on storage nodes.
The encryption of object data is intended to mitigate the risk of users' data
being read if an unauthorised party were to gain physical access to a disk.
.. note::
Swift's data-at-rest encryption accepts plaintext object data from the
client, encrypts it in the cluster, and stores the encrypted data. This
protects object data from inadvertently being exposed if a data drive
leaves the Swift cluster. If a user wishes to ensure that the plaintext
data is always encrypted while in transit and in storage, it is strongly
recommended that the data be encrypted before sending it to the Swift
cluster. Encrypting on the client side is the only way to ensure that the
data is fully encrypted for its entire lifecycle.
Encryption of data at rest is implemented by middleware that may be included in
the proxy server WSGI pipeline. The feature is internal to a Swift cluster and
not exposed through the API. Clients are unaware that data is encrypted by this
feature internally to the Swift service; internally encrypted data should never
be returned to clients via the Swift API.
The following data are encrypted while at rest in Swift:
* Object content i.e. the content of an object PUT request's body
* The entity tag (ETag) of objects that have non-zero content
* All custom user object metadata values i.e. metadata sent using
X-Object-Meta- prefixed headers with PUT or POST requests
Any data or metadata not included in the list above are not encrypted,
including:
* Account, container and object names
* Account and container custom user metadata values
* All custom user metadata names
* Object Content-Type values
* Object size
* System metadata
.. note::
This feature is intended to provide `confidentiality` of data that is at
rest i.e. to protect user data from being read by an attacker that gains
access to disks on which object data is stored.
This feature is not intended to prevent undetectable `modification`
of user data at rest.
This feature is not intended to protect against an attacker that gains
access to Swift's internal network connections, or gains access to key
material or is able to modify the Swift code running on Swift nodes.
.. _encryption_deployment:
------------------------
Deployment and operation
------------------------
Encryption is deployed by adding two middleware filters to the proxy
server WSGI pipeline and including their respective filter configuration
sections in the `proxy-server.conf` file. :ref:`Additional steps
<container_sync_client_config>` are required if the container sync feature is
being used.
The `keymaster` and `encryption` middleware filters must be to the right of all
other middleware in the pipeline apart from the final proxy-logging middleware,
and in the order shown in this example::
<other middleware> keymaster encryption proxy-logging proxy-server
[filter:keymaster]
use = egg:swift#keymaster
encryption_root_secret = your_secret
[filter:encryption]
use = egg:swift#encryption
# disable_encryption = False
See the `proxy-server.conf-sample` file for further details on the middleware
configuration options.
The keymaster config option ``encryption_root_secret`` MUST be set to a value
of at least 44 valid base-64 characters before the middleware is used and
should be consistent across all proxy servers. The minimum length of 44 has
been chosen because it is the length of a base-64 encoded 32 byte value.
.. note::
The ``encryption_root_secret`` option holds the master secret key used for
encryption. The security of all encrypted data critically depends on this
key and it should therefore be set to a high-entropy value. For example, a
suitable ``encryption_root_secret`` may be obtained by base-64 encoding a
32 byte (or longer) value generated by a cryptographically secure random
number generator.
The ``encryption_root_secret`` value is necessary to recover any encrypted
data from the storage system, and therefore, it must be guarded against
accidental loss. Its value (and consequently, the proxy-server.conf file)
should not be stored on any disk that is in any account, container or
object ring.
The ``encryption_root_secret`` value should not be changed once deployed.
Doing so would prevent Swift from properly decrypting data that was
encrypted using the former value, and would therefore result in the loss of
that data.
One method for generating a suitable value for ``encryption_root_secret`` is to
use the ``openssl`` command line tool::
openssl rand -base64 32
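Equivalently, a suitable value can be generated with the Python standard library. This is a minimal sketch of the same operation as the ``openssl`` command above:

```python
import base64
import os

def generate_root_secret():
    # Base-64 encode 32 bytes from a cryptographically secure RNG,
    # yielding the minimum 44-character value described above.
    return base64.b64encode(os.urandom(32)).decode('ascii')

secret = generate_root_secret()
assert len(secret) == 44
```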
Once deployed, the encryption filter will by default encrypt object data and
metadata when handling PUT and POST requests and decrypt object data and
metadata when handling GET and HEAD requests. COPY requests are transformed
into GET and PUT requests by the :ref:`copy` middleware before reaching the
encryption middleware and as a result object data and metadata is decrypted and
re-encrypted when copied.
Upgrade Considerations
----------------------
When upgrading an existing cluster to deploy encryption, the following sequence
of steps is recommended:
#. Upgrade all object servers
#. Upgrade all proxy servers
#. Add keymaster and encryption middlewares to every proxy server's middleware
pipeline with the encryption ``disable_encryption`` option set to ``True``
and the keymaster ``encryption_root_secret`` value set as described above.
#. If required, follow the steps for :ref:`container_sync_client_config`.
#. Finally, change the encryption ``disable_encryption`` option to ``False``
Objects that existed in the cluster prior to the keymaster and encryption
middlewares being deployed are still readable with GET and HEAD requests. The
content of those objects will not be encrypted unless they are written again by
a PUT or COPY request. Any user metadata of those objects will not be encrypted
unless it is written again by a PUT, POST or COPY request.
Disabling Encryption
--------------------
Once deployed, the keymaster and encryption middlewares should not be removed
from the pipeline. To do so will cause encrypted object data and/or metadata to
be returned in response to GET or HEAD requests for objects that were
previously encrypted.
Encryption of inbound object data may be disabled by setting the encryption
``disable_encryption`` option to ``True``, in which case existing encrypted
objects will remain encrypted but new data written with PUT, POST or COPY
requests will not be encrypted. The keymaster and encryption middlewares should
remain in the pipeline even when encryption of new objects is not required. The
encryption middleware is needed to handle GET requests for objects that may
have been previously encrypted. The keymaster is needed to provide keys for
those requests.
.. _container_sync_client_config:
Container sync configuration
----------------------------
If container sync is being used then the keymaster and encryption middlewares
must be added to the container sync internal client pipeline. The following
configuration steps are required:
#. Create a custom internal client configuration file for container sync (if
one is not already in use) based on the sample file
`internal-client.conf-sample`. For example, copy
`internal-client.conf-sample` to `/etc/swift/container-sync-client.conf`.
#. Modify this file to include the middlewares in the pipeline in
the same way as described above for the proxy server.
#. Modify the container-sync section of all container server config files to
point to this internal client config file using the
``internal_client_conf_path`` option. For example::
internal_client_conf_path = /etc/swift/container-sync-client.conf
.. note::
The ``encryption_root_secret`` value is necessary to recover any encrypted
data from the storage system, and therefore, it must be guarded against
accidental loss. Its value (and consequently, the custom internal client
configuration file) should not be stored on any disk that is in any
account, container or object ring.
.. note::
These container sync configuration steps will be necessary for container
sync probe tests to pass if the encryption middlewares are included in the
proxy pipeline of a test cluster.
--------------
Implementation
--------------
Encryption scheme
-----------------
Plaintext data is encrypted to ciphertext using the AES cipher with 256-bit
keys implemented by the python `cryptography package
<https://pypi.python.org/pypi/cryptography>`_. The cipher is used in counter
(CTR) mode so that any byte or range of bytes in the ciphertext may be
decrypted independently of any other bytes in the ciphertext. This enables very
simple handling of ranged GETs.
In general an item of unencrypted data, ``plaintext``, is transformed to an
item of encrypted data, ``ciphertext``::
ciphertext = E(plaintext, k, iv)
where ``E`` is the encryption function, ``k`` is an encryption key and ``iv``
is a unique initialization vector (IV) chosen for each encryption context. For
example, the object body is one encryption context with a randomly chosen IV.
The IV is stored as metadata of the encrypted item so that it is available for
decryption::
plaintext = D(ciphertext, k, iv)
where ``D`` is the decryption function.
The implementation of CTR mode follows `NIST SP800-38A
<http://csrc.nist.gov/publications/nistpubs/800-38a/sp800-38a.pdf>`_, and the
full IV passed to the encryption or decryption function serves as the initial
counter block.
In general any encrypted item has accompanying crypto-metadata that describes
the IV and the cipher algorithm used for the encryption::
crypto_metadata = {"iv": <16 byte value>,
"cipher": "AES_CTR_256"}
This crypto-metadata is stored either with the ciphertext (for user
metadata and etags) or as a separate header (for object bodies).
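The byte-range independence of CTR mode can be illustrated with a short sketch. This is a toy illustration only: a hash of the key and counter block stands in for the AES-encrypted counter block, whereas Swift's actual implementation uses AES-256 in CTR mode from the ``cryptography`` package as described above.

```python
import hashlib

def keystream_block(key, iv, block_index):
    # Toy stand-in for AES(counter): hash the key together with the
    # counter block. Swift's real implementation encrypts the counter
    # block with AES-256 via the 'cryptography' package.
    counter = (int.from_bytes(iv, 'big') + block_index) % (1 << 128)
    return hashlib.sha256(key + counter.to_bytes(16, 'big')).digest()[:16]

def ctr_xcrypt(data, key, iv, offset=0):
    # CTR mode is symmetric: the same function encrypts and decrypts.
    # `offset` is the absolute byte position of `data` in the stream.
    out = bytearray()
    for i, byte in enumerate(data):
        pos = offset + i
        block = keystream_block(key, iv, pos // 16)
        out.append(byte ^ block[pos % 16])
    return bytes(out)

key, iv = b'k' * 32, b'\x00' * 16
plaintext = b'hello swift object body'
ciphertext = ctr_xcrypt(plaintext, key, iv)
assert ctr_xcrypt(ciphertext, key, iv) == plaintext
# Any byte range decrypts independently -- the property behind
# simple handling of ranged GETs:
assert ctr_xcrypt(ciphertext[6:11], key, iv, offset=6) == b'swift'
```

Because each keystream byte depends only on the key, the IV and the absolute byte position, a decrypter can service a ranged GET without reading any other part of the ciphertext.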
Key management
--------------
A keymaster middleware is responsible for providing the keys required for each
encryption and decryption operation. Two keys are required when handling object
requests: a `container key` that is uniquely associated with the container path
and an `object key` that is uniquely associated with the object path. These
keys are made available to the encryption middleware via a callback function
that the keymaster installs in the WSGI request environ.
The current keymaster implementation derives container and object keys from the
``encryption_root_secret`` in a deterministic way by constructing a SHA256
HMAC using the ``encryption_root_secret`` as a key and the container or object
path as a message, for example::
object_key = HMAC(encryption_root_secret, "/a/c/o")
Other strategies for providing object and container keys may be employed by
future implementations of alternative keymaster middleware.
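The derivation described above can be sketched with the standard library; the exact byte encoding of the path used by the default keymaster is an assumption here:

```python
import hashlib
import hmac

def derive_key(root_secret, path):
    # HMAC-SHA256(root_secret, path), as described above. The exact
    # path encoding is an assumption made for illustration.
    return hmac.new(root_secret, path.encode('utf-8'),
                    digestmod=hashlib.sha256).digest()

root = b'example-root-secret'  # hypothetical value, not a real secret
container_key = derive_key(root, '/a/c')
object_key = derive_key(root, '/a/c/o')
assert len(object_key) == 32 and container_key != object_key
```

The derivation is deterministic, so every proxy server configured with the same ``encryption_root_secret`` independently arrives at the same container and object keys.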
During each object PUT, a random key is generated to encrypt the object body.
This random key is then encrypted using the object key provided by the
keymaster. This makes it safe to store the encrypted random key alongside the
encrypted object data and metadata.
This process of `key wrapping` enables more efficient re-keying events when the
object key may need to be replaced and consequently any data encrypted using
that key must be re-encrypted. Key wrapping minimizes the amount of data
encrypted using those keys to just other randomly chosen keys which can be
re-wrapped efficiently without needing to re-encrypt the larger amounts of data
that were encrypted using the random keys.
.. note::
Re-keying is not currently implemented. Key wrapping is implemented
in anticipation of future re-keying operations.
Encryption middleware
---------------------
The encryption middleware is composed of an `encrypter` component and a
`decrypter` component.
Encrypter operation
^^^^^^^^^^^^^^^^^^^
Custom user metadata
++++++++++++++++++++
The encrypter encrypts each item of custom user metadata using the object key
provided by the keymaster and an IV that is randomly chosen for that metadata
item. The encrypted values are stored as :ref:`transient_sysmeta` with
associated crypto-metadata appended to the encrypted value. For example::
X-Object-Meta-Private1: value1
X-Object-Meta-Private2: value2
are transformed to::
X-Object-Transient-Sysmeta-Crypto-Meta-Private1:
E(value1, object_key, header_iv_1); swift_meta={"iv": header_iv_1,
"cipher": "AES_CTR_256"}
X-Object-Transient-Sysmeta-Crypto-Meta-Private2:
E(value2, object_key, header_iv_2); swift_meta={"iv": header_iv_2,
"cipher": "AES_CTR_256"}
The unencrypted custom user metadata headers are removed.
Object body
+++++++++++
Encryption of an object body is performed using a randomly chosen body key
and a randomly chosen IV::
body_ciphertext = E(body_plaintext, body_key, body_iv)
The body_key is wrapped using the object key provided by the keymaster and a
randomly chosen IV::
wrapped_body_key = E(body_key, object_key, body_key_iv)
The encrypter stores the associated crypto-metadata in a system metadata
header::
X-Object-Sysmeta-Crypto-Body-Meta:
{"iv": body_iv,
"cipher": "AES_CTR_256",
"body_key": {"key": wrapped_body_key,
"iv": body_key_iv}}
Note that in this case there is an extra item of crypto-metadata which stores
the wrapped body key and its IV.
Entity tag
++++++++++
While encrypting the object body the encrypter also calculates the ETag (md5
digest) of the plaintext body. This value is encrypted using the object key
provided by the keymaster and a randomly chosen IV, and saved as an item of
system metadata, with associated crypto-metadata appended to the encrypted
value::
X-Object-Sysmeta-Crypto-Etag:
E(md5(plaintext), object_key, etag_iv); swift_meta={"iv": etag_iv,
"cipher": "AES_CTR_256"}
The encrypter also forces an encrypted version of the plaintext ETag to be sent
with container updates by adding an update override header to the PUT request.
The associated crypto-metadata is appended to the encrypted ETag value of this
update override header::
X-Object-Sysmeta-Container-Update-Override-Etag:
E(md5(plaintext), container_key, override_etag_iv);
meta={"iv": override_etag_iv, "cipher": "AES_CTR_256"}
The container key is used for this encryption so that the decrypter is able
to decrypt the ETags in container listings when handling a container request,
since object keys may not be available in that context.
Since the plaintext ETag value is only known once the encrypter has completed
processing the entire object body, the ``X-Object-Sysmeta-Crypto-Etag`` and
``X-Object-Sysmeta-Container-Update-Override-Etag`` headers are sent after the
encrypted object body using the proxy server's support for request footers.
.. _conditional_requests:
Conditional Requests
++++++++++++++++++++
In general, an object server evaluates conditional requests with
``If[-None]-Match`` headers by comparing values listed in an
``If[-None]-Match`` header against the ETag that is stored in the object
metadata. This is not possible when the ETag stored in object metadata has been
encrypted. The encrypter therefore calculates an HMAC using the object key and
the ETag while handling object PUT requests, and stores this under the metadata
key ``X-Object-Sysmeta-Crypto-Etag-Mac``::
X-Object-Sysmeta-Crypto-Etag-Mac: HMAC(object_key, md5(plaintext))
Like other ETag-related metadata, this is sent after the encrypted object body
using the proxy server's support for request footers.
The encrypter similarly calculates an HMAC for each ETag value included in
``If[-None]-Match`` headers of conditional GET or HEAD requests, and appends
these to the ``If[-None]-Match`` header. The encrypter also sets the
``X-Backend-Etag-Is-At`` header to point to the previously stored
``X-Object-Sysmeta-Crypto-Etag-Mac`` metadata so that the object server
evaluates the conditional request by comparing the HMAC values included in the
``If[-None]-Match`` with the value stored under
``X-Object-Sysmeta-Crypto-Etag-Mac``. For example, given a conditional request
with header::
If-Match: match_etag
the encrypter would transform the request headers to include::
If-Match: match_etag,HMAC(object_key, match_etag)
X-Backend-Etag-Is-At: X-Object-Sysmeta-Crypto-Etag-Mac
This enables the object server to perform an encrypted comparison to check
whether the ETags match, without leaking the ETag itself or leaking information
about the object body.
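A minimal sketch of this comparison follows; the hex encoding of the HMAC value is an assumption made for illustration:

```python
import hashlib
import hmac

def etag_hmac(object_key, etag):
    # HMAC(object_key, md5(plaintext)) as described above; hex digest
    # encoding is an assumption made for illustration.
    return hmac.new(object_key, etag.encode('ascii'),
                    digestmod=hashlib.sha256).hexdigest()

object_key = b'\x01' * 32  # hypothetical object key
stored_mac = etag_hmac(object_key, 'd41d8cd98f00b204e9800998ecf8427e')
# The object server compares HMACs appended to the transformed If-Match
# header against the stored X-Object-Sysmeta-Crypto-Etag-Mac value:
assert etag_hmac(object_key, 'd41d8cd98f00b204e9800998ecf8427e') == stored_mac
```

Only a party holding the object key can produce a matching HMAC, so the comparison succeeds exactly when the client-supplied ETag matches the plaintext ETag, without the object server ever seeing the plaintext value.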
Decrypter operation
^^^^^^^^^^^^^^^^^^^
For each GET or HEAD request to an object, the decrypter inspects the response
for encrypted items (revealed by crypto-metadata headers), and if any are
discovered then it will:
#. Fetch the object and container keys from the keymaster via its callback
#. Decrypt the ``X-Object-Sysmeta-Crypto-Etag`` value
#. Decrypt the ``X-Object-Sysmeta-Container-Update-Override-Etag`` value
#. Decrypt metadata header values using the object key
#. Decrypt the wrapped body key found in ``X-Object-Sysmeta-Crypto-Body-Meta``
#. Decrypt the body using the body key
For each GET request to a container that would include ETags in its response
body, the decrypter will:
#. GET the response body with the container listing
#. Fetch the container key from the keymaster via its callback
#. Decrypt any encrypted ETag entries in the container listing using the
container key
Impact on other Swift services and features
-------------------------------------------
Encryption has no impact on :ref:`versioned_writes` other than that any
previously unencrypted objects will be encrypted as they are copied to or from
the versions container. Keymaster and encryption middlewares should be placed
after ``versioned_writes`` in the proxy server pipeline, as described in
:ref:`encryption_deployment`.
`Container Sync` uses an internal client to GET objects that are to be sync'd.
This internal client must be configured to use the keymaster and encryption
middlewares as described :ref:`above <container_sync_client_config>`.
Encryption has no impact on the `object-auditor` service. Since the ETag
header saved with the object at rest is the md5 sum of the encrypted object
body then the auditor will verify that encrypted data is valid.
Encryption has no impact on the `object-expirer` service. ``X-Delete-At`` and
``X-Delete-After`` headers are not encrypted.
Encryption has no impact on the `object-replicator` and `object-reconstructor`
services. These services are unaware of the object or EC fragment data being
encrypted.
Encryption has no impact on the `container-reconciler` service. The
`container-reconciler` uses an internal client to move objects between
different policy rings. The destination object has the same URL as the source
object and the object is moved without re-encryption.
Considerations for developers
-----------------------------
Developers should be aware that keymaster and encryption middlewares rely on
the path of an object remaining unchanged. The included keymaster derives keys
for containers and objects based on their paths and the
``encryption_root_secret``. The keymaster does not rely on object metadata to
inform its generation of keys for GET and HEAD requests because when handling
:ref:`conditional_requests` it is required to provide the object key before any
metadata has been read from the object.
Developers should therefore give careful consideration to any new features that
would relocate object data and metadata within a Swift cluster by means that do
not cause the object data and metadata to pass through the encryption
middlewares in the proxy pipeline and be re-encrypted.
The crypto-metadata associated with each encrypted item does include some
`key_id` metadata that is provided by the keymaster and contains the path used
to derive keys. This `key_id` metadata is persisted in anticipation of future
scenarios when it may be necessary to decrypt an object that has been relocated
without re-encrypting, in which case the metadata could be used to derive the
keys that were used for encryption. However, this alone is not sufficient to
handle conditional requests and to decrypt container listings where objects
have been relocated, and further work will be required to solve those issues.

View File

@ -12,7 +12,7 @@ As expiring objects are added to the system, the object servers will record the
Usually, just one instance of the ``swift-object-expirer`` daemon needs to run for a cluster. This isn't exactly automatic failover high availability, but if this daemon doesn't run for a few hours it should not be any real issue. The expired-but-not-yet-deleted objects will still ``404 Not Found`` if someone tries to ``GET`` or ``HEAD`` them and they'll just be deleted a bit later when the daemon is restarted.
By default, the ``swift-object-expirer`` daemon will run with a concurrency of 1. Increase this value to get more concurrency. A concurrency of 1 may not be enough to delete expiring objects in a timely fashion for a particular swift cluster.
By default, the ``swift-object-expirer`` daemon will run with a concurrency of 1. Increase this value to get more concurrency. A concurrency of 1 may not be enough to delete expiring objects in a timely fashion for a particular Swift cluster.
It is possible to run multiple daemons to do different parts of the work if a single process with a concurrency of more than 1 is not enough (see the sample config file for details).

View File

@ -0,0 +1,133 @@
===============
Global Clusters
===============
--------
Overview
--------
Swift's default configuration is currently designed to work in a
single region, where a region is defined as a group of machines with
high-bandwidth, low-latency links between them. However, configuration
options exist that make running a performant multi-region Swift
cluster possible.
For the rest of this section, we will assume a two-region Swift
cluster: region 1 in San Francisco (SF), and region 2 in New York
(NY). Each region shall contain within it 3 zones, numbered 1, 2, and
3, for a total of 6 zones.
---------------------------
Configuring Global Clusters
---------------------------
~~~~~~~~~~~~~
read_affinity
~~~~~~~~~~~~~
This setting, combined with the sorting_method setting, makes the proxy
server prefer local backend servers for GET and HEAD requests over
non-local ones. For example, it is preferable for an SF proxy server
to service object GET requests by talking to SF object servers, as the
client will receive lower latency and higher throughput.
By default, Swift randomly chooses one of the three replicas to give
to the client, thereby spreading the load evenly. In the case of a
geographically-distributed cluster, the administrator is likely to
prioritize keeping traffic local over even distribution of results.
This is where the read_affinity setting comes in.
Example::
[app:proxy-server]
sorting_method = affinity
read_affinity = r1=100
This will make the proxy attempt to service GET and HEAD requests from
backends in region 1 before contacting any backends in region 2.
However, if no region 1 backends are available (due to replica
placement, failed hardware, or other reasons), then the proxy will
fall back to backend servers in other regions.
Example::
[app:proxy-server]
sorting_method = affinity
read_affinity = r1z1=100, r1=200
This will make the proxy attempt to service GET and HEAD requests from
backends in region 1 zone 1, then backends in region 1, then any other
backends. If a proxy is physically close to a particular zone or
zones, this can provide bandwidth savings. For example, if a zone
corresponds to servers in a particular rack, and the proxy server is
in that same rack, then setting read_affinity to prefer reads from
within the rack will result in less traffic between the top-of-rack
switches.
The read_affinity setting may contain any number of region/zone
specifiers; the priority number (after the equals sign) determines the
ordering in which backend servers will be contacted. A lower number
means higher priority.
Note that read_affinity only affects the ordering of primary nodes
(see ring docs for definition of primary node), not the ordering of
handoff nodes.
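The priority ordering can be sketched as follows. This is a simplified, hypothetical parser written for illustration; Swift's real read_affinity parsing lives in ``swift.common.utils`` and handles more edge cases:

```python
def affinity_sort_key(rules):
    # Parse a read_affinity-style rule string such as 'r1z1=100, r1=200'
    # into (region, zone, priority) tuples. zone is None for region-wide
    # rules like 'r1=200'.
    parsed = []
    for rule in rules.split(','):
        spec, prio = rule.strip().split('=')
        region, _, zone = spec[1:].partition('z')
        parsed.append((int(region), int(zone) if zone else None, int(prio)))

    def key(node):
        # A node's priority is the best (lowest) priority of any matching
        # rule; nodes matching no rule sort after all matching nodes.
        matches = [prio for region, zone, prio in parsed
                   if node['region'] == region and zone in (None, node['zone'])]
        return min(matches) if matches else float('inf')
    return key

nodes = [{'region': 2, 'zone': 1}, {'region': 1, 'zone': 2},
         {'region': 1, 'zone': 1}]
nodes.sort(key=affinity_sort_key('r1z1=100, r1=200'))
assert [n['region'] for n in nodes] == [1, 1, 2]
assert nodes[0]['zone'] == 1
```

With ``r1z1=100, r1=200``, region 1 zone 1 nodes sort first (priority 100), then other region 1 nodes (priority 200), then everything else.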
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
write_affinity and write_affinity_node_count
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This setting makes the proxy server prefer local backend servers for
object PUT requests over non-local ones. For example, it may be
preferable for an SF proxy server to service object PUT requests
by talking to SF object servers, as the client will receive lower
latency and higher throughput. However, if this setting is used, note
that a NY proxy server handling a GET request for an object that was
PUT using write affinity may have to fetch it across the WAN link, as
the object won't immediately have any replicas in NY. However,
replication will move the object's replicas to their proper homes in
both SF and NY.
Note that only object PUT requests are affected by the write_affinity
setting; POST, GET, HEAD, DELETE, OPTIONS, and account/container PUT
requests are not affected.
This setting lets you trade data distribution for throughput. If
write_affinity is enabled, then object replicas will initially be
stored all within a particular region or zone, thereby decreasing the
quality of the data distribution, but the replicas will be distributed
over fast WAN links, giving higher throughput to clients. Note that
the replicators will eventually move objects to their proper,
well-distributed homes.
The write_affinity setting is useful only when you don't typically
read objects immediately after writing them. For example, consider a
workload of mainly backups: if you have a bunch of machines in NY that
periodically write backups to Swift, then odds are that you don't then
immediately read those backups in SF. If your workload doesn't look
like that, then you probably shouldn't use write_affinity.
The write_affinity_node_count setting is only useful in conjunction
with write_affinity; it governs how many local object servers will be
tried before falling back to non-local ones.
Example::
[app:proxy-server]
write_affinity = r1
write_affinity_node_count = 2 * replicas
Assuming 3 replicas, this configuration will make object PUTs try
storing the object's replicas on up to 6 disks ("2 * replicas") in
region 1 ("r1"). The proxy server tries to find 3 devices for storing
the object. If a device is unavailable, it queries the ring for the 4th
device, and so on, up to the 6th device. If the 6th device is also
unavailable, the last replica will be sent to another region. This does
not mean there will be 6 replicas in region 1.
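As a rough illustration (a simplified sketch, not Swift's actual
implementation), the "<N> * replicas" form of write_affinity_node_count
can be parsed like this:

```python
def parse_affinity_node_count(value, replicas):
    """Parse a write_affinity_node_count value.

    Accepts a plain integer ("6") or an expression of the form
    "<N> * replicas" ("2 * replicas").  A simplified sketch of the
    parsing the proxy server performs, for illustration only.
    """
    value = value.lower().strip()
    if value.endswith('* replicas'):
        multiplier = int(value[:-len('* replicas')].strip())
        return multiplier * replicas
    return int(value)

# With 3 replicas, "2 * replicas" allows up to 6 local devices.
print(parse_affinity_node_count('2 * replicas', 3))  # 6
```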
You should be aware that, if you have data coming into SF faster than
your replicators are transferring it to NY, then your cluster's data
distribution will get worse and worse over time as objects pile up in SF.
If this happens, it is recommended to disable write_affinity and simply let
object PUTs traverse the WAN link, as that will naturally limit the
object growth rate to what your WAN link can handle.
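The local-first device ordering described above can be sketched as
follows; `sort_nodes_by_write_affinity` is a hypothetical helper, not
Swift's actual code, which additionally supports zone-level affinity and
shuffling within each group:

```python
def sort_nodes_by_write_affinity(nodes, local_region):
    """Order candidate nodes so that nodes in the local region come
    first, preserving ring order within each group.  The proxy then
    works through this list until enough devices accept the PUT."""
    local = [n for n in nodes if n['region'] == local_region]
    remote = [n for n in nodes if n['region'] != local_region]
    return local + remote

# Region 1 (e.g. SF) is local; region 2 (e.g. NY) is the fallback.
nodes = [
    {'id': 0, 'region': 2},
    {'id': 1, 'region': 1},
    {'id': 2, 'region': 2},
    {'id': 3, 'region': 1},
]
print([n['id'] for n in sort_nodes_by_write_affinity(nodes, 1)])  # [1, 3, 0, 2]
```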

View File

@@ -90,7 +90,7 @@ History
Dynamic large object support has gone through various iterations before
settling on this implementation.
The primary factor driving the limitation of object size in swift is
The primary factor driving the limitation of object size in Swift is
maintaining balance among the partitions of the ring. To maintain an even
dispersion of disk usage throughout the cluster the obvious storage pattern
was to simply split larger objects into smaller segments, which could then be
@@ -121,7 +121,7 @@ The current "user manifest" design was chosen in order to provide a
transparent download of large objects to the client and still provide the
uploading client a clean API to support segmented uploads.
To meet as many use cases as possible swift supports two types of large
To meet as many use cases as possible Swift supports two types of large
object manifests. Dynamic and static large object manifests both support
the same idea of allowing the user to upload many segments to be later
downloaded as a single file.
@@ -143,7 +143,7 @@ also improves concurrent upload speed. It has the disadvantage that the
manifest is finalized once PUT. Any changes to it means it has to be replaced.
Between these two methods the user has great flexibility in how (s)he chooses
to upload and retrieve large objects to swift. Swift does not, however, stop
to upload and retrieve large objects to Swift. Swift does not, however, stop
the user from harming themselves. In both cases the segments are deletable by
the user at any time. If a segment was deleted by mistake, a dynamic large
object, having no way of knowing it was ever there, would happily ignore the

View File

@@ -49,7 +49,7 @@ Containers and Policies
Policies are implemented at the container level. There are many advantages to
this approach, not the least of which is how easy it makes life on
applications that want to take advantage of them. It also ensures that
Storage Policies remain a core feature of swift independent of the auth
Storage Policies remain a core feature of Swift independent of the auth
implementation. Policies were not implemented at the account/auth layer
because it would require changes to all auth systems in use by Swift
deployers. Each container has a new special immutable metadata element called

View File

@@ -18,7 +18,7 @@ account-server.conf to delay the actual deletion of data. At this time, there
is no utility to undelete an account; one would have to update the account
database replicas directly, setting the status column to an empty string and
updating the put_timestamp to be greater than the delete_timestamp. (On the
TODO list is writing a utility to perform this task, preferably through a ReST
TODO list is writing a utility to perform this task, preferably through a REST
call.)
The account reaper runs on each account server and scans the server
@@ -53,7 +53,7 @@ History
At first, a simple approach of deleting an account through completely external
calls was considered as it required no changes to the system. All data would
simply be deleted in the same way the actual user would, through the public
ReST API. However, the downside was that it would use proxy resources and log
REST API. However, the downside was that it would use proxy resources and log
everything when it didn't really need to. Also, it would likely need a
dedicated server or two, just for issuing the delete requests.

View File

@@ -2,7 +2,7 @@
Replication
===========
Because each replica in swift functions independently, and clients generally
Because each replica in Swift functions independently, and clients generally
require only a simple majority of nodes responding to consider an operation
successful, transient failures like network partitions can quickly cause
replicas to diverge. These differences are eventually reconciled by

View File

@@ -22,7 +22,7 @@ number, each replica will be assigned to a different device in the ring.
Devices are added to the ring to describe the capacity available for
part-replica assignment. Devices are placed into failure domains consisting
of region, zone, and server. Regions can be used to describe geo-graphically
of region, zone, and server. Regions can be used to describe geographical
systems characterized by lower-bandwidth or higher latency between machines in
different regions. Many rings will consist of only a single region. Zones
can be used to group devices based on physical locations, power separations,
@@ -80,7 +80,8 @@ the list of devices is a dictionary with the following keys:
====== ======= ==============================================================
id integer The index into the list devices.
zone integer The zone the devices resides in.
zone integer The zone the device resides in.
region integer The region the zone resides in.
weight float The relative weight of the device in comparison to other
devices. This usually corresponds directly to the amount of
disk space the device has compared to other devices. For

View File

@@ -4,7 +4,7 @@
Rate Limiting
=============
Rate limiting in swift is implemented as a pluggable middleware. Rate
Rate limiting in Swift is implemented as a pluggable middleware. Rate
limiting is performed on requests that result in database writes to the
account and container sqlite dbs. It uses memcached and is dependent on
the proxy servers having highly synchronized time. The rate limits are

View File

@@ -51,6 +51,18 @@ bind_port = 6202
# space you'd like fallocate to reserve, whether there is space for the given
# file size or not. Percentage will be used if the value ends with a '%'.
# fallocate_reserve = 1%
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Works only with ionice_class.
# ionice_class =
# ionice_priority =
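For illustration, a DEFAULT section that runs the server processes at
reduced CPU and I/O priority might look like the following (the values
here are hypothetical examples; both settings are unset by default):

```ini
[DEFAULT]
# Mildly deprioritize CPU scheduling (0 is the default niceness).
nice_priority = 10
# Best-effort I/O class at its lowest priority level within the class.
ionice_class = IOPRIO_CLASS_BE
ionice_priority = 7
```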
[pipeline:main]
pipeline = healthcheck recon account-server
@@ -73,6 +85,18 @@ use = egg:swift#account
# verbs, set to "False". Unless you have a separate replication network, you
# should not specify any value for "replication_server". Default is empty.
# replication_server = false
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Works only with ionice_class.
# ionice_class =
# ionice_priority =
[filter:healthcheck]
use = egg:swift#healthcheck
@@ -127,6 +151,18 @@ use = egg:swift#recon
# rsync_module = {replication_ip}::account
#
# recon_cache_path = /var/cache/swift
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Works only with ionice_class.
# ionice_class =
# ionice_priority =
[account-auditor]
# You can override the default log routing for this app here (don't use set!):
@@ -140,6 +176,18 @@ use = egg:swift#recon
#
# accounts_per_second = 200
# recon_cache_path = /var/cache/swift
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Works only with ionice_class.
# ionice_class =
# ionice_priority =
[account-reaper]
# You can override the default log routing for this app here (don't use set!):
@@ -158,7 +206,7 @@ use = egg:swift#recon
# seconds; 2592000 = 30 days for example.
# delay_reaping = 0
#
# If the account fails to be be reaped due to a persistent error, the
# If the account fails to be reaped due to a persistent error, the
# account reaper will log a message such as:
# Account <name> has not been reaped since <date>
# You can search logs for this message if space is not being reclaimed
@@ -166,6 +214,18 @@ use = egg:swift#recon
# Default is 2592000 seconds (30 days). This is in addition to any time
# requested by delay_reaping.
# reap_warn_after = 2592000
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Works only with ionice_class.
# ionice_class =
# ionice_priority =
# Note: Put it at the beginning of the pipeline to profile all middleware. But
# it is safer to put this after healthcheck.

View File

@@ -22,6 +22,18 @@
# log_statsd_default_sample_rate = 1.0
# log_statsd_sample_rate_factor = 1.0
# log_statsd_metric_prefix =
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Works only with ionice_class.
# ionice_class =
# ionice_priority =
[container-reconciler]
# The reconciler will re-attempt reconciliation if the source object is not
@@ -32,6 +44,18 @@
# interval = 30
# Server errors from requests will be retried by default
# request_tries = 3
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Works only with ionice_class.
# ionice_class =
# ionice_priority =
[pipeline:main]
pipeline = catch_errors proxy-logging cache proxy-server

View File

@@ -57,6 +57,18 @@ bind_port = 6201
# space you'd like fallocate to reserve, whether there is space for the given
# file size or not. Percentage will be used if the value ends with a '%'.
# fallocate_reserve = 1%
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Works only with ionice_class.
# ionice_class =
# ionice_priority =
[pipeline:main]
pipeline = healthcheck recon container-server
@@ -82,6 +94,18 @@ use = egg:swift#container
# verbs, set to "False". Unless you have a separate replication network, you
# should not specify any value for "replication_server".
# replication_server = false
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Works only with ionice_class.
# ionice_class =
# ionice_priority =
[filter:healthcheck]
use = egg:swift#healthcheck
@@ -136,6 +160,18 @@ use = egg:swift#recon
# rsync_module = {replication_ip}::container
#
# recon_cache_path = /var/cache/swift
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Works only with ionice_class.
# ionice_class =
# ionice_priority =
[container-updater]
# You can override the default log routing for this app here (don't use set!):
@@ -156,6 +192,18 @@ use = egg:swift#recon
# account_suppression_time = 60
#
# recon_cache_path = /var/cache/swift
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Works only with ionice_class.
# ionice_class =
# ionice_priority =
[container-auditor]
# You can override the default log routing for this app here (don't use set!):
@@ -169,6 +217,18 @@ use = egg:swift#recon
#
# containers_per_second = 200
# recon_cache_path = /var/cache/swift
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Works only with ionice_class.
# ionice_class =
# ionice_priority =
[container-sync]
# You can override the default log routing for this app here (don't use set!):
@@ -195,6 +255,18 @@ use = egg:swift#recon
#
# Internal client config file path
# internal_client_conf_path = /etc/swift/internal-client.conf
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Works only with ionice_class.
# ionice_class =
# ionice_priority =
# Note: Put it at the beginning of the pipeline to profile all middleware. But
# it is safer to put this after healthcheck.

Some files were not shown because too many files have changed in this diff.