Relocate Swift configuration docs to os_swift role

Change-Id: Ic3e6540556eeb0518d8e6c59b32f38a427071799
Travis Truman 2016-08-11 10:34:43 -05:00
parent 9d81f521d2
commit d3305e4be0
10 changed files with 678 additions and 26 deletions


@@ -1,20 +1,11 @@
-OpenStack-Ansible Swift
-#######################
 :tags: openstack, swift, cloud, ansible
 :category: \*nix
+================================
+Swift role for OpenStack-Ansible
+================================
-Ansible role to install OpenStack Swift and Swift registry.
+Ansible role to install OpenStack swift and swift registry.
 This role will install the following:
     * swift
+Documentation for the project can be found at:
+  http://docs.openstack.org/developer/openstack-ansible-os_swift
-.. code-block:: yaml
-
-    - name: Install swift server
-      hosts: swift_all
-      user: root
-      roles:
-        - { role: "os_swift", tags: [ "os-swift" ] }
-      vars:
-        external_lb_vip_address: 172.16.24.1
-        internal_lb_vip_address: 192.168.0.1
+The project home is at:
+  http://launchpad.net/openstack-ansible


@@ -0,0 +1,35 @@
`Home <index.html>`_ OpenStack-Ansible Swift
Add to existing deployment
==========================
Complete the following procedure to add Object Storage (swift) to an
existing deployment.
#. `The section called "Configure and mount storage
devices" <configure-swift-devices.html>`_
#. `The section called "Configure an Object Storage
deployment" <configure-swift-config.html>`_
#. Optionally, allow all keystone users to use swift by setting
``swift_allow_all_users`` in the ``user_variables.yml`` file to
``True``. Any users with the ``_member_`` role (all authorized
keystone users) can create containers and upload objects
to swift.
If this value is ``False``, by default only users with the
``admin`` or ``swiftoperator`` role can create containers or
manage tenants.
When the backend type for the Image Service (glance) is set to
``swift``, glance can access the swift cluster
regardless of whether this value is ``True`` or ``False``.
#. Run the swift playbook:
.. code-block:: shell-session
# cd /opt/openstack-ansible/playbooks
# openstack-ansible os-swift-install.yml
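For reference, a minimal sketch of the optional override from step 3. The
setting is assumed to live in ``/etc/openstack_deploy/user_variables.yml``:
.. code-block:: yaml

   # Optional, from step 3: allow any authorized keystone user to
   # create containers and upload objects.
   swift_allow_all_users: True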


@@ -0,0 +1,322 @@
`Home <index.html>`_ OpenStack-Ansible Swift
Configuring the service
=======================
**Procedure 5.2. Updating the Object Storage configuration ``swift.yml``
file**
#. Copy the ``/etc/openstack_deploy/conf.d/swift.yml.example`` file to
``/etc/openstack_deploy/conf.d/swift.yml``:
.. code-block:: shell-session
# cp /etc/openstack_deploy/conf.d/swift.yml.example \
/etc/openstack_deploy/conf.d/swift.yml
#. Update the global override values:
.. code-block:: yaml
# global_overrides:
# swift:
# part_power: 8
# weight: 100
# min_part_hours: 1
# repl_number: 3
# storage_network: 'br-storage'
# replication_network: 'br-repl'
# drives:
# - name: sdc
# - name: sdd
# - name: sde
# - name: sdf
# mount_point: /srv/node
# account:
# container:
# storage_policies:
# - policy:
# name: gold
# index: 0
# default: True
# - policy:
# name: silver
# index: 1
# repl_number: 3
# deprecated: True
# statsd_host: statsd.example.com
# statsd_port: 8125
# statsd_metric_prefix:
# statsd_default_sample_rate: 1.0
# statsd_sample_rate_factor: 1.0
``part_power``
Set the partition power value based on the total amount of
storage the entire ring uses.
Multiply the maximum number of drives ever expected in the swift
installation by 100, and round that value up to the nearest power of
two. For example, a maximum of six drives, times 100, equals 600. The
nearest power of two above 600 is two to the power of ten (1024), so
the partition power is ten. The partition power cannot be changed
after the swift rings are built.
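As a hedged illustration of the arithmetic, a hypothetical cluster that
will never exceed 40 drives works out to 40 times 100 = 4000, and the next
power of two above 4000 is 2^12 = 4096:
.. code-block:: yaml

   # Hypothetical sizing: at most 40 drives, ever.
   # 40 drives x 100 = 4000; the next power of two is 2^12 = 4096,
   # so the partition power is 12.
   global_overrides:
     swift:
       part_power: 12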
``weight``
The default weight is 100. If the drives are different sizes, set
the weight value to avoid uneven distribution of data. For
example, a 1 TB disk would have a weight of 100, while a 2 TB
drive would have a weight of 200.
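A minimal sketch of mixed drive sizes, with weights proportional to
capacity (the drive names and sizes here are assumptions):
.. code-block:: yaml

   drives:
     - name: sdc     # 1 TB drive
       weight: 100
     - name: sdd     # 2 TB drive
       weight: 200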
``min_part_hours``
The default value is 1. Set the minimum partition hours to the length
of time to lock a partition's replicas after moving the partition.
Moving multiple replicas of the same partition at the same time can
make data inaccessible. This value can be set separately in the
swift, container, account, and policy sections, with the value in the
more granular sections superseding the value in the swift section.
``repl_number``
The default value is 3. Set the replication number to the number
of replicas of each object. This value can be set separately in
the swift, container, account, and policy sections with the value
in the more granular sections superseding the value in the swift
section.
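A hedged sketch of this layering, in which a per-policy value supersedes
the swift-level default (the policy name is hypothetical):
.. code-block:: yaml

   global_overrides:
     swift:
       repl_number: 3            # cluster-wide default
       min_part_hours: 1
       storage_policies:
         - policy:
             name: archive       # hypothetical policy
             index: 1
             repl_number: 2      # supersedes the swift-level value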
``storage_network``
By default, the swift services listen on the default
management IP. Optionally, specify the interface of the storage
network.
If ``storage_network`` is not set but per-host ``storage_ips``
are set (or a ``storage_ip`` is not on the
``storage_network`` interface), the proxy server is unable
to connect to the storage services.
``replication_network``
Optionally, specify a dedicated replication network interface so that
dedicated replication can be set up. If this value is not
specified, no dedicated ``replication_network`` is set.
Replication does not work properly if the ``repl_ip`` is not set on
the ``replication_network`` interface.
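A minimal sketch tying both interfaces to matching per-host addresses
(the bridge names and addresses are assumptions):
.. code-block:: yaml

   global_overrides:
     swift:
       storage_network: 'br-storage'
       replication_network: 'br-repl'
   swift_hosts:
     swift-node1:
       ip: 192.0.2.4
       container_vars:
         swift_vars:
           storage_ip: 198.51.100.4   # must sit on the br-storage interface
           repl_ip: 203.0.113.4       # must sit on the br-repl interface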
``drives``
Set the default drives per host. This is useful when all hosts
have the same drives. These can be overridden on a per host
basis.
``mount_point``
Set the ``mount_point`` value to the location where the swift
drives are mounted. For example, with a mount point of ``/srv/node``
and a drive of ``sdc``, a drive is mounted at ``/srv/node/sdc`` on the
``swift_host``. This can be overridden on a per-host basis.
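A hedged sketch of overriding the default drive list and mount point for
a single host (the host name and devices are hypothetical):
.. code-block:: yaml

   swift_hosts:
     swift-node7:                 # hypothetical host with different disks
       ip: 192.0.2.10
       container_vars:
         swift_vars:
           mount_point: /mnt      # overrides the global /srv/node
           drives:
             - name: vdb
             - name: vdc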
``storage_policies``
Storage policies determine on which hardware data is stored, how
the data is stored across that hardware, and in which region the
data resides. Each storage policy must have a unique ``name``
and a unique ``index``. There must be a storage policy with an
index of 0 in the ``swift.yml`` file to use any legacy containers
created before storage policies were instituted.
``default``
Set the default value to ``yes`` for at least one policy. This is
the default storage policy for any non-legacy containers that are
created.
``deprecated``
Set the deprecated value to ``yes`` to deprecate a storage policy:
existing containers keep using it, but new containers cannot select it.
For account and container rings, ``min_part_hours`` and
``repl_number`` are the only values that can be set. Setting them
in this section overrides the defaults for the specific ring.
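A minimal sketch of overriding those two values for the account and
container rings only:
.. code-block:: yaml

   global_overrides:
     swift:
       account:
         repl_number: 3
         min_part_hours: 2
       container:
         repl_number: 3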
``statsd_host``
Swift supports sending extra metrics to a ``statsd`` host. This option
sets the ``statsd`` host to receive ``statsd`` metrics. Specifying
this here applies to all hosts in the cluster.
If ``statsd_host`` is left blank or omitted, ``statsd`` metrics are
disabled.
All ``statsd`` settings can be overridden, or specified deeper in the
structure, if you want to emit ``statsd`` metrics only from certain hosts.
``statsd_port``
Optionally, use this to specify the port on the ``statsd`` server to
which metrics are sent. Defaults to 8125 if omitted.
``statsd_default_sample_rate`` and ``statsd_sample_rate_factor``
These ``statsd``-related options are more complex and are
used to tune how many samples are sent to ``statsd``. Omit them unless
you need to tweak these settings; if so, first read the swift
administrator guide:
http://docs.openstack.org/developer/swift/admin_guide.html
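Pulling the ``statsd`` options together, a hedged sketch that assumes a
reachable collector at ``statsd.example.com``:
.. code-block:: yaml

   global_overrides:
     swift:
       statsd_host: statsd.example.com   # omit to disable statsd metrics
       statsd_port: 8125                 # default if omitted
       statsd_default_sample_rate: 1.0   # leave at defaults unless tuning
       statsd_sample_rate_factor: 1.0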
#. Update the swift proxy hosts values:
.. code-block:: yaml
# swift-proxy_hosts:
# infra-node1:
# ip: 192.0.2.1
# statsd_metric_prefix: proxy01
# infra-node2:
# ip: 192.0.2.2
# statsd_metric_prefix: proxy02
# infra-node3:
# ip: 192.0.2.3
# statsd_metric_prefix: proxy03
``swift-proxy_hosts``
Set the IP address of the hosts to which Ansible connects
to deploy the ``swift-proxy`` containers. The ``swift-proxy_hosts``
value matches the infra nodes.
``statsd_metric_prefix``
This option is optional, and is only evaluated if you have defined
``statsd_host`` somewhere. It allows you to define a prefix that is added
to all ``statsd`` metrics sent from this host. If omitted, the node name
is used.
#. Update the swift hosts values:
.. code-block:: yaml
# swift_hosts:
# swift-node1:
# ip: 192.0.2.4
# container_vars:
# swift_vars:
# zone: 0
# statsd_metric_prefix: node1
# swift-node2:
# ip: 192.0.2.5
# container_vars:
# swift_vars:
# zone: 1
# statsd_metric_prefix: node2
# swift-node3:
# ip: 192.0.2.6
# container_vars:
# swift_vars:
# zone: 2
# statsd_metric_prefix: node3
# swift-node4:
# ip: 192.0.2.7
# container_vars:
# swift_vars:
# zone: 3
# swift-node5:
# ip: 192.0.2.8
# container_vars:
# swift_vars:
# storage_ip: 198.51.100.8
# repl_ip: 203.0.113.8
# zone: 4
# region: 3
# weight: 200
# groups:
# - account
# - container
# - silver
# drives:
# - name: sdb
# storage_ip: 198.51.100.9
# repl_ip: 203.0.113.9
# weight: 75
# groups:
# - gold
# - name: sdc
# - name: sdd
# - name: sde
# - name: sdf
``swift_hosts``
Specify the hosts to be used as the storage nodes. The ``ip`` is
the address of the host to which Ansible connects. Set the name
and IP address of each swift host. The ``swift_hosts``
section is not required.
``swift_vars``
Contains the swift host specific values.
``storage_ip`` and ``repl_ip``
Base these values on the IP addresses of the host's
``storage_network`` or ``replication_network``. For example, if
the ``storage_network`` is ``br-storage`` and host1 has an IP
address of 1.1.1.1 on ``br-storage``, then this is the IP address
in use for ``storage_ip``. If only the ``storage_ip``
is specified, then the ``repl_ip`` defaults to the ``storage_ip``.
If neither is specified, both default to the host IP
address.
Overriding these values on a host or drive basis can cause
problems if the IP address that the service listens on is based
on a specified ``storage_network`` or ``replication_network`` and
the ring is set to a different IP address.
``zone``
The default is 0. Optionally, set the swift zone for the
ring.
``region``
Optionally, set the swift region for the ring.
``weight``
The default weight is 100. If the drives are different sizes, set
the weight value to avoid uneven distribution of data. This value
can be specified on a host or drive basis (if specified at both,
the drive setting takes precedence).
``groups``
Set the groups to list the rings to which a host's drive belongs.
This can be set on a per drive basis which overrides the host
setting.
``drives``
Set the names of the drives on the swift host. Specify at least
one name.
``statsd_metric_prefix``
This option is optional, and is only evaluated if ``statsd_host`` is
defined somewhere. It allows you to define a prefix that is added to
all ``statsd`` metrics sent from this host. If omitted, the node name
is used.
In the following example, ``swift-node5`` shows values in the
``swift_hosts`` section that override the global values. Groups
are set, which overrides the global settings for drive ``sdb``. The
weight is overridden for the host and specifically adjusted on drive
``sdb``. Also, the ``storage_ip`` and ``repl_ip`` are set differently
for ``sdb``.
.. code-block:: yaml
# swift-node5:
# ip: 192.0.2.8
# container_vars:
# swift_vars:
# storage_ip: 198.51.100.8
# repl_ip: 203.0.113.8
# zone: 4
# region: 3
# weight: 200
# groups:
# - account
# - container
# - silver
# drives:
# - name: sdb
# storage_ip: 198.51.100.9
# repl_ip: 203.0.113.9
# weight: 75
# groups:
# - gold
# - name: sdc
# - name: sdd
# - name: sde
# - name: sdf
#. Ensure the ``swift.yml`` is in the ``/etc/openstack_deploy/conf.d/``
folder.


@@ -0,0 +1,101 @@
`Home <index.html>`_ OpenStack-Ansible Swift
Storage devices
===============
This section offers a set of prerequisite instructions for setting up
Object Storage (swift) storage devices. The storage devices must be set up
before installing swift.
**Procedure 5.1. Configuring and mounting storage devices**
A minimum of three swift hosts with five storage disks each is
recommended. The example commands in this procedure
use the storage devices ``sdc`` through ``sdg``.
#. Determine the storage devices on the node to be used for swift.
#. Format each device on the node used for storage with XFS. While
formatting the devices, add a unique label for each device.
Without labels, a failed drive causes mount points to shift and
data to become inaccessible.
For example, create the file systems on the devices using the
``mkfs`` command:
.. code-block:: shell-session
# apt-get install xfsprogs
# mkfs.xfs -f -i size=1024 -L sdc /dev/sdc
# mkfs.xfs -f -i size=1024 -L sdd /dev/sdd
# mkfs.xfs -f -i size=1024 -L sde /dev/sde
# mkfs.xfs -f -i size=1024 -L sdf /dev/sdf
# mkfs.xfs -f -i size=1024 -L sdg /dev/sdg
#. Add the mount locations to the ``fstab`` file so that the storage
devices are remounted on boot. The following example mount options
are recommended when using XFS:
.. code-block:: shell-session
LABEL=sdc /srv/node/sdc xfs noatime,nodiratime,nobarrier,logbufs=8,noauto 0 0
LABEL=sdd /srv/node/sdd xfs noatime,nodiratime,nobarrier,logbufs=8,noauto 0 0
LABEL=sde /srv/node/sde xfs noatime,nodiratime,nobarrier,logbufs=8,noauto 0 0
LABEL=sdf /srv/node/sdf xfs noatime,nodiratime,nobarrier,logbufs=8,noauto 0 0
LABEL=sdg /srv/node/sdg xfs noatime,nodiratime,nobarrier,logbufs=8,noauto 0 0
#. Create the mount points for the devices using the ``mkdir`` command:
.. code-block:: shell-session
# mkdir -p /srv/node/sdc
# mkdir -p /srv/node/sdd
# mkdir -p /srv/node/sde
# mkdir -p /srv/node/sdf
# mkdir -p /srv/node/sdg
The mount point is referenced as the ``mount_point`` parameter in
the ``swift.yml`` file (``/etc/openstack_deploy/conf.d/swift.yml``).
Mount the storage devices:
.. code-block:: shell-session
# mount /srv/node/sdc
# mount /srv/node/sdd
# mount /srv/node/sde
# mount /srv/node/sdf
# mount /srv/node/sdg
To view an annotated example of the ``swift.yml`` file, see `Appendix A,
*OSA configuration files* <app-configfiles.html>`_.
For the following mounted devices:
+--------------------------------------+--------------------------------------+
| Device | Mount location |
+======================================+======================================+
| /dev/sdc | /srv/node/sdc |
+--------------------------------------+--------------------------------------+
| /dev/sdd | /srv/node/sdd |
+--------------------------------------+--------------------------------------+
| /dev/sde | /srv/node/sde |
+--------------------------------------+--------------------------------------+
| /dev/sdf | /srv/node/sdf |
+--------------------------------------+--------------------------------------+
| /dev/sdg | /srv/node/sdg |
+--------------------------------------+--------------------------------------+
Table: Table 5.1. Mounted devices
The entry in the ``swift.yml``:
.. code-block:: yaml
# drives:
# - name: sdc
# - name: sdd
# - name: sde
# - name: sdf
# - name: sdg
# mount_point: /srv/node


@@ -0,0 +1,66 @@
`Home <index.html>`_ OpenStack-Ansible Swift
Integrate with the Image Service (glance)
=========================================
As an option, you can create images in Image Service (glance) and
store them using Object Storage (swift).
If there is an existing glance backend (for example,
cloud files) but you want to use swift as the glance backend instead,
you can re-add any images from glance after moving
to swift. Images are no longer available if the glance variables
change when you begin using swift.
**Procedure 5.3. Integrating Object Storage with Image Service**
This procedure requires the following:
- OSA Kilo (v11)
- Object Storage v2.2.0
#. Update the glance options in the
``/etc/openstack_deploy/user_variables.yml`` file:
.. code-block:: yaml
# Glance Options
glance_default_store: swift
glance_swift_store_auth_address: '{{ auth_identity_uri }}'
glance_swift_store_container: glance_images
glance_swift_store_endpoint_type: internalURL
glance_swift_store_key: '{{ glance_service_password }}'
glance_swift_store_region: RegionOne
glance_swift_store_user: 'service:glance'
- ``glance_default_store``: Set the default store to ``swift``.
- ``glance_swift_store_auth_address``: Set to the local
authentication address using the
``'{{ auth_identity_uri }}'`` variable.
- ``glance_swift_store_container``: Set the container name.
- ``glance_swift_store_endpoint_type``: Set the endpoint type to
``internalURL``.
- ``glance_swift_store_key``: Set the glance password using
the ``{{ glance_service_password }}`` variable.
- ``glance_swift_store_region``: Set the region. The default value
is ``RegionOne``.
- ``glance_swift_store_user``: Set the tenant and user name to
``'service:glance'``.
#. Rerun the glance configuration plays.
#. Run the glance playbook:
.. code-block:: shell-session
# cd /opt/openstack-ansible/playbooks
# openstack-ansible os-glance-install.yml --tags "glance-config"


@@ -0,0 +1,48 @@
`Home <index.html>`_ OpenStack-Ansible Swift
Storage policies
================
Storage policies allow segmenting the cluster for various purposes
through the creation of multiple object rings. Using policies, different
devices can belong to different rings with varying levels of
replication. By supporting multiple object rings, swift can
segregate the objects within a single cluster.
Use storage policies for the following situations:
- Differing levels of replication: A provider may want to offer 2x
  replication and 3x replication but does not want to maintain two
  separate clusters. They can set up a 2x policy and a 3x policy and
  assign the nodes to their respective rings (see the sketch after
  this list).
- Improving performance: Just as solid state drives (SSD) can be used
as the exclusive members of an account or database ring, an SSD-only
object ring can be created to implement a low-latency or high
performance policy.
- Collecting nodes into groups: Different object rings can have
different physical servers so that objects in specific storage
policies are always placed in a specific data center or geography.
- Differing storage implementations: A policy can be used to direct
traffic to collected nodes that use a different disk file (for
example: Kinetic, GlusterFS).
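A hedged sketch of the 2x/3x arrangement from the first item above, using
the ``storage_policies`` syntax from the configuration section (the policy
names are hypothetical):
.. code-block:: yaml

   storage_policies:
     - policy:
         name: triple       # 3x replication, the default
         index: 0
         default: True
         repl_number: 3
     - policy:
         name: double       # 2x replication, chosen per container
         index: 1
         repl_number: 2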
Most storage clusters do not require more than one storage policy. The
following problems can occur if using multiple storage policies per
cluster:
- Creating a second storage policy without any specified drives (all
drives are part of only the account, container, and default storage
policy groups) creates an empty ring for that storage policy.
- A container uses a non-default storage policy only if the policy is
  specified when the container is created, using the
  ``X-Storage-Policy: <policy-name>`` header.
  After the container is created, it keeps that storage policy.
  Other containers continue using the default or another specified
  storage policy.
For more information about storage policies, see: `Storage
Policies <http://docs.openstack.org/developer/swift/overview_policies.html>`_


@@ -0,0 +1,62 @@
`Home <index.html>`_ OpenStack-Ansible Swift
.. _configure-swift:
Configuring swift
=================
.. toctree::
configure-swift-devices.rst
configure-swift-config.rst
configure-swift-glance.rst
configure-swift-add.rst
configure-swift-policies.rst
Object Storage (swift) is a multi-tenant object storage system. It is
highly scalable, can manage large amounts of unstructured data, and
provides a RESTful HTTP API.
The following procedure describes how to set up storage devices and
modify the Object Storage configuration files to enable swift
usage.
#. `The section called "Configure and mount storage
devices" <configure-swift-devices.html>`_
#. `The section called "Configure an Object Storage
deployment" <configure-swift-config.html>`_
#. Optionally, allow all Identity (keystone) users to use swift by setting
``swift_allow_all_users`` in the ``user_variables.yml`` file to
``True``. Any users with the ``_member_`` role (all authorized
keystone users) can create containers and upload objects
to Object Storage.
If this value is ``False``, then by default, only users with the
``admin`` or ``swiftoperator`` role are allowed to create containers or
manage tenants.
When the backend type for the Image Service (glance) is set to
``swift``, glance can access the swift cluster
regardless of whether this value is ``True`` or ``False``.
Overview
~~~~~~~~
Object Storage (swift) is configured using the
``/etc/openstack_deploy/conf.d/swift.yml`` file and the
``/etc/openstack_deploy/user_variables.yml`` file.
When installing swift, use the group variables in the
``/etc/openstack_deploy/conf.d/swift.yml`` file for the Ansible
playbooks. Some variables cannot
be changed after they are set, while some changes require re-running the
playbooks. The values in the ``swift_hosts`` section supersede values in
the ``swift`` section.
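A minimal sketch of that precedence, with a hypothetical host overriding
one global value:
.. code-block:: yaml

   global_overrides:
     swift:
       mount_point: /srv/node      # global default
   swift_hosts:
     swift-node9:                  # hypothetical host
       ip: 192.0.2.9
       container_vars:
         swift_vars:
           mount_point: /srv/disk  # supersedes the global value here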
To view the configuration files, including information about which
variables are required and which are optional, see `Appendix A, *OSA
configuration files* <app-configfiles.html>`_.


@@ -1,12 +1,31 @@
-os_swift Docs
-=============
+================================
+Swift role for OpenStack-Ansible
+================================
+
+Role for installing OpenStack Swift.
+
+.. toctree::
+   :maxdepth: 2
+
+   configure-swift.rst
-Basic Role Example
-^^^^^^^^^^^^^^^^^^
-
-.. code-block:: yaml
-
-    - role: "os_swift"
-      ROLE_VARS...
 
 Default Variables
 ^^^^^^^^^^^^^^^^^
 
 .. literalinclude:: ../../defaults/main.yml
    :language: yaml
    :start-after: under the License.
+
+Example Playbook
+^^^^^^^^^^^^^^^^
+
+.. literalinclude:: ../../examples/playbook.yml
+   :language: yaml
+
+Tags
+^^^^
+
+This role supports two tags: ``swift-install`` and ``swift-config``.
+
+The ``swift-install`` tag can be used to install and upgrade.
+
+The ``swift-config`` tag can be used to maintain the configuration of the
+service.

examples/playbook.yml (new file)

@@ -0,0 +1,8 @@
- name: Install swift server
hosts: swift_all
user: root
roles:
- { role: "os_swift", tags: [ "os-swift" ] }
vars:
external_lb_vip_address: 172.16.24.1
internal_lb_vip_address: 192.168.0.1


@@ -30,7 +30,7 @@ setenv =
 [testenv:docs]
-commands=
+commands =
     bash -c "rm -rf doc/build"
     doc8 doc
     python setup.py build_sphinx