Update swift content

I updated the swift content in the installation guide as
follows:

1) Consolidated Identity service configuration into the proxy
   service configuration section.
2) Changed initial example architecture to install the proxy
   service on the controller node instead of a separate node.
   However, I used wording that supports separate and/or
   multiple proxy services.
3) The previous steps to configure memcached broke installations
   with horizon because the latter references 127.0.0.1 instead
   of the management network interface IP address. After
   consolidating the proxy service to a single instance on
   the controller node, memcached now references 127.0.0.1.
4) Changed initial example architecture to install two
   storage nodes rather than five to reduce the barrier
   to entry for users who want to try swift. However, I
   used wording that supports additional storage nodes
   and also references the swift documentation for
   larger deployments.
5) Changed the initial example architecture to use only the
   management network instead of a separate storage/replication
   network because the original instructions didn't actually use
   the latter. Fully implementing a separate storage/replication
   network requires a considerably more complex configuration
   and eliminating it reduces the barrier to entry for users
   who want to try swift. However, I used wording that
   mentions a separate storage/replication network and also
   references the swift documentation for larger deployments.
6) The Ubuntu and RHEL/CentOS/Fedora packages include ancient
   upstream/example configuration files that contain
   deprecated/defunct options and lack options that enable
   useful features. Packages take a while to fix, so the
   instructions now pull the upstream configuration files.
7) Removed the steps to create disk labels and partitions on
   the storage nodes because a variety of methods exist and
   users should choose their own method.
8) Changed rsync to run as a standard service rather than use
   the xinetd wrapper on RHEL/CentOS/Fedora.
9) Separated account, container, and object ring creation steps
   into separate sections to improve clarity and reduce typos.
10) Reduced partition power from 18 to 10 during ring creation
    because the example architecture cannot reasonably scale
    to the number of partitions the higher value implies.
11) Changed storage node service management steps on Ubuntu to
    use 'swift-init' rather than manually starting a large
    number of services.
12) Modified file names and IDs to increase consistency with
    other content in the guide.
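The arithmetic behind the partition power change in item 10 can be
sketched as follows (a toy illustration, not part of the patch; the
rule of thumb of roughly 100 partitions per drive at the cluster's
largest expected size is an assumption, not stated here):

```python
# A ring with partition power P contains 2**P partitions. With only
# two storage nodes in the example architecture, power 10 already
# yields far more partitions than drives; power 18 would be wasteful.
for power in (10, 18):
    print(power, 2 ** power)
```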

I recommend backporting to Juno as this chapter wasn't
functional in time for release. Also, I'm planning to
improve the diagrams in a future patch.

Change-Id: I26710efe16e8cb6ed1a20ebc70b3f6d5962536d1
Partial-Bug: #1389382
Implements: blueprint installation-guide-improvements
backport: juno
Matthew Kassawara 2014-11-06 10:51:07 -06:00
parent 34445c310d
commit ff5ef1b66a
14 changed files with 882 additions and 474 deletions

@@ -3148,9 +3148,9 @@
</glossentry>
<glossentry>
<glossterm>extended attributes (xattrs)</glossterm>
<glossterm>extended attributes (xattr)</glossterm>
<indexterm class="singular">
<primary>extended attributes (xattrs)</primary>
<primary>extended attributes (xattr)</primary>
</indexterm>
<glossdef>
@@ -9008,6 +9008,19 @@
<para>An OpenStack-supported hypervisor.</para>
</glossdef>
</glossentry>
<glossentry>
<glossterm>XFS</glossterm>
<indexterm class="singular">
<primary>XFS</primary>
</indexterm>
<glossdef>
<para>High-performance 64-bit file system created by Silicon
Graphics. Excels in parallel I/O operations and data
consistency.</para>
</glossdef>
</glossentry>
</glossdiv>
<!-- .Y. -->

@@ -28,9 +28,8 @@
Compute, at least one networking service, and the dashboard, OpenStack
Object Storage can operate independently of most other services. If your
use case only involves Object Storage, you can skip to
<xref linkend="object-storage-system-requirements"/>. However, the
dashboard will not work without at least OpenStack Image Service and
Compute.</para>
<xref linkend="ch_swift"/>. However, the dashboard will not run without
at least OpenStack Image Service and Compute.</para>
</note>
<note>
<para>You must use an account with administrative privileges to configure

@@ -11,22 +11,19 @@
Service, also known as Keystone.</para>
<xi:include href="../common/section_getstart_object-storage.xml"/>
<xi:include
href="object-storage/section_object-storage-sys-requirements.xml"/>
href="object-storage/section_swift-system-reqs.xml"/>
<xi:include
href="object-storage/section_object-storage-network-planning.xml"/>
href="object-storage/section_swift-example-arch.xml"/>
<xi:include
href="object-storage/section_object-storage-example-install-arch.xml"/>
<xi:include href="object-storage/section_object-storage-install.xml"/>
href="object-storage/section_swift-controller-node.xml"/>
<xi:include
href="object-storage/section_object-storage-install-config-storage-nodes.xml"/>
href="object-storage/section_swift-storage-node.xml"/>
<xi:include
href="object-storage/section_object-storage-install-config-proxy-node.xml"/>
href="object-storage/section_swift-initial-rings.xml"/>
<xi:include
href="object-storage/section_start-storage-node-services.xml"/>
href="object-storage/section_swift-finalize-installation.xml"/>
<xi:include
href="object-storage/section_object-storage-verifying-install.xml"/>
<xi:include
href="object-storage/section_object-storage-adding-proxy-server.xml"/>
href="object-storage/section_swift-verify.xml"/>
<section xml:id="section_swift_next_steps">
<title>Next steps</title>
<para>Your OpenStack environment now includes Object Storage. You can

@@ -1,56 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="example-object-storage-installation-architecture">
<title>Example of Object Storage installation architecture</title>
<itemizedlist>
<listitem>
<para>Node: A host machine that runs one or more OpenStack
Object Storage services.</para>
</listitem>
<listitem>
<para>Proxy node: Runs proxy services.</para>
</listitem>
<listitem>
<para>Storage node: Runs account, container, and object
services. Contains the SQLite databases.</para>
</listitem>
<listitem>
<para>Ring: A set of mappings between OpenStack Object
Storage data to physical devices.</para>
</listitem>
<listitem>
<para>Replica: A copy of an object. By default, three
copies are maintained in the cluster.</para>
</listitem>
<listitem>
<para>Zone: A logically separate section of the cluster,
related to independent failure characteristics.</para>
</listitem>
<listitem>
<para>Region (optional): A logically separate section of
the cluster, representing distinct physical locations
such as cities or countries. Similar to zones but
representing physical locations of portions of the
cluster rather than logical segments.</para>
</listitem>
</itemizedlist>
<para>To increase reliability and performance, you can add
additional proxy servers.</para>
<para>This document describes each storage node as a separate zone
in the ring. At a minimum, five zones are recommended. A zone
is a group of nodes that are as isolated as possible from other
nodes (separate servers, network, power, even geography). The
ring guarantees that every replica is stored in a separate
zone. This diagram shows one possible configuration for a
minimal installation:</para>
<para><inlinemediaobject>
<imageobject>
<imagedata
fileref="../figures/swift_install_arch.png"
/>
</imageobject>
</inlinemediaobject></para>
</section>
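The zone guarantee this removed section describes (every replica in a
separate zone) can be illustrated with a toy placement; this is NOT
Swift's actual ring algorithm, just a sketch of the property:

```python
# Toy sketch: with three replicas and five zones, each copy of a
# partition can land in a distinct zone. Swift's real ring uses
# consistent hashing with weighted devices, not this round-robin.
zones = ["z1", "z2", "z3", "z4", "z5"]
replicas = 3
partition = 42
placement = [zones[(partition + r) % len(zones)] for r in range(replicas)]
print(placement)  # ['z3', 'z4', 'z5'] -- three distinct zones
```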

@@ -1,220 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="installing-and-configuring-the-proxy-node">
<title>Install and configure the proxy node</title>
<para>The proxy server takes each request and looks up locations
for the account, container, or object and routes the requests
correctly. The proxy server also handles API requests. You
enable account management by configuring it in the
<filename>/etc/swift/proxy-server.conf</filename> file.</para>
<note>
<para>The Object Storage processes run under a separate user
and group, set by configuration options, and referred to as
<literal>swift:swift</literal>. The default
user is <literal>swift</literal>.</para>
</note>
<procedure>
<step>
<para>Install swift-proxy service:</para>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>apt-get install swift swift-proxy memcached python-keystoneclient python-swiftclient python-webob</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>yum install openstack-swift-proxy memcached python-swiftclient python-keystone-auth-token</userinput></screen>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>zypper install openstack-swift-proxy memcached python-swiftclient python-keystoneclient python-xml</userinput></screen>
</step>
<step os="ubuntu;debian">
<para>Modify memcached to listen on the default interface
on a local, non-public network. Edit this line in
the <filename>/etc/memcached.conf</filename> file:</para>
<programlisting>-l 127.0.0.1</programlisting>
<para>Change it to:</para>
<programlisting>-l <replaceable>PROXY_LOCAL_NET_IP</replaceable></programlisting>
</step>
<step os="rhel;centos;fedora;opensuse;sles">
<para>Modify memcached to listen on the default interface
on a local, non-public network. Edit
the <filename>/etc/sysconfig/memcached</filename> file:</para>
<programlisting os="rhel;centos;fedora">OPTIONS="-l <replaceable>PROXY_LOCAL_NET_IP</replaceable>"</programlisting>
<programlisting os="opensuse;sles">MEMCACHED_PARAMS="-l <replaceable>PROXY_LOCAL_NET_IP</replaceable>"</programlisting>
</step>
<step os="ubuntu;debian">
<para>Restart the memcached service:</para>
<screen><prompt>#</prompt> <userinput>service memcached restart</userinput></screen>
</step>
<step os="rhel;centos;fedora">
<para>Start the memcached service and configure it to start when
the system boots:</para>
<screen><prompt>#</prompt> <userinput>systemctl enable memcached.service</userinput>
<prompt>#</prompt> <userinput>systemctl start memcached.service</userinput></screen>
</step>
<step os="opensuse;sles">
<para>Start the memcached service and configure it to start when
the system boots:</para>
<para os="sles">On SLES:</para>
<screen os="sles"><prompt>#</prompt> <userinput>service memcached start</userinput>
<prompt>#</prompt> <userinput>chkconfig memcached on</userinput></screen>
<para os="opensuse">On openSUSE:</para>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl enable memcached.service</userinput>
<prompt>#</prompt> <userinput>systemctl start memcached.service</userinput></screen>
</step>
<step>
<para><phrase os="ubuntu;debian">Create</phrase>
<phrase os="rhel;centos;fedora;opensuse;sles">Edit</phrase>
<filename>/etc/swift/proxy-server.conf</filename>:</para>
<programlisting language="ini">[DEFAULT]
bind_port = 8080
user = swift
[pipeline:main]
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache authtoken keystoneauth proxy-logging proxy-server
[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true
account_autocreate = true
[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = _member_,admin,swiftoperator
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
# Delaying the auth decision is required to support token-less
# usage for anonymous referrers ('.r:*').
delay_auth_decision = true
# auth_* settings refer to the Keystone server
auth_protocol = http
auth_host = <replaceable>controller</replaceable>
auth_uri = http://controller:5000
# the service tenant and swift username and password created in Keystone
admin_tenant_name = service
admin_user = swift
admin_password = <replaceable>SWIFT_PASS</replaceable>
[filter:healthcheck]
use = egg:swift#healthcheck
[filter:cache]
use = egg:swift#memcache
set log_name = cache
[filter:catch_errors]
use = egg:swift#catch_errors
[filter:gatekeeper]
use = egg:swift#gatekeeper
[filter:proxy-logging]
use = egg:swift#proxy_logging
</programlisting>
<note>
<para>If you run multiple memcache servers, put the
multiple IP:port listings in the [filter:cache]
section of the
<filename>/etc/swift/proxy-server.conf</filename> file:</para>
<literallayout class="monospaced">10.1.2.3:11211,10.1.2.4:11211</literallayout>
<para>Only the proxy server uses memcache.</para>
</note>
<warning>
<para><literal>keystoneclient.middleware.auth_token</literal>: You
must configure <literal>auth_uri</literal> to point to the public
identity endpoint. Otherwise, clients might not be able to
authenticate against an admin endpoint.
</para>
</warning>
</step>
<step>
<para>Create the account, container, and object rings. The
builder command creates a builder file
with a few parameters. The value 18 is the
"partition power": the ring contains 2^18
partitions. Set this value based on the total
amount of storage you expect your entire
ring to use. The value 3 is the number of
replicas of each object, and the last value
is the minimum number of hours before a
partition can be moved more than once.</para>
<screen><prompt>#</prompt> <userinput>cd /etc/swift</userinput>
<prompt>#</prompt> <userinput>swift-ring-builder account.builder create 18 3 1</userinput>
<prompt>#</prompt> <userinput>swift-ring-builder container.builder create 18 3 1</userinput>
<prompt>#</prompt> <userinput>swift-ring-builder object.builder create 18 3 1</userinput></screen>
</step>
<step>
<para>For every storage device on each node add entries to
each ring:</para>
<screen><prompt>#</prompt> <userinput>swift-ring-builder account.builder add z<replaceable>ZONE</replaceable>-<replaceable>STORAGE_LOCAL_NET_IP</replaceable>:6002[R<replaceable>STORAGE_REPLICATION_NET_IP</replaceable>:6005]/<replaceable>DEVICE</replaceable> 100</userinput>
<prompt>#</prompt> <userinput>swift-ring-builder container.builder add z<replaceable>ZONE</replaceable>-<replaceable>STORAGE_LOCAL_NET_IP_1</replaceable>:6001[R<replaceable>STORAGE_REPLICATION_NET_IP</replaceable>:6004]/<replaceable>DEVICE</replaceable> 100</userinput>
<prompt>#</prompt> <userinput>swift-ring-builder object.builder add z<replaceable>ZONE</replaceable>-<replaceable>STORAGE_LOCAL_NET_IP_1</replaceable>:6000[R<replaceable>STORAGE_REPLICATION_NET_IP</replaceable>:6003]/<replaceable>DEVICE</replaceable> 100</userinput></screen>
<note>
<para>You must omit the optional <parameter>STORAGE_REPLICATION_NET_IP</parameter> parameter if you
do not want to use a dedicated network for
replication.</para>
</note>
<para>For example, suppose a storage node
has a partition in Zone 1 on IP 10.0.0.1 and
address 10.0.1.1 on the replication network. If the mount point of
this partition is <filename>/srv/node/sdb1</filename> and the
path in <filename>/etc/rsyncd.conf</filename> is
<filename>/srv/node/</filename>, then DEVICE is sdb1 and
the commands are:</para>
<screen><prompt>#</prompt> <userinput>swift-ring-builder account.builder add z1-10.0.0.1:6002R10.0.1.1:6005/sdb1 100</userinput>
<prompt>#</prompt> <userinput>swift-ring-builder container.builder add z1-10.0.0.1:6001R10.0.1.1:6004/sdb1 100</userinput>
<prompt>#</prompt> <userinput>swift-ring-builder object.builder add z1-10.0.0.1:6000R10.0.1.1:6003/sdb1 100</userinput></screen>
<note>
<para>If you assume five zones with one node for each
zone, start ZONE at 1. For each additional node,
increment ZONE by 1.</para>
</note>
</step>
<step>
<para>Verify the ring contents for each ring:</para>
<screen><prompt>#</prompt> <userinput>swift-ring-builder account.builder</userinput>
<prompt>#</prompt> <userinput>swift-ring-builder container.builder</userinput>
<prompt>#</prompt> <userinput>swift-ring-builder object.builder</userinput></screen>
</step>
<step>
<para>Rebalance the rings:</para>
<screen><prompt>#</prompt> <userinput>swift-ring-builder account.builder rebalance</userinput>
<prompt>#</prompt> <userinput>swift-ring-builder container.builder rebalance</userinput>
<prompt>#</prompt> <userinput>swift-ring-builder object.builder rebalance</userinput></screen>
<note>
<para>Rebalancing rings can take some time.</para>
</note>
</step>
<step>
<para>Copy the <filename>account.ring.gz</filename>,
<filename>container.ring.gz</filename>, and
<filename>object.ring.gz</filename> files to each
of the Proxy and Storage nodes in <filename>/etc/swift</filename>.</para>
</step>
<step>
<para>Make sure the swift user owns all configuration files:</para>
<screen><prompt>#</prompt> <userinput>chown -R swift:swift /etc/swift</userinput></screen>
</step>
<step os="ubuntu;debian">
<para>Restart the Proxy service:</para>
<screen><prompt>#</prompt> <userinput>service swift-proxy restart</userinput></screen>
</step>
<step os="rhel;centos;fedora">
<para>Start the Proxy service and configure it to start when the
system boots:</para>
<screen><prompt>#</prompt> <userinput>systemctl enable openstack-swift-proxy.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-swift-proxy.service</userinput></screen>
</step>
<step os="sles;opensuse">
<para>Start the Proxy service and configure it to start when the
system boots:</para>
<para os="sles">On SLES:</para>
<screen os="sles"><prompt>#</prompt> <userinput>service openstack-swift-proxy start</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-swift-proxy on</userinput></screen>
<para os="opensuse">On openSUSE:</para>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl enable openstack-swift-proxy.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-swift-proxy.service</userinput></screen>
</step>
</procedure>
</section>

@@ -1,138 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="installing-and-configuring-storage-nodes">
<title>Install and configure storage nodes</title>
<note>
<para>Object Storage works on any file system that supports
Extended Attributes (XATTRS). XFS shows the best overall
performance for the swift use case after considerable
testing and benchmarking at Rackspace. It is also the only
file system that has been thoroughly tested. See the <link
xlink:href="http://docs.openstack.org/trunk/config-reference/content/"
><citetitle>OpenStack Configuration
Reference</citetitle></link> for additional
recommendations.</para>
</note>
<procedure>
<step>
<para>Install storage node packages:</para>
<para>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>apt-get install swift swift-account swift-container swift-object xfsprogs</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>yum install openstack-swift-account openstack-swift-container \
openstack-swift-object xfsprogs xinetd</userinput></screen>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>zypper install openstack-swift-account openstack-swift-container \
openstack-swift-object python-xml xfsprogs xinetd</userinput></screen></para>
</step>
<step>
<para>For each device on the node that you want to use for
storage, set up the XFS volume
(<literal>/dev/sdb</literal> is used as an
example). Use a single partition per drive. For
example, in a server with 12 disks, you might use one or
two disks for the operating system, which should not be
touched in this step. The other 10 or 11 disks should
be partitioned with a single partition, then formatted
in XFS.</para>
<screen><prompt>#</prompt> <userinput>fdisk /dev/sdb</userinput>
<prompt>#</prompt> <userinput>mkfs.xfs /dev/sdb1</userinput>
<prompt>#</prompt> <userinput>echo "/dev/sdb1 /srv/node/sdb1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 0" &gt;&gt; /etc/fstab</userinput>
<prompt>#</prompt> <userinput>mkdir -p /srv/node/sdb1</userinput>
<prompt>#</prompt> <userinput>mount /srv/node/sdb1</userinput>
<prompt>#</prompt> <userinput>chown -R swift:swift /srv/node</userinput></screen>
</step>
<step>
<para os="ubuntu;debian;rhel;centos;fedora">Create
<filename>/etc/rsyncd.conf</filename>:</para>
<para os="sles;opensuse">Replace the content of
<filename>/etc/rsyncd.conf</filename> with:</para>
<programlisting language="ini">uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = <replaceable>STORAGE_LOCAL_NET_IP</replaceable>
[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock
[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock
[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock</programlisting>
</step>
<step>
<para>(Optional) To separate rsync and
replication traffic onto a dedicated replication network, set
<literal>STORAGE_REPLICATION_NET_IP</literal>
instead of
<literal>STORAGE_LOCAL_NET_IP</literal>:</para>
<programlisting language="ini">address = <replaceable>STORAGE_REPLICATION_NET_IP</replaceable></programlisting>
</step>
<step os="ubuntu;debian">
<para>Edit the following line in
<filename>/etc/default/rsync</filename>:</para>
<programlisting language="ini">RSYNC_ENABLE=true</programlisting>
</step>
<step os="rhel;centos;fedora;sles;opensuse">
<para>Edit the following line in
<filename>/etc/xinetd.d/rsync</filename>:</para>
<programlisting language="ini">disable = no</programlisting>
</step>
<step>
<para os="ubuntu;debian">Start the <systemitem
class="service">rsync</systemitem> service:</para>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>service rsync start</userinput></screen>
<para os="rhel;centos;fedora">Start the <systemitem
class="service">xinetd</systemitem> service:</para>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>systemctl start xinetd.service</userinput></screen>
<para os="sles;opensuse">Start the <systemitem
class="service">xinetd</systemitem> service and configure it to
start when the system boots:</para>
<para os="sles">On SLES:</para>
<screen os="sles"><prompt>#</prompt> <userinput>service xinetd start</userinput>
<prompt>#</prompt> <userinput>chkconfig xinetd on</userinput></screen>
<para os="opensuse">On openSUSE:</para>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl enable xinetd.service</userinput>
<prompt>#</prompt> <userinput>systemctl start xinetd.service</userinput></screen>
<note>
<para>The rsync service requires no authentication, so
run it on a local, private network.</para>
</note>
</step>
<step>
<para>Set the <option>bind_ip</option> configuration option to
<replaceable>STORAGE_LOCAL_NET_IP</replaceable>.</para>
<para>Edit the <filename>account-server.conf</filename>,
<filename>container-server.conf</filename> and
<filename>object-server.conf</filename> files in the
<filename>/etc/swift</filename> directory. In each file,
update the <literal>[DEFAULT]</literal> section as follows:
<programlisting language="ini">[DEFAULT]
bind_ip = <replaceable>STORAGE_LOCAL_NET_IP</replaceable>
...</programlisting>
</para>
</step>
<step>
<para>Make sure the swift user owns all configuration files:</para>
<screen><prompt>#</prompt> <userinput>chown -R swift:swift /etc/swift</userinput></screen>
</step>
<step>
<para>Create the swift recon cache directory and set its
permissions:</para>
<screen><prompt>#</prompt> <userinput>mkdir -p /var/swift/recon</userinput>
<prompt>#</prompt> <userinput>chown -R swift:swift /var/swift/recon</userinput></screen>
</step>
</procedure>
</section>

@@ -1,43 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="verify-object-storage-installation">
<title>Verify the installation</title>
<para>You can run these commands from the proxy server or any
server that has access to the Identity Service.</para>
<procedure>
<step>
<para>Make sure that your credentials are set up correctly in the
<filename>admin-openrc.sh</filename> file and source it:</para>
<screen><prompt>$</prompt> <userinput>source admin-openrc.sh</userinput></screen>
</step>
<step><para>Run the following <command>swift</command> command:</para>
<screen><prompt>$</prompt> <userinput>swift stat</userinput>
<computeroutput>Account: AUTH_11b9758b7049476d9b48f7a91ea11493
Containers: 0
Objects: 0
Bytes: 0
Content-Type: text/plain; charset=utf-8
X-Timestamp: 1381434243.83760
X-Trans-Id: txdcdd594565214fb4a2d33-0052570383
X-Put-Timestamp: 1381434243.83760</computeroutput></screen>
</step>
<step>
<para>Run the following <command>swift</command> commands to upload
files to a container. Create the <filename>test.txt</filename> and
<filename>test2.txt</filename> test files locally if needed.</para>
<screen><prompt>$</prompt> <userinput>swift upload myfiles test.txt</userinput>
<prompt>$</prompt> <userinput>swift upload myfiles test2.txt</userinput></screen>
</step>
<step>
<para>Run the following <command>swift</command> command to
download all files from the <literal>myfiles</literal>
container:</para>
<screen><prompt>$</prompt> <userinput>swift download myfiles</userinput>
<computeroutput>test2.txt [headers 0.267s, total 0.267s, 0.000s MB/s]
test.txt [headers 0.271s, total 0.271s, 0.000s MB/s]</computeroutput></screen>
</step>
</procedure>
</section>

@@ -0,0 +1,195 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="swift-install-controller-node">
<title>Install and configure the controller node</title>
<para>This section describes how to install and configure the proxy
service that handles requests for the account, container, and object
services that reside on the storage nodes. For simplicity, this
guide installs and configures the proxy service on the controller node.
However, you can run the proxy service on any node with network
connectivity to the storage nodes. Additionally, you can install and
configure the proxy service on multiple nodes to increase performance
and redundancy. For more information, see the
<link xlink:href="http://docs.openstack.org/developer/swift/deployment_guide.html"
>Deployment Guide</link>.</para>
<procedure>
<title>To configure prerequisites</title>
<para>The proxy service relies on an authentication and authorization
mechanism such as the Identity service. However, unlike other services,
it also offers an internal mechanism that allows it to operate without
any other OpenStack services. However, for simplicity, this guide
references the Identity service in <xref linkend="ch_keystone"/>. Before
you configure the Object Storage service, you must create Identity
service credentials including endpoints.</para>
<note>
<para>The Object Storage service does not use a SQL database on
the controller node.</para>
</note>
<step>
<para>To create the Identity service credentials, complete these
steps:</para>
<substeps>
<step>
<para>Create a <literal>swift</literal> user:</para>
<screen><prompt>$</prompt> <userinput>keystone user-create --name swift --pass <replaceable>SWIFT_PASS</replaceable></userinput>
<computeroutput>+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| email | |
| enabled | True |
| id | d535e5cbd2b74ac7bfb97db9cced3ed6 |
| name | swift |
| username | swift |
+----------+----------------------------------+</computeroutput></screen>
<para>Replace <replaceable>SWIFT_PASS</replaceable> with a suitable
password.</para>
</step>
<step>
<para>Link the <literal>swift</literal> user to the
<literal>service</literal> tenant and <literal>admin</literal>
role:</para>
<screen><prompt>$</prompt> <userinput>keystone user-role-add --user swift --tenant service --role admin</userinput></screen>
<note>
<para>This command provides no output.</para>
</note>
</step>
<step>
<para>Create the <literal>swift</literal> service:</para>
<screen><prompt>$</prompt> <userinput>keystone service-create --name swift --type object-store \
--description "OpenStack Object Storage"</userinput>
<computeroutput>+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | OpenStack Object Storage |
| enabled | True |
| id | 75ef509da2c340499d454ae96a2c5c34 |
| name | swift |
| type | object-store |
+-------------+----------------------------------+</computeroutput></screen>
</step>
</substeps>
</step>
<step>
<para>Create the Identity service endpoints:</para>
<screen><prompt>$</prompt> <userinput>keystone endpoint-create \
--service-id $(keystone service-list | awk '/ object-store / {print $2}') \
--publicurl 'http://<replaceable>controller</replaceable>:8080/v1/AUTH_%(tenant_id)s' \
--internalurl 'http://<replaceable>controller</replaceable>:8080/v1/AUTH_%(tenant_id)s' \
--adminurl http://<replaceable>controller</replaceable>:8080 \
--region regionOne</userinput>
<computeroutput>+-------------+---------------------------------------------------+
| Property | Value |
+-------------+---------------------------------------------------+
| adminurl | http://controller:8080/ |
| id | af534fb8b7ff40a6acf725437c586ebe |
| internalurl | http://controller:8080/v1/AUTH_%(tenant_id)s |
| publicurl | http://controller:8080/v1/AUTH_%(tenant_id)s |
| region | regionOne |
| service_id | 75ef509da2c340499d454ae96a2c5c34 |
+-------------+---------------------------------------------------+</computeroutput></screen>
</step>
</procedure>
<procedure>
<title>To install and configure the controller node components</title>
<step>
<para>Install the packages:</para>
<note>
<para>Complete OpenStack environments already include some of these
packages.</para>
</note>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>apt-get install swift swift-proxy python-swiftclient python-keystoneclient memcached</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>yum install openstack-swift-proxy python-swiftclient python-keystone-auth-token memcached</userinput></screen>
<screen os="sles;opensuse"><prompt>#</prompt> <userinput>zypper install openstack-swift-proxy python-swiftclient python-keystoneclient memcached python-xml</userinput></screen>
</step>
<step os="ubuntu;debian">
<para>Create the <literal>/etc/swift</literal> directory.</para>
</step>
<step os="ubuntu;debian;rhel;centos;fedora">
<para>Obtain the proxy service configuration file from the Object
Storage source repository:</para>
<screen><prompt>#</prompt> <userinput>curl -o /etc/swift/proxy-server.conf \
https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/proxy-server.conf-sample</userinput></screen>
</step>
<step>
<para>Edit the <filename>/etc/swift/proxy-server.conf</filename>
file and complete the following actions:</para>
<substeps>
<step>
<para>In the <literal>[DEFAULT]</literal> section, configure
the bind port, user, and configuration directory:</para>
<programlisting language="ini">[DEFAULT]
...
bind_port = 8080
user = swift
swift_dir = /etc/swift</programlisting>
</step>
<step>
<para>In the <literal>[pipeline]</literal> section, enable
the appropriate modules:</para>
<programlisting language="ini">[pipeline]
pipeline = authtoken cache healthcheck keystoneauth proxy-logging proxy-server</programlisting>
<note>
<para>For more information on other modules that enable
additional features, see the
<link xlink:href="http://docs.openstack.org/developer/swift/deployment_guide.html"
>Deployment Guide</link>.</para>
</note>
</step>
<step>
<para>In the <literal>[app:proxy-server]</literal> section, enable
account management:</para>
<programlisting language="ini">[app:proxy-server]
...
allow_account_management = true
account_autocreate = true</programlisting>
</step>
<step>
<para>In the <literal>[filter:keystoneauth]</literal> section,
configure the operator roles:</para>
<programlisting language="ini">[filter:keystoneauth]
use = egg:swift#keystoneauth
...
operator_roles = admin,_member_</programlisting>
<note os="ubuntu;debian;rhel;centos;fedora">
<para>You might need to uncomment this section.</para>
</note>
</step>
<step>
<para>In the <literal>[filter:authtoken]</literal> section,
configure Identity service access:</para>
<programlisting language="ini">[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
...
auth_uri = http://<replaceable>controller</replaceable>:5000/v2.0
identity_uri = http://<replaceable>controller</replaceable>:35357
admin_tenant_name = service
admin_user = swift
admin_password = <replaceable>SWIFT_PASS</replaceable>
delay_auth_decision = true</programlisting>
<para>Replace <replaceable>SWIFT_PASS</replaceable> with the
password you chose for the <literal>swift</literal> user in the
Identity service.</para>
<note os="ubuntu;debian;rhel;centos;fedora">
<para>You might need to uncomment this section.</para>
</note>
<note>
<para>Comment out any <literal>auth_host</literal>,
<literal>auth_port</literal>, and
<literal>auth_protocol</literal> options because the
<literal>identity_uri</literal> option replaces them.</para>
</note>
</step>
<step>
<para>In the <literal>[filter:cache]</literal> section, configure
the <application>memcached</application> location:</para>
<programlisting language="ini">[filter:cache]
...
memcache_servers = 127.0.0.1:11211</programlisting>
</step>
</substeps>
</step>
</procedure>
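The pipeline option chains WSGI middleware so that each request passes through the filters in the configured order before reaching the proxy application. The following minimal Python sketch illustrates only the composition idea; it is not Swift's implementation, and the filter names are placeholders:

```python
# Illustrative sketch of how a paste-style pipeline composes middleware.
# Each factory wraps the next application in line; requests traverse the
# chain left to right, like the filters in proxy-server.conf.

def make_filter(name, trace):
    def factory(app):
        def middleware(request):
            trace.append(name)          # record traversal order
            return app(request)
        return middleware
    return factory

def proxy_app(request):
    return "response to %s" % request

trace = []
pipeline = ["authtoken", "cache", "healthcheck"]   # simplified pipeline
app = proxy_app
for name in reversed(pipeline):                    # wrap right to left
    app = make_filter(name, trace)(app)

result = app("GET /v1/AUTH_test")
```

Reordering pipeline entries changes the traversal order, which is why, for example, <literal>authtoken</literal> must precede <literal>keystoneauth</literal>.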
</section>

<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="swift-example-arch">
<title>Example architecture</title>
<para>In a production environment, the Object Storage service requires
at least two proxy nodes and five storage nodes. For simplicity, this
guide uses a minimal architecture with the proxy service running on
the existing OpenStack controller node and two storage nodes. However,
these concepts still apply.</para>
<itemizedlist>
<listitem>
<para>Node: A host machine that runs one or more OpenStack
Object Storage services.</para>
</listitem>
<listitem>
<para>Proxy node: Runs proxy services.</para>
</listitem>
<listitem>
<para>Storage node: Runs account, container, and object
services. Contains the SQLite databases.</para>
</listitem>
<listitem>
<para>Ring: A set of mappings from OpenStack Object
Storage data to physical devices.</para>
</listitem>
<listitem>
<para>Replica: A copy of an object. By default, three
copies are maintained in the cluster.</para>
</listitem>
<listitem>
<para>Zone (optional): A logically separate section of the cluster,
related to independent failure characteristics.</para>
</listitem>
<listitem>
<para>Region (optional): A logically separate section of
the cluster, representing distinct physical locations
such as cities or countries. Similar to zones but
representing physical locations of portions of the
cluster rather than logical segments.</para>
</listitem>
</itemizedlist>
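Replicas, zones, and regions work together: the ring attempts to place each copy of an object in a distinct failure domain so that a single outage cannot affect all copies. A toy Python sketch of that placement idea follows; it is not the actual ring-builder algorithm, and the device names are illustrative:

```python
# Illustrative sketch: place replicas in distinct zones where possible.
# This is not the real ring-builder algorithm, only the placement idea.

def place_replicas(devices, replica_count=3):
    """Pick one device per zone until replica_count copies are placed."""
    placed = []
    used_zones = set()
    for dev in devices:
        if dev["zone"] not in used_zones:
            placed.append(dev["name"])
            used_zones.add(dev["zone"])
        if len(placed) == replica_count:
            break
    return placed

devices = [
    {"name": "object1:sdb1", "zone": 1},
    {"name": "object1:sdc1", "zone": 1},   # same zone, skipped at first
    {"name": "object2:sdb1", "zone": 2},
    {"name": "object2:sdc1", "zone": 3},
]
replicas = place_replicas(devices)
```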
<para>To increase reliability and performance, you can deploy
additional proxy servers.</para>
<para>The following diagram shows one possible architecture for a
minimal production environment:</para>
<para>
<inlinemediaobject>
<imageobject>
<imagedata fileref="../figures/swift_install_arch.png"/>
</imageobject>
</inlinemediaobject>
</para>
</section>

<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="swift-finalize-installation">
<title>Finalize installation</title>
<procedure>
<title>Configure hashes and default storage policy</title>
<step os="ubuntu;debian;rhel;centos;fedora">
<para>Obtain the <filename>/etc/swift/swift.conf</filename> file from
the Object Storage source repository:</para>
<screen><prompt>#</prompt> <userinput>curl -o /etc/swift/swift.conf \
https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/swift.conf-sample</userinput></screen>
</step>
<step>
<para>Edit the <filename>/etc/swift/swift.conf</filename> file and
complete the following actions:</para>
<substeps>
<step>
<para>In the <literal>[swift-hash]</literal> section, configure
the hash path prefix and suffix for your environment.</para>
<programlisting language="ini">[swift-hash]
...
swift_hash_path_suffix = <replaceable>HASH_PATH_SUFFIX</replaceable>
swift_hash_path_prefix = <replaceable>HASH_PATH_PREFIX</replaceable></programlisting>
<para>Replace <replaceable>HASH_PATH_PREFIX</replaceable> and
<replaceable>HASH_PATH_SUFFIX</replaceable> with unique values
that remain secret.</para>
<warning>
<para>Do not change or lose these values!</para>
</warning>
</step>
<step>
<para>In the <literal>[storage-policy:0]</literal> section,
configure the default storage policy:</para>
<programlisting language="ini">[storage-policy:0]
...
name = Policy-0
default = yes</programlisting>
</step>
</substeps>
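The hash path prefix and suffix are mixed into the MD5 hash that determines where every account, container, and object lands on the ring, which is why changing or losing them effectively discards all stored data. The following simplified Python model is patterned after Swift's `hash_path()`; treat it as illustrative rather than the exact implementation:

```python
import hashlib

# Simplified model of Swift's hash_path(): the prefix and suffix are
# mixed into the MD5 digest that determines ring placement, so changing
# either value remaps (and effectively loses) all existing data.
def hash_path(prefix, suffix, account, container=None, obj=None):
    parts = [p for p in (account, container, obj) if p is not None]
    raw = prefix + "/" + "/".join(parts) + suffix
    return hashlib.md5(raw.encode()).hexdigest()

h1 = hash_path("prefix1", "suffix1", "AUTH_test", "cont", "obj")
h2 = hash_path("prefix2", "suffix1", "AUTH_test", "cont", "obj")
# Different secrets place the same object at a different ring location.
```

Because every node computes this hash independently, all proxy and storage nodes must share identical values, which is why the next step copies `swift.conf` to every node.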
</step>
<step>
<para>Copy the <filename>swift.conf</filename> file to
the <literal>/etc/swift</literal> directory on each storage node
and on any additional nodes running the proxy service.</para>
</step>
<step>
<para>On all nodes, ensure proper ownership of the configuration
directory:</para>
<screen><prompt>#</prompt> <userinput>chown -R swift:swift /etc/swift</userinput></screen>
</step>
<step os="ubuntu;debian">
<para>On the controller node and any other nodes running the proxy
service, restart the Object Storage proxy service including
its dependencies:</para>
<screen><prompt>#</prompt> <userinput>service memcached restart</userinput>
<prompt>#</prompt> <userinput>service swift-proxy restart</userinput></screen>
</step>
<step os="rhel;centos;fedora;sles;opensuse">
<para>On the controller node and any other nodes running the proxy
service, start the Object Storage proxy service including its
dependencies and configure them to start when the system boots:</para>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>systemctl enable openstack-swift-proxy.service memcached.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-swift-proxy.service memcached.service</userinput></screen>
<para os="sles">On SLES:</para>
<screen os="sles"><prompt>#</prompt> <userinput>service memcached start</userinput>
<prompt>#</prompt> <userinput>service openstack-swift-proxy start</userinput>
<prompt>#</prompt> <userinput>chkconfig memcached on</userinput>
<prompt>#</prompt> <userinput>chkconfig openstack-swift-proxy on</userinput></screen>
<para os="opensuse">On openSUSE:</para>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl enable openstack-swift-proxy.service memcached.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-swift-proxy.service memcached.service</userinput></screen>
</step>
<step os="ubuntu;debian">
<para>On the storage nodes, start the Object Storage services:</para>
<screen><prompt>#</prompt> <userinput>swift-init all start</userinput></screen>
<note>
<para>The storage node runs many Object Storage services and the
<command>swift-init</command> command makes them easier to
manage. You can ignore errors from services not running on the
storage node.</para>
</note>
</step>
<step os="rhel;centos;fedora">
<para>On the storage nodes, start the Object Storage services and
configure them to start when the system boots:</para>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service \
openstack-swift-account-reaper.service openstack-swift-account-replicator.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-swift-account.service openstack-swift-account-auditor.service \
openstack-swift-account-reaper.service openstack-swift-account-replicator.service</userinput>
<prompt>#</prompt> <userinput>systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service \
openstack-swift-container-replicator.service openstack-swift-container-updater.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-swift-container.service openstack-swift-container-auditor.service \
openstack-swift-container-replicator.service openstack-swift-container-updater.service</userinput>
<prompt>#</prompt> <userinput>systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service \
openstack-swift-object-replicator.service openstack-swift-object-updater.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-swift-object.service openstack-swift-object-auditor.service \
openstack-swift-object-replicator.service openstack-swift-object-updater.service</userinput></screen>
</step>
<step os="sles;opensuse">
<para>On the storage nodes, start the Object Storage services and
configure them to start when the system boots:</para>
<para os="sles">On SLES:</para>
<screen os="sles"><prompt>#</prompt> <userinput>for service in \
openstack-swift-account openstack-swift-account-auditor \
openstack-swift-account-reaper openstack-swift-account-replicator; do \
service $service start; chkconfig $service on; done</userinput>
<prompt>#</prompt> <userinput>for service in \
openstack-swift-container openstack-swift-container-auditor \
openstack-swift-container-replicator openstack-swift-container-updater; do \
service $service start; chkconfig $service on; done</userinput>
<prompt>#</prompt> <userinput>for service in \
openstack-swift-object openstack-swift-object-auditor \
openstack-swift-object-replicator openstack-swift-object-updater; do \
service $service start; chkconfig $service on; done</userinput></screen>
<para os="opensuse">On openSUSE:</para>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service \
openstack-swift-account-reaper.service openstack-swift-account-replicator.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-swift-account.service openstack-swift-account-auditor.service \
openstack-swift-account-reaper.service openstack-swift-account-replicator.service</userinput>
<prompt>#</prompt> <userinput>systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service \
openstack-swift-container-replicator.service openstack-swift-container-updater.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-swift-container.service openstack-swift-container-auditor.service \
openstack-swift-container-replicator.service openstack-swift-container-updater.service</userinput>
<prompt>#</prompt> <userinput>systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service \
openstack-swift-object-replicator.service openstack-swift-object-updater.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-swift-object.service openstack-swift-object-auditor.service \
openstack-swift-object-replicator.service openstack-swift-object-updater.service</userinput></screen>
</step>
</procedure>
</section>

<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="swift-initial-rings">
<title>Create initial rings</title>
<para>Before starting the Object Storage services, you must create
the initial account, container, and object rings. The ring builder
creates configuration files that each node uses to determine and
deploy the storage architecture. For simplicity, this guide uses one
region and one zone with 2^10 (1024) maximum partitions, 3 replicas of
each object, and a minimum of 1 hour between successive moves of any
partition. For Object Storage, a partition indicates a directory on a storage
device rather than a conventional partition table. For more information,
see the
<link xlink:href="http://docs.openstack.org/developer/swift/deployment_guide.html"
>Deployment Guide</link>.</para>
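The first argument to the <command>swift-ring-builder</command> create commands below (10 in this guide) is the partition power: the ring contains 2^10 partitions, and an object's hashed path selects one of them from the top bits of its MD5 digest. A short Python sketch of the idea (not Swift's exact code):

```python
import hashlib

# Illustrative sketch of partition mapping, not Swift's exact code.
# "create 10 3 1" means partition power 10, 3 replicas, 1 hour min move.
PART_POWER = 10
PART_COUNT = 2 ** PART_POWER          # 1024 partitions

def partition_for(hashed_path):
    """Map a path to a partition using the top bits of its MD5 digest."""
    digest = hashlib.md5(hashed_path.encode()).digest()
    top32 = int.from_bytes(digest[:4], "big")
    return top32 >> (32 - PART_POWER)  # keep the top PART_POWER bits

part = partition_for("/AUTH_test/cont/obj")
```

A larger partition power spreads data more finely across devices but increases ring size, so choose it for the largest cluster you expect to run.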
<section xml:id="swift-initial-rings-account">
<title>Account ring</title>
<para>The account server uses the account ring to maintain lists
of containers.</para>
<procedure>
<title>To create the ring</title>
<note>
<para>Perform these steps on the controller node.</para>
</note>
<step>
<para>Change to the <literal>/etc/swift</literal> directory.</para>
</step>
<step>
<para>Create the base <filename>account.builder</filename> file:</para>
<screen><prompt>#</prompt> <userinput>swift-ring-builder account.builder create 10 3 1</userinput></screen>
</step>
<step>
<para>Add each storage node to the ring:</para>
<screen><prompt>#</prompt> <userinput>swift-ring-builder account.builder \
add r1z1-<replaceable>STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS</replaceable>:6002/<replaceable>DEVICE_NAME</replaceable> <replaceable>DEVICE_WEIGHT</replaceable></userinput></screen>
<para>Replace
<replaceable>STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS</replaceable>
with the IP address of the management network on the storage node.
Replace <replaceable>DEVICE_NAME</replaceable> with a storage
device name on the same storage node. For example, using the first
storage node in
<xref linkend="swift-install-storage-node"/> with the
<literal>/dev/sdb1</literal> storage device and a weight of 100:</para>
<screen><prompt>#</prompt> <userinput>swift-ring-builder account.builder add r1z1-10.0.0.51:6002/sdb1 100</userinput></screen>
<para>Repeat this command for each storage device on each storage
node. The example architecture requires four variations of this
command.</para>
</step>
<step>
<para>Verify the ring contents:</para>
<screen><prompt>#</prompt> <userinput>swift-ring-builder account.builder</userinput></screen>
</step>
<step>
<para>Rebalance the ring:</para>
<screen><prompt>#</prompt> <userinput>swift-ring-builder account.builder rebalance</userinput></screen>
<note>
<para>This process can take a while.</para>
</note>
</step>
</procedure>
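The device weight controls what share of the ring's partitions each device receives relative to the others; rebalancing moves partitions toward those proportions. A toy Python sketch of proportional assignment follows (the real rebalance also honors regions, zones, and the minimum time between moves):

```python
# Toy sketch: assign 1024 partitions to devices proportionally to weight.
# The real rebalance also honors zones/regions and the move limit.

def proportional_share(devices, part_count=1024):
    total = sum(weight for _, weight in devices)
    return {name: int(part_count * weight / total)
            for name, weight in devices}

devices = [("object1:sdb1", 100), ("object1:sdc1", 100),
           ("object2:sdb1", 100), ("object2:sdc1", 100)]
shares = proportional_share(devices)
```

With equal weights, each of the four devices in the example architecture receives an equal share; doubling one device's weight would steer roughly twice as many partitions to it.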
</section>
<section xml:id="swift-initial-rings-container">
<title>Container ring</title>
<para>The container server uses the container ring to maintain lists
of objects. However, it does not track object locations.</para>
<procedure>
<title>To create the ring</title>
<note>
<para>Perform these steps on the controller node.</para>
</note>
<step>
<para>Change to the <literal>/etc/swift</literal> directory.</para>
</step>
<step>
<para>Create the base <filename>container.builder</filename>
file:</para>
<screen><prompt>#</prompt> <userinput>swift-ring-builder container.builder create 10 3 1</userinput></screen>
</step>
<step>
<para>Add each storage node to the ring:</para>
<screen><prompt>#</prompt> <userinput>swift-ring-builder container.builder \
add r1z1-<replaceable>STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS</replaceable>:6001/<replaceable>DEVICE_NAME</replaceable> <replaceable>DEVICE_WEIGHT</replaceable></userinput></screen>
<para>Replace
<replaceable>STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS</replaceable>
with the IP address of the management network on the storage node.
Replace <replaceable>DEVICE_NAME</replaceable> with a storage
device name on the same storage node. For example, using the first
storage node in
<xref linkend="swift-install-storage-node"/> with the
<literal>/dev/sdb1</literal> storage device and a weight of 100:</para>
<screen><prompt>#</prompt> <userinput>swift-ring-builder container.builder add r1z1-10.0.0.51:6001/sdb1 100</userinput></screen>
<para>Repeat this command for each storage device on each storage
node. The example architecture requires four variations of this
command.</para>
</step>
<step>
<para>Verify the ring contents:</para>
<screen><prompt>#</prompt> <userinput>swift-ring-builder container.builder</userinput></screen>
</step>
<step>
<para>Rebalance the ring:</para>
<screen><prompt>#</prompt> <userinput>swift-ring-builder container.builder rebalance</userinput></screen>
<note>
<para>This process can take a while.</para>
</note>
</step>
</procedure>
</section>
<section xml:id="swift-initial-rings-object">
<title>Object ring</title>
<para>The object server uses the object ring to maintain lists
of object locations on local devices.</para>
<procedure>
<title>To create the ring</title>
<note>
<para>Perform these steps on the controller node.</para>
</note>
<step>
<para>Change to the <literal>/etc/swift</literal> directory.</para>
</step>
<step>
<para>Create the base <filename>object.builder</filename> file:</para>
<screen><prompt>#</prompt> <userinput>swift-ring-builder object.builder create 10 3 1</userinput></screen>
</step>
<step>
<para>Add each storage node to the ring:</para>
<screen><prompt>#</prompt> <userinput>swift-ring-builder object.builder \
add r1z1-<replaceable>STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS</replaceable>:6000/<replaceable>DEVICE_NAME</replaceable> <replaceable>DEVICE_WEIGHT</replaceable></userinput></screen>
<para>Replace
<replaceable>STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS</replaceable>
with the IP address of the management network on the storage node.
Replace <replaceable>DEVICE_NAME</replaceable> with a storage
device name on the same storage node. For example, using the first
storage node in
<xref linkend="swift-install-storage-node"/> with the
<literal>/dev/sdb1</literal> storage device and a weight of 100:</para>
<screen><prompt>#</prompt> <userinput>swift-ring-builder object.builder add r1z1-10.0.0.51:6000/sdb1 100</userinput></screen>
<para>Repeat this command for each storage device on each storage
node. The example architecture requires four variations of this
command.</para>
</step>
<step>
<para>Verify the ring contents:</para>
<screen><prompt>#</prompt> <userinput>swift-ring-builder object.builder</userinput></screen>
</step>
<step>
<para>Rebalance the ring:</para>
<screen><prompt>#</prompt> <userinput>swift-ring-builder object.builder rebalance</userinput></screen>
<note>
<para>This process can take a while.</para>
</note>
</step>
</procedure>
</section>
<section xml:id="swift-initial-rings-distribute">
<title>Distribute ring configuration files</title>
<para>Copy the <filename>account.ring.gz</filename>,
<filename>container.ring.gz</filename>, and
<filename>object.ring.gz</filename> files to the
<literal>/etc/swift</literal> directory on each storage node and
any additional nodes running the proxy service.</para>
</section>
</section>

<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="swift-install-storage-node">
<title>Install and configure the storage nodes</title>
<para>This section describes how to install and configure storage nodes
for the Object Storage service. For simplicity, this configuration
references two storage nodes, each containing two empty local block
storage devices. Each of the devices, <literal>/dev/sdb</literal> and
<literal>/dev/sdc</literal>, must contain a suitable partition table
with one partition occupying the entire device. Although the Object
Storage service supports any file system with
<glossterm>extended attributes (xattr)</glossterm>, testing and
benchmarking indicates the best performance and reliability on
<glossterm>XFS</glossterm>. For more information on horizontally
scaling your environment, see the
<link xlink:href="http://docs.openstack.org/developer/swift/deployment_guide.html"
>Deployment Guide</link>.</para>
<procedure>
<title>To configure prerequisites</title>
<para>You must configure each storage node before you install and
configure the Object Storage service on it. Similar to the controller
node, each storage node contains one network interface on the
<glossterm>management network</glossterm>. Optionally, each storage
node can contain a second network interface on a separate network for
replication. For more information, see
<xref linkend="ch_basic_environment"/>.</para>
<step>
<para>Configure unique items on the first storage node:</para>
<substeps>
<step>
<para>Configure the management interface:</para>
<para>IP address: 10.0.0.51</para>
<para>Network mask: 255.255.255.0 (or /24)</para>
<para>Default gateway: 10.0.0.1</para>
</step>
<step>
<para>Set the hostname of the node to
<replaceable>object1</replaceable>.</para>
</step>
</substeps>
</step>
<step>
<para>Configure unique items on the second storage node:</para>
<substeps>
<step>
<para>Configure the management interface:</para>
<para>IP address: 10.0.0.52</para>
<para>Network mask: 255.255.255.0 (or /24)</para>
<para>Default gateway: 10.0.0.1</para>
</step>
<step>
<para>Set the hostname of the node to
<replaceable>object2</replaceable>.</para>
</step>
</substeps>
</step>
<step>
<para>Configure shared items on both storage nodes:</para>
<substeps>
<step>
<para>Copy the contents of the <filename>/etc/hosts</filename> file
from the controller node and add the following to it:</para>
<programlisting language="ini"># object1
10.0.0.51 object1
# object2
10.0.0.52 object2</programlisting>
<para>Also add this content to the <filename>/etc/hosts</filename>
file on all other nodes in your environment.</para>
</step>
<step>
<para>Install and configure
<glossterm baseform="Network Time Protocol (NTP)">NTP</glossterm>
using the instructions in
<xref linkend="basics-ntp-other-nodes"/>.</para>
</step>
<step>
<para>Install the supporting utility packages:</para>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>apt-get install xfsprogs rsync</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>yum install xfsprogs rsync</userinput></screen>
<screen os="sles;opensuse"><prompt>#</prompt> <userinput>zypper install xfsprogs rsync xinetd</userinput></screen>
</step>
<step>
<para>Format the <literal>/dev/sdb1</literal> and
<literal>/dev/sdc1</literal> partitions as XFS:</para>
<screen><prompt>#</prompt> <userinput>mkfs.xfs /dev/sdb1</userinput>
<prompt>#</prompt> <userinput>mkfs.xfs /dev/sdc1</userinput></screen>
</step>
<step>
<para>Create the mount directory structure:</para>
<screen><prompt>#</prompt> <userinput>mkdir -p /srv/node/sdb1</userinput>
<prompt>#</prompt> <userinput>mkdir -p /srv/node/sdc1</userinput></screen>
</step>
<step>
<para>Edit the <filename>/etc/fstab</filename> file and add the
following to it:</para>
<programlisting language="ini">/dev/sdb1 /srv/node/sdb1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
/dev/sdc1 /srv/node/sdc1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 2</programlisting>
</step>
<step>
<para>Mount the devices:</para>
<screen><prompt>#</prompt> <userinput>mount /srv/node/sdb1</userinput>
<prompt>#</prompt> <userinput>mount /srv/node/sdc1</userinput></screen>
</step>
</substeps>
</step>
<step>
<para>Edit the <filename>/etc/rsyncd.conf</filename> file and add the
following to it:</para>
<programlisting language="ini">uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = <replaceable>MANAGEMENT_INTERFACE_IP_ADDRESS</replaceable>
[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock
[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock
[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock</programlisting>
<para>Replace <replaceable>MANAGEMENT_INTERFACE_IP_ADDRESS</replaceable>
with the IP address of the management network on the storage
node.</para>
<note>
<para>The <systemitem role="service">rsync</systemitem> service
requires no authentication, so consider running it on a private
network.</para>
</note>
</step>
<step os="ubuntu;debian">
<para>Edit the <filename>/etc/default/rsync</filename> file and enable
the <systemitem role="service">rsync</systemitem> service:</para>
<programlisting language="ini">RSYNC_ENABLE=true</programlisting>
</step>
<step os="sles;opensuse">
<para>Edit the <filename>/etc/xinetd.d/rsync</filename> file and enable
the <systemitem role="service">rsync</systemitem> service:</para>
<programlisting language="ini">disable = no</programlisting>
</step>
<step os="ubuntu;debian">
<para>Start the <systemitem class="service">rsync</systemitem>
service:</para>
<screen><prompt>#</prompt> <userinput>service rsync start</userinput></screen>
</step>
<step os="rhel;centos;fedora">
<para>Start the <systemitem class="service">rsyncd</systemitem> service
and configure it to start when the system boots:</para>
<screen><prompt>#</prompt> <userinput>systemctl enable rsyncd.service</userinput>
<prompt>#</prompt> <userinput>systemctl start rsyncd.service</userinput></screen>
</step>
<step os="sles;opensuse">
<para>Start the <systemitem class="service">xinetd</systemitem> service
and configure it to start when the system boots:</para>
<para os="sles">On SLES:</para>
<screen os="sles"><prompt>#</prompt> <userinput>service xinetd start</userinput>
<prompt>#</prompt> <userinput>chkconfig xinetd on</userinput></screen>
<para os="opensuse">On openSUSE:</para>
<screen os="opensuse"><prompt>#</prompt> <userinput>systemctl enable xinetd.service</userinput>
<prompt>#</prompt> <userinput>systemctl start xinetd.service</userinput></screen>
</step>
</procedure>
<procedure>
<title>Install and configure storage node components</title>
<note>
<para>Perform these steps on each storage node.</para>
</note>
<step>
<para>Install the packages:</para>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>apt-get install swift swift-account swift-container swift-object</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>yum install openstack-swift-account openstack-swift-container \
openstack-swift-object</userinput></screen>
<screen os="opensuse;sles"><prompt>#</prompt> <userinput>zypper install openstack-swift-account openstack-swift-container \
openstack-swift-object python-xml</userinput></screen>
</step>
<step os="ubuntu;debian;rhel;centos;fedora">
<para>Obtain the account, container, and object service configuration
files from the Object Storage source repository:</para>
<screen><prompt>#</prompt> <userinput>curl -o /etc/swift/account-server.conf \
https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/account-server.conf-sample</userinput></screen>
<screen><prompt>#</prompt> <userinput>curl -o /etc/swift/container-server.conf \
https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/container-server.conf-sample</userinput></screen>
<screen><prompt>#</prompt> <userinput>curl -o /etc/swift/object-server.conf \
https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/object-server.conf-sample</userinput></screen>
</step>
<step>
<para>Edit the
<filename>/etc/swift/account-server.conf</filename>,
<filename>/etc/swift/container-server.conf</filename>, and
<filename>/etc/swift/object-server.conf</filename> files and
complete the following actions:</para>
<substeps>
<step>
<para>In the <literal>[DEFAULT]</literal> section, configure the
bind IP address, bind port, user, configuration directory, and
mount point directory. The example shows the account service; use
<literal>bind_port = 6001</literal> in the
<filename>container-server.conf</filename> file and
<literal>bind_port = 6000</literal> in the
<filename>object-server.conf</filename> file:</para>
<programlisting language="ini">[DEFAULT]
...
bind_ip = <replaceable>MANAGEMENT_INTERFACE_IP_ADDRESS</replaceable>
bind_port = 6002
user = swift
swift_dir = /etc/swift
devices = /srv/node</programlisting>
<para>Replace
<replaceable>MANAGEMENT_INTERFACE_IP_ADDRESS</replaceable>
with the IP address of the management network on the storage
node.</para>
</step>
<step>
<para>In the <literal>[pipeline]</literal> section, enable
the appropriate modules. The example shows the account service; use
<literal>container-server</literal> and
<literal>object-server</literal> as the final pipeline entry in the
container and object service configuration files, respectively:</para>
<programlisting language="ini">[pipeline]
pipeline = healthcheck recon account-server</programlisting>
<note>
<para>For more information on other modules that enable
additional features, see the
<link xlink:href="http://docs.openstack.org/developer/swift/deployment_guide.html"
>Deployment Guide</link>.</para>
</note>
</step>
<step>
<para>In the <literal>[filter:recon]</literal> section, configure
the recon cache directory:</para>
<programlisting language="ini">[filter:recon]
...
recon_cache_path = /var/cache/swift</programlisting>
</step>
</substeps>
</step>
<step>
<para>Ensure proper ownership of the mount point directory
structure:</para>
<screen><prompt>#</prompt> <userinput>chown -R swift:swift /srv/node</userinput></screen>
</step>
<step>
<para>Create the <literal>recon</literal> directory and ensure proper
ownership of it:</para>
<screen><prompt>#</prompt> <userinput>mkdir -p /var/cache/swift</userinput>
<prompt>#</prompt> <userinput>chown -R swift:swift /var/cache/swift</userinput></screen>
</step>
</procedure>
</section>

xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="swift-system-reqs">
<?dbhtml stop-chunking?>
<title>System requirements</title>
<para><emphasis role="bold">Hardware</emphasis>: OpenStack Object
Storage is designed to run on commodity hardware.</para>
<note>

<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="swift-verify">
<title>Verify operation</title>
<para>This section describes how to verify operation of the Object
Storage service.</para>
<procedure>
<note>
<para>Perform these steps on the controller node.</para>
</note>
<step>
<para>Source the <literal>demo</literal> tenant credentials:</para>
<screen><prompt>$</prompt> <userinput>source demo-openrc.sh</userinput></screen>
</step>
<step>
<para>Show the service status:</para>
<screen><prompt>$</prompt> <userinput>swift stat</userinput>
<computeroutput>Account: AUTH_11b9758b7049476d9b48f7a91ea11493
Containers: 0
Objects: 0
Bytes: 0
Content-Type: text/plain; charset=utf-8
X-Timestamp: 1381434243.83760
X-Trans-Id: txdcdd594565214fb4a2d33-0052570383
X-Put-Timestamp: 1381434243.83760</computeroutput></screen>
</step>
<step>
<para>Upload a test file:</para>
<screen><prompt>$</prompt> <userinput>swift upload demo-container1 <replaceable>FILE</replaceable></userinput></screen>
<para>Replace <replaceable>FILE</replaceable> with the name of a local
file to upload to the <literal>demo-container1</literal>
container.</para>
</step>
<step>
<para>List containers:</para>
<screen><prompt>$</prompt> <userinput>swift list</userinput>
<computeroutput>demo-container1</computeroutput></screen>
</step>
<step>
<para>Download a test file:</para>
<screen><prompt>$</prompt> <userinput>swift download demo-container1 <replaceable>FILE</replaceable></userinput></screen>
<para>Replace <replaceable>FILE</replaceable> with the name of the
file uploaded to the <literal>demo-container1</literal>
container.</para>
</step>
</procedure>
</section>