<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_volumes">
<title>Volumes</title>
<section xml:id="cinder-vs-nova-volumes">
<title>Cinder Versus Nova-Volumes</title>
<para>You now have two options for Block Storage. As of the Folsom release, both are nearly identical in terms of functionality, APIs, and even the general theory of operation. Keep in mind, however, that nova-volume is deprecated and will be removed in the Grizzly release. </para>
<para>See the Cinder section of the <link
xlink:href="http://docs.openstack.org/folsom/openstack-compute/install/apt/content/osfolubuntu-cinder.html"
>Folsom Install Guide</link> for Cinder-specific
information.</para>
</section>
<section xml:id="managing-volumes">
<title>Managing Volumes</title>
<para>Nova-volume is the service that provides extra block-level storage to your OpenStack Compute instances. You may recognize this as similar to Amazon EC2's Elastic Block Storage (EBS); however, nova-volume is not the same implementation that EC2 uses today. Nova-volume is an iSCSI solution that uses the Logical Volume Manager (LVM) for Linux. Note that a volume can only be attached to one instance at a time; this is not a shared storage solution like a SAN or NFS, to which multiple servers can attach.</para>
<para>Before going any further, let's discuss how nova-volume is implemented in OpenStack: </para>
<para>The nova-volume service exposes LVM volumes to the compute nodes that run instances over iSCSI. Thus, there are two components involved: </para>
<para>
<orderedlist>
<listitem>
<para>lvm2, which works with a volume group (VG) called "nova-volumes" (refer to <link
xlink:href="http://en.wikipedia.org/wiki/Logical_Volume_Manager_(Linux)"
>http://en.wikipedia.org/wiki/Logical_Volume_Manager_(Linux)</link> for
further details)</para>
</listitem>
<listitem>
<para>open-iscsi, the iSCSI implementation which manages iSCSI sessions on the
compute nodes </para>
</listitem>
</orderedlist>
</para>
<para>Here is what happens from volume creation to attachment (a command-line sketch follows the list): </para>
<orderedlist>
<listitem>
<para>The volume is created with <command>nova volume-create</command>, which creates a logical volume (LV) in the volume group (VG) "nova-volumes". </para>
</listitem>
<listitem>
<para>The volume is attached to an instance with <command>nova volume-attach</command>, which creates a unique iSCSI IQN that is exposed to the compute node. </para>
</listitem>
<listitem>
<para>The compute node that runs the instance now has an active iSCSI session and new local storage (usually a /dev/sdX disk). </para>
</listitem>
<listitem>
<para>libvirt uses that local storage as storage for the instance; the instance gets a new disk (usually a /dev/vdX disk). </para>
</listitem>
</orderedlist>
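<para>As a minimal command-line sketch of these steps, the following creates a 10 GB volume and attaches it to a running instance with the nova client. The volume name, instance ID, and device name are placeholders, and option spelling may vary slightly between client versions:</para>
<screen><prompt>$</prompt> <userinput>nova volume-create --display_name test-vol 10</userinput>
<prompt>$</prompt> <userinput>nova volume-list</userinput>
<prompt>$</prompt> <userinput>nova volume-attach <replaceable>INSTANCE_ID</replaceable> <replaceable>VOLUME_ID</replaceable> /dev/vdb</userinput></screen>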
<para>For this particular walkthrough, there is one cloud controller running the nova-api, nova-scheduler, nova-objectstore, nova-network, and nova-volume services. There are two additional compute nodes running nova-compute. The walkthrough uses a custom partitioning scheme that carves out 60 GB of space and labels it as LVM. The network is a /28 (.80-.95), and FlatManager is the network manager setting for OpenStack Compute (Nova). </para>
<para>Please note that the network mode does not interfere with the way nova-volume works, but networking must be set up for nova-volume to work. Please refer to <link
linkend="ch_networking">Networking</link> for more
details.</para>
<para>To set up Compute to use volumes, ensure that nova-volume is installed along with lvm2 (a volume-group example follows this list). The guide is split into four parts: </para>
<para>
<itemizedlist>
<listitem>
<para>Installing the nova-volume service on the cloud controller.</para>
</listitem>
<listitem>
<para>Configuring the "nova-volumes" volume group on the compute
nodes.</para>
</listitem>
<listitem>
<para>Troubleshooting your nova-volume installation.</para>
</listitem>
<listitem>
<para>Backing up your nova-volumes.</para>
</listitem>
</itemizedlist>
</para>
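<para>As a reference point for the volume group configuration, creating the "nova-volumes" VG on a node with a spare partition typically looks like the following sketch; <literal>/dev/sdb1</literal> is only an example device and should be replaced with the partition you carved out for LVM:</para>
<screen><prompt>#</prompt> <userinput>pvcreate /dev/sdb1</userinput>
<prompt>#</prompt> <userinput>vgcreate nova-volumes /dev/sdb1</userinput>
<prompt>#</prompt> <userinput>vgdisplay nova-volumes</userinput></screen>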
<xi:include href="install-nova-volume.xml" />
<xi:include href="configure-nova-volume.xml" />
<xi:include href="troubleshoot-nova-volume.xml" />
<xi:include href="troubleshoot-cinder.xml" />
<xi:include href="backup-nova-volume-disks.xml" />
</section>
<section xml:id="volume-drivers">
<title>Volume drivers</title>
<para>The default nova-volume behavior can be altered by using different volume drivers that are included in the Nova codebase. To set the volume driver, use the <literal>volume_driver</literal> flag. The default is as follows:</para>
<programlisting>
volume_driver=nova.volume.driver.ISCSIDriver
iscsi_helper=tgtadm
</programlisting>
<section xml:id="ceph-rados">
<title>Ceph RADOS block device (RBD)</title>
<para>By Sebastien Han from <link xlink:href="http://www.sebastien-han.fr/blog/2012/06/10/introducing-ceph-to-openstack/">http://www.sebastien-han.fr/blog/2012/06/10/introducing-ceph-to-openstack/</link></para>
<para>If you are using KVM or QEMU as your hypervisor, the
Compute service can be configured to use
<link xlink:href="http://ceph.com/ceph-storage/block-storage/">
Ceph's RADOS block devices (RBD)</link> for volumes. </para>
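<para>As a minimal sketch of the Nova side of this setup, assuming the Folsom-era RBD volume driver and its flag names, the relevant <filename>nova.conf</filename> options might look like the following; the pool name, user, and secret UUID are placeholders for your own Ceph cluster:</para>
<programlisting>
volume_driver=nova.volume.driver.RBDDriver
rbd_pool=volumes
rbd_user=volumes
rbd_secret_uuid=00000000-0000-0000-0000-000000000000
</programlisting>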
<para>Ceph is a massively scalable, open source, distributed storage system. It comprises an object store, a block store, and a POSIX-compliant distributed file system. The platform can scale to the exabyte level and beyond; it runs on commodity hardware, is self-healing and self-managing, and has no single point of failure. Ceph is in the Linux kernel and is integrated with the OpenStack™ cloud operating system. Because it is open source, this portable storage platform may be installed and used in public or private clouds.<figure>
<title>Ceph architecture</title>
<mediaobject>
<imageobject>
<imagedata
fileref="http://sebastien-han.fr/images/Ceph-architecture.png"
/>
</imageobject>
</mediaobject>
</figure></para>
<simplesect>
<title>RADOS?</title>
<para>The terminology can be confusing: Ceph? RADOS?</para>
<para><emphasis>RADOS: Reliable Autonomic Distributed Object Store</emphasis> is an object store. RADOS distributes objects across the whole storage cluster and replicates them for fault tolerance. It is built from three major components:</para>
<itemizedlist>
<listitem>
<para><emphasis>Object Storage Device (OSD)</emphasis>: the storage daemon (the RADOS service), which holds your data. You must have this daemon running on each server in your cluster. Each OSD can have one or more hard drives associated with it; for performance, it is usually better to pool your disks with RAID arrays, LVM, or btrfs pooling, so that each server runs a single daemon. By default, three pools are created: data, metadata, and rbd.</para>
</listitem>
<listitem>
<para><emphasis>Meta-Data Server (MDS)</emphasis>: this is where the metadata is stored. MDSs build a POSIX file system on top of objects for Ceph clients. However, if you are not using the Ceph file system, you do not need a metadata server.</para>
</listitem>
<listitem>
<para><emphasis>Monitor (MON)</emphasis>: this lightweight daemon handles all communication with external applications and clients. It also provides consensus for distributed decision making in a Ceph/RADOS cluster. For instance, when you mount a Ceph share on a client, you point to the address of a MON server. The monitor checks the state and consistency of the data. In an ideal setup, you run at least three <code>ceph-mon</code> daemons on separate servers. Quorum decisions are made by majority vote, so an odd number of monitors is required (a health-check sketch follows this list).</para>
</listitem>
</itemizedlist>
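<para>As a quick health-check sketch once the daemons are running, the ceph command-line tool reports the cluster and monitor status; this assumes a working client configuration and keyring on the node where you run it:</para>
<screen><prompt>$</prompt> <userinput>ceph health</userinput>
<prompt>$</prompt> <userinput>ceph -s</userinput></screen>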
<para>The Ceph developers recommend using btrfs as the file system for storage. Using XFS is also possible and might be a better alternative for production environments. Neither Ceph nor btrfs is ready for production, and it can be risky to put them together; this is why XFS is an excellent alternative to btrfs. The ext4 file system is also compatible but does not take advantage of all of Ceph's power.</para>
<note>
<para>We recommend configuring Ceph to use the XFS
file system in the near term, and btrfs in the
long term once it is stable enough for
production.</para>
</note>
<para>See <link xlink:href="http://ceph.com/docs/master/rec/filesystem/"
>ceph.com/docs/master/rec/filesystem/</link> for more information about usable file
systems.</para>
</simplesect>
<simplesect><title>Ways to store, use and expose data</title>
<para>There are several ways to store and access your data.</para>
<itemizedlist>
<listitem>
<para><emphasis>RADOS</emphasis>: as an
object, default storage mechanism.</para>
</listitem>
<listitem><para><emphasis>RBD</emphasis>: as a block device. The Linux kernel RBD (RADOS block device) driver allows striping a Linux block device over multiple distributed object-store data objects. It is compatible with the KVM RBD image.</para></listitem>
<listitem><para><emphasis>CephFS</emphasis>: as a file,
POSIX-compliant file system.</para></listitem>
</itemizedlist>
<para>Ceph exposes its distributed object store (RADOS), which can be accessed through multiple interfaces (a brief rbd sketch follows the list):</para>
<itemizedlist>
<listitem><para><emphasis>RADOS Gateway</emphasis>:
Swift and Amazon-S3 compatible RESTful
interface. See <link xlink:href="http://ceph.com/wiki/RADOS_Gateway"
>RADOS_Gateway</link> for further information.</para></listitem>
<listitem><para><emphasis>librados</emphasis> and the
related C/C++ bindings.</para></listitem>
<listitem><para><emphasis>rbd and QEMU-RBD</emphasis>:
Linux kernel and QEMU block devices that
stripe data across multiple
objects.</para></listitem>
</itemizedlist>
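<para>For the RBD interface, a brief sketch with the <command>rbd</command> command-line tool looks like the following; the pool and image names are examples only:</para>
<screen><prompt>$</prompt> <userinput>rbd create test-image --size 1024 --pool rbd</userinput>
<prompt>$</prompt> <userinput>rbd ls --pool rbd</userinput></screen>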
<para>For detailed installation instructions and
benchmarking information, see <link
xlink:href="http://www.sebastien-han.fr/blog/2012/06/10/introducing-ceph-to-openstack/"
>http://www.sebastien-han.fr/blog/2012/06/10/introducing-ceph-to-openstack/</link>. </para>
</simplesect>
</section>
<section xml:id="ibm-storwize-svc-driver">
<title>IBM Storwize family and SVC volume driver</title>
<para>The volume management driver for Storwize family and
SAN Volume Controller (SVC) provides OpenStack Compute
instances with access to IBM Storwize family or SVC
storage systems.</para>
<section xml:id="ibm-storwize-svc-driver1">
<title>Configuring the Storwize family and SVC system
</title>
<simplesect>
<title>iSCSI configuration</title>
<para>The Storwize family or SVC system must be
configured for iSCSI.
Each Storwize family or SVC node should have
at least one iSCSI IP address.
The driver uses an iSCSI IP address associated
with the volume's preferred node (if available)
to attach the volume to the instance; otherwise,
it uses the first available iSCSI IP address of
the system.
The driver obtains the iSCSI IP address
directly from the storage system;
there is no need to provide these
iSCSI IP addresses directly to the driver. </para>
<note>
<para>You should make sure that the compute nodes
have iSCSI network access to the Storwize family
or SVC system. </para>
</note>
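<para>One way to verify that connectivity, as a sketch, is to run an iSCSI discovery from a compute node against one of the Storwize family or SVC iSCSI IP addresses (the address below is a placeholder):</para>
<screen><prompt>#</prompt> <userinput>iscsiadm -m discovery -t sendtargets -p <replaceable>STORWIZE_ISCSI_IP</replaceable></userinput></screen>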
</simplesect>
<simplesect>
<title>Configuring storage pools</title>
<para>The driver allocates all volumes in a single
pool.
The pool should be created in advance and be
provided to the driver using the
<literal>storwize_svc_volpool_name</literal>
flag.
Details about the configuration flags and how
to provide the flags to the driver appear in the
<link linkend="ibm-storwize-svc-driver2">
next section</link>. </para>
</simplesect>
<simplesect>
<title>Configuring user authentication for the driver
</title>
<para>The driver requires access to the Storwize
family or SVC system management interface.
The driver communicates with the management
interface using SSH.
The driver should be provided with the Storwize
family or SVC management IP using the
<literal>san_ip</literal>
flag, and the management port should be
provided by the
<literal>san_ssh_port</literal> flag.
By default, the port value is configured to
be port 22 (SSH). </para>
<note>
<para>Make sure the compute node running
the nova-volume management driver has SSH
network access to
the storage system. </para>
</note>
<para>To allow the driver to communicate with the
Storwize family or SVC system,
you must provide the driver with
a user on the storage system. The driver has two
authentication methods: password-based
authentication and SSH key pair authentication.
The user should have an Administrator role.
It is suggested to create a new
user for the management driver.
Please consult with your
storage and security administrator regarding
the preferred authentication method and how
passwords or SSH keys should be stored in a
secure manner. </para>
<note>
<para>When creating a new user on the Storwize or
SVC system, make sure the user belongs to
the Administrator group or to another group
that has an Administrator role. </para>
</note>
<para>If using password authentication, assign a
password to the user on the Storwize or SVC
system.
The driver configuration flags for the user and
password are <literal>san_login</literal> and
<literal>san_password</literal>, respectively.
</para>
<para>If you are using the SSH key pair
authentication, create SSH
private and public keys using the instructions
below or by any other method.
Associate the public key with the
user by uploading the public key: select the
"choose file" option in the Storwize family or
SVC management GUI under "SSH public key".
Alternatively, you may associate the SSH public
key using the command line interface;
details can be found in the
Storwize and SVC documentation.
The private key should
be provided to the driver using the
<literal>san_private_key</literal>
configuration flag. </para>
</simplesect>
<simplesect>
<title>Creating an SSH key pair using OpenSSH</title>
<para>You can create an SSH key pair using OpenSSH,
by running:
</para>
<programlisting>
ssh-keygen -t rsa
</programlisting>
<para>The command prompts for a file to save the key
pair.
For example, if you select 'key' as the
filename, two files will be created:
<literal>key</literal> and
<literal>key.pub</literal>.
The <literal>key</literal>
file holds the private SSH key and
<literal>key.pub</literal> holds the public
SSH key.
</para>
<para>The command also prompts for a passphrase,
which should be empty.</para>
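<para>Equivalently, as a non-interactive sketch, you can pass the file name and an empty passphrase on the command line:</para>
<programlisting>
ssh-keygen -t rsa -f key -N ""
</programlisting>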
<para>The private key file should be provided to the
driver using the
<literal>san_private_key</literal>
configuration flag. The public key should be
uploaded to the Storwize family or SVC system
using the storage management GUI or command
line interface. </para>
</simplesect>
</section>
<section xml:id="ibm-storwize-svc-driver2">
<title>Configuring the Storwize family and SVC driver
</title>
<simplesect>
<title>Enabling the Storwize family and SVC driver
</title>
<para>Set the volume driver to the Storwize family and
SVC driver by setting the
<literal>volume_driver</literal> option in
<filename>nova.conf</filename> as follows: </para>
<programlisting>
volume_driver=nova.volume.storwize_svc.StorwizeSVCDriver
</programlisting>
</simplesect>
<simplesect>
<title>Configuring options for the Storwize family and
SVC driver in nova.conf</title>
<para>The following options apply to all volumes and
cannot be changed for a specific volume. </para>
<table rules="all">
<caption>List of configuration flags for Storwize
storage and SVC driver</caption>
<col width="35%"/>
<col width="15%"/>
<col width="15%"/>
<col width="35%"/>
<thead>
<tr>
<td>Flag name</td>
<td>Type</td>
<td>Default</td>
<td>Description</td>
</tr>
</thead>
<tbody>
<tr>
<td><para><literal>san_ip</literal></para>
</td>
<td><para>Required</para></td>
<td><para></para></td>
<td><para>Management IP or host name</para>
</td>
</tr>
<tr>
<td><para><literal>san_ssh_port</literal>
</para>
</td>
<td><para>Optional</para></td>
<td><para>22</para></td>
<td><para>Management port</para></td>
</tr>
<tr>
<td><para><literal>san_login</literal></para>
</td>
<td><para>Required</para></td>
<td><para></para></td>
<td><para>Management login username</para>
</td>
</tr>
<tr>
<td><para><literal>san_password</literal>
</para>
</td>
<td><para>Required
<footnote xml:id='storwize-svc-fn1'>
<para>The authentication requires either a
password
(<literal>san_password</literal>) or
SSH private key
(<literal>san_private_key</literal>).
One must be specified. If both are
specified the driver will use only the
SSH private key.
</para></footnote>
</para></td>
<td><para></para></td>
<td><para>Management login password</para>
</td>
</tr>
<tr>
<td><para><literal>san_private_key</literal>
</para>
</td>
<td><para>Required
<footnoteref linkend='storwize-svc-fn1'/>
</para></td>
<td><para></para></td>
<td><para>Management login SSH private key
</para>
</td>
</tr>
<tr>
<td><para>
<literal>storwize_svc_volpool_name</literal>
</para>
</td>
<td><para>Required</para></td>
<td><para></para></td>
<td><para>Pool name for volumes</para></td>
</tr>
<tr>
<td><para>
<literal>storwize_svc_vol_vtype</literal>
</para>
</td>
<td><para>Optional</para></td>
<td><para>Striped</para></td>
<td><para>Volume virtualization type
<footnote xml:id='storwize-svc-fn2'>
<para>More details on this configuration
option are available in the Storwize
family and SVC command line
documentation under the
<literal>mkvdisk</literal> command.
</para></footnote>
</para></td>
</tr>
<tr>
<td><para>
<literal>storwize_svc_vol_rsize</literal>
</para>
</td>
<td><para>Optional</para></td>
<td><para>2%</para></td>
<td><para>Initial physical allocation
<footnote xml:id='storwize-svc-fn3'>
<para>
The driver creates thin-provisioned
volumes by default. The
<literal>storwize_svc_vol_rsize</literal>
flag defines the initial physical
allocation size for thin-provisioned
volumes, or if set to
<literal>-1</literal>,
the driver creates full allocated
volumes. More details
about the available options are available
in the Storwize family and SVC
documentation.
</para></footnote>
</para></td>
</tr>
<tr>
<td><para>
<literal>storwize_svc_vol_warning</literal>
</para>
</td>
<td><para>Optional</para></td>
<td><para>0 (disabled)</para></td>
<td><para>Space allocation warning threshold
<footnoteref linkend='storwize-svc-fn2'/>
</para></td>
</tr>
<tr>
<td><para>
<literal>storwize_svc_vol_autoexpand</literal>
</para>
</td>
<td><para>Optional</para></td>
<td><para>True</para></td>
<td><para>Enable or disable volume auto expand
<footnote xml:id='storwize-svc-fn4'>
<para>
Defines whether thin-provisioned volumes can be
auto-expanded by the storage system. A value of
<literal>True</literal> enables auto-expansion;
a value of <literal>False</literal> disables it.
Details about this option can be
found in the
<literal>autoexpand</literal>
flag of the Storwize
family and SVC command line interface
<literal>mkvdisk</literal> command.
</para></footnote>
</para></td>
</tr>
<tr>
<td><para>
<literal>storwize_svc_vol_grainsize</literal>
</para>
</td>
<td><para>Optional</para></td>
<td><para>256</para></td>
<td><para>Volume grain size
<footnoteref linkend='storwize-svc-fn2'/>
in KB</para></td>
</tr>
<tr>
<td><para>
<literal>storwize_svc_vol_compression
</literal>
</para>
</td>
<td><para>Optional</para></td>
<td><para>False</para></td>
<td><para>
Enable or disable Real-Time Compression
<footnote xml:id='storwize-svc-fn5'>
<para>Defines whether Real-time Compression
is used for the volumes created with
OpenStack. Details on Real-time
Compression can be found in the
Storwize family and SVC documentation.
The Storwize or SVC system must have
compression enabled for this feature
to work.
</para>
</footnote>
</para>
</td>
</tr>
<tr>
<td><para>
<literal>storwize_svc_vol_easytier</literal>
</para>
</td>
<td><para>Optional</para></td>
<td><para>True</para></td>
<td><para>Enable or disable Easy Tier
<footnote xml:id='storwize-svc-fn6'>
<para>Defines whether Easy Tier is used
for the volumes created with OpenStack.
Details on EasyTier can be found in the
Storwize family and SVC documentation.
The Storwize or SVC system must have
Easy Tier enabled for this feature to
work.
</para></footnote>
</para></td>
</tr>
<tr>
<td><para>
<literal>storwize_svc_flashcopy_timeout
</literal>
</para>
</td>
<td><para>Optional</para></td>
<td><para>120</para></td>
<td><para>FlashCopy timeout threshold
<footnote xml:id='storwize-svc-fn7'>
<para>The timeout threshold, in seconds, that the
driver uses when creating an OpenStack
snapshot; that is, the maximum amount of
time the driver waits for the Storwize
family or SVC system to prepare a new
FlashCopy mapping. The driver
accepts a maximum wait time of 600
seconds (10 minutes). </para></footnote> (seconds)</para></td>
</tr>
</tbody>
</table>
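<para>Putting these flags together, a <filename>nova.conf</filename> fragment for password-based authentication might look like the following sketch; all values are placeholders for your own environment:</para>
<programlisting>
volume_driver=nova.volume.storwize_svc.StorwizeSVCDriver
san_ip=1.2.3.4
san_ssh_port=22
san_login=openstack
san_password=mypassword
storwize_svc_volpool_name=openstack_pool
</programlisting>
<para>For SSH key pair authentication, replace <literal>san_password</literal> with <literal>san_private_key=/path/to/key</literal>.</para>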
</simplesect>
</section>
</section>
<section xml:id="nexenta-driver">
<title>Nexenta</title>
<para>The NexentaStor Appliance is a NAS/SAN software platform designed for building reliable and fast network storage arrays. NexentaStor is based on OpenSolaris and uses ZFS as a disk management system. NexentaStor can serve as a storage node for OpenStack and provide block-level volumes for virtual servers via the iSCSI protocol. </para>
<para>The Nexenta driver allows you to use a Nexenta Storage Appliance to store Nova volumes. Every Nova volume is represented by a single zvol in a predefined Nexenta volume. For every new volume, the driver creates an iSCSI target and iSCSI target group that are used to access it from compute hosts.</para>
<para>To use Nova with the Nexenta Storage Appliance, you should do the following (a configuration example follows this list):</para>
<itemizedlist>
<listitem><para>set
<literal>volume_driver=nova.volume.nexenta.volume.NexentaDriver</literal>.</para></listitem>
<listitem><para>set the <literal>--nexenta_host</literal> flag to the hostname or IP address of your NexentaStor</para></listitem>
<listitem><para>set <literal>--nexenta_user</literal> and <literal>--nexenta_password</literal> to the username and password of a user with all necessary privileges on the appliance, including access to the REST API</para></listitem>
<listitem><para>set <literal>--nexenta_volume</literal> to the name of the
volume on the appliance that you would like to
use in Nova, or create a volume named
<literal>nova</literal> (it will be used
by default)</para></listitem>
</itemizedlist>
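<para>A minimal <filename>nova.conf</filename> fragment for the Nexenta driver might therefore look like the following sketch; the host, credentials, and volume name are placeholders:</para>
<programlisting>
volume_driver=nova.volume.nexenta.volume.NexentaDriver
nexenta_host=nexenta.example.com
nexenta_user=admin
nexenta_password=nexenta
nexenta_volume=nova
</programlisting>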
<para>The Nexenta driver has many tunable flags. Some that you might want to change:</para>
<itemizedlist>
<listitem><para><literal>nexenta_target_prefix</literal> defines the prefix prepended to the volume ID to form the target name on Nexenta</para></listitem>
<listitem><para><literal>nexenta_target_group_prefix</literal> defines the
prefix for target groups</para></listitem>
<listitem><para><literal>nexenta_blocksize</literal> sets the block size of newly created zvols on the appliance, with a suffix; for example, the default 8K means 8 kilobytes</para></listitem>
<listitem><para><literal>nexenta_sparse</literal> is a boolean that can be set to use sparse zvols to save space on the appliance</para></listitem>
</itemizedlist>
<para>Some flags that you might want to keep with the
default values:</para>
<itemizedlist>
<listitem><para><literal>nexenta_rest_port</literal> is the port where Nexenta
listens for REST requests (the same port where
the NMV works)</para></listitem>
<listitem><para><literal>nexenta_rest_protocol</literal> can be set to
<literal>http</literal> or
<literal>https</literal>, but the default
is <literal>auto</literal> which makes the
driver try to use HTTP and switch to HTTPS in
case of failure</para></listitem>
<listitem><para><literal>nexenta_iscsi_target_portal_port</literal> is the port
to connect to Nexenta over iSCSI</para></listitem>
</itemizedlist>
</section>
<section xml:id="xensm">
<title>Using the XenAPI Storage Manager Volume Driver</title>
<para>The Xen Storage Manager volume driver (xensm) is a XenAPI-specific volume driver that provides basic storage functionality, including volume creation and destruction, on a number of different storage back-ends. It also enables the use of more sophisticated storage back-ends for operations such as cloning and snapshots. The list below shows some of the storage plugins already supported in Citrix XenServer and Xen Cloud Platform (XCP): </para>
<orderedlist>
<listitem>
<para>NFS VHD: Storage repository (SR) plugin which stores disks as Virtual Hard Disk (VHD)
files on a remote Network File System (NFS).
</para>
</listitem>
<listitem>
<para>Local VHD on LVM: SR plugin which represents disks as VHD disks on Logical Volumes (LVM)
within a locally-attached Volume Group.
</para>
</listitem>
<listitem>
<para>HBA LUN-per-VDI driver: SR plugin which represents Logical Units (LUs)
as Virtual Disk Images (VDIs) sourced by host bus adapters (HBAs).
E.g. hardware-based iSCSI or FC support.
</para>
</listitem>
<listitem>
<para>NetApp: SR driver for mapping of LUNs to VDIs on a NETAPP server,
providing use of fast snapshot and clone features on the filer.
</para>
</listitem>
<listitem>
<para>LVHD over FC: SR plugin which represents disks as VHDs on Logical Volumes
within a Volume Group created on an HBA LUN. E.g. hardware-based iSCSI or FC support.
</para>
</listitem>
<listitem>
<para>iSCSI: Base ISCSI SR driver, provides a LUN-per-VDI.
Does not support creation of VDIs but accesses existing LUNs on a target.
</para>
</listitem>
<listitem>
<para>LVHD over iSCSI: SR plugin which represents disks as
Logical Volumes within a Volume Group created on an iSCSI LUN.
</para>
</listitem>
<listitem>
<para>EqualLogic: SR driver for mapping of LUNs to VDIs on a
EQUALLOGIC array group, providing use of fast snapshot and clone features on the array.
</para>
</listitem>
</orderedlist>
<section xml:id="xensmdesign">
<title>Design and Operation</title>
<simplesect>
<title>Definitions</title>
<itemizedlist>
<listitem>
<para><emphasis role="bold"
>Backend:</emphasis> A term for a
particular storage backend. This could
be iSCSI, NFS, Netapp etc. </para>
</listitem>
<listitem>
<para><emphasis role="bold"
>Backend-config:</emphasis> All the
parameters required to connect to a
specific backend. For e.g. For NFS,
this would be the server, path, etc. </para>
</listitem>
<listitem>
<para><emphasis role="bold"
>Flavor:</emphasis> This term is
equivalent to volume "types". A
user friendly term to specify some
notion of quality of service. For
example, "gold" might mean that the
volumes will use a backend where
backups are possible. A flavor can be
associated with multiple backends. The
volume scheduler, with the help of the
driver, will decide which backend will
be used to create a volume of a
particular flavor. Currently, the
driver uses a simple "first-fit"
policy, where the first backend that
can successfully create this volume is
the one that is used. </para>
</listitem>
</itemizedlist>
</simplesect>
<simplesect>
<title>Operation</title>
<para>The admin uses the nova-manage command
detailed below to add flavors and backends. </para>
<para>One or more nova-volume service instances
will be deployed per availability zone. When
an instance is started, it will create storage
repositories (SRs) to connect to the backends
available within that zone. All nova-volume
instances within a zone can see all the
available backends. These instances are
completely symmetric and hence should be able
to service any
<literal>create_volume</literal> request
within the zone. </para>
<note>
<title>On XenServer, PV guests
required</title>
<para>Note that when using XenServer you can
only attach a volume to a PV guest.</para>
</note>
</simplesect>
</section>
<section xml:id="xensmconfig">
<title>Configuring XenAPI Storage Manager</title>
<simplesect>
<title>Prerequisites
</title>
<orderedlist>
<listitem>
<para>xensm requires that you use either Citrix XenServer or XCP as the hypervisor.
The Netapp and EqualLogic backends are not supported on XCP.
</para>
</listitem>
<listitem>
<para>
Ensure all <emphasis role="bold">hosts</emphasis> running volume and compute services
have connectivity to the storage system.
</para>
</listitem>
</orderedlist>
</simplesect>
<simplesect>
<title>Configuration
</title>
<itemizedlist>
<listitem>
<para>
<emphasis role="bold">Set the following configuration options for the nova volume service:
(nova-compute also requires the volume_driver configuration option.)
</emphasis>
</para>
<programlisting>
--volume_driver="nova.volume.xensm.XenSMDriver"
--use_local_volumes=False
</programlisting>
</listitem>
<listitem>
<para>
<emphasis role="bold">The backend configurations that the volume driver uses need to be
created before starting the volume service.
</emphasis>
</para>
<programlisting>
<prompt>$</prompt> nova-manage sm flavor_create &lt;label> &lt;description>
<prompt>$</prompt> nova-manage sm flavor_delete &lt;label>
<prompt>$</prompt> nova-manage sm backend_add &lt;flavor label> &lt;SR type> [config connection parameters]
Note: SR type and config connection parameters are in keeping with the XenAPI Command Line Interface. http://support.citrix.com/article/CTX124887
<prompt>$</prompt> nova-manage sm backend_delete &lt;backend-id>
</programlisting>
<para> Example: For the NFS storage manager plugin, the steps
below may be used.
</para>
<programlisting>
<prompt>$</prompt> nova-manage sm flavor_create gold "Not all that glitters"
<prompt>$</prompt> nova-manage sm flavor_delete gold
<prompt>$</prompt> nova-manage sm backend_add gold nfs name_label=mybackend server=myserver serverpath=/local/scratch/myname
<prompt>$</prompt> nova-manage sm backend_remove 1
</programlisting>
</listitem>
<listitem>
<para>
<emphasis role="bold">Start nova-volume and nova-compute with the new configuration options.
</emphasis>
</para>
</listitem>
</itemizedlist>
</simplesect>
<simplesect>
<title>Creating and Accessing the volumes from VMs </title>
<para>Currently, the flavors have not been tied to
the volume types API. As a result, we simply
end up creating volumes in a "first fit" order
on the given backends. </para>
<para>The standard euca-* or openstack API
commands (such as volume extensions) should be
used for creating, destroying, attaching, or
detaching volumes. </para>
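<para>For example, with euca2ools the basic volume operations are a short sketch like the following; the availability zone, instance ID, volume ID, and device name are placeholders:</para>
<programlisting>
<prompt>$</prompt> euca-create-volume -s 1 -z nova
<prompt>$</prompt> euca-attach-volume -i i-00000001 -d /dev/xvdb vol-00000001
<prompt>$</prompt> euca-detach-volume vol-00000001
</programlisting>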
</simplesect>
</section>
<section xml:id="cinder-volumes-solidfire">
<title>Configuring Cinder or Nova-Volumes to use a SolidFire Cluster</title>
<para>The SolidFire Cluster is a high-performance, all-SSD iSCSI storage device that provides massive scale-out capability and extreme fault tolerance. A key feature of the SolidFire cluster is the ability to set and modify, during operation, specific QoS levels on a volume-by-volume basis. The SolidFire cluster offers all of this along with de-duplication, compression, and an architecture that takes full advantage of SSDs.</para>
<para>To configure and use a SolidFire cluster with nova-volume, modify your <filename>nova.conf</filename> file as shown below:</para>
<programlisting>
volume_driver=nova.volume.solidfire.SolidFire
iscsi_ip_prefix=172.17.1.* # the prefix of your SVIP
san_ip=172.17.1.182 # the address of your MVIP
san_login=sfadmin # your cluster admin login
san_password=sfpassword # your cluster admin password
</programlisting>
<para>To configure and use a SolidFire cluster with Cinder, modify your <filename>cinder.conf</filename> file in the same way:</para>
<programlisting>
volume_driver=cinder.volume.solidfire.SolidFire
iscsi_ip_prefix=172.17.1.* # the prefix of your SVIP
san_ip=172.17.1.182 # the address of your MVIP
san_login=sfadmin # your cluster admin login
san_password=sfpassword # your cluster admin password
</programlisting>
</section>
</section>
</section>
<section xml:id="boot-from-volume">
<title>Boot From Volume</title>
<para>The Compute service has preliminary support for booting an instance from a
volume.</para>
<simplesect>
<title>Creating a bootable volume</title>
<para>To create a bootable volume, attach the volume to an existing instance, and then
build a volume-backed image. Here is an example based on <link
xlink:href="https://github.com/openstack-dev/devstack/blob/master/exercises/boot_from_volume.sh"
>exercises/boot_from_volume.sh</link>. This example assumes that you have a
running instance with a 1 GB volume attached at <literal>/dev/vdc</literal>. These
commands make the attached volume bootable using a CirrOS image. As
root:<screen><prompt>#</prompt> <userinput>mkfs.ext3 -b 1024 /dev/vdc 1048576</userinput>
<prompt>#</prompt> <userinput>mkdir /tmp/stage</userinput>
<prompt>#</prompt> <userinput>mount /dev/vdc /tmp/stage</userinput>
<prompt>#</prompt> <userinput>cd /tmp</userinput>
<prompt>#</prompt> <userinput>wget https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-rootfs.img.gz</userinput>
<prompt>#</prompt> <userinput>gunzip cirros-0.3.0-x86_64-rootfs.img.gz</userinput>
<prompt>#</prompt> <userinput>mkdir /tmp/cirros</userinput>
<prompt>#</prompt> <userinput>mount /tmp/cirros-0.3.0-x86_64-rootfs.img /tmp/cirros</userinput>
<prompt>#</prompt> <userinput>cp -pr /tmp/cirros/* /tmp/stage</userinput>
<prompt>#</prompt> <userinput>umount /tmp/cirros</userinput>
<prompt>#</prompt> <userinput>sync</userinput>
<prompt>#</prompt> <userinput>umount /tmp/stage</userinput></screen></para>
<para>Detach the volume once you are done.</para>
</simplesect>
<simplesect>
<title>Booting an instance from the volume</title>
<para>To boot a new instance from the volume, use the
<command>nova boot</command> command with the
<literal>--block_device_mapping</literal> flag.
The output for <command>nova help boot</command> shows
the following documentation about this
flag:<screen><computeroutput> --block_device_mapping &lt;dev_name=mapping>
Block device mapping in the format &lt;dev_name=&lt;id>:&lt;typ
e>:&lt;size(GB)>:&lt;delete_on_terminate>.
</computeroutput></screen></para>
<para>The command arguments are:<variablelist>
<varlistentry>
<term><literal>dev_name</literal></term>
<listitem>
<para>A device name where the volume will be attached in the system at
<filename>/dev/<replaceable>dev_name</replaceable></filename>.
This value is typically <literal>vda</literal>.</para>
</listitem>
</varlistentry>
<varlistentry>
<term><literal>id</literal></term>
<listitem>
<para>The ID of the volume to boot from, as shown in the output of
<command>nova volume-list</command>.</para>
</listitem>
</varlistentry>
<varlistentry>
<term><literal>type</literal></term>
<listitem>
<para>This is either <literal>snap</literal>, which means that the
volume was created from a snapshot, or anything other than
<literal>snap</literal> (a blank string is valid). In the
example above, the volume was not created from a snapshot, so we
will leave this field blank in our example below.</para>
</listitem>
</varlistentry>
<varlistentry>
<term><literal>size (GB)</literal></term>
<listitem>
<para>The size of the volume, in GB. It is safe to leave this blank and
have the Compute service infer the size.</para>
</listitem>
</varlistentry>
<varlistentry>
<term><literal>delete_on_terminate</literal></term>
<listitem>
<para>A boolean to indicate whether the volume should be deleted when
the instance is terminated. True can be specified as
<literal>True</literal> or <literal>1</literal>. False can be
specified as <literal>False</literal> or
<literal>0</literal>.</para>
</listitem>
</varlistentry>
</variablelist></para>
<para><note>
<para>Because of bug <link
xlink:href="https://bugs.launchpad.net/nova/+bug/1008622"
>#1008622</link>, you must specify an image when booting from a volume,
even though this image will not be used.</para>
</note>The following example attempts to boot from a volume with
ID=<literal>13</literal> and does not delete the volume on terminate. Replace the
<literal>--image</literal> flag with a valid image on your system, and
<literal>--key_name</literal> with a valid keypair
name:<screen><prompt>$</prompt> <userinput>nova boot --image <replaceable>f4addd24-4e8a-46bb-b15d-fae2591f1a35</replaceable> --flavor 2 --key_name <replaceable>mykey</replaceable> --block_device_mapping vda=13:::0 boot-from-vol-test</userinput></screen></para>
</simplesect>
</section>
<section xml:id="cinder-multiple-volumes">
<title>Multiple Cinder volumes</title>
<para>It is possible to add one or more volume nodes to an existing Cinder setup. In Folsom, Cinder uses the <literal>SimpleScheduler</literal> scheduler by default to select a volume host, which allows multiple volume back-ends (a command sketch follows the list below). <note>
<para>Detailed steps for an LVM back-end on Ubuntu 12.04 are given at <link
xlink:href="http://sbauza.wordpress.com/2013/06/03/adding-a-second-cinder-volume-with-folsom/"
>http://sbauza.wordpress.com/2013/06/03/adding-a-second-cinder-volume-with-folsom/</link></para>
</note></para>
<orderedlist>
<listitem>
<para><emphasis role="bold">Create the LVM cinder-volumes volume group on the new
Cinder host</emphasis></para>
</listitem>
<listitem>
<para><emphasis role="bold">Install the Cinder-volume package on the new
host</emphasis></para>
</listitem>
<listitem>
<para><emphasis role="bold">Configure <literal>/etc/cinder/cinder.conf</literal> and
<literal>/etc/cinder/api-paste.ini</literal> to match the correct values
for SQL backend, <literal>iscsi_ip_adress</literal> and API
credentials</emphasis></para>
</listitem>
<listitem>
<para><emphasis role="bold">Restart <literal>tgt</literal> and
<literal>cinder-volume</literal> services on the new
host</emphasis></para>
</listitem>
</orderedlist>
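<para>As a command sketch for Ubuntu 12.04, assuming a spare disk <literal>/dev/sdb</literal> on the new host, the steps above roughly translate to:</para>
<programlisting>
<prompt>#</prompt> pvcreate /dev/sdb
<prompt>#</prompt> vgcreate cinder-volumes /dev/sdb
<prompt>#</prompt> apt-get install cinder-volume
<prompt>#</prompt> vi /etc/cinder/cinder.conf /etc/cinder/api-paste.ini
<prompt>#</prompt> service tgt restart
<prompt>#</prompt> service cinder-volume restart
</programlisting>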
<para>To check the result, issue <literal>cinder-manage host list</literal>, which lists all of your Cinder back-ends.</para>
</section>
</chapter>