<?xml version="1.0" encoding="UTF-8"?>
<chapter version="5.0" xml:id="user_facing_operations"
xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:ns5="http://www.w3.org/2000/svg"
xmlns:ns4="http://www.w3.org/1998/Math/MathML"
xmlns:ns3="http://www.w3.org/1999/xhtml"
xmlns:ns="http://docbook.org/ns/docbook">
<title>User-Facing Operations</title>
<para>This guide is for OpenStack operators and does not seek to be an
exhaustive reference for users, but as an operator, you should have a basic
understanding of how to use the cloud facilities. This chapter looks at
OpenStack from a basic user perspective, which helps you understand your
users' needs and determine, when you get a trouble ticket, whether it is a
user issue or a service issue. The main concepts covered are images,
flavors, security groups, block storage, shared file system storage, and instances.</para>
<section xml:id="user_facing_images">
<title>Images</title>
<?dbhtml stop-chunking?>
<para>OpenStack images can often be thought of as "virtual machine
templates." Images can also be standard installation media such as ISO
images. Essentially, they contain bootable file systems that are used to
launch instances.<indexterm class="singular">
<primary>user training</primary>
<secondary>images</secondary>
</indexterm></para>
<section xml:id="add_images">
<title>Adding Images</title>
<para>Several pre-made images exist and can easily be imported into the
Image service. A common image to add is the CirrOS image, which is very
small and used for testing purposes.<indexterm class="singular">
<primary>images</primary>
<secondary>adding</secondary>
</indexterm> To add this image, simply do:</para>
<screen><prompt>$</prompt> <userinput>wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img</userinput>
<prompt>$</prompt> <userinput>glance image-create --name='cirros image' --is-public=true \
--container-format=bare --disk-format=qcow2 &lt; cirros-0.3.4-x86_64-disk.img</userinput></screen>
<para>The <code>glance image-create</code> command provides a large set
of options for working with your image. For example, the
<code> min-disk</code> option is useful for images that require root
disks of a certain size (for example, large Windows images). To view
these options, do:</para>
<screen><prompt>$</prompt> <userinput>glance help image-create</userinput></screen>
<para>The <code>location</code> option is important to note. It does not
copy the entire image into the Image service, but references an original
location where the image can be found. Upon launching an instance of
that image, the Image service accesses the image from the location
specified.</para>
<para>The <code>copy-from</code> option copies the image from the
location specified into the <code>/var/lib/glance/images</code>
directory. The same thing is done when using the STDIN redirection with
&lt;, as shown in the example.</para>
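<para>For illustration, registering an image by URL rather than from a local
file might look like the following sketch; the image name and URL here are
placeholders, not a tested download location:</para>
<screen><prompt>$</prompt> <userinput>glance image-create --name='example image' --is-public=true \
  --container-format=bare --disk-format=qcow2 \
  --copy-from http://example.com/images/example.qcow2</userinput></screen>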
<para>Run the following command to view the properties of existing
images:</para>
<screen><prompt>$</prompt> <userinput>glance image-show &lt;image-uuid&gt;</userinput></screen>
</section>
<section xml:id="add_signed_images">
<title>Adding Signed Images</title>
<para>To provide a chain of trust from an end user to the Image
service, and the Image service to Compute, an end user can import
signed images into the Image service that can be verified in Compute.
Appropriate Image service properties need to be set to enable signature
verification. Currently, signature verification is provided in Compute
only, but an accompanying feature in the Image service is targeted for
Mitaka.</para>
<note><para>Prior to the steps below, an asymmetric keypair and certificate
must be generated. In this example, these are called private_key.pem and
new_cert.crt, respectively, and both reside in the current directory. Also
note that the image in this example is cirros-0.3.4-x86_64-disk.img, but
any image can be used.</para></note>
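<para>If you do not already have a keypair and certificate, one possible way to
generate a self-signed pair with OpenSSL is sketched below; the key size,
validity period, and subject are illustrative values only:</para>
<screen><prompt>$</prompt> <userinput>openssl genrsa -out private_key.pem 2048</userinput>
<prompt>$</prompt> <userinput>openssl req -new -x509 -key private_key.pem -out new_cert.crt \
  -days 365 -subj '/CN=image-signing-example'</userinput></screen>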
<para>The following are steps needed to create the signature used for the signed images:</para>
<procedure>
<step>
<para>Retrieve image for upload</para>
<screen><prompt>$</prompt> <userinput>wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img</userinput></screen>
</step>
<step>
<para>Use private key to create a signature of the image</para>
<note>
<para>The following implicit values are being used to create the
signature in this example:
<itemizedlist>
<listitem><para>Signature hash method = SHA-256</para></listitem>
<listitem><para>Signature key type = RSA-PSS</para></listitem>
</itemizedlist>
</para>
</note>
<note>
<para>The following options are currently supported:
<itemizedlist>
<listitem><para>Signature hash methods: SHA-224, SHA-256, SHA-384, and SHA-512</para></listitem>
<listitem><para>Signature key types: DSA, ECC_SECT571K1, ECC_SECT409K1, ECC_SECT571R1, ECC_SECT409R1, ECC_SECP521R1, ECC_SECP384R1, and RSA-PSS</para></listitem>
</itemizedlist>
</para>
</note>
<para>Generate signature of image and convert it to a base64 representation:</para>
<screen><prompt>$</prompt> <userinput>openssl dgst -sha256 -sign private_key.pem -sigopt rsa_padding_mode:pss \
-out image-file.signature cirros-0.3.4-x86_64-disk.img</userinput></screen>
<screen><prompt>$</prompt> <userinput>base64 image-file.signature > signature_64</userinput></screen>
<screen><prompt>$</prompt> <userinput>cat signature_64</userinput>
<computeroutput>c4br5f3FYQV6Nu20cRUSnx75R/VcW3diQdsUN2nhPw+UcQRDoGx92hwMgRxzFYeUyydRTWCcUS2ZLudPR9X7rM
THFInA54Zj1TwEIbJTkHwlqbWBMU4+k5IUIjXxHO6RuH3Z5f/SlSt7ajsNVXaIclWqIw5YvEkgXTIEuDPE+C4=</computeroutput></screen>
</step>
<step><para>Create context</para>
<screen><prompt>$</prompt> <userinput>python
&gt;&gt;&gt; from keystoneclient.v3 import client
&gt;&gt;&gt; keystone_client = client.Client(username='demo',
user_domain_name='Default',
password='password',
project_name='demo',
auth_url='http://localhost:5000/v3')
&gt;&gt;&gt; from oslo_context import context
&gt;&gt;&gt; context = context.RequestContext(auth_token=keystone_client.auth_token,
tenant=keystone_client.project_id)</userinput></screen>
</step>
<step><para>Encode certificate in DER format</para>
<screen><userinput>
&gt;&gt;&gt; from cryptography import x509 as cryptography_x509
&gt;&gt;&gt; from cryptography.hazmat import backends
&gt;&gt;&gt; from cryptography.hazmat.primitives import serialization
&gt;&gt;&gt; with open("new_cert.crt", "rb") as cert_file:
...     cert = cryptography_x509.load_pem_x509_certificate(
...         cert_file.read(),
...         backend=backends.default_backend()
...     )
...
&gt;&gt;&gt; certificate_der = cert.public_bytes(encoding=serialization.Encoding.DER)</userinput></screen>
</step>
<step><para>Upload Certificate in DER format to Castellan</para>
<screen><userinput>
&gt;&gt;&gt; from castellan.common.objects import x_509
&gt;&gt;&gt; from castellan import key_manager
&gt;&gt;&gt; castellan_cert = x_509.X509(certificate_der)
&gt;&gt;&gt; key_API = key_manager.API()
&gt;&gt;&gt; cert_uuid = key_API.store(context, castellan_cert)
&gt;&gt;&gt; cert_uuid
u'62a33f41-f061-44ba-9a69-4fc247d3bfce'
</userinput></screen>
</step>
<step><para>Upload Image to Image service, with Signature Metadata</para>
<note>
<para>The following signature properties are used:
<itemizedlist>
<listitem>
<para>img_signature uses the signature called signature_64</para>
</listitem>
<listitem>
<para>img_signature_certificate_uuid uses the value of cert_uuid from
step 5 above</para>
</listitem>
<listitem>
<para>img_signature_hash_method matches 'SHA-256' in step 2 above</para>
</listitem>
<listitem>
<para>img_signature_key_type matches 'RSA-PSS' in step 2 above</para>
</listitem>
</itemizedlist>
</para>
</note>
<screen><prompt>$</prompt> <userinput>source openrc demo</userinput>
<prompt>$</prompt> <userinput>export OS_IMAGE_API_VERSION=2</userinput>
<prompt>$</prompt> <userinput>glance image-create \
--name cirrosSignedImage_goodSignature \
--property is-public=true \
--container-format bare \
--disk-format qcow2 \
--property img_signature='c4br5f3FYQV6Nu20cRUSnx75R/VcW3diQdsUN2nhPw+UcQRDoGx92hwM
gRxzFYeUyydRTWCcUS2ZLudPR9X7rMTHFInA54Zj1TwEIbJTkHwlqbWBMU4+k5IUIjXxHO6RuH3Z5f/
SlSt7ajsNVXaIclWqIw5YvEkgXTIEuDPE+C4=' \
--property img_signature_certificate_uuid='62a33f41-f061-44ba-9a69-4fc247d3bfce' \
--property img_signature_hash_method='SHA-256' \
--property img_signature_key_type='RSA-PSS' \
&lt; ~/cirros-0.3.4-x86_64-disk.img</userinput></screen>
</step>
<step><para>Signature verification will occur when Compute boots the signed image</para>
<note>
<para>As of the Mitaka release, Compute supports image signature validation. It is enabled by setting the <code>verify_glance_signatures</code> flag in <code>nova.conf</code> to <literal>True</literal>. When enabled, Compute automatically validates the signed image before launching the instance.</para>
</note>
</step>
</procedure>
</section>
<section xml:id="sharing_images">
<title>Sharing Images Between Projects</title>
<para>In a multi-tenant cloud environment, users sometimes want to share
their personal images or snapshots with other projects.<indexterm
class="singular">
<primary>projects</primary>
<secondary>sharing images between</secondary>
</indexterm><indexterm class="singular">
<primary>images</primary>
<secondary>sharing between projects</secondary>
</indexterm> This can be done on the command line with the
<literal>glance</literal> tool by the owner of the image.</para>
<para>To share an image or snapshot with another project, do the
following:</para>
<procedure>
<step>
<para>Obtain the UUID of the image:</para>
<screen><prompt>$</prompt> <userinput>glance image-list</userinput></screen>
</step>
<step>
<para>Obtain the UUID of the project with which you want to share
your image. Unfortunately, non-admin users are unable to use the
<literal>keystone</literal> command to do this. The easiest solution
is to obtain the UUID either from an administrator of the cloud or
from a user located in the project.</para>
</step>
<step>
<para>Once you have both pieces of information, run the
<literal>glance</literal> command:</para>
<screen><prompt>$</prompt> <userinput>glance member-create &lt;image-uuid&gt; &lt;project-uuid&gt;</userinput></screen>
<para>For example:</para>
<screen><prompt>$</prompt> <userinput>glance member-create 733d1c44-a2ea-414b-aca7-69decf20d810 \
771ed149ef7e4b2b88665cc1c98f77ca</userinput></screen>
<para>Project 771ed149ef7e4b2b88665cc1c98f77ca will now have access
to image 733d1c44-a2ea-414b-aca7-69decf20d810.</para>
</step>
</procedure>
</section>
<section xml:id="delete_images">
<title>Deleting Images</title>
<para>To delete an image,<indexterm class="singular">
<primary>images</primary>
<secondary>deleting</secondary>
</indexterm> just execute:</para>
<screen><prompt>$</prompt> <userinput>glance image-delete &lt;image uuid&gt;</userinput></screen>
<note>
<para>Deleting an image does not affect instances or snapshots that
were based on the image.</para>
</note>
</section>
<section xml:id="other_cli">
<title>Other CLI Options</title>
<para>A full set of options can be found using:<indexterm
class="singular">
<primary>images</primary>
<secondary>CLI options for</secondary>
</indexterm></para>
<screen><prompt>$</prompt> <userinput>glance help</userinput></screen>
<para>or the <link xlink:href="http://docs.openstack.org/cli-reference/glance.html">Command-Line Interface Reference
</link>.</para>
</section>
<section xml:id="image_service_and_database">
<title>The Image service and the Database</title>
<para>The only thing that the Image service does not store in a database
is the image itself. The Image service database has two main
tables:<indexterm class="singular">
<primary>databases</primary>
<secondary>Image service</secondary>
</indexterm><indexterm class="singular">
<primary>Image service</primary>
<secondary>database tables</secondary>
</indexterm></para>
<itemizedlist role="compact">
<listitem>
<para><literal>images</literal></para>
</listitem>
<listitem>
<para><literal>image_properties</literal></para>
</listitem>
</itemizedlist>
<para>Working directly with the database and SQL queries can provide you
with custom lists and reports of images. Technically, you can update
properties about images through the database, although this is not
generally recommended.</para>
</section>
<section xml:id="sample_image_database">
<title>Example Image service Database Queries</title>
<para>One interesting example is modifying the images table to change the
owner of an image. This is easy if you only need the
unique ID of the owner. <indexterm class="singular">
<primary>Image service</primary>
<secondary>database queries</secondary>
</indexterm>This example goes one step further and displays the
readable name of the owner:</para>
<screen><prompt>mysql&gt;</prompt> <userinput>select glance.images.id,
glance.images.name, keystone.tenant.name, is_public from
glance.images inner join keystone.tenant on
glance.images.owner=keystone.tenant.id;</userinput></screen>
<para>Another example is displaying all properties for a certain
image:</para>
<screen><prompt>mysql&gt;</prompt> <userinput>select name, value from
image_properties where image_id = &lt;image_id&gt;;</userinput></screen>
</section>
</section>
<section xml:id="flavors">
<title>Flavors</title>
<para>Virtual hardware templates are called "flavors" in OpenStack,
defining sizes for RAM, disk, number of cores, and so on. The default
install provides five flavors.</para>
<para>These are configurable by admin users (the rights may also be
delegated to other users by redefining the access controls for
<code>compute_extension:flavormanage</code> in
<code>/etc/nova/policy.json</code> on the <code>nova-api</code> server).
To get the list of available flavors on your system, run:<indexterm
class="singular">
<primary>DAC (discretionary access control)</primary>
</indexterm><indexterm class="singular">
<primary>flavor</primary>
</indexterm><indexterm class="singular">
<primary>user training</primary>
<secondary>flavors</secondary>
</indexterm></para>
<screen><prompt>$</prompt> <userinput>nova flavor-list</userinput>
<computeroutput>+-----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID  | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+-----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1   | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2   | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3   | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4   | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5   | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+-----+-----------+-----------+------+-----------+------+-------+-------------+-----------+</computeroutput></screen>
<para>The <code>nova flavor-create</code> command allows authorized users
to create new flavors. Additional flavor manipulation commands can be
shown with the command: <screen><prompt>$</prompt> <userinput>nova help | grep flavor</userinput></screen></para>
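<para>As a sketch, a new flavor could be created as follows; the name, ID, and
sizes are example values only (the positional arguments are name, ID, RAM in
MB, disk in GB, and number of VCPUs):</para>
<screen><prompt>$</prompt> <userinput>nova flavor-create m1.custom 6 8192 80 4</userinput></screen>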
<para>Flavors define a number of parameters, resulting in the user having
a choice of what type of virtual machine to run—just like they would have
if they were purchasing a physical server. <xref
linkend="table-flavor-params" /> lists the elements that can be set. Note
in particular <phrase
role="keep-together"><literal>extra_specs</literal>,</phrase> which can be
used to define free-form characteristics, giving a lot of flexibility
beyond just the size of RAM, CPU, and Disk.<indexterm class="singular">
<primary>base image</primary>
</indexterm></para>
<table rules="all" xml:id="table-flavor-params">
<caption>Flavor parameters</caption>
<col width="25%" />
<col width="75%" />
<thead>
<tr>
<th><para> <emphasis role="bold">Column</emphasis> </para></th>
<th><para> <emphasis role="bold">Description</emphasis> </para></th>
</tr>
</thead>
<tbody>
<tr>
<td><para>ID</para></td>
<td><para>Unique ID (integer or UUID) for the flavor.</para></td>
</tr>
<tr>
<td><para>Name</para></td>
<td><para>A descriptive name, such as xx.size_name, is conventional
but not required, though some third-party tools may rely on
it.</para></td>
</tr>
<tr>
<td><para>Memory_MB</para></td>
<td><para>Virtual machine memory in megabytes.</para></td>
</tr>
<tr>
<td><para>Disk</para></td>
<td><para>Virtual root disk size in gigabytes. This is an ephemeral
disk the base image is copied into. You don't use it when you boot
from a persistent volume. The "0" size is a special case that uses
the native base image size as the size of the ephemeral root
volume.</para></td>
</tr>
<tr>
<td><para>Ephemeral</para></td>
<td><para>Specifies the size of a secondary ephemeral data disk.
This is an empty, unformatted disk and exists only for the life of
the instance.</para></td>
</tr>
<tr>
<td><para>Swap</para></td>
<td><para>Optional swap space allocation for the
instance.</para></td>
</tr>
<tr>
<td><para>VCPUs</para></td>
<td><para>Number of virtual CPUs presented to the
instance.</para></td>
</tr>
<tr>
<td><para>RXTX_Factor</para></td>
<td><para>Optional property that allows created servers to have a
different bandwidth<indexterm class="singular">
<primary>bandwidth</primary>
<secondary>capping</secondary>
</indexterm> cap from that defined in the network they are
attached to. This factor is multiplied by the rxtx_base property of
the network. Default value is 1.0 (that is, the same as the attached
network).</para></td>
</tr>
<tr>
<td><para>Is_Public</para></td>
<td><para>Boolean value that indicates whether the flavor is
available to all users or private. Private flavors do not get the
current tenant assigned to them. Defaults to
<literal>True</literal>.</para></td>
</tr>
<tr>
<td><para>extra_specs</para></td>
<td><para>Additional optional restrictions on which compute nodes
the flavor can run on. This is implemented as key-value pairs that
must match against the corresponding key-value pairs on compute
nodes. Can be used to implement things like special resources (such
as flavors that can run only on compute nodes with GPU
hardware).</para></td>
</tr>
</tbody>
</table>
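<para>As an illustration of <literal>extra_specs</literal>, the following
sketch tags a flavor with a free-form key that could be matched against
compute node capabilities or host aggregate metadata by the scheduler; the
key and value shown are hypothetical, not predefined OpenStack
properties:</para>
<screen><prompt>$</prompt> <userinput>nova flavor-key m1.custom set special_hardware=gpu</userinput>
<prompt>$</prompt> <userinput>nova flavor-show m1.custom</userinput></screen>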
<section xml:id="private-flavors">
<title>Private Flavors</title>
<para>A user might need a custom flavor that is uniquely tuned for a
project she is working on. For example, the user might require 128 GB of
memory. If you create a new flavor as described above, the user would
have access to the custom flavor, but so would all other tenants in your
cloud. Sometimes this sharing isn't desirable. In this scenario,
allowing all users to have access to a flavor with 128 GB of memory
might cause your cloud to reach full capacity very quickly. To prevent
this, you can restrict access to the custom flavor using the
<literal>nova</literal> command:</para>
<screen><prompt>$</prompt> <userinput>nova flavor-access-add &lt;flavor-id&gt; &lt;project-id&gt;</userinput></screen>
<para>To view a flavor's access list, do the following:</para>
<screen><prompt>$</prompt> <userinput>nova flavor-access-list &lt;flavor-id&gt;</userinput></screen>
<note>
<title>Best Practices</title>
<para>Once access to a flavor has been restricted, no other projects
besides the ones granted explicit access will be able to see the
flavor. This includes the admin project. Make sure to add the admin
project in addition to the original project.</para>
<para>It's also helpful to allocate a specific numeric range for
custom and private flavors. On UNIX-based systems, nonsystem accounts
usually have a UID starting at 500. A similar approach can be taken
with custom flavors. This helps you easily identify which flavors are
custom, private, and public for the entire cloud.</para>
</note>
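<para>Putting these pieces together, a private flavor might be created and
shared along the following lines; the flavor name, ID, sizes, and project IDs
are placeholders:</para>
<screen><prompt>$</prompt> <userinput>nova flavor-create --is-public false largemem.custom 501 131072 80 16</userinput>
<prompt>$</prompt> <userinput>nova flavor-access-add 501 &lt;project-id&gt;</userinput>
<prompt>$</prompt> <userinput>nova flavor-access-add 501 &lt;admin-project-id&gt;</userinput></screen>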
</section>
<simplesect>
<title>How Do I Modify an Existing Flavor?</title>
<para>The OpenStack dashboard simulates the ability to modify a flavor
by deleting an existing flavor and creating a new one with the same
name.</para>
</simplesect>
</section>
<section xml:id="security_groups">
<?dbhtml stop-chunking?>
<title>Security Groups</title>
<para>A common new-user issue with OpenStack is failing to set an
appropriate security group when launching an instance. As a result, the
user is unable to contact the instance on the network.<indexterm
class="singular">
<primary>security groups</primary>
</indexterm><indexterm class="singular">
<primary>user training</primary>
<secondary>security groups</secondary>
</indexterm></para>
<para>Security groups are sets of IP filter rules that are applied to an
instance's networking. They are project specific, and project members can
edit the default rules for their group and add new rules sets. All
projects have a "default" security group, which is applied to instances
that have no other security group defined. Unless changed, this security
group denies all incoming traffic.</para>
<section xml:id="general-security-group-config">
<title>General Security Groups Configuration</title>
<para>The <code>nova.conf</code> option
<code>allow_same_net_traffic</code> (which defaults to
<literal>true</literal>) globally controls whether the rules apply to
hosts that share a network. When set to <literal>true</literal>, hosts
on the same subnet are not filtered and are allowed to pass all types of
traffic between them. On a flat network, this allows all instances from
all projects unfiltered communication. With VLAN networking, this allows
access between instances within the same project. If
<code>allow_same_net_traffic</code> is set to <literal>false</literal>,
security groups are enforced for all connections. In this case, it is
possible for projects to simulate <code>allow_same_net_traffic</code> by
configuring their default security group to allow all traffic from their
subnet.</para>
<tip>
<para>As noted in the previous chapter, the number of rules per
security group is controlled by the
<code>quota_security_group_rules</code> quota, and the number of allowed
security groups per project is controlled by the
<code>quota_security_groups</code> quota.</para>
</tip>
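<para>As a sketch, the relevant settings live in the <code>[DEFAULT]</code>
section of <code>nova.conf</code>; the quota values below are arbitrary
examples, not recommendations:</para>
<programlisting>[DEFAULT]
# Enforce security groups even between instances on the same subnet
allow_same_net_traffic = False
# Example quota values; tune these to your own cloud
quota_security_groups = 10
quota_security_group_rules = 20</programlisting>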
</section>
<section xml:id="end-user-config-sec-group">
<title>End-User Configuration of Security Groups</title>
<para>Security groups for the current project can be found on the
OpenStack dashboard under <guilabel>Access &amp; Security</guilabel>. To
see details of an existing group, select the <guilabel>edit</guilabel>
action for that security group. Obviously, modifying existing groups can
be done from this <guilabel>edit</guilabel> interface. There is a
<guibutton>Create Security Group</guibutton> button on the main
<guilabel>Access &amp; Security</guilabel> page for creating new groups.
We discuss the terms used in these fields when we explain the
command-line equivalents.</para>
<section xml:id="config-sec-group-by-nova-command">
<title>Setting with nova command</title>
<para>From the command line, you can get a list of security groups for
the project you're acting in using the <literal>nova</literal>
command:</para>
<?hard-pagebreak ?>
<screen><prompt>$</prompt> <userinput>nova secgroup-list</userinput>
<computeroutput>+---------+-------------+
| Name    | Description |
+---------+-------------+
| default | default     |
| open    | all ports   |
+---------+-------------+</computeroutput></screen>
<para>To view the details of the "open" security group:</para>
<screen><prompt>$</prompt> <userinput>nova secgroup-list-rules open</userinput>
<computeroutput>+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp        | -1        | 255     | 0.0.0.0/0 |              |
| tcp         | 1         | 65535   | 0.0.0.0/0 |              |
| udp         | 1         | 65535   | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+</computeroutput></screen>
<para>These rules are all "allow" type rules, as the default is deny.
The first column is the IP protocol (one of icmp, tcp, or udp), and the
second and third columns specify the affected port range. The fourth
column specifies the IP range in CIDR format. This example shows the
full port range for all protocols allowed from all IPs.</para>
<para>When adding a new security group, you should pick a descriptive
but brief name. This name shows up in brief descriptions of the
instances that use it where the longer description field often does not.
Seeing that an instance is using security group <literal>http</literal>
is much easier to understand than <literal>bobs_group</literal> or
<literal>secgrp1</literal>.</para>
<para>As an example, let's create a security group that allows web
traffic anywhere on the Internet. We'll call this group
<literal>global_http</literal>, which is clear and reasonably concise,
encapsulating what is allowed and from where. From the command line,
do:</para>
<screen><prompt>$</prompt> <userinput>nova secgroup-create \
global_http "allow web traffic from the Internet"</userinput>
<computeroutput>+-------------+-------------------------------------+
| Name        | Description                         |
+-------------+-------------------------------------+
| global_http | allow web traffic from the Internet |
+-------------+-------------------------------------+</computeroutput></screen>
<para>This creates the empty security group. To make it do what we want,
we need to add some rules:</para>
<screen><prompt>$</prompt> <userinput>nova secgroup-add-rule &lt;secgroup&gt; &lt;ip-proto&gt; &lt;from-port&gt; &lt;to-port&gt; &lt;cidr&gt;</userinput>
<prompt>$</prompt> <userinput>nova secgroup-add-rule global_http tcp 80 80 0.0.0.0/0</userinput>
<computeroutput>+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 80        | 80      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+</computeroutput></screen>
<para>Note that the arguments are positional, and the
<literal>from-port</literal> and <literal>to-port</literal> arguments
specify the allowed local port range for connections. These arguments do
not indicate the source and destination ports of the connection. More
complex rule sets can be built up through multiple invocations of
<literal>nova secgroup-add-rule</literal>. For example, if you want to
pass both http and https traffic, do this:</para>
<screen><prompt>$</prompt> <userinput>nova secgroup-add-rule global_http tcp 443 443 0.0.0.0/0</userinput>
<computeroutput>+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 443       | 443     | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+</computeroutput></screen>
<para>Although the command outputs only the newly added rule, this operation is
additive:</para>
<screen><prompt>$</prompt> <userinput>nova secgroup-list-rules global_http</userinput>
<computeroutput>+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 80        | 80      | 0.0.0.0/0 |              |
| tcp         | 443       | 443     | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+</computeroutput></screen>
<para>The inverse operation is called
<literal>secgroup-delete-rule</literal>, using the same format. Whole
security groups can be removed with
<literal>secgroup-delete</literal>.</para>
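<para>For example, removing the HTTPS rule added above, and then the whole
group, might look like this sketch:</para>
<screen><prompt>$</prompt> <userinput>nova secgroup-delete-rule global_http tcp 443 443 0.0.0.0/0</userinput>
<prompt>$</prompt> <userinput>nova secgroup-delete global_http</userinput></screen>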
<para>To create security group rules for a cluster of instances, you
want to use <phrase role="keep-together">SourceGroups</phrase>.</para>
<para>SourceGroups are a special dynamic way of defining the CIDR of
allowed sources. The user specifies a SourceGroup (security group name)
and then all the users' other instances using the specified SourceGroup
are selected dynamically. This dynamic selection alleviates the need for
individual rules to allow each new member of the <phrase
role="keep-together">cluster</phrase>.</para>
<para>The code is structured like this: <code>nova
secgroup-add-group-rule &lt;secgroup&gt; &lt;source-group&gt;
&lt;ip-proto&gt; &lt;from-port&gt; &lt;to-port&gt;</code>. An example
usage is shown here:</para>
<screen><prompt>$</prompt> <userinput>nova secgroup-add-group-rule cluster global_http tcp 22 22</userinput></screen>
<para>The "cluster" rule allows SSH access from any other instance that
uses the <literal>global_http</literal> group.</para>
</section>
<section xml:id="config-sec-group-by-neutron-command">
<title>Setting with neutron command</title>
<para>If your environment is using Neutron, you can configure security group settings using the <literal>neutron</literal> command.
To get a list of security groups for the project you are acting in, use the following command:</para>
<screen><prompt>$</prompt> <userinput>neutron security-group-list</userinput>
<computeroutput>+--------------------------------------+---------+-------------+
| id                                   | name    | description |
+--------------------------------------+---------+-------------+
| 6777138a-deb7-4f10-8236-6400e7aff5b0 | default | default     |
| 750acb39-d69b-4ea0-a62d-b56101166b01 | open    | all ports   |
+--------------------------------------+---------+-------------+</computeroutput></screen>
<para>To view the details of the "open" security group:</para>
<screen><prompt>$</prompt> <userinput>neutron security-group-show open</userinput>
<computeroutput>+----------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+----------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| description | all ports |
| id | 750acb39-d69b-4ea0-a62d-b56101166b01 |
| name | open |
| security_group_rules | {"remote_group_id": null, "direction": "egress", "remote_ip_prefix": null, "protocol": null, "tenant_id": "607ec981611a4839b7b06f6dfa81317d", "port_range_max": null, "security_group_id": "750acb39-d69b-4e0-a62d-b56101166b01", "port_range_min": null, "ethertype": "IPv4", "id": "361a1b62-95dd-46e1-8639-c3b2000aab60"} |
| | {"remote_group_id": null, "direction": "ingress", "remote_ip_prefix": "0.0.0.0/0", "protocol": "udp", "tenant_id": "341f49145ec7445192dc3c2abc33500d", "port_range_max": 65535, "security_group_id": "750acb9-d69b-4ea0-a62d-b56101166b01", "port_range_min": 1, "ethertype": "IPv4", "id": "496ba8b7-d96e-4655-920f-068a3d4ddc36"} |
| | {"remote_group_id": null, "direction": "ingress", "remote_ip_prefix": "0.0.0.0/0", "protocol": "icmp", "tenant_id": "341f49145ec7445192dc3c2abc33500d", "port_range_max": null, "security_group_id": "750acb9-d69b-4ea0-a62d-b56101166b01", "port_range_min": null, "ethertype": "IPv4", "id": "50642a56-3c4e-4b31-9293-0a636759a156"} |
| | {"remote_group_id": null, "direction": "egress", "remote_ip_prefix": null, "protocol": null, "tenant_id": "607ec981611a4839b7b06f6dfa81317d", "port_range_max": null, "security_group_id": "750acb39-d69b-4e0-a62d-b56101166b01", "port_range_min": null, "ethertype": "IPv6", "id": "f46f35eb-8581-4ca1-bbc9-cf8d0614d067"} |
| | {"remote_group_id": null, "direction": "ingress", "remote_ip_prefix": "0.0.0.0/0", "protocol": "tcp", "tenant_id": "341f49145ec7445192dc3c2abc33500d", "port_range_max": 65535, "security_group_id": "750acb9-d69b-4ea0-a62d-b56101166b01", "port_range_min": 1, "ethertype": "IPv4", "id": "fb6f2d5e-8290-4ed8-a23b-c6870813c921"} |
| tenant_id | 607ec981611a4839b7b06f6dfa81317d |
+----------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+</computeroutput></screen>
<para>These rules are all "allow" type rules, as the default is deny.
This example shows the full port range for all protocols allowed from all IPs.
This section describes the most common security-group-rule parameters:</para>
<variablelist>
<varlistentry>
<term>direction</term>
<listitem>
<para>The direction in which the security group rule is applied.
Valid values are <literal>ingress</literal> or <literal>egress</literal>.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>remote_ip_prefix</term>
<listitem>
<para>This attribute value matches the specified IP prefix as the
source IP address of the IP packet.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>protocol</term>
<listitem>
<para>The protocol that is matched by the security group rule.
Valid values are <literal>null</literal>, <literal>tcp</literal>,
<literal>udp</literal>, <literal>icmp</literal>,
and <literal>icmpv6</literal>.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>port_range_min</term>
<listitem>
<para>The minimum port number in the range that is matched
by the security group rule. If the protocol is TCP or UDP,
this value must be less than or equal to the
<literal>port_range_max</literal> attribute value. If the
protocol is ICMP or ICMPv6, this value must be an
ICMP or ICMPv6 type, respectively.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>port_range_max</term>
<listitem>
<para>The maximum port number in the range that is matched
by the security group rule.
The <literal>port_range_min</literal> attribute constrains
the <literal>port_range_max</literal> attribute. If the
protocol is ICMP or ICMPv6, this value must be an ICMP or
ICMPv6 type, respectively.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>ethertype</term>
<listitem>
<para>Must be <literal>IPv4</literal> or <literal>IPv6</literal>,
and addresses represented in CIDR must match the ingress or egress rules.</para>
</listitem>
</varlistentry>
</variablelist>
<para>When adding a new security group, you should pick a descriptive
but brief name. This name shows up in brief descriptions of the
instances that use it where the longer description field often does not.
Seeing that an instance is using security group <literal>http</literal>
is much easier to understand than <literal>bobs_group</literal> or
<literal>secgrp1</literal>.</para>
<para>This example creates a security group that allows web
traffic anywhere on the Internet. We'll call this group
<literal>global_http</literal>, which is clear and reasonably concise,
encapsulating what is allowed and from where. From the command line,
do:</para>
<screen><prompt>$</prompt> <userinput>neutron security-group-create \
global_http --description "allow web traffic from the Internet"</userinput>
<computeroutput>Created a new security_group:
+----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| description | allow web traffic from the Internet |
| id | c6d78d56-7c56-4c82-abcb-05aa9839d1e7 |
| name | global_http |
| security_group_rules | {"remote_group_id": null, "direction": "egress", "remote_ip_prefix": null, "protocol": null, "tenant_id": "341f49145ec7445192dc3c2abc33500d", "port_range_max": null, "security_group_id": "c6d78d56-7c56-4c82-abcb-05aa9839d1e7", "port_range_min": null, "ethertype": "IPv4", "id": "b2e56b3a-890b-48d3-9380-8a9f6f8b1b36"} |
| | {"remote_group_id": null, "direction": "egress", "remote_ip_prefix": null, "protocol": null, "tenant_id": "341f49145ec7445192dc3c2abc33500d", "port_range_max": null, "security_group_id": "c6d78d56-7c56-4c82-abcb-05aa9839d1e7", "port_range_min": null, "ethertype": "IPv6", "id": "153d84ba-651d-45fd-9015-58807749efc5"} |
| tenant_id | 341f49145ec7445192dc3c2abc33500d |
+----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+</computeroutput></screen>
<para>Immediately after creation, the security group contains only allow egress rules.
To make it do what we want, we need to add some rules:</para>
<screen><prompt>$</prompt> <userinput>neutron security-group-rule-create [-h]
[-f {html,json,json,shell,table,value,yaml,yaml}]
[-c COLUMN] [--max-width &lt;integer&gt;]
[--noindent] [--prefix PREFIX]
[--request-format {json,xml}]
[--tenant-id TENANT_ID]
[--direction {ingress,egress}]
[--ethertype ETHERTYPE]
[--protocol PROTOCOL]
[--port-range-min PORT_RANGE_MIN]
[--port-range-max PORT_RANGE_MAX]
[--remote-ip-prefix REMOTE_IP_PREFIX]
[--remote-group-id REMOTE_GROUP]
SECURITY_GROUP</userinput>
<prompt>$</prompt> <userinput>neutron security-group-rule-create --direction ingress --ethertype IPv4 --protocol tcp --port-range-min 80 --port-range-max 80 --remote-ip-prefix 0.0.0.0/0 global_http</userinput>
<computeroutput>Created a new security_group_rule:
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| direction         | ingress                              |
| ethertype         | IPv4                                 |
| id                | 88ec4762-239e-492b-8583-e480e9734622 |
| port_range_max    | 80                                   |
| port_range_min    | 80                                   |
| protocol          | tcp                                  |
| remote_group_id   |                                      |
| remote_ip_prefix  | 0.0.0.0/0                            |
| security_group_id | c6d78d56-7c56-4c82-abcb-05aa9839d1e7 |
| tenant_id         | 341f49145ec7445192dc3c2abc33500d     |
+-------------------+--------------------------------------+</computeroutput></screen>
<para>More complex rule sets can be built up through multiple invocations of
<literal>neutron security-group-rule-create</literal>. For example, if you want to
pass both http and https traffic, do this:</para>
<screen><prompt>$</prompt> <userinput>neutron security-group-rule-create --direction ingress --ethertype IPv4 --protocol tcp --port-range-min 443 --port-range-max 443 --remote-ip-prefix 0.0.0.0/0 global_http</userinput>
<computeroutput>Created a new security_group_rule:
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| direction         | ingress                              |
| ethertype         | IPv4                                 |
| id                | c50315e5-29f3-408e-ae15-50fdc03fb9af |
| port_range_max    | 443                                  |
| port_range_min    | 443                                  |
| protocol          | tcp                                  |
| remote_group_id   |                                      |
| remote_ip_prefix  | 0.0.0.0/0                            |
| security_group_id | c6d78d56-7c56-4c82-abcb-05aa9839d1e7 |
| tenant_id         | 341f49145ec7445192dc3c2abc33500d     |
+-------------------+--------------------------------------+</computeroutput></screen>
<para>Although the command outputs only the newly added rule, this operation is
additive:</para>
<screen><prompt>$</prompt> <userinput>neutron security-group-show global_http</userinput>
<computeroutput>+----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| description | allow web traffic from the Internet |
| id | c6d78d56-7c56-4c82-abcb-05aa9839d1e7 |
| name | global_http |
| security_group_rules | {"remote_group_id": null, "direction": "egress", "remote_ip_prefix": null, "protocol": null, "tenant_id": "341f49145ec7445192dc3c2abc33500d", "port_range_max": null, "security_group_id": "c6d78d56-7c56-4c82-abcb-05aa9839d1e7", "port_range_min": null, "ethertype": "IPv6", "id": "153d84ba-651d-45fd-9015-58807749efc5"} |
| | {"remote_group_id": null, "direction": "ingress", "remote_ip_prefix": "0.0.0.0/0", "protocol": "tcp", "tenant_id": "341f49145ec7445192dc3c2abc33500d", "port_range_max": 80, "security_group_id": "c6d78d56-7c56-4c82-abcb-05aa9839d1e7", "port_range_min": 80, "ethertype": "IPv4", "id": "88ec4762-239e-492b-8583-e480e9734622"} |
| | {"remote_group_id": null, "direction": "egress", "remote_ip_prefix": null, "protocol": null, "tenant_id": "341f49145ec7445192dc3c2abc33500d", "port_range_max": null, "security_group_id": "c6d78d56-7c56-4c82-abcb-05aa9839d1e7", "port_range_min": null, "ethertype": "IPv4", "id": "b2e56b3a-890b-48d3-9380-8a9f6f8b1b36"} |
| | {"remote_group_id": null, "direction": "ingress", "remote_ip_prefix": "0.0.0.0/0", "protocol": "tcp", "tenant_id": "341f49145ec7445192dc3c2abc33500d", "port_range_max": 443, "security_group_id": "c6d78d56-7c56-4c82-abcb-05aa9839d1e7", "port_range_min": 443, "ethertype": "IPv4", "id": "c50315e5-29f3-408e-ae15-50fdc03fb9af"} |
| tenant_id | 341f49145ec7445192dc3c2abc33500d |
+----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+</computeroutput></screen>
<para>The inverse operation is called
<literal>security-group-rule-delete</literal>, specifying security-group-rule ID.
Whole security groups can be removed with
<literal>security-group-delete</literal>.</para>
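<para>For example, assuming the rule ID shown in the output above, the
deletion commands might look like this:</para>
<screen><prompt>$</prompt> <userinput>neutron security-group-rule-delete c50315e5-29f3-408e-ae15-50fdc03fb9af</userinput>
<prompt>$</prompt> <userinput>neutron security-group-delete global_http</userinput></screen>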
<para>To create security group rules for a cluster of instances,
use <phrase role="keep-together">RemoteGroups</phrase>.</para>
<para>RemoteGroups are a dynamic way of defining the CIDR of
allowed sources. The user specifies a RemoteGroup (security group name)
and then all the users' other instances using the specified RemoteGroup
are selected dynamically. This dynamic selection alleviates the need for
individual rules to allow each new member of the <phrase
role="keep-together">cluster</phrase>.</para>
<para>The code is similar to the above example of <literal>security-group-rule-create</literal>.
To use RemoteGroup, specify <literal>--remote-group-id</literal>
instead of <literal>--remote-ip-prefix</literal>.
For example:</para>
<screen><prompt>$</prompt> <userinput>neutron security-group-rule-create --direction ingress \
--ethertype IPv4 --protocol tcp --port-range-min 22 --port-range-max 22 --remote-group-id global_http cluster</userinput></screen>
<para>The "cluster" rule allows SSH access from any other instance that
uses the <literal>global_http</literal> group.</para>
</section>
</section>
</section>
<section xml:id="user_facing_block_storage">
<?dbhtml stop-chunking?>
<title>Block Storage</title>
<para>OpenStack volumes are persistent block-storage devices that may be
attached to and detached from instances, but they can be attached to only one
instance at a time. Similar to an external hard drive, they do not provide
shared storage in the way a network file system or object store does. It
is left to the operating system in the instance to put a file system on
the block device and mount it, or not.
<indexterm class="singular">
<primary>block storage</primary>
</indexterm>
<indexterm class="singular">
<primary>storage</primary>
<secondary>block storage</secondary>
</indexterm>
<indexterm class="singular">
<primary>user training</primary>
<secondary>block storage</secondary>
</indexterm>
</para>
<para>As with other removable disk technology, it is important that the
operating system is not trying to make use of the disk before removing it.
On Linux instances, this typically involves unmounting any file systems
mounted from the volume. The OpenStack volume service cannot tell whether
it is safe to remove volumes from an instance, so it does what it is told.
If a user tells the volume service to detach a volume from an instance
while it is being written to, you can expect some level of file system
corruption as well as faults from whatever process within the instance was
using the device.</para>
<para>There is nothing OpenStack-specific in being aware of the steps
needed to access block devices from within the instance operating system,
potentially formatting them for first use and being cautious when removing
them. What is specific is how to create new volumes and attach and detach
them from instances. These operations can all be done from the
<guilabel>Volumes</guilabel> page of the dashboard or by using the
<literal>cinder</literal> command-line client.</para>
<para>To add new volumes, you need only a name and a volume size in
gigabytes. Either put these into the <guilabel>create volume</guilabel>
web form or use the command line:</para>
<screen><prompt>$</prompt> <userinput>cinder create --display-name test-volume 10</userinput></screen>
<para>This creates a 10 GB volume named <literal>test-volume</literal>. To
list existing volumes and the instances they are connected to, if
any:</para>
<screen><prompt>$</prompt> <userinput>cinder list</userinput>
<computeroutput>+------------+---------+--------------------+------+-------------+-------------+
| ID         | Status  | Display Name       | Size | Volume Type | Attached to |
+------------+---------+--------------------+------+-------------+-------------+
| 0821...19f | active  | test-volume        | 10   | None        |             |
+------------+---------+--------------------+------+-------------+-------------+</computeroutput></screen>
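<para>Attaching and detaching the volume is done through the Compute service;
a sketch using the <literal>nova</literal> client follows, where the instance
identifier and device path are placeholders:</para>
<screen><prompt>$</prompt> <userinput>nova volume-attach &lt;instance-name-or-uuid&gt; &lt;volume-uuid&gt; /dev/vdb</userinput>
<prompt>$</prompt> <userinput>nova volume-detach &lt;instance-name-or-uuid&gt; &lt;volume-uuid&gt;</userinput></screen>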
<para>OpenStack Block Storage also allows creating snapshots of
volumes. Remember that this is a block-level snapshot that is crash
consistent, so it is best if the volume is not connected to an instance
when the snapshot is taken and second best if the volume is not in use on
the instance it is attached to. If the volume is under heavy use, the
snapshot may have an inconsistent file system. In fact, by default, the
volume service does not take a snapshot of a volume that is attached to an
instance, though it can be forced to. To take a volume snapshot, either
select <guilabel>Create Snapshot</guilabel> from the actions column next
to the volume name on the dashboard volume page, or run this from the
command line:</para>
<programlisting>usage: cinder snapshot-create [--force &lt;True|False&gt;]
                              [--display-name &lt;display-name&gt;]
                              [--display-description &lt;display-description&gt;]
                              &lt;volume-id&gt;

Add a new snapshot.

Positional arguments:
  &lt;volume-id&gt;                    ID of the volume to snapshot

Optional arguments:
  --force &lt;True|False&gt;           Optional flag to indicate whether to
                                 snapshot a volume even if it's attached
                                 to an instance. (Default=False)
  --display-name &lt;display-name&gt;  Optional snapshot name. (Default=None)
  --display-description &lt;display-description&gt;
                                 Optional snapshot description.
                                 (Default=None)</programlisting>
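<para>For example, a snapshot of the volume created earlier might be taken with
a command along these lines; the snapshot name is arbitrary:</para>
<screen><prompt>$</prompt> <userinput>cinder snapshot-create --display-name test-volume-snapshot &lt;volume-id&gt;</userinput></screen>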
<note><para>For more information about updating Block Storage volumes (for example, resizing or
transferring), see the <link xlink:href="http://docs.openstack.org/user-guide/"
>OpenStack End User Guide</link>.</para></note>
<section xml:id="block_storage_creation_failures">
<title>Block Storage Creation Failures</title>
<para>If a user tries to create a volume and the volume immediately goes
into an error state, the best way to troubleshoot is to grep the cinder
log files for the volume's UUID. First try the log files on the cloud
controller, and then try the storage node where the service attempted to
create the volume:</para>
<screen><prompt>#</prompt> <userinput>grep 903b85d0-bacc-4855-a261-10843fc2d65b /var/log/cinder/*.log</userinput></screen>
</section>
</section>
<section xml:id="user_facing_shared_file_systems">
<?dbhtml stop-chunking?>
<title>Shared File Systems Service</title>
<para>Similar to Block Storage, the Shared File Systems service provides
persistent storage, called shares, that can be used in multi-tenant
environments. Users create and mount a share as a remote file system on any
machine that allows mounting shares and has network access to the share
exporter. The share can then be used for storing, sharing, and exchanging
files. The default configuration of the Shared File Systems service depends
on the back-end driver the admin chooses when starting the Shared File
Systems service.
For more information about existing back-end drivers, see section
<link xlink:href="http://docs.openstack.org/developer/manila/devref/index.html#share-backends">"Share Backends"</link>
of the Shared File Systems service Developer Guide. For example,
if an OpenStack Block Storage based back end is used, the Shared
File Systems service takes care of everything, including VMs, networking,
keypairs, and security groups. Other configurations require more
detailed knowledge of share functionality to set up and tune specific
parameters and modes of share operation.
</para>
<para>
A share is a remote, mountable file system, so users can mount a share
on multiple hosts and access it from multiple hosts by multiple
users at a time. With the Shared File Systems service, you can perform
a large number of operations with shares:
<itemizedlist>
<listitem>
<para>Create, update, delete and force-delete shares</para>
</listitem>
<listitem>
<para>Change access rules for shares, reset share state</para>
</listitem>
<listitem>
<para>Specify quotas for existing users or tenants</para>
</listitem>
<listitem>
<para>Create share networks</para>
</listitem>
<listitem>
<para>Define new share types</para>
</listitem>
<listitem>
<para>Perform operations with share snapshots: create, change name,
create a share from a snapshot, delete</para>
</listitem>
<listitem>
<para>Operate with consistency groups</para>
</listitem>
<listitem>
<para>Use security services</para>
</listitem>
</itemizedlist>
For more information on share management, see section
<link xlink:href="http://docs.openstack.org/admin-guide-cloud/shared_file_systems_share_management.html">
“Share management”</link> of chapter “Shared File Systems” in the
OpenStack Cloud Administrator Guide.
As for security services, remember that different drivers
support different authentication methods, while the generic driver does not
support security services at all (see section
<link xlink:href="http://docs.openstack.org/admin-guide-cloud/shared_file_systems_security_services.html">
“Security services”</link> of chapter “Shared File Systems” in the
OpenStack Cloud Administrator Guide).
</para>
<para>
You can create a share in a network, list shares, and
show information for, update, and delete a specified share. You can
also create snapshots of shares (see section
<link xlink:href="http://docs.openstack.org/admin-guide-cloud/shared_file_systems_snapshots.html">
“Share snapshots”</link> of chapter “Shared File Systems” in OpenStack
Cloud Administrator Guide).
</para>
<para>
There are default and specific share types that allow you to filter or
choose back ends before you create a share. The function and behavior of
share types are similar to Block Storage volume types (see section
<link xlink:href="http://docs.openstack.org/admin-guide-cloud/shared_file_systems_share_types.html">
“Share types”</link> of chapter “Shared File Systems” in OpenStack
Cloud Administrator Guide).
</para>
<para>
To help users keep and restore their data, the Shared File Systems service
provides a mechanism to create and operate snapshots (see section
<link xlink:href="http://docs.openstack.org/admin-guide-cloud/shared_file_systems_snapshots.html">
“Share snapshots”</link> of chapter “Shared File Systems” in OpenStack
Cloud Administrator Guide).
</para>
<para>
A security service stores configuration information for clients for
authentication and authorization. Within Manila, a share network can be
associated with up to three security service types (for detailed
information see section
<link xlink:href="http://docs.openstack.org/admin-guide-cloud/shared_file_systems_security_services.html">
“Security services”</link> of chapter “Shared File Systems” in
OpenStack Cloud Administrator Guide):
<itemizedlist>
<listitem>
<para>LDAP</para>
</listitem>
<listitem>
<para>Kerberos</para>
</listitem>
<listitem>
<para>Microsoft Active Directory</para>
</listitem>
</itemizedlist>
</para>
<para>
The Shared File Systems service differs from the principles
implemented in Block Storage. The Shared File Systems service can work in
two modes:
<itemizedlist>
<listitem>
<para>Without interaction with share networks, in so called
"no share servers" mode.</para>
</listitem>
<listitem>
<para>Interacting with share networks.</para>
</listitem>
</itemizedlist>
The Networking service is used by the Shared File Systems service to
operate directly with share servers. To switch on interaction with the
Networking service, create a share specifying a share network.
To use "share servers" mode even outside of OpenStack, a network
plugin called StandaloneNetworkPlugin is used. In this case,
provide network information in the configuration: IP range, network
type, and segmentation ID.
You can also add security services to a share network (see section
<link xlink:href="http://docs.openstack.org/admin-guide-cloud/shared_file_systems_networking.html">
“Networking”</link> of chapter “Shared File Systems” in OpenStack
Cloud Administrator Guide).
</para>
<para>
The main idea of consistency groups is to enable you to create
snapshots at the exact same point in time from multiple file system
shares. Those snapshots can then be used to restore all shares that
were associated with the consistency group (see section
<link xlink:href="http://docs.openstack.org/admin-guide-cloud/shared_file_systems_cgroups.html">
“Consistency groups”</link> of chapter “Shared File Systems” in
OpenStack Cloud Administrator Guide).
</para>
<para>
Shared File System storage allows administrators to set limits and
quotas for specific tenants and users. Limits are the resource
limitations that are allowed for each tenant or user. Limits consist
of:
<itemizedlist>
<listitem>
<para>Rate limits</para>
</listitem>
<listitem>
<para>Absolute limits</para>
</listitem>
</itemizedlist>
Rate limits control the frequency at which users can issue specific API
requests. Rate limits are configured by administrators in a config file.
Administrators can also specify quotas, also known as maximum values of
absolute limits, per tenant, whereas users can see only the amount of
their consumed resources.
Administrators can specify rate limits or quotas for the following
resources:
<itemizedlist>
<listitem>
<para>Max amount of space available for all shares</para>
</listitem>
<listitem>
<para>Max number of shares</para>
</listitem>
<listitem>
<para>Max number of shared networks</para>
</listitem>
<listitem>
<para>Max number of share snapshots</para>
</listitem>
<listitem>
<para>Max total amount of all snapshots</para>
</listitem>
<listitem>
<para>Type and number of API calls that can be made in a
specific time interval</para>
</listitem>
</itemizedlist>
Users can see their rate limits and absolute limits by running the
<code>manila rate-limits</code> and <code>manila absolute-limits</code>
commands, respectively.
For more details on limits and quotas, see the
<link xlink:href="http://docs.openstack.org/admin-guide-cloud/shared_file_systems_quotas.html">
"Quotas and limits"</link> subsection of the "Share management" section of the OpenStack
Cloud Administrator Guide.
</para>
<para>
This section lists several of the most important use cases that
demonstrate the main functions and abilities of the Shared File Systems
service:
<itemizedlist>
<listitem>
<para>Create share</para>
</listitem>
<listitem>
<para>Operating with a share</para>
</listitem>
<listitem>
<para>Manage access to shares</para>
</listitem>
<listitem>
<para>Create snapshots</para>
</listitem>
<listitem>
<para>Create a share network</para>
</listitem>
<listitem>
<para>Manage a share network</para>
</listitem>
</itemizedlist>
</para>
  <note>
    <para>The Shared File Systems service cannot warn you beforehand
    whether it is safe to write a large amount of data onto a certain
    share or to remove a consistency group that still has shares assigned
    to it. If such a mistake happens, expect an error message, or shares
    or consistency groups ending up in an incorrect status. You can also
    expect some level of system corruption if a user tries to unmount an
    unmanaged share while a process is using it for data transfer.
    </para>
  </note>
<section xml:id="create_share">
<title>Create Share</title>
<para>
In this section, we examine the process of creating a simple share.
It consists of several steps:
<itemizedlist>
<listitem>
<para>Check if there is an appropriate share type defined in the
Shared File Systems service
</para>
</listitem>
          <listitem>
            <para>If such a share type does not exist, an administrator must
            create it using the <code>manila type-create</code> command
            before other users are able to use it</para>
          </listitem>
          <listitem>
            <para>Using a share network is optional. However, if you need
            one, check whether an appropriate network is defined in the
            Shared File Systems service by using the
            <code>manila share-network-list</code> command. For information
            on creating a share network, see
            <xref linkend="create_a_share_network" /> below in this chapter.
            </para>
          </listitem>
<listitem>
<para>Create a public share using <code>manila create</code></para>
</listitem>
<listitem>
<para>Make sure that the share has been created successfully and is
ready to use (check the share status and see the share export
location)</para>
</listitem>
</itemizedlist>
      The same procedure is described below, step by step and in more
      detail.
</para>
<note>
<para>
Before you start, make sure that Shared File Systems service is
installed on your OpenStack cluster and is ready to use.
</para>
</note>
    <para>By default, no share types are defined in the Shared File Systems
    service, so check whether the one you need has already been created:
<screen><prompt>$</prompt> <userinput>manila type-list</userinput>
<computeroutput>+------+--------+-----------+-----------+----------------------------------+----------------------+
| ID | Name | Visibility| is_default| required_extra_specs | optional_extra_specs |
+------+--------+-----------+-----------+----------------------------------+----------------------+
| c0...| default| public | YES | driver_handles_share_servers:True| snapshot_support:True|
+------+--------+-----------+-----------+----------------------------------+----------------------+</computeroutput></screen>
</para>
    <para>If the share type list is empty or does not contain a type you
    need, create the required share type using this command:
    <screen><prompt>$</prompt> <userinput>manila type-create netapp1 False --is_public True</userinput></screen>
    This command creates a public share type with the following parameters:
    <code>name = netapp1</code>, <code>spec_driver_handles_share_servers = False</code>
    </para>
    <para>You can now create a public share that uses the my_share_net
    network, the default share type, the NFS protocol, and a size of 1 GB:
<screen><prompt>$</prompt> <userinput>manila create nfs 1 --name "Share1" --description "My first share" --share-type default --share-network my_share_net --metadata aim=testing --public</userinput>
<computeroutput>+-----------------------------+--------------------------------------+
| Property | Value |
+-----------------------------+--------------------------------------+
| status | None |
| share_type_name | default |
| description | My first share |
| availability_zone | None |
| share_network_id | None |
| export_locations | [] |
| share_server_id | None |
| host | None |
| snapshot_id | None |
| is_public | True |
| task_state | None |
| snapshot_support | True |
| id | aca648eb-8c03-4394-a5cc-755066b7eb66 |
| size | 1 |
| name | Share1 |
| share_type | c0086582-30a6-4060-b096-a42ec9d66b86 |
| created_at | 2015-09-24T12:19:06.925951 |
| export_location | None |
| share_proto | NFS |
| consistency_group_id | None |
| source_cgsnapshot_member_id | None |
| project_id | 20787a7ba11946adad976463b57d8a2f |
| metadata | {u'aim': u'testing'} |
+-----------------------------+--------------------------------------+</computeroutput></screen>
</para>
<para>
    To confirm that the share was created successfully, find it in the
    share list:
<screen><prompt>$</prompt> <userinput>manila list</userinput>
<computeroutput>+----+-------+-----+------------+-----------+-------------------------------+----------------------+
| ID | Name | Size| Share Proto| Share Type| Export location | Host |
+----+-------+-----+------------+-----------+-------------------------------+----------------------+
| a..| Share1| 1 | NFS | c0086... | 10.254.0.3:/shares/share-2d5..| manila@generic1#GEN..|
+----+-------+-----+------------+-----------+-------------------------------+----------------------+</computeroutput></screen>
</para>
<para>
Check the share status and see the share export location. After
creation, the share status should become <code>available</code>:
<screen><prompt>$</prompt> <userinput>manila show Share1</userinput>
<computeroutput>+-----------------------------+-------------------------------------------+
| Property | Value |
+-----------------------------+-------------------------------------------+
| status | available |
| share_type_name | default |
| description | My first share |
| availability_zone | nova |
| share_network_id | 5c3cbabb-f4da-465f-bc7f-fadbe047b85a |
| export_locations | 10.254.0.3:/shares/share-2d5e2c0a-1f84... |
| share_server_id | 41b7829d-7f6b-4c96-aea5-d106c2959961 |
| host | manila@generic1#GENERIC1 |
| snapshot_id | None |
| is_public | True |
| task_state | None |
| snapshot_support | True |
| id | aca648eb-8c03-4394-a5cc-755066b7eb66 |
| size | 1 |
| name | Share1 |
| share_type | c0086582-30a6-4060-b096-a42ec9d66b86 |
| created_at | 2015-09-24T12:19:06.000000 |
| share_proto | NFS |
| consistency_group_id | None |
| source_cgsnapshot_member_id | None |
| project_id | 20787a7ba11946adad976463b57d8a2f |
| metadata | {u'aim': u'testing'} |
+-----------------------------+-------------------------------------------+</computeroutput></screen>
    The <code>is_public</code> value defines the level of visibility for
    the share: whether other tenants can see the share or not. By default,
    the share is private. You can now mount the created share like a remote
    file system and use it for your purposes.
<tip>
      <para>See the
      <link xlink:href="http://docs.openstack.org/admin-guide-cloud/shared_file_systems_share_management.html">
      “Share Management”</link> subsection of the “Shared File Systems”
      section of the OpenStack Cloud Administrator Guide for details on
      share management operations.
      </para>
</tip>
</para>
</section>
<section xml:id="manage_access_to_shares">
<title>Manage Access To Shares</title>
<para>
      You now have a share and want to control access to it for other
      users. This requires a number of steps and operations. Before
      managing access to the share, pay attention to the following
      important parameters.
      To grant or deny access to a share, specify one of these supported
      share access levels:
<itemizedlist>
<listitem>
<para>
<code>rw</code>: read and write (RW) access. This is the default
value.
</para>
</listitem>
<listitem>
<para>
<code>ro:</code> read-only (RO) access.
</para>
</listitem>
</itemizedlist>
Additionally, you should also specify one of these supported
authentication methods:
<itemizedlist>
<listitem>
<para>
<code>ip</code>: authenticates an instance through its IP address.
            A valid format is XX.XX.XX.XX or XX.XX.XX.XX/XX.
            For example, 0.0.0.0/0.
</para>
</listitem>
<listitem>
<para>
            <code>cert</code>: authenticates an instance through a TLS
            certificate. Specify the TLS identity as the IDENTKEY. A valid
            value is any string up to 64 characters long in the common name
            (CN) of the certificate.
</para>
</listitem>
<listitem>
<para>
<code>user</code>: authenticates by a specified user or group
name. A valid value is an alphanumeric string that can contain
some special characters and is from 4 to 32 characters long.
</para>
</listitem>
</itemizedlist>
<note>
<para>Do not mount a share without an access rule! This can lead to
an exception.</para>
</note>
</para>
<para>
Allow access to the share with IP access type and 10.254.0.4 IP address:
<screen><prompt>$</prompt> <userinput>manila access-allow Share1 ip 10.254.0.4 --access-level rw</userinput>
<computeroutput>+--------------+--------------------------------------+
| Property | Value |
+--------------+--------------------------------------+
| share_id | 7bcd888b-681b-4836-ac9c-c3add4e62537 |
| access_type | ip |
| access_to | 10.254.0.4 |
| access_level | rw |
| state | new |
| id | de715226-da00-4cfc-b1ab-c11f3393745e |
+--------------+--------------------------------------+</computeroutput></screen>
</para>
<para>
      Mount the share:
<screen><prompt>$</prompt> <userinput>sudo mount -v -t nfs 10.254.0.5:/shares/share-5789ddcf-35c9-4b64-a28a-7f6a4a574b6a /mnt/</userinput></screen>
      Then check the access list to verify that access to the share is
      configured according to the specified access rules:
<screen><prompt>$</prompt> <userinput>manila access-list Share1</userinput>
<computeroutput>+--------------------------------------+-------------+------------+--------------+--------+
| id | access type | access to | access level | state |
+--------------------------------------+-------------+------------+--------------+--------+
| 4f391c6b-fb4f-47f5-8b4b-88c5ec9d568a | user | demo | rw | error |
| de715226-da00-4cfc-b1ab-c11f3393745e | ip | 10.254.0.4 | rw | active |
+--------------------------------------+-------------+------------+--------------+--------+</computeroutput></screen>
</para>
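    <para>
      To revoke access later, pass the ID of the corresponding access rule
      (as shown by <code>manila access-list</code>) to
      <code>manila access-deny</code>:
      <screen><prompt>$</prompt> <userinput>manila access-deny Share1 de715226-da00-4cfc-b1ab-c11f3393745e</userinput></screen>
    </para>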
<note>
<para>
        Different share drivers support different share features. These
        examples use the generic driver (with Block Storage as a back end),
        which does not support the <code>user</code> and
        <code>cert</code> authentication methods.
</para>
</note>
<tip>
<para>
        For details of the features supported by different drivers, see the
        <link xlink:href="http://docs.openstack.org/developer/manila/devref/share_back_ends_feature_support_mapping.html">
        “Manila share features support mapping”</link> section of the
        Manila Developer Guide.
</para>
</tip>
</section>
<section xml:id="manage_shares">
<title>Manage Shares</title>
<para>
      There are several other useful operations that you can perform when working with shares.
</para>
<section xml:id="update_share">
<title>Update Share</title>
<para>
      To change the name of a share, update its description, or change its
      level of visibility for other tenants, use this command:
<screen><prompt>$</prompt> <userinput>manila update Share1 --description "My first share. Updated" --is-public False</userinput></screen>
Check the attributes of the updated Share1:
<screen><prompt>$</prompt> <userinput>manila show Share1</userinput>
<computeroutput>+-----------------------------+--------------------------------------------+
| Property | Value |
+-----------------------------+--------------------------------------------+
| status | available |
| share_type_name | default |
| description | My first share. Updated |
| availability_zone | nova |
| share_network_id | 5c3cbabb-f4da-465f-bc7f-fadbe047b85a |
| export_locations | 10.254.0.3:/shares/share-2d5e2c0a-1f84-... |
| share_server_id | 41b7829d-7f6b-4c96-aea5-d106c2959961 |
| host | manila@generic1#GENERIC1 |
| snapshot_id | None |
| is_public | False |
| task_state | None |
| snapshot_support | True |
| id | aca648eb-8c03-4394-a5cc-755066b7eb66 |
| size | 1 |
| name | Share1 |
| share_type | c0086582-30a6-4060-b096-a42ec9d66b86 |
| created_at | 2015-09-24T12:19:06.000000 |
| share_proto | NFS |
| consistency_group_id | None |
| source_cgsnapshot_member_id | None |
| project_id | 20787a7ba11946adad976463b57d8a2f |
| metadata | {u'aim': u'testing'} |
+-----------------------------+--------------------------------------------+</computeroutput></screen>
</para>
</section>
<section xml:id="reset_share_state">
<title>Reset Share State</title>
<para>
      Sometimes a share may hang in an erroneous or a transitional state.
      Unprivileged users do not have the appropriate access rights to
      correct this situation. However, with cloud administrator
      permissions, you can reset the share's state by using the
      <screen><prompt>$</prompt> <userinput>manila reset-state [--state &lt;state&gt;] &lt;share_name_or_ID&gt;</userinput></screen>
      command, where state indicates which state to assign to the share.
      Options include the
      <code>available, error, creating, deleting, error_deleting</code>
      states.
</para>
<para>
After running
<screen><prompt>$</prompt> <userinput>manila reset-state Share2 --state deleting</userinput></screen>
check the share's status:
<screen><prompt>$</prompt> <userinput>manila show Share2</userinput>
<computeroutput>+-----------------------------+-------------------------------------------+
| Property | Value |
+-----------------------------+-------------------------------------------+
| status | deleting |
| share_type_name | default |
| description | share from a snapshot. |
| availability_zone | nova |
| share_network_id | 5c3cbabb-f4da-465f-bc7f-fadbe047b85a |
| export_locations | [] |
| share_server_id | 41b7829d-7f6b-4c96-aea5-d106c2959961 |
| host | manila@generic1#GENERIC1 |
| snapshot_id | 962e8126-35c3-47bb-8c00-f0ee37f42ddd |
| is_public | False |
| task_state | None |
| snapshot_support | True |
| id | b6b0617c-ea51-4450-848e-e7cff69238c7 |
| size | 1 |
| name | Share2 |
| share_type | c0086582-30a6-4060-b096-a42ec9d66b86 |
| created_at | 2015-09-25T06:25:50.000000 |
| export_location | 10.254.0.3:/shares/share-1dc2a471-3d47-...|
| share_proto | NFS |
| consistency_group_id | None |
| source_cgsnapshot_member_id | None |
| project_id | 20787a7ba11946adad976463b57d8a2f |
| metadata | {u'source': u'snapshot'} |
+-----------------------------+-------------------------------------------+</computeroutput></screen>
</para>
</section>
<section xml:id="delete_share">
<title>Delete Share</title>
<para>
      If you no longer need a share, you can delete it using the
      <command>manila delete share_name_or_ID</command> command:
<screen><prompt>$</prompt> <userinput>manila delete Share2</userinput></screen>
</para>
<note>
<para>
        If you specified a consistency group when creating the share, you
        must provide the <code>--consistency-group</code> parameter to
        delete the share:
</para>
</note>
<para>
<screen><prompt>$</prompt> <userinput>manila delete ba52454e-2ea3-47fa-a683-3176a01295e6 --consistency-group ffee08d9-c86c-45e5-861e-175c731daca2</userinput></screen>
</para>
<para>
      Sometimes a share hangs in one of the transitional states
      (such as <code>creating, deleting, managing, unmanaging, extending, and shrinking</code>).
      In that case, delete it with the
      <command>manila force-delete share_name_or_ID</command> command, which
      requires administrative permissions to run:
<screen><prompt>$</prompt> <userinput>manila force-delete b6b0617c-ea51-4450-848e-e7cff69238c7</userinput></screen>
</para>
<tip>
<para>
        For more details and additional information about other cases,
        features, and API commands, see the
        <link xlink:href="http://docs.openstack.org/admin-guide-cloud/shared_file_systems_share_management.html">
        “Share Management”</link> subsection of the “Shared File Systems”
        section of the OpenStack Cloud Administrator Guide.
</para>
</tip>
</section>
</section>
<section xml:id="create_snapshots">
<title>Create Snapshots</title>
<para>
      The Shared File Systems service provides a snapshot mechanism to help
      users restore their own data. To create a snapshot, use the
      <command>manila snapshot-create</command> command:
<screen><prompt>$</prompt> <userinput>manila snapshot-create Share1 --name Snapshot1 --description "Snapshot of Share1"</userinput>
<computeroutput>+-------------+--------------------------------------+
| Property | Value |
+-------------+--------------------------------------+
| status | creating |
| share_id | aca648eb-8c03-4394-a5cc-755066b7eb66 |
| name | Snapshot1 |
| created_at | 2015-09-25T05:27:38.862040 |
| share_proto | NFS |
| id | 962e8126-35c3-47bb-8c00-f0ee37f42ddd |
| size | 1 |
| share_size | 1 |
| description | Snapshot of Share1 |
+-------------+--------------------------------------+</computeroutput></screen>
</para>
<para>
Then, if needed, update the name and description of the created snapshot:
<screen><prompt>$</prompt> <userinput>manila snapshot-rename Snapshot1 Snapshot_1 --description "Snapshot of Share1. Updated."</userinput></screen>
To make sure that the snapshot is available, run:
<screen><prompt>$</prompt> <userinput>manila snapshot-show Snapshot1</userinput>
<computeroutput>+-------------+--------------------------------------+
| Property | Value |
+-------------+--------------------------------------+
| status | available |
| share_id | aca648eb-8c03-4394-a5cc-755066b7eb66 |
| name | Snapshot1 |
| created_at | 2015-09-25T05:27:38.000000 |
| share_proto | NFS |
| id | 962e8126-35c3-47bb-8c00-f0ee37f42ddd |
| size | 1 |
| share_size | 1 |
| description | Snapshot of Share1 |
+-------------+--------------------------------------+</computeroutput></screen>
<tip>
<para>
          For more details and additional information on snapshots, see the
          <link xlink:href="http://docs.openstack.org/admin-guide-cloud/shared_file_systems_snapshots.html">
          “Share Snapshots”</link> subsection of the “Shared File Systems”
          section of the OpenStack Cloud Administrator Guide.
</para>
</tip>
</para>
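    <para>
      A snapshot can also serve as the source for a new share, provided the
      driver supports it. As an illustrative sketch, using the snapshot ID
      created above:
      <screen><prompt>$</prompt> <userinput>manila create nfs 1 --snapshot-id 962e8126-35c3-47bb-8c00-f0ee37f42ddd --name Share2 --description "share from a snapshot"</userinput></screen>
    </para>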
</section>
<section xml:id="create_a_share_network">
<title>Create a Share Network</title>
<para>
      To manage share servers on its own, the Shared File Systems service
      requires interaction with the Networking service. If the selected
      driver runs in a mode that requires this kind of interaction, you
      need to specify the share network when a share is created. For
      information on share creation, see
      <xref linkend="create_share" /> earlier in this chapter.
      First, check the list of existing share networks:
<screen><prompt>$</prompt> <userinput>manila share-network-list</userinput>
<computeroutput>+--------------------------------------+--------------+
| id | name |
+--------------------------------------+--------------+
+--------------------------------------+--------------+</computeroutput></screen>
</para>
<para>
      If the share network list is empty or does not contain the required
      network, create one. For example, create a share network with a
      private network and subnet:
<screen><prompt>$</prompt> <userinput>manila share-network-create --neutron-net-id 5ed5a854-21dc-4ed3-870a-117b7064eb21 --neutron-subnet-id 74dcfb5a-b4d7-4855-86f5-a669729428dc --name my_share_net --description "My first share network"</userinput>
<computeroutput>+-------------------+--------------------------------------+
| Property | Value |
+-------------------+--------------------------------------+
| name | my_share_net |
| segmentation_id | None |
| created_at | 2015-09-24T12:06:32.602174 |
| neutron_subnet_id | 74dcfb5a-b4d7-4855-86f5-a669729428dc |
| updated_at | None |
| network_type | None |
| neutron_net_id | 5ed5a854-21dc-4ed3-870a-117b7064eb21 |
| ip_version | None |
| nova_net_id | None |
| cidr | None |
| project_id | 20787a7ba11946adad976463b57d8a2f |
| id | 5c3cbabb-f4da-465f-bc7f-fadbe047b85a |
| description | My first share network |
+-------------------+--------------------------------------+</computeroutput></screen>
The <code>segmentation_id</code>, <code>cidr</code>, <code>ip_version</code>,
and <code>network_type</code> share network attributes are
automatically set to the values determined by the network provider.
</para>
<para>
      Then confirm that the share network was created by requesting the
      share network list once again:
<screen><prompt>$</prompt> <userinput>manila share-network-list</userinput>
<computeroutput>+--------------------------------------+--------------+
| id | name |
+--------------------------------------+--------------+
| 5c3cbabb-f4da-465f-bc7f-fadbe047b85a | my_share_net |
+--------------------------------------+--------------+</computeroutput></screen>
</para>
<para>
      Finally, to create a share that uses this share network, follow the
      Create Share use case described earlier in this chapter.
<tip>
<para>
          See the <link xlink:href="http://docs.openstack.org/admin-guide-cloud/shared_file_systems_share_networks.html">
          “Share Networks”</link> subsection of the “Shared File Systems”
          section of the OpenStack Cloud Administrator Guide for more details.
</para>
</tip>
</para>
</section>
<section xml:id="manage_a_share_network">
<title>Manage a Share Network</title>
<para>
      A pair of useful commands can help you manipulate share networks.
      To start, check the share network list:
<screen><prompt>$</prompt> <userinput>manila share-network-list</userinput>
<computeroutput>+--------------------------------------+--------------+
| id | name |
+--------------------------------------+--------------+
| 5c3cbabb-f4da-465f-bc7f-fadbe047b85a | my_share_net |
+--------------------------------------+--------------+</computeroutput></screen>
      If you configured the back end with
      <code>driver_handles_share_servers = True</code> (with share servers)
      and have already performed some operations in the Shared File Systems
      service, you can see <code>manila_service_network</code> in the list
      of Networking service networks. The share driver created this network
      for internal use.
<screen><prompt>$</prompt> <userinput>neutron net-list</userinput>
<computeroutput>+--------------+------------------------+------------------------------------+
| id | name | subnets |
+--------------+------------------------+------------------------------------+
| 3b5a629a-e...| manila_service_network | 4f366100-50... 10.254.0.0/28 |
| bee7411d-d...| public | 884a6564-01... 2001:db8::/64 |
| | | e6da81fa-55... 172.24.4.0/24 |
| 5ed5a854-2...| private | 74dcfb5a-bd... 10.0.0.0/24 |
| | | cc297be2-51... fd7d:177d:a48b::/64 |
+--------------+------------------------+------------------------------------+</computeroutput></screen>
</para>
<para>
      You can also see detailed information about the share network,
      including the <code>network_type</code> and
      <code>segmentation_id</code> fields:
<screen><prompt>$</prompt> <userinput>neutron net-show manila_service_network</userinput>
<computeroutput>+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| id | 3b5a629a-e7a1-46a3-afb2-ab666fb884bc |
| mtu | 0 |
| name | manila_service_network |
| port_security_enabled | True |
| provider:network_type | vxlan |
| provider:physical_network | |
| provider:segmentation_id | 1068 |
| router:external | False |
| shared | False |
| status | ACTIVE |
| subnets | 4f366100-5108-4fa2-b5b1-989a121c1403 |
| tenant_id | 24c6491074e942309a908c674606f598 |
+---------------------------+--------------------------------------+</computeroutput></screen>
      You can also add security services to the share network and remove
      them from it.
</para>
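    <para>
      As a hedged example, a security service can be created and attached
      to a share network with commands along these lines. The LDAP server
      details are placeholders, and the option names should be verified
      with <code>manila help security-service-create</code> on your
      release:
      <screen><prompt>$</prompt> <userinput>manila security-service-create ldap --dns-ip 10.254.0.3 --server ldap.example.com --name my_ldap</userinput>
<prompt>$</prompt> <userinput>manila share-network-security-service-add my_share_net my_ldap</userinput></screen>
    </para>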
<tip>
<para>
        For details, see the
        <link xlink:href="http://docs.openstack.org/admin-guide-cloud/shared_file_systems_security_services.html">
        "Security Services"</link> subsection of the “Shared File Systems”
        section of the OpenStack Cloud Administrator Guide.
</para>
</tip>
</section>
</section>
<section xml:id="instances">
<?dbhtml stop-chunking?>
<title>Instances</title>
<para>Instances are the running virtual machines within an OpenStack
cloud. This section deals with how to work with them and their underlying
images, their network properties, and how they are represented in the
database.<indexterm class="singular">
<primary>user training</primary>
<secondary>instances</secondary>
</indexterm></para>
<section xml:id="start_instances">
<title>Starting Instances</title>
<para>To launch an instance, you need to select an image, a flavor, and
    a name. The name need not be unique, but your life will be simpler if
    it is, because many tools will accept the name in place of the UUID as
    long as the name is unique. You can start an instance from the dashboard from
the <guibutton>Launch Instance</guibutton> button on the
<guilabel>Instances</guilabel> page or by selecting the
<guilabel>Launch Instance</guilabel> action next to an image or snapshot
on the <guilabel>Images</guilabel> page.<indexterm
class="singular">
<primary>instances</primary>
<secondary>starting</secondary>
</indexterm></para>
<para>On the command line, do this:</para>
<screen><prompt>$</prompt> <userinput>nova boot --flavor &lt;flavor&gt; --image &lt;image&gt; &lt;name&gt;</userinput></screen>
<para>There are a number of optional items that can be specified. You
should read the rest of this section before trying to start an instance,
but this is the base command that later details are layered upon.</para>
    <para>To delete instances from the dashboard, select the
    <guilabel>Delete instance</guilabel> action next to the instance on
    the <guilabel>Instances</guilabel> page.
    <note>
      <para>In releases prior to Mitaka, select the equivalent <guilabel>Terminate instance</guilabel> action.</para>
    </note>
    From the command line, do this:</para>
<screen><prompt>$</prompt> <userinput>nova delete &lt;instance-uuid&gt;</userinput></screen>
<para>It is important to note that powering off an instance does not
terminate it in the OpenStack sense.</para>
</section>
<section xml:id="instance_boot_failures">
<title>Instance Boot Failures</title>
<para>If an instance fails to start and immediately moves to an error
state, there are a few different ways to track down what has gone wrong.
Some of these can be done with normal user access, while others require
access to your log server or compute nodes.<indexterm class="singular">
<primary>instances</primary>
<secondary>boot failures</secondary>
</indexterm></para>
    <para>The simplest reasons for instances to fail to launch are quota
violations or the scheduler being unable to find a suitable compute node
on which to run the instance. In these cases, the error is apparent when
you run a <code>nova show</code> on the faulted instance:<indexterm
class="singular">
<primary>config drive</primary>
</indexterm></para>
<screen><prompt>$</prompt> <userinput>nova show test-instance</userinput></screen>
<screen><?db-font-size 55%?>
<computeroutput>+------------------------+-----------------------------------------------------\
| Property | Value /
+------------------------+-----------------------------------------------------\
| OS-DCF:diskConfig | MANUAL /
| OS-EXT-STS:power_state | 0 \
| OS-EXT-STS:task_state | None /
| OS-EXT-STS:vm_state | error \
| accessIPv4 | /
| accessIPv6 | \
| config_drive | /
| created | 2013-03-01T19:28:24Z \
| fault | {u'message': u'NoValidHost', u'code': 500, u'created/
| flavor | xxl.super (11) \
| hostId | /
| id | 940f3b2f-bd74-45ad-bee7-eb0a7318aa84 \
| image | quantal-test (65b4f432-7375-42b6-a9b8-7f654a1e676e) /
| key_name | None \
| metadata | {} /
| name | test-instance \
| security_groups | [{u'name': u'default'}] /
| status | ERROR \
| tenant_id | 98333a1a28e746fa8c629c83a818ad57 /
| updated | 2013-03-01T19:28:26Z \
| user_id | a1ef823458d24a68955fec6f3d390019 /
+------------------------+-----------------------------------------------------\</computeroutput>
</screen>
<para>In this case, looking at the <literal>fault</literal> message
shows <literal>NoValidHost</literal>, indicating that the scheduler was
unable to match the instance requirements.</para>
<para>If <code>nova show</code> does not sufficiently explain the
failure, searching for the instance UUID in the
<code>nova-compute.log</code> on the compute node it was scheduled on or
the <code>nova-scheduler.log</code> on your scheduler hosts is a good
place to start looking for lower-level problems.</para>
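    <para>For example, assuming the default log location of
    <filename>/var/log/nova</filename> on the compute node, you might grep
    for the UUID shown in the <code>nova show</code> output above:</para>
    <screen><prompt>#</prompt> <userinput>grep 940f3b2f-bd74-45ad-bee7-eb0a7318aa84 /var/log/nova/nova-compute.log</userinput></screen>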
<para>Using <code>nova show</code> as an admin user will show the
compute node the instance was scheduled on as <code>hostId</code>. If
the instance failed during scheduling, this field is blank.</para>
</section>
<section xml:id="instance_specific_data">
<title>Using Instance-Specific Data</title>
<para>There are two main types of instance-specific data: metadata and
user data.<indexterm class="singular">
<primary>metadata</primary>
<secondary>instance metadata</secondary>
</indexterm><indexterm class="singular">
<primary>instances</primary>
<secondary>instance-specific data</secondary>
</indexterm></para>
<section xml:id="instance_metadata">
<title>Instance metadata</title>
<para>For Compute, instance metadata is a collection of key-value
pairs associated with an instance. Compute reads and writes to these
key-value pairs any time during the instance lifetime, from inside and
outside the instance, when the end user uses the Compute API to do so.
However, you cannot query the instance-associated key-value pairs with
the metadata service that is compatible with the Amazon EC2 metadata
service.</para>
<para>For an example of instance metadata, users can generate and
register SSH keys using the <literal>nova</literal> command:</para>
<screen><prompt>$</prompt> <userinput>nova keypair-add mykey &gt; mykey.pem</userinput></screen>
<para>This creates a key named <userinput>mykey</userinput>, which you
can associate with instances. The file <filename>mykey.pem</filename>
is the private key, which should be saved to a secure location because
it allows root access to instances the <userinput>mykey</userinput>
key is associated with.</para>
<para>Use this command to register an existing key with
OpenStack:</para>
<screen><prompt>$</prompt> <userinput>nova keypair-add --pub-key mykey.pub mykey</userinput></screen>
<note>
<para>You must have the matching private key to access instances
associated with this key.</para>
</note>
<para>To associate a key with an instance on boot, add
<code>--key_name mykey</code> to your command line. For
example:</para>
<screen><prompt>$</prompt> <userinput>nova boot --image ubuntu-cloudimage --flavor 2 --key_name mykey myimage</userinput></screen>
<para>When booting a server, you can also add arbitrary metadata so
that you can more easily identify it among other running instances.
Use the <code>--meta</code> option with a key-value pair, where you
can make up the string for both the key and the value. For example,
you could add a description and also the creator of the server:</para>
<screen><prompt>$</prompt> <userinput>nova boot --image=test-image --flavor=1 \
--meta description='Small test image' smallimage</userinput></screen>
<para>When viewing the server information, you can see the metadata
included on the <phrase role="keep-together">metadata</phrase>
line:</para>
<?hard-pagebreak ?>
<screen><prompt>$</prompt> <userinput>nova show smallimage</userinput>
<computeroutput>+------------------------+-----------------------------------------+
| Property | Value |
+------------------------+-----------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-STS:power_state | 1 |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| accessIPv4 | |
| accessIPv6 | |
| config_drive | |
| created | 2012-05-16T20:48:23Z |
| flavor | m1.small |
| hostId | de0...487 |
| id | 8ec...f915 |
| image | natty-image |
| key_name | |
| metadata | {u'description': u'Small test image'} |
| name | smallimage |
| private network | 172.16.101.11 |
| progress | 0 |
| public network | 10.4.113.11 |
| status | ACTIVE |
| tenant_id | e83...482 |
| updated | 2012-05-16T20:48:35Z |
| user_id | de3...0a9 |
+------------------------+-----------------------------------------+</computeroutput></screen>
</section>
<section xml:id="instance_user_data">
<title>Instance user data</title>
<para>The <code>user-data</code> key is a special key in the metadata
service that holds a file that cloud-aware applications within the
      guest instance can access. For example, <link
      xlink:href="https://help.ubuntu.com/community/CloudInit">cloud-init</link>
      is an open source package that originated in Ubuntu but is available
      in most distributions; it handles early initialization of a cloud
      instance and makes use of this user data.<indexterm class="singular">
<primary>user data</primary>
</indexterm></para>
<para>This user data can be put in a file on your local system and
then passed in at instance creation with the flag <code>--user-data
&lt;user-data-file&gt;</code>. For example:</para>
<screen><prompt>$</prompt> <userinput>nova boot --image ubuntu-cloudimage --flavor 1 --user-data mydata.file mydatainstance</userinput></screen>
<para>To understand the difference between user data and metadata,
realize that user data is created before an instance is started. User
data is accessible from within the instance when it is running. User
data can be used to store configuration, a script, or anything the
tenant wants.</para>
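      <para>As a simple illustration, the <filename>mydata.file</filename>
      passed above could be a shell script that cloud-init runs once at
      first boot. The file name and contents here are only an
      example:</para>
      <programlisting>#!/bin/bash
# Runs once at first boot via cloud-init
echo "user data ran at $(date)" &gt; /root/user-data-was-here.txt</programlisting>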
</section>
<section xml:id="file_injection">
<title>File injection</title>
<para>Arbitrary local files can also be placed into the instance file
system at creation time by using the <code>--file
&lt;dst-path=src-path&gt;</code> option. You may store up to five
files.<indexterm class="singular">
<primary>file injection</primary>
</indexterm></para>
      <para>For example, let's say you have a special
      <filename>authorized_keys</filename> file named
      <filename>special_authorized_keysfile</filename> that for some reason
      you want to put on the instance instead of using the regular SSH key
      injection. In this case, you can use the following command:</para>
<screen><prompt>$</prompt> <userinput>nova boot --image ubuntu-cloudimage --flavor 1 \
--file /root/.ssh/authorized_keys=special_authorized_keysfile authkeyinstance</userinput></screen>
</section>
</section>
</section>
<section xml:id="associate_security_groups">
<title>Associating Security Groups</title>
<para>Security groups, as discussed earlier, are typically required to
allow network traffic to an instance, unless the default security group
for a project has been modified to be more permissive.<indexterm
class="singular">
<primary>security groups</primary>
</indexterm><indexterm class="singular">
<primary>user training</primary>
<secondary>security groups</secondary>
</indexterm></para>
<para>Adding security groups is typically done on instance boot. When
launching from the dashboard, you do this on the <guilabel>Access &amp;
Security</guilabel> tab of the <guilabel>Launch Instance</guilabel>
dialog. When launching from the command line, append
<code>--security-groups</code> with a comma-separated list of security
groups.</para>
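    <para>For example, assuming a security group named
    <literal>webservers</literal> exists in addition to the default
    group:</para>
    <screen><prompt>$</prompt> <userinput>nova boot --flavor &lt;flavor&gt; --image &lt;image&gt; \
 --security-groups default,webservers &lt;name&gt;</userinput></screen>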
<para>It is also possible to add and remove security groups when an
instance is running. Currently this is only available through the
command-line tools. Here is an example:</para>
<screen><prompt>$</prompt> <userinput>nova add-secgroup &lt;server&gt; &lt;securitygroup&gt;</userinput></screen>
<screen><prompt>$</prompt> <userinput>nova remove-secgroup &lt;server&gt; &lt;securitygroup&gt;</userinput></screen>
</section>
<section xml:id="floating_ips">
<title>Floating IPs</title>
<para>Where floating IPs are configured in a deployment, each project will
have a limited number of floating IPs controlled by a quota. However,
these need to be allocated to the project from the central pool prior to
their use—usually by the administrator of the project. To allocate a
floating IP to a project, use the <guibutton>Allocate IP To
Project</guibutton> button on the <guilabel>Floating IPs</guilabel> tab
of the <guilabel>Access &amp; Security</guilabel> page of the dashboard.
The command line can also be used:<indexterm class="singular">
<primary>address pool</primary>
</indexterm><indexterm class="singular">
<primary>IP addresses</primary>
<secondary>floating</secondary>
</indexterm><indexterm class="singular">
<primary>user training</primary>
<secondary>floating IPs</secondary>
</indexterm></para>
<screen><prompt>$</prompt> <userinput>nova floating-ip-create</userinput></screen>
<para>Once allocated, a floating IP can be assigned to running instances
from the dashboard either by selecting <guibutton>Associate Floating
IP</guibutton> from the actions drop-down next to the IP on the
<guilabel>Floating IPs</guilabel> tab of the
<guilabel>Access &amp; Security</guilabel> page or by making this
selection next to the instance you want to associate it with on the
<guilabel>Instances</guilabel> page. The inverse action,
<guibutton>Dissociate Floating IP</guibutton>, is available from the
<guilabel>Floating IPs</guilabel> tab of the
<guilabel>Access &amp; Security</guilabel> page and from the
<guilabel>Instances</guilabel> page.</para>
<para>To associate or disassociate a floating IP with a server from the
command line, use the following commands:</para>
<screen><prompt>$</prompt> <userinput>nova add-floating-ip &lt;server&gt; &lt;address&gt;</userinput></screen>
<screen><prompt>$</prompt> <userinput>nova remove-floating-ip &lt;server&gt; &lt;address&gt;</userinput></screen>
</section>
<section xml:id="attach_block_storage">
<title>Attaching Block Storage</title>
<para>You can attach block storage to instances from the dashboard on the
<guilabel>Volumes</guilabel> page. Click the <guibutton>Manage
Attachments</guibutton> action next to the volume you want to
attach.<indexterm class="singular">
<primary>storage</primary>
<secondary>block storage</secondary>
</indexterm><indexterm class="singular">
<primary>block storage</primary>
</indexterm><indexterm class="singular">
<primary>user training</primary>
<secondary>block storage</secondary>
</indexterm></para>
    <para>To perform this action from the command line, run the following
command:</para>
<screen><prompt>$</prompt> <userinput>nova volume-attach &lt;server&gt; &lt;volume&gt; &lt;device&gt;</userinput></screen>
<para>You can also specify block device<indexterm class="singular">
<primary>block device</primary>
</indexterm> mapping at instance boot time through the nova command-line
client with this option set:</para>
<screen><userinput>--block-device-mapping &lt;dev-name=mapping&gt; </userinput></screen>
<para><phrase role="keep-together">The block device mapping format is
<code>&lt;dev-name&gt;=&lt;id&gt;:&lt;type&gt;:&lt;size(GB)&gt;:</code></phrase><phrase
role="keep-together"><code>&lt;delete-on-terminate&gt;</code>,
where:</phrase></para>
<variablelist>
<varlistentry>
<term>dev-name</term>
<listitem>
<para>A device name where the volume is attached in the system at
<code>/dev/<replaceable>dev_name</replaceable></code></para>
</listitem>
</varlistentry>
<varlistentry>
<term>id</term>
<listitem>
<para>The ID of the volume to boot from, as shown in the output of
<literal>nova volume-list</literal></para>
</listitem>
</varlistentry>
<varlistentry>
<term>type</term>
<listitem>
<para>Either <literal>snap</literal>, which means that the volume
was created from a snapshot, or anything other than
<literal>snap</literal> (a blank string is valid). In the preceding
example, the volume was not created from a snapshot, so we leave
this field blank in our following example.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>size (GB)</term>
<listitem>
<para>The size of the volume in gigabytes. It is safe to leave this
blank and have the Compute Service infer the size.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>delete-on-terminate</term>
<listitem>
<para>A boolean to indicate whether the volume should be deleted
when the instance is terminated. True can be specified as
<literal>True</literal> or <literal>1</literal>. False can be
specified as <literal>False</literal> or
<literal>0</literal>.</para>
</listitem>
</varlistentry>
</variablelist>
<para>The following command will boot a new instance and attach a volume
at the same time. The volume of ID 13 will be attached as
<code>/dev/vdc</code>. It is not a snapshot, does not specify a size, and
will not be deleted when the instance is terminated:</para>
<screen><prompt>$</prompt> <userinput>nova boot --image 4042220e-4f5e-4398-9054-39fbd75a5dd7 \
--flavor 2 --key-name mykey --block-device-mapping vdc=13:::0 \
boot-with-vol-test</userinput></screen>
<para>If you have previously prepared block storage with a bootable file
system image, it is even possible to boot from persistent block storage.
The following command boots an image from the specified volume. It is
similar to the previous command, but the image is omitted and the volume
is now attached as <code>/dev/vda</code>:</para>
<screen><prompt>$</prompt> <userinput>nova boot --flavor 2 --key-name mykey \
--block-device-mapping vda=13:::0 boot-from-vol-test</userinput></screen>
<para>Read more detailed instructions for launching an instance from a
bootable volume in the <link
xlink:href="http://docs.openstack.org/user-guide/cli_nova_launch_instance_from_volume.html">OpenStack End User
Guide</link>.</para>
<para>To boot normally from an image and attach block storage, map to a
device other than vda. You can find instructions for launching an instance
and attaching a volume to the instance and for copying the image to the
attached volume in the <link
xlink:href="http://docs.openstack.org/user-guide/dashboard_launch_instances.html">OpenStack End User
Guide</link>.</para>
</section>
<section xml:id="snapshots">
<?dbhtml stop-chunking?>
<title>Taking Snapshots</title>
<para>The OpenStack snapshot mechanism allows you to create new images
from running instances. This is very convenient for upgrading base images
or for taking a published image and customizing it for local use. To
snapshot a running instance to an image using the CLI, do this:<indexterm
class="singular">
<primary>base image</primary>
</indexterm><indexterm class="singular">
<primary>snapshot</primary>
</indexterm><indexterm class="singular">
<primary>user training</primary>
<secondary>snapshots</secondary>
</indexterm></para>
<screen><prompt>$</prompt> <userinput>nova image-create &lt;instance name or uuid&gt; &lt;name of new image&gt;</userinput></screen>
<para>The dashboard interface for snapshots can be confusing because the
snapshots and images are displayed in the <guilabel>Images</guilabel>
page. However, an instance snapshot <emphasis>is</emphasis> an image. The
only difference between an image that you upload directly to the Image
Service and an image that you create by snapshot is that an image created
by snapshot has additional properties in the glance database. These
properties are found in the <literal>image_properties</literal> table and
include:</para>
<informaltable rules="all">
<thead>
<tr>
<th>Name</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td><para><literal>image_type</literal></para></td>
<td><para>snapshot</para></td>
</tr>
<tr>
<td><para><literal>instance_uuid</literal></para></td>
<td><para>&lt;uuid of instance that was snapshotted&gt;</para></td>
</tr>
<tr>
<td><para><literal>base_image_ref</literal></para></td>
<td><para>&lt;uuid of original image of instance that was
snapshotted&gt;</para></td>
</tr>
<tr>
<td><para><literal>image_location</literal></para></td>
<td><para>snapshot</para></td>
</tr>
</tbody>
</informaltable>
<section xml:id="live-snapshots">
<title>Live Snapshots</title>
      <para>Live snapshotting is a feature that allows users to snapshot
      running virtual machines without pausing them. These snapshots are
      simply disk-only snapshots. Snapshotting an instance can now be
performed with no downtime (assuming QEMU 1.3+ and libvirt 1.0+ are
used).<indexterm class="singular">
<primary>live snapshots</primary>
</indexterm></para>
<note>
<title>Disable live snapshotting</title>
<para>If you use libvirt version <literal>1.2.2</literal>,
you may experience intermittent problems with live snapshot creation.
</para>
        <para>To effectively disable libvirt live snapshotting until the problem
        is resolved, add the following setting to <filename>nova.conf</filename>:</para>
<programlisting language="ini">[workarounds]
disable_libvirt_livesnapshot = True</programlisting>
</note>
<sidebar>
<title>Ensuring Snapshots of Linux Guests Are Consistent</title>
<para>The following section is from Sébastien Han's <link
xlink:href="http://www.sebastien-han.fr/blog/2012/12/10/openstack-perform-consistent-snapshots/"
        >“OpenStack: Perform Consistent
Snapshots” blog entry</link>.</para>
<para>A snapshot captures the state of the file system, but not the
state of the memory. Therefore, to ensure your snapshot contains the
data that you want, before your snapshot you need to ensure
that:</para>
<itemizedlist role="compact">
<listitem>
<para>Running programs have written their contents to disk</para>
</listitem>
<listitem>
<para>The file system does not have any "dirty" buffers: where
programs have issued the command to write to disk, but the
operating system has not yet done the write</para>
</listitem>
</itemizedlist>
<para>To ensure that important services have written their contents to
disk (such as databases), we recommend that you read the documentation
for those applications to determine what commands to issue to have
them sync their contents to disk. If you are unsure how to do this,
the safest approach is to simply stop these running services
normally.</para>
<para>To deal with the "dirty" buffer issue, we recommend using the
sync command before snapshotting:</para>
<screen><prompt>#</prompt> <userinput>sync</userinput></screen>
<para>Running <code>sync</code> writes dirty buffers (buffered blocks
that have been modified but not written yet to the disk block) to
disk.</para>
<para>Just running <code>sync</code> is not enough to ensure that the
file system is consistent. We recommend that you use the
<code>fsfreeze</code> tool, which halts new access to the file system,
and create a stable image on disk that is suitable for snapshotting.
The <code>fsfreeze</code> tool supports several file systems,
including ext3, ext4, and XFS. If your virtual machine instance is
running on Ubuntu, install the util-linux package to get
<literal>fsfreeze</literal>:</para>
<note>
<para>In the very common case where the underlying snapshot is
done via LVM, the filesystem freeze is automatically handled by LVM.
</para>
</note>
<screen><prompt>#</prompt> <userinput>apt-get install util-linux</userinput></screen>
<para>If your operating system doesn't have a version of
<literal>fsfreeze</literal> available, you can use
<literal>xfs_freeze</literal> instead, which is available on Ubuntu in
the xfsprogs package. Despite the "xfs" in the name, xfs_freeze also
works on ext3 and ext4 if you are using a Linux kernel version 2.6.29
or greater, since it works at the virtual file system (VFS) level
starting at 2.6.29. The xfs_freeze version supports the same
command-line arguments as <literal>fsfreeze</literal>.</para>
<para>Consider the example where you want to take a snapshot of a
persistent block storage volume, detected by the guest operating
system as <literal>/dev/vdb</literal> and mounted on
<literal>/mnt</literal>. The fsfreeze command accepts two
arguments:</para>
<variablelist>
<varlistentry>
<term>-f</term>
<listitem>
<para>Freeze the system</para>
</listitem>
</varlistentry>
<varlistentry>
<term>-u</term>
<listitem>
<para>Thaw (unfreeze) the system</para>
</listitem>
</varlistentry>
</variablelist>
<para>To freeze the volume in preparation for snapshotting, you would
do the following, as root, inside the instance:</para>
<screen><prompt>#</prompt> <userinput>fsfreeze -f /mnt</userinput></screen>
<para>You <emphasis>must mount the file system</emphasis> before you
run the <literal>fsfreeze</literal> command.</para>
<para>When the <literal>fsfreeze -f</literal> command is issued, all
ongoing transactions in the file system are allowed to complete, new
write system calls are halted, and other calls that modify the file
system are halted. Most importantly, all dirty data, metadata, and log
information are written to disk.</para>
<para>Once the volume has been frozen, do not attempt to read from or
write to the volume, as these operations hang. The operating system
stops every I/O operation and any I/O attempts are delayed until the
file system has been unfrozen.</para>
<para>Once you have issued the <literal>fsfreeze</literal> command, it
is safe to perform the snapshot. For example, if your instance was
named <literal>mon-instance</literal> and you wanted to snapshot it to
an image named <literal>mon-snapshot</literal>, you could now run the
following:</para>
<screen><prompt>$</prompt> <userinput>nova image-create mon-instance mon-snapshot</userinput></screen>
<para>When the snapshot is done, you can thaw the file system with the
following command, as root, inside of the instance:</para>
<screen><prompt>#</prompt> <userinput>fsfreeze -u /mnt</userinput></screen>
<para>If you want to back up the root file system, you can't simply
run the preceding command because it will freeze the prompt. Instead,
run the following one-liner, as root, inside the instance:</para>
<screen><prompt>#</prompt> <userinput>fsfreeze -f / &amp;&amp; read x; fsfreeze -u /</userinput>
</screen>
      <para>After running this command, it is common practice to call
      <command>nova image-create</command> from your workstation, and once
      that is done, press Enter in your instance shell to unfreeze it.
      Obviously you could automate this, but at least it lets you properly
      synchronize.</para>
</sidebar>
<sidebar>
<title>Ensuring Snapshots of Windows Guests Are Consistent</title>
<para>Obtaining consistent snapshots of Windows VMs is conceptually
similar to obtaining consistent snapshots of Linux VMs, although it
requires additional utilities to coordinate with a Windows-only
subsystem designed to facilitate consistent backups.</para>
<para>Windows XP and later releases include a Volume Shadow Copy
Service (VSS) which provides a framework so that compliant
applications can be consistently backed up on a live filesystem. To
use this framework, a VSS requestor is run that signals to the VSS
service that a consistent backup is needed. The VSS service notifies
compliant applications (called VSS writers) to quiesce their data
activity. The VSS service then tells the copy provider to create
a snapshot. Once the snapshot has been made, the VSS service
unfreezes VSS writers and normal I/O activity resumes.</para>
<para>QEMU provides a guest agent that can be run in guests running
on KVM hypervisors. This guest agent, on Windows VMs, coordinates with
the Windows VSS service to facilitate a workflow which ensures
consistent snapshots. This feature requires at least QEMU 1.7. The
relevant guest agent commands are:</para>
<variablelist>
<varlistentry>
<term>guest-file-flush</term>
<listitem>
<para>Write out "dirty" buffers to disk, similar to the
Linux <literal>sync</literal> operation.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>guest-fsfreeze</term>
<listitem>
<para>Suspend I/O to the disks, similar to the Linux
<literal>fsfreeze -f</literal> operation.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>guest-fsfreeze-thaw</term>
<listitem>
<para>Resume I/O to the disks, similar to the Linux
<literal>fsfreeze -u</literal> operation.</para>
</listitem>
</varlistentry>
</variablelist>
<para>To obtain snapshots of a Windows VM these commands can be
scripted in sequence: flush the filesystems, freeze the filesystems,
snapshot the filesystems, then unfreeze the filesystems. As with
scripting similar workflows against Linux VMs, care must be used
when writing such a script to ensure error handling is thorough and
filesystems will not be left in a frozen state.</para>
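      <para>As a rough sketch only: on a KVM compute node, the guest agent
      can be driven through libvirt's
      <command>virsh qemu-agent-command</command>. The wire-level command
      names and the libvirt domain name shown below are assumptions to
      verify against your QEMU and libvirt versions:</para>
      <screen><prompt>#</prompt> <userinput>virsh qemu-agent-command instance-0000001e '{"execute": "guest-fsfreeze-freeze"}'</userinput>
<prompt>$</prompt> <userinput>nova image-create win-instance win-snapshot</userinput>
<prompt>#</prompt> <userinput>virsh qemu-agent-command instance-0000001e '{"execute": "guest-fsfreeze-thaw"}'</userinput></screen>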
</sidebar>
</section>
</section>
<section xml:id="database_instances">
<?dbhtml stop-chunking?>
<title>Instances in the Database</title>
<para>While instance information is stored in a number of database tables,
the table you most likely need to look at in relation to user instances is
the instances table.<indexterm class="singular">
<primary>instances</primary>
<secondary>database information</secondary>
</indexterm><indexterm class="singular">
<primary>databases</primary>
<secondary>instance information in</secondary>
</indexterm><indexterm class="singular">
<primary>user training</primary>
<secondary>instances</secondary>
</indexterm></para>
<para>The instances table carries most of the information related to both
running and deleted instances. It has a bewildering array of fields; for
an exhaustive list, look at the database. These are the most useful fields
for operators looking to form queries:</para>
<itemizedlist>
<listitem>
<para>The <literal>deleted</literal> field is set to
<literal>1</literal> if the instance has been deleted and
<literal>NULL</literal> if it has not been deleted. This field is
important for excluding deleted instances from your queries.</para>
</listitem>
<listitem>
<para>The <literal>uuid</literal> field is the UUID of the instance
and is used throughout other tables in the database as a foreign key.
This ID is also reported in logs, the dashboard, and command-line
tools to uniquely identify an instance.</para>
</listitem>
<listitem>
<para>A collection of foreign keys are available to find relations to
the instance. The most useful of these—<literal>user_id</literal> and
<literal>project_id</literal>—are the UUIDs of the user who launched
the instance and the project it was launched in.</para>
</listitem>
<listitem>
<para>The <literal>host</literal> field tells which compute node is
hosting the instance.</para>
</listitem>
<listitem>
<para>The <literal>hostname</literal> field holds the name of the
instance when it is launched. The display-name is initially the same
as hostname but can be reset using the nova rename command.</para>
</listitem>
</itemizedlist>
<para>A number of time-related fields are useful for tracking when state
changes happened on an instance:</para>
<itemizedlist role="compact">
<listitem>
<para><literal>created_at</literal></para>
</listitem>
<listitem>
<para><literal>updated_at</literal></para>
</listitem>
<listitem>
<para><literal>deleted_at</literal></para>
</listitem>
<listitem>
<para><literal>scheduled_at</literal></para>
</listitem>
<listitem>
<para><literal>launched_at</literal></para>
</listitem>
<listitem>
<para><literal>terminated_at</literal></para>
</listitem>
</itemizedlist>
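  <para>For example, a minimal query (assuming direct read access to the
  nova database and following the <literal>deleted</literal> convention
  described above) to list the non-deleted instances of one project might
  look like this:</para>
  <screen><prompt>mysql&gt;</prompt> <userinput>SELECT uuid, host, hostname, launched_at
 FROM instances
 WHERE project_id = '&lt;project-uuid&gt;' AND deleted IS NULL;</userinput></screen>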
</section>
<section xml:id="user-facing-outro">
<title>Good Luck!</title>
<para>This section was intended as a brief introduction to some of the
most useful of many OpenStack commands. For an exhaustive list, please
refer to the <link xlink:href="http://docs.openstack.org/user-guide-admin/">Admin User
Guide</link>, and for additional hints and tips, see the <link
xlink:href="http://docs.openstack.org/admin-guide-cloud/">Cloud Admin Guide</link>. We hope
your users remain happy and recognize your hard work! (For more hard work,
turn the page to the next chapter, where we discuss the system-facing
operations: maintenance, failures and debugging.)</para>
</section>
</chapter>