General updates to Compute for style and convention

Editing the nested sections for the compute chapter. Mostly grammar, wording,
style, convention, etc. This patch includes ipv6 and migrations. Watch this
space for more.

Change-Id: I3fe8c2d253f0e7e27df43280e03844580bcfe6c4
Partial-Bug: #1251195
Lana Brindley 2015-01-22 14:21:57 +10:00
parent 45fa1c95f0
commit 6429aed4f7
2 changed files with 259 additions and 183 deletions

View File

@ -2,36 +2,45 @@
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="section_configuring-compute-to-use-ipv6-addresses">
<title>Configure Compute to use IPv6 addresses</title>
<para>If you are using OpenStack Compute with
<systemitem>nova-network</systemitem>, you can put Compute into dual-stack
mode, so that it uses both IPv4 and IPv6 addresses for communication. In
dual-stack mode, instances can acquire their IPv6 global unicast address
by using a stateless address auto-configuration mechanism [RFC 4862/2462].
IPv4/IPv6 dual-stack mode works with both <literal>VlanManager</literal>
and <literal>FlatDHCPManager</literal> networking modes.
</para>
<para>In <literal>VlanManager</literal> networking mode, each project uses a
different 64-bit global routing prefix. In
<literal>FlatDHCPManager</literal> mode, all instances use one 64-bit
global routing prefix.
</para>
<para>This configuration was tested with virtual machine images that have an
IPv6 stateless address auto-configuration capability. This capability is
required for any VM to run with an IPv6 address. You must use an EUI-64
address for stateless address auto-configuration. Each node that executes
a <literal>nova-*</literal> service must have
<literal>python-netaddr</literal> and <literal>radvd</literal> installed.
</para>
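    <para>For example, with modified EUI-64 addressing, a guest whose MAC
        address is <literal>02:16:3e:33:44:55</literal> on a network with the
        <literal>fd00:1::/64</literal> prefix autoconfigures the address
        <literal>fd00:1::16:3eff:fe33:4455</literal>: the bytes
        <literal>ff:fe</literal> are inserted into the middle of the MAC
        address, and the universal/local bit is inverted. These values are
        illustrative only.
    </para>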
<procedure>
<title>Switch into IPv4/IPv6 dual-stack mode</title>
<step>
<para>For every node running a <literal>nova-*</literal> service,
install <systemitem>python-netaddr</systemitem>:
</para>
<screen><prompt>#</prompt> <userinput>apt-get install python-netaddr</userinput></screen>
</step>
<step>
<para>On all <literal>nova-network</literal> nodes, install <literal>radvd</literal> and
configure IPv6 networking:</para>
<para>For every node running <literal>nova-network</literal>, install
<literal>radvd</literal> and configure IPv6 networking:
</para>
<screen><prompt>#</prompt> <userinput>apt-get install radvd</userinput>
<prompt>#</prompt> <userinput>echo 1 &gt; /proc/sys/net/ipv6/conf/all/forwarding</userinput>
<prompt>#</prompt> <userinput>echo 0 &gt; /proc/sys/net/ipv6/conf/all/accept_ra</userinput></screen>
</step>
<step>
<para>On all nodes, edit the <filename>nova.conf</filename> file and
specify <literal>use_ipv6 = True</literal>.</para>
</step>
<step>
<para>Restart all <literal>nova-*</literal> services.</para>
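            <para>For example, on a node that uses the
                <command>service</command> utility (the service names shown
                here are typical defaults and may differ on your
                installation):
            </para>
            <screen><prompt>#</prompt> <userinput>service nova-network restart</userinput>
<prompt>#</prompt> <userinput>service nova-compute restart</userinput></screen>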
@ -40,23 +49,23 @@
<note>
<para>You can add a fixed range for IPv6 addresses to the <command>nova network-create</command>
command. Specify <option>public</option> or <option>private</option> after the
<option>network-create</option> parameter.
</para>
<screen><prompt>$</prompt> <userinput>nova network-create public --fixed-range-v4 <replaceable>FIXED_RANGE_V4</replaceable> --vlan <replaceable>VLAN_ID</replaceable> --vpn <replaceable>VPN_START</replaceable> --fixed-range-v6 <replaceable>FIXED_RANGE_V6</replaceable></userinput></screen>
<para>You can set the IPv6 global routing prefix by using the
<parameter>--fixed_range_v6</parameter> parameter. The default value for
the parameter is <literal>fd00::/48</literal>.
</para>
<para>When you use <literal>FlatDHCPManager</literal>, the command
uses the original <parameter>--fixed_range_v6</parameter> value. For
example:
</para>
<screen><prompt>$</prompt> <userinput>nova network-create public --fixed-range-v4 10.0.2.0/24 --fixed-range-v6 fd00:1::/48</userinput></screen>
<para>When you use <literal>VlanManager</literal>, the command increments
the subnet ID to create subnet prefixes. Guest VMs use this prefix to
generate their IPv6 global unicast address. For example:
</para>
<screen><prompt>$</prompt> <userinput>nova network-create public --fixed-range-v4 10.0.1.0/24 --vlan 100 --vpn 1000 --fixed-range-v6 fd00:1::/48</userinput></screen>
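        <para>For example, with the <literal>fd00:1::/48</literal> prefix
            shown above, one project might receive the
            <literal>fd00:1:1::/64</literal> prefix and the next
            <literal>fd00:1:2::/64</literal>. These values are illustrative
            only.
        </para>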
</note>
<xi:include href="../../common/tables/nova-ipv6.xml"/>
</section>

View File

@ -4,45 +4,56 @@
<?dbhtml stop-chunking?>
<title>Configure migrations</title>
<note>
<para>Only cloud administrators can perform live migrations. If your cloud
is configured to use cells, you can perform live migration within but
not between cells.
</para>
</note>
<para>Migration enables an administrator to move a virtual-machine instance
from one compute host to another. This feature is useful when a compute
host requires maintenance. Migration can also be useful to redistribute
the load when many VM instances are running on a specific physical machine.
</para>
<para>The migration types are:</para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Migration</emphasis> (or non-live migration). The instance is shut
down (and the instance knows that it was rebooted) for a period of time to be moved to
another hypervisor.</para>
<para><emphasis role="bold">Non-live migration</emphasis> (sometimes
referred to simply as 'migration'). The instance is shut down for a
period of time to be moved to another hypervisor. In this case, the
instance recognizes that it was rebooted.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Live migration</emphasis> (or true live migration). Almost no
instance downtime. Useful when the instances must be kept running during the migration. The
types of <firstterm>live migration</firstterm> are:</para>
<para><emphasis role="bold">Live migration</emphasis> (or 'true live
migration'). Almost no instance downtime. Useful when the instances
must be kept running during the migration. The different types of live
migration are:
</para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Shared storage-based live migration</emphasis>. Both
hypervisors have access to shared storage.</para>
<para><emphasis role="bold">Shared storage-based live migration</emphasis>.
Both hypervisors have access to shared storage.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Block live migration</emphasis>. No shared storage is
required. Incompatible with read-only devices such as CD-ROMs and <link
<para><emphasis role="bold">Block live migration</emphasis>. No
shared storage is required. Incompatible with read-only devices
such as CD-ROMs and <link
xlink:href="http://docs.openstack.org/user-guide/content/config-drive.html"
>Configuration Drive (config_drive)</link>.
</para>
</listitem>
<listitem>
<para><emphasis role="bold">Volume-backed live migration</emphasis>. When instances are
backed by volumes rather than ephemeral disk, no shared storage is required, and
migration is supported (currently only in libvirt-based hypervisors).</para>
<para><emphasis role="bold">Volume-backed live migration</emphasis>.
Instances are backed by volumes rather than ephemeral disk, no
shared storage is required, and migration is supported (currently
only available for libvirt-based hypervisors).
</para>
</listitem>
</itemizedlist>
</listitem>
</itemizedlist>
<para>The following sections describe how to configure your hosts and
compute nodes for migrations by using the KVM and XenServer hypervisors.
</para>
<section xml:id="configuring-migrations-kvm-libvirt">
<title>KVM-Libvirt</title>
<itemizedlist>
@ -52,34 +63,41 @@
</listitem>
<listitem>
<para><emphasis role="bold">Shared storage:</emphasis>
          <filename><replaceable>NOVA-INST-DIR</replaceable>/instances/</filename>
          (for example, <filename>/var/lib/nova/instances</filename>) must be
          mounted from shared storage. This guide uses NFS, but other options,
          including the <link
          xlink:href="http://gluster.org/community/documentation//index.php/OSConnect">
          OpenStack Gluster Connector</link>, are available.
        </para>
</listitem>
<listitem>
<para><emphasis role="bold">Instances:</emphasis> Instance can be migrated with iSCSI based
volumes</para>
<para><emphasis role="bold">Instances:</emphasis> Instance can be
migrated with iSCSI-based volumes.
</para>
</listitem>
</itemizedlist>
<note>
<itemizedlist>
<listitem>
<para>Because the Compute service does not use the libvirt live
migration functionality by default, guests are suspended before
migration and might experience several minutes of downtime. For
details, see <xref linkend="true-live-migration-kvm-libvirt"/>.
</para>
</listitem>
<listitem>
<para>This guide assumes the default value for
<option>instances_path</option> in your <filename>nova.conf</filename> file
(<filename><replaceable>NOVA-INST-DIR</replaceable>/instances</filename>).
If you have changed the <literal>state_path</literal> or
<literal>instances_path</literal> variables, modify the commands
accordingly.</para>
</listitem>
<listitem>
<para>You must specify <literal>vncserver_listen=0.0.0.0</literal>
or live migration will not work correctly.
</para>
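          <para>For example (an illustrative <filename>nova.conf</filename>
            excerpt):
          </para>
          <programlisting>vncserver_listen=0.0.0.0</programlisting>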
</listitem>
</itemizedlist>
</note>
@ -87,89 +105,108 @@
<title>Example Compute installation environment</title>
<itemizedlist>
<listitem>
<para>Prepare at least three servers. In this example, we refer to
the servers as <literal>HostA</literal>, <literal>HostB</literal>,
and <literal>HostC</literal>:</para>
<itemizedlist>
<listitem>
<para><literal>HostA</literal> is the <firstterm baseform="cloud controller">Cloud
Controller</firstterm>, and should run these services: <systemitem class="service"
>nova-api</systemitem>, <systemitem class="service">nova-scheduler</systemitem>,
<literal>nova-network</literal>, <systemitem class="service"
>cinder-volume</systemitem>, and <literal>nova-objectstore</literal>.</para>
            <para><literal>HostA</literal> is the
              <firstterm baseform="cloud controller">Cloud Controller</firstterm>,
              and should run these services:
              <systemitem class="service">nova-api</systemitem>,
              <systemitem class="service">nova-scheduler</systemitem>,
              <literal>nova-network</literal>,
              <systemitem class="service">cinder-volume</systemitem>, and
              <literal>nova-objectstore</literal>.
            </para>
</listitem>
<listitem>
<para><literal>HostB</literal> and <literal>HostC</literal> are
the <firstterm baseform="compute node">compute nodes</firstterm>
that run <systemitem class="service">nova-compute</systemitem>.
</para>
</listitem>
</itemizedlist>
<para>Ensure that <literal><replaceable>NOVA-INST-DIR</replaceable></literal>
(set with <literal>state_path</literal> in the <filename>nova.conf</filename>
file) is the same on all hosts.
</para>
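        <para>For reference, the default values correspond to this
          <filename>nova.conf</filename> excerpt (shown as an illustration
          only; your values may differ):
        </para>
        <programlisting>state_path = /var/lib/nova
instances_path = $state_path/instances</programlisting>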
</listitem>
<listitem>
        <para>In this example, <literal>HostA</literal> is the NFSv4 server
          that exports the
          <filename><replaceable>NOVA-INST-DIR</replaceable>/instances</filename>
          directory. <literal>HostB</literal> and <literal>HostC</literal>
          are NFSv4 clients that mount it.</para>
</listitem>
</itemizedlist>
<procedure>
<title>Configuring your system</title>
<step>
          <para>Configure your DNS or <filename>/etc/hosts</filename> and
            ensure it is consistent across all hosts. Make sure that the
            three hosts can perform name resolution with each other. As a
            test, use the <command>ping</command> command to ping each host
            from the others:
          </para>
<screen><prompt>$</prompt> <userinput>ping HostA</userinput>
<prompt>$</prompt> <userinput>ping HostB</userinput>
<prompt>$</prompt> <userinput>ping HostC</userinput>
</screen>
</step>
<step>
          <para>Ensure that the UID and GID of your Compute and libvirt users
            are identical on each of your servers. This ensures that the
            permissions on the NFS mount work correctly.
          </para>
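          <para>For example, you can compare the user and group IDs on each
            host (the <literal>nova</literal> and
            <literal>libvirt-qemu</literal> user names are typical defaults
            and may differ on your distribution):
          </para>
          <screen><prompt>$</prompt> <userinput>id nova</userinput>
<prompt>$</prompt> <userinput>id libvirt-qemu</userinput></screen>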
</step>
<step>
<para>Export <filename><replaceable>NOVA-INST-DIR</replaceable>/instances</filename>
from <literal>HostA</literal>, and ensure it is readable and
writable by the Compute user on <literal>HostB</literal> and
<literal>HostC</literal>.
</para>
<para>For more information, see: <link
xlink:href="https://help.ubuntu.com/community/SettingUpNFSHowTo"
>SettingUpNFSHowTo</link> or <link
xlink:href="http://www.cyberciti.biz/faq/centos-fedora-rhel-nfs-v4-configuration/"
>CentOS/Red Hat: Setup NFS v4.0 File Server</link></para>
</step>
<step>
<para>Configure the NFS server at <literal>HostA</literal> by adding
the following line to the <filename>/etc/exports</filename> file:
</para>
<programlisting><replaceable>NOVA-INST-DIR</replaceable>/instances HostA/255.255.0.0(rw,sync,fsid=0,no_root_squash)</programlisting>
<para>Change the subnet mask (<literal>255.255.0.0</literal>) to the
appropriate value to include the IP addresses of <literal>HostB</literal>
and <literal>HostC</literal>. Then restart the NFS server:
</para>
<screen><prompt>#</prompt> <userinput>/etc/init.d/nfs-kernel-server restart</userinput>
<prompt>#</prompt> <userinput>/etc/init.d/idmapd restart</userinput></screen>
</step>
<step>
<para>On both compute nodes, enable the 'execute/search' bit on your
shared directory to allow qemu to be able to use the images within
the directories. On all hosts, run the following command:
</para>
<screen><prompt>$</prompt> <userinput>chmod o+x <replaceable>NOVA-INST-DIR</replaceable>/instances</userinput> </screen>
</step>
<step>
<para>Configure NFS on <literal>HostB</literal> and <literal>HostC</literal>
by adding the following line to the <filename>/etc/fstab</filename>
file:
</para>
<programlisting>HostA:/ /<replaceable>NOVA-INST-DIR</replaceable>/instances nfs4 defaults 0 0</programlisting>
<para>Ensure that you can mount the exported directory:
</para>
<screen><prompt>$</prompt> <userinput>mount -a -v</userinput></screen>
          <para>Check that <literal>HostA</literal> can see the
            <filename><replaceable>NOVA-INST-DIR</replaceable>/instances/</filename>
            directory:
          </para>
<screen><prompt>$</prompt> <userinput>ls -ld <filename><replaceable>NOVA-INST-DIR</replaceable>/instances/</filename></userinput>
<computeroutput>drwxr-xr-x 2 nova nova 4096 2012-05-19 14:34 nova-install-dir/instances/</computeroutput></screen>
<para>Perform the same check on <literal>HostB</literal>
and <literal>HostC</literal>, paying special attention to the
permissions (Compute should be able to write):
</para>
<screen><prompt>$</prompt> <userinput>ls -ld <filename><replaceable>NOVA-INST-DIR</replaceable>/instances/</filename></userinput>
<computeroutput>drwxr-xr-x 2 nova nova 4096 2012-05-07 14:34 nova-install-dir/instances/</computeroutput></screen>
<screen><prompt>$</prompt> <userinput>df -k</userinput>
@ -183,56 +220,71 @@ none 16502856 0 16502856 0% /lib/init/rw
HostA: 921515008 101921792 772783104 12% /var/lib/nova/instances ( &lt;--- this line is important.)</computeroutput></screen>
</step>
<step>
<para>Update the libvirt configurations so that the calls can be
made securely. These methods enable remote access over TCP and are
not documented here.
</para>
<itemizedlist>
<listitem>
<para>SSH tunnel to libvirtd's UNIX socket</para>
</listitem>
<listitem>
<para>libvirtd TCP socket, with GSSAPI/Kerberos for auth+data
encryption
</para>
</listitem>
<listitem>
<para>libvirtd TCP socket, with TLS for encryption and x509
client certs for authentication
</para>
</listitem>
<listitem>
<para>libvirtd TCP socket, with TLS for encryption and Kerberos
for authentication
</para>
</listitem>
</itemizedlist>
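          <para>As an illustration only, the TLS-based options typically
            involve enabling the TLS listener in
            <filename>/etc/libvirt/libvirtd.conf</filename>. The option names
            shown are the stock libvirt defaults; certificate setup and
            daemon flags are distribution-specific and not shown:
          </para>
          <programlisting>listen_tls = 1
tls_port = "16514"</programlisting>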
<para>Restart libvirt. After you run the command, ensure that
libvirt is successfully restarted:
</para>
<screen><prompt>#</prompt> <userinput>stop libvirt-bin &amp;&amp; start libvirt-bin</userinput>
<prompt>$</prompt> <userinput>ps -ef | grep libvirt</userinput>
<computeroutput>root 1145 1 0 Nov27 ? 00:00:03 /usr/sbin/libvirtd -d -l</computeroutput></screen>
</step>
<step>
<para>Configure your firewall to allow libvirt to communicate
between nodes.
</para>
<para>By default, libvirt listens on TCP port 16509, and an
ephemeral TCP range from 49152 to 49261 is used for the KVM
communications. Based on the secure remote access TCP configuration
you chose, be careful which ports you open, and always understand
who has access. For information about ports that are used with
libvirt, see <link xlink:href="http://libvirt.org/remote.html#Remote_libvirtd_configuration">
the libvirt documentation</link>.
</para>
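          <para>For example, a minimal sketch that opens the default ports
            with <command>iptables</command> (tighten these rules with
            source-address restrictions appropriate for your environment):
          </para>
          <screen><prompt>#</prompt> <userinput>iptables -A INPUT -p tcp --dport 16509 -j ACCEPT</userinput>
<prompt>#</prompt> <userinput>iptables -A INPUT -p tcp --dport 49152:49261 -j ACCEPT</userinput></screen>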
</step>
<step>
<para>You can now configure options for live migration. In most
cases, you will not need to configure any options. The following
chart is for advanced users only.
</para>
</step>
</procedure>
<xi:include href="../../common/tables/nova-livemigration.xml"/>
</section>
<section xml:id="true-live-migration-kvm-libvirt">
<title>Enabling true live migration</title>
<para>By default, the Compute service does not use the libvirt live
migration function. To enable this function, add the following line to
the <filename>nova.conf</filename> file:
</para>
<programlisting>live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_TUNNELLED</programlisting>
      <para>The Compute service does not use libvirt's live migration by
        default because there is a risk that the migration process will never
        end. This can happen if the guest operating system writes to blocks
        on the disk faster than they can be migrated.
      </para>
</section>
</section>
<!--status: good, right place-->
@ -243,45 +295,56 @@ HostA: 921515008 101921792 772783104 12% /var/lib/nova/instances ( &lt;
<itemizedlist>
<title>Prerequisites</title>
<listitem>
<para><emphasis role="bold">Compatible XenServer hypervisors</emphasis>. For more
information, see the <link
xlink:href="http://docs.vmd.citrix.com/XenServer/6.0.0/1.0/en_gb/reference.html#pooling_homogeneity_requirements"
>Requirements for Creating Resource Pools</link> section of the <citetitle>XenServer
Administrator's Guide</citetitle>.</para>
<para><emphasis role="bold">Compatible XenServer hypervisors</emphasis>.
For more information, see the <link
xlink:href="http://docs.vmd.citrix.com/XenServer/6.0.0/1.0/en_gb/reference.html#pooling_homogeneity_requirements"
>Requirements for Creating Resource Pools</link> section of the
<citetitle>XenServer Administrator's Guide</citetitle>.
</para>
</listitem>
<listitem>
<para><emphasis role="bold">Shared storage</emphasis>. An NFS export, visible to all
XenServer hosts.</para>
<para><emphasis role="bold">Shared storage</emphasis>. An NFS export,
visible to all XenServer hosts.
</para>
<note>
<para>For the supported NFS versions, see the <link
xlink:href="http://docs.vmd.citrix.com/XenServer/6.0.0/1.0/en_gb/reference.html#id1002701"
>NFS VHD</link> section of the <citetitle>XenServer Administrator's
Guide</citetitle>.</para>
xlink:href="http://docs.vmd.citrix.com/XenServer/6.0.0/1.0/en_gb/reference.html#id1002701"
>NFS VHD</link> section of the <citetitle>XenServer Administrator's
Guide</citetitle>.
</para>
</note>
</listitem>
</itemizedlist>
<para>To use shared storage live migration with XenServer hypervisors,
the hosts must be joined to a XenServer pool. To create that pool, a
host aggregate must be created with specific metadata. This metadata
is used by the XAPI plugins to establish the pool.
</para>
<procedure>
<title>Using shared storage live migration with XenServer hypervisors</title>
<step>
<para>Add an NFS VHD storage to your master XenServer, and set it as
the default storage repository. For more information, see NFS
VHD in the <citetitle>XenServer Administrator's Guide</citetitle>.
</para>
</step>
<step>
<para>Configure all compute nodes to use the default storage
repository (<literal>sr</literal>) for pool operations. Add this
line to your <filename>nova.conf</filename> configuration files
on all compute nodes:
</para>
<programlisting>sr_matching_filter=default-sr:true</programlisting>
</step>
<step>
<para>Create a host aggregate. This command creates the aggregate,
and then displays a table that contains the ID of the new
aggregate:
</para>
<screen><prompt>$</prompt> <userinput>nova aggregate-create <replaceable>POOL_NAME</replaceable> <replaceable>AVAILABILITY_ZONE</replaceable></userinput></screen>
<para>Add metadata to the aggregate, to mark it as a hypervisor
pool:
</para>
<screen><prompt>$</prompt> <userinput>nova aggregate-set-metadata <replaceable>AGGREGATE_ID</replaceable> hypervisor_pool=true</userinput></screen>
<screen><prompt>$</prompt> <userinput>nova aggregate-set-metadata <replaceable>AGGREGATE_ID</replaceable> operational_state=created</userinput></screen>
<para>Make the first compute node part of that aggregate:</para>
@ -292,9 +355,10 @@ HostA: 921515008 101921792 772783104 12% /var/lib/nova/instances ( &lt;
<para>Add hosts to the pool:</para>
<screen><prompt>$</prompt> <userinput>nova aggregate-add-host <replaceable>AGGREGATE_ID</replaceable> <replaceable>COMPUTE_HOST_NAME</replaceable></userinput></screen>
<note>
<para>The added compute node and the host will shut down to join
the host to the XenServer pool. The operation will fail if any
server other than the compute node is running or suspended on
the host.</para>
</note>
</step>
</procedure>
@ -305,21 +369,24 @@ HostA: 921515008 101921792 772783104 12% /var/lib/nova/instances ( &lt;
<itemizedlist>
<title>Prerequisites</title>
<listitem>
<para><emphasis role="bold">Compatible XenServer hypervisors</emphasis>. The hypervisors
must support the Storage XenMotion feature. See your XenServer manual to make sure your
edition has this feature.</para>
<para><emphasis role="bold">Compatible XenServer hypervisors</emphasis>.
The hypervisors must support the Storage XenMotion feature. See
your XenServer manual to make sure your edition has this feature.
</para>
</listitem>
</itemizedlist>
<note>
<itemizedlist>
<listitem>
          <para>To use block migration, you must use the
            <parameter>--block-migrate</parameter> parameter with the live
            migration command, as shown in the example after this note.</para>
</listitem>
<listitem>
          <para>Block migration works only with EXT local storage
            repositories, and the server must not have any volumes attached.
          </para>
</listitem>
</itemizedlist>
</note>
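      <para>For example, to block-migrate an instance (the instance and host
        names are placeholders):
      </para>
      <screen><prompt>$</prompt> <userinput>nova live-migration --block-migrate <replaceable>INSTANCE_NAME</replaceable> <replaceable>HOST_NAME</replaceable></userinput></screen>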