Booting an Image
- FIXME
+ After you've configured the Compute service, you
+ can launch an instance. An instance is a virtual machine provisioned by
+ OpenStack on one of the Compute servers. Use the procedure below to launch a
+ low-resource instance using an image you've already downloaded.
+
+ This procedure assumes you have:
+
+ Appropriate environment
+ variables set to specify your credentials (see ).
+
+
+ Downloaded an image (see ).
+
+ Configured networking (see ).
+
+
+
+
+
+ Launch a Compute instance
+ Generate a keypair, consisting of a private key and a public key, so that you can launch
+ instances on OpenStack. The public key is injected into an instance at boot to allow
+ password-less SSH access, provided the necessary tools are bundled into the image. For
+ more details, see "Manage instances" in the
+ Administration User Guide.
+ $ ssh-keygen
+$ cd .ssh
+$ nova keypair-add --pub_key id_rsa.pub mykey
+ You have just created a new keypair called mykey. The private key id_rsa is
+ saved locally in ~/.ssh and can be used to connect to an instance
+ launched using mykey as the keypair. You can view available keypairs
+ using the nova keypair-list command.
+ $ nova keypair-list
++--------+-------------------------------------------------+
+| Name   | Fingerprint                                     |
++--------+-------------------------------------------------+
+| mykey  | b0:18:32:fa:4e:d4:3c:1b:c4:6c:dd:cb:53:29:13:82 |
+| mykey2 | b0:18:32:fa:4e:d4:3c:1b:c4:6c:dd:cb:53:29:13:82 |
++--------+-------------------------------------------------+
+
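For scripted setups, the keypair steps above can be sketched non-interactively. This is only a sketch: the key path and the keypair name "mykey" are illustrative assumptions, and the nova upload is left commented out because it needs live credentials.

```shell
# Sketch: generate a keypair without prompts, then upload the public half.
# KEYFILE and the keypair name "mykey" are illustrative assumptions.
KEYFILE="${KEYFILE:-$HOME/.ssh/id_rsa}"
mkdir -p "$(dirname "$KEYFILE")"

if [ ! -f "$KEYFILE" ]; then
    # -N '' sets an empty passphrase; -q suppresses interactive output.
    ssh-keygen -q -t rsa -N '' -f "$KEYFILE"
fi

# Upload the public key under the name "mykey" (requires nova credentials):
# nova keypair-add --pub_key "${KEYFILE}.pub" mykey
```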
+ To launch an instance using OpenStack, you must specify the ID for the flavor you want to use
+ for the instance. A flavor is a resource allocation profile. For
+ example, it specifies how many virtual CPUs and how much RAM your
+ instance will get. To see a list of the available profiles, run the
+ nova flavor-list command.
+$ nova flavor-list
++----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
+| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
++----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
+| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
+| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
+| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
+| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
+| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
++----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
+
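When scripting, the flavor ID can be looked up by name rather than copied by hand. The snippet below parses a trimmed sample of the table; the sample text is a stand-in for running nova flavor-list live.

```shell
# Illustrative: extract a flavor ID by name from nova flavor-list-style output.
# The table text is a trimmed sample standing in for the live command.
flavor_table='+----+----------+-----------+------+
| ID | Name     | Memory_MB | Disk |
+----+----------+-----------+------+
| 1  | m1.tiny  | 512       | 1    |
| 2  | m1.small | 2048      | 20   |
+----+----------+-----------+------+'

# Match the Name column exactly, then print the stripped ID column.
flavor_id=$(printf '%s\n' "$flavor_table" \
    | awk -F'|' '$3 ~ /^[ ]*m1\.tiny[ ]*$/ { gsub(/ /, "", $2); print $2 }')
echo "$flavor_id"
```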
+
+ Get the ID of the image you would like to use for the instance using the
+ nova image-list command.
+ $ nova image-list
++--------------------------------------+--------------+--------+--------+
+| ID | Name | Status | Server |
++--------------------------------------+--------------+--------+--------+
+| 9e5c2bee-0373-414c-b4af-b91b0246ad3b | CirrOS 0.3.1 | ACTIVE | |
++--------------------------------------+--------------+--------+--------+
+
+ Create the instance using the nova boot command.
+ $ nova boot --flavor flavorType --key_name keypairName --image ID newInstanceName
+ Create an instance using flavor 1 or 2, for example:
+ $ nova boot --flavor 1 --key_name mykey --image 9e5c2bee-0373-414c-b4af-b91b0246ad3b cirrOS
++--------------------------------------+--------------------------------------+
+| Property | Value |
++--------------------------------------+--------------------------------------+
+| OS-EXT-STS:task_state | scheduling |
+| image | CirrOS 0.3.1 |
+| OS-EXT-STS:vm_state | building |
+| OS-EXT-SRV-ATTR:instance_name | instance-00000001 |
+| OS-SRV-USG:launched_at | None |
+| flavor | m1.tiny |
+| id | 3bdf98a0-c767-4247-bf41-2d147e4aa043 |
+| security_groups | [{u'name': u'default'}] |
+| user_id | 530166901fa24d1face95cda82cfae56 |
+| OS-DCF:diskConfig | MANUAL |
+| accessIPv4 | |
+| accessIPv6 | |
+| progress | 0 |
+| OS-EXT-STS:power_state | 0 |
+| OS-EXT-AZ:availability_zone | nova |
+| config_drive | |
+| status | BUILD |
+| updated | 2013-10-10T06:47:26Z |
+| hostId | |
+| OS-EXT-SRV-ATTR:host | None |
+| OS-SRV-USG:terminated_at | None |
+| key_name | mykey |
+| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
+| name | cirrOS |
+| adminPass | DWCdW6FnsKNq |
+| tenant_id | e66d97ac1b704897853412fc8450f7b9 |
+| created | 2013-10-10T06:47:23Z |
+| os-extended-volumes:volumes_attached | [] |
+| metadata | {} |
++--------------------------------------+--------------------------------------+
+ If there is not enough RAM available for the instance, Compute creates the instance, but
+ does not start it (status 'Error').
+
+ After the instance has been created, it shows up in the output of nova
+ list. As the instance boots, its status changes from 'BUILD' to
+ 'ACTIVE'.
+ $ nova list
++--------------------------------------+--------+--------+----------------------+
+| ID | Name | Status | Networks |
++--------------------------------------+--------+--------+----------------------+
+| 3bdf98a0-c767-4247-bf41-2d147e4aa043 | cirrOS | BUILD | demonet=192.168.0.11 |
++--------------------------------------+--------+--------+----------------------+
+$ nova list
++--------------------------------------+--------+--------+----------------------+
+| ID | Name | Status | Networks |
++--------------------------------------+--------+--------+----------------------+
+| 3bdf98a0-c767-4247-bf41-2d147e4aa043 | cirrOS | ACTIVE | demonet=192.168.0.11 |
++--------------------------------------+--------+--------+----------------------+
+
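The BUILD-to-ACTIVE transition can be waited on in a loop instead of re-running nova list by hand. This is a hedged sketch: wait_for_active and instance_status are hypothetical helper functions, not nova subcommands, and the nova show parsing inside instance_status is an assumption about the table layout shown above.

```shell
# Hypothetical helper: look up an instance's status. In a live deployment this
# wraps nova show and pulls the "status" row out of its table output.
instance_status() {
    nova show "$1" | awk '$2 == "status" { print $4 }'
}

# Poll until the instance leaves BUILD, giving up after a number of attempts.
wait_for_active() {
    id=$1
    tries=${2:-30}
    i=0
    while [ "$i" -lt "$tries" ]; do
        status=$(instance_status "$id")
        if [ "$status" = "ACTIVE" ]; then
            echo "instance $id is ACTIVE"
            return 0
        elif [ "$status" = "ERROR" ]; then
            echo "instance $id failed to start" >&2
            return 1
        fi
        sleep 2
        i=$((i + 1))
    done
    return 1
}
```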
+ You can also retrieve additional details about the specific instance using the
+ nova show command.
+$ nova show 3bdf98a0-c767-4247-bf41-2d147e4aa043
+
+ Once the instance has fully booted and initialized, you can SSH into it.
+ You can obtain the IP address of the instance from the
+ output of nova list.
+ $ ssh -i ~/.ssh/id_rsa root@192.168.0.11
+
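For scripting, the instance's IP address can be extracted from its nova list row. The row text below is a sample standing in for the live command, and the ssh invocation is only echoed, not executed.

```shell
# Illustrative: pull the fixed IP for an instance out of a nova list-style row.
# The row text is a sample standing in for `nova list` output.
row='| 3bdf98a0-c767-4247-bf41-2d147e4aa043 | cirrOS | ACTIVE | demonet=192.168.0.11 |'

# Column 5 looks like "demonet=192.168.0.11"; split on "=" and strip spaces.
ip=$(printf '%s\n' "$row" \
    | awk -F'|' '{ split($5, a, "="); gsub(/ /, "", a[2]); print a[2] }')

# Build (but do not run) the ssh command from the earlier keypair step.
echo "ssh -i ~/.ssh/id_rsa root@$ip"
```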
diff --git a/doc/install-guide/section_nova-compute.xml b/doc/install-guide/section_nova-compute.xml
index 86db71727f..8e8a903cd3 100644
--- a/doc/install-guide/section_nova-compute.xml
+++ b/doc/install-guide/section_nova-compute.xml
@@ -3,102 +3,89 @@
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="nova-compute">
- Installing a Compute Node
+ Configuring a Compute Node
- After configuring the Compute Services on the controller node,
- configure a second system to be a compute node. The compute node receives
- requests from the controller node and hosts virtual machine instances.
- You can run all services on a single node, but this guide uses separate
- systems. This makes it easy to scale horizontally by adding additional
- compute nodes following the instructions in this section.
+ After configuring the Compute Services on the controller node, configure a second system to
+ be a Compute node. The Compute node receives requests from the controller node and hosts virtual
+ machine instances. You can run all services on a single node, but this guide uses separate
+ systems. This makes it easy to scale horizontally by adding additional Compute nodes following
+ the instructions in this section.
+ The Compute Service relies on a hypervisor to run virtual machine
instances. OpenStack can use various hypervisors, but this guide uses
KVM.
-
- Begin by configuring the system using the instructions in
- . Note the following differences from the
- controller node:
-
-
-
- Use different IP addresses when editing the files
- ifcfg-eth0 and ifcfg-eht1.
- This guide uses 192.168.0.11 for the internal network
- and 10.0.0.11 for the external network.
-
-
- Set the hostname to compute1. Ensure that the
- IP addresses and hostnames for both nodes are listed in the
- /etc/hosts file on each system.
-
-
- Do not run the NTP server. Follow the instructions in
- to synchronize from the controller node.
-
-
- You do not need to install the MySQL database server or start
- the MySQL service. Just install the client libraries.
-
-
- You do not need to install a messaging queue server.
-
-
-
- After configuring the operating system, install the appropriate
- packages for the compute service.
-
- #apt-get install nova-compute-kvm
- #yum install openstack-nova-compute
- #zypper install openstack-nova-compute kvm
-
- Either copy the file /etc/nova/nova.conf from the
- controller node, or run the same configuration commands.
-
- #openstack-config --set /etc/nova/nova.conf \
+
+ Configure a Compute Node
+ Begin by configuring the system using the instructions in
+ . Note the following differences from the
+ controller node:
+
+
+ Use different IP addresses when editing the files ifcfg-eth0
+ and ifcfg-eth1. This guide uses 192.168.0.11 for
+ the internal network and 10.0.0.11 for the external network.
+
+
+ Set the hostname to compute1. Ensure that the
+ IP addresses and hostnames for both nodes are listed in the
+ /etc/hosts file on each system.
+
+
+ Do not run the NTP server. Follow the instructions in
+ to synchronize from the controller node.
+
+
+ Install the MySQL client libraries. You do not need to install the MySQL database
+ server or start the MySQL service.
+
+
+ You do not need to install a messaging queue server.
+
+ After configuring the operating system, install the appropriate
+ packages for the compute service.
+ #apt-get install nova-compute-kvm
+ #yum install openstack-nova-compute
+ #zypper install openstack-nova-compute kvm
+
+ Either copy the file /etc/nova/nova.conf from the
+ controller node, or run the same configuration commands.
+ #openstack-config --set /etc/nova/nova.conf \
database connection mysql://nova:NOVA_DBPASS@controller/nova
+#openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
-#openstack-config --set /etc/nova/nova.conf DEFAULT auth_host controller
+#openstack-config --set /etc/nova/nova.conf DEFAULT auth_host controller
+#openstack-config --set /etc/nova/nova.conf DEFAULT admin_user nova
+#openstack-config --set /etc/nova/nova.conf DEFAULT admin_tenant_name service
+#openstack-config --set /etc/nova/nova.conf DEFAULT admin_password NOVA_PASS
-
-
- #openstack-config --set /etc/nova/nova.conf \
+
+ #openstack-config --set /etc/nova/nova.conf \
DEFAULT rpc_backend nova.openstack.common.rpc.impl_qpid
-#openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname controller
-
- Set the configuration keys my_ip,
- vncserver_listen, and
- vncserver_proxyclient_address to the IP address of the
- compute node on the internal network.
-
- #openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.0.11
+#openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname controller
+ Set the configuration keys my_ip,
+ vncserver_listen, and
+ vncserver_proxyclient_address to the IP address of the
+ compute node on the internal network.
+ #openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.0.11
+#openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 192.168.0.11
-#openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 192.168.0.11
-
- Copy the file /etc/nova/api-paste.ini from the
- controller node, or edit the file to add the credentials in the
- [filter:authtoken] section.
-
- [filter:authtoken]
+#openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 192.168.0.11
+ Copy the file /etc/nova/api-paste.ini from the
+ controller node, or edit the file to add the credentials in the
+ [filter:authtoken] section.
+ [filter:authtoken]
paste.filter_factory=keystoneclient.middleware.auth_token:filter_factory
auth_host=controller
admin_user=nova
admin_tenant_name=service
admin_password=NOVA_PASS
-
-
-
- Finally, start the compute service and configure it to start when
- the system boots.
-
- #service nova-compute start
+ Ensure that api_paste_config=/etc/nova/api-paste.ini is set in
+ /etc/nova/nova.conf.
+
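A quick sanity check that the [filter:authtoken] options are all present can be sketched with grep. The sample file below is written to a temporary path and stands in for /etc/nova/api-paste.ini; the list of required keys mirrors the fragment shown above.

```shell
# Write a sample fragment standing in for /etc/nova/api-paste.ini.
sample=$(mktemp)
cat > "$sample" <<'EOF'
[filter:authtoken]
paste.filter_factory=keystoneclient.middleware.auth_token:filter_factory
auth_host=controller
admin_user=nova
admin_tenant_name=service
admin_password=NOVA_PASS
EOF

# Collect any required authtoken option missing from the file.
missing=""
for key in auth_host admin_user admin_tenant_name admin_password; do
    grep -q "^${key}=" "$sample" || missing="$missing $key"
done
echo "missing:${missing:- none}"
```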
+ Start the Compute service and configure it to start when the system boots.
+ #service nova-compute start
+#chkconfig nova-compute on
- #service openstack-nova-compute start
+ #service openstack-nova-compute start
+#chkconfig openstack-nova-compute on
- #systemctl start openstack-nova-compute
-#systemctl enable openstack-nova-compute
-
-
+ #systemctl start openstack-nova-compute
+#systemctl enable openstack-nova-compute
+
+
\ No newline at end of file
diff --git a/doc/install-guide/section_nova-controller.xml b/doc/install-guide/section_nova-controller.xml
index 8a0703cf88..781beb6426 100644
--- a/doc/install-guide/section_nova-controller.xml
+++ b/doc/install-guide/section_nova-controller.xml
@@ -12,9 +12,10 @@
node to run the service that launches virtual machines. This section
details the installation and configuration on the controller node.
+ Install the Nova Controller Services
+ Install the openstack-nova
- meta-package. This package will install all of the various Nova packages, most of
+ meta-package. This package installs all of the various Compute packages, most of
which will be used on the controller node in this guide.#yum install openstack-nova
@@ -61,7 +62,7 @@ IDENTIFIED BY 'NOVA_DBPASS';
- You now have to tell the Compute Service to use that database.
+ Tell the Compute Service to use the created database.#openstack-config --set /etc/nova/nova.conf \
database connection mysql://nova:NOVA_DBPASS@controller/nova
@@ -77,6 +78,7 @@ IDENTIFIED BY 'NOVA_DBPASS';
#openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 192.168.0.10
#openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 192.168.0.10
+
Create a user called nova that the Compute Service
can use to authenticate with the Identity Service. Use the
@@ -87,68 +89,73 @@ IDENTIFIED BY 'NOVA_DBPASS';
#keystone user-role-add --user=nova --tenant=service --role=admin
- For the Compute Service to use these credentials, you have to add
+ For the Compute Service to use these credentials, you must add
them to the nova.conf configuration file.
#openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
-#openstack-config --set /etc/nova/nova.conf DEFAULT auth_host controller
+#openstack-config --set /etc/nova/nova.conf DEFAULT auth_host controller
+#openstack-config --set /etc/nova/nova.conf DEFAULT admin_user nova
+#openstack-config --set /etc/nova/nova.conf DEFAULT admin_tenant_name service
+#openstack-config --set /etc/nova/nova.conf DEFAULT admin_password NOVA_PASS
-
+
- You also have to add the credentials to the file
+ Add the credentials to the file
/etc/nova/api-paste.ini. Open the file in a text editor
and locate the section [filter:authtoken].
Make sure the following options are set:[filter:authtoken]
paste.filter_factory=keystoneclient.middleware.auth_token:filter_factory
-auth_host=controller
+auth_host=controller
admin_user=nova
admin_tenant_name=service
admin_password=NOVA_PASS
+ Ensure that api_paste_config=/etc/nova/api-paste.ini
+ is set in /etc/nova/nova.conf.
+
You have to register the Compute Service with the Identity Service
so that other OpenStack services can locate it. Register the service and
specify the endpoint using the keystone command.#keystone service-create --name=nova --type=compute \
--description="Nova Compute Service"
+
- Note the id property returned and use it when
- creating the endpoint.
-
- #keystone endpoint-create \
+ Note the id property returned and use it when
+ creating the endpoint.
+ #keystone endpoint-create \
--service-id=the_service_id_above \
- --publicurl=http://controller:8774/v2/%(tenant_id)s \
- --internalurl=http://controller:8774/v2/%(tenant_id)s \
- --adminurl=http://controller:8774/v2/%(tenant_id)s
+ --publicurl=http://controller:8774/v2/%(tenant_id)s \
+ --internalurl=http://controller:8774/v2/%(tenant_id)s \
+ --adminurl=http://controller:8774/v2/%(tenant_id)s
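The id needed for endpoint-create can be captured by parsing the service-create table instead of copying it by hand. The table text below is a made-up sample used purely to illustrate the parsing; the id value is not a real service id.

```shell
# Illustrative: capture the service id from keystone service-create-style
# output. The table text is a made-up sample standing in for the live command.
output='+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |       Nova Compute Service       |
|      id     | 1234567890abcdef1234567890abcdef |
|     name    |               nova               |
|     type    |             compute              |
+-------------+----------------------------------+'

# Match the Property column "id" exactly, then strip spaces from the Value.
service_id=$(printf '%s\n' "$output" \
    | awk -F'|' '$2 ~ /^[ ]*id[ ]*$/ { gsub(/ /, "", $3); print $3 }')
echo "$service_id"
```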
-
- Configure the Compute Service to use the
+
+ Configure the Compute Service to use the
Qpid message broker by setting the following configuration keys.
- #openstack-config --set /etc/nova/nova.conf \
+ #openstack-config --set /etc/nova/nova.conf \
DEFAULT rpc_backend nova.openstack.common.rpc.impl_qpid
-#openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname controller
+#openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname controller
+
+
- Configure the Compute Service to use the RabbitMQ
+
+Configure the Compute Service to use the RabbitMQ
message broker by setting the following configuration keys. They are found in
the DEFAULT configuration group of the
/etc/nova/nova.conf file.
-
- rpc_backend = nova.rpc.impl_kombu
+ rpc_backend = nova.rpc.impl_kombu
rabbit_host = controller
-
- Configure the Compute Service to use the RabbitMQ
+
+
+ Configure the Compute Service to use the RabbitMQ
message broker by setting the following configuration keys.
-
- #openstack-config --set /etc/nova/nova.conf \
+ #openstack-config --set /etc/nova/nova.conf \
DEFAULT rpc_backend nova.rpc.impl_kombu
#openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_host controller