merge doc changes from Nick

Roman Alekseenkov 2013-03-24 16:04:08 -07:00
parent 95b6430902
commit e4a7d1986d
28 changed files with 804 additions and 863 deletions

View File

@ -41,14 +41,15 @@ master_doc = 'index'
# General information about the project.
project = u'Fuel for OpenStack'
copyright = u'2012, Mirantis'
copyright = u'2013, Mirantis'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '1'
version = '2.1'
# The full version, including alpha/beta/rc tags.
release = '2.1'

View File

@ -1,11 +1,11 @@
If you already have Puppet Master installed, you can skip this
installation step and go directly to :ref:`Configuring fuel-pm <Configuring-Fuel-PM>`.
installation step and go directly to :ref:`Installing the OS Using Fuel <Install-OS-Using-Fuel>`.
Installing Puppet master is a one-time procedure for the entire
infrastructure. Once done, Puppet master will act as a single point of
Installing Puppet Master is a one-time procedure for the entire
infrastructure. Once done, Puppet Master will act as a single point of
control for all of your servers, and you will never have to return to
these installation steps again.
@ -13,11 +13,6 @@ these installation steps again.
Initial Setup
-------------
If you plan to provision the Puppet Master on hardware, you need to
make sure that you can boot your server from an ISO.
For VirtualBox, follow these steps to create the virtual hardware:
@ -28,6 +23,7 @@ For VirtualBox, follow these steps to create the virtual hardware:
* Name: fuel-pm
* Type: Linux
* Version: Red Hat (64 Bit) or Ubuntu (64 Bit)
* Memory: 2048MB
@ -36,9 +32,6 @@ For VirtualBox, follow these steps to create the virtual hardware:
* Adapter 1
* Enable Network Adapter
* Attached to: Host-only Adapter
* Name: vboxnet0
@ -46,21 +39,13 @@ For VirtualBox, follow these steps to create the virtual hardware:
* Adapter 2
* Enable Network Adapter
* Attached to: Bridged Adapter
* Name: epn1 (Wi-Fi Airport), or whatever network interface of the host machine with Internet access
* Name: eth0 (or whichever physical network has your internet connection)
It is important that host-only Adapter 1 goes first, as Cobbler will use vboxnet0 for PXE, and VirtualBox boots from the LAN on the first available network adapter.
The Puppet Master doesn't need the third adapter; it is used for OpenStack hosts and communication between tenant VMs.
OS Installation
---------------
@ -84,24 +69,23 @@ OS Installation
* Boot the server (or VM) from the CD/DVD drive and install the chosen OS
* Choose root password carefully
* Boot the server (or VM) from the CD/DVD drive and install the chosen OS. Be sure to choose the root password carefully.
* Set up the eth0 interface. This will be the public interface:
* Set up the eth0 interface. This interface will be used for communication between the Puppet Master and Puppet clients, as well as for Cobbler.
``vi /etc/sysconfig/network-scripts/ifcfg-eth0``::
DEVICE="eth0"
BOOTPROTO="dhcp"
ONBOOT="no"
BOOTPROTO="static"
IPADDR="10.20.0.100"
NETMASK="255.255.255.0"
ONBOOT="yes"
TYPE="Ethernet"
PEERDNS="no"
Apply network settings::
@ -110,18 +94,15 @@ OS Installation
* Set up the eth1 interface. This interface will be used for communication between the Puppet Master and Puppet clients, as well as for Cobbler:
* Set up the eth1 interface. This will be the public interface.
``vi /etc/sysconfig/network-scripts/ifcfg-eth1``::
DEVICE="eth1"
BOOTPROTO="static"
IPADDR="10.0.0.100"
NETMASK="255.255.255.0"
ONBOOT="yes"
BOOTPROTO="dhcp"
ONBOOT="no"
TYPE="Ethernet"
PEERDNS="no"
@ -149,7 +130,7 @@ OS Installation
* Check that a ping to your host machine works. This means that the management segment is available::
ping 10.0.0.1
ping 10.20.0.1

View File

@ -6,16 +6,6 @@ Once the installation is complete, you will need to finish the configuration to
* Check network settings and connectivity and correct any errors:
* Check host connectivity by pinging the host machine::
ping 10.0.0.1
* Check that Internet access works by pinging an outside host::
ping google.com
* Check the hostname. Running ::
hostname
@ -26,17 +16,12 @@ Once the installation is complete, you will need to finish the configuration to
If not, set the hostname:
* CentOS/RHEL
``vi /etc/sysconfig/network`` ::
HOSTNAME=fuel-pm
``vi /etc/sysconfig/network`` ::
* Ubuntu
HOSTNAME=fuel-pm
``vi /etc/hostname``::
fuel-pm
* Check the fully qualified hostname (FQDN) value. ::
@ -47,9 +32,9 @@ Once the installation is complete, you will need to finish the configuration to
fuel-pm.your-domain-name.com
If not, correct the /etc/resolv.conf file by replacing your-domain-name.com below with your actual domain name, and 8.8.8.8 with your actual DNS server.
If not, correct the ``/etc/resolv.conf`` file by replacing ``your-domain-name.com`` below with your actual domain name, and ``8.8.8.8`` with your actual DNS server.
(Note: you can look up your DNS server on your host machine using ipconfig /all on Windows, or using cat /etc/resolv.conf under Linux) ::
(Note: you can look up your DNS server on your host machine using ``ipconfig /all`` on Windows, or using ``cat /etc/resolv.conf`` under Linux) ::
search your-domain-name.com
nameserver 8.8.8.8
@ -184,7 +169,7 @@ itself (replace your-domain-name. com with your domain name):
Finally, to make sure everything is working properly, run puppet agent
and to see the Hello World from fuel-pm output::
and to see the ``Hello World from fuel-pm`` output::
puppet agent --test
@ -199,12 +184,12 @@ with the SSL setup. If so, remove the original files and start again,
like so::
service puppetmaster stop
service puppetdb stop
rm -rf /etc/puppetdb/ssl
puppetdb-ssl-setup
service puppetdb start
service puppetmaster start
sudo service puppetmaster stop
sudo service puppetdb stop
sudo rm -rf /etc/puppetdb/ssl
sudo puppetdb-ssl-setup
sudo service puppetdb start
sudo service puppetmaster start
Again, remember that it may take several minutes before puppetdb is
fully running, despite appearances to the contrary.
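If you would rather verify than wait, one simple check is to watch for PuppetDB's listener to appear (a sketch; this assumes PuppetDB is using its default SSL port of 8081)::
netstat -ntlp | grep 8081
Once the port shows up in the output, the service is ready to accept connections.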

View File

@ -63,7 +63,7 @@ will need to make changes::
# [server] IP address that will be used as address of cobbler server.
# It is needed to download kickstart files, call cobbler API and
# so on. Required.
$server = '10.0.0.100'
$server = '10.20.0.100'
@ -73,7 +73,7 @@ Puppet Master and Cobbler servers. ::
# Interface for cobbler instances
$dhcp_interface = 'eth1'
$dhcp_interface = 'eth0'
@ -82,28 +82,28 @@ so you will need to specify which interface will handle that. ::
$dhcp_start_address = '10.0.0.201'
$dhcp_end_address = '10.0.0.254'
$dhcp_start_address = '10.20.0.110'
$dhcp_end_address = '10.20.0.126'
Change the $dhcp_start_address and $dhcp_end_address to match the network allocations you made
Change the ``$dhcp_start_address`` and ``$dhcp_end_address`` to match the network allocations you made
earlier. The important thing is to make sure there are no conflicts with the static IPs you are allocating. ::
$dhcp_netmask = '255.255.255.0'
$dhcp_gateway = '10.0.0.100'
$dhcp_gateway = '10.20.0.100'
$domain_name = 'your-domain-name.com'
Change the $domain_name to your own domain name. ::
Change the ``$domain_name`` to your own domain name. ::
$name_server = '10.0.0.100'
$next_server = '10.0.0.100'
$name_server = '10.20.0.100'
$next_server = '10.20.0.100'
$cobbler_user = 'cobbler'
$cobbler_password = 'cobbler'
$pxetimeout = '0'
@ -113,7 +113,7 @@ Change the $domain_name to your own domain name. ::
Change the $mirror_type to be default so Fuel knows to request
**Change the $mirror_type to be default** so Fuel knows to request
resources from Internet sources rather than having to set up your own
internal repositories.
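In other words, the relevant line in the manifest should end up looking like this (a sketch that follows the quoting style of the other parameters shown above)::
$mirror_type = 'default'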
@ -219,7 +219,7 @@ Cobbler dashboard from:
http://10.0.0.100/cobbler_web
http://10.20.0.100/cobbler_web
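If you only have shell access to fuel-pm, you can still confirm that the dashboard is being served; for example (assuming ``curl`` is installed)::
curl -I http://10.20.0.100/cobbler_web
A 200 or 30x response code means the Cobbler web interface is up.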

View File

@ -1,5 +0,0 @@
Known Issues and Workarounds
----------------------------
.. include:: /pages/frequently-asked-questions/known-issues-and-workarounds/0010-rabbitmq.rst
.. include:: /pages/frequently-asked-questions/known-issues-and-workarounds/0020-galera.rst

View File

@ -1,3 +1,6 @@
Known Issues and Workarounds
----------------------------
RabbitMQ
^^^^^^^^
@ -8,8 +11,8 @@ RabbitMQ
**Issue:**
In general, all RabbitMQ nodes must not be shut down simultaneously. RabbitMQ requires
that after a full shutdown of the cluster, the first node brought up should
be the last one to shut down. Version 2.1 of Fuel solves this problem by managing the restart of
available nodes, so you should not experience diffuclty with this issue.
be the last one to shut down, but it's not always possible to know which node that is, or even to ensure a clean shutdown. Version 2.1 of Fuel solves this problem by managing the restart of
available nodes, so you should not experience difficulty with this issue.
If, however, you are still using previous versions of Fuel, here is how Fuel 2.1 works around this problem in case you need to do it yourself.

View File

@ -127,7 +127,7 @@ and start the cluster operation from that node.
In the case of OpenStack deployed by Fuel manifests with default settings (2 controllers), Fuel automatically removes local names and IP addresses from gcomm strings on every node to prevent a node from attempting to connect to itself. This parameter should look like this:
``wsrep_cluster_address="gcomm://fuel-controller-01:4567"``
``wsrep_cluster_address="gcomm://fuel-controller-01:4567"``
* If ``wsrep_cluster_address`` is set correctly, run ``rm -f /var/lib/mysql/grastate.dat`` and then ``service mysql start`` on this node.
@ -262,6 +262,6 @@ Useful links
* http://openlife.cc/blogs/2011/july/ultimate-mysql-high-availability-solution
* Other questions (seriously, sometimes there is not enough info about Galera available in official Galera docs):
* Other questions (seriously, sometimes there is not enough info about Galera available in the official Galera docs):
* http://www.google.com

View File

@ -1,7 +1,7 @@
Other Questions
---------------
#. **[Q]** Why did you decide to provide OpenStack packages through your own repository at http://download.mirantis.com?
#. **[Q]** Why did you decide to provide OpenStack packages through your own repository?
**[A]** We are fully committed to providing our customers with working and stable bits and pieces in order to make successful OpenStack deployments. Please note that we do not distribute our own version of OpenStack; we rather provide a plain vanilla distribution. So there is no vendor lock-in. Our repository just keeps the history of OpenStack packages certified to work with our Puppet manifests.

View File

@ -68,3 +68,60 @@ Common Technical Issues
certificate-whitelist = /etc/puppetdb/whitelist.txt
Be sure to list all aliases for the machine in that file.
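For example, a minimal ``/etc/puppetdb/whitelist.txt`` might look like the following sketch; substitute your own short names, FQDNs, and any aliases::
fuel-pm
fuel-pm.your-domain-name.com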
.. _create-the-XFS-partition:
Creating the XFS partition
^^^^^^^^^^^^^^^^^^^^^^^^^^
In most cases, Fuel creates the XFS partition for you. If for some reason you need to create it yourself, use this procedure:
#. Create the partition itself::
fdisk /dev/sdb
n(for new)
p(for partition)
<enter> (to accept the defaults)
<enter> (to accept the defaults)
w(to save changes)
#. Initialize the XFS partition::
mkfs.xfs -i size=1024 -f /dev/sdb1
#. For a standard swift install, all data drives are mounted directly under /srv/node, so first create the mount point::
mkdir -p /srv/node/sdb1
#. Finally, add the new partition to fstab so it mounts automatically, then mount all current partitions::
echo "/dev/sdb1 /srv/node/sdb1 xfs
noatime,nodiratime,nobarrier,logbufs=8 0 0" >> /etc/fstab
mount -a
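As a quick check (not part of the original procedure), you can confirm that the new partition is mounted where Swift expects it::
df -h /srv/node/sdb1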

View File

@ -1,5 +1,5 @@
In this section, you'll learn how to do an actual installation of OpenStack using Fuel. In addition to getting a feel for the steps involved, you'll also gain some familiarity with some of your customization options. While Fuel does provide several different deployment topologies out of the box, it's common to want to tweak those architectures for your own situation, so you'll get some practice moving certain features around from the standard installation.
In this section, you'll learn how to do an actual installation of OpenStack using Fuel. In addition to getting a feel for the steps involved, you'll also gain some familiarity with some of your customization options. While Fuel does provide several different deployment configurations out of the box, it's common to want to tweak those architectures for your own situation, so you'll see how to move certain features around from the standard installation.
The first step, however, is to commit to a deployment template. A fairly balanced small size, yet fully featured, deployment is the Multi-node (HA) Swift Compact deployment, so that's what we'll be using through the rest of this guide.
The first step, however, is to commit to a deployment template. A fairly balanced small size, yet fully featured, deployment is the Multi-node (HA) Compact deployment, so that's what we'll be using through the rest of this guide.
Real world installations require a physical hardware infrastructure, but you can easily deploy a small simulation cloud on a single physical machine using VirtualBox. You can follow these instructions in order to install an OpenStack cloud into a test environment using VirtualBox, or to get a production-grade installation using actual hardware.

View File

@ -1,7 +1,7 @@
How installation works
----------------------
In contrast with version 2.0 of Fuel, version 2.1 includes orchestration capabilities that simplify installation of OpenStack. The process of installing a cluster follows this general procedure:
While version 2.0 of Fuel provided the ability to simplify installation of OpenStack, version 2.1 includes orchestration capabilities that simplify deployment of an OpenStack cluster. The deployment process follows this general procedure:
#. Design your architecture.
#. Install Fuel onto the fuel-pm machine.
@ -10,7 +10,7 @@ In contrast with version 2.0 of Fuel, version 2.1 includes orchestration capabil
#. PXE-boot the servers so Cobbler can install the operating system.
#. Use Fuel's included templates and the configuration to populate Puppet's site.pp file.
#. Customize the site.pp file if necessary.
#. Use the orchestrator to install the appropriate OpenStack components on each node.
#. Use the orchestrator to coordinate the installation of the appropriate OpenStack components on each node.
Start by designing your architecture.

View File

@ -6,18 +6,21 @@ Before you begin your installation, you will need to make a number of important
decisions:
* **OpenStack features.** You must choose which of the optional OpenStack features you want. For example, you must decide whether you want to install Swift, whether you want Glance to use Swift for image storage, whether you want Cinder for block storage, and whether you want nova-network or Quantum to handle your network connectivity. In the case of this example, we will be installing Swift, and Glance will be using it. We'll also be using Cinder for block storage. Because it can be easily installed using orchestration, we will also be using Quantum.
* **Deployment topology.** The first decision is whether your deployment requires high availability. If you do choose to do an HA deployment, you have a choice regarding the number of controllers you want to have. Following the recommendations in the previous section for a typical HA topology, we will use 3 OpenStack controllers.
* **Deployment configuration.** The first decision is whether your deployment requires high availability. If you do choose to do an HA deployment, you have a choice regarding the number of controllers you want to have. Following the recommendations in the previous section for a typical HA deployment configuration, we will use 3 OpenStack controllers.
* **Cobbler server and Puppet Master.** The heart of a Fuel install is the combination of Puppet Master and Cobbler used to create your resources. Although Cobbler and Puppet Master can be installed on separate machines, it is common practice to install both on a single machine for small to medium size clouds, and that's what we'll be doing in this example. (By default, the Fuel ISO creates a single server with both services.)
* **Domain name.** Puppet clients generate a Certificate Signing Request (CSR), which is then signed by Puppet Master. The signed certificate can then be used to authenticate the client during provisioning. Certificate generation requires a fully qualified hostname, so you must choose a domain name to be used in your installation. We'll leave this up to you.
* **Network addresses.** OpenStack requires a minimum of three networks. If you are deploying on physical hardware two of them -- the public network and the internal, or management network -- must be routable in your networking infrastructure. Additionally, a set of private network addresses should be selected for automatic assignment to guest VMs. (These are fixed IPs for the private network). In our case, we are allocating network addresses as follows:
* **Network addresses.** OpenStack requires a minimum of three networks. If you are deploying on physical hardware, two of them -- the public network and the internal, or management network -- must be routable in your networking infrastructure. Also, if you intend for your cluster to be accessible from the Internet, you'll want the public network to be on the proper network segment. For simplicity, this example assumes an Internet router at 192.168.0.1. Additionally, a set of private network addresses should be selected for automatic assignment to guest VMs. (These are fixed IPs for the private network). In our case, we are allocating network addresses as follows:
* Public network: 10.20.1.0/24
* Public network: 192.168.0.0/24
* Internal network: 10.20.0.0/24
* Private network: 192.168.0.0/16
* Private network: 10.20.1.0/24
* **Network interfaces.** All of those networks need to be assigned to the available NIC cards on the allocated machines. Additionally, if a fourth NIC is available, Cinder or block storage traffic can also be separated and delegated to the fourth NIC. In our case, were assigning networks as follows:
* **Network interfaces.** All of those networks need to be assigned to the available NIC cards on the allocated machines. Additionally, if a fourth NIC is available, Cinder or block storage traffic can also be separated and delegated to the fourth NIC. In our case, we're assigning networks as follows:
* Public network: eth0
* Internal network: eth1
* Public network: eth1
* Internal network: eth0
* Private network: eth2

View File

@ -8,16 +8,18 @@ hardware and software in place.
Software
^^^^^^^^
You can download the latest release of the Fuel ISO from http://fuel.mirantis.com/your-downloads/
You can download the latest release of the Fuel ISO from http://fuel.mirantis.com/your-downloads/.
Alternatively, if you can't use the pre-built ISO, you can install Fuel Library and create the fuel-pm machine manually. The library can be downloaded as .tar.gz using the same link.
Alternatively, if you can't use the pre-built ISO, Mirantis also offers the Fuel Library as a tar.gz file downloadable from the `Downloads <http://fuel.mirantis.com/your-downloads/>`_ section of the Fuel portal.
Hardware for a virtual installation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
For a virtual installation, you need only a single machine. You can get
by on 8GB of RAM, but 16GB will be better. To actually perform the
by on 8GB of RAM, but 16GB will be better.
To actually perform the
installation, you need a way to create Virtual Machines. This guide
assumes that you are using version 4.2.6 of VirtualBox, which you can download from
@ -38,7 +40,7 @@ You will need to allocate the following resources:
* 1024+ MB of RAM
* 16+ GB of HDD for OS, and Linux distro storage
* 3 servers to act as OpenStack controllers (called fuel-controller-01, fuel-controller-02, and fuel-controller-03). The minimum configuration for a controller in Swift Compact mode is:
* 3 servers to act as OpenStack controllers (called fuel-controller-01, fuel-controller-02, and fuel-controller-03). The minimum configuration for a controller in Compact mode is:
* 64-bit architecture
* 1+ CPU 1024+ MB of RAM
@ -47,7 +49,6 @@ You will need to allocate the following resources:
* 1 server to act as the OpenStack compute node (called fuel-compute-01). The minimum configuration for a compute node with Cinder deployed on it is:
* 64-bit architecture
* 2+ CPU, with Intel VTx or AMDV virtualization technology
* 2048+ MB of RAM
* 50+ GB of HDD for OS, instances, and ephemeral storage
* 50+ GB of HDD for Cinder
@ -69,14 +70,14 @@ following hardware:
* 1024+ MB of RAM
* 16+ GB of HDD for OS, and Linux distro storage
* 3 servers to act as OpenStack controllers (called fuel-controller-01, fuel-controller-02, and fuel-controller-03). The minimum configuration for a controller in Swift Compact mode is:
* 3 servers to act as OpenStack controllers (called fuel-controller-01, fuel-controller-02, and fuel-controller-03). The minimum configuration for a controller in Compact mode is:
* 64-bit architecture
* 1+ CPU
* 1024+ MB of RAM
* 400+ GB of HDD
* 1 server to act as the OpenStack compute node (called fuelcompute-01). The minimum configuration for a compute node with Cinder deployed on it is:
* 1 server to act as the OpenStack compute node (called fuel-compute-01). The minimum configuration for a compute node with Cinder deployed on it is:
* 64-bit architecture
* 2+ CPU, with Intel VTx or AMDV virtualization technology
@ -87,5 +88,89 @@ following hardware:
additional server with specifications comparable to the controller
nodes.)
For a list of certified hardware configurations, please contact the
Mirantis Services team.
For a list of certified hardware configurations, please `contact the
Mirantis Services team <http://www.mirantis.com/contact/>`_.
Providing the OpenStack nodes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you are using hardware, make sure it is capable of PXE booting over
the network from Cobbler. You'll also need each server's mac address.
If you're using VirtualBox, you will need to create the corresponding
virtual machines for your OpenStack nodes. Follow these instructions
to create machines named fuel-controller-01, fuel-controller-02, fuel-
controller-03, and fuel-compute-01, but do not start them yet.
As you create each network adapter, click Advanced to expose and
record the corresponding mac address.
* Machine -> New...
* Name: fuel-controller-01 (you will need to repeat these steps for fuel-controller-02, fuel-controller-03, and fuel-compute-01)
* Type: Linux
* Version: Red Hat (64 Bit)
* Memory: 1024MB
* Machine -> System -> Motherboard...
* Check Network in Boot sequence
* Machine -> Settings -> Storage
* Controller: SATA
* Click the Add icon at the bottom of the Storage Tree pane
* Add a second VDI disk of 10GB for storage
* Machine -> Settings... -> Network
* Adapter 1
* Enable Network Adapter
* Attached to: Hostonly Adapter
* Name: vboxnet0
* Adapter 2
* Enable Network Adapter
* Attached to: Bridged Adapter
* Name: eth0 (physical network attached to the Internet. You can also use a gateway.)
* Adapter 3
* Enable Network Adapter
* Attached to: Hostonly Adapter
* Name: vboxnet2
* Advanced -> Promiscuous mode: Allow All
It is important that hostonly Adapter 1 goes first, as Cobbler will
use vboxnet0 for PXE, and VirtualBox boots from LAN on the first
available network adapter.
The additional drive volume will be used as storage space by Cinder, and will be configured automatically by Fuel.
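If you prefer to script the node creation rather than click through the GUI, the following VBoxManage sketch creates one node with the same adapters and storage described above. The bridged interface name (eth0) and the disk file name are assumptions to adjust for your host, and the primary OS disk should be created and attached the same way on port 0::
# register the VM and add Network to its boot sequence
VBoxManage createvm --name fuel-controller-01 --ostype RedHat_64 --register
VBoxManage modifyvm fuel-controller-01 --memory 1024 --boot4 net \
    --nic1 hostonly --hostonlyadapter1 vboxnet0 \
    --nic2 bridged --bridgeadapter2 eth0 \
    --nic3 hostonly --hostonlyadapter3 vboxnet2 --nicpromisc3 allow-all
# add the 10GB Cinder volume on the SATA controller
VBoxManage createhd --filename fuel-controller-01-cinder.vdi --size 10240
VBoxManage storagectl fuel-controller-01 --name SATA --add sata
VBoxManage storageattach fuel-controller-01 --storagectl SATA --port 1 \
    --device 0 --type hdd --medium fuel-controller-01-cinder.vdi
Repeat for the other controllers and the compute node; you can record each adapter's MAC address afterwards with ``VBoxManage showvminfo fuel-controller-01 | grep MAC``.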

View File

@ -54,14 +54,14 @@ ip assignments:
#. 10.20.0.0/24: management or internal network, for communication between Puppet master and Puppet clients, as well as PXE/TFTP/DHCP for Cobbler
#. 10.20.1.0/24: public network, for the High Availability (HA) Virtual IP (VIP), as well as floating IPs assigned to OpenStack guest VMs
#. 192.168.0.0/16: private network, fixed IPs automatically assigned to guest VMs by OpenStack upon their creation
#. 192.168.0.0/24: public network, for the High Availability (HA) Virtual IP (VIP), as well as floating IPs assigned to OpenStack guest VMs
#. 10.20.1.0/24: private network, fixed IPs automatically assigned to guest VMs by OpenStack upon their creation
Next we need to allocate a static IP address from the internal network
to eth1 for fuel-pm, and eth0 for the controller, compute, and (if necessary) quantum
to eth0 for fuel-pm, and eth1 for the controller, compute, and (if necessary) quantum
nodes. For High Availability (HA) we must choose and assign an IP
address from the public network to HAProxy running on the controllers.
You can configure network addresses/network mask according to your
@ -74,12 +74,13 @@ on the interfaces:
* 10.20.0.100 for Puppet Master
* 10.20.0.101-10.20.0.103 for the controller nodes
* 10.20.0.201-10.0.0.254 for the compute nodes
* 10.20.0.110-10.20.0.126 for the compute nodes
* 10.20.0.10 internal Virtual IP for component access
* 255.255.255.0 network mask
#. eth1: public network
* 10.20.1.10 public Virtual IP for access to the Horizon GUI (OpenStack management interface)
* 192.168.0.10 public Virtual IP for access to the Horizon GUI (OpenStack management interface)
#. eth2: for communication between OpenStack VMs; it carries no IP address and has promiscuous mode enabled.
@ -91,12 +92,20 @@ hostonly adapters exist and are configured correctly:
If you are on VirtualBox, create the following adapter:
If you are on VirtualBox, create the following adapters:
* VirtualBox -> Preferences...
* Network -> Add Bridged Adapter (vboxnet0)
* Attached to whichever NIC of the host machine has access to the Internet
* Network -> Add HostOnly Adapter (vboxnet0)
* IPv4 Address: 10.20.0.1
* IPv4 Network Mask: 255.255.255.0
* DHCP server: disabled
* Network -> Add HostOnly Adapter (vboxnet1)
* IPv4 Address: 10.20.1.1
* IPv4 Network Mask: 255.255.255.0
* DHCP server: disabled
* Network -> Add HostOnly Adapter (vboxnet2)
* IPv4 Address: 0.0.0.0
* IPv4 Network Mask: 255.255.255.0
* DHCP server: disabled
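These host-only interfaces can also be created from the host's command line; a rough VBoxManage equivalent is shown below (it assumes the interfaces come up as vboxnet0, vboxnet1, and vboxnet2, and that any automatically created DHCP servers need to be removed)::
# run once per interface; VirtualBox names them vboxnet0, vboxnet1, vboxnet2 in order
VBoxManage hostonlyif create
VBoxManage hostonlyif create
VBoxManage hostonlyif create
VBoxManage hostonlyif ipconfig vboxnet0 --ip 10.20.0.1 --netmask 255.255.255.0
VBoxManage hostonlyif ipconfig vboxnet1 --ip 10.20.1.1 --netmask 255.255.255.0
VBoxManage hostonlyif ipconfig vboxnet2 --ip 0.0.0.0 --netmask 255.255.255.0
# only needed if a DHCP server was created for the interface
VBoxManage dhcpserver remove --ifname vboxnet0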
After creating this interface, reboot the host machine to make sure that
@ -109,7 +118,7 @@ Virtual HostOnly Network adapter.
Creating fuel-pm on a Physical Machine
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
--------------------------------------
If you plan to provision the Puppet master on hardware, you need to
create a bootable DVD or USB disk from the downloaded ISO, then make
@ -117,7 +126,7 @@ sure that you can boot your server from the DVD or USB drive.
Creating fuel-pm on a Virtual Machine
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-------------------------------------
The process of creating a virtual machine to host Fuel in VirtualBox depends on
whether your deployment is purely virtual or consists of a virtual
@ -135,26 +144,34 @@ Start up VirtualBox and create a new machine as follows:
* Name: fuel-pm
* Type: Linux
* Version: Red Hat (32 or 64 Bit)
* Memory: 1024 MB
* Memory: 2048 MB
* Drive space: 16 GB HDD
* Machine -> Settings... -> Network
* Adapter 1
* Enable Network Adapter
* Attached to: Bridged Adapter
* Name: The host machine's network with access to the Internet
* Physical network
* Enable Network Adapter
* Attached to: Bridged Adapter
* Name: The host machine's network with access to the network on which the physical machines reside
* VirtualBox installation
* Enable Network Adapter
* Attached to: Hostonly Adapter
* Name: vboxnet0
* Adapter 2
* Enable Network Adapter
* Attached to: Hostonly Adapter
* Name: vboxnet0
* Attached to: Bridged Adapter
* Name: eth0 (or whichever physical network is attached to the Internet)
* Machine -> Storage
* Attach the downloaded ISO as a drive
* Attach the downloaded ISO as a drive
If you can't (or would rather not) install from the ISO, you can find instructions for installing from the Fuel Library in :ref:`Appendix A <Create-PM>`.

View File

@ -1,7 +1,5 @@
Installing Fuel from the ISO
----------------------------
Start the new machine to install the ISO. The only real installation decision you will need to make is to specify the interface through which the installer can access the Internet:
.. image:: /pages/installation-instructions/screenshots/fuel-iso-choose-nic.png
Start the new machine to install the ISO. The only real installation decision you will need to make is to specify the interface through which the installer can access the Internet. Choose eth1, as it is the interface bridged to the network with Internet access.

View File

@ -1,11 +1,7 @@
Configuring fuel-pm from the ISO installation
---------------------------------------------
Once fuel-pm finishes installing, you'll be presented with a basic menu:
.. image:: /pages/installation-instructions/screenshots/fuel-iso-menu.png
You can use this menu to set the basic information Fuel will need to configure your installation. You can customize these steps for your own situation, of course, but here are the steps to take for the example installation:
Once fuel-pm finishes installing, you'll be presented with a basic menu. You can use this menu to set the basic information Fuel will need to configure your installation. You can customize these steps for your own situation, of course, but here are the steps to take for the example installation:
#. To set the fully-qualified domain name for the master node and cloud domain, choose 1.
@ -14,15 +10,14 @@ You can use this menu to set the basic information Fuel will need to configure y
#. To configure the management interface, choose 2.
* The example specifies eth1 as the internal, or management interface, so enter that.
* The example specifies eth0 as the internal, or management interface, so enter that.
* The management network in the example is using static IP addresses, so specify no for using DHCP.
* Enter the IP address of 10.20.0.10 for the Puppet Master, and the netmask of 255.255.255.0.
* Because this interface is not using DHCP, you'll need to specify the gateway and DNS servers. If you're installing on VirtualBox, the gateway will be the host IP, 10.20.0.1.
* Enter the first and second DNS servers for your domain.
* Enter the IP address of 10.20.0.100 for the Puppet Master, and the netmask of 255.255.255.0.
* Under most situations, you can leave the other three options blank and Fuel will automatically detect the appropriate gateway, but you can specifically set the gateway and DNS servers if desired.
#. To configure the external interface, which will be used to send traffic to and from the internet, choose 3. Set the interface to eth0. By default, this interface uses DHCP, which is what the example calls for.
#. To configure the external interface, which will be used to send traffic to and from the internet, choose 3. Set the interface to eth1. By default, this interface uses DHCP, which is what the example calls for.
#. To choose the start and end addresses for compute nodes, choose 4. In the case of this example, the start address is 10.20.0.201 and the end address is 10.20.0.254.
#. To choose the start and end addresses for compute nodes, choose 4. In the case of this example, the start address is 10.20.0.110 and the end address is 10.20.0.126.
Future versions of Fuel will enable you to choose a custom set of repositories.

View File

@ -1,72 +1,86 @@
.. _Configuring-Cobbler:
Configuring Cobbler
-------------------
(NOTE: This section is a draft and is awaiting final testing before completion.)
Fuel uses a single file, config.yaml, to both configure Cobbler and assist in the configuration of the site.pp file. An example of this file will be distributed with later versions of Fuel, but in the meantime, you can use this file as an example:
Copy the sample config.yaml file to the current directory::
cp /root/config.yaml .
Fuel uses a single file, ``config.yaml``, to both configure Cobbler and assist in the configuration of the ``site.pp`` file. This file appears in the ``/root`` directory when the master node (fuel-pm) is provisioned and configured.
You'll want to configure this example for your own situation, but the example looks like this::
common:
orchestrator_common:
attributes:
deployment_mode: ha_full
deployment_mode: ha_compute
deployment_engine: simplepuppet
task_uuid: deployment_task
Change the deployment node to ``ha_compact`` to tell Fuel to use the Compact architecture. ::
Possible values for ``deployment_mode`` are ``singlenode_compute``, ``multinode_compute``, and ``ha_compute``. Change the ``deployment_mode`` to ``ha_compute`` to tell Fuel to use HA architecture. The ``simplepuppet`` ``deployment_engine`` means that the orchestrator will be calling Puppet on each of the nodes.
Next you'll need to set OpenStack's networking information::
openstack_common:
internal_virtual_ip: 10.20.0.200
public_virtual_ip: 192.168.56.100
create_networks: "pp"
internal_virtual_ip: 10.20.0.10
public_virtual_ip: 192.168.0.10
create_networks: true
fixed_range: 172.16.0.0/16
floating_range: 192.168.56.0/24
floating_range: 192.168.0.0/24
Change the virtual IPs to match the target networks, and set the fixed and floating ranges. ::
swift_loopback: loopback
nv_physical_volumes:
- /dev/sdb
By setting the ``nv_physical_volumes`` value, you are not only telling OpenStack to use this value for Cinder (you'll see more about that in the ``site.pp`` file), you're also telling Fuel to create and mount the appropriate partition.
Later, we'll set up a new partition for Cinder, so tell Cobbler to create it here. ::
external_ip_info:
public_net_router: 10.20.0.10
ext_bridge: 10.20.0.1
pool_start: 10.20.0.201
pool_end: 10.20.0.254
public_net_router: 192.168.0.1
ext_bridge: 0.0.0.0
pool_start: 192.168.0.110
pool_end: 192.168.0.126
Set the ``public_net_router`` to be the master node. The ``ext_bridge`` is, in this case, the host machine. ::
Set the ``public_net_router`` to be the master node. The ``ext_bridge`` is a legacy value, included for compatibility reasons, so you can set it to the same value as the ``public_net_router``, or simply to ``0.0.0.0``. The ``pool_start`` and ``pool_end`` values represent the public addresses of your nodes, and should be within the ``floating_range``. ::
segment_range: 900:999
use_syslog: false
syslog_server: 127.0.0.1
mirror_type: default
**THIS SETTING IS CRUCIAL:** The ``mirror_type`` **must** be set to ``default`` unless you have your own repositories set up, or OpenStack will not install properly. ::
quantum: true
internal_interface: eth0
public_interface: eth1
private_interface: eth2
public_netmask: 255.255.255.0
internal_netmask: 255.255.255.0
Earlier, we decided which interfaces to use for which networks; note that here. ::
Earlier, you decided which interfaces to use for which networks; note that here. ::
default_gateway: 10.20.0.10
default_gateway: 192.168.0.1
Set the default gateway to the master node. ::
Depending on how you've set up your network, you can either set the ``default_gateway`` to the master node (fuel-pm) or to the ``public_net_router``. ::
nagios_master: fuel-controller-01.local
nagios_master: fuel-controller-01.your-domain-name.com
loopback: loopback
cinder: true
cinder_on_computes: true
swift: true
We've chosen to run Cinder and Swift, so you'll need to note that here, as well, as noting that we want to run Cinder on the compute nodes, as opposed to the controllers or a separate node. ::
In this example, you're using Cinder and including it on the compute nodes, so note that appropriately. Also, you're using Swift, so turn that on here. ::
repo_proxy: http://10.20.0.100:3128
One improvement in Fuel 2.1 is the ability for the master node to cache downloads in order to speed up installs; by default the ``repo_proxy`` is set to point to fuel-pm in order to let that happen. ::
deployment_id: '53'
Fuel enables you to manage multiple clusters; setting the ``deployment_id`` will let Fuel know which deployment you're working with. ::
dns_nameservers:
- 10.20.0.10
- 10.20.0.100
- 8.8.8.8
The slave nodes should first look to the master node for DNS, so mark that as your first nameserver.
@ -74,42 +88,33 @@ The slave nodes should first look to the master node for DNS, so mark that as yo
The next step is to define the nodes themselves. To do that, you'll list each node once for each role that needs to be installed. ::
nodes:
- name: fuel-pm
role: cobbler
internal_address: 10.20.0.100
public_address: 192.168.0.100
- name: fuel-controller-01
role: controller
internal_address: 10.20.0.101
public_address: 10.20.1.101
public_address: 192.168.0.101
swift_zone: 1
- name: fuel-controller-02
role: controller
internal_address: 10.20.0.102
public_address: 10.20.1.102
- name: fuel-controller-01
role: compute
internal_address: 10.20.0.101
public_address: 10.20.1.101
- name: fuel-controller-02
role: compute
internal_address: 10.20.0.102
public_address: 10.20.1.102
- name: fuel-controller-01
role: storage
internal_address: 10.20.0.101
public_address: 10.20.1.101
- name: fuel-controller-02
role: storage
internal_address: 10.20.0.102
public_address: 10.20.1.102
- name: fuel-controller-01
role: swift-proxy
internal_address: 10.20.0.101
public_address: 10.20.1.101
- name: fuel-controller-02
role: swift-proxy
internal_address: 10.20.0.102
public_address: 10.20.1.102
public_address: 192.168.0.102
swift_zone: 2
- name: fuel-controller-03
role: controller
internal_address: 10.20.0.103
public_address: 192.168.0.103
swift_zone: 3
- name: fuel-controller-01
role: quantum
internal_address: 10.20.0.101
public_address: 10.20.1.101
public_address: 192.168.0.101
- name: fuel-compute-01
role: compute
internal_address: 10.20.0.110
public_address: 192.168.0.110
Notice that each node is listed multiple times; this is because each node fulfills multiple roles.
@ -117,20 +122,21 @@ The ``cobbler_common`` section applies to all machines::
cobbler_common:
# for Centos
# profile: "centos63_x86_64"
profile: "centos63_x86_64"
# for Ubuntu
profile: "ubuntu_1204_x86_64"
# profile: "ubuntu_1204_x86_64"
Fuel can install CentOS or Ubuntu on your servers, or you can add a profile of your own. ::
Fuel can install CentOS or Ubuntu on your servers, or you can add a profile of your own. By default, ``config.yaml`` uses Ubuntu, but for our example we'll use CentOS. ::
netboot-enabled: "1"
# for Ubuntu
# ksmeta: "puppet_version=2.7.19-1puppetlabs2 \
# for Centos
name-servers: "10.20.0.10"
name-servers: "10.20.0.100"
name-servers-search: "your-domain-name.com"
gateway: 10.20.0.100
Set the default nameserver to be fuel-pm, and change the domain name to your own domain name. ::
Set the default nameserver to be fuel-pm, and change the domain name to your own domain name. The master node will also serve as a default gateway for the nodes. ::
ksmeta: "puppet_version=2.7.19-1puppetlabs2 \
puppet_auto_setup=1 \
@ -142,7 +148,10 @@ Change the fully-qualified domain name for the Puppet Master to reflect your own
ntp_enable=1 \
mco_auto_setup=1 \
mco_pskey=un0aez2ei9eiGaequaey4loocohjuch4Ievu3shaeweeg5Uthi \
mco_stomphost=10.20.0.10 \
mco_stomphost=10.20.0.100 \
Make sure the ``mco_stomphost`` is set for the master node so that the orchestrator can find the nodes. ::
mco_stompport=61613 \
mco_stompuser=mcollective \
mco_stomppassword=AeN5mi5thahz2Aiveexo \
@ -152,122 +161,72 @@ This section sets the system up for orchestration; you shouldn't have to touch i
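Once the nodes have been provisioned and MCollective is running on them, a quick way to confirm that the orchestrator can actually reach them is the standard MCollective ping (run on fuel-pm; this check comes from the MCollective CLI itself, not from the kickstart above)::
mco ping
Each node that responds is visible to the orchestrator.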
Next you'll define the actual servers. ::
fuel-controller-01:
hostname: "fuel-controller-01"
role: controller
interfaces:
eth0:
mac: "08:00:27:75:58:C2"
static: "1"
ip-address: "10.20.0.101"
netmask: "255.255.255.0"
dns-name: "fuel-controller-01.your-domain-name.com"
management: "1"
eth1:
static: "0"
eth2:
static: "1"
interfaces_extra:
eth0:
peerdns: "no"
eth1:
peerdns: "no"
eth2:
promisc: "yes"
userctl: "yes"
peerdns: "no"
fuel-controller-01:
hostname: "fuel-controller-01"
role: controller
interfaces:
eth0:
mac: "08:00:27:BD:3A:7D"
static: "1"
ip-address: "10.20.0.101"
netmask: "255.255.255.0"
dns-name: "fuel-controller-01.your-domain-name.com"
management: "1"
eth1:
mac: "08:00:27:ED:9C:3C"
static: "0"
eth2:
mac: "08:00:27:B0:EB:2C"
static: "1"
interfaces_extra:
eth0:
peerdns: "no"
eth1:
peerdns: "no"
eth2:
promisc: "yes"
userctl: "yes"
peerdns: "no"
The only part of this section that you need to touch is the defintion of the eth0 interface; change the mac address to match the actual MAC address. (You can retrieve this information by expanding "Advanced" for the network adapater in VirtualBox, or by executing ifconfig on the server itself.) Also, make sure the ip-address is correct, and that the dns-name has your own domain name in it.
You can retrieve the MAC addresses for your network adapters by expanding "Advanced" for the adapter in VirtualBox, or by executing ifconfig on the server itself. Also, make sure the ``ip-address`` is correct, and that the ``dns-name`` has your own domain name in it.
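For example, on a node that is already running, the MAC address can be read straight from the interface (an illustration assuming CentOS-style ``ifconfig`` output, which reports it in the HWaddr field)::
ifconfig eth0 | grep HWaddr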
Repeat that step for any additional controllers::
In this example, IP addresses should be assigned as follows::
fuel-controller-02:
# If you need create 'cinder-volumes' VG at install OS -- uncomment this line and move it above in middle of ksmeta section.
# At this line you need describe list of block devices, that must come in this group.
# cinder_bd_for_vg=/dev/sdb,/dev/sdc \
hostname: "fuel-controller-02"
role: controller
interfaces:
eth0:
mac: "08:00:27:C4:D8:CF"
static: "1"
ip-address: "10.20.0.102"
netmask: "255.255.255.0"
dns-name: "fuel-controller-02.your-domain-name.com"
management: "1"
eth1:
static: "0"
eth2:
static: "1"
interfaces_extra:
eth0:
peerdns: "no"
eth1:
peerdns: "no"
eth2:
promisc: "yes"
userctl: "yes"
peerdns: "no"
fuel-controller-01: 10.20.0.101
fuel-controller-02: 10.20.0.102
fuel-controller-03: 10.20.0.103
fuel-compute-01: 10.20.0.110
fuel-controller-03:
# If you need create 'cinder-volumes' VG at install OS -- uncomment this line and move it above in middle of ksmeta section.
# At this line you need describe list of block devices, that must come in this group.
# cinder_bd_for_vg=/dev/sdb,/dev/sdc \
hostname: "fuel-controller-03"
role: controller
interfaces:
eth0:
mac: "08:00:27:C4:D8:CF"
static: "1"
ip-address: "10.20.0.103"
netmask: "255.255.255.0"
dns-name: "fuel-controller-03.your-domain-name.com"
management: "1"
eth1:
static: "0"
eth2:
static: "1"
interfaces_extra:
eth0:
peerdns: "no"
eth1:
peerdns: "no"
eth2:
promisc: "yes"
userctl: "yes"
peerdns: "no"
Repeat this step for each of the other controllers, and for the compute node. Note that the compute node has its own role::
fuel-compute-01:
# If you need create 'cinder-volumes' VG at install OS -- uncomment this line and move it above in middle of ksmeta section.
# At this line you need describe list of block devices, that must come in this group.
# cinder_bd_for_vg=/dev/sdb,/dev/sdc \
hostname: "fuel-compute-01"
role: compute
interfaces:
eth0:
mac: "08:00:27:C4:D8:CF"
static: "1"
ip-address: "10.20.0.201"
netmask: "255.255.255.0"
dns-name: "fuel-compute-01.your-domain-name.com"
management: "1"
eth1:
static: "0"
eth2:
static: "1"
interfaces_extra:
eth0:
peerdns: "no"
eth1:
peerdns: "no"
eth2:
promisc: "yes"
userctl: "yes"
peerdns: "no"
fuel-compute-01:
hostname: "fuel-compute-01"
role: compute
interfaces:
eth0:
mac: "08:00:27:AE:A9:6E"
static: "1"
ip-address: "10.20.0.110"
netmask: "255.255.255.0"
dns-name: "fuel-compute-01.your-domain-name.com"
management: "1"
eth1:
mac: "08:00:27:B7:F9:CD"
static: "0"
eth2:
mac: "08:00:27:8B:A6:B7"
static: "1"
interfaces_extra:
eth0:
peerdns: "no"
eth1:
peerdns: "no"
eth2:
promisc: "yes"
userctl: "yes"
peerdns: "no"
This file has been customized for the example in the docs, but in general you will need to be certain that IP and gateway information -- in addition to the MAC addresses -- matches the decisions you made earlier in the process.
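Before loading the configuration, it can be worth confirming that your edits left ``config.yaml`` as syntactically valid YAML. One quick way, assuming PyYAML is available on fuel-pm (Cobbler requires it), is::
python -c "import yaml; yaml.safe_load(open('/root/config.yaml'))"
If the command prints nothing, the file parses; a traceback points at the offending line.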
Load the configuration
^^^^^^^^^^^^^^^^^^^^^^

View File

@ -3,150 +3,23 @@
Installing the OS using Fuel
----------------------------
Now you're ready to start creating the OpenStack servers themselves.
The first step is to let Fuel's Cobbler kickstart and preseed files
assist in the installation of operating systems on the target servers.
Initial setup
^^^^^^^^^^^^^
If you are using hardware, make sure it is capable of PXE booting over
the network from Cobbler. You'll also need each server's mac address.
If you're using VirtualBox, you will need to create the corresponding
virtual machines for your OpenStack nodes. Follow these instructions
to create machines named fuel-controller-01, fuel-controller-02, fuel-
controller-03, and fuel-compute-02, but do not start them yet.
As you create each network adapter, click Advanced to expose and
record the corresponding mac address.
* Machine -> New...
* Name: fuel-controller-01 (you will need to repeat these steps for fuel-controller-02, fuel-controller-03, and fuel-compute-01)
* Type: Linux
* Version: Red Hat (64 Bit)
* Machine -> System -> Motherboard...
* Check Network in Boot sequence
* Machine -> Settings... -> Network
* Adapter 1
* Enable Network Adapter
* Attached to: Hostonly Adapter
* Name: vboxnet0
* Adapter 2
* Enable Network Adapter
* Attached to: Internal
* Name: vboxnet1
* Advanced -> Promiscuous mode: Allow All
* Adapter 3
* Enable Network Adapter
* Attached to: Internal
* Name: vboxnet2
* Advanced -> Promiscuous mode: Allow All
* Adapter 4
* Enable Network Adapter
* Attached to: NAT
* Machine -> Settings -> Storage
* Controller: SATA
* Click the Add icon at the bottom of the Storage Tree pane
* Add a second VDI disk of 10GB for storage
It is important that hostonly Adapter 1 goes first, as Cobbler will
use vboxnet0 for PXE, and VirtualBox boots from LAN on the first
available network adapter.
Adapter 4 is not strictly necessary, and can be thought of as an
implementation detail. Its role is to bypass a limitation of Hostonly
interfaces, and simplify internet access from the VM. It is possible
to accomplish the same without using Adapter 4, but it requires
bridged adapters or manipulating the iptables routes of the host, so
using Adapter 4 is much easier.
Also, the additional drive volume will be used as storage space by Cinder, and configured later in the process.
Installing OS on the nodes using Cobbler
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The first step in creating the actual OpenStack nodes is to let Fuel's Cobbler kickstart and preseed files assist in the installation of operating systems on the target servers.
Now that Cobbler has the correct configuration, the only thing you
need to do is to PXE-boot your nodes. This means that they will boot over the network, with
DHCP/TFTP provided by Cobbler, and will be provisioned accordingly,
with the specified operating system and configuration.
If you installed Fuel from the ISO, start fuel-controller-01 first and let the installation finish before starting the other nodes; Fuel will cache the downloads so subsequent installs will go faster.
In case of VirtualBox, start each virtual machine (fuel-controller-01,
fuel-controller-02, fuel-controller-03, fuel-compute-01) as follows:
The process for each node looks like this:
#. Start the VM.
#. Press F12 immediately and select l (LAN) as a bootable media.
#. Wait for the installation to complete.
#. Log into the new machine using root/r00tme.
#. **Change the root password.**
#. Check that networking is set up correctly and the machine can reach the Puppet Master and package repositories::
ping fuel-pm.your-domain-name.com
@ -154,71 +27,14 @@ fuel-controller-02, fuel-controller-03, fuel-compute-01) as follows:
If you're unable to ping outside addresses, add the fuel-pm server as a default gateway::
route add default gw 10.20.0.10
route add default gw 10.20.0.100
**It is important to note** that if you use VLANs in your network
configuration, you always have to keep in mind the fact that PXE
booting does not work on tagged interfaces. Therefore, all your nodes,
including the one where the Cobbler service resides, must share one
untagged VLAN (also called native VLAN). You can use the
dhcp_interface parameter of the cobbler::server class to bind the DHCP
service to a certain interface.
untagged VLAN (also called native VLAN). If necessary, you can use the
``dhcp_interface`` parameter of the ``cobbler::server`` class to bind the DHCP
service to the appropriate interface.
.. _create-the-XFS-partition:
Create the XFS partition
^^^^^^^^^^^^^^^^^^^^^^^^
The last step before installing OpenStack is to prepare the partitions
on which Swift and Cinder will store their data. Later versions of
Fuel will do this for you, but for now, manually prepare the volume by
fdisk and initialize it. To do that, follow these steps:
#. Create the partition itself::
fdisk /dev/sdb
n(for new)
p(for partition)
<enter> (to accept the defaults)
<enter> (to accept the defaults)
w(to save changes)
#. Initialize the XFS partition::
mkfs.xfs -i size=1024 -f /dev/sdb1
#. For a standard swift install, all data drives are mounted directly under /srv/node, so first create the mount point::
mkdir -p /srv/node/sdb1
#. Finally, add the new partition to fstab so it mounts automatically, then mount all current partitions::
echo "/dev/sdb1 /srv/node/sdb1 xfs
noatime,nodiratime,nobarrier,logbufs=8 0 0" >> /etc/fstab
mount -a

View File

@ -1,10 +1,20 @@
Preparing for deployment
------------------------
Before you can deploy OpenStack, you will need to configure the site.pp file. While previous versions of Fuel required you to manually configure site.pp, version 2.1 includes the ``openstack_system`` script, which uses both the ``config.yaml`` and template files for the various reference architectures to create the appropriate script. To create site.pp, execute this script::
Before you can deploy OpenStack, you will need to configure the site.pp file. While previous versions of Fuel required you to manually configure ``site.pp``, version 2.1 includes the ``openstack_system`` script, which uses both the ``config.yaml`` and template files for the various reference architectures to create the appropriate Puppet manifest. To create ``site.pp``, execute this command::
openstack_system -c /tmp/config.yaml
-t /etc/puppet/modules/openstack/examples/site_openstack_ha_compact.pp
-o /etc/puppet/manifests/site.pp
openstack_system -c config.yaml \
-t /etc/puppet/modules/openstack/examples/site_openstack_ha_compact.pp \
-o /etc/puppet/manifests/site.pp \
-a astute.yaml
From there you're ready to install your OpenStack components, but first let's look at what's actually in the script, so that you can undersand how to customize it if necessary.
The four parameters shown here represent the following:
* ``-c``: The absolute or relative path to the ``config.yaml`` file you customized earlier.
* ``-t``: The template file to serve as a basis for ``site.pp``. Possible templates include ``site_openstack_ha_compact.pp``, ``site_openstack_ha_minimal.pp``, ``site_openstack_ha_full.pp``, ``site_openstack_ha_single.pp``, and ``site_openstack_ha_simple.pp``.
* ``-o``: The output file. This should always be ``/etc/puppet/manifests/site.pp``.
* ``-a``: The orchestration configuration file, to be output for use in the next step.
From there you're ready to install your OpenStack components, but first let's look at what's actually in the new ``site.pp`` manifest, so that you can understand how to customize it if necessary. (Similarly, if you are installing Fuel Library without the ISO, you will need to make these customizations yourself.)
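Before moving on, you can also ask Puppet to syntax-check the generated manifest; this is standard Puppet functionality rather than part of ``openstack_system``::
puppet parser validate /etc/puppet/manifests/site.pp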

View File

@ -1,6 +1,6 @@
Deploying OpenStack
-------------------
Understanding the Puppet manifest
---------------------------------
At this point you have functioning servers that are ready to have
OpenStack installed. If you're using VirtualBox, save the current state
@ -8,11 +8,13 @@ of every virtual machine by taking a snapshot using ``File->Take Snapshot``. Thi
way you can go back to this point and try again if necessary.
The next step will be to go through the site.pp file and make any
necessary customizations. With one small exception, there shouldn't be anything to change, because ``openstack_system`` took care of it, but it's always good to understand what your system is doing.
The next step will be to go through the ``/etc/puppet/manifests/site.pp`` file and make any
necessary customizations. If you have run ``openstack_system``, there shouldn't be anything to change (with one small exception) but if you are installing Fuel manually, you will need to make these changes yourself.
In either case, it's always good to understand what your system is doing.
Lets start with the basic network customization::
Let's start with the basic network customization::
@ -35,73 +37,74 @@ In this case, we don't need to make any changes to the interface
settings, because they match what we've already set up. ::
# Public and Internal VIPs. These virtual addresses are required by HA topology and will be managed by keepalived.
$internal_virtual_ip = '10.20.0.200'
$internal_virtual_ip = '10.20.0.10'
# Change this IP to IP routable from your 'public' network,
# e. g. Internet or your office LAN, in which your public
# interface resides
$public_virtual_ip = '10.20.1.200'
$public_virtual_ip = '192.168.0.10'
Make sure the virtual IPs you see here mesh with your actual setup; they should be IPs that are routeable, but not within the range of the DHCP scope.
Make sure the virtual IPs you see here mesh with your actual setup; they should be IPs that are routable, but not within the range of the DHCP scope. These are the IPs through which your services will be accessed.
The next section sets up the servers themselves::
The next section sets up the servers themselves. If you are setting up Fuel manually, make sure to add each server with the appropriate IP addresses; if you ran ``openstack_system``, the values will be overridden by the next section, and you can ignore this array. ::
$nodes_harr = [
{
'name' => 'fuel-pm',
'role' => 'cobbler',
'internal_address' => '10.20.0.10',
'public_address' => '10.20.1.10',
'internal_address' => '10.20.0.100',
'public_address' => '192.168.0.100',
},
{
'name' => 'fuel-controller-01',
'role' => 'controller',
'internal_address' => '10.20.0.101',
'public_address' => '10.20.1.101',
'public_address' => '192.168.0.101',
},
{
'name' => 'fuel-controller-02',
'role' => 'controller',
'internal_address' => '10.20.0.102',
'public_address' => '10.20.1.102',
'public_address' => '192.168.0.102',
},
{
'name' => 'fuel-controller-03',
'role' => 'controller',
'internal_address' => '10.0.0.105',
'public_address' => '10.0.204.105',
'public_address' => '192.168.0.105',
},
{
'name' => 'fuel-compute-01',
'role' => 'compute',
'internal_address' => '10.0.0.106',
'public_address' => '10.0.204.106',
'public_address' => '192.168.0.106',
}
]
Because this section comes from a template, it will likely include a number of servers you're not using; feel free to leave them or take them out.
Next the site.pp file lists all of the nodes and roles you defined in the ``config.yaml`` file::
Next the ``site.pp`` file lists all of the nodes and roles you defined in the ``config.yaml`` file::
$nodes = [{'public_address' => '10.20.1.101','name' => 'fuel-controller-01',
'role' => 'controller','internal_address' => '10.20.0.101'},
{'public_address' => '10.20.1.102','name' => 'fuel-controller-02',
'role' => 'controller','internal_address' => '10.20.0.102'},
{'public_address' => '10.20.1.101','name' => 'fuel-controller-01',
'role' => 'compute','internal_address' => '10.20.0.101'},
{'public_address' => '10.20.1.102','name' => 'fuel-controller-02',
'role' => 'compute','internal_address' => '10.20.0.102'},
{'public_address' => '10.20.1.101','name' => 'fuel-controller-01',
'role' => 'storage','internal_address' => '10.20.0.101'},
{'public_address' => '10.20.1.102','name' => 'fuel-controller-02',
'role' => 'storage','internal_address' => '10.20.0.102'},
{'public_address' => '10.20.1.101','name' => 'fuel-controller-01',
'role' => 'swift-proxy','internal_address' => '10.20.0.101'},
{'public_address' => '10.20.1.102','name' => 'fuel-controller-02',
'role' => 'swift-proxy','internal_address' => '10.20.0.102'},
{'public_address' => '10.20.1.101','name' => 'fuel-controller-01',
'role' => 'quantum','internal_address' => '10.20.0.101'}]
$nodes = [{'public_address' => '10.20.1.101','name' => 'fuel-controller-01','role' =>
'controller','internal_address' => '10.20.0.101'},
{'public_address' => '10.20.1.102','name' => 'fuel-controller-02','role' =>
'controller','internal_address' => '10.20.0.102'},
{'public_address' => '10.20.1.101','name' => 'fuel-controller-01','role' =>
'compute','internal_address' => '10.20.0.101'},
{'public_address' => '10.20.1.102','name' => 'fuel-controller-02','role' =>
'compute','internal_address' => '10.20.0.102'},
{'public_address' => '10.20.1.101','name' => 'fuel-controller-01','role' =>
'storage','internal_address' => '10.20.0.101'},
{'public_address' => '10.20.1.102','name' => 'fuel-controller-02','role' =>
'storage','internal_address' => '10.20.0.102'},
{'public_address' => '10.20.1.101','name' => 'fuel-controller-01','role' =>
'swift-proxy','internal_address' => '10.20.0.101'},
{'public_address' => '10.20.1.102','name' => 'fuel-controller-02','role' =>
'swift-proxy','internal_address' => '10.20.0.102'},
{'public_address' => '10.20.1.101','name' => 'fuel-controller-01','role' =>
'quantum','internal_address' => '10.20.0.101'}]
Possible roles include compute, controller, storage, swift-proxy, quantum, master, and cobbler. Compute nodes do not need to be described here, because Puppet-driven network configuration is disabled for them by default; alternatively, you can force DHCP configuration to ensure proper assignment of IP addresses, default gateways, and DNS servers. IMPORTANT: The DNS servers must contain information about all nodes of the cluster. In the standard deployment scenario, the cobbler node provides this information.
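If you need to add machines of your own, the same pattern applies. Here is a purely hypothetical sketch of the entries you might add for a third controller that also acts as a storage node (the ``.103`` addresses are assumptions that simply continue the numbering above)::

{'public_address' => '10.20.1.103','name' => 'fuel-controller-03','role' =>
'controller','internal_address' => '10.20.0.103'},
{'public_address' => '10.20.1.103','name' => 'fuel-controller-03','role' =>
'storage','internal_address' => '10.20.0.103'},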
The file also specifies the default gateway to be the fuel-pm machine::
$default_gateway = '10.20.0.100'
Next ``site.pp`` defines DNS servers and provides netmasks::
# Specify nameservers here.
# These should point to the cobbler node IP, or to specially prepared nameservers if you know what you are doing.
# Specify netmasks for internal and external networks.
$internal_netmask = '255.255.255.0'
$public_netmask = '255.255.255.0'
...
Next come several general parameters::
#Set this to anything other than pacemaker if you do not want Quantum HA
#Also, if you do not want Quantum HA, you MUST enable $quantum_network_node
#on the ONLY controller
$ha_provider = 'pacemaker'
$use_unicast_corosync = false
Quantum is actually specified further down in the file, but this is where you specify whether you want Quantum to run in High Availability mode.
Next specify the main controller. ::
## proj_name name of environment nagios configuration
$proj_name = 'test'
Here again we have a parameter that looks ahead to things to come; OpenStack supports monitoring via Nagios. In this section, you can choose the Nagios master server as well as setting a project name. ::
#Specify if your installation contains multiple Nova controllers. Defaults to true as it is the most common scenario.
$multi_host = true
A single host cloud isn't especially useful, but if you really want to, you can set ``$multi_host`` to ``false``.
Finally, you can define the various usernames and passwords for OpenStack services. ::
...
# Specify different DB credentials for various services
$mysql_root_password = 'nova'
$admin_email = 'openstack@openstack.org'
$quantum_db_dbname = 'quantum'
# End DB credentials section
...
Now that the network is configured for the servers, let's look at the
various OpenStack services.
Enabling Quantum
^^^^^^^^^^^^^^^^
In order to deploy OpenStack with Quantum you need to set up an
additional node that will act as an L3 router, or run Quantum out of
one of the existing nodes. ::
### NETWORK/QUANTUM ###
In this case, we're using a "compact" architecture, so we want to place Quantum on the controllers. ::
$fixed_range = '172.16.0.0/16'
# Floating IP addresses are used for communication of VM instances with the outside world (e.g. Internet).
$floating_range = '192.168.0.0/24'
OpenStack uses two ranges of IP addresses for virtual machines: fixed IPs, which are used for communication between VMs, and thus are part of the private network, and floating IPs, which are assigned to VMs for the purpose of communicating to and from the Internet. ::
$network_size = 31
$vlan_start = 300
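# A hypothetical illustration using the value above: with $vlan_start = 300
# and, say, ten tenant networks, nova-network would generate VLAN IDs
# 300 through 309.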
These values don't actually relate to Quantum; they are used by nova-network. IDs for the VLANs OpenStack will create for tenants run from ``vlan_start`` to (``vlan_start + num_networks - 1``), and are generated automatically. ::
# Quantum
$quantum_gre_bind_addr = $internal_address
# Which IP does the Quantum network node have?
$quantum_net_node_hostname = 'fuel-controller-03'
$quantum_host = $controller_internal_addresses[$quantum_hostname]
If you are installing Quantum in non-HA mode, you will need to specify which single controller controls Quantum. ::
The remaining configuration is used to define classes that will be added to each node. ::
l23network::l3::ifconfig {$private_interface: ipaddr=>'none' }
}
### NETWORK/QUANTUM END ###
...
All of this assumes, of course, that you're using Quantum; if you're using nova-network instead, only the nova-network values apply.
Defining the current cluster
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Fuel enables you to control multiple deployments simultaneously by setting an individual deployment ID::
...
# This parameter specifies the identifier of the current cluster. This is needed in the case of a multiple
# environments installation. Each cluster requires a unique integer value.
# Valid identifier range is 1 to 254
$deployment_id = '79'
..
Enabling Cinder
^^^^^^^^^^^^^^^
This example also uses Cinder, and with
some very specific variations from the default. Specifically, as we
said before, while the Cinder scheduler will continue to run on the
controllers, the actual storage takes place on the compute nodes, on
the ``/dev/sdb1`` partition you created earlier. Cinder will be activated
on any node that contains the specified block devices -- unless
specified otherwise -- so let's look at what all of that means for the
configuration. ::
...
### CINDER/VOLUME ###
# Should we use cinder or nova-volume(obsolete)
# Should we install cinder on compute nodes?
$cinder_on_computes = true
We want Cinder to be on the compute nodes, so set this value to ``true``. ::
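# (The corresponding line is not shown in this excerpt; a minimal sketch,
# using the $cinder_iscsi_bind_iface variable discussed below.)
$cinder_iscsi_bind_iface = $internal_int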
Here you have the opportunity to specify which network interface
Cinder uses for its own traffic. For example, you could set up a fourth NIC at ``eth3``
and specify that rather than ``$internal_int``. ::
$nv_physical_volume = ['/dev/sdb']
### CINDER/VOLUME END ###
...
We only want to allocate the ``/dev/sdb`` value for Cinder, so adjust
``$nv_physical_volume`` accordingly. Note, however, that this is a global
value; it will apply to all servers, including the controllers --
unless we specify otherwise, which we will in a moment.
Now let's look at the other storage-based service: Swift.
Enabling Glance and Swift
^^^^^^^^^^^^^^^^^^^^^^^^^
There aren't many changes that you will need to make to the default
configuration in order to enable Swift to work properly in Swift
Compact mode, but you will need to adjust for the fact that we are
running Swift on physical partitions ::
...
# set 'loopback' or false
# This parameter controls where swift partitions are located:
# on physical partitions or inside loopback devices.
$swift_loopback = false
The default value is ``loopback``, which tells Swift to use a loopback storage device, which is basically a file that acts like a drive, rather than an actual physical drive. You can also set this value to ``false``, which tells OpenStack to use physical partitions instead. ::
# IP node of controller used during swift installation
# and put into swift configs
$controller_node_public = $internal_virtual_ip
# Hash of proxies hostname|fqdn => ip mappings.
# This is used by controller_ha.pp manifests for haproxy setup
# of swift_proxy backends
$swift_proxies = $controller_internal_addresses
Next, you're specifying the ``swift-master``::
# Set hostname of swift_master.
# It tells on which swift proxy node to build
} else {
$primary_controller = false
}
...
In this case, there's no separate ``fuel-swiftproxy-01``, so the master controller will be the primary Swift controller.
Configuring OpenStack to use syslog
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To use the syslog server, adjust the corresponding variables in the ``if $use_syslog`` clause::
...
$use_syslog = true
if $use_syslog {
class { "::rsyslog::client":
port => '514'
}
}
...
For remote logging, use the IP or hostname of the server for the ``server`` value and set the ``port`` appropriately. For local logging, set ``log_local`` and ``log_auth_local`` to ``true``.
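To make that concrete, here is a minimal sketch of the remote-logging case (the server address is an assumption standing in for your own syslog host; the class and parameter names are the ones shown in the excerpt and described above)::

$use_syslog = true
if $use_syslog {
  class { "::rsyslog::client":
    # For remote logging, point these at your syslog server (address assumed):
    server => '10.20.0.100',
    port   => '514',
    # For local logging, you would instead set log_local and log_auth_local to true.
  }
}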
Setting the version and mirror type
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You can customize the various versions of OpenStack's components, though it's typical to use the latest versions::
...
### Syslog END ###
case $::osfamily {
"Debian": {
$rabbitmq_version_string = '2.8.7-1'
}
"RedHat": {
$rabbitmq_version_string = '2.8.7-2.el6'
}
}
# OpenStack packages and customized component versions to be installed.
# Use 'latest' to get the most recent ones or specify exact version if you need to install custom version.
$openstack_version = {
'cinder' => 'latest',
'rabbitmq_version' => $rabbitmq_version_string,
}
...
To tell Fuel to download packages from external repos provided by Mirantis and your distribution vendors, make sure the ``$mirror_type`` variable is set to ``default``::
...
# If you want to set up a local repository, you will need to manually adjust mirantis_repos.pp,
# though it is NOT recommended.
$mirror_type = 'default'
$enable_test_repo = false
...
$repo_proxy = 'http://10.20.0.100:3128'
Once again, the ``$mirror_type`` **must** be set to ``default``. If you set it correctly in ``config.yaml`` and ran ``openstack_system``, this will already be taken care of. Otherwise, **make sure** to set this value yourself.
Future versions of Fuel will enable you to use your own internal repositories.
Setting verbosity
^^^^^^^^^^^^^^^^^
You also have the option to determine how much information OpenStack provides when performing configuration::
...
# This parameter specifies the verbosity level of log messages
# in openstack components config. Currently, it disables or enables debugging.
$verbose = true
...
Configuring Rate-Limits
^^^^^^^^^^^^^^^^^^^^^^^
OpenStack has predefined limits on different HTTP queries for nova-compute and cinder services. Sometimes (e.g. for big clouds or test scenarios) these limits are too strict (see http://docs.openstack.org/folsom/openstack-compute/admin/content/configuring-compute-API.html). In this case you can change them to more appropriate values.
There are two hashes describing these limits: ``$nova_rate_limits`` and ``$cinder_rate_limits``. ::
...
#Rate Limits for cinder and Nova
#Cinder and Nova can rate-limit your requests to API services.
#These limits can be reduced for your installation or usage scenario.
'PUT' => 1000, 'GET' => 1000,
'DELETE' => 1000
}
...
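Only the tail end of these hashes appears in the excerpt above. As a sketch of the overall shape, a complete hash might look like the following (the exact set of keys is an assumption; keep whatever keys your ``site.pp`` already defines)::

$nova_rate_limits = {
  'POST'         => 1000,
  'POST_SERVERS' => 1000,   # key assumed for illustration
  'PUT'          => 1000,
  'GET'          => 1000,
  'DELETE'       => 1000
}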
Enabling Horizon HTTPS/SSL mode
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Using the ``$horizon_use_ssl`` variable, you have the option to decide whether the OpenStack dashboard (Horizon) uses HTTP or HTTPS::
...
# 'custom': require fileserver static mount point [ssl_certs] and hostname based certificate existence
$horizon_use_ssl = false
class compact_controller (
...
This variable accepts the following values:
* ``false``: In this mode, the dashboard uses HTTP with no encryption.
* ``default``: In this mode, the dashboard uses keys supplied with the standard Apache SSL module package.
* ``exist``: In this case, the dashboard assumes that the domain name-based certificate, or keys, are provisioned in advance. This can be a certificate signed by any authorized provider, such as Symantec/Verisign, Comodo, GoDaddy, and so on. The system looks for the keys in these locations:
for Debian/Ubuntu:
* public ``/etc/ssl/certs/domain-name.pem``
* private ``/etc/ssl/private/domain-name.key``
for CentOS/RedHat:
* public ``/etc/pki/tls/certs/domain-name.crt``
* private ``/etc/pki/tls/private/domain-name.key``
* ``custom``: This mode requires a static mount point on the fileserver for ``[ssl_certs]`` and certificate pre-existence. To enable this mode, configure the puppet fileserver by editing ``/etc/puppet/fileserver.conf`` to add::
...
[ssl_certs]
path /etc/puppet/templates/ssl
allow *
..
From there, create the appropriate directory::
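# (The exact command is not shown in this excerpt; a minimal sketch,
# assuming the [ssl_certs] path configured above.)
mkdir -p /etc/puppet/templates/ssl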
Defining the node configurations
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Now that we've set all of the global values, it's time to make sure that
the actual node definitions are correct. For example, by default all
nodes will enable Cinder on ``/dev/sdb``, but we don't want that for the
controllers, so set ``nv_physical_volume`` to ``null``, and ``manage_volumes`` to ``false``. ::
...
class compact_controller (
  $quantum_network_node = $quantum_netnode_on_cnt
) {
  class { 'openstack::controller_ha':
    controller_public_addresses   => $controller_public_addresses,
    controller_internal_addresses => $controller_internal_addresses,
    internal_address        => $internal_address,
    public_interface        => $public_int,
    internal_interface      => $internal_int,
    ...
    tenant_network_type     => $tenant_network_type,
    segment_range           => $segment_range,
    cinder                  => $cinder,
    cinder_iscsi_bind_iface => $cinder_iscsi_bind_iface,
    manage_volumes          => false,
    galera_nodes            => $controller_hostnames,
    nv_physical_volume      => null,
    use_syslog              => $use_syslog,
    nova_rate_limits        => $nova_rate_limits,
    cinder_rate_limits      => $cinder_rate_limits,
    horizon_use_ssl         => $horizon_use_ssl,
    use_unicast_corosync    => $use_unicast_corosync,
    ha_provider             => $ha_provider
  }
  class { 'swift::keystone::auth':
    password         => $swift_user_password,
    public_address   => $public_virtual_ip,
    internal_address => $internal_virtual_ip,
    admin_address    => $internal_virtual_ip,
  }
}
...
Fortunately, Fuel includes a class for the controllers, so you don't
have to make these changes for each individual controller. As you can
see, the controllers generally use the global values, but in this case
you're telling the controllers not to manage_volumes, and not to use
``/dev/sdb`` for Cinder.
If you look down a little further, this class then goes on to help
specify the individual controllers and compute nodes::
...
# Definition of OpenStack controller nodes.
node /fuel-controller-[\d+]/ {
  include stdlib
  class { 'operatingsystem::checksupported':
    stage => 'setup'
  }

  class {'::node_netconfig':
    mgmt_ipaddr    => $::internal_address,
    mgmt_netmask   => $::internal_netmask,
    public_ipaddr  => $::public_address,
    public_netmask => $::public_netmask,
    stage          => 'netconfig',
  }

  class {'nagios':
    proj_name => $proj_name,
    services  => [
      'host-alive','nova-novncproxy','keystone', 'nova-scheduler',
      'nova-consoleauth', 'nova-cert', 'haproxy', 'nova-api', 'glance-api',
      'glance-registry','horizon', 'rabbitmq', 'mysql', 'swift-proxy',
      'swift-account', 'swift-container', 'swift-object',
    ],
    whitelist => ['127.0.0.1', $nagios_master],
    hostgroup => 'controller',
  }

  class { compact_controller: }
  $swift_zone = $node[0]['swift_zone']

  class { 'openstack::swift::storage_node':
    storage_type       => $swift_loopback,
    swift_zone         => $swift_zone,
    swift_local_net_ip => $internal_address,
  }

  class { 'openstack::swift::proxy':
    swift_user_password     => $swift_user_password,
    swift_proxies           => $swift_proxies,
    primary_proxy           => $primary_proxy,
    controller_node_address => $internal_virtual_ip,
    swift_local_net_ip      => $internal_address,
  }
}
Notice also that each controller has the ``swift_zone`` specified, so each of the three controllers can represent one of the three Swift zones.
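For reference, the ``$node`` variable used in the ``swift_zone`` lookup above is derived from the ``$nodes`` list defined earlier in ``site.pp``. A minimal sketch of how that lookup typically looks (the ``filter_nodes`` helper is an assumption here; check your own ``site.pp`` for the exact form)::

# Select this host's entries from the $nodes list (helper assumed)
$node = filter_nodes($nodes, 'name', $::hostname)
# The first matching entry supplies per-node values such as the Swift zone
$swift_zone = $node[0]['swift_zone']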
Installing Nagios Monitoring using Puppet
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Fuel provides a way to deploy Nagios for monitoring your OpenStack cluster. It will require the installation of an agent on the controller, compute, and storage nodes, as well as having a master server for Nagios which will collect and display all the results. An agent, the Nagios NRPE addon, allows OpenStack to execute Nagios plugins on remote Linux/Unix machines. The main reason for doing this is to monitor basic resources (such as CPU load, memory usage, etc.), as well as more advanced ones on remote machines.
Nagios Agent
~~~~~~~~~~~~
In order to install Nagios NRPE on a compute or controller node, a node should have the following settings: ::
class {'nagios':
proj_name => 'test',
hostgroup => 'compute',
}
* ``proj_name``: An environment for nagios commands and the directory (``/etc/nagios/test/``).
* ``services``: All services to be monitored by nagios.
* ``whitelist``: The array of IP addresses trusted by NRPE.
* ``hostgroup``: The group to be used in the nagios master (do not forget to create the group in the nagios master).
Nagios Server
~~~~~~~~~~~~~
In order to install the Nagios Master on any convenient node, the node should have the following settings: ::
'group' => 'admins'},
}
* ``proj_name``: The environment for nagios commands and the directory (``/etc/nagios/test/``).
* ``templatehost``: The group of checks and intervals parameters for hosts (as a Hash).
* ``templateservice``: The group of checks and intervals parameters for services (as a Hash).
* ``hostgroups``: All groups defined on the NRPE nodes (as an Array).
* ``contactgroups``: The group of contacts (as a Hash).
* ``contacts``: Contacts to receive error reports (as a Hash).
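Only the tail end of the Nagios master definition appears in this excerpt. As a rough sketch of what a fuller declaration might look like, built from the parameters listed above (the ``nagios::master`` class name and the sample values are assumptions; see ``deployment/puppet/nagios/manifests/`` for the actual interface)::

class { 'nagios::master':   # class name assumed for illustration
  proj_name       => 'test',
  templatehost    => {'name' => 'default-host', 'check_interval' => '10'},
  templateservice => {'name' => 'default-service', 'check_interval' => '10'},
  hostgroups      => ['controller', 'compute'],
  contactgroups   => {'group' => 'admins', 'alias' => 'admins'},
  contacts        => {'user' => 'admin', 'email' => 'root@localhost',
                      'group' => 'admins'},
}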
Health Checks
~~~~~~~~~~~~~
The complete definition of the available services to monitor and their health checks can be viewed at ``deployment/puppet/nagios/manifests/params.pp``.
Here is the list: ::
'host-alive' => 'check-host-alive',
}
Finally, back in ``site.pp``, you define the compute nodes::
# Definition of OpenStack compute nodes.
node /fuel-compute-[\d+]/ {
## Uncomment the lines below if you want
## to configure the network of these nodes
## with Puppet.
class {'::node_netconfig':
mgmt_ipaddr => $::internal_address,
mgmt_netmask => $::internal_netmask,
public_ipaddr => $::public_address,
public_netmask => $::public_netmask,
stage => 'netconfig',
}
include stdlib
class { 'operatingsystem::checksupported':
stage => 'setup'
}
class {'nagios':
proj_name => $proj_name,
services => [
'host-alive', 'nova-compute','nova-network','libvirt'
],
whitelist => ['127.0.0.1', $nagios_master],
hostgroup => 'compute',
}
class { 'openstack::compute':
public_interface => $public_int,
private_interface => $private_interface,
internal_address => $internal_address,
libvirt_type => 'kvm',
fixed_range => $fixed_range,
network_manager => $network_manager,
network_config => { 'vlan_start' => $vlan_start },
multi_host => $multi_host,
sql_connection => "mysql://nova:${nova_db_password}@${internal_virtual_ip}/nova",
rabbit_nodes => $controller_hostnames,
rabbit_password => $rabbit_password,
rabbit_user => $rabbit_user,
rabbit_ha_virtual_ip => $internal_virtual_ip,
glance_api_servers => "${internal_virtual_ip}:9292",
vncproxy_host => $public_virtual_ip,
verbose => $verbose,
vnc_enabled => true,
manage_volumes => $manage_volumes,
nova_user_password => $nova_user_password,
cache_server_ip => $controller_hostnames,
service_endpoint => $internal_virtual_ip,
quantum => $quantum,
quantum_sql_connection => $quantum_sql_connection,
quantum_user_password => $quantum_user_password,
quantum_host => $quantum_net_node_address,
tenant_network_type => $tenant_network_type,
segment_range => $segment_range,
cinder => $cinder_on_computes,
cinder_iscsi_bind_iface => $cinder_iscsi_bind_iface,
nv_physical_volume => $nv_physical_volume,
db_host => $internal_virtual_ip,
ssh_private_key => 'puppet:///ssh_keys/openstack',
ssh_public_key => 'puppet:///ssh_keys/openstack.pub',
use_syslog => $use_syslog,
nova_rate_limits => $nova_rate_limits,
cinder_rate_limits => $cinder_rate_limits
}
}
In the ``openstack/examples/site_openstack_full.pp`` example, the following nodes are specified:
* fuel-controller-01
* fuel-controller-02
* fuel-controller-03
* fuel-compute-[\d+]
* fuel-swift-01
* fuel-swift-02
* fuel-swift-03
* fuel-swiftproxy-[\d+]
* fuel-quantum
Using this architecture, the system includes three stand-alone swift-storage servers, and one or more swift-proxy servers. In the ``openstack/examples/site_openstack_compact.pp`` example, on the other hand, the roles of swift-storage and swift-proxy are combined with the controllers.
With ``site.pp`` prepared, you're ready to perform the actual installation.
Installing OpenStack using Puppet directly
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Now that you've set all of your configurations, all that's left to stand
up your OpenStack cluster is to run Puppet on each of your nodes; the
Puppet Master knows what to do for each of them.
You have two options for performing this step. The first, and by far the easiest, is to use the orchestrator. If you're going to take that option, skip ahead to :ref:`Deploying OpenStack via Orchestration <orchestration>`. If you choose not to use orchestration, or if for some reason you want to reload only one or two nodes, you can run Puppet manually on the target nodes.
If you're starting from scratch, start by logging in to fuel-controller-01 and running the Puppet
agent.
One optional step would be to use the script command to log all
of your output so you can check for errors if necessary::
script agent-01.log
puppet agent --test
You will see a great number of messages scroll by, and the
installation will take a significant amount of time. When the process
has completed, press CTRL-D to stop logging and grep for errors::
grep err: agent-01.log
If you find any errors relating to other nodes, ignore them for now.
Now you can run the same installation procedure on fuel-controller-02
and fuel-controller-03, as well as fuel-compute-01.
Note that the controllers must be installed sequentially due to the
nature of assembling a MySQL cluster based on Galera, which means that
one must complete its installation before the next begins, but that
compute nodes can be installed concurrently once the controllers are
in place.
In some cases, you may find errors related to resources that are not
yet available when the installation takes place. To solve that
problem, simply re-run the puppet agent on the affected node after running the other controllers, and
again grep for error messages.
When you see no errors on any of your nodes, your OpenStack cluster is
ready to go.
Examples of OpenStack installation sequences
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
**First, please see the link below for details about different deployment scenarios.**
:ref:`Swift-and-object-storage-notes`
When running Puppet manually, the exact sequence depends on what it is you're trying to achieve. In most cases, you'll need to run Puppet more than once; with every deployment pass Puppet collects and adds the necessary missing information to the OpenStack configuration, stores it in PuppetDB, and applies the necessary changes. Note that, except for the *Controller + Compute on the same node* case, no changes to ``site.pp`` are necessary between installation phases; simply use the appropriate ``site.pp`` from the OpenStack examples as the base file for your deployment.
**Note:** *Sequentially run* means you don't start the next node deployment until the previous one is finished.
**Example 1:** **Full OpenStack deployment with standalone storage nodes**
* Create necessary volumes on storage nodes as described in :ref:`create-the-XFS-partition`.
* Sequentially run a deployment pass on the controller nodes (``fuel-controller-01 ... fuel-controller-xx``).
* Run an additional deployment pass on Controller 1 only (``fuel-controller-01``) to finalize the Galera cluster configuration.
* Run a deployment pass on the Quantum node (``fuel-quantum``) to install the Quantum router.
* Run a deployment pass on every compute node (``fuel-compute-01 ... fuel-compute-xx``) - unlike the controllers, these nodes may be deployed in parallel.
* Sequentially run a deployment pass on every storage node (``fuel-swift-01`` ... ``fuel-swift-xx``). Errors in Swift storage such as */Stage[main]/Swift::Storage::Container/Ring_container_device[<device address>]: Could not evaluate: Device not found check device on <device address>* are expected on the Storage nodes during the deployment passes until the very final pass.
* If loopback devices are used on storage nodes (``$swift_loopback = 'loopback'`` in ``site.pp``) - run a deployment pass on every storage (``fuel-swift-01`` ... ``fuel-swift-xx``) node one more time. Skip this step if loopback is off (``$swift_loopback = false`` in ``site.pp``). Again, ignore errors in *Swift::Storage::Container* during this deployment pass.
* Run a deployment pass on every SwiftProxy node (``fuel-swiftproxy-01 ... fuel-swiftproxy-xx``). Node names are set by the ``$swift_proxies`` variable in ``site.pp``. There are 2 Swift Proxies by default.
* Repeat the deployment pass on every storage (``fuel-swift-01`` ... ``fuel-swift-xx``) node. No Swift storage errors should appear during this deployment pass.
**Example 2:** **Compact OpenStack deployment with storage and swift-proxy combined with nova-controller on the same nodes**
* Create the necessary volumes on controller nodes as described in :ref:`create-the-XFS-partition`.
* Sequentially run a deployment pass on the controller nodes (``fuel-controller-01 ... fuel-controller-xx``). Errors in Swift storage such as */Stage[main]/Swift::Storage::Container/Ring_container_device[<device address>]: Could not evaluate: Device not found check device on <device address>* are expected during the deployment passes until the very final pass.
* Run a deployment pass on the Quantum node (``fuel-quantum``) to install the Quantum router.
* Run a deployment pass on every compute node (``fuel-compute-01 ... fuel-compute-xx``) - unlike the controllers, these nodes may be deployed in parallel.
* Sequentially run one more deployment pass on every controller (``fuel-controller-01 ... fuel-controller-xx``) node. Again, ignore errors in *Swift::Storage::Container* during this deployment pass.
* Run an additional deployment pass *only* on the controller that hosts the SwiftProxy service. By default it is ``fuel-controller-01``. Again, ignore errors in *Swift::Storage::Container* during this deployment pass.
* Sequentially run one more deployment pass on every controller (``fuel-controller-01 ... fuel-controller-xx``) node to finalize storage configuration. No Swift storage errors should appear during this deployment pass.
**Example 3:** **OpenStack HA installation without Swift**
* Sequentially run a deployment pass on the controller nodes (``fuel-controller-01 ... fuel-controller-xx``). No errors should appear during this deployment pass.
* Run an additional deployment pass on Controller 1 only (``fuel-controller-01``) to finalize the Galera cluster configuration.
* Run a deployment pass on the Quantum node (``fuel-quantum``) to install the Quantum router.
* Run a deployment pass on every compute node (``fuel-compute-01 ... fuel-compute-xx``) - unlike the controllers, these nodes may be deployed in parallel.
**Example 4:** **The most simple OpenStack installation: Controller + Compute on the same node**
* Set the ``node /fuel-controller-[\d+]/`` variable in ``site.pp`` to match the hostname of the node on which you are going to deploy OpenStack. Set the ``node /fuel-compute-[\d+]/`` variable to **mismatch** the node name. Run a deployment pass on this node. No errors should appear during this deployment pass.
* Set the ``node /fuel-compute-[\d+]/`` variable in ``site.pp`` to match the hostname of the node on which you are going to deploy OpenStack. Set the ``node /fuel-controller-[\d+]/`` variable to **mismatch** the node name. Run a deployment pass on this node. No errors should appear during this deployment pass.

.. _orchestration:
Deploying via orchestration
---------------------------
Manually installing a handful of servers might be manageable, but repeatable installations, or those that involve a large number of servers, require automated orchestration. Now you can use orchestration with Fuel through the ``astute`` script. This script is configured using the ``astute.yaml`` file you created when you ran ``openstack_system``.
To run the orchestrator, log in to ``fuel-pm`` and execute::
astute -f astute.yaml
You will see a message on ``fuel-pm`` stating that the installation has started on fuel-controller-01. To see what's going on on the target node, type::
tail -f /var/log/syslog
for Ubuntu, or::
tail -f /var/log/messages
for CentOS/Red Hat.
Note that Puppet will require several runs to install all the different roles, so the first time it runs, the orchestrator will show an error, but it just means that the installation isn't complete. Also, after the first run on each server, the orchestrator doesn't output messages on fuel-pm; when it's finished running, it will return you to the command prompt. In the meantime, you can see what's going on by watching the logs on each individual machine.

Now you can take your new OpenStack cloud for a drive. Follow these steps:
#. On the host machine, open your browser to http://192.168.0.10/ (adjust this value to your own ``public_virtual_ip``) and login as nova/nova (unless you changed this information in ``site.pp``).
#. Click the Project tab in the left-hand column.
#. Under Manage Compute, choose Access & Security to set security settings:
#. Click Create Keypair and enter a name for the new keypair. The private key should download automatically; make sure to keep it safe.
#. Click Access & Security again and click Edit Rules for the default Security Group. Add a new rule allowing TCP connections from port 22 to port 22 for all IP addresses using a CIDR of 0.0.0.0/0. (You can also customize this setting as necessary.) Click Add Rule to save the new rule.
#. Add a second new rule allowing ICMP connections with a type and code of -1 to the default Security Group and click Add Rule to save.
#. Click Allocate IP To Project and add two new floating ips. Notice that they come from the pool specified in ``config.yaml`` and ``site.pp``.
#. Click Images & Snapshots, then Create Image. Enter a name and specify the Image Location as https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img, with a Format of QCOW2. Check the Public checkbox.
#. The dashboard should create the image, but a known OpenStack bug may prevent you from doing this in the browser. In that case, log in to any of the controllers as root and execute the following commands::
cd ~
source openrc
glance image-create --name cirros --container-format bare --disk-format qcow2 --is-public yes --location https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
#. Go back to the browser and refresh the page. Launch a new instance of this image
using the tiny flavor. Click the Networking tab and choose the default ``net04_ext`` network, then click the Launch button.
#. On the instances page:
#. Click the new instance and look at the settings.
#. Click the Logs tab to look at the logs.
#. Click the VNC tab to log in. If you see just a big black rectangle, the machine is in screensaver mode; click the grey area and press the space bar to wake it up, then login as ``cirros/cubswin:)``.
#. At the command line, enter ``ifconfig -a | more`` and see the assigned ip address.
#. Enter ``sudo fdisk -l`` to see that no volume has yet been assigned to this VM.
#. On the Instances page, click Assign Floating IP and assign an IP address to your instance. You can either choose from one of the existing created IPs by using the pulldown menu or click the plus sign (+) to choose a network and allocate a new IP address.
#. From your host machine, ping the floating ip assigned to this VM.
#. If that works, try to ``ssh cirros@floating-ip`` from the host machine.
#. Back in the browser, click Volumes and Create Volume. Create the new volume, and attach it to the instance.
#. Go back to the VNC tab and repeat ``fdisk -l`` and see the new unpartitioned disk attached.
From here, your new VM is ready to be used.

In addition to these configurations, Fuel is designed to be completely
customizable. Upcoming editions of this guide discuss techniques for
creating additional OpenStack deployment configurations.
To configure Fuel immediately for more extensive variations on these
use cases, you can `contact Mirantis for further assistance <http://www.mirantis.com/contact/>`_.

v1.0-essex
* Puppet manifests for deploying OpenStack Essex in HA mode
* Active/Active HA architecture for Essex, based on RabbitMQ / MySQL Galera / HAProxy / keepalived
* Cobbler-based bare-metal provisioning for CentOS 6.3 and RHEL 6.3
* Access to the mirror with OpenStack packages
* Configuration templates for different OpenStack cluster setups
* User Guide

One of the advantages of using Fuel is that it makes it easy to set up an OpenStack cluster so that you can feel your way around and get your feet wet. You can easily set up a cluster using test machines, or even virtual machines, but when you're ready to do an actual deployment, there are a number of things you need to consider.
In this section, you'll find information such as how to size the hardware for your cloud and how to handle large-scale deployments.

Sizing Hardware
---------------
One of the first questions that comes to mind when planning an OpenStack deployment is "what kind of hardware do I need?" Finding the answer is rarely simple, but getting some idea is not impossible.
Many factors contribute to decisions regarding hardware for an OpenStack cluster -- `contact Mirantis <http://www.mirantis.com/contact/>`_ for information on your specific situation -- but in general, you will want to consider the following four areas:
* CPU
* Memory
Your needs in each of these areas are going to determine your overall hardware requirements.
CPU
^^^
The basic consideration when it comes to CPU is how many GHZ you're going to need. To determine that, think about how many VMs you plan to support, and the average speed you plan to provide, as well as the maximum you plan to provide for a single VM. For example, consider a situation in which you expect:
You will need to take into account a couple of additional notes:
* Choose a good value CPU.
Memory
^^^^^^
The process of determining memory requirements is similar to determining CPU. Start by deciding how much memory will be devoted to each VM. In this example, with 4 GB per VM and a maximum of 32 GB for a single VM, you will need 400 GB of RAM.
However, remember that you need 6 servers to meet your CPU requirements, so instead you will need about 67 GB per server (400 GB / 6 servers).
Again, you do not want to oversubscribe memory.
Disk Space
^^^^^^^^^^
When it comes to disk space there are several types that you need to consider:
As far as local drive space that must reside on the compute nodes goes, in our example that comes to 5 TB in total.
Again you have 6 servers, so that means you're looking at .9TB per server (5 TB / 6 servers) for local drive space.
Throughput
~~~~~~~~~~
As far as throughput, that's going to depend on what kind of storage you choose. In general, you calculate IOPS based on the packing density (drive IOPS * drives in the server / VMs per server), but the actual drive IOPS will depend on what you choose. For example:
Clearly, SSD gives you the best performance, but the difference in cost between that and the lower end solution is going to be significant, to say the least. You'll need to decide based on your own situation.
Remote storage
~~~~~~~~~~~~~~
IOPS will also be a factor in determining how you decide to handle persistent storage. For example, consider these options for laying out your 50 TB of remote volume space:
* 12 drive storage frame using 3 TB 3.5" drives
* 36 TB raw, or 18 TB usable space per 2U frame
* 3 frames (50 TB / 18 TB per server)
* 12 slots x 100 IOPS per drive = 1200 Read IOPS, 600 Write IOPS per frame
* 3 frames x 1200 IOPS per frame / 100 VMs = 36 Read IOPS, 18 Write IOPS per VM
* 24 drive storage frame using 1TB 7200 RPM 2.5" drives
* 24 TB raw, or 12 TB usable space per 2U frame
* 5 frames (50 TB / 12 TB per server)
* 24 slots x 100 IOPS per drive = 2400 Read IOPS, 1200 Write IOPS per frame
* 5 frames x 2400 IOPS per frame / 100 VMs = 120 Read IOPS, 60 Write IOPS per VM
You can accomplish the same thing with a single 36 drive frame using 3 TB drives, but this becomes a single point of failure in your cluster.
Object storage
~~~~~~~~~~~~~~
When it comes to object storage, you will find that you need more space than you think. This example specifies 50 TB of object storage. Easy, right?
So how do you put that together? If you were to use 3 TB 3.5" drives, you could use a 12 drive storage frame, with 6 servers hosting 36 TB each (for a total of 216 TB).
You could also use a 36 drive storage frame, with just 2 servers hosting 108 TB each, but it's not recommended due to several factors, from the high cost of failure to replication and capacity issues.
Networking
^^^^^^^^^^
Perhaps the most complex part of designing an OpenStack cluster is the networking. An OpenStack cluster can involve multiple networks even beyond the Public, Private, and Internal networks. Your cluster may involve tenant networks, storage networks, multiple tenant private networks, and so on. Many of these will be VLANs, and all of them will need to be planned out.
In order to achieve this, you can use 2 1Gb links per server (2 x 1000 Mbits/second).
You can also increase throughput and decrease latency by using 2 10 Gb links, bringing the bandwidth per VM to 1 Gb/second, but if you're going to do that, you've got one more factor to consider.
Scalability and oversubscription
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
It is one of the ironies of networking that 1Gb Ethernet generally scales better than 10Gb Ethernet -- at least until 100Gb switches are more commonly available. It's possible to aggregate the 1Gb links in a 48 port switch, so that you have 48 1Gb links down, but 4 10GB links up. Do the same thing with a 10Gb switch, however, and you have 48 10Gb links down and 4 100Gb links up, resulting in oversubscription.
Like many other issues in OpenStack, you can avoid this problem to a great extent with careful planning. Problems only arise when you are moving between racks, so plan to create "pods", each of which includes both storage and compute nodes. Generally, a pod is the size of a non-oversubscribed L2 domain.
Hardware for this example
~~~~~~~~~~~~~~~~~~~~~~~~~
In this example, you are looking at:
Because your network will in all likelihood grow, it's best to choose 48 port switches. Also, as your network grows, you will need to consider uplinks and aggregation switches.
Summary
^^^^^^^
In general, your best bet is to choose a large multi-socket server, such as a 2 socket server with a balance in I/O, CPU, Memory, and Disk. Look for a 1U low cost R-class or 2U high density C-class server. Some good alternatives for compute nodes include:

Redeploying an environment
--------------------------
Because Puppet is additive only, there is no ability to revert changes as you would in a typical application deployment.
If a change needs to be backed out, you must explicitly add a configuration to reverse it, check this configuration in,
and promote it to production using the pipeline. This means that if a breaking change did get deployed into production,
typically a manual fix was applied, with the proper fix subsequently checked into version control.
Fuel combines the ability to isolate code changes while developing with minimizing the headaches associated
Environments
^^^^^^^^^^^^
* On the Master/Server Node
The Puppet Master tries to find modules using its ``modulepath`` setting, which is typically something like ``/etc/puppet/modules``.
You usually just set this value once in your ``/etc/puppet/puppet.conf``.
Environments expand on this idea and give you the ability to use different settings for different environments.
For example, you can specify several search paths. The following example dynamically sets the ``modulepath``
so Puppet will check a per-environment folder for a module before serving it from the main set::
[master]
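# (The remainder of this example is not shown in the excerpt; a typical
# dynamic modulepath, assumed here based on standard Puppet usage,
# would look like the following.)
modulepath = $confdir/environments/$environment/modules:$confdir/modules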
* On the Agent Node
Once the agent node makes a request, the Puppet Master gets informed of its environment.
If you don't specify an environment, the agent uses the default ``production`` environment.
To set an environment agent-side, just specify the environment setting in the ``[agent]`` block of ``puppet.conf``::
[agent]
environment = development
Deployment pipeline
^^^^^^^^^^^^^^^^^^^
In order to deploy multiple environments that don't interfere with each other,
you should specify the ``$deployment_id`` option in ``/etc/puppet/manifests/site.pp``. It should be an even integer value in the range of 1-254.
This value is used in dynamic environment-based tag generation. Fuel also applies that tag globally to all resources on each node. It is also used for the keepalived daemon, which evaluates a unique ``virtual_router_id``.
* Clean/Revert