added pdf_install.rst, pdf_reference.rst, modified conf.py, added contents refarch.rst, contents-user.rst, copyright_page.rst, install-guide.rst

This commit is contained in:
svetlana 2013-09-30 13:16:28 -07:00
parent cc685776ab
commit b7ba70a5a0
29 changed files with 3094 additions and 27 deletions

View File

@ -285,7 +285,8 @@ extensions += ['rst2pdf.pdfbuilder']
pdf_documents = [
('pdf_index', u'Fuel-for-OpenStack-3.1-UserGuide', u'User Guide',
u'2013, Mirantis Inc.'),
('pdf_install', u'Fuel-for-Openstack-3.1-InstallGuide', u'Installation Guide', u'2013, Mirantis, Inc.')
('pdf_install', u'Fuel-for-Openstack-3.1-InstallGuide', u'Installation Guide', u'2013, Mirantis Inc.'),
('pdf_reference', u'Fuel-for-OpenStack-3.1-ReferenceArchitecture', u'Reference Architecture', u'2013, Mirantis Inc.')
# (master_doc, project, project, copyright),
]
pdf_stylesheets = ['letter', 'mirantis']
@ -302,7 +303,7 @@ pdf_break_level = 1
# When a section starts in a new page, force it to be 'even', 'odd',
# or just use 'any'
pdf_breakside = 'any'
pdf_breakside = 'odd'
# Insert footnotes where they are defined instead of
# at the end.

contents-install.rst Normal file
View File

@ -0,0 +1,10 @@
.. index:: Table of Contents
.. _ToC:
.. toctree::
:maxdepth: 2
copyright_page
preface
install-guide

contents-refarch.rst Normal file
View File

@ -0,0 +1,9 @@
.. index:: Table of Contents
.. _ToC:
.. toctree::
:maxdepth: 2
preface
reference-architecture

contents-user.rst Normal file
View File

@ -0,0 +1,12 @@
.. index:: Table of Contents
.. _ToC:
.. toctree::
:maxdepth: 2
preface
.. include:: /pages/about-fuel/0070-introduction.rst
.. include:: /pages/installation-fuel-ui/red_hat_openstack.rst
.. include:: /pages/installation-fuel-ui/post-install-healthchecks.rst
.. include:: /pages/troubleshooting-ug-network-issues.rst

View File

@ -1,6 +1,12 @@
..index:: copyright_page
.. raw::pdf
©2013 Mirantis, Inc. All rights reserved.
PageBreak
.. index:: copyright_page
.. _CopyrightPage:
© 2013 Mirantis, Inc. All rights reserved.
This product is protected by U.S. and international copyright and
intellectual property laws. No part of this publication may be
@ -15,4 +21,4 @@ contains the latest information at the time of publication.
Mirantis, Inc. and the Mirantis Logo are trademarks of Mirantis, Inc.
and/or its affiliates in the United States and other countries.
Third party trademarks, service marks, and names mentioned in this
document are the properties of their respective owners.
document are the properties of their respective owners.

View File

@ -7,23 +7,20 @@ Installation Guide
.. contents:: :local:
:depth: 2
.. include:: /pages/preface/preface.rst
.. include:: /pages/about-fuel/prerequisites.rst
.. include:: /pages/about-fuel/0030-how-it-works.rst
.. include:: /pages/about-fuel/0040-reference-topologies.rst
.. include:: /pages/about-fuel/0050-supported-software.rst
.. include:: /pages/about-fuel/0060-download-fuel.rst
.. include:: /pages/production-considerations/0015-sizing-hardware.rst
.. include:: /pages/production-considerations/0020-deployment-pipeline.rst
.. include:: /pages/production-considerations/0030-large-deployments.rst
.. include:: /pages/installation-fuel-ui/install.rst
.. include:: /pages/installation-fuel-ui/networks.rst
.. include:: /pages/installation-fuel-cli/0000-preamble.rst
.. include:: /pages/installation-fuel-cli/0010-introduction.rst
.. include:: /pages/installation-fuel-cli/0015-before-you-start.rst
.. include:: /pages/installation-fuel-cli/0020-machines.rst
.. include:: /pages/installation-fuel-cli/0057-prepare-for-deployment.rst
.. include:: /pages/installation-fuel-cli/0060-understand-the-manifest.rst
.. include:: /pages/installation-fuel-cli/0070-orchestration.rst
.. include:: /pages/installation-fuel-cli/0080-testing-openstack.rst
.. include:: /pages/install-guide/0030-how-it-works.rst
.. include:: /pages/install-guide/0040-reference-topologies.rst
.. include:: /pages/install-guide/0050-supported-software.rst
.. include:: /pages/install-guide/0060-download-fuel.rst
.. include:: /pages/install-guide/0015-sizing-hardware.rst
.. include:: /pages/install-guide/0020-deployment-pipeline.rst
.. include:: /pages/install-guide/0030-large-deployments.rst
.. include:: /pages/install-guide/install.rst
.. include:: /pages/install-guide/networks.rst
.. include:: /pages/install-guide/0000-preamble.rst
.. include:: /pages/install-guide/0010-introduction.rst
.. include:: /pages/install-guide/0015-before-you-start.rst
.. include:: /pages/install-guide/0020-machines.rst
.. include:: /pages/install-guide/0057-prepare-for-deployment.rst
.. include:: /pages/install-guide/0060-understand-the-manifest.rst
.. include:: /pages/install-guide/0070-orchestration.rst
.. include:: /pages/install-guide/0080-testing-openstack.rst

View File

@ -0,0 +1,17 @@
In this section, you'll learn how to install OpenStack using Fuel. In
addition to getting a feel for the steps involved, you'll also gain valuable
familiarity with some of the customization options. While Fuel provides
different deployment configuration templates in the box, it is common for
administrators to modify the architecture to meet enterprise requirements.
Working hands on with Fuel for OpenStack will help you see how to move
certain features around from the standard installation.
The first step, however, is to commit to a deployment template. A balanced,
compact, and full-featured deployment is the Multi-node (HA) Compact
deployment, so that's what we'll be using through the rest of this guide.
Production installations require a physical hardware infrastructure, but you
can easily deploy a small simulation cloud on a single physical machine
using VirtualBox. You can follow these instructions to install an OpenStack
cloud into a test environment using VirtualBox, or to get a production-grade
installation using physical hardware.

View File

@ -0,0 +1,23 @@
How installation works
----------------------
While version 2.0 of Fuel provided the ability to simplify installation of
OpenStack, versions 2.1 and above include orchestration capabilities that
simplify deployment of OpenStack clusters. The deployment process using
Fuel CLI follows this general procedure:
#. Design your architecture.
#. Install Fuel onto the fuel-pm machine.
#. Configure Fuel.
#. Create the basic configuration and load it into Cobbler.
#. PXE-boot the servers so Cobbler can install the operating system and
prepare them for orchestration.
#. Use one of the templates included in Fuel to create the configuration that
populates Puppet's site.pp file.
#. Customize the site.pp file as needed.
#. Use the orchestrator to coordinate the installation of the appropriate
OpenStack components on each node.
Start by designing your architecture; you can find the details in the next
section, :ref:`Before You Start <0015-before-you-start.rst>`.

View File

@ -0,0 +1,68 @@
Before You Start
----------------
Before you begin your installation, you will need to make a number of
important decisions:
**OpenStack features.**
Your first decision is which of the
optional OpenStack features you will need. For example, you must decide
whether you want to install Swift, whether you want Glance to use Swift for
image storage, whether you want Cinder for block storage, and whether you
want nova-network or Quantum to handle your network
connectivity. In our example installation we will install Glance with
Swift support and Cinder for block storage. Also, because it can be easily
installed using orchestration, we will use Quantum.
**Deployment configuration.**
Next, you need to decide whether your
deployment requires high availability (HA). If you need HA for your
deployment, you have a choice regarding the number of controllers you want
to include. Following the recommendations in the previous section for a
typical HA deployment configuration, we will use three OpenStack controllers.
**Cobbler server and Puppet Master.**
The heart of any Fuel install is the
combination of Puppet Master and Cobbler used to create your resources.
Although Cobbler and Puppet Master can be installed on separate machines, it
is common practice to install both on a single machine for small to medium
size clouds, and that's what we'll be doing in this example. By default, the
Fuel ISO creates a single server with both services.
**Domain name.**
Puppet clients generate a Certificate Signing Request
(CSR), which is then signed by the Puppet Master. The signed certificate can
then be used to authenticate clients during provisioning. Certificate
generation requires a fully qualified hostname, so you must choose a domain
name to be used in your installation. Future versions of Fuel will enable
you to choose this domain name on your own; by default, Fuel 3.1 uses
``localdomain``.
**Network addresses.**
OpenStack requires a minimum of three networks. If
you are deploying on physical hardware, two of them -- the public network
and the internal, or management network -- must be routable in your
networking infrastructure. The third network is used by the nodes for
inter-node communications. Also, if you intend for your cluster to be
accessible from the Internet, you'll want the public network to be on the
proper network segment. For simplicity, this example assumes
an Internet router at 192.168.0.1. Additionally, a set of private network
addresses should be selected for automatic assignment to guest VMs. (These
are fixed IPs for the private network). In our case, we are allocating
network addresses as follows:
* Public network: 192.168.0.0/24
* Internal network: 10.0.0.0/24
* Private network: 10.0.1.0/24
**Network interfaces.**
All of those networks need to be assigned to the
available NIC cards on the allocated machines. Additionally, if a fourth NIC
is available, Cinder or block storage traffic can be separated and delegated
to the fourth NIC. In our case, we're assigning networks as follows:
* Public network: eth1
* Internal network: eth0
* Private network: eth2

View File

@ -0,0 +1,284 @@
.. raw:: pdf
PageBreak
.. index:: Sizing Hardware, Hardware Sizing
.. _Sizing_Hardware:
Sizing Hardware for Production Deployment
=========================================
.. contents:: :local:
One of the first questions people ask when planning an OpenStack deployment is
"what kind of hardware do I need?" There is no such thing as a one-size-fits-all
answer, but there are straightforward rules to selecting appropriate hardware
that will suit your needs. The Golden Rule, however, is to always accommodate
for growth. With the potential for growth accounted for, you can move on to the
actual hardware needs.
Many factors contribute to selecting hardware for an OpenStack cluster --
`contact Mirantis <http://www.mirantis.com/contact/>`_ for information on your
specific requirements -- but in general, you will want to consider the following
factors:
* Processing
* Memory
* Storage
* Networking
Your needs in each of these areas are going to determine your overall hardware
requirements.
Processing
----------
In order to calculate how much processing power you need to acquire you will
need to determine the number of VMs your cloud will support. You must also
consider the average and maximum processor resources you will allocate to each
VM. In the vast majority of deployments, the allocated resources will be the
same for all of your VMs. However, if you are planning to create groups of VMs
that have different requirements, you will need to calculate for all of them in
aggregate. Consider this example:
* 100 VMs
* 2 EC2 compute units (2 GHz) average
* 16 EC2 compute units (16 GHz) max
To make it possible to provide the maximum CPU in this example you will need at
least 5 CPU cores (16 GHz/(2.4 GHz per core * 1.3 to adjust for hyper-threading))
per machine, and at least 84 CPU cores ((100 VMs * 2 GHz per VM)/2.4 GHz per
core) in total. If you were to select the Intel E5 2650-70 8 core CPU, that
means you need 11 sockets (84 cores / 8 cores per socket). This breaks down to
six dual-socket servers (12 sockets / 2 sockets per server), for a "packing
density" of 17 VMs per server (102 VMs / 6 servers).
This process also accommodates growth since you now know what a single server
using this CPU configuration can support. You can add new servers accounting
for 17 VMs each as needed without having to re-calculate.
You will also need to take into account the following:
* This model assumes you are not oversubscribing your CPU.
* If you are considering Hyper-threading, count each core as 1.3, not 2.
* Choose a good value CPU that supports the technologies you require.
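The arithmetic above can be sketched as a short script. The figures (2.4 GHz cores, a 1.3 hyper-threading factor, 8-core dual-socket servers) come from the example; the helper name and rounding choices are illustrative assumptions, not part of Fuel itself:

```python
import math

def cpu_sizing(vms=100, avg_ghz=2.0, max_ghz=16.0, core_ghz=2.4,
               ht_factor=1.3, cores_per_socket=8, sockets_per_server=2):
    # Cores one machine needs to satisfy the per-VM maximum;
    # ceil rounds the text's 16 / (2.4 * 1.3) ≈ 5.1 up to 6.
    cores_for_max = math.ceil(max_ghz / (core_ghz * ht_factor))
    # Total cores to carry the average load across all VMs.
    total_cores = math.ceil(vms * avg_ghz / core_ghz)      # 84
    sockets = math.ceil(total_cores / cores_per_socket)    # 11
    servers = math.ceil(sockets / sockets_per_server)      # 6
    density = math.ceil(vms / servers)                     # 17 VMs per server
    return cores_for_max, total_cores, sockets, servers, density

print(cpu_sizing())  # (6, 84, 11, 6, 17)
```

Adding capacity later is then just a matter of adding servers at the computed packing density, without re-running the whole calculation.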
Memory
------
Continuing to use the example from the previous section, we need to determine
how much RAM will be required to support 17 VMs per server. Let's assume that
you need an average of 4 GBs of RAM per VM with dynamic allocation for up to
12GBs for each VM. Calculating that all VMs will be using 12 GBs of RAM requires
that each server have 204 GBs of available RAM.
You must also consider that the node itself needs sufficient RAM to accommodate
core OS operations as well as RAM for each VM container (not the RAM allocated
to each VM, but the memory the core OS uses to run the VM). The node's OS must
run its own operations, schedule processes, allocate dynamic resources, and
handle network operations, so giving the node itself at least 16 GBs of RAM
is not unreasonable.
Given that server RAM comes in 4 GB, 8 GB, 16 GB,
and 32 GB sticks, we would need a total of 256 GBs of RAM installed per server.
An average dual-socket server board provides 16-24 RAM slots, so to have
256 GBs installed you would need sixteen 16 GB sticks of RAM to satisfy your RAM
needs for up to 17 VMs requiring dynamic allocation up to 12 GBs and to support
all core OS requirements.
You can adjust this calculation based on your needs.
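As a sanity check, the per-server figure can be computed directly. This is a sketch; the 17-VM density, 12 GB dynamic ceiling, and 16 GB host overhead are the example's numbers:

```python
def ram_needed_gb(vms=17, peak_gb_per_vm=12, host_os_gb=16):
    # RAM required if every VM hits its dynamic ceiling, plus host OS overhead.
    return vms * peak_gb_per_vm + host_os_gb

# 220 GB strictly needed; rounded up to sixteen 16 GB sticks = 256 GB installed
print(ram_needed_gb())  # 220
```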
Storage Space
-------------
When it comes to disk space there are several types that you need to consider:
* Ephemeral (the local drive space for a VM)
* Persistent (the remote volumes that can be attached to a VM)
* Object Storage (such as images or other objects)
As far as local drive space that must reside on the compute nodes, in our
example of 100 VMs we make the following assumptions:
* 150 GB local storage per VM
* 15 TB total of local storage (100 VMs * 150 GB per VM)
* 500 GB of persistent volume storage per VM
* 50 TB total persistent storage
Returning to our already established example, we need to figure out how much
storage to install per server. This storage will service the 17 VMs per server.
If we are assuming 150 GBs of storage for each VM's drive container, then we would
need to install 2.5 TBs of storage on the server. Since most servers have
anywhere from 4 to 32 2.5" drive slots or 2 to 12 3.5" drive slots, depending on
server form factor (i.e., 2U vs. 4U), you will need to consider how the storage
will be impacted by the intended use.
If storage impact is not expected to be significant, then you may consider using
unified storage. For this example a single 3 TB drive would provide more than
enough storage for seventeen 150 GB VMs. If speed is really not an issue, you might even
consider installing two or three 3 TB drives and configure a RAID-1 or RAID-5
for redundancy. If speed is critical, however, you will likely want to have a
single hardware drive for each VM. In this case you would likely look at a 3U
form factor with 24-slots.
Don't forget that you will also need drive space for the node itself, and don't
forget to order the correct backplane that supports the drive configuration
that meets your needs. Using our example specifications and assuming that speed
is critical, a single server would need 18 drives, most likely 2.5" 15,000 RPM
146 GB SAS drives.
Throughput
++++++++++
As far as throughput, that's going to depend on what kind of storage you choose.
In general, you calculate IOPS based on the packing density (drive IOPS * drives
in the server / VMs per server), but the actual drive IOPS will depend on the
drive technology you choose. For example:
* 3.5" slow and cheap (100 IOPS per drive, with 2 mirrored drives)
* 100 IOPS * 2 drives / 17 VMs per server = 12 Read IOPS, 6 Write IOPS
* 2.5" 15K (200 IOPS, four 600 GB drives, RAID-10)
* 200 IOPS * 4 drives / 17 VMs per server = 48 Read IOPS, 24 Write IOPS
* SSD (40K IOPS, eight 300 GB drives, RAID-10)
* 40K * 8 drives / 17 VMs per server = 19K Read IOPS, 9.5K Write IOPS
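The per-VM figures above follow from the packing density. A sketch, assuming (as the examples appear to) that writes run at half the read rate:

```python
import math

def per_vm_iops(drive_iops, drives, vms_per_server=17):
    # Aggregate drive IOPS divided across the VMs on one server.
    reads = math.ceil(drive_iops * drives / vms_per_server)
    return reads, reads // 2   # (read IOPS, write IOPS) per VM

print(per_vm_iops(100, 2))     # (12, 6)    mirrored 3.5" drives
print(per_vm_iops(200, 4))     # (48, 24)   2.5" 15K RAID-10
print(per_vm_iops(40_000, 8))  # SSD: roughly 19K reads, 9.5K writes
```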
Clearly, SSD gives you the best performance, but the difference in cost between
SSDs and the less costly platter-based solutions is going to be significant, to
say the least. The acceptable cost burden is determined by the balance between
your budget and your performance and redundancy needs. It is also important to
note that the rules for redundancy in a cloud environment are different than a
traditional server installation in that entire servers provide redundancy as
opposed to making a single server instance redundant.
In other words, the weight for redundant components shifts from individual OS
installation to server redundancy. It is far more critical to have redundant
power supplies and hot-swappable CPUs and RAM than to have redundant compute
node storage. If, for example, you have 18 drives installed on a server, with
17 of them allocated directly to individual VMs, and one drive fails, you simply
replace the drive and push a new node copy. The remaining VMs carry whatever
additional load is present due to the temporary loss of one node.
Remote storage
++++++++++++++
IOPS will also be a factor in determining how you plan to handle persistent
storage. For example, consider these options for laying out your 50 TB of remote
volume space:
* 12 drive storage frame using 3 TB 3.5" drives mirrored
* 36 TB raw, or 18 TB usable space per 2U frame
* 3 frames (50 TB / 18 TB per frame)
* 12 slots x 100 IOPS per drive = 1200 Read IOPS, 600 Write IOPS per frame
* 3 frames x 1200 IOPS per frame / 100 VMs = 36 Read IOPS, 18 Write IOPS per VM
* 24 drive storage frame using 1TB 7200 RPM 2.5" drives
* 24 TB raw, or 12 TB usable space per 2U frame
* 5 frames (50 TB / 12 TB per frame)
* 24 slots x 100 IOPS per drive = 2400 Read IOPS, 1200 Write IOPS per frame
* 5 frames x 2400 IOPS per frame / 100 VMs = 120 Read IOPS, 60 Write IOPS per VM
You can accomplish the same thing with a single 36 drive frame using 3 TB
drives, but this becomes a single point of failure in your cluster.
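The frame math above can be reproduced in a few lines. This is a sketch under the stated assumptions (mirroring halves the raw space, 100 IOPS per drive, 100 VMs sharing the volume pool):

```python
import math

def remote_storage(total_tb, slots, drive_tb, drive_iops=100, total_vms=100):
    usable_tb = slots * drive_tb / 2               # mirrored drives
    frames = math.ceil(total_tb / usable_tb)       # 2U frames required
    read_per_vm = frames * slots * drive_iops / total_vms
    return frames, read_per_vm, read_per_vm / 2    # frames, read, write IOPS/VM

print(remote_storage(50, 12, 3))  # (3, 36.0, 18.0)
print(remote_storage(50, 24, 1))  # (5, 120.0, 60.0)
```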
Object storage
++++++++++++++
When it comes to object storage, you will find that you need more space than
you think. For example, this example specifies 50 TB of object storage.
*Easy, right?* Not really.
Object storage uses a default of 3 times the required space for replication,
which means you will need 150 TB. However, to accommodate two hand-off zones,
you will need 5 times the required space, which actually means 250 TB.
The calculations don't end there. You don't ever want to run out of space, so
"full" should really be more like 75% of capacity, which means you will need a
total of 333 TB, or a multiplication factor of 6.66.
Of course, that might be a bit much to start with; you might want to start
with a happy medium of a multiplier of 4, then acquire more hardware as your
drives begin to fill up. That calculates to 200 TB in our example. So how do
you put that together? If you were to use 3 TB 3.5" drives, you could use a 12
drive storage frame, with 6 servers hosting 36 TB each (for a total of 216 TB).
You could also use a 36 drive storage frame, with just 2 servers hosting 108 TB
each, but this is not recommended because, with only two servers, a single
failure has a severe impact on both replication and capacity.
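The multiplication factors above reduce to one small formula. A sketch of the reasoning (3 replicas plus 2 hand-off zones, never running past 75% full):

```python
def object_storage_tb(required_tb=50, replicas=3, handoff_zones=2,
                      fill_ratio=0.75):
    copies = replicas + handoff_zones          # 5 copies of the data
    return required_tb * copies / fill_ratio   # headroom so "full" is 75%

total = object_storage_tb()
# 333 TB, a factor of about 6.67 (the text truncates this to 6.66)
print(round(total), round(total / 50, 2))
```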
Networking
----------
Perhaps the most complex part of designing an OpenStack cluster is the
networking.
An OpenStack cluster can involve multiple networks even beyond the Public,
Private, and Internal networks. Your cluster may involve tenant networks,
storage networks, multiple tenant private networks, and so on. Many of these
will be VLANs, and all of them will need to be planned out in advance to avoid
configuration issues.
In terms of the example network, consider these assumptions:
* 100 Mbits/second per VM
* HA architecture
* Network Storage is not latency sensitive
In order to achieve this, you can use two 1 Gb links per server (2 x 1000
Mbits/second / 17 VMs = 118 Mbits/second).
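That division, for both the 1 Gb and 10 Gb cases, can be sketched as:

```python
def mbps_per_vm(links=2, link_mbps=1000, vms_per_server=17):
    # Aggregate uplink bandwidth on one server shared across its VMs.
    return links * link_mbps / vms_per_server

print(round(mbps_per_vm()))                 # 118 Mbits/second with 2 x 1 Gb
print(round(mbps_per_vm(link_mbps=10000)))  # 1176 Mbits/second with 2 x 10 Gb
```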
Using two links also helps with HA. You can also increase throughput and
decrease latency by using two 10 Gb links, bringing the bandwidth per VM to
1 Gb/second, but if you're going to do that, you've got one more factor to
consider.
Scalability and oversubscription
++++++++++++++++++++++++++++++++
It is one of the ironies of networking that 1 Gb Ethernet generally scales
better than 10Gb Ethernet -- at least until 100 Gb switches are more commonly
available. It's possible to aggregate the 1 Gb links in a 48 port switch, so
that you have 48 x 1 Gb links down, but 4 x 10 Gb links up. Do the same thing with a
10 Gb switch, however, and you have 48 x 10 Gb links down and 4 x 40 Gb links up,
resulting in oversubscription.
Like many other issues in OpenStack, you can avoid this problem to a great
extent with careful planning. Problems only arise when you are moving between
racks, so plan to create "pods", each of which includes both storage and
compute nodes. Generally, a pod is the size of a non-oversubscribed L2 domain.
Hardware for this example
+++++++++++++++++++++++++
In this example, you are looking at:
* 2 data switches (for HA), each with a minimum of 12 ports for data
(2 x 1 Gb links per server x 6 servers)
* 1 x 1 Gb switch for IPMI (1 port per server x 6 servers)
* Optional Cluster Management switch, plus a second for HA
Because your network will in all likelihood grow, it's best to choose 48 port
switches. Also, as your network grows, you will need to consider uplinks and
aggregation switches.
Summary
-------
In general, your best bet is to choose a 2 socket server with a balance in I/O,
CPU, Memory, and Disk that meets your project requirements.
Look for 1U R-class or 2U high-density C-class servers. Some good options
from Dell for compute nodes include:
* Dell PowerEdge R620
* Dell PowerEdge C6220 Rack Server
* Dell PowerEdge R720XD (for high disk or IOPS requirements)
You may also want to consider systems from HP (http://www.hp.com/servers) or
from a smaller systems builder like Aberdeen, a manufacturer that specializes
in powerful, low-cost systems and storage servers (http://www.aberdeeninc.com).

View File

@ -0,0 +1,109 @@
.. raw:: pdf
PageBreak
.. index:: Redeploying An Environment
.. _Redeploying_An_Environment:
Redeploying An Environment
==========================
.. contents:: :local:
Because Puppet is additive only, there is no ability to revert changes as you
would in a typical application deployment. If a change needs to be backed out,
you must explicitly add a configuration to reverse it, check the configuration
in, and promote it to production using the pipeline. This means that if a
breaking change does get deployed into production, typically a manual fix is
applied, with the proper fix subsequently checked into version control.
Fuel offers the ability to isolate code changes while developing a deployment
and minimizes the headaches associated with maintaining multiple configurations
through a single Puppet Master by creating what are called environments.
Environments
------------
Puppet supports assigning nodes 'environments'. These environments can be
mapped directly to your development, QA, and production life cycles, so it's a
way to distribute code to nodes that are assigned to those environments.
**On the Master node:**
The Puppet Master tries to find modules using its ``modulepath`` setting,
which by default is ``/etc/puppet/modules``. It is common practice to set
this value once in your ``/etc/puppet/puppet.conf``. Environments expand on
this idea and give you the ability to use different settings for different
configurations.
For example, you can specify several search paths. The following example
dynamically sets the ``modulepath`` so Puppet will check a per-environment
folder for a module before serving it from the main set:
.. code-block:: ini
[master]
modulepath = $confdir/$environment/modules:$confdir/modules
[production]
manifest = $confdir/manifests/site.pp
[development]
manifest = $confdir/$environment/manifests/site.pp
**On the Slave Node:**
Once the slave node makes a request, the Puppet Master gets informed of its
environment. If you don't specify an environment, the agent uses the default
``production`` environment.
To set a slave-side environment, just specify the environment setting in the
``[agent]`` block of ``puppet.conf``:
.. code-block:: ini
[agent]
environment = development
Deployment pipeline
-------------------
1. Deploy
In order to deploy multiple environments that don't interfere with each other,
you should specify the ``deployment_id`` option in the YAML file.
It should be an even integer value in the range of 2-254.
This value is used in dynamic environment-based tag generation. Fuel applies
that tag globally to all resources and some services on each node.
2. Clean/Revert
At this stage you just need to make sure the environment is back in its
original, untouched state.
3. Puppet node deactivate
This will ensure that any resources exported by that node will stop appearing
in the catalogs served to the slave nodes::
puppet node deactivate <node>
where ``<node>`` is the fully qualified domain name as seen in
``puppet cert list --all``.
You can deactivate nodes manually one by one, or execute the following
command to automatically deactivate all nodes::
puppet cert list --all | awk '! /DNS:puppet/ { gsub(/"/, "", $2); print $2}' | xargs puppet node deactivate
4. Redeploy
Start the puppet agent again to apply the desired node configuration.
.. seealso::
http://puppetlabs.com/blog/a-deployment-pipeline-for-infrastructure/
http://docs.puppetlabs.com/guides/environment.html

View File

@ -0,0 +1,307 @@
Infrastructure Allocation and Installation
------------------------------------------
The next step is to make sure that you have all of the required hardware and
software in place.
Software
^^^^^^^^
You can download the latest release of the Fuel ISO from
http://fuel.mirantis.com/your-downloads/.
Alternatively, if you can't use the pre-built ISO, Mirantis offers the Fuel
Library as a tar.gz file downloadable from the `Downloads
<http://fuel.mirantis.com/your-downloads/>`_ section of the Fuel portal.
Using this file requires a bit more effort, but will yield the same results
as using the ISO.
Network setup
^^^^^^^^^^^^^
OpenStack requires a minimum of three distinct networks: internal (or
management), public, and private. The simplest and best methodology to map
NICs is to assign each network to a different physical interface. However,
not all machines have three NICs, and OpenStack can be configured and
deployed with only two physical NICs, combining the internal and public
traffic onto a single NIC.
If you are building a simulation environment, you are not limited to the
availability of physical NICs. Allocate three NICs to each VM in your
OpenStack infrastructure, one each for the internal, public, and private
networks.
Finally, we assign network ranges to the internal, public, and private
networks, and IP addresses to fuel-pm, fuel-controllers, and fuel-compute
nodes. For deployment to a physical infrastructure you must work with your
IT department to determine which IPs to use, but for the purposes of this
exercise we will assume the following network and IP assignments:
#. 10.0.0.0/24: management or internal network, for communication between
Puppet master and Puppet clients, as well as PXE/TFTP/DHCP for Cobbler.
#. 192.168.0.0/24: public network, for the High Availability (HA) Virtual IP
(VIP), as well as floating IPs assigned to OpenStack guest VMs
#. 10.0.1.0/24: private network, fixed IPs automatically assigned to guest
VMs by OpenStack upon their creation
Next we need to allocate a static IP address from the internal network to
eth0 for fuel-pm, and eth1 for the controller, compute, and, if necessary,
quantum nodes. For High Availability (HA) we must choose and assign an IP
address from the public network to HAProxy running on the controllers. You
can configure network addresses/network mask according to your needs, but
our instructions assume the following network settings on the interfaces:
#. eth0: internal management network, where each machine is allocated a
static IP address from the defined pool of available addresses:
* 10.0.0.100 for Puppet Master
* 10.0.0.101-10.0.0.103 for the controller nodes
* 10.0.0.110-10.0.0.126 for the compute nodes
* 10.0.0.10 internal Virtual IP for component access
* 255.255.255.0 network mask
#. eth1: public network
* 192.168.0.10 public Virtual IP for access to the Horizon GUI
(OpenStack management interface)
#. eth2: private network for communication between OpenStack VMs; it has no
IP address and runs with promiscuous mode enabled.
Physical installation infrastructure
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The hardware necessary for an installation depends on the choices you have
made above. This sample installation requires the following hardware:
* 1 server to host both the Puppet Master and Cobbler. The minimum
configuration for this server is:
* 32-bit or 64-bit architecture (64-bit recommended)
* 1+ CPU or vCPU for up to 10 nodes (2 vCPU for up to 20 nodes, 4 vCPU
for up to 100 nodes)
* 1024+ MB of RAM for up to 10 nodes (4096+ MB for up to 20 nodes, 8192+
MB for up to 100 nodes)
* 16+ GB of HDD for OS, and Linux distro storage
* 3 servers to act as OpenStack controllers (called fuel-controller-01,
fuel-controller-02, and fuel-controller-03 for our sample deployment). The
minimum configuration for a controller in Compact mode is:
* 64-bit architecture
* 1+ CPU (2 or more CPUs or vCPUs recommended)
* 1024+ MB of RAM (2048+ MB recommended)
* 400+ GB of HDD
* 1 server to act as the OpenStack compute node (called fuel-compute-01).
The minimum configuration for a compute node with Cinder installed is:
* 64-bit architecture
* 2+ CPU, with Intel VT-x or AMD-V virtualization technology enabled
* 2048+ MB of RAM
* 1+ TB of HDD
If you choose to deploy Neutron (formerly Quantum) on a separate node, you
will need an additional server with specifications comparable to the
controller nodes.
Make sure your hardware is capable of PXE booting over the network from
Cobbler. You'll also need each server's MAC address.
For a list of certified hardware configurations, please `contact the
Mirantis Services team <http://www.mirantis.com/contact/>`_.
Virtual installation infrastructure
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
For a virtual installation, you need only a single machine. You can get by
on 8GB of RAM, but 16GB or more is recommended.
To perform the installation, you need a way to create Virtual Machines. This
guide assumes that you are using version 4.2.12 or later of VirtualBox,
which you can download from `the VirtualBox site
<https://www.virtualbox.org/wiki/Downloads>`_. It is also required that you
have the Extension Pack installed to enable features that are needed for a
virtualized OpenStack test environment to work correctly.
You'll need to run VirtualBox on a stable host system. Mac OS 10.7.x, CentOS
6.3+, or Ubuntu 12.04 are preferred; results in other operating systems are
unpredictable. It's also important to remember that Windows is incapable of
running the installation scripts for Fuel so we cannot recommend Windows as
a test platform.
Configuring VirtualBox
++++++++++++++++++++++
If you are on VirtualBox, please create the following host-only adapters and
verify that they are configured correctly:
* VirtualBox -> File -> Preferences...
* Network -> Add HostOnly Adapter (vboxnet0)
* IPv4 Address: 10.0.0.1
* IPv4 Network Mask: 255.255.255.0
* DHCP server: disabled
* Network -> Add HostOnly Adapter (vboxnet1)
* IPv4 Address: 10.0.1.1
* IPv4 Network Mask: 255.255.255.0
* DHCP server: disabled
* Network -> Add HostOnly Adapter (vboxnet2)
* IPv4 Address: 0.0.0.0
* IPv4 Network Mask: 255.255.255.0
* DHCP server: disabled
In this example, only the first two adapters will be used. If necessary,
though, you can choose to use the third adapter to handle your storage
network traffic.
After creating these interfaces, reboot the host machine to make sure that
DHCP isn't running in the background.
As stated before, installing on Windows isn't recommended, but if you're
attempting to do so you will also need to set up the IP address & network
mask under Control Panel > Network and Internet > Network and Sharing Center
for the Virtual HostOnly Network adapter.
Creating fuel-pm
++++++++++++++++
The process of creating a virtual machine to host Fuel in VirtualBox depends
on whether your deployment is purely virtual or consists of a physical or
virtual fuel-pm controlling physical hardware. If your deployment is purely
virtual, then Adapter 1 may be a Hostonly adapter attached to vboxnet0; but if
your deployment infrastructure consists of a virtual fuel-pm controlling
physical machines, then Adapter 1 must be a Bridged Adapter connected to
whichever network interface of the host machine is connected to your physical
machines.
To create fuel-pm, start up VirtualBox and create a new machine as follows:
* Machine -> New...
* Name: fuel-pm
* Type: Linux
* Version: Red Hat (64 Bit)
* Memory: 2048 MB
* Drive space: 16 GB HDD
* Machine -> Settings... -> Network
* Adapter 1
* Physical network
* Enable Network Adapter
* Attached to: Bridged Adapter
* Name: The host machine's network with access to the network on
which the physical machines reside
* VirtualBox installation
* Enable Network Adapter
* Attached to: Hostonly Adapter
* Name: vboxnet0
* Adapter 2
* Enable Network Adapter
* Attached to: Bridged Adapter
* Name: eth0 (or whichever physical network is attached to the Internet)
* Machine -> Storage
* Attach the downloaded ISO as a drive
If you cannot or prefer not to install from the ISO, you can find
instructions for installing from the Fuel Library in :ref:`Appendix A
<Create-PM>`.
Creating the OpenStack nodes
++++++++++++++++++++++++++++
If you're using VirtualBox, you will need to create the necessary virtual
machines for your OpenStack nodes. Follow these instructions to create
machines named fuel-controller-01, fuel-controller-02, fuel-controller-03,
and fuel-compute-01. Please do NOT start these virtual machines until
instructed.
As you create each network adapter, click Advanced to expose and record the
corresponding mac address.
* Machine -> New...
* Name: fuel-controller-01 (you will repeat these steps to create
fuel-controller-02, fuel-controller-03, and fuel-compute-01)
* Type: Linux
* Version: Red Hat (64 Bit)
* Memory: 2048MB
* Drive space: 8GB
* Machine -> Settings -> System
* Check Network in Boot sequence
* Machine -> Settings -> Storage
* Controller: SATA
* Click the Add icon at the bottom of the Storage Tree pane and
choose Add Disk
* Add a second VDI disk of 10GB for storage
* Machine -> Settings -> Network
* Adapter 1
* Enable Network Adapter
* Attached to: Hostonly Adapter
* Name: vboxnet0
* Adapter 2
* Enable Network Adapter
* Attached to: Bridged Adapter
* Name: eth0 (physical network attached to the Internet. You may
also use a gateway if necessary.)
* Adapter 3
* Enable Network Adapter
* Attached to: Hostonly Adapter
* Name: vboxnet1
* Advanced -> Promiscuous mode: Allow All
It is important that Adapter 1 is configured to load first, as Cobbler will
use vboxnet0 for PXE, and VirtualBox boots from the LAN using the first
available network adapter.
The additional drive volume will be used as storage space by Cinder and will
be configured automatically by Fuel.
.. raw:: pdf
PageBreak
.. index:: How Fuel Works
.. _How-Fuel-Works:
How Fuel Works
==============
Fuel is a ready-to-install collection of the packages and scripts you need
to create a robust, configurable, vendor-independent OpenStack cloud in your
own environment. As of Fuel 3.1, Fuel Library and Fuel Web have been merged
into a single toolbox with options to use the UI or CLI for management.
A single OpenStack cloud consists of packages from many different open source
projects, each with its own requirements, installation procedures, and
configuration management. Fuel brings all of these projects together into a
single open source distribution, with components that have been tested and are
guaranteed to work together, and all wrapped up using scripts to help you work
through a single installation.
Simply put, Fuel is a way for you to easily configure and install an
OpenStack-based infrastructure in your own environment.
.. image:: /_images/FuelSimpleDiagram.jpg
:align: center
Fuel works on a simple premise. Rather than installing each of the
components that make up OpenStack directly, you instead use a configuration
management system like Puppet to create scripts that can provide a
configurable, reproducible, sharable installation process.
In practice, Fuel works as follows:
1. First, set up Fuel Master Node using the ISO. This process only needs to
be completed once per installation.
2. Next, discover your virtual or physical nodes and configure your
OpenStack cluster using the Fuel UI.
3. Finally, deploy your OpenStack cluster on the discovered nodes. Fuel will
perform all of the deployment magic for you by applying pre-configured and
pre-integrated Puppet manifests via the Astute orchestration engine.
Fuel is designed to enable you to maintain your cluster while giving you the
flexibility to adapt it to your own configuration.
.. image:: /_images/how-it-works_svg.jpg
:align: center
Fuel comes with several pre-defined deployment configurations, some of which
include additional configuration options that allow you to adapt an OpenStack
deployment to your environment.
Fuel UI integrates all of the deployment scripts into a unified,
Web-based Graphical User Interface that walks administrators through the
process of installing and configuring a fully functional OpenStack environment.
.. raw:: pdf
PageBreak
.. index:: Large Scale Deployments
.. _Large_Scale_Deployments:
Large Scale Deployments
=======================
When deploying large clusters (of 100 nodes or more) there are two basic
bottlenecks: the capacity of the Puppet Master and Cobbler servers to handle
certificate signing and provisioning requests, and the bandwidth needed to
download operating systems and other software.
Careful planning is key to eliminating these potential problem areas, but
there's another way.
Fuel takes care of these problems through caching and orchestration. We feel,
however, that it's always good to have a sense of how to solve these problems
should they appear.
Certificate signing requests and Puppet Master/Cobbler capacity
---------------------------------------------------------------
When deploying a large cluster, you may find that Puppet Master begins to have
difficulty when you exceed 20 or more simultaneous requests. Part of
this problem is that the initial process of requesting and signing
certificates involves \*.tmp files that can create conflicts. To solve this
problem, you have two options:
* reduce the number of simultaneous requests,
* or increase the number of Puppet Master/Cobbler servers.
The number of simultaneous certificate requests that are active can be
controlled by staggering the Puppet agent run schedule. This can be
accomplished through orchestration. You don't need extreme staggering (1 to 5
seconds will do) but if this method isn't practical, you can increase the number
of Puppet Master/Cobbler servers.
If you're simply overwhelming the Puppet Master process and not running into
file conflicts, one way to get around this problem is to use Puppet Master with
Thin as the backend component and nginx as a frontend component. This
configuration dynamically scales the number of Puppet Master processes to better
accommodate changing load.
.. You can find sample configuration files for nginx and puppetmasterd at [CONTENT NEEDED HERE].
You can also increase the number of servers by creating a cluster that utilizes
a round robin DNS configuration through a service like HAProxy. You will need
to ensure that these nodes are kept in sync. For Cobbler, that means a
combination of the ``--replicate`` switch, XMLRPC for metadata, rsync for
profiles and distributions. Similarly, Puppet Master can be kept in sync with a
combination of rsync (for modules, manifests, and SSL data) and database
replication.
..
image:: /_images/cobbler-puppet-ha.jpg
:align: center
Downloading of operating systems and other software
---------------------------------------------------
Large deployments can also suffer from a bottleneck in terms of the additional
traffic created by downloading software from external sources. One way to avoid
this problem is by increasing LAN bandwidth through bonding multiple gigabit
interfaces. You might also want to consider 10G Ethernet trunking between
infrastructure switches using CAT-6a or fiber cables to improve backend speeds,
reduce latency, and provide more overall pipe.
.. seealso:: :ref:`Sizing_Hardware` for more information on choosing networking equipment.
Another option is to reduce the need to download so much data in the first place
by using either apt-cacher to cache frequently downloaded packages or a
private repository. The downside of using your own repository, however, is that
you have to spend more time manually updating it. Apt-cacher automates this
process. To use apt-cacher, the kickstart that Cobbler sends to each node
should specify Cobbler's IP address and the apt-cacher port as the proxy server.
This will prevent all of the nodes from having to download the software
individually.
`Contact Mirantis <http://www.mirantis.com/contact/>`_ for information on
creating a private repository.
.. raw:: pdf
PageBreak
.. index:: Deployment Configurations
.. _Deployment_Configurations:
Deployment Configurations Provided By Fuel
==========================================
One of the advantages of Fuel is that it comes with a number of pre-built
deployment configurations that you can use to quickly build your own
OpenStack cloud infrastructure. These are well-specified configurations of
OpenStack and its constituent components that are expertly tailored to one
or more common cloud use cases. Fuel provides the ability to create the
following cluster types without requiring extensive customization:
**Simple (non-HA)**: The Simple (non-HA) installation provides an easy way
to install an entire OpenStack cluster without requiring the degree of
increased hardware involved in ensuring high availability.
**Multi-node (HA)**: When you're ready to begin your move to production, the
Multi-node (HA) configuration is a straightforward way to create an OpenStack
cluster that provides high availability. With three controller nodes and the
ability to individually specify services such as Cinder, Neutron (formerly
Quantum), and Swift, Fuel provides the following variations of the
Multi-node (HA) configurations:
- **Compact HA**: When you choose this option, Swift will be installed on
your controllers, reducing your hardware requirements by eliminating the need
for additional Swift servers while still addressing high availability
requirements.
- **Full HA**: This option enables you to install independent Swift and Proxy
nodes, so that you can separate their operation from your controller nodes.
In addition to these configurations, Fuel is designed to be completely
customizable. For assistance on deeper customization options based on the
included configurations you can `contact Mirantis for further assistance
<http://www.mirantis.com/contact/>`_.
.. raw:: pdf
PageBreak
.. index:: Supported Software Components
Supported Software Components
=============================
Fuel has been tested and is guaranteed to work with the following software
components:
* Operating Systems
* CentOS 6.4 (x86_64 architecture only)
* RHEL 6.4 (x86_64 architecture only)
* Puppet (IT automation tool)
* 2.7.19
* MCollective
* 2.3.1
* Cobbler (bare-metal provisioning tool)
* 2.2.3
* OpenStack
* Grizzly release 2013.1.2
* Hypervisor
* KVM
* QEMU
* Open vSwitch
* 1.10.0
* HA Proxy
* 1.4.19
* Galera
* 23.2.2
* RabbitMQ
* 2.8.7
* Pacemaker
* 1.1.8
* Corosync
* 1.4.3
.. _Generating_Puppet_Manifest:
Generating the Puppet Manifest
------------------------------
Before you can deploy OpenStack using the CLI, you will need to configure the
`site.pp` file.
To do this, we have included the ``openstack_system`` script, which uses the
`config.yaml` file and reference architecture templates to create the
appropriate Puppet manifest.
To create `site.pp`, execute this command::
openstack_system -c config.yaml \
-t /etc/puppet/modules/openstack/examples/site_openstack_ha_compact.pp \
-o /etc/puppet/manifests/site.pp \
-a astute.yaml
The four parameters in the command above are:
``-c``:
The absolute or relative path to the ``config.yaml`` file you customized earlier.
``-t``:
The template file to serve as a basis for ``site.pp``.
Possible templates include ``site_openstack_ha_compact.pp``,
``site_openstack_ha_full.pp``, and ``site_openstack_simple.pp``.
``-o``:
The output file. This should always be ``/etc/puppet/manifests/site.pp``.
``-a``:
The orchestration configuration file, to be output for use in the next step.
From there you're ready to install your OpenStack components. Before that,
however, let's look at what is actually in the new ``site.pp`` manifest, so
that you can understand how to customize it if necessary.
.. raw:: pdf
PageBreak
.. index:: Download Fuel
Download Fuel
=============
The first step in installing Fuel is to download the version appropriate to
your environment.
Fuel is available for Essex, Folsom and Grizzly OpenStack installations, and
will be available for Havana shortly after Havana's release.
The Fuel ISO and IMG, along with other Fuel releases, are available in the
`Downloads <http://fuel.mirantis.com/your-downloads/>`_ section of the Fuel
portal.
.. raw:: pdf
PageBreak
.. index:: CLI Deployment Workflow
Understanding the CLI Deployment Workflow
=========================================
To deploy OpenStack using the CLI successfully, your nodes need to pass through the
"Prepare -> Discover -> Provision -> Deploy" workflow. The following sections describe
how to do this from the beginning to the end of the deployment.
During the `Prepare` stage, the nodes must be correctly connected to the Master node for
network booting. Then power on the nodes so that they boot over PXE from the Fuel Master node.
Discover
--------
Nodes booted into bootstrap mode run all of the services required for the node
to be managed by the Fuel Master node. A node booted into the bootstrap phase
contains the SSH authorized keys of the Master node, which allows the Cobbler
server installed on the Master node to reboot the node during the provision phase.
Bootstrap mode also configures MCollective on the node and assigns the ID used by
the Astute orchestrator to check the status of the node.
Provision
---------
Provisioning is done using Cobbler. The Astute orchestrator parses the ``nodes``
section of the YAML configuration file and creates the corresponding Cobbler
systems using the parameters specified in the ``engine`` section of the YAML file.
After the systems are created, it connects to the Cobbler engine and reboots the
nodes according to the power management parameters of each node.
Deploy
------
Deployment is done using the Astute orchestrator, which parses the ``nodes`` and
``attributes`` sections and recalculates the parameters needed for deployment.
The calculated parameters are passed to the nodes being deployed via the
``nailyfact`` MCollective agent, which uploads these attributes to the
``/etc/naily.facts`` file on the node. Puppet then parses this file using a
Facter plugin and loads these facts into Puppet, where they are used
during the catalog compilation phase by the Puppet master. Finally, the catalog is
executed and the Astute orchestrator moves on to the next node in the deployment sequence.
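The fact-passing step can be illustrated with a short sketch: the facts file uploaded by the ``nailyfact`` agent is, in essence, a list of key=value attributes. The exact format of ``/etc/naily.facts`` and the keys shown here are assumptions for illustration only:

```python
# Minimal sketch: parse a naily.facts-style key=value file the way a
# Facter plugin might, turning each line into a fact. The file format
# and keys below are assumptions for illustration.

def parse_facts(text):
    """Return a dict of facts from 'key=value' lines, skipping blanks and comments."""
    facts = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        facts[key.strip()] = value.strip()
    return facts

sample = """deployment_mode=ha
role=primary-controller
management_vip=10.20.1.200
"""
facts = parse_facts(sample)
print(facts["role"])  # primary-controller
```

Each parsed key then becomes a fact available to Puppet during catalog compilation.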
.. raw:: pdf
PageBreak
.. index:: Deploying Using CLI
Deploying OpenStack Cluster Using CLI
=====================================
.. contents::
   :local:

Now that you understand how the deployment workflow proceeds, you can finally start.
Connect the nodes to the Master node and power them on. You should also plan your
cluster configuration, meaning that you should know which node will host which
role in the cluster. As soon as the nodes boot into bootstrap mode and report
their data to MCollective, you need to fill in the configuration YAML file and
then trigger the Provisioning and Deployment phases.
YAML High Level Structure
-------------------------
The high-level structure of the deployment configuration file is:
.. code-block:: yaml
nodes: # Array of nodes
- name: # Definition of node
role:
.....
attributes: # OpenStack cluster attributes used during deployment
engine: # Cobbler engine parameters
nodes Section
+++++++++++++
In this section you define the nodes: their IP/MAC addresses, disk partitioning,
their roles in the cluster, and so on.
attributes Section
++++++++++++++++++
In this section you define OpenStack cluster attributes, such as which networking
engine (Quantum or Nova Network) to use, whether to use Cinder block storage,
which usernames and passwords to use for the internal and public services of
OpenStack, and so on.
engine Section
++++++++++++++
This section specifies the parameters used to connect to the Cobbler engine
during the provisioning phase.
Collecting Identities
---------------------
After the nodes boot into bootstrap mode, you need to collect their MCollective
identities. You can do this in one of two ways:
- Log in to the node, open ``/etc/mcollective/server.cfg``, and find the node ID in
the `identity` field::
identity = 7
- Retrieve the discovered-nodes JSON file by issuing a GET HTTP request to
http://<master_ip>:8000/api/nodes/
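If you take the HTTP route, the returned JSON can be reduced to an id-to-MAC map with a few lines. The response shape sketched here is an assumption based on the fields this guide references elsewhere (``id``, ``mac``, ``meta['disks']``):

```python
import json

# Sketch: extract MCollective identities and boot-interface MACs from the
# JSON returned by http://<master_ip>:8000/api/nodes/. The response shape
# is assumed for illustration; a real run would fetch it over HTTP.

sample_response = json.dumps([
    {"id": 1, "mac": "64:43:7B:CA:56:DD",
     "meta": {"disks": [{"name": "sda", "size": 17179869184}]}},
    {"id": 2, "mac": "64:C8:E2:3B:FD:6E",
     "meta": {"disks": []}},
])

nodes = json.loads(sample_response)
identities = {node["id"]: node["mac"] for node in nodes}
print(identities)  # {1: '64:43:7B:CA:56:DD', 2: '64:C8:E2:3B:FD:6E'}
```

The same response also carries the per-node disk data used for the partitioning calculations in the next section.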
Calculating Partitioning of the Nodes
-------------------------------------
In order to provision nodes, you need to calculate the partitioning for each
particular node.
Currently, the smallest partitioning scheme includes two partitions, **root**
and **swap**, both of which reside in the **os** LVM volume group. If you want
a separate partition for Glance and Swift (which we strongly recommend),
you need to create a partition with the mount point ``/var/lib/glance``.
If you want the node to work as Cinder LVM storage, you also need to
create a ``cinder`` LVM volume group.
.. warning:: Do not use the '_' and '-' symbols in Cinder volume names, due to
an Anaconda limitation.
Partitioning is done by parsing the ``ks_spaces`` section of the node's
``ks_meta`` hash. An example ``ks_spaces`` section is shown below.
Be aware that sizes are given in MiB (1 MiB = 1024 KiB = 1048576 bytes)
and that Anaconda uses 32 MiB physical extents for LVM.
Thus your LVM PV sizes MUST be a multiple of 32.
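The rounding arithmetic can be sketched in a few lines, following the formulas in the workflow comments of this guide (200 MiB for ``/boot`` plus 1 MiB reserved; the concrete disk and swap sizes are example values):

```python
# Sketch of the partition-size arithmetic this guide describes: convert a
# disk size in bytes to MiB, reserve 200 MiB for /boot plus 1 MiB, and
# round the root LV down to a whole number of 32 MiB LVM extents.

MIB = 1048576
EXTENT = 32  # Anaconda uses 32 MiB physical extents for LVM

def system_partitions(disk_bytes, swap_mib):
    """Return (pv_size, root_size) in MiB for the 'os' volume group."""
    disk_mib = disk_bytes // MIB
    pv_size = disk_mib - 200 - 1              # PV for the 'os' volume group
    free_vg = pv_size - swap_mib
    root_size = EXTENT * (free_vg // EXTENT)  # rounded down to 32 MiB extents
    return pv_size, root_size

# Example: a 20480 MiB disk with a 2048 MiB swap LV.
pv, root = system_partitions(20480 * MIB, swap_mib=2048)
print(pv, root)  # 20279 18208
```
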
.. code-block:: yaml
# == ks_spaces
# Kickstart data for disk partitioning
# The simplest way to calculate is to use REST call to nailgun api,
# recalculate disk size into MiB and dump the following config.
# Workflow is as follows:
# GET request to http://<fuel-master-node>:8000/api/nodes
# Parse JSON and derive disk data from meta['disks'].
# Set explicitly which disk is system and which is for cinder.
# $system_disk_size=floor($system_disk_meta['disks']['size']/1048576)
# $system_disk_path=$system_disk_meta['disks']['disk']
# $cinder_disk_size=floor($cinder_disk_meta['disks']['size']/1048576)
#
# $cinder_disk_path=$cinder_disk_meta['disks']['disk']
#
# All further calculations are made in MiB
# Calculation of system partitions
#
# For each node:
# calculate size of physical volume for operating system:
# $pv_size = $system_disk_size - 200 - 1
# declare $swap_size
# calculate size of root partition:
# $free_vg_size = $pv_size - $swap_size
# $free_extents = floor($free_vg_size/32)
# $system_disk_size = 32 * $free_extents
# ks_spaces: '"[
# {
# \"type\": \"disk\",
# \"id\": \"$system_disk_path\",
# \"volumes\":
# [
# {
# \"mount\": \"/boot\",
# \"type\": \"partition\",
# \"size\": 200
# },
# {
# \"type\": \"mbr\"
# },
# {
# \"size\": $pv_size,
# \"type\": \"pv\",
# \"vg\": \"os\"
# }
# ],
# \"size\": $system_disk_size
# },
# {
# \"type\": \"vg\",
# \"id\": \"os\",
# \"volumes\":
# [
# {
# \"mount\": \"/\",
# \"type\": \"lv\",
# \"name\": \"root\",
# \"size\": $system_disk_size
# },
# {
# \"mount\": \"swap\",
# \"type\": \"lv\",
# \"name\": \"swap\",
# \"size\": $swap_size
# }
# ]
# },
# {
# \"type\": \"disk\",
# \"id\": \"$path_to_cinder_disk\",
# \"volumes\":
# [
# {
# \"type\": \"mbr\"
# },
# {
# \"size\": $cinder_disk_size,
# \"type\": \"pv\",
# \"vg\": \"cinder\"
# }
# ],
# \"size\": $cinder_disk_size
# }
# ]"'
ks_spaces: '"
[
{
\"type\": \"disk\",
\"id\": \"disk/by-path/pci-0000:00:06.0-virtio-pci-virtio3\",
\"volumes\":
[
{
\"mount\": \"/boot\",
\"type\": \"partition\",
\"size\": 200
},
{
\"type\": \"mbr\"
},
{
\"size\": 20000,
\"type\": \"pv\",
\"vg\": \"os\"
}
],
\"size\": 20480
},
{
\"type\": \"vg\",
\"id\": \"os\",
\"volumes\":
[
{
\"mount\": \"/\",
\"type\": \"lv\",
\"name\": \"root\",
\"size\": 10240
},
{
\"mount\": \"swap\",
\"type\": \"lv\",
\"name\": \"swap\",
\"size\": 2048
}
]
}
]"'
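Because ``ks_spaces`` is a JSON document embedded in YAML, a quick sanity check of the PV sizes is easy to script. This sketch mirrors the system-disk entry from the example above (unescaped for readability):

```python
import json

# Sketch: validate that every LVM PV size in a ks_spaces-style JSON
# document is a multiple of 32 MiB, as Anaconda's 32 MiB extents require.
# The document below mirrors the system-disk entry from the example above.

ks_spaces = json.loads("""
[
  {"type": "disk",
   "id": "disk/by-path/pci-0000:00:06.0-virtio-pci-virtio3",
   "volumes": [
     {"mount": "/boot", "type": "partition", "size": 200},
     {"type": "mbr"},
     {"size": 20000, "type": "pv", "vg": "os"}
   ],
   "size": 20480}
]
""")

pv_sizes = [v["size"] for space in ks_spaces
            for v in space.get("volumes", []) if v.get("type") == "pv"]
assert all(size % 32 == 0 for size in pv_sizes)
print(pv_sizes)  # [20000]
```
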
.. raw:: pdf
PageBreak
.. index:: Configuring Nodes for Provisioning
Configuring Nodes for Provisioning
==================================
In order to provision nodes, you need to configure the ``nodes`` section of the
YAML file for each node.
A sample YAML configuration for provisioning is listed below:
.. code-block:: yaml
nodes:
# == id
# MCollective node id in mcollective server.cfg.
- id: 1
# == uid
# UID of the node for deployment engine. Should be equal to `id`
uid: 1
# == mac
# MAC address of the interface being used for network boot.
mac: 64:43:7B:CA:56:DD
# == name
# name of the system in cobbler
name: controller-01
# == ip
# IP issued by cobbler DHCP server to this node during network boot.
ip: 10.20.0.94
# == profile
# Cobbler profile for the node.
# Default: centos-x86_64
# [centos-x86_64|rhel-x86_64]
# CAUTION:
# rhel-x86_64 is created only after rpmcache class is run on master node
# and currently not supported in CLI mode
profile: centos-x86_64
# == fqdn
# Fully-qualified domain name of the node
fqdn: controller-01.domain.tld
# == power_type
# Cobbler power-type. Consult cobbler documentation for available options.
# Default: ssh
power_type: ssh
# == power_user
# Username for cobbler to manage power of this machine
# Default: unset
power_user: root
# == power_pass
# Password/credentials for cobbler to manage power of this machine
# Default: unset
power_pass: /root/.ssh/bootstrap.rsa
# == power_address
# IP address of the device managing the node power state.
# Default: unset
power_address: 10.20.0.94
# == netboot_enabled
# Disable/enable netboot for this node.
netboot_enabled: '1'
# == name_servers
# DNS name servers for this node during provisioning phase.
name_servers: ! '"10.20.0.2"'
# == puppet_master
# Hostname or IP address of puppet master node
puppet_master: fuel.domain.tld
# == ks_meta
# Kickstart metadata used during provisioning
ks_meta:
# == ks_spaces
# Kickstart data for disk partitioning
# The simplest way to calculate is to use REST call to nailgun api,
# recalculate disk size into MiB and dump the following config.
# Workflow is as follows:
# GET request to http://<fuel-master-node>:8000/api/nodes
# Parse JSON and derive disk data from meta['disks'].
# Set explicitly which disk is system and which is for cinder.
# $system_disk_size=floor($system_disk_meta['disks']['size']/1048576)
# $system_disk_path=$system_disk_meta['disks']['disk']
# $cinder_disk_size=floor($cinder_disk_meta['disks']['size']/1048576)
#
# $cinder_disk_path=$cinder_disk_meta['disks']['disk']
#
# All further calculations are made in MiB
# Calculation of system partitions
#
# For each node:
# calculate size of physical volume for operating system:
# $pv_size = $system_disk_size - 200 - 1
# declare $swap_size
# calculate size of root partition:
# $free_vg_size = $pv_size - $swap_size
# $free_extents = floor($free_vg_size/32)
# $system_disk_size = 32 * $free_extents
# ks_spaces: '"[
# {
# \"type\": \"disk\",
# \"id\": \"$system_disk_path\",
# \"volumes\":
# [
# {
# \"mount\": \"/boot\",
# \"type\": \"partition\",
# \"size\": 200
# },
# {
# \"type\": \"mbr\"
# },
# {
# \"size\": $pv_size,
# \"type\": \"pv\",
# \"vg\": \"os\"
# }
# ],
# \"size\": $system_disk_size
# },
# {
# \"type\": \"vg\",
# \"id\": \"os\",
# \"volumes\":
# [
# {
# \"mount\": \"/\",
# \"type\": \"lv\",
# \"name\": \"root\",
# \"size\": $system_disk_size
# },
# {
# \"mount\": \"swap\",
# \"type\": \"lv\",
# \"name\": \"swap\",
# \"size\": $swap_size
# }
# ]
# },
# {
# \"type\": \"disk\",
# \"id\": \"$path_to_cinder_disk\",
# \"volumes\":
# [
# {
# \"type\": \"mbr\"
# },
# {
# \"size\": $cinder_disk_size,
# \"type\": \"pv\",
# \"vg\": \"cinder\"
# }
# ],
# \"size\": $cinder_disk_size
# }
# ]"'
ks_spaces: '"[
{
\"type\": \"disk\",
\"id\": \"disk/by-path/pci-0000:00:06.0-virtio-pci-virtio3\",
\"volumes\":
[
{
\"mount\": \"/boot\",
\"type\": \"partition\",
\"size\": 200
},
{
\"type\": \"mbr\"
},
{
\"size\": 20000,
\"type\": \"pv\",
\"vg\": \"os\"
}
],
\"size\": 20480
},
{
\"type\": \"vg\",
\"id\": \"os\",
\"volumes\":
[
{
\"mount\":\"/\",
\"type\": \"lv\",
\"name\": \"root\",
\"size\": 10240
},
{
\"mount\": \"swap\",
\"type\": \"lv\",
\"name\": \"swap\",
\"size\": 2048
}
]
}
]"'
# == mco_enable
# If mcollective should be installed and enabled on the node
mco_enable: 1
# == mco_vhost
# Mcollective AMQP virtual host
mco_vhost: mcollective
# == mco_pskey
# **NOT USED**
mco_pskey: unset
# == mco_user
# Mcollective AMQP user
mco_user: mcollective
# == puppet_enable
# should puppet agent start on boot
# Default: 0
puppet_enable: 0
# == install_log_2_syslog
# Enable/disable on boot remote logging
# Default: 1
install_log_2_syslog: 1
# == mco_password
# Mcollective AMQP password
mco_password: marionette
# == puppet_auto_setup
# Whether to install puppet during provisioning
# Default: 1
puppet_auto_setup: 1
# == puppet_master
# hostname or IP of puppet master server
puppet_master: fuel.domain.tld
# == mco_auto_setup
# Whether to install mcollective during provisioning
# Default: 1
mco_auto_setup: 1
# == auth_key
# Public RSA key to be added to cobbler authorized keys
auth_key: ! '""'
# == puppet_version
# Which puppet version to install on the node
puppet_version: 2.7.19
# == mco_connector
# Mcollective AMQP driver.
# Default: rabbitmq
mco_connector: rabbitmq
# == mco_host
# AMQP host to which Mcollective agent should connect
mco_host: 10.20.0.2
# == interfaces
# Hash of interfaces configured during provision state
interfaces:
eth0:
ip_address: 10.20.0.94
netmask: 255.255.255.0
dns_name: controller-01.domain.tld
static: '1'
mac_address: 64:43:7B:CA:56:DD
# == interfaces_extra
# extra interfaces information
interfaces_extra:
eth2:
onboot: 'no'
peerdns: 'no'
eth1:
onboot: 'no'
peerdns: 'no'
eth0:
onboot: 'yes'
peerdns: 'no'
# == meta
# Metadata needed for log parsing during deployment jobs.
meta:
# == Array of hashes of interfaces
interfaces:
- mac: 64:D8:E1:F6:66:43
max_speed: 100
name: <iface name>
ip: <IP>
netmask: <Netmask>
current_speed: <Integer>
- mac: 64:C8:E2:3B:FD:6E
max_speed: 100
name: eth1
ip: 10.21.0.94
netmask: 255.255.255.0
current_speed: 100
disks:
- model: VBOX HARDDISK
disk: disk/by-path/pci-0000:00:0d.0-scsi-2:0:0:0
name: sdc
size: 2411724800000
- model: VBOX HARDDISK
disk: disk/by-path/pci-0000:00:0d.0-scsi-1:0:0:0
name: sdb
size: 536870912000
- model: VBOX HARDDISK
disk: disk/by-path/pci-0000:00:0d.0-scsi-0:0:0:0
name: sda
size: 17179869184
system:
serial: '0'
version: '1.2'
fqdn: bootstrap
family: Virtual Machine
manufacturer: VirtualBox
error_type:
After you populate the YAML file with all of the required data, run the Astute
orchestrator and point it at the corresponding YAML file:
.. code-block:: bash
[root@fuel ~]# astute -f simple.yaml -c provision
Wait for the command to finish. Now you can start configuring the OpenStack
cluster parameters.
.. raw:: pdf
PageBreak
.. index:: Configuring Nodes for Deployment
Configuring Nodes for Deployment
================================
Node Configuration
------------------
In order to deploy an OpenStack cluster, you need to populate the ``nodes``
section of the file with deployment-related data for each node.
.. code-block:: yaml
nodes:
.....
# == role
# Specifies role of the node
# [primary-controller|controller|storage|swift-proxy|primary-swift-proxy]
# Default: unspecified
role: primary-controller
# == network_data
# Array of network interfaces hashes
# === name: scalar or array of one or more of
# [management|fixed|public|storage]
# ==== 'management' is used for internal communication
# ==== 'public' is used for public endpoints
# ==== 'storage' is used for cinder and swift storage networks
# ==== 'fixed' is used for traffic passing between VMs in Quantum 'vlan'
# segmentation mode or with Nova Network enabled
# === ip: IP address to be configured by puppet on this interface
# === dev: interface device name
# === netmask: network mask for the interface
# === vlan: vlan ID for the interface
# === gateway: IP address of gateway (**not used**)
network_data:
- name: public
ip: 10.20.0.94
dev: eth0
netmask: 255.255.255.0
gateway: 10.20.0.1
- name:
- management
- storage
ip: 10.20.1.94
netmask: 255.255.255.0
dev: eth1
- name: fixed
dev: eth2
# == public_br
# Name of the public bridge for Quantum-enabled configuration
public_br: br-ex
# == internal_br
# Name of the internal bridge for Quantum-enabled configuration
internal_br: br-mgmt
General Parameters
------------------
Once nodes are populated with role and networking information,
it is time to set some general parameters for deployment.
.. code-block:: yaml
attributes:
....
# == master_ip
# IP of puppet master.
- master_ip: 10.20.0.2
# == deployment_id
# ID of the deployment, used to differentiate environments
deployment_id: 1
# == deployment_source
# [web|cli] - should be set to cli for CLI installation
deployment_source: cli
# == management_vip
# Virtual IP address for internal services
# (MySQL, AMQP, internal OpenStack endpoints)
management_vip: 10.20.1.200
# == public_vip
# Virtual IP address for public services
# (Horizon, public OpenStack endpoints)
public_vip: 10.20.0.200
# == auto_assign_floating_ip
# Whether to assign floating IPs automatically
auto_assign_floating_ip: true
# == start_guests_on_host_boot
# Default: true
start_guests_on_host_boot: true
# == create_networks
# whether to create fixed or floating networks
create_networks: true
# == compute_scheduler_driver
# Nova scheduler driver class
compute_scheduler_driver: nova.scheduler.multi.MultiScheduler
# == use_cow_images
# Whether to use cow images
use_cow_images: true
# == libvirt_type
# Nova libvirt hypervisor type
# Values: qemu|kvm
# Default: kvm
libvirt_type: qemu
# == dns_nameservers
# array of DNS servers configured during deployment phase.
dns_nameservers:
- 10.20.0.1
# Below go credentials and access parameters for main OpenStack components
mysql:
root_password: root
glance:
db_password: glance
user_password: glance
swift:
user_password: swift_pass
nova:
db_password: nova
user_password: nova
access:
password: admin
user: admin
tenant: admin
email: admin@example.org
keystone:
db_password: keystone
admin_token: nova
quantum_access:
user_password: quantum
db_password: quantum
rabbit:
password: nova
user: nova
cinder:
password: cinder
user: cinder
# == floating_network_range
# CIDR (for quantum == true) or array of IPs (for quantum == false)
# Used for creation of floating networks/IPs during deployment
floating_network_range: 10.20.0.150/26
# == fixed_network_range
# CIDR for fixed network created during deployment.
fixed_network_range: 10.20.2.0/24
# == ntp_servers
# List of ntp servers
ntp_servers:
- pool.ntp.org
.. raw:: pdf
PageBreak
.. index:: Configure Deployment Scenario
Configure Deployment Scenario
=============================
Choose the deployment scenario you want to use.
The currently supported scenarios are:
- HA Compact (:download:`Download example YAML file </_static/compact.yaml>`)
- HA Full (:download:`Download example YAML file </_static/full.yaml>`)
- Non-HA Multinode Simple (:download:`Download example YAML file </_static/simple.yaml>`)
.. code-block:: yaml
attributes:
....
# == deployment_mode
# [ha|ha_full|multinode]
deployment_mode: ha
..
Enabling Nova Network
---------------------
If you want to use Nova Network as the networking engine for your
OpenStack cloud, you need to set the ``quantum`` parameter to *false* in
your config file:
.. code-block:: yaml
attributes:
.....
quantum: false
You also need to configure some nova-network related parameters:
.. code-block:: yaml
attributes:
.....
novanetwork_parameters:
vlan_start: <1-1024>
# == network_manager
# Which nova-network manager to use
network_manager: String
# == network_size
# which network size to use during fixed network range segmentation
network_size: <Integer>
# == num_networks
# number of networks into which to split fixed_network_range
num_networks: <Integer>
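The ``network_size`` and ``num_networks`` parameters together carve ``fixed_network_range`` into per-project networks, so their product must fit inside the range. A small illustrative calculation (the concrete values below are hypothetical, not Fuel defaults):

```python
# Illustrative sketch of how fixed_network_range is segmented by
# num_networks and network_size (values here are hypothetical).
import ipaddress

fixed_network_range = ipaddress.ip_network("10.20.2.0/24")
network_size = 64          # addresses per project network (power of two)
num_networks = 4           # how many networks to carve out

# The segments must fit inside the fixed range.
assert network_size * num_networks <= fixed_network_range.num_addresses

# A /24 split into 64-address blocks yields /26 subnets.
prefixlen = 32 - network_size.bit_length() + 1  # 64 -> /26
subnets = list(fixed_network_range.subnets(new_prefix=prefixlen))[:num_networks]
print([str(s) for s in subnets])
# -> ['10.20.2.0/26', '10.20.2.64/26', '10.20.2.128/26', '10.20.2.192/26']
```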
Enabling Quantum
----------------
In order to deploy OpenStack with Quantum, you need to enable ``quantum`` in your
YAML file:
.. code-block:: yaml
attributes:
.....
quantum: true
You also need to configure some Quantum-related parameters:
.. code-block:: yaml
attributes:
.....
#Quantum part, used only if quantum='true'
quantum_parameters:
# == tenant_network_type
# Which type of network segmentation to use.
# Values: gre|vlan
tenant_network_type: gre
# == segment_range
# Range of IDs for network segmentation. Consult Quantum documentation.
# Values: String
segment_range: ! '300:500'
# == metadata_proxy_shared_secret
# Shared secret for metadata proxy services
# Values: String
metadata_proxy_shared_secret: quantum
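As a rough illustration of what a valid ``segment_range`` looks like: the string is a colon-separated ascending pair, and for VLAN segmentation the IDs must additionally fall within 1-4094. This checker is a sketch, not part of Fuel:

```python
# Hypothetical validator for the segment_range string (not part of Fuel).
# GRE tunnel IDs are 32-bit; VLAN IDs must be 1-4094.
def validate_segment_range(value, tenant_network_type):
    start, end = (int(x) for x in value.split(":"))
    assert start < end, "range must be ascending"
    if tenant_network_type == "vlan":
        assert 1 <= start and end <= 4094, "VLAN IDs are limited to 1-4094"
    return start, end

print(validate_segment_range("300:500", "gre"))   # -> (300, 500)
```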
Enabling Cinder
---------------
Our example uses Cinder, with some specific variations from the default.
Specifically, as mentioned before, while the Cinder scheduler will continue to
run on the controllers, the actual storage can be specified by setting the
``cinder_nodes`` array.
.. code-block:: yaml
attributes:
.....
# == cinder_nodes
# Which nodes to use as cinder-volume backends
# Array of values
# 'all'|<hostname>|<internal IP address of node>|'controller'|<node_role>
cinder_nodes:
- controller
Configuring Syslog Parameters
-----------------------------
To configure the syslog servers to use, specify the following parameters:
.. code-block:: yaml
# == base_syslog
# Main syslog server configuration.
base_syslog:
syslog_port: '514'
syslog_server: 10.20.0.2
# == syslog
# Additional syslog servers configuration.
syslog:
syslog_port: '514'
syslog_transport: udp
syslog_server: ''
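If you need to verify a syslog destination from a deployment host, a short Python sketch using only the standard library can emit a test message over UDP. The address below is illustrative; substitute your ``syslog_server`` value:

```python
# Send a test message to a syslog server over UDP (fire-and-forget).
# The address is illustrative; replace it with your syslog_server value.
import logging
import logging.handlers

handler = logging.handlers.SysLogHandler(address=("127.0.0.1", 514))
log = logging.getLogger("fuel-syslog-check")
log.addHandler(handler)
log.warning("test message from the deployment host")
handler.close()
```

Because syslog over UDP is fire-and-forget, a successful send does not guarantee delivery; check the server's log files to confirm the message arrived.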
Setting Verbosity
-----------------
You also have the option to determine how much information OpenStack provides
when performing configuration:
.. code-block:: yaml
attributes:
....
verbose: true
debug: false
Enabling Horizon HTTPS/SSL mode
-------------------------------
Using the ``horizon_use_ssl`` variable, you have the option to decide whether
the OpenStack dashboard (Horizon) uses HTTP or HTTPS:
.. code-block:: yaml
attributes:
....
horizon_use_ssl: false
This variable accepts the following values:
`false`:
In this mode, the dashboard uses HTTP with no encryption.
`default`:
In this mode, the dashboard uses keys supplied with the standard Apache SSL
module package.
`exist`:
In this case, the dashboard assumes that the domain name-based certificate,
or keys, are provisioned in advance. This can be a certificate signed by any
authorized provider, such as Symantec/Verisign, Comodo, GoDaddy, and so on.
The system looks for the keys in these locations:
* public `/etc/pki/tls/certs/domain-name.crt`
* private `/etc/pki/tls/private/domain-name.key`
.. for Debian/Ubuntu:
.. * public ``/etc/ssl/certs/domain-name.pem``
.. * private ``/etc/ssl/private/domain-name.key``
`custom`:
This mode requires a static mount point on the fileserver for ``[ssl_certs]``
and that the certificates exist in advance. To enable this mode, configure the puppet
fileserver by editing ``/etc/puppet/fileserver.conf`` to add::
[ssl_certs]
path /etc/puppet/templates/ssl
allow *
From there, create the appropriate directory::
mkdir -p /etc/puppet/templates/ssl
Add the certificates to this directory.
Then reload the puppetmaster service for these changes to take effect.
Dealing With Multicast Issues
-----------------------------
Fuel uses Corosync and Pacemaker cluster engines for HA scenarios, thus requiring
consistent multicast networking. Sometimes it is not possible to configure
multicast in your network. In this case, you can tweak Corosync to use
unicast addressing by setting the ``use_unicast_corosync`` variable to ``true``.
.. code-block:: yaml
# == use_unicast_corosync
# which communication protocol to use for corosync
use_unicast_corosync: false
.. index:: Triggering the Deployment
.. raw:: pdf
PageBreak
Triggering the Deployment
=========================
After the YAML file is updated with all the required parameters, you can trigger
the deployment by issuing the ``deploy`` command to the Astute orchestrator.
.. code-block:: none
[root@fuel ~]# astute -f simple.yaml -c deploy
Wait for the command to finish.
Installing Nagios Monitoring using Puppet
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Fuel provides a way to deploy Nagios for monitoring your OpenStack cluster. Nagios is an open source distributed management and monitoring infrastructure that is commonly used in data centers to keep an eye on thousands of servers. Nagios requires the installation of a software agent on all nodes, as well as a master server which collects and displays all the results. The agent, the Nagios NRPE addon, allows Nagios to execute plugins on remote Linux/Unix machines. The main reason for doing this is to monitor key resources (such as CPU load, memory usage, etc.), as well as provide more advanced metrics and performance data on local and remote machines.
Nagios Agent
++++++++++++
In order to install Nagios NRPE on a compute or controller node, the node should have the following class applied: ::
class {'nagios':
proj_name => 'test',
services => ['nova-compute','nova-network','libvirt'],
whitelist => ['127.0.0.1', $nagios_master],
hostgroup => 'compute',
}
* ``proj_name``: An environment for nagios commands and the directory (``/etc/nagios/test/``).
* ``services``: All services to be monitored by nagios.
* ``whitelist``: The array of IP addresses trusted by NRPE.
* ``hostgroup``: The group to be used in the nagios master (do not forget to create the group in the nagios master).
Nagios Server
+++++++++++++
In order to install Nagios Master on any convenient node, the node should have the following class applied: ::
class {'nagios::master':
proj_name => 'test',
templatehost => {'name' => 'default-host','check_interval' => '10'},
templateservice => {'name' => 'default-service' ,'check_interval'=>'10'},
hostgroups => ['compute','controller'],
contactgroups => {'group' => 'admins', 'alias' => 'Admins'},
contacts => {'user' => 'hotkey', 'alias' => 'Dennis Hoppe',
'email' => 'nagios@%{domain}',
'group' => 'admins'},
}
* ``proj_name``: The environment for nagios commands and the directory (``/etc/nagios/test/``).
* ``templatehost``: The group of checks and intervals parameters for hosts (as a Hash).
* ``templateservice``: The group of checks and intervals parameters for services (as a Hash).
* ``hostgroups``: All host groups defined on the NRPE nodes (as an Array).
* ``contactgroups``: The group of contacts (as a Hash).
* ``contacts``: Contacts to receive error reports (as a Hash).
Health Checks
+++++++++++++
You can see the complete definition of the available services to monitor and their health checks at ``deployment/puppet/nagios/manifests/params.pp``.
Here is the list: ::
$services_list = {
'nova-compute' => 'check_nrpe_1arg!check_nova_compute',
'nova-network' => 'check_nrpe_1arg!check_nova_network',
'libvirt' => 'check_nrpe_1arg!check_libvirt',
'swift-proxy' => 'check_nrpe_1arg!check_swift_proxy',
'swift-account' => 'check_nrpe_1arg!check_swift_account',
'swift-container' => 'check_nrpe_1arg!check_swift_container',
'swift-object' => 'check_nrpe_1arg!check_swift_object',
'swift-ring' => 'check_nrpe_1arg!check_swift_ring',
'keystone' => 'check_http_api!5000',
'nova-novncproxy' => 'check_nrpe_1arg!check_nova_novncproxy',
'nova-scheduler' => 'check_nrpe_1arg!check_nova_scheduler',
'nova-consoleauth' => 'check_nrpe_1arg!check_nova_consoleauth',
'nova-cert' => 'check_nrpe_1arg!check_nova_cert',
'cinder-scheduler' => 'check_nrpe_1arg!check_cinder_scheduler',
'cinder-volume' => 'check_nrpe_1arg!check_cinder_volume',
'haproxy' => 'check_nrpe_1arg!check_haproxy',
'memcached' => 'check_nrpe_1arg!check_memcached',
'nova-api' => 'check_http_api!8774',
'cinder-api' => 'check_http_api!8776',
'glance-api' => 'check_http_api!9292',
'glance-registry' => 'check_nrpe_1arg!check_glance_registry',
'horizon' => 'check_http_api!80',
'rabbitmq' => 'check_rabbitmq',
'mysql' => 'check_galera_mysql',
'apt' => 'nrpe_check_apt',
'kernel' => 'nrpe_check_kernel',
'libs' => 'nrpe_check_libs',
'load' => 'nrpe_check_load!5.0!4.0!3.0!10.0!6.0!4.0',
'procs' => 'nrpe_check_procs!250!400',
'zombie' => 'nrpe_check_procs_zombie!5!10',
'swap' => 'nrpe_check_swap!20%!10%',
'user' => 'nrpe_check_users!5!10',
'host-alive' => 'check-host-alive',
}
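In these definitions Nagios uses ``!`` to separate a check command from its arguments; for example, ``check_http_api!5000`` runs ``check_http_api`` against port 5000. A minimal sketch of that decomposition:

```python
# Nagios separates a command name from its arguments with '!'.
# Minimal sketch of how the definitions above decompose.
def split_check(definition):
    command, *args = definition.split("!")
    return command, args

assert split_check("check_http_api!5000") == ("check_http_api", ["5000"])
assert split_check("nrpe_check_load!5.0!4.0!3.0!10.0!6.0!4.0")[1] == [
    "5.0", "4.0", "3.0", "10.0", "6.0", "4.0"]
assert split_check("check_rabbitmq") == ("check_rabbitmq", [])
print("all check definitions parsed")
```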
.. index:: Deploying via Orchestration
.. _orchestration:
Deploying via Orchestration
===========================
.. contents::
   :local:
Performing a small series of manual installs may be an acceptable approach in
some circumstances, but if you plan on deploying to a large number of servers,
then we strongly suggest that you consider orchestration. You can perform an
orchestrated deployment with Fuel using the ``astute`` script.
..
This script
is configured using the `astute.yaml` file that was created when you ran
``openstack_system`` earlier in this process.
To confirm that your servers are ready for orchestration, execute the following
command::
mco ping
You should see all three controllers, plus the compute node, in the response to
the command::
fuel-compute-01 time=107.26 ms
fuel-controller-01 time=120.14 ms
fuel-controller-02 time=135.94 ms
fuel-controller-03 time=139.33 ms
To run the orchestrator, log in to Fuel Master node and execute::
astute -f astute.yaml
You will see a message stating that the installation has started
on fuel-controller-01.
To see what's going on on the target node, enter this command::
tail -f /var/log/messages
Note that Puppet will require several runs to install all the different roles.
The first time it runs, the orchestrator will show an error. This error means
that the installation isn't complete, but will be rectified after the various
rounds of installation are completed. Also, after the first run on each server,
the orchestrator doesn't output messages on fuel-pm; when it's finished running,
it will return you to the command prompt. In the meantime, you can see what's
going on by watching the logs on each individual machine.
Installing OpenStack using Puppet directly
------------------------------------------
If, for some reason, you choose not to use orchestration (one common example is
adding a single node to an existing non-HA cluster), you have the option to
install on one or more nodes using Puppet manually.
Start by logging in to the target server (`fuel-controller-01` to start, if you're
starting from scratch) and running the Puppet agent.
One optional step would be to use the ``script`` command to log all of your output
so you can check for errors::
script agent-01.log
puppet agent --test
You will see a great number of messages scroll by, and the installation will
take a significant amount of time. When the process has completed, press CTRL-D
to stop logging and grep for errors::
grep err: agent-01.log
If you find any errors relating to other nodes you may safely ignore them
for now.
Now you can run the same installation procedure on fuel-controller-02 and
fuel-controller-03, as well as fuel-compute-01.
Note that the controllers must be installed sequentially due to the nature of
assembling a MySQL cluster based on Galera. This means that one server must
complete its installation before the next installation is started. However,
compute nodes can be installed concurrently once the controllers are in place.
In some cases, you may find errors related to resources that are not yet
available when the installation takes place. To solve that problem, simply
re-run the puppet agent on the affected node after running the other
controllers, and again grep for error messages. When you see no errors on any
of your nodes, your OpenStack cluster is ready to go.
Examples of OpenStack installation sequences
--------------------------------------------
When running Puppet manually, the exact sequence depends on the configuration
goals you're trying to achieve. In most cases, you'll need to run Puppet more
than once; with every pass, Puppet collects the information still missing from
the OpenStack configuration, then stores and applies the necessary changes.
.. note::
*Sequentially run* means you don't start the next node deployment until
the previous one is finished.
**Example 1: Full OpenStack deployment with standalone Storage nodes**
(:download:`Download example YAML file </_static/full.yaml>`)
* Create necessary volumes on Storage nodes as described in :ref:`create-the-XFS-partition`.
* Sequentially run a deployment pass on every SwiftProxy node
(`fuel-swiftproxy-01` ... `fuel-swiftproxy-xx`), starting with the
`primary-swift-proxy` node. There are 2 Swift Proxies by default.
* Sequentially run a deployment pass on every Storage node (`fuel-swift-01` ...
`fuel-swift-xx`).
* Sequentially run a deployment pass on the Controller nodes
(`fuel-controller-01` ... `fuel-controller-xx`) starting with the
`primary-controller` node.
* Run a deployment pass on every Compute node (`fuel-compute-01` ...
`fuel-compute-xx`) - unlike the Controllers, these nodes may be deployed in parallel.
* Run an additional deployment pass on `fuel-controller-01` to finalize the
Galera cluster configuration.
**Example 2: Compact OpenStack deployment with Storage and swift-proxy
combined with nova-controller on the same nodes**
(:download:`Download example YAML file </_static/compact.yaml>`)
* Create the necessary volumes on Controller nodes as described
in :ref:`create-the-XFS-partition`
* Sequentially run a deployment pass on the Controller nodes
(`fuel-controller-01` ... `fuel-controller-xx`), starting with the
`primary-controller` node. Errors in Swift storage such as ``*/Stage[main]
/Swift::Storage::Container/Ring_container_device[<device address>]: Could not
evaluate: Device not found check device on <device address>*`` are expected
during the deployment passes until the very final pass.
* Run an additional deployment pass on `fuel-controller-01` only to finalize the
Galera cluster configuration.
* Run a deployment pass on every Compute node (`fuel-compute-01` ...
`fuel-compute-xx`) - unlike the Controllers these nodes may be deployed in parallel.
**Example 3:** **Simple OpenStack non-HA installation**
(:download:`Download example YAML file </_static/simple.yaml>`)
* Sequentially run a deployment pass on the Controller (`fuel-controller-01`).
No errors should appear during this deployment pass.
* Run a deployment pass on every Compute node (`fuel-compute-01` ...
`fuel-compute-xx`) - unlike the Controllers these nodes may be deployed in parallel.
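The ordering rules above (controllers strictly sequential because of Galera, computes in parallel) can be sketched as follows; ``deploy_node`` is a hypothetical stand-in for a real deployment pass, not a Fuel API:

```python
# Sketch of the deployment ordering: controllers one at a time,
# compute nodes concurrently. deploy_node() is a hypothetical stub.
from concurrent.futures import ThreadPoolExecutor

def deploy_node(name):
    # A real pass would run "puppet agent --test" on the node.
    return f"{name}: done"

controllers = ["fuel-controller-01", "fuel-controller-02", "fuel-controller-03"]
computes = ["fuel-compute-01", "fuel-compute-02"]

# Controllers: strictly sequential (Galera cluster assembly).
results = [deploy_node(n) for n in controllers]

# Computes: parallel once the controllers are in place.
with ThreadPoolExecutor() as pool:
    results += list(pool.map(deploy_node, computes))

print(results)
```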
.. raw:: pdf
PageBreak
.. index:: Testing OpenStack Cluster Manually
Testing OpenStack Cluster
=========================
Now that you've installed OpenStack, it's time to take your new OpenStack cloud
for a drive around the block. Follow these steps:
1. On the host machine, open your browser to http://192.168.0.10/ (change the
IP address value to your own ``public_virtual_ip``) and login as
``nova/nova`` (unless you changed these credentials in YAML file).
2. Click the ``Project`` tab in the left-hand column.
3. Under ``Manage Compute``, choose ``Access & Security`` to set security
settings:
- Click ``Create Keypair`` and enter a name for the new keypair. The
private key should download automatically; make sure to keep it safe.
- Click ``Access & Security`` again and click ``Edit Rules`` for the
default Security Group.
- Add a new rule allowing TCP connections from
port 22 to port 22 for all IP addresses using a CIDR of 0.0.0.0/0.
(You can also customize this setting as necessary.)
- Click ``Add Rule`` to save the new rule.
- Add a second new rule allowing ICMP connections with a type and code of
-1 to the default Security Group and click ``Add Rule`` to save.
4. Click ``Allocate IP To Project`` and add two new floating IPs. Notice that
they come from the pool specified in ``config.yaml`` and ``site.pp``.
5. Click ``Images & Snapshots``, then ``Create Image``.
Enter a name and specify the ``Image Location`` as
https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
with a ``Format`` of ``QCOW2``. Check the ``Public`` checkbox.
6. The next step is to upload an image to use for creating VMs, but an OpenStack
bug prevents you from doing this in the browser. Instead, log in to any
of the controllers as ``root`` and execute the following commands::
cd ~
source openrc
glance image-create --name cirros --container-format bare --disk-format qcow2 --is-public yes \
--location https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
7. Go back to the browser and refresh the page. Launch a new instance of this image
using the ``tiny`` flavor. Click the ``Networking`` tab and choose the
default ``net04_ext`` network, then click the ``Launch`` button.
8. On the instances page:
- Click the new instance and look at the settings.
- Click the ``Logs`` tab to look at the logs.
- Click the ``VNC`` tab to log in. If you see just a big black rectangle, the
machine is in screensaver mode; click the grey area and press the space
bar to wake it up, then log in as ``cirros/cubswin:)``.
- At the command line, enter ``ifconfig -a | more`` and see the assigned IP address.
- Enter ``sudo fdisk -l`` to see that no volume has yet been assigned to this VM.
9. On the ``Instances`` page, click ``Assign Floating IP`` and assign an IP
address to your instance. You can either choose from one of the existing
created IPs by using the pulldown menu or click the plus sign (+) to choose
a network and allocate a new IP address.
- From your host machine, ping the floating IP assigned to this VM.
- If that works, try to ``ssh cirros@floating-ip`` from the host machine.
10. Back in the browser, click ``Volumes`` and ``Create Volume``. Create the
new volume, and attach it to the instance.
11. Go back to the VNC tab and repeat ``fdisk -l`` to see the new unpartitioned
disk attached.
Now your new VM is ready to be used.
.. raw:: pdf
PageBreak
.. index:: Installing Fuel Master Node
Installing Fuel Master Node
===========================
.. contents::
   :local:
Fuel is distributed as both ISO and IMG images, each of which contains
an installer for the Fuel Master node. The ISO image is used for CD media devices,
iLO, or similar remote access systems. The IMG file is used for USB memory sticks.
Once installed, Fuel can be used to deploy and manage OpenStack clusters. It
will assign IP addresses to the nodes, perform PXE boot and initial
configuration, and provision OpenStack nodes according to their roles in
the cluster.
.. _Install_Bare-Metal:
On Bare-Metal Environment
-------------------------
To install Fuel on bare-metal hardware, you need to burn the provided ISO to
a CD/DVD or create a bootable USB stick. You would then begin the
installation process by booting from that media, very much like any other OS.
Burning an ISO to optical media is well supported on all operating systems.
For Linux there are several interfaces available such as `Brasero` or `Xfburn`,
two of the more commonly pre-installed desktop applications. There are also
a number for Windows such as `ImgBurn <http://www.imgburn.com/>`_ and the
open source `InfraRecorder <http://infrarecorder.org/>`_.
Burning an ISO in Mac OS X is straightforward. Open `Disk Utility` from
`Applications > Utilities`, drag the ISO into the disk list on the left side
of the window and select it, insert blank media with enough room, and click
`Burn`. If you prefer a utility, check out the open source `Burn
<http://burn-osx.sourceforge.net/Pages/English/home.html>`_.
Installing the ISO to a bootable USB stick, however, is an entirely different
matter. Canonical suggests `PenDriveLinux`, which is a GUI tool for Windows.
On Windows, you can write the installation image with a number of different
utilities. The following list links to some of the more popular ones, all
available at no cost:
- `Win32 Disk Imager <http://sourceforge.net/projects/win32diskimager/>`_.
- `ISOtoUSB <http://www.isotousb.com/>`_.
After the installation is complete, you will need to allocate bare-metal
nodes for your OpenStack cluster, put them on the same L2 network as the
Master node, and PXE boot. The UI will discover them and make them available
for installing OpenStack.
On VirtualBox
-------------
If you are going to evaluate Fuel on VirtualBox, you should know that we
provide a set of scripts that create and configure all of the required VMs for
you, including the Master node and Slave nodes for OpenStack itself. It's a very
simple, single-click installation.
.. note::
These scripts are not supported on Windows, but you can still test on
VirtualBox by creating the VMs by yourself. See :ref:`Install_Manual` for more
details.
The requirements for running Fuel on VirtualBox are:
A host machine with Linux or Mac OS.
The scripts have been tested on Mac OS 10.7.5, Mac OS 10.8.3, Ubuntu 12.04 and Ubuntu 12.10.
VirtualBox 4.2.12 (or later) must be installed with the extension pack. Both
can be downloaded from `<http://www.virtualbox.org/>`_.
8 GB+ of RAM
to handle 4 VMs for non-HA OpenStack installation (1 Master node, 1 Controller
node, 1 Compute node, 1 Cinder node) or
to handle 5 VMs for HA OpenStack installation (1 Master node, 3 Controller
nodes, 1 Compute node)
.. _Install_Automatic:
Automatic Mode
++++++++++++++
When you unpack the scripts, you will see the following important files and
folders:
`iso`
This folder needs to contain a single ISO image for Fuel. Once you have
downloaded the ISO from the portal, copy or move it into this directory.
`config.sh`
This file contains the configuration, which can be fine-tuned. For example, you
can select how many virtual nodes to launch, as well as how much memory to allocate to them.
`launch.sh`
Once executed, this script will pick up an image from the ``iso`` directory,
create a VM, mount the image to this VM, and automatically install the Fuel
Master node.
After installation of the Master node, the script will create Slave nodes for
OpenStack and boot them via PXE from the Master node.
Finally, the script will give you the link to access the Web-based UI for the
Master node so you can start installation of an OpenStack cluster.
.. _Install_Manual:
Manual Mode
+++++++++++
.. note::
These manual steps will allow you to set up the evaluation environment
for the vanilla OpenStack release only. `RHOS installation is not possible.`
To download and deploy Red Hat OpenStack you need to use the automated VirtualBox
helper scripts or install Fuel on bare metal (see :ref:`Install_Bare-Metal`).
If you cannot or would rather not run our helper scripts, you can still run
Fuel on VirtualBox by following these steps.
Master Node Deployment
^^^^^^^^^^^^^^^^^^^^^^
First, create the Master node VM.
1. Configure the host-only interface vboxnet0 in VirtualBox.
* IP address: 10.20.0.1
* Interface mask: 255.255.255.0
* DHCP disabled
2. Create a VM for the Master node with the following parameters:
* OS Type: Linux, Version: Red Hat (64bit)
* RAM: 1024 MB
* HDD: 20 GB, with dynamic disk expansion
* CDROM: mount Fuel ISO
* Network 1: host-only interface vboxnet0
3. Power on the VM in order to start the installation.
4. Wait for the Welcome message with all information needed to login into the UI
of Fuel.
Adding Slave Nodes
^^^^^^^^^^^^^^^^^^
Next, create Slave nodes where OpenStack needs to be installed.
1. Create 3 or 4 additional VMs, depending on your needs, with the following parameters:
* OS Type: Linux, Version: Red Hat (64bit)
* RAM: 1024 MB
* HDD: 30 GB, with dynamic disk expansion
* Network 1: host-only interface vboxnet0, PCnet-FAST III device
2. Set priority for the network boot:
.. image:: /_images/vbox-image1.png
:align: center
3. Configure the network adapter on each VM:
.. image:: /_images/vbox-image2.png
:align: center
Changing Network Parameters Before the Installation
---------------------------------------------------
You can change the network settings for the Fuel (PXE booting) network, which
is ``10.20.0.2/24 gw 10.20.0.1`` by default.
In order to do so, press the <TAB> key on the very first installation screen
which says "Welcome to Fuel Installer!" and update the kernel options. For
example, to use the 192.168.1.10/24 IP address for the Master node and 192.168.1.1
as the gateway and DNS server, you should change the parameters to those shown
in the image below:
.. image:: /_images/network-at-boot.jpg
:align: center
When you're finished making changes, press the <ENTER> key and wait for the
installation to complete.
Changing Network Parameters After Installation
----------------------------------------------
It is still possible to configure other interfaces, or add 802.1Q sub-interfaces
to the Master node to be able to access it from your network if required.
It is easy to do via standard network configuration scripts for CentOS. When the
installation is complete, you can modify
``/etc/sysconfig/network-scripts/ifcfg-eth\*`` scripts. For example, if *eth1*
interface is on the L2 network which is planned for PXE booting, and *eth2* is
the interface connected to your office network switch, *eth0* is not in use, then
settings can be the following:
/etc/sysconfig/network-scripts/ifcfg-eth0::
DEVICE=eth0
ONBOOT=no
/etc/sysconfig/network-scripts/ifcfg-eth1::
DEVICE=eth1
ONBOOT=yes
HWADDR=<your MAC>
..... (other settings in your config) .....
PEERDNS=no
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
/etc/sysconfig/network-scripts/ifcfg-eth2::
DEVICE=eth2
ONBOOT=yes
HWADDR=<your MAC>
..... (other settings in your config) .....
PEERDNS=no
IPADDR=172.18.0.5
NETMASK=255.255.255.0
.. warning::
Once IP settings are set at the boot time for Fuel Master node, they
**should not be changed during the whole lifecycle of Fuel.**
After modifying the network configuration files, apply the
new configuration::
service network restart
Now you should be able to connect to Fuel UI from your network at
http://172.18.0.5:8000/
Name Resolution (DNS)
---------------------
During Master node installation, it is assumed that there is a recursive DNS
service on 10.20.0.1.
If you want Slave nodes to be able to resolve public names, you need to
change this default value to point to an actual DNS service.
To make the change, run the following command on Fuel Master node (replace IP to
your actual DNS)::
echo "nameserver 172.0.0.1" > /etc/dnsmasq.upstream
PXE Booting Settings
--------------------
By default, `eth0` on the Fuel Master node serves PXE requests. If you are planning
to use another interface, then you need to modify the dnsmasq settings (dnsmasq
acts as the DHCP server). Edit the file ``/etc/cobbler/dnsmasq.template``, find the line
``interface=eth0`` and replace the interface name with the one you want to use.
Afterwards, run the following command to synchronize the cobbler service::
cobbler sync
During synchronization, cobbler builds the actual dnsmasq configuration file
``/etc/dnsmasq.conf`` from the template ``/etc/cobbler/dnsmasq.template``. That is
why you should not edit ``/etc/dnsmasq.conf`` directly; cobbler rewrites it each
time it is synchronized.
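The template edit described above amounts to a single-line substitution. As an illustration (the template text here is a truncated, hypothetical fragment, not the full cobbler template):

```python
# Swap the PXE interface in a dnsmasq template fragment.
# The template string below is a hypothetical, truncated example.
import re

template = "dhcp-no-override\ninterface=eth0\ndomain=example.com\n"
updated = re.sub(r"^interface=eth0$", "interface=eth1", template, flags=re.M)

assert "interface=eth1" in updated
print(updated)
```

Remember that cobbler regenerates ``/etc/dnsmasq.conf`` from the template, so the substitution belongs in ``/etc/cobbler/dnsmasq.template``, never in the generated file.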
If you want to use virtual machines to launch Fuel, then you have to make sure
that dnsmasq on the Master node is configured to support the PXE client used by
your virtual machines. We enabled the *dhcp-no-override* option because without it
dnsmasq tries to move the ``PXE filename`` and ``PXE servername`` special fields
into DHCP options. Not all PXE implementations can recognize those options and
therefore they will not be able to boot. For example, CentOS 6.4 uses the gPXE
implementation by default, instead of the more advanced iPXE.
When Master Node Installation is Done
-------------------------------------
Once the Master node is installed, power on all other nodes and log in to the
Fuel UI.
Slave nodes will be booted in bootstrap mode (CentOS based Linux in memory) via
PXE and you will see notifications in the user interface about discovered nodes.
This is the point when you can create an environment, add nodes into it, and
start configuration.
Network configuration is the most complicated part, so please read the networking
section of the documentation carefully.
.. raw:: pdf
PageBreak
.. index:: Fuel UI: Network Configuration
Understanding and Configuring the Network
=========================================
.. contents::
   :local:
OpenStack clusters use several types of network managers: FlatDHCPManager,
VLANManager and Neutron (formerly Quantum). The current version of Fuel UI
supports only two (FlatDHCP and VLANManager), but Fuel CLI supports all
three. For more information about how the first two network managers work,
you can read these two resources:
* `OpenStack Networking FlatManager and FlatDHCPManager
<http://www.mirantis.com/blog/openstack-networking-flatmanager-and-flatdhcpmanager/>`_
* `Openstack Networking for Scalability and Multi-tenancy with VLANManager
<http://www.mirantis.com/blog/openstack-networking-vlanmanager/>`_
FlatDHCPManager (multi-host scheme)
-----------------------------------
The main idea behind the flat network manager is to configure a bridge
(i.e. **br100**) on every Compute node and have one of the machine's host
interfaces connect to it. Once the virtual machine is launched, its virtual
interface will connect to that bridge as well.
The same L2 segment is used for all OpenStack projects, which means that there
is no L2 isolation between virtual hosts, even if they are owned by separate
projects, and there is only one flat IP pool defined for the cluster. For this
reason it is called the *Flat* manager.
The simplest case is shown in the following diagram. Here the *eth1*
interface is used to give network access to virtual machines, while *eth0*
interface is the management network interface.
.. image:: /_images/flatdhcpmanager-mh_scheme.jpg
:align: center
Fuel deploys OpenStack in FlatDHCP mode with the so-called **multi-host**
feature enabled. Without this feature enabled, network traffic from each VM
would go through the single gateway host, which basically becomes a single
point of failure. With the feature enabled, each Compute node becomes a gateway for
all the VMs running on the host, providing a balanced networking solution.
In this case, if one of the Computes goes down, the rest of the environment
remains operational.
The current version of Fuel uses VLANs, even for the FlatDHCP network
manager. On the Linux host, it is implemented in such a way that it is not
the physical network interfaces that are connected to the bridge, but the
VLAN interface (i.e. *eth0.102*).
FlatDHCPManager (single-interface scheme)
-----------------------------------------
.. image:: /_images/flatdhcpmanager-sh_scheme.jpg
:align: center
Therefore, all switch ports to which Compute nodes are connected must be
configured as tagged (trunk) ports with the required VLANs allowed (enabled,
tagged). Virtual machines will communicate with each other on L2 even if
they are on different Compute nodes. If the virtual machine sends IP packets
to a different network, they will be routed on the host machine according to
the routing table. The default route will point to the gateway specified on
the networks tab in the UI as the gateway for the Public network.
VLANManager
------------
VLANManager mode is more suitable for large scale clouds. The idea behind
this mode is to separate groups of virtual machines, owned by different
projects, on different L2 layers. In VLANManager this is done by tagging
Ethernet frames, or simply speaking, by VLANs. This allows virtual machines
inside a given project to communicate with each other without seeing any
traffic from VMs of other projects. Switch ports must be configured as
tagged (trunk) ports to allow this scheme to work.
.. image:: /_images/vlanmanager_scheme.jpg
:align: center
.. raw:: pdf
PageBreak
.. index:: Fuel UI: Deployment Schema
Fuel Deployment Schema
======================
One of the physical interfaces on each host has to be chosen to carry
VM-to-VM traffic (the Fixed network), and switch ports must be configured to
allow tagged traffic to pass through. OpenStack Compute nodes untag the IP
packets and send them to the appropriate VMs. Beyond simplifying the
configuration of VLAN Manager, Fuel adds no known limitations in this
networking mode.
Configuring the network
-----------------------
Once you choose a networking mode (FlatDHCP/VLAN), you must configure equipment
accordingly. The diagram below shows an example configuration.
.. image:: /_images/physical-network.jpg
:width: 100%
:align: center
Fuel operates with the following logical networks:

**Fuel** network
  Used only for internal Fuel communications and PXE booting (untagged in the diagram);

**Public** network
  Used to give virtual machines access to the outside world, such as the Internet or an office network (VLAN 101 in the diagram);

**Floating** network
  Used to access virtual machines from outside (shares the L2 interface with the Public network; in this case it is VLAN 101);

**Management** network
  Used for internal OpenStack communications (VLAN 102 in the diagram);

**Storage** network
  Used for storage traffic (VLAN 103 in the diagram);

**Fixed** network
  One (in flat mode) or more (in VLAN mode) virtual machine networks (VLAN 104 in the diagram).
Mapping logical networks to physical interfaces on servers
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Fuel allows you to use different physical interfaces to handle different
types of traffic. When a node is added to the environment, click the bottom
line of the node icon. In the detailed information window, click the "Network
Configuration" button to open the physical interfaces configuration screen.
.. image:: /_images/network-settings.jpg
:align: center
On this screen you can drag-and-drop logical networks to physical interfaces
according to your network setup.
All networks are shown on the screen except the Fuel network. It runs on the
physical interface from which the node was initially PXE booted, and in the
current version it is not possible to map it to any other physical
interface. Also, once the network is configured and OpenStack is deployed,
you may not modify network settings, even to move a logical network to
another physical interface or VLAN number.
Switch
++++++
Fuel can configure hosts; however, switch configuration is still manual
work. Unfortunately the set of configuration steps, and even the terminology
used, differs from vendor to vendor, so we will try to provide
vendor-agnostic information on how traffic should flow and leave the
vendor-specific details to you. We will provide an example for a Cisco switch.
First of all, you should configure access ports to allow non-tagged PXE
booting connections from all Slave nodes to the Fuel node. We refer to this
network as the Fuel network.
By default, the Fuel Master node uses the `eth0` interface to serve PXE
requests on this network.
So if that's left unchanged, you have to set the switch port for `eth0` of
the Fuel Master node to access mode.
We recommend that you use the `eth0` interfaces of all other nodes for PXE booting
as well. Corresponding ports must also be in access mode.
Taking into account that this is the network for PXE booting, do not mix
this L2 segment with any other network segments. Fuel runs a DHCP
server, and if there is another DHCP server on the same L2 network segment,
both the company's infrastructure and Fuel will be unable to function
properly.
You also need to configure each of the switch's ports connected to nodes as an
"STP Edge port" (or a "spanning-tree port fast trunk", according to Cisco
terminology). If you don't do that, DHCP timeout issues may occur.
Once the Fuel network is configured, Fuel can operate.
Other networks are required for OpenStack environments, and currently all of
these networks live in VLANs over one or more physical interfaces on a
node. This means that the switch should pass tagged traffic, and untagging
is done on the Linux hosts.
.. note:: For the sake of simplicity, all the VLANs specified on the networks tab of
the Fuel UI should be configured on switch ports, pointing to Slave nodes,
as tagged.
Of course, it is possible to specify only certain ports for certain nodes as
tagged. However, in the current version, all existing networks are
automatically allocated for each node, regardless of its role.
The network check will also verify that tagged traffic passes, even for nodes
that do not require it (for example, Cinder nodes do not need Fixed network traffic).
This is enough to deploy the OpenStack environment. However, from a
practical standpoint, it's still not really usable because there is no
connection to other corporate networks yet. To make that possible, you must
configure uplink port(s).
One of the VLANs may carry the office network. To provide access to the Fuel Master
node from your network, any other free physical network interface on the
Fuel Master node can be used and configured according to your network
rules (static IP or DHCP). The same network segment can be used for
Public and Floating ranges. In this case, you must provide the corresponding
VLAN ID and IP ranges in the UI. One Public IP per node will be used to SNAT
traffic out of the VMs network, and one or more floating addresses per VM
instance will be used to get access to the VM from your network, or
even the global Internet. Making a VM visible from the Internet is similar
to making it visible from the corporate network: corresponding IP ranges and
VLAN IDs must be specified for the Floating and Public networks. One current
limitation of Fuel is that you must use the same L2 segment for both the
Public and Floating networks.
Example configuration for one of the ports on a Cisco switch::
interface GigabitEthernet0/6 # switch port
description s0_eth0 jv # description
switchport trunk encapsulation dot1q # enables VLANs
switchport trunk native vlan 262 # access port, untags VLAN 262
switchport trunk allowed vlan 100,102,104 # 100,102,104 VLANs are passed with tags
switchport mode trunk # To allow more than 1 VLAN on the port
spanning-tree portfast trunk # STP Edge port to skip network loop
# checks (to prevent DHCP timeout issues)
vlan 262,100,102,104 # Might be needed for enabling VLANs
Router
++++++
To make it possible for VMs to access the outside world, you must have an IP
address set on a router in the Public network. In the examples provided,
that IP is 12.0.0.1 in VLAN 101.
Fuel UI has a special field on the networking tab for the gateway address. As
soon as deployment of OpenStack is started, the network on nodes is
reconfigured to use this gateway IP as the default gateway.
If Floating addresses are from another L3 network, then you have to configure
an IP address (or multiple IPs, if Floating addresses come from more than one
L3 network) for them on the router as well.
Otherwise, Floating IPs on nodes will be inaccessible.
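On a Linux-based router, these gateway addresses could be configured as
follows (a sketch using the example addresses from this section; the Floating
range 12.0.1.0/24 and the interface name are assumptions for illustration)::

    # Gateway address for the Public network (VLAN 101 in the example)
    ip addr add 12.0.0.1/24 dev eth0.101

    # If Floating IPs come from a separate L3 network, add a gateway
    # address for that range on the router as well (range is assumed)
    ip addr add 12.0.1.1/24 dev eth0.101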
.. _access_to_public_net:
Deployment configuration to access OpenStack API and VMs from host machine
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Helper scripts for VirtualBox create network adapters `eth0`, `eth1`, `eth2`,
which are represented on the host machine as `vboxnet0`, `vboxnet1`, `vboxnet2`
correspondingly, and assign IP addresses to the adapters:
vboxnet0 - 10.20.0.1/24,
vboxnet1 - 172.16.1.1/24,
vboxnet2 - 172.16.0.1/24.
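You can confirm these adapters and their addresses on the host with
VirtualBox's own CLI (assuming a standard VirtualBox installation)::

    # List VirtualBox host-only interfaces and their IPv4 addresses
    VBoxManage list hostonlyifs | grep -E '^(Name|IPAddress):'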
For the demo environment on VirtualBox, the first network adapter is used to run Fuel
network traffic, including PXE discovery.
To access Horizon and the OpenStack RESTful API via the Public network from
the host machine, you must have a route from your host to the Public IP
address on the OpenStack Controller. Also, if access to a VM's Floating IP
is required, you must have a route to that Floating IP on the Compute host,
where it is bound to the Public interface.
To make this configuration possible in the VirtualBox demo environment, you
have to run the Public network untagged. The image below shows the
configuration of the Public and Floating networks that makes this possible.
.. image:: /_images/vbox_public_settings.jpg
:align: center
By default, the Public and Floating networks run on the first network
interface. You must change this, as shown in the image below. Make sure you
change it on every node.
.. image:: /_images/vbox_node_settings.jpg
:align: center
If you use the default configuration in the VirtualBox scripts, and follow
the exact same settings as in the images above, you should be able to access
OpenStack Horizon via the Public network after the installation.
If you want to enable Internet access for VMs provisioned by OpenStack, you
have to configure NAT on the host machine: when packets reach the `vboxnet1`
interface (as defined on the OpenStack settings tab), they need a way out of
the host.
For Ubuntu, the following command, executed on the host, can make this happen::
sudo iptables -t nat -A POSTROUTING -s 172.16.1.0/24 \! -d 172.16.1.0/24 -j MASQUERADE
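Note that the MASQUERADE rule only takes effect if the host is allowed to
forward packets between interfaces; this standard prerequisite (not
Fuel-specific) can be enabled like so::

    # Allow the host kernel to forward packets between interfaces
    sudo sysctl -w net.ipv4.ip_forward=1

    # To make it persistent across reboots on Ubuntu, set
    # net.ipv4.ip_forward=1 in /etc/sysctl.conf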
To access VMs managed by OpenStack, you need to assign them IP addresses from
the Floating IP range. Once the OpenStack cluster is deployed and a VM is
provisioned there, you have to associate one of the Floating IP addresses
from the pool with this VM, either in Horizon or via the Nova CLI. By
default, OpenStack blocks all traffic to the VM.
To allow connectivity to the VM, you need to configure security groups.
This can be done in Horizon, or from the OpenStack Controller using the
following commands::
. /root/openrc
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
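Allocating and associating a Floating IP can be done with the same Nova CLI
(a sketch; the instance name ``test-vm`` and the IP address shown are
placeholders, and the address returned by ``floating-ip-create`` will differ)::

    . /root/openrc

    # Allocate a Floating IP address from the default pool
    nova floating-ip-create

    # Attach the allocated address to a running instance
    # (both the instance name and the IP here are examples)
    nova add-floating-ip test-vm 172.16.0.130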
IP ranges for the Public and Management networks (172.16.*.*) are defined in
the ``config.sh`` script. If the default values don't fit your needs, you are
free to change them, but only before the Fuel Master node is installed.
@ -1,4 +1,8 @@
.. index:: Preface
.. raw:: pdf
PageBreak
.. index: Preface
.. _Preface:
@ -26,4 +26,4 @@
.. contents:: Table of Contents
:depth: 2
.. include:: contents.rst
.. include:: contents-user.rst
29
pdf_install.rst Normal file
@ -0,0 +1,29 @@
.. header::
.. cssclass:: header-table
+-------------------------------------+-----------------------------------+
| Fuel™ for Openstack v3.1 | .. cssclass:: right|
| | |
| Installation Guide | ###Section### |
+-------------------------------------+-----------------------------------+
.. footer::
.. cssclass:: footer-table
+--------------------------+----------------------+
| | .. cssclass:: right|
| | |
| ©2013, Mirantis Inc. | Page ###Page### |
+--------------------------+----------------------+
.. raw:: pdf
PageBreak oneColumn
.. contents:: Table of Contents
:depth: 2
.. include:: contents-install.rst
29
pdf_reference.rst Normal file
@ -0,0 +1,29 @@
.. header::
.. cssclass:: header-table
+-------------------------------------+-----------------------------------+
| Fuel™ for Openstack v3.1 | .. cssclass:: right|
| | |
| Reference Architecture | ###Section### |
+-------------------------------------+-----------------------------------+
.. footer::
.. cssclass:: footer-table
+--------------------------+----------------------+
| | .. cssclass:: right|
| | |
| ©2013, Mirantis Inc. | Page ###Page### |
+--------------------------+----------------------+
.. raw:: pdf
PageBreak oneColumn
.. contents:: Table of Contents
:depth: 2
.. include:: contents-refarch.rst