restructured the repository, fixed issue with PDF generation

This commit is contained in:
Svetlana Sychova 2013-10-02 23:58:16 -07:00
parent e124f5e4b8
commit 7e3b005003
48 changed files with 4125 additions and 21 deletions

BIN
.pdf_install.rst.swp Normal file

Binary file not shown.

View File

@ -283,7 +283,7 @@ nwdiag_antialias = True
extensions += ['rst2pdf.pdfbuilder']
pdf_documents = [
('pdf_index', u'Fuel-for-OpenStack-3.1-UserGuide', u'User Guide',
('pdf_user', u'Fuel-for-OpenStack-3.1-UserGuide', u'User Guide',
u'2013, Mirantis Inc.'),
('pdf_install', u'Fuel-for-Openstack-3.1-InstallGuide', u'Installation Guide', u'2013, Mirantis Inc.'),
('pdf_reference', u'Fuel-for-OpenStack-3.1-ReferenceArchitecture', u'Reference Architecture', u'2013, Mirantis Inc.')
@ -303,7 +303,7 @@ pdf_break_level = 1
# When a section starts in a new page, force it to be 'even', 'odd',
# or just use 'any'
pdf_breakside = 'odd'
pdf_breakside = 'any'
# Insert footnotes where they are defined instead of
# at the end.

View File

@ -5,6 +5,15 @@
.. toctree:: Table of Contents
:maxdepth: 2
copyright_page
preface
instal-guide
.. include:: /pages/install-guide/0030-how-it-works.rst
.. include:: /pages/install-guide/0040-reference-topologies.rst
.. include:: /pages/install-guide/0050-supported-software.rst
.. include:: /pages/install-guide/0060-download-fuel.rst
.. include:: /pages/install-guide/0015-sizing-hardware.rst
.. include:: /pages/install-guide/install.rst
.. include:: /pages/install-guide/networks.rst
.. include:: /pages/install-guide/0000-preamble.rst
.. include:: /pages/install-guide/0010-introduction.rst
.. include:: /pages/install-guide/0015-before-you-start.rst
.. include:: /pages/install-guide/0020-machines.rst

View File

@ -6,4 +6,15 @@
:maxdepth: 2
preface
reference-architecture
.. include:: /pages/reference-architecture/0010-overview.rst
.. include:: /pages/reference-architecture/0012-simple.rst
.. include:: /pages/reference-architecture/0014-compact.rst
.. include:: /pages/reference-architecture/0016-full.rst
.. include:: /pages/reference-architecture/0015-closer-look.rst
.. include:: /pages/reference-architecture/0018-red-hat-differences.rst
.. include:: /pages/reference-architecture/0020-logical-setup.rst
.. include:: /pages/reference-architecture/0030-cluster-sizing.rst
.. include:: /pages/reference-architecture/0040-network-setup.rst
.. include:: /pages/reference-architecture/0050-technical-considerations-overview.rst
.. include:: /pages/reference-architecture/0060-quantum-vs-nova-network.rst
.. include:: /pages/reference-architecture/0080-swift-notes.rst

View File

@ -6,7 +6,11 @@
:maxdepth: 2
preface
.. include:: /pages/about-fuel/0070-introduction.rst
.. include:: /pages/installation-fuel-ui/red_hat_openstack.rst
.. include:: /pages/installation-fuel-ui/post-install-healthchecks.rst
.. include:: /pages/troubleshooting-ug/network-issues.rst
.. include:: /pages/user-guide/0070-introduction.rst
.. include:: /pages/user-guide/red_hat_openstack.rst
.. include:: /pages/user-guide/post-install-healthchecks.rst
.. include:: /pages/user-guide/troubleshooting-ug/network-issues.rst
.. include:: /pages/user-guide/advanced-topics/0010-introduction.rst
.. include:: /pages/user-guide/advanced-topics/0020-custom-plug-ins.rst
.. include:: /pages/user-guide/advanced-topics/0030-quantum-HA.rst
.. include:: /pages/user-guide/advanced-topics/0040-bonding.rst

View File

@ -10,4 +10,4 @@
reference-architecture
release-notes
frequently-asked-questions
copyright
eula

View File

@ -13,16 +13,16 @@ PDFs
---------
The following Fuel documentation is available in PDF:
* Installation Guide
* `Installation Guide <../pdf/Fuel-for-Openstack-3.1-InstallGuide.pdf>`_
This document describes how to pre-configure your
OpenStack environment and install the Fuel Master Node.
* User Guide
* `User Guide <../pdf/Fuel-for-OpenStack-3.1-UserGuide.pdf>`_
This document describes how to deploy compute nodes for Fuel.
* Reference Architecture
* `Reference Architecture <../pdf/Fuel-for-OpenStack-3.1-ReferenceArchitecture.pdf>`_
A deep dive into the structure of the Fuel OpenStack environment,
network considerations, and deployment options.

View File

@ -7,6 +7,7 @@ Installation Guide
.. contents:: :local:
:depth: 2
.. include:: /pages/install-guide/0030-how-it-works.rst
.. include:: /pages/install-guide/0040-reference-topologies.rst
.. include:: /pages/install-guide/0050-supported-software.rst

View File

@ -0,0 +1,9 @@
OpenStack is an extensible, versatile, and flexible cloud management platform. By exposing its portfolio of cloud infrastructure services (compute, storage, networking, and other core resources) through REST APIs, OpenStack enables a wide range of control over these services, both from the perspective of an integrated Infrastructure as a Service (IaaS) controlled by applications and for automated manipulation of the infrastructure itself.
This architectural flexibility doesn't set itself up magically. It asks you, the user and cloud administrator, to organize and manage an extensive array of configuration options. Consequently, getting the most out of your OpenStack cloud over time in terms of flexibility, scalability, and manageability requires a thoughtful combination of complex configuration choices. This can be very time consuming and requires studying a significant amount of documentation.
Mirantis Fuel for OpenStack was created to eliminate exactly these problems. This step-by-step guide takes you through the process of:
* Configuring OpenStack and its supporting components into a robust cloud architecture
* Deploying that architecture through an effective, well-integrated automation package that sets up and maintains the components and their configurations
* Providing access to a well-integrated, up-to-date set of components known to work together

View File

@ -0,0 +1,49 @@
.. raw:: pdf
PageBreak
.. index:: Prerequisites
Hardware Requirements
===========================
The amount of hardware depends on your deployment requirements.
When you plan your OpenStack environment, consider the following:
* **CPU**
Depends on the number of virtual machines that you plan to deploy
in your cloud environment and the CPU per virtual machine.
See :ref:`Calculating CPU Requirements`
* **Memory**
Depends on the amount of RAM assigned per virtual machine and the
controller node.
* **Storage**
Depends on the local drive space per virtual machine, remote volumes
that can be attached to a virtual machine, and object storage.
* **Networking**
Depends on the OpenStack architecture, network bandwidth per virtual
machine, and network storage.
Example of Hardware Requirements Calculation
-------------------------------------------------
When you calculate resources for your OpenStack environment, consider
resources required for expanding your environment.
The example described in this section presumes that your environment
has the following prerequisites:
* 100 virtual machines
* 2 x Amazon EC2 compute units 2 GHz average
* 16 x Amazon EC2 compute units 16 GHz maximum
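Based on these numbers, a back-of-the-envelope CPU estimate might look like the following (an illustrative calculation only; the per-core clock speed and cores per server are assumptions, not recommendations)::

100 VMs x 2 GHz average per VM      = 200 GHz of CPU capacity required
200 GHz / 2.4 GHz per physical core = ~84 cores
84 cores / 12 cores per server      = 7 compute servers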

View File

@ -0,0 +1,24 @@
.. index:: What is Fuel
.. _What_is_Fuel:
What is Fuel?
=============
Fuel is a ready-to-install collection of the packages and scripts you need
to create a robust, configurable, vendor-independent OpenStack cloud in your
own environment. As of Fuel 3.1, Fuel Library and Fuel Web have been merged
into a single toolbox with options to use the UI or CLI for management.
A single OpenStack cloud consists of packages from many different open source
projects, each with its own requirements, installation procedures, and
configuration management. Fuel brings all of these projects together into a
single open source distribution, with components that have been tested and are
guaranteed to work together, and all wrapped up using scripts to help you work
through a single installation.
Simply put, Fuel is a way for you to easily configure and install an
OpenStack-based infrastructure in your own environment.
.. image:: /_images/FuelSimpleDiagram.jpg
:align: center

View File

@ -0,0 +1,9 @@
.. raw:: pdf
PageBreak
.. index:: Large Scale Deployments
.. _Large_Scale_Deployments:
.. TODO(mihgen): Fill in this section. It needs to be completely rewritten.

15
pages/old/about-fuel.rst Normal file
View File

@ -0,0 +1,15 @@
.. index:: About Fuel
.. _About_Fuel:
============
About Fuel
============
.. contents:: :local:
:depth: 2
.. include:: /pages/about-fuel/0020-what-is-fuel.rst
.. include:: /pages/about-fuel/0030-how-it-works.rst
.. include:: /pages/about-fuel/0040-reference-topologies.rst
.. include:: /pages/about-fuel/0050-supported-software.rst
.. include:: /pages/about-fuel/0060-download-fuel.rst

View File

@ -0,0 +1,24 @@
.. index:: What is Fuel
.. _What_is_Fuel:
What is Fuel?
=============
Fuel is a ready-to-install collection of the packages and scripts you need
to create a robust, configurable, vendor-independent OpenStack cloud in your
own environment. As of Fuel 3.1, Fuel Library and Fuel Web have been merged
into a single toolbox with options to use the UI or CLI for management.
A single OpenStack cloud consists of packages from many different open source
projects, each with its own requirements, installation procedures, and
configuration management. Fuel brings all of these projects together into a
single open source distribution, with components that have been tested and are
guaranteed to work together, and all wrapped up using scripts to help you work
through a single installation.
Simply put, Fuel is a way for you to easily configure and install an
OpenStack-based infrastructure in your own environment.
.. image:: /_images/FuelSimpleDiagram.jpg
:align: center

View File

@ -0,0 +1,41 @@
.. raw:: pdf
PageBreak
.. index:: How Fuel Works
.. _How-Fuel-Works:
How Fuel Works
==============
Fuel works on a simple premise. Rather than installing each of the
components that make up OpenStack directly, you instead use a configuration
management system like Puppet to create scripts that can provide a
configurable, reproducible, sharable installation process.
In practice, Fuel works as follows:
1. First, set up Fuel Master Node using the ISO. This process only needs to
be completed once per installation.
2. Next, discover your virtual or physical nodes and configure your
OpenStack cluster using the Fuel UI.
3. Finally, deploy your OpenStack cluster on the discovered nodes. Fuel will
perform all deployment magic for you by applying pre-configured and
pre-integrated Puppet manifests via the Astute orchestration engine.
Fuel is designed to enable you to maintain your cluster while giving you the
flexibility to adapt it to your own configuration.
.. image:: /_images/how-it-works_svg.jpg
:align: center
Fuel comes with several pre-defined deployment configurations, some of which
include additional configuration options that allow you to adapt your OpenStack
deployment to your environment.
The Fuel UI integrates all of the deployment scripts into a unified,
Web-based Graphical User Interface that walks administrators through the
process of installing and configuring a fully functional OpenStack environment.

View File

@ -0,0 +1,41 @@
.. raw:: pdf
PageBreak
.. index:: Deployment Configurations
.. _Deployment_Configurations:
Deployment Configurations Provided By Fuel
==========================================
One of the advantages of Fuel is that it comes with a number of pre-built
deployment configurations that you can use to quickly build your own
OpenStack cloud infrastructure. These are well-specified configurations of
OpenStack and its constituent components that are expertly tailored to one
or more common cloud use cases. Fuel provides the ability to create the
following cluster types without requiring extensive customization:
**Simple (non-HA)**: The Simple (non-HA) installation provides an easy way
to install an entire OpenStack cluster without requiring the degree of
increased hardware involved in ensuring high availability.
**Multi-node (HA)**: When you're ready to begin your move to production, the
Multi-node (HA) configuration is a straightforward way to create an OpenStack
cluster that provides high availability. With three controller nodes and the
ability to individually specify services such as Cinder, Neutron (formerly
Quantum), and Swift, Fuel provides the following variations of the
Multi-node (HA) configurations:
- **Compact HA**: When you choose this option, Swift will be installed on
your controllers, reducing your hardware requirements by eliminating the need
for additional Swift servers while still addressing high availability
requirements.
- **Full HA**: This option enables you to install independent Swift and Proxy
nodes, so that you can separate their operation from your controller nodes.
In addition to these configurations, Fuel is designed to be completely
customizable. For assistance on deeper customization options based on the
included configurations you can `contact Mirantis for further assistance
<http://www.mirantis.com/contact/>`_.

View File

@ -0,0 +1,63 @@
.. raw:: pdf
PageBreak
.. index:: Supported Software Components
Supported Software Components
=============================
Fuel has been tested and is guaranteed to work with the following software
components:
* Operating Systems
  * CentOS 6.4 (x86_64 architecture only)
  * RHEL 6.4 (x86_64 architecture only)
* Puppet (IT automation tool)
  * 2.7.19
* MCollective
  * 2.3.1
* Cobbler (bare-metal provisioning tool)
  * 2.2.3
* OpenStack
  * Grizzly release 2013.1.2
* Hypervisor
  * KVM
  * QEMU
* Open vSwitch
  * 1.10.0
* HA Proxy
  * 1.4.19
* Galera
  * 23.2.2
* RabbitMQ
  * 2.8.7
* Pacemaker
  * 1.1.8
* Corosync
  * 1.4.3

View File

@ -0,0 +1,18 @@
.. raw:: pdf
PageBreak
.. index:: Download Fuel
Download Fuel
=============
The first step in installing Fuel is to download the version appropriate to
your environment.
Fuel is available for Grizzly OpenStack installation, and
will be available for Havana shortly after Havana's release.
The Fuel ISO and IMG, along with other Fuel releases, are available in the
`Downloads <http://fuel.mirantis.com/your-downloads/>`_ section of the Fuel
portal.

View File

@ -0,0 +1,41 @@
.. index:: Introduction
.. _Introduction:
Introducing Fuel™ for OpenStack
===============================
OpenStack is an extensible, versatile, and flexible cloud management
platform. By exposing its portfolio of cloud infrastructure services
(compute, storage, networking, and other core resources) through REST APIs,
OpenStack enables a wide range of control over these services, both from the
perspective of an integrated Infrastructure as a Service (IaaS) controlled
by applications, as well as automated manipulation of the infrastructure
itself.
This architectural flexibility doesn't set itself up magically. It asks you,
the user and cloud administrator, to organize and manage an extensive array
of configuration options. Consequently, getting the most out of your
OpenStack cloud over time in terms of flexibility, scalability, and
manageability requires a thoughtful combination of complex configuration
choices. This can be very time consuming and requires that you become
familiar with a lot of documentation from a number of different projects.
Mirantis Fuel™ for OpenStack was created to eliminate exactly these problems.
This step-by-step guide takes you through this process of:
* Configuring OpenStack and its supporting components into a robust cloud
architecture
* Deploying that architecture through an effective, well-integrated automation
package that sets up and maintains the components and their configurations
* Providing access to a well-integrated, up-to-date set of components known to
work together
Fuel™ for OpenStack can be used to create virtually any OpenStack
configuration. To make things easier, the installation includes several
pre-defined architectures. For the sake of simplicity, this guide emphasizes
a single, common reference architecture: the multi-node, high-availability
configuration. We begin with an explanation of this architecture, then move
on to the details of creating the configuration in a test environment using
VirtualBox. Finally, we give you the information you need to know to create
this and other OpenStack architectures in a production environment.

View File

@ -0,0 +1,3 @@
This section covers subjects that go beyond the standard OpenStack cluster,
from configuring OpenStack Networking for high-availability to adding your own
custom components to your cluster using Fuel.

View File

@ -0,0 +1,320 @@
Adding And Configuring Custom Services
--------------------------------------
Fuel is designed to help you easily install a standard OpenStack cluster, but what do you do if your cluster is not standard? What if you need services or components that are not included with the standard Fuel distribution? This document gives you all of the information you need to add custom services and packages to a Fuel-deployed cluster.
Fuel usage scenarios and how they affect installation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Two basic Fuel usage scenarios exist:
* In the first scenario, a deployment engineer uses the Fuel ISO image to create a master node, make necessary changes to configuration files, and deploy OpenStack. In this scenario, each node gets a clean OpenStack installation.
* In the second scenario, the master node and other nodes in the cluster have already been installed, and the deployment engineer has to deploy OpenStack to an existing configuration.
For this discussion, the key difference is that in the first scenario any needed customizations must be applied during the deployment, while in the second scenario the customizations have already been applied.
In most cases, best practices dictate that you deploy and test OpenStack first, later adding any custom services. Fuel works using puppet manifests, so the simplest way to install a new service is to edit the current site.pp file on the Puppet Master to add any additional deployment paths for the target nodes. There are, however, certain components that must be installed prior to the installation of OpenStack (i.e., hardware drivers, management software, etc...). In cases like these, Puppet can only be used to perform these installations using a separate, custom site.pp file that prepares the target system(s) for OpenStack installation. An advantage to this method, however, is that it helps isolate version mismatches and the various OpenStack dependencies.
If a pre-deployment site.pp approach is not an option, you can inject a custom component installation into the existing Fuel manifests. If you elect to go this route, you'll need to be aware of software source compatibility issues, as well as installation stages, component versions, incompatible dependencies, and declared resource names.
In short, simple custom component installation may be accomplished by editing the site.pp file, but more complex components should be added as new Fuel components.
In the next section we take a closer look at what you need to know.
Installing the new service along with Fuel
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When it comes to installing your new service or component alongside Fuel, you have several options. How you go about it depends on where in the process the component needs to be available. Let's look at each step and how it can impact your installation.
**Boot the master node**
In most cases, you will be installing the master node from the Fuel ISO. This is a semi-automated step, and doesn't allow for any custom components. If for some reason you need to install a node at this level, you will need to use the manual Fuel installation procedure.
**Cobbler configuration**
If your customizations need to take place before the install of the operating system, or even as part of the operating system install, this is where you will add them to the configuration process. This is also where you would make customizations to other services. At this stage, you are making changes to the operating system kickstart/pre-seed files, and may include any custom software source and components required to install the operating system for a node. Anything that needs to be installed before OpenStack should be configured during this step.
**OpenStack installation**
It is during this stage that you perform any Puppet, Astute, or mCollective configuration. In most cases, this means customizing the Puppet site.pp file to add any custom components during the actual OpenStack installation.
This step actually includes several different stages. (In fact, Puppet STDLib defines several additional default stages that Fuel does not use.) These stages include:
0. ``Puppetlabs-repo``. mCollective uses this stage to add the Puppetlabs repositories during operating system and Puppet deployment.
1. ``Openstack-custom-repo``. Additional repositories required by OpenStack are configured at this stage. Additionally, to avoid compatibility issues, the Puppetlabs repositories are switched off at this stage. As a general rule, it is a good idea to turn off any unnecessary software repositories defined for operating system installation.
2. ``FUEL``. During this stage, Fuel performs any actions defined for the current operating system.
3. ``Netconfig``. During this stage, Fuel performs all network configuration actions. This means that you should include any custom components that are related to the network in this stage.
4. ``Main``. The actual OpenStack installation process happens during this stage. Install any remaining non-network-related components during or after this stage.
**Post-OpenStack install**
At this point, OpenStack is installed and you may add any components you like, but take care not to break OpenStack. This is a good place to make an image of the nodes so you have a roll-back in case of any catastrophic errors that render OpenStack or any other components inoperable. If you are preparing to deploy a large-scale environment, you may want to perform a small-scale test first to familiarize yourself with the entire process and make yourself aware of any potential gotchas that are specific to your infrastructure. You should perform this small-scale test using the same hardware that the large-scale deployment will use, not VirtualBox, because VirtualBox does not offer the ability to test any custom hardware driver installations your physical hardware may require.
Defining a new component
^^^^^^^^^^^^^^^^^^^^^^^^
In general, we recommend you follow these steps to define a new component:
#. **Custom stages. Optional.**
Declare a custom stage or stages to help Puppet understand the required installation sequence. Stages are special markers indicating the sequence of actions. Best practice is to use the ``before`` parameter for every stage, to help define the correct sequence. The default built-in stage is "main". Every Puppet action is automatically assigned to the main stage if no stage is explicitly specified for the action.
Note that since Fuel installs almost all of OpenStack during the main stage, custom stages may not help, so future plans include breaking the OpenStack installation into several sub-stages.
Don't forget to take into account other existing stages; creating several parallel sequences of stages increases the chances that Puppet will order them incorrectly if you do not explicitly specify the order.
*Example*::
stage {'Custom stage 1':
before => Stage['Custom stage 2'],
}
stage {'Custom stage 2':
before => Stage['main'],
}
Note that there are several limitations to stages, and they should be used with caution and only with the simplest of classes. You can find more information regarding stages and limitations here: http://docs.puppetlabs.com/puppet/2.7/reference/lang_run_stages.html.
#. **Custom repositories. Optional.**
If the custom component requires a custom software source, you may declare a new repository and add it during one of the early stages of the installation.
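*Example* (a minimal sketch for an RPM-based node; the repository name and URL are placeholders, and the class is assigned to the custom stage declared above)::
class custom_repo {
yumrepo { 'custom-component-repo':
descr    => 'Repository with packages for the custom component',
baseurl  => 'http://repo.example.com/custom-component/centos/',
enabled  => 1,
gpgcheck => 0,
}
}
class { 'custom_repo':
stage => 'Custom stage 1',
}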
#. **Common variable definition**
It is a good idea to have all common variables defined in a single place. Unlike variables in many other languages, Puppet variables are actually constants, and may be assigned only once inside a given scope.
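*Example* (a minimal sketch; the values shown are placeholders, but the variable names match those used later in this document)::
$custom_package_download_url    = 'http://repo.example.com/custom-packages'
$custom_package_list_from_url   = ['qpid-cpp-server-0.14-16.el6.x86_64.rpm']
$service_name_in_haproxy_config = 'custom-service-1'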
#. **OS and condition-dependent variable definition**
We suggest that you assign all common operating system or condition-dependent variables to a single location, preferably near the other common variables. Also, be sure to always use a ``default`` section when defining conditional operators or you could experience configuration issues.
*Example*::
case $::osfamily {
# RedHat in most cases should work for CentOS and Fedora as well
'RedHat': {
# List of packages to get from URL/path.
# Separate list should be defined for each separate URL!
$custom_package_list_from_url = ['qpid-cpp-server-0.14-16.el6.x86_64.rpm']
}
'Debian': {
# List of packages to get from URL/path.
# Separate list should be defined for each separate URL!
$custom_package_list_from_url = [ "qpidd_0.14-2_amd64.deb" ]
}
default: {
fail("Module install_custom_package does not support ${::operatingsystem}")
}
}
#. **Define installation procedures for independent custom components as classes**
You can think of public classes as singleton collections, or as a named block of code with its own namespace. Each class should be defined only once, but every class may be used with different input variable sets. The best practice is to define a separate class for every component, define required sub-classes for sub-components, and include class-dependent required resources within the actual class/subclass.
*Example*::
class add_custom_service (
# Input parameter definitions:
# Name of the service to place behind HAProxy. **Mandatory**.
# This name appears as a new HAProxy configuration block in /etc/haproxy/haproxy.cfg.
$service_name_in_haproxy_config,
$custom_package_download_url,
$custom_package_list_from_url,
#The list of remaining input parameters
...
) {
# HAProxy::params is a container class holding default parameters for the haproxy class. It adds and populates the Global and Default sections in /etc/haproxy/haproxy.cfg.
# If you install a custom service over the already deployed HAProxy configuration, it is probably better to comment out the following line:
include haproxy::params
#Class resources definitions:
# Define the list of package names to be installed
define install_custom_package_from_url (
$custom_package_download_url,
$package_provider = undef
) {
exec { "download-${name}" :
command => "/usr/bin/wget -P/tmp ${custom_package_download_url}/${name}",
creates => "/tmp/${name}",
} ->
install_custom_package { "${name}" :
provider => $package_provider,
source => "/tmp/${name}",
}
}
define install_custom_package (
$package_provider = undef,
$package_source = undef
) {
package { "custom-${name}" :
ensure => present,
provider => $package_provider,
source => $package_source
}
}
#Here we actually install all the packages from a single URL.
if is_array($custom_package_list_from_url) {
install_custom_package_from_url { $custom_package_list_from_url :
provider => $package_provider,
custom_package_download_url => $custom_package_download_url,
}
}
}
#. **Target nodes**
Every component should be explicitly assigned to a particular target node or nodes. To do that, declare the node or nodes within site.pp. When Puppet runs the manifest for each node, it compares each node definition with the current hostname and applies only the classes assigned to the current node. Node definitions may include regular expressions. For example, you can apply the 'add_custom_service' class to all controller nodes with hostnames fuel-controller-00 to fuel-controller-xxx, where xxx is any integer value, using the following definition:
*Example*::
node /fuel-controller-[\d+]/ {
include stdlib
class { 'add_custom_service':
stage => 'Custom stage 1',
service_name_in_haproxy_config => $service_name_in_haproxy_config,
custom_package_download_url => $custom_package_download_url,
custom_package_list_from_url => $custom_package_list_from_url,
}
}
Fuel API Reference
^^^^^^^^^^^^^^^^^^
**add_haproxy_service**
Location: Top level
As the name suggests, this function enables you to create a new HAProxy service. The service is defined in the ``/etc/haproxy/haproxy.cfg`` file, and generally looks something like this::
listen keystone-2
bind 10.0.74.253:35357
bind 10.0.0.110:35357
balance roundrobin
option httplog
server fuel-controller-01.example.com 10.0.0.101:35357 check
server fuel-controller-02.example.com 10.0.0.102:35357 check
To accomplish this, you might create a Fuel statement such as::
add_haproxy_service { 'keystone-2' :
order => 30,
balancers => {'fuel-controller-01.example.com' => '10.0.0.101',
'fuel-controller-02.example.com' => '10.0.0.102'},
virtual_ips => {'10.0.74.253', '10.0.0.110'},
port => '35357',
haproxy_config_options => { 'option' => ['httplog'], 'balance' => 'roundrobin' },
balancer_port => '35357',
balancermember_options => 'check',
mode => 'tcp',
define_cookies => false,
define_backend => false,
collect_exported => false
}
Let's look at how this command works.
**Usage:** ::
add_haproxy_service { '<SERVICE_NAME>' :
order => $order,
balancers => $balancers,
virtual_ips => $virtual_ips,
port => $port,
haproxy_config_options => $haproxy_config_options,
balancer_port => $balancer_port,
balancermember_options => $balancermember_options,
mode => $mode, #Optional. Default is 'tcp'.
define_cookies => $define_cookies, #Optional. Default false.
define_backend => $define_backend,#Optional. Default false.
collect_exported => $collect_exported, #Optional. Default false.
}
**Parameters:**
``<'Service name'>``
The name of the new HAProxy listener section. In our example it was ``keystone-2``. If you want to include an IP address or port in the listener name, you have the option to use a name such as::
'stats 0.0.0.0:9000 #Listen on all IP's on port 9000'
``order``
This parameter determines the order of the file fragments. It is optional, but we strongly recommend setting it manually. Fuel already has several different order values from 1 to 100 hardcoded for HAProxy configuration. If your HAProxy configuration fragments appear in the wrong places in ``/etc/haproxy/haproxy.cfg`` this is likely due to an incorrect order value. It is acceptable to set order values greater than 100 in order to place your custom configuration block at the end of ``haproxy.cfg``.
Puppet assembles configuration files from fragments. First it creates several configuration fragments and temporarily stores all of them as separate files. Every fragment has a name such as ``${order}-${fragment_name}``, so the order determines the number of the current fragment in the fragment sequence. After all the fragments are created, Puppet reads the fragment names and sorts them in ascending order, concatenating all the fragments in that order. In other words, a fragment with a smaller order value always goes before all fragments with a greater order value.
The ``keystone-2`` fragment from the example above has ``order = 30``, so it's placed after the ``keystone-1`` section (``order = 20``) and before the ``nova-api-1`` section (``order = 40``).
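For example, the fragments mentioned above would be stored and then concatenated in ascending order of their names (an illustrative listing only; the actual temporary file names may differ)::
20-keystone-1
30-keystone-2
40-nova-api-1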
``balancers``
Balancers (or **Backends** in HAProxy terms) are a hash of ``{ "$::hostname" => $::ipaddress }`` values.
The default is ``{ "<current hostname>" => <current ipaddress> }``, but that value is set for compatibility only, and may not work correctly in HA mode. Instead, the default for HA mode is to explicitly set the Balancers as ::
Haproxy_service {
balancers => $controller_internal_addresses
}
where ``$controller_internal_addresses`` represents a hash of all the controllers with a corresponding internal IP address; this value is set in ``site.pp``.
The ``balancers`` parameter is a list of HAProxy listener balance members (hostnames) with corresponding IP addresses. The following lines from the ``keystone-2`` listener example represent balancers::
server fuel-controller-01.example.com 10.0.0.101:35357 check
server fuel-controller-02.example.com 10.0.0.102:35357 check
Every key pair in the ``balancers`` hash adds a new line to the list of balancers defined in the listener section. Different options may be set for each line.
``virtual_ips``
This parameter represents an array of IP addresses (or **Frontends** in HAProxy terms) of the current listener. Every IP address in this array adds a new line to the bind section of the current listener. The following lines from the ``keystone-2`` listener example represent virtual IPs::
bind 10.0.74.253:35357
bind 10.0.0.110:35357
``port``
This parameter specifies the frontend port for the listener. Currently, you must set the same port for all listener frontends.
The following lines from the ``keystone-2`` listener example represent the frontend port, where the port is 35357::
bind 10.0.74.253:35357
bind 10.0.0.110:35357
``haproxy_config_options``
This parameter represents a hash of key pairs of HAProxy listener options in the form ``{ 'option name' => 'option value' }``. Every key pair from this hash adds a new line to the listener options.
**NOTE** Every HAProxy option may require a different input value type, such as strings or a list of multiple options per single string.
The ``keystone-2`` listener example has the ``{ 'option' => ['httplog'], 'balance' => 'roundrobin' }`` option hash, which is rendered as follows in the resulting ``/etc/haproxy/haproxy.cfg``::
balance roundrobin
option httplog
``balancer_port``
This parameter represents the balancer (backend) port. By default, the balancer_port is the same as the frontend ``port``. The following lines from the ``keystone-2`` listener example represent ``balancer_port``, where the port is ``35357``::
server fuel-controller-01.example.com 10.0.0.101:35357 check
server fuel-controller-02.example.com 10.0.0.102:35357 check
``balancermember_options``
This is a string of options added to each balancer (backend) member. The ``keystone-2`` listener example has the single ``check`` option::
server fuel-controller-01.example.com 10.0.0.101:35357 check
server fuel-controller-02.example.com 10.0.0.102:35357 check
``mode``
This optional parameter represents the HAProxy listener mode. The default value is ``tcp``, but Fuel writes ``mode http`` to the defaults section of ``/etc/haproxy/haproxy.cfg``. You can set the same option via ``haproxy_config_options``. A separate mode parameter is required to set some modes by default on every new listener addition. The ``keystone-2`` listener example has no ``mode`` option and so it works in the default Fuel-configured HTTP mode.
``define_cookies``
This optional boolean parameter is a Fuel-only feature. The default is ``false``, but if set to ``true``, Fuel directly adds ``cookie ${hostname}`` to every balance member (backend).
The ``keystone-2`` listener example has no ``define_cookies`` option. Typically, frontend cookies are added with ``haproxy_config_options`` and backend cookies with ``balancermember_options``.
``collect_exported``
This optional boolean parameter has a default value of ``false``. True means 'collect exported @@balancermember resources' (when every balancermember node exports itself), while false means 'rely on the existing declared balancermember resources' (for when you know the full set of balancermembers in advance and use ``haproxy::balancermember`` with array arguments, which allows you to deploy everything in one run).

View File

@ -0,0 +1,9 @@
OpenStack Networking HA
-----------------------
NOTE: THIS DOCUMENT HAS NOT BEEN EDITED AND IS NOT READY FOR PUBLIC CONSUMPTION.
Fuel 2.1 introduced support for OpenStack Networking utilizing a high-availability configuration. To accomplish this, Fuel uses a combination of Pacemaker and Corosync to ensure that if the networking service goes down, it will be restarted either on the existing node or on a separate node.
This document explains how to configure these options in your own installation.

View File

@ -0,0 +1,283 @@
L23network
----------
NOTE: THIS DOCUMENT HAS NOT BEEN EDITED AND IS NOT READY FOR PUBLIC CONSUMPTION.
A Puppet module for configuring network interfaces at the 2nd and 3rd levels (802.1q vlans, access ports, NIC bonding, IP address assignment, DHCP, and interfaces without IP addresses).
It can work together with Open vSwitch or the standard Linux tools.
At this moment we support CentOS 6.3 (RHEL6) and Ubuntu 12.04 or above.
Usage
^^^^^
Place this module in /etc/puppet/modules or another path that contains your Puppet modules.
Include the L23network module and initialize it. We recommend doing this in an early stage::
#Network configuration
stage {'netconfig':
before => Stage['main'],
}
class {'l23network': stage=> 'netconfig'}
If you do not plan to use Open vSwitch, you can disable it::
class {'l23network': use_ovs=>false, stage=> 'netconfig'}
L2 network configuration (Open vSwitch only)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The current layout is:
* *bridges* -- A "bridge" is a virtual ethernet L2 switch that you can plug ports into.
* *ports* -- A "port" is a virtual interface that you plug into a bridge (switch).
* *interfaces* -- An "interface" is the physical NIC that implements a port.
Then, in your manifest, you can use these resources as follows::
class {"l23network": }
l23network::l2::bridge{"br-mgmt": }
l23network::l2::port{"eth0": bridge => "br-mgmt"}
l23network::l2::port{"mmm0": bridge => "br-mgmt"}
l23network::l2::port{"mmm1": bridge => "br-mgmt"}
l23network::l2::bridge{"br-ex": }
l23network::l2::port{"eth0": bridge => "br-ex"}
l23network::l2::port{"eth1": bridge => "br-ex", ifname_order_prefix='ovs'}
l23network::l2::port{"eee0": bridge => "br-ex", skip_existing => true}
l23network::l2::port{"eee1": bridge => "br-ex", type=>'internal'}
You can define a type for the port. The port type can be
'system', 'internal', 'tap', 'gre', 'ipsec_gre', 'capwap', 'patch', or 'null'.
If you do not define a type for the port (or define it as ''), ovs-vsctl uses its default behavior
(see http://openvswitch.org/cgi-bin/ovsman.cgi?page=utilities%2Fovs-vsctl.8).
You can use the *skip_existing* option if you do not want to interrupt configuration when adding an existing port or bridge.
L3 network configuration
^^^^^^^^^^^^^^^^^^^^^^^^
::
### Simple IP address definition, DHCP or address-less interfaces
l23network::l3::ifconfig {"eth0": ipaddr=>'192.168.1.1/24'}
l23network::l3::ifconfig {"xXxXxXx":
interface => 'eth1',
ipaddr => '192.168.2.1',
netmask => '255.255.255.0'
}
l23network::l3::ifconfig {"eth2": ipaddr=>'dhcp'}
l23network::l3::ifconfig {"eth3": ipaddr=>'none'}
The *ipaddr* option can contain an IP address, 'dhcp', or the 'none' string. In this example we describe the configuration of 4 network interfaces:
* Interface *eth0* uses the short CIDR-notated form of IP address definition.
* Interface *eth1* uses the classic *ipaddr* and *netmask* definition (note that the resource name and the interface name may differ, as shown above).
* Interface *eth2* will be configured to use the dhcp protocol.
* Interface *eth3* will be configured as an interface without an IP address. This is often needed for a "master" interface for 802.1q vlans (in the native Linux implementation) or for a slave interface for bonding.
The CIDR-notated form of the IP address takes priority over the classic *ipaddr* and *netmask* definition.
If you omit *netmask* and do not use the CIDR-notated form, the default *netmask* value of '255.255.255.0' will be used. ::
### Multiple IP addresses for one interface (aliases)
l23network::l3::ifconfig {"eth0":
ipaddr => ['192.168.0.1/24', '192.168.1.1/24', '192.168.2.1/24']
}
You can pass a list of CIDR-notated IP addresses to the *ipaddr* parameter to assign many IP addresses to one interface. This will create aliases (not subinterfaces). The array can contain one or more elements. ::
### UP and DOWN interface order
l23network::l3::ifconfig {"eth1":
ipaddr=>'192.168.1.1/24'
}
l23network::l3::ifconfig {"br-ex":
ipaddr=>'192.168.10.1/24',
ifname_order_prefix=>'ovs'
}
l23network::l3::ifconfig {"aaa0":
ipaddr=>'192.168.20.1/24',
ifname_order_prefix=>'zzz'
}
CentOS and Ubuntu start and configure network interfaces at boot in alphabetical order
of their interface configuration file names. In the example above we change the configuration order with the *ifname_order_prefix* keyword. We will have this order::
ifcfg-eth1
ifcfg-ovs-br-ex
ifcfg-zzz-aaa0
and the OS will configure the br-ex and aaa0 interfaces after eth1::
### Default gateway
l23network::l3::ifconfig {"eth1":
ipaddr => '192.168.2.5/24',
gateway => '192.168.2.1',
check_by_ping => '8.8.8.8',
check_by_ping_timeout => '30'
}
In this example we define the default *gateway* and options for waiting until the network is actually up.
The *check_by_ping* parameter defines an IP address that will be pinged. Puppet will block and wait for a response for up to *check_by_ping_timeout* seconds.
*check_by_ping* can be an IP address, 'gateway', or the 'none' string to disable the check.
By default, the gateway will be pinged. ::
### DNS-specific options
l23network::l3::ifconfig {"eth1":
ipaddr => '192.168.2.5/24',
dns_nameservers => ['8.8.8.8','8.8.4.4'],
dns_search => ['aaa.com','bbb.com'],
dns_domain => 'qqq.com'
}
We can also specify DNS nameservers and a search list that will be inserted (by the resolvconf library) into /etc/resolv.conf.
The *dns_domain* option is implemented only on Ubuntu. ::
### DHCP-specific options
l23network::l3::ifconfig {"eth2":
ipaddr => 'dhcp',
dhcp_hostname => 'compute312',
dhcp_nowait => false,
}
Bonding
^^^^^^^
### Using standard linux bond (ifenslave)
To bond two interfaces, you need to:
* Specify these interfaces as interfaces without IP addresses
* Specify that the interfaces depend on the master bond interface
* Assign an IP address to the master bond interface
* Specify bond-specific properties for the master bond interface (if the defaults are not suitable for you)
For example (defaults included)::
l23network::l3::ifconfig {'eth1': ipaddr=>'none', bond_master=>'bond0'} ->
l23network::l3::ifconfig {'eth2': ipaddr=>'none', bond_master=>'bond0'} ->
l23network::l3::ifconfig {'bond0':
ipaddr => '192.168.232.1',
netmask => '255.255.255.0',
bond_mode => 0,
bond_miimon => 100,
bond_lacp_rate => 1,
}
You can find more information about bonding network interfaces in the manuals for your operating system:
* https://help.ubuntu.com/community/UbuntuBonding
* http://wiki.centos.org/TipsAndTricks/BondingInterfaces
### Using Open vSwitch
To bond two interfaces, you need to:
* Specify an OVS bridge
* Specify the special "bond" resource, add it to the bridge, and set any bond-specific parameters
* Assign an IP address to the newly-created network interface (if needed)
In this example we add "eth1" and "eth2" interfaces to bridge "bridge0" as bond "bond1". ::
l23network::l2::bridge{'bridge0': } ->
l23network::l2::bond{'bond1':
bridge => 'bridge0',
ports => ['eth1', 'eth2'],
properties => [
'lacp=active',
'other_config:lacp-time=fast'
],
} ->
l23network::l3::ifconfig {'bond1':
ipaddr => '192.168.232.1',
netmask => '255.255.255.0',
}
Open vSwitch provides a lot of parameters for different configurations.
We can specify them in the "properties" option as a list of parameter=value
(or parameter:key=value) strings.
Most of them are described on the `Open vSwitch documentation page <http://openvswitch.org/support/>`_.
802.1q vlan access ports
^^^^^^^^^^^^^^^^^^^^^^^^
### Using standard linux way
We can use tagged vlans over ordinary network interfaces (or over bonds).
L23network supports two variants of naming vlan interfaces:
* *vlanXXX* -- the 802.1q tag is taken from the vlan interface name, but you need to specify
the parent interface name in the **vlandev** parameter.
* *eth0.101* -- both the 802.1q tag and the parent interface name are taken from the vlan interface name.
If you need to use 802.1q vlans over bonds, you can use only the first variant.
In this example we can see both variants: ::
l23network::l3::ifconfig {'vlan6':
ipaddr => '192.168.6.1',
netmask => '255.255.255.0',
vlandev => 'bond0',
}
l23network::l3::ifconfig {'vlan5':
ipaddr => 'none',
vlandev => 'bond0',
}
L23network::L3::Ifconfig['bond0'] -> L23network::L3::Ifconfig['vlan6'] -> L23network::L3::Ifconfig['vlan5']
l23network::l3::ifconfig {'eth0':
ipaddr => '192.168.0.5',
netmask => '255.255.255.0',
gateway => '192.168.0.1',
} ->
l23network::l3::ifconfig {'eth0.101':
ipaddr => '192.168.101.1',
netmask => '255.255.255.0',
} ->
l23network::l3::ifconfig {'eth0.102':
ipaddr => 'none',
}
### Using Open vSwitch
In Open vSwitch, all internal traffic is virtually tagged.
To create an 802.1q tagged access port, you need to specify the vlan tag when adding the port to a bridge.
In this example we create two ports with tags 10 and 20, and assign an IP address to the interface with tag 10::
l23network::l2::bridge{'bridge0': } ->
l23network::l2::port{'vl10':
bridge => 'bridge0',
type => 'internal',
port_properties => [
'tag=10'
],
} ->
l23network::l2::port{'vl20':
bridge => 'bridge0',
type => 'internal',
port_properties => [
'tag=20'
],
} ->
l23network::l3::ifconfig {'vl10':
ipaddr => '192.168.101.1/24',
} ->
l23network::l3::ifconfig {'vl20':
ipaddr => 'none',
}
You can find information about vlans in Open vSwitch on the `Open vSwitch VLAN configuration cookbook page <http://openvswitch.org/support/config-cookbooks/vlan-configuration-cookbook/>`_.
**IMPORTANT:** Do not use vlan interface names like vlanXXX here unless you actually want your network traffic to be double-tagged.
---
When I began to write this module, I checked https://github.com/ekarlso/puppet-vswitch. Elcarso, big thanks...

8
pages/old/copyright.rst Normal file
View File

@ -0,0 +1,8 @@
.. index:: Fuel License
============
Fuel License
============
.. literalinclude:: LICENSE
:language: none

View File

@ -0,0 +1,12 @@
.. index:: Deploy using CLI
.. _Deploy-Cluster-CLI:
==========================================
Deploy an OpenStack cluster using Fuel CLI
==========================================
.. contents:: :local:
:depth: 2
>>>> The section will contain CLI instructions <<<<

View File

@ -0,0 +1,18 @@
.. index:: Deploy using UI
.. _Create-Cluster-UI:
============================================
Create an OpenStack cluster using Fuel UI
============================================
Now let's look at performing an actual OpenStack deployment using Fuel.
.. contents:: :local:
:depth: 2
.. include:: /pages/installation-fuel-ui/install.rst
.. include:: /pages/installation-fuel-ui/networks.rst
.. include:: /pages/installation-fuel-ui/network-issues.rst
.. include:: /pages/installation-fuel-ui/red_hat_openstack.rst
.. include:: /pages/installation-fuel-ui/post-install-healthchecks.rst

View File

@ -0,0 +1,291 @@
.. raw:: pdf
PageBreak
.. index:: Installing Fuel Master Node
Installing Fuel Master Node
===========================
.. contents:: :local:
Fuel is distributed as both ISO and IMG images, each of which contains
an installer for the Fuel Master node. The ISO image is used for CD media devices,
iLO, or similar remote access systems. The IMG file is used for USB memory sticks.
Once installed, Fuel can be used to deploy and manage OpenStack clusters. It
will assign IP addresses to the nodes, perform PXE boot and initial
configuration, and provision OpenStack nodes according to their roles in
the cluster.
.. _Install_Bare-Metal:
On Bare-Metal Environment
-------------------------
To install Fuel on bare-metal hardware, you need to burn the provided ISO to
a CD/DVD or create a bootable USB stick. You would then begin the
installation process by booting from that media, very much like any other OS.
Burning an ISO to optical media is a well-supported function on all operating systems.
For Linux there are several interfaces available such as `Brasero` or `Xfburn`,
two of the more commonly pre-installed desktop applications. There are also
a number for Windows such as `ImgBurn <http://www.imgburn.com/>`_ and the
open source `InfraRecorder <http://infrarecorder.org/>`_.
Burning an ISO in Mac OS X is deceptively simple. Open `Disk Utility` from
`Applications > Utilities`, drag the ISO into the disk list on the left side
of the window and select it, insert blank media with enough room, and click
`Burn`. If you prefer a utility, check out the open source `Burn
<http://burn-osx.sourceforge.net/Pages/English/home.html>`_.
Installing the ISO to a bootable USB stick, however, is an entirely different
matter. Canonical suggests `PenDriveLinux` which is a GUI tool for Windows.
On Windows, you can write the installation image with a number of different
utilities. The following list links to some of the more popular ones and they are
all available at no cost:
- `Win32 Disk Imager <http://sourceforge.net/projects/win32diskimager/>`_.
- `ISOtoUSB <http://www.isotousb.com/>`_.
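On Linux and Mac OS X you can also write the IMG file to a USB stick from the command line (a minimal sketch; the image name is a placeholder for your downloaded file, and ``/dev/sdX`` must be replaced with your USB device -- double-check it, since ``dd`` overwrites the target)::
sudo dd if=fuel.img of=/dev/sdX bs=4M
sync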
After the installation is complete, you will need to allocate bare-metal
nodes for your OpenStack cluster, put them on the same L2 network as the
Master node, and PXE boot. The UI will discover them and make them available
for installing OpenStack.
On VirtualBox
-------------
If you are going to evaluate Fuel on VirtualBox, you should know that we
provide a set of scripts that create and configure all of the required VMs for
you, including the Master node and Slave nodes for OpenStack itself. It's a very
simple, single-click installation.
.. note::
These scripts are not supported on Windows, but you can still test on
VirtualBox by creating the VMs by yourself. See :ref:`Install_Manual` for more
details.
The requirements for running Fuel on VirtualBox are:
* A host machine with Linux or Mac OS. The scripts have been tested on
Mac OS 10.7.5, Mac OS 10.8.3, Ubuntu 12.04 and Ubuntu 12.10.
* VirtualBox 4.2.12 (or later), installed with the extension pack. Both can be
downloaded from `<http://www.virtualbox.org/>`_.
* 8 GB+ of RAM, enough to handle 4 VMs for a non-HA OpenStack installation
(1 Master node, 1 Controller node, 1 Compute node, 1 Cinder node) or
5 VMs for an HA OpenStack installation (1 Master node, 3 Controller nodes,
1 Compute node).
.. _Install_Automatic:
Automatic Mode
++++++++++++++
When you unpack the scripts, you will see the following important files and
folders:
`iso`
This folder needs to contain a single ISO image for Fuel. Once you have
downloaded the ISO from the portal, copy or move it into this directory.
`config.sh`
This file contains configuration, which can be fine-tuned. For example, you
can select how many virtual nodes to launch, as well as how much memory to give them.
`launch.sh`
Once executed, this script will pick up an image from the ``iso`` directory,
create a VM, mount the image to this VM, and automatically install the Fuel
Master node.
After installation of the Master node, the script will create Slave nodes for
OpenStack and boot them via PXE from the Master node.
Finally, the script will give you the link to access the Web-based UI for the
Master node so you can start installation of an OpenStack cluster.
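Assuming the scripts are unpacked into your current directory, a typical run looks roughly like this (a minimal sketch; the ISO file name is a placeholder for the image you actually downloaded)::
# Put the downloaded image into the scripts' iso directory
cp ~/Downloads/fuel-3.1.iso iso/
# Optionally adjust the number of virtual nodes and their memory in config.sh
vi config.sh
# Create and install the Master node VM, then PXE-boot the Slave node VMs
./launch.sh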
.. _Install_Manual:
Manual Mode
+++++++++++
.. note::
These manual steps will allow you to set up the evaluation environment
for the vanilla OpenStack release only. `RHOS installation is not possible.`
To download and deploy Red Hat OpenStack, you need to use the automated VirtualBox
helper scripts or install Fuel on bare metal (see :ref:`Install_Bare-Metal`).
If you cannot or would rather not run our helper scripts, you can still run
Fuel on VirtualBox by following these steps.
Master Node Deployment
^^^^^^^^^^^^^^^^^^^^^^
First, create the Master node VM.
1. Configure the host-only interface vboxnet0 in VirtualBox.
* IP address: 10.20.0.1
* Interface mask: 255.255.255.0
* DHCP disabled
2. Create a VM for the Master node with the following parameters:
* OS Type: Linux, Version: Red Hat (64bit)
* RAM: 1024 MB
* HDD: 20 GB, with dynamic disk expansion
* CDROM: mount Fuel ISO
* Network 1: host-only interface vboxnet0
3. Power on the VM in order to start the installation.
4. Wait for the Welcome message, which contains all the information needed to
log in to the Fuel UI.
Adding Slave Nodes
^^^^^^^^^^^^^^^^^^
Next, create Slave nodes where OpenStack needs to be installed.
1. Create 3 or 4 additional VMs, depending on your needs, with the following parameters:
* OS Type: Linux, Version: Red Hat (64bit)
* RAM: 1024 MB
* HDD: 30 GB, with dynamic disk expansion
* Network 1: host-only interface vboxnet0, PCnet-FAST III device
2. Set priority for the network boot:
.. image:: /_images/vbox-image1.png
:align: center
3. Configure the network adapter on each VM:
.. image:: /_images/vbox-image2.png
:align: center
Changing Network Parameters Before the Installation
---------------------------------------------------
You can change the network settings for the Fuel (PXE booting) network, which
is ``10.20.0.2/24 gw 10.20.0.1`` by default.
In order to do so, press the <TAB> key on the very first installation screen
which says "Welcome to Fuel Installer!" and update the kernel options. For
example, to use 192.168.1.10/24 IP address for the Master node and 192.168.1.1
as the gateway and DNS server you should change the parameters to those shown
in the image below:
.. image:: /_images/network-at-boot.jpg
:align: center
When you're finished making changes, press the <ENTER> key and wait for the
installation to complete.
Changing Network Parameters After Installation
----------------------------------------------
It is still possible to configure other interfaces, or add 802.1Q sub-interfaces
to the Master node to be able to access it from your network if required.
It is easy to do via standard network configuration scripts for CentOS. When the
installation is complete, you can modify
``/etc/sysconfig/network-scripts/ifcfg-eth\*`` scripts. For example, if the *eth1*
interface is on the L2 network planned for PXE booting, *eth2* is
the interface connected to your office network switch, and *eth0* is not in use, then
the settings can be the following:
/etc/sysconfig/network-scripts/ifcfg-eth0::
DEVICE=eth0
ONBOOT=no
/etc/sysconfig/network-scripts/ifcfg-eth1::
DEVICE=eth1
ONBOOT=yes
HWADDR=<your MAC>
..... (other settings in your config) .....
PEERDNS=no
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
/etc/sysconfig/network-scripts/ifcfg-eth2::
DEVICE=eth2
ONBOOT=yes
HWADDR=<your MAC>
..... (other settings in your config) .....
PEERDNS=no
IPADDR=172.18.0.5
NETMASK=255.255.255.0
.. warning::
Once IP settings are set at the boot time for Fuel Master node, they
**should not be changed during the whole lifecycle of Fuel.**
After modifying the network configuration files, apply the
new configuration::
service network restart
Now you should be able to connect to Fuel UI from your network at
http://172.18.0.5:8000/
Name Resolution (DNS)
---------------------
During Master node installation, it is assumed that there is a recursive DNS
service on 10.20.0.1.
If you want Slave nodes to be able to resolve public names,
you need to change this default value to point to an actual DNS service.
To make the change, run the following command on the Fuel Master node (replacing the IP
with that of your actual DNS server)::
echo "nameserver 172.0.0.1" > /etc/dnsmasq.upstream
PXE Booting Settings
--------------------
By default, `eth0` on the Fuel Master node serves PXE requests. If you are planning
to use another interface, then you must modify the settings of dnsmasq (which
acts as the DHCP server). Edit the file ``/etc/cobbler/dnsmasq.template``, find the line
``interface=eth0``, and replace the interface name with the one you want to use.
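For example, to serve PXE requests from `eth1` instead, the edited line would simply read (shown for illustration only)::
interface=eth1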
Afterwards, run the following command to synchronize the cobbler service::
cobbler sync
During synchronization, cobbler builds the actual dnsmasq configuration file
``/etc/dnsmasq.conf`` from the template ``/etc/cobbler/dnsmasq.template``. That is
why you should not edit ``/etc/dnsmasq.conf`` directly: cobbler rewrites it each time
it is synchronized.
If you want to use virtual machines to launch Fuel, then you have to be sure
that dnsmasq on the Master node is configured to support the PXE client used by your
virtual machines. We enabled the *dhcp-no-override* option because without it
dnsmasq tries to move the ``PXE filename`` and ``PXE servername`` special fields
into DHCP options. Not all PXE implementations can recognize those options and
therefore would not be able to boot. For example, CentOS 6.4 by default uses the gPXE
implementation instead of the more advanced iPXE.
When Master Node Installation is Done
-------------------------------------
Once the Master node is installed, power on all other nodes and log in to the
Fuel UI.
Slave nodes will be booted in bootstrap mode (a CentOS-based Linux running in memory) via
PXE, and you will see notifications in the user interface about discovered nodes.
This is the point when you can create an environment, add nodes to it, and
start configuration.
Networking configuration is the most complicated part, so please read the networking
section of the documentation carefully.

View File

@ -0,0 +1,103 @@
.. raw:: pdf
PageBreak
.. index:: Fuel UI: Network Issues
Network Issues
==============
Fuel has a built-in capability to run a network check before or after OpenStack
deployment. Currently it can check connectivity between nodes within
configured VLANs on configured server interfaces. The image below shows a sample
result of such a check. Using this simple table, it is easy to see which
interfaces do not receive certain VLAN IDs. Usually this means that a switch or
multiple switches are not configured correctly and do not allow certain
tagged traffic to pass through.
.. image:: /_images/net_verify_failure.jpg
:align: center
On VirtualBox
-------------
The scripts provided for quick Fuel setup create 3 host-only interface
adapters. Networking effectively works as 3 bridges, with exactly one VM
interface connected to each of them. This means there is L2 connectivity
only between VM interfaces that have the same name. If you try to
move, for example, the management network to `eth1` on the Controller node, and the
same network to `eth2` on the Compute node, then there will be no connectivity
between the OpenStack services in spite of their being configured to live on the
same VLAN. It is very easy to validate network settings before deployment by
clicking the "Verify Networks" button.
If you need to access OpenStack REST API over Public network, VNC console of VMs,
Horizon in HA mode or VMs, refer to this section: :ref:`access_to_public_net`.
Timeout In Connection to OpenStack API From Client Applications
---------------------------------------------------------------
If you use Java, Python or any other code to work with the OpenStack API, all
connections should be made over the OpenStack Public network. To explain why the
Fuel network cannot be used, let's run the nova client with the debug
option enabled::
[root@controller-6 ~]# nova --debug list
REQ: curl -i http://192.168.0.2:5000/v2.0/tokens -X POST -H "Content-Type: appli
cation/json" -H "Accept: application/json" -H "User-Agent: python-novaclient" -d
'{"auth": {"tenantName": "admin", "passwordCredentials": {"username": "admin",
"password": "admin"}}}'
INFO (connectionpool:191) Starting new HTTP connection (1): 192.168.0.2
DEBUG (connectionpool:283) "POST /v2.0/tokens HTTP/1.1" 200 2702
RESP: [200] {'date': 'Tue, 06 Aug 2013 13:01:05 GMT', 'content-type': 'applicati
on/json', 'content-length': '2702', 'vary': 'X-Auth-Token'}
RESP BODY: {"access": {"token": {"issued_at": "2013-08-06T13:01:05.616481", "exp
ires": "2013-08-07T13:01:05Z", "id": "c321cd823c8a4852aea4b870a03c8f72", "tenant
": {"description": "admin tenant", "enabled": true, "id": "8eee400f7a8a4f35b7a92
bc6cb54de42", "name": "admin"}}, "serviceCatalog": [{"endpoints": [{"adminURL":
"http://192.168.0.2:8774/v2/8eee400f7a8a4f35b7a92bc6cb54de42", "region": "Region
One", "internalURL": "http://192.168.0.2:8774/v2/8eee400f7a8a4f35b7a92bc6cb54de4
2", "id": "6b9563c1e37542519e4fc601b994f980", "publicURL": "http://172.16.1.2:87
74/v2/8eee400f7a8a4f35b7a92bc6cb54de42"}], "endpoints_links": [], "type": "compu
te", "name": "nova"}, {"endpoints": [{"adminURL": "http://192.168.0.2:8080", "re
gion": "RegionOne", "internalURL": "http://192.168.0.2:8080", "id": "4db0e11de35
74c889179f499f1e53c7e", "publicURL": "http://172.16.1.2:8080"}], "endpoints_link
s": [], "type": "s3", "name": "swift_s3"}, {"endpoints": [{"adminURL": "http://1
92.168.0.2:9292", "region": "RegionOne", "internalURL": "http://192.168.0.2:9292
", "id": "960a3ad83e4043bbbc708733571d433b", "publicURL": "http://172.16.1.2:929
2"}], "endpoints_links": [], "type": "image", "name": "glance"}, {"endpoints": [
{"adminURL": "http://192.168.0.2:8776/v1/8eee400f7a8a4f35b7a92bc6cb54de42", "reg
ion": "RegionOne", "internalURL": "http://192.168.0.2:8776/v1/8eee400f7a8a4f35b7
a92bc6cb54de42", "id": "055edb2aface49c28576347a8c2a5e35", "publicURL": "http://
172.16.1.2:8776/v1/8eee400f7a8a4f35b7a92bc6cb54de42"}], "endpoints_links": [], "
type": "volume", "name": "cinder"}, {"endpoints": [{"adminURL": "http://192.168.
0.2:8773/services/Admin", "region": "RegionOne", "internalURL": "http://192.168.
0.2:8773/services/Cloud", "id": "1e5e51a640f94e60aed0a5296eebdb51", "publicURL":
"http://172.16.1.2:8773/services/Cloud"}], "endpoints_links": [], "type": "ec2"
, "name": "nova_ec2"}, {"endpoints": [{"adminURL": "http://192.168.0.2:8080/",
"region": "RegionOne", "internalURL": "http://192.168.0.2:8080/v1/AUTH_8eee400f
7a8a4f35b7a92bc6cb54de42", "id": "081a50a3c9fa49719673a52420a87557", "publicURL
": "http://172.16.1.2:8080/v1/AUTH_8eee400f7a8a4f35b7a92bc6cb54de42"}], "endpoi
nts_links": [], "type": "object-store", "name": "swift"}, {"endpoints": [{"admi
nURL": "http://192.168.0.2:35357/v2.0", "region": "RegionOne", "internalURL": "
http://192.168.0.2:5000/v2.0", "id": "057a7f8e9a9f4defb1966825de957f5b", "publi
cURL": "http://172.16.1.2:5000/v2.0"}], "endpoints_links": [], "type": "identit
y", "name": "keystone"}], "user": {"username": "admin", "roles_links": [], "id"
: "717701504566411794a9cfcea1a85c1f", "roles": [{"name": "admin"}], "name": "ad
min"}, "metadata": {"is_admin": 0, "roles": ["90a1f4f29aef48d7bce3ada631a54261"
]}}}
REQ: curl -i http://172.16.1.2:8774/v2/8eee400f7a8a4f35b7a92bc6cb54de42/servers/
detail -X GET -H "X-Auth-Project-Id: admin" -H "User-Agent: python-novaclient" -
H "Accept: application/json" -H "X-Auth-Token: c321cd823c8a4852aea4b870a03c8f72"
INFO (connectionpool:191) Starting new HTTP connection (1): 172.16.1.2
Even though the initial connection was made to 192.168.0.2, the client then
tries to access the Public network for the Nova API. The reason is that Keystone
returns the list of OpenStack service URLs, and for production-grade deployments
it is required to access services over the Public network.
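In other words, the machine running the client must be able to reach the
*publicURL* endpoints returned by Keystone. Using the Keystone public endpoint
from the output above, a quick reachability check from the client machine could
look like this::

    ping -c 3 172.16.1.2
    curl -i http://172.16.1.2:5000/v2.0/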
.. seealso:: :ref:`access_to_public_net` if you want to configure the installation
on VirtualBox so that these issues are avoided.

View File

@ -0,0 +1,303 @@
.. raw:: pdf
PageBreak
.. index:: Fuel UI: Network Configuration
Understanding and Configuring the Network
=========================================
.. contents:: :local:
OpenStack clusters use several types of network managers: FlatDHCPManager,
VLANManager and Neutron (formerly Quantum). The current version of Fuel UI
supports only two (FlatDHCP and VLANManager), but Fuel CLI supports all
three. For more information about how the first two network managers work,
you can read these two resources:
* `OpenStack Networking FlatManager and FlatDHCPManager
<http://www.mirantis.com/blog/openstack-networking-flatmanager-and-flatdhcpmanager/>`_
* `Openstack Networking for Scalability and Multi-tenancy with VLANManager
<http://www.mirantis.com/blog/openstack-networking-vlanmanager/>`_
FlatDHCPManager (multi-host scheme)
-----------------------------------
The main idea behind the flat network manager is to configure a bridge
(e.g., **br100**) on every Compute node and have one of the machine's host
interfaces connect to it. Once a virtual machine is launched, its virtual
interface connects to that bridge as well.
The same L2 segment is used for all OpenStack projects, which means that there
is no L2 isolation between virtual hosts, even if they are owned by separate
projects, and there is only one flat IP pool defined for the cluster. For this
reason it is called the *Flat* manager.
The simplest case here is as shown on the following diagram. Here the *eth1*
interface is used to give network access to virtual machines, while *eth0*
interface is the management network interface.
.. image:: /_images/flatdhcpmanager-mh_scheme.jpg
:align: center
Fuel deploys OpenStack in FlatDHCP mode with the so-called **multi-host**
feature enabled. Without this feature, network traffic from each VM
would go through a single gateway host, which essentially becomes a single
point of failure. With multi-host enabled, each Compute node becomes a gateway
for all the VMs running on that host, providing a balanced networking solution.
In this case, if one of the Compute nodes goes down, the rest of the environment
remains operational.
The current version of Fuel uses VLANs, even for the FlatDHCP network
manager. On the Linux host, this is implemented in such a way that it is not
the physical network interface that is connected to the bridge, but the
VLAN interface (e.g., *eth0.102*).
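You can observe this, for example, on a deployed Compute node; the bridge and
interface names below are the illustrative ones used in this section::

    brctl show br100
    ip -d link show eth0.102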
FlatDHCPManager (single-interface scheme)
-----------------------------------------
.. image:: /_images/flatdhcpmanager-sh_scheme.jpg
:align: center
Therefore all switch ports where Compute nodes are connected must be
configured as tagged (trunk) ports with required VLANs allowed (enabled,
tagged). Virtual machines will communicate with each other on L2 even if
they are on different Compute nodes. If the virtual machine sends IP packets
to a different network, they will be routed on the host machine according to
the routing table. The default route will point to the gateway specified on
the networks tab in the UI as the gateway for the Public network.
VLANManager
------------
VLANManager mode is more suitable for large-scale clouds. The idea behind
this mode is to separate groups of virtual machines owned by different
projects on different L2 layers. In VLANManager this is done by tagging
Ethernet frames, or simply speaking, by VLANs. It allows virtual machines inside
a given project to communicate with each other without seeing any traffic from
the VMs of other projects. Switch ports must be configured as tagged (trunk)
ports to allow this scheme to work.
.. image:: /_images/vlanmanager_scheme.jpg
:align: center
.. raw:: pdf
PageBreak
.. index:: Fuel UI: Deployment Schema
Fuel Deployment Schema
======================
One of the physical interfaces on each host has to be chosen to carry
VM-to-VM traffic (the fixed network), and switch ports must be configured to
allow tagged traffic to pass through. OpenStack Compute nodes untag the IP
packets and send them to the appropriate VMs. Apart from simplifying the
configuration of VLANManager, Fuel adds no known limitations in this
particular networking mode.
Configuring the network
-----------------------
Once you choose a networking mode (FlatDHCP/VLAN), you must configure equipment
accordingly. The diagram below shows an example configuration.
.. image:: /_images/physical-network.jpg
:width: 100%
:align: center
Fuel operates with the following logical networks:
**Fuel** network
Used for internal Fuel communications only and for PXE booting (untagged on the diagram);
**Public** network
Used to provide access from virtual machines to the outside world, the Internet or an
office network (VLAN 101 on the diagram);
**Floating** network
Used to provide access to virtual machines from outside (shares the L2 interface with
the Public network; in this case it's VLAN 101);
**Management** network
Used for internal OpenStack communications (VLAN 102 on the diagram);
**Storage** network
Used for storage traffic (VLAN 103 on the diagram);
**Fixed** network
One (for flat mode) or more (for VLAN mode) virtual machine
networks (VLAN 104 on the diagram).
Mapping logical networks to physical interfaces on servers
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Fuel allows you to use different physical interfaces to handle different
types of traffic. When a node is added to the environment, click the bottom
line of the node icon. In the detailed information window, click the "Network
Configuration" button to open the physical interfaces configuration screen.
.. image:: /_images/network-settings.jpg
:align: center
On this screen you can drag-and-drop logical networks to physical interfaces
according to your network setup.
All networks are presented on the screen, except Fuel.
The Fuel network runs on the physical interface from which the node was initially
PXE booted, and in the current version it is not possible to map it to any other
physical interface. Also, once the network is configured and OpenStack is deployed,
you may not modify network settings, even to move a logical network to another
physical interface or VLAN number.
Switch
++++++
Fuel configures the hosts; switch configuration, however, is still a manual task.
Unfortunately, the set of configuration steps, and even the terminology used,
differs from vendor to vendor, so we will try to provide vendor-agnostic
information on how traffic should flow and leave the vendor-specific details
to you. We provide an example for a Cisco switch.
First of all, you should configure access ports to allow untagged PXE booting
connections from all Slave nodes to the Fuel Master node. We refer to this
network as the Fuel network.
By default, the Fuel Master node uses the `eth0` interface to serve PXE
requests on this network.
So if that's left unchanged, you have to set the switch port for `eth0` of Fuel
Master node to access mode.
We recommend that you use the `eth0` interfaces of all other nodes for PXE booting
as well. Corresponding ports must also be in access mode.
Taking into account that this is the network for PXE booting, do not mix
this L2 segment with any other network segments. Fuel runs a DHCP
server, and if there is another DHCP server on the same L2 network segment, both
the company's infrastructure and Fuel's will be unable to function properly.
You also need to configure each of the switch's ports connected to nodes as an
"STP Edge port" (or a "spanning-tree port fast trunk", according to Cisco
terminology). If you don't do that, DHCP timeout issues may occur.
Once the Fuel network is configured, Fuel can operate.
The other networks are required for OpenStack environments, and currently all of
these networks live in VLANs over one or multiple physical interfaces on a
node. This means that the switch should pass tagged traffic, and untagging is
done on the Linux hosts.
.. note:: For the sake of simplicity, all the VLANs specified on the networks tab of
the Fuel UI should be configured as tagged on the switch ports pointing to Slave
nodes.
Of course, it is possible to configure only certain ports as tagged for certain
nodes. However, in the current version, all existing networks are automatically
allocated to each node, regardless of its role, and the network check also
verifies that tagged traffic passes, even if some nodes do not require it
(for example, Cinder nodes do not need Fixed network traffic).
This is enough to deploy the OpenStack environment. However, from a
practical standpoint, it's still not really usable because there is no
connection to other corporate networks yet. To make that possible, you must
configure uplink port(s).
One of the VLANs may carry the office network. To provide access to the Fuel Master
node from your network, any other free physical network interface on the
Fuel Master node can be used and configured according to your network
rules (static IP or DHCP). The same network segment can be used for the
Public and Floating ranges. In this case, you must provide the corresponding
VLAN ID and IP ranges in the UI. One Public IP per node is used to SNAT
traffic out of the VM networks, and one or more Floating addresses per VM
instance are used to access the VM from your network, or even the global
Internet. Making a VM visible from the Internet is similar to making it visible
from the corporate network: corresponding IP ranges and VLAN IDs must be
specified for the Floating and Public networks. One current limitation
of Fuel is that the user must use the same L2 segment for both the Public and
Floating networks.
Example configuration for one of the ports on a Cisco switch::
interface GigabitEthernet0/6 # switch port
description s0_eth0 jv # description
switchport trunk encapsulation dot1q # enables VLANs
switchport trunk native vlan 262 # untagged traffic goes to VLAN 262 (native VLAN)
switchport trunk allowed vlan 100,102,104 # 100,102,104 VLANs are passed with tags
switchport mode trunk # To allow more than 1 VLAN on the port
spanning-tree portfast trunk # STP Edge port to skip network loop
# checks (to prevent DHCP timeout issues)
vlan 262,100,102,104 # Might be needed for enabling VLANs
Router
++++++
To make it possible for VMs to access the outside world, you must have an IP
address set on a router in the Public network. In the examples provided,
that IP is 12.0.0.1 in VLAN 101.
The Fuel UI has a special field on the networking tab for the gateway address. As
soon as the deployment of OpenStack starts, the network on the nodes is
reconfigured to use this gateway IP as the default gateway.
If the Floating addresses come from another L3 network, then you have to configure
the IP address (or even multiple IPs if the Floating addresses come from more than
one L3 network) for them on the router as well.
Otherwise, Floating IPs on the nodes will be inaccessible.
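If your gateway happens to be a Linux machine rather than a dedicated router, a
minimal sketch of the equivalent configuration, using the example address above
and a purely illustrative `eth1` uplink interface, would be::

    ip link add link eth1 name eth1.101 type vlan id 101   # VLAN 101 is the Public network in this example
    ip addr add 12.0.0.1/24 dev eth1.101
    ip link set eth1.101 up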
.. _access_to_public_net:
Deployment configuration to access OpenStack API and VMs from host machine
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The helper scripts for VirtualBox create the network adapters `eth0`, `eth1`, `eth2`,
which are represented on the host machine as `vboxnet0`, `vboxnet1`, `vboxnet2`
respectively, and assign the following IP addresses to the adapters:
vboxnet0 - 10.20.0.1/24,
vboxnet1 - 172.16.1.1/24,
vboxnet2 - 172.16.0.1/24.
For the demo environment on VirtualBox, the first network adapter is used to carry
Fuel network traffic, including PXE discovery.
To access Horizon and the OpenStack RESTful API via the Public network from the
host machine, you need a route from your host to the Public IP address of the
OpenStack Controller. Also, if access to a Floating IP of a VM is required, you
need a route to that Floating IP on the Compute host, where it is bound to the
Public interface.
To make this configuration possible on the VirtualBox demo environment, you have
to run the Public network untagged. The image below shows the configuration of the
Public and Floating networks that makes this possible.
.. image:: /_images/vbox_public_settings.jpg
:align: center
By default, the Public and Floating networks run on the first network interface.
You need to change this, as shown in the image below. Make sure you change
it on every node.
.. image:: /_images/vbox_node_settings.jpg
:align: center
If you use the default configuration in the VirtualBox scripts and follow the exact
same settings as on the images above, you should be able to access OpenStack Horizon
via the Public network after the installation.
If you want to enable Internet access for the VMs provisioned by OpenStack, you
have to configure NAT on the host machine so that packets reaching the `vboxnet1`
interface, according to the OpenStack settings tab, can find their way out of the
host. For Ubuntu, the following command, executed on the host, makes this happen::
sudo iptables -t nat -A POSTROUTING -s 172.16.1.0/24 \! -d 172.16.1.0/24 -j MASQUERADE
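NAT alone is not enough if packet forwarding is disabled on the host; in that
case you also need to enable it::

    sudo sysctl -w net.ipv4.ip_forward=1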
To access the VMs managed by OpenStack, you need to assign them IP addresses from
the Floating IP range. When the OpenStack cluster is deployed and a VM is
provisioned there, you have to associate one of the Floating IP addresses from the
pool with this VM, either in Horizon or via the Nova CLI. By default, OpenStack
blocks all traffic to the VM. To allow connectivity to the VM, you need to
configure security groups.
This can be done in Horizon, or from the OpenStack Controller using the following
commands::
. /root/openrc
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
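Allocating and associating a Floating IP from the Nova CLI, as mentioned above,
can be done roughly like this (the instance name and address are placeholders)::

    nova floating-ip-create
    nova add-floating-ip <instance-name> <floating-ip>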
The IP ranges for the Public and Management networks (172.16.*.*) are defined in the
``config.sh`` script. If the default values don't fit your needs, you are free to
change them, but do so before installing the Fuel Master node.

View File

@ -0,0 +1,431 @@
.. raw:: pdf
PageBreak
.. index:: Fuel UI: Post-Deployment Check
.. _Post-Deployment-Check:
Post-Deployment Check
=====================
.. contents:: :local:
On occasion, even a successful deployment may result in some OpenStack
components not working correctly. If this happens, Fuel offers the ability
to perform post-deployment checks to verify operations. Part of Fuel's goal
is to provide easily accessible status information about the most commonly
used components and the most recently performed actions. To perform these
checks you will use Sanity and Smoke checks, as described below:
**Sanity Checks**
Reveal whether the overall system is functional. If it fails, you will most
likely need to restart some services to operate OpenStack.
**Smoke Checks**
Dive in a little deeper and reveal networking, system-requirements, and
functionality issues.
Sanity Checks will likely be the point on which the success of your
deployment pivots, but it is critical to pay close attention to all
information collected from these tests. Another way to look at these tests
is by their names. Sanity Checks are intended to assist in maintaining your
sanity. Smoke Checks tell you where the fires are so you can put them out
strategically instead of firehosing the entire installation.
Benefits
--------
* Using post-deployment checks helps you identify potential issues which
may impact the health of a deployed system.
* All post-deployment checks provide detailed descriptions about failed
operations and tell you which component or components are not working
properly.
* Previously, performing these checks manually would have consumed a
great deal of time. Now, with these checks the process will take only a
few minutes.
* Aside from verifying that everything is working correctly, the process
will also determine how quickly your system works.
* Post-deployment checks continue to be useful, for example after
sizable changes are made in the system you can use the checks to
determine if any new failure points have been introduced.
Running Post-Deployment Checks
------------------------------
Now, let's take a closer look at what should be done to execute the tests and
to understand whether something is wrong with your OpenStack cluster.
.. image:: /_images/healthcheck_tab.jpg
:align: center
As you can see in the image above, the Fuel UI now contains a ``Healthcheck``
tab, indicated by the Heart icon.
All of the post-deployment checks are displayed on this tab. If your
deployment was successful, you will see a list of tests that show a green
Thumbs Up in the last column. The Thumb indicates the status of the
component. If you see a detailed message and a Thumbs Down, that
component has failed in some manner, and the details will indicate where the
failure was detected. All tests can be run on different environments, which
you select on the main page of the Fuel UI. You can run checks in parallel on
different environments.
Each test contains information on its estimated and actual duration. We have
included information about test processing time from our own tests and
indicate this in each test. Note that we show average times from the slowest
to the fastest systems we have tested, so your results will vary.
Once a test is complete the results will appear in the Status column. If
there was an error during the test the UI will display the error message
below the test name. To assist in the troubleshooting process, the test
scenario is displayed under the failure message and the failed step is
highlighted. You will find more detailed information on these tests later in
this section.
An actual test run looks like this:
.. image:: /_images/ostf_screen.jpg
:align: center
What To Do When a Test Fails
----------------------------
If a test fails, there are several ways to investigate the problem. You may
prefer to start in the Fuel UI, since its feedback is directly related to the
health of the deployment. To do so, start by checking the following:
* Under the `Healthcheck` tab
* In the OpenStack Dashboard
* In the test execution logs (/var/log/ostf-stdout.log)
* In the individual OpenStack components logs
Of course, there are many different conditions that can lead to system
breakdowns, but there are some simple things that can be examined before you
dig deep. The most common issues are:
* Not all OpenStack services are running
* Any defined quota has been exceeded
* Something has been broken in the network configuration
* There is a general lack of resources (memory/disk space)
The first thing to do is to ensure that all OpenStack services are up and
running. To do this, you can run the sanity test set or execute the following
command on your Controller node::
nova-manage service list
If any service is off (has “XXX” status), you can restart it using this command::
service openstack-<service name> restart
If all services are on but you're still experiencing some issues, you can
gather information from the OpenStack Dashboard (exceeded number of instances,
fixed IPs, and so on). You may also read the logs generated by the tests, which
are stored at ``/var/log/ostf-stdout.log``, or go to ``/var/log/<component>`` and
check whether any operation has ERROR status. If it is the latter, you may
have underprovisioned your environment and should check your math against your
project requirements.
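For example, a quick way to look for recent failures in the Nova logs on a
Controller node might be::

    grep ERROR /var/log/nova/*.log | tail -n 20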
Sanity Tests Description
------------------------
Sanity checks work by sending a query to all OpenStack components to get a
response back from them. Many of these tests are simple in that they ask
each service for a list of its associated objects and wait for a response.
The response can be something, nothing, an error, or a timeout, so there
are several ways to determine whether a service is up. The following list shows
which test is used for each service:
.. topic:: Instances list availability
Test checks that Nova component can return list of instances.
Test scenario:
1. Request list of instances.
2. Check returned list is not empty.
.. topic:: Images list availability
Test checks that Glance component can return list of images.
Test scenario:
1. Request list of images.
2. Check returned list is not empty.
.. topic:: Volumes list availability
Test checks that Cinder component can return list of volumes.
Test scenario:
1. Request list of volumes.
2. Check returned list is not empty.
.. topic:: Snapshots list availability
Test checks that Glance component can return list of snapshots.
Test scenario:
1. Request list of snapshots.
2. Check returned list is not empty.
.. topic:: Flavors list availability
Test checks that Nova component can return list of flavors.
Test scenario:
1. Request list of flavors.
2. Check returned list is not empty.
.. topic:: Limits list availability
Test checks that Nova component can return list of absolute limits.
Test scenario:
1. Request list of limits.
2. Check response.
.. topic:: Services list availability
Test checks that Nova component can return list of services.
Test scenario:
1. Request list of services.
2. Check returned list is not empty.
.. topic:: User list availability
Test checks that Keystone component can return list of users.
Test scenario:
1. Request list of users.
2. Check returned list is not empty.
.. topic:: Services execution monitoring
Test checks that all of the expected services are on, meaning the test will
fail if any of the listed services is in “XXX” status.
Test scenario:
1. Connect to a controller via SSH.
2. Execute nova-manage service list command.
3. Check there are no failed services.
.. topic:: DNS availability
Test checks that DNS is available.
Test scenario:
1. Connect to a Controller node via SSH.
2. Execute host command for the controller IP.
3. Check DNS name can be successfully resolved.
.. topic:: Networks availability
Test checks that Nova component can return list of available networks.
Test scenario:
1. Request list of networks.
2. Check returned list is not empty.
.. topic:: Ports availability
Test checks that Nova component can return list of available ports.
Test scenario:
1. Request list of ports.
2. Check returned list is not empty.
For more information refer to nova cli reference.
Smoke Tests Description
-----------------------
Smoke tests verify how your system handles basic OpenStack operations under
normal circumstances. The Smoke test series uses timeout tests for
operations that have a known completion time to determine if there is any
smoke, and thusly fire. An additional benefit to the Smoke Test series is
that you get to see how fast your environment is the first time you run them.
All tests use basic OpenStack services (Nova, Glance, Keystone, Cinder etc.),
therefore if any of them is off, the test that uses it will fail. It is
recommended to run all sanity checks prior to your smoke checks to verify that
all services are alive. This helps ensure that you don't get any false
negatives. The following is a description of each smoke test available:
.. topic:: Flavor creation
Test checks that a low-requirements flavor can be created.
Target component: Nova
Scenario:
1. Create small-size flavor.
2. Check created flavor has expected name.
3. Check flavor disk has expected size.
For more information refer to nova cli reference.
.. topic:: Volume creation
Test checks that a small-sized volume can be created.
Target component: Compute
Scenario:
1. Create a new small-size volume.
2. Wait for "available" volume status.
3. Check response contains "display_name" section.
4. Create instance and wait for "Active" status
5. Attach volume to instance.
6. Check volume status is "in use".
7. Get created volume information by its id.
8. Detach volume from instance.
9. Check volume has "available" status.
10. Delete volume.
If you see that the created volume is in ERROR status, it can mean that you've
exceeded the maximum number of volumes that can be created. You can check this
on the OpenStack dashboard. For more information refer to the volume management
instructions.
.. topic:: Instance booting and snapshotting
Test creates a keypair, checks that instance can be booted from default
image, then a snapshot can be created from it and a new instance can be
booted from a snapshot. Test also verifies that instances and images reach
ACTIVE state upon their creation.
Target component: Glance
Scenario:
1. Create new keypair to boot an instance.
2. Boot default image.
3. Make snapshot of created server.
4. Boot another instance from created snapshot.
If you see that the created instance is in ERROR status, it can mean that you've
exceeded a system requirements limit. The test uses a nano-flavor with the
following parameters: 64 MB RAM, 1 GB disk space, 1 virtual CPU. For more
information refer to the nova cli reference and the image management instructions.
.. topic:: Keypair creation
Target component: Nova.
Scenario:
1. Create a new keypair, check if it was created successfully
(check name is expected, response status is 200).
For more information refer to nova cli reference.
.. topic:: Security group creation
Target component: Nova
Scenario:
1. Create security group, check if it was created correctly
(check name is expected, response status is 200).
For more information refer to nova cli reference.
.. topic:: Network parameters check
Target component: Nova
Scenario:
1. Get list of networks.
2. Check seen network labels equal to expected ones.
3. Check seen network ids equal to expected ones.
For more information refer to nova cli reference.
.. topic:: Instance creation
Target component: Nova
Scenario:
1. Create new keypair (if it doesn't exist yet).
2. Create new sec group (if it doesn't exist yet).
3. Create instance with usage of created sec group and keypair.
For more information refer to nova cli reference, instance management
instructions.
.. topic:: Floating IP assignment
Target component: Nova
Scenario:
1. Create new keypair (if it doesn't exist yet).
2. Create new sec group (if it doesn't exist yet).
3. Create instance with usage of created sec group and keypair.
4. Create new floating IP.
5. Assign floating IP to created instance.
For more information refer to nova cli reference, floating ips management
instructions.
.. topic:: Network connectivity check through floating IP
Target component: Nova
Scenario:
1. Create new keypair (if it doesn't exist yet).
2. Create new sec group (if it doesn't exist yet).
3. Create instance with usage of created sec group and keypair.
4. Check connectivity for all floating IPs using ping command.
If this test fails, it's better to run a network check and verify that all
connections are correct. For more information refer to the Nova CLI reference's
floating IPs management instructions.
.. topic:: User creation and authentication in Horizon
Test creates new user, tenant, user role with admin privileges and logs in
to dashboard.
Target components: Nova, Keystone
Scenario:
1. Create a new tenant.
2. Check tenant was created successfully.
3. Create a new user.
4. Check user was created successfully.
5. Create a new user role.
6. Check user role was created successfully.
7. Perform token authentication.
8. Check authentication was successful.
9. Send authentication request to Horizon.
10. Verify response status is 200.
If this test fails on the authentication step, you should first try opening
the dashboard (it may be unreachable for some reason), and then you should
check your network configuration. For more information refer to nova cli
reference.

View File

@ -0,0 +1,191 @@
.. raw:: pdf
PageBreak
.. index:: Red Hat OpenStack
Red Hat OpenStack Deployment Notes
==================================
.. contents:: :local:
Overview
--------
Fuel can deploy OpenStack using Red Hat OpenStack packages and Red Hat
Enterprise Linux Server as a base operating system. Because Red Hat has
exclusive distribution rights for its products, Fuel cannot be bundled with
Red Hat OpenStack directly. To work around this issue, you can enter your
Red Hat account credentials in order to download Red Hat OpenStack Platform.
The necessary components will be prepared and loaded into Cobbler. There are
two methods Fuel supports for obtaining Red Hat OpenStack packages:
* :ref:`RHSM` (default)
* :ref:`RHN_Satellite`
.. index:: Red Hat OpenStack: Deployment Requirements
Deployment Requirements
-----------------------
Minimal Requirements
++++++++++++++++++++
* Red Hat account (https://access.redhat.com)
* Red Hat OpenStack entitlement (one per node)
* Internet access for the Fuel Master node
Optional requirements
+++++++++++++++++++++
* Red Hat Satellite Server
* Configured Satellite activation key
.. _RHSM:
Red Hat Subscription Management (RHSM)
--------------------------------------
Benefits
++++++++
* No need to handle large ISOs or physical media.
* Register all your clients with just a single username and password.
* Automatically register the necessary products required for installation and
download a full cache.
* Download only the latest packages.
* Download only necessary packages.
Considerations
++++++++++++++
* Must observe Red Hat licensing requirements after deployment
* Package download time is dependent on network speed (20-60 minutes)
.. seealso::
`Overview of Subscription Management - Red Hat Customer
Portal <https://access.redhat.com/site/articles/143253>`_
.. _RHN_Satellite:
Red Hat RHN Satellite
---------------------
Benefits
++++++++
* Faster download of Red Hat OpenStack packages
* Register all your clients with an activation key
* More granular control of package set for your installation
* Registered OpenStack hosts don't need external network access
* Easier to consume for large enterprise customers
Considerations
++++++++++++++
* Red Hat RHN Satellite is a separate offering from Red Hat and requires
dedicated hardware
* Still requires Red Hat Subscription Manager and Internet access to download
registration packages (just for Fuel Master host)
What you need
+++++++++++++
* Red Hat account (https://access.redhat.com)
* Red Hat OpenStack entitlement (one per host)
* Internet access for Fuel master host
* Red Hat Satellite Server
* Configured Satellite activation key
Your RHN Satellite activation key must be configured with the following channels
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
* RHEL Server High Availability
* RHEL Server Load Balancer
* RHEL Server Optional
* RHEL Server Resilient Storage
* RHN Tools for RHEL
* Red Hat OpenStack 3.0
.. seealso::
`Red Hat | Red Hat Network Satellite <http://www.redhat.com/products/enterprise-linux/rhn-satellite/>`_
.. _rhn_sat_channels:
Fuel looks for the following RHN Satellite channels.
* rhel-x86_64-server-6
* rhel-x86_64-server-6-ost-3
* rhel-x86_64-server-ha-6
* rhel-x86_64-server-lb-6
* rhel-x86_64-server-rs-6
.. note:: If you create cloned channels, leave these channel strings intact.
.. index:: Red Hat OpenStack: Troubleshooting
Troubleshooting Red Hat OpenStack Deployment
--------------------------------------------
Issues downloading from Red Hat Subscription Manager
++++++++++++++++++++++++++++++++++++++++++++++++++++
If you receive an error from Fuel UI regarding Red Hat OpenStack download
issues, ensure that you have a valid subscription to the Red Hat OpenStack
3.0 product. This product is separate from standard Red Hat Enterprise
Linux. You can check by going to https://access.redhat.com and checking
Active Subscriptions. Contact your `Red Hat sales representative
<https://access.redhat.com/site/solutions/368643>`_ to get the proper
subscriptions associated with your account.
If you are still encountering issues, `contact Mirantis
Support <http://www.mirantis.com/contact/>`_.
Issues downloading from Red Hat RHN Satellite
+++++++++++++++++++++++++++++++++++++++++++++
If you receive an error from Fuel UI regarding Red Hat OpenStack download
issues, ensure that you have all the necessary channels available on your
RHN Satellite Server. The correct list is :ref:`here <rhn_sat_channels>`.
If you are missing these channels, please contact your `Red Hat sales
representative <https://access.redhat.com/site/solutions/368643>`_ to get
the proper subscriptions associated with your account.
RHN Satellite error: "rhel-x86_64-server-rs-6 not found"
++++++++++++++++++++++++++++++++++++++++++++++++++++++++
This means your Red Hat Satellite Server has run out of available entitlements
or your licenses have expired. Check your RHN Satellite to ensure there is at
least one available entitlement for each of the required channels.
If any of these channels are missing or you need to make changes to your
account, please contact your `Red Hat sales representative
<https://access.redhat.com/site/solutions/368643>`_ to get the proper
subscriptions associated with your account.
Yum Error: Cannot retrieve repository metadata (repomd.xml) for repository: rhel-x86_64-server-6.
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
This can be caused by many problems. This could happen if your SSL
certificate does not match the hostname of your RHN Satellite Server or if
you configured Fuel to use an IP address during deployment. This is not
recommended and you should use a fully qualified domain name for your RHN
Satellite Server.
You may find solutions to your issues with ``repomd.xml`` at the
`Red Hat Knowledgebase <https://access.redhat.com/>`_ or contact
`Red Hat Support. <https://access.redhat.com/support/>`_.
GPG Key download failed. Looking for URL your-satellite-server/pub/RHN-ORG-TRUSTED-SSL-CERT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
This issue has two known causes. If you are using VirtualBox, the upstream DNS
resolver may not be properly configured. Ensure that the correct upstream DNS
resolver is set in ``/etc/dnsmasq.upstream``. This setting is configured during
the bootstrap process, but it is not possible to validate resolution of internal
DNS names at that time. The problem may also be caused by other DNS issues,
problems with the local network, or an incorrectly spelled RHN Satellite Server
hostname. Check your local network and settings and try again.

View File

@ -0,0 +1,9 @@
OpenStack is an extensible, versatile, and flexible cloud management platform. By exposing its portfolio of cloud infrastructure services (compute, storage, networking and other core resources) through ReST APIs, OpenStack enables a wide range of control over these services, both from the perspective of an integrated Infrastructure as a Service (IaaS) controlled by applications and through automated manipulation of the infrastructure itself.
This architectural flexibility doesn't set itself up magically. It asks you, the user and cloud administrator, to organize and manage an extensive array of configuration options. Consequently, getting the most out of your OpenStack cloud over time in terms of flexibility, scalability, and manageability requires a thoughtful combination of complex configuration choices. This can be very time consuming and requires studying a significant amount of documentation.
Mirantis Fuel for OpenStack was created to eliminate exactly these problems. This step-by-step guide takes you through the process of:
* Configuring OpenStack and its supporting components into a robust cloud architecture
* Deploying that architecture through an effective, well-integrated automation package that sets up and maintains the components and their configurations
* Providing access to a well-integrated, up-to-date set of components known to work together

View File

@ -0,0 +1,21 @@
.. index:: Production Considerations
.. _Production:
=========================
Production Considerations
=========================
Fuel simplifies the setup of an OpenStack cluster, affording you the ability
to dig in and fully understand how OpenStack works. You can deploy on test
hardware or in a virtualized environment and root around all you like, but
when it comes time to deploy to production there are a few things to take
into consideration.
In this section we discuss these considerations, including how to size your
hardware and how to handle large-scale deployments.
.. contents:: :local:
:depth: 2
.. include:: /pages/production-considerations/0015-sizing-hardware.rst

View File

@ -0,0 +1,285 @@
.. raw:: pdf
PageBreak
.. index:: Sizing Hardware, Hardware Sizing
.. _Sizing_Hardware:
Sizing Hardware for Production Deployment
=========================================
.. contents:: :local:
.. TODO(mihgen): Add link to Hardware calculator on Mirantis site
One of the first questions people ask when planning an OpenStack deployment is
"what kind of hardware do I need?" There is no such thing as a one-size-fits-all
answer, but there are straightforward rules for selecting appropriate hardware
that will suit your needs. The Golden Rule, however, is to always accommodate
growth. With the potential for growth accounted for, you can move on to the
actual hardware needs.
Many factors contribute to selecting hardware for an OpenStack cluster --
`contact Mirantis <http://www.mirantis.com/contact/>`_ for information on your
specific requirements -- but in general, you will want to consider the following
factors:
* Processing
* Memory
* Storage
* Networking
Your needs in each of these areas are going to determine your overall hardware
requirements.
Processing
----------
In order to calculate how much processing power you need to acquire you will
need to determine the number of VMs your cloud will support. You must also
consider the average and maximum processor resources you will allocate to each
VM. In the vast majority of deployments, the allocated resources will be the
same for all of your VMs. However, if you are planning to create groups of VMs
that have different requirements, you will need to calculate for all of them in
aggregate. Consider this example:
* 100 VMs
* 2 EC2 compute units (2 GHz) average
* 16 EC2 compute units (16 GHz) max
To make it possible to provide the maximum CPU in this example you will need at
least 5 CPU cores (16 GHz / (2.4 GHz per core * 1.3 to adjust for hyper-threading))
per machine, and at least 84 CPU cores ((100 VMs * 2 GHz per VM) / 2.4 GHz per
core) in total. If you were to select the Intel E5 2650-70 8 core CPU, that
means you need 11 sockets (84 cores / 8 cores per socket), rounded up to 12.
This breaks down to six dual-socket servers (12 sockets / 2 sockets per server),
for a "packing density" of 17 VMs per server (102 VMs / 6 servers).
This process also accommodates growth since you now know what a single server
using this CPU configuration can support. You can add new servers accounting
for 17 VMs each as needed without having to re-calculate.
You will also need to take into account the following:
* This model assumes you are not oversubscribing your CPU.
* If you are considering Hyper-threading, count each core as 1.3, not 2.
* Choose a good value CPU that supports the technologies you require.
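As a sanity check on this arithmetic, here is a minimal bash sketch of the same
calculation, using the example figures above (the variable names are ours, and
clock speeds are expressed in MHz to keep the math in integers)::

    # Example figures from the text above; adjust for your own deployment.
    VMS=100                 # number of VMs
    AVG_MHZ=2000            # 2 GHz average per VM
    CORE_MHZ=2400           # 2.4 GHz per physical core
    CORES_PER_SOCKET=8      # e.g. an 8-core E5-class CPU
    SOCKETS_PER_SERVER=2

    # Round up at every step, since you cannot buy a fraction of a core.
    TOTAL_CORES=$(( (VMS * AVG_MHZ + CORE_MHZ - 1) / CORE_MHZ ))                  # 84
    SOCKETS=$(( (TOTAL_CORES + CORES_PER_SOCKET - 1) / CORES_PER_SOCKET ))        # 11
    SERVERS=$(( (SOCKETS + SOCKETS_PER_SERVER - 1) / SOCKETS_PER_SERVER ))        # 6
    VMS_PER_SERVER=$(( (VMS + SERVERS - 1) / SERVERS ))                           # 17
    echo "$TOTAL_CORES cores, $SOCKETS sockets, $SERVERS servers, $VMS_PER_SERVER VMs per server"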
Memory
------
Continuing with the example from the previous section, we need to determine
how much RAM is required to support 17 VMs per server. Let's assume that
you need an average of 4 GB of RAM per VM, with dynamic allocation of up to
12 GB per VM. Planning for the case where all VMs use 12 GB of RAM requires
each server to have 204 GB of available RAM.
You must also consider that the node itself needs sufficient RAM to accommodate
core OS operations as well as RAM for each VM container (not the RAM allocated
to each VM, but the memory the core OS uses to run the VM). The node's OS must
run its own operations, schedule processes, allocate dynamic resources, and
handle network operations, so giving the node itself at least 16 GB of RAM
is not unreasonable.
Considering that server RAM comes in 4 GB, 8 GB, 16 GB and 32 GB sticks, this
works out to 256 GB of RAM installed per server. An average 2-CPU-socket server
board has 16-24 RAM slots; to have 256 GB installed you would need sixteen 16 GB
sticks of RAM. That satisfies your RAM needs for up to 17 VMs requiring dynamic
allocation of up to 12 GB each and supports all core OS requirements.
You can adjust this calculation based on your needs.
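Continuing the bash sketch from the previous section (again, the figures are the
illustrative ones used above)::

    VMS_PER_SERVER=17
    MAX_GB_PER_VM=12        # worst case: every VM uses its 12 GB maximum
    HOST_OS_GB=16           # RAM reserved for the host OS and per-VM overhead

    REQUIRED_GB=$(( VMS_PER_SERVER * MAX_GB_PER_VM + HOST_OS_GB ))   # 220 GB
    echo "need at least $REQUIRED_GB GB; the next practical stick combination is 256 GB"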
Storage Space
-------------
When it comes to disk space there are several types that you need to consider:
* Ephemeral (the local drive space for a VM)
* Persistent (the remote volumes that can be attached to a VM)
* Object Storage (such as images or other objects)
As for the local drive space that must reside on the Compute nodes, in our
example of 100 VMs we make the following assumptions:
* 150 GB local storage per VM
* 15 TB total of local storage (100 VMs * 150 GB per VM)
* 500 GB of persistent volume storage per VM
* 50 TB total persistent storage
Returning to our already established example, we need to figure out how much
storage to install per server. This storage will service the 17 VMs per server.
If we are assuming 150 GB of storage for each VM's drive container, then we would
need to install 2.5 TB of storage on the server. Since most servers have
anywhere from 4 to 32 2.5" drive slots or 2 to 12 3.5" drive slots, depending on
the server form factor (i.e., 2U vs. 4U), you will need to consider how the
storage will be impacted by the intended use.
If storage impact is not expected to be significant, then you may consider using
unified storage. For this example a single 3 TB drive would provide more than
enough storage for seventeen 150 GB VMs. If speed is really not an issue, you might even
consider installing two or three 3 TB drives and configure a RAID-1 or RAID-5
for redundancy. If speed is critical, however, you will likely want to have a
single hardware drive for each VM. In this case you would likely look at a 3U
form factor with 24-slots.
Don't forget that you will also need drive space for the node itself, and don't
forget to order the correct backplane that supports the drive configuration
that meets your needs. Using our example specifications and assuming that speed
is critical, a single server would need 18 drives, most likely 2.5" 15,000 RPM
146 GB SAS drives.
Throughput
++++++++++
As far as throughput, that's going to depend on what kind of storage you choose.
In general, you calculate IOPS based on the packing density (drive IOPS * drives
in the server / VMs per server), but the actual drive IOPS will depend on the
drive technology you choose. For example:
* 3.5" slow and cheap (100 IOPS per drive, with 2 mirrored drives)
* 100 IOPS * 2 drives / 17 VMs per server = 12 Read IOPS, 6 Write IOPS
* 2.5" 15K (200 IOPS, four 600 GB drive, RAID-10)
* 200 IOPS * 4 drives / 17 VMs per server = 48 Read IOPS, 24 Write IOPS
* SSD (40K IOPS, eight 300 GB drive, RAID-10)
* 40K * 8 drives / 17 VMs per server = 19K Read IOPS, 9.5K Write IOPS
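The same per-VM arithmetic in bash, for the 2.5" 15K example above (results
rounded up, as in the text)::

    DRIVE_IOPS=200          # a single 2.5" 15K SAS drive
    DRIVES=4                # four drives in RAID-10
    VMS_PER_SERVER=17

    READ_IOPS=$(( (DRIVE_IOPS * DRIVES + VMS_PER_SERVER - 1) / VMS_PER_SERVER ))  # 48
    WRITE_IOPS=$(( READ_IOPS / 2 ))                                               # 24 (mirrored writes)
    echo "$READ_IOPS read IOPS and $WRITE_IOPS write IOPS per VM"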
Clearly, SSD gives you the best performance, but the difference in cost between
SSDs and the less costly platter-based solutions is going to be significant, to
say the least. The acceptable cost burden is determined by the balance between
your budget and your performance and redundancy needs. It is also important to
note that the rules for redundancy in a cloud environment are different than a
traditional server installation in that entire servers provide redundancy as
opposed to making a single server instance redundant.
In other words, the weight for redundant components shifts from individual OS
installation to server redundancy. It is far more critical to have redundant
power supplies and hot-swappable CPUs and RAM than to have redundant compute
node storage. If, for example, you have 18 drives installed on a server, with
17 of them directly allocated to the VMs (one each), and one fails, you simply
replace the drive and push a new node copy. The remaining VMs carry whatever
additional load is present due to the temporary loss of one node.
Remote storage
++++++++++++++
IOPS will also be a factor in determining how you plan to handle persistent
storage. For example, consider these options for laying out your 50 TB of remote
volume space:
* 12 drive storage frame using 3 TB 3.5" drives mirrored
* 36 TB raw, or 18 TB usable space per 2U frame
* 3 frames (50 TB / 18 TB per server)
* 12 slots x 100 IOPS per drive = 1200 Read IOPS, 600 Write IOPS per frame
* 3 frames x 1200 IOPS per frame / 100 VMs = 36 Read IOPS, 18 Write IOPS per VM
* 24 drive storage frame using 1TB 7200 RPM 2.5" drives
* 24 TB raw, or 12 TB usable space per 2U frame
* 5 frames (50 TB / 12 TB per server)
* 24 slots x 100 IOPS per drive = 2400 Read IOPS, 1200 Write IOPS per frame
* 5 frames x 2400 IOPS per frame / 100 VMs = 120 Read IOPS, 60 Write IOPS per VM
You can accomplish the same thing with a single 36 drive frame using 3 TB
drives, but this becomes a single point of failure in your cluster.
Object storage
++++++++++++++
When it comes to object storage, you will find that you need more space than
you think. For example, this example specifies 50 TB of object storage.
`Easy right?` Not really.
Object storage uses a default of 3 times the required space for replication,
which means you will need 150 TB. However, to accommodate two hand-off zones,
you will need 5 times the required space, which actually means 250 TB.
The calculations don't end there. You don't ever want to run out of space, so
"full" should really be more like 75% of capacity, which means you will need a
total of 333 TB, or a multiplication factor of 6.66.
Of course, that might be a bit much to start with; you might want to start
with a happy medium of a multiplier of 4, then acquire more hardware as your
drives begin to fill up. That calculates to 200 TB in our example. So how do
you put that together? If you were to use 3 TB 3.5" drives, you could use a 12
drive storage frame, with 6 servers hosting 36 TB each (for a total of 216 TB).
You could also use a 36 drive storage frame, with just 2 servers hosting 108 TB
each, but this is not recommended: the cost of a single failure becomes very
high, leading to replication and capacity issues.
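The multiplication factor above can be reproduced with the same kind of
back-of-the-envelope bash arithmetic (values taken from this example; the 75%
threshold is the "full" level suggested above)::

    REQUIRED_TB=50
    REPLICA_FACTOR=3        # Swift default: 3 copies of every object
    ZONE_FACTOR=5           # roughly 5x to accommodate two hand-off zones
    FULL_PERCENT=75         # never let the cluster get more than 75% full

    echo "with replication only:   $(( REQUIRED_TB * REPLICA_FACTOR )) TB"                     # 150 TB
    echo "with hand-off zones:     $(( REQUIRED_TB * ZONE_FACTOR )) TB"                        # 250 TB
    echo "with 75% full threshold: $(( REQUIRED_TB * ZONE_FACTOR * 100 / FULL_PERCENT )) TB"   # 333 TB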
Networking
----------
Perhaps the most complex part of designing an OpenStack cluster is the
networking.
An OpenStack cluster can involve multiple networks even beyond the Public,
Private, and Internal networks. Your cluster may involve tenant networks,
storage networks, multiple tenant private networks, and so on. Many of these
will be VLANs, and all of them will need to be planned out in advance to avoid
configuration issues.
In terms of the example network, consider these assumptions:
* 100 Mbits/second per VM
* HA architecture
* Network Storage is not latency sensitive
In order to achieve this, you can use two 1 Gb links per server (2 x 1000
Mbits/second / 17 VMs = 118 Mbits/second).
Using two links also helps with HA. You can also increase throughput and
decrease latency by using two 10 Gb links, bringing the bandwidth per VM to
1 Gb/second, but if you're going to do that, you've got one more factor to
consider.
Scalability and oversubscription
++++++++++++++++++++++++++++++++
It is one of the ironies of networking that 1 Gb Ethernet generally scales
better than 10 Gb Ethernet -- at least until 100 Gb switches are more commonly
available. It's possible to aggregate the 1 Gb links in a 48 port switch, so
that you have 48 x 1 Gb links down, but 4 x 10 Gb links up. Do the same thing with a
10 Gb switch, however, and you have 48 x 10 Gb links down and only 4 x 40 Gb links
up, resulting in oversubscription.
Like many other issues in OpenStack, you can avoid this problem to a great
extent with careful planning. Problems only arise when you are moving between
racks, so plan to create "pods", each of which includes both storage and
compute nodes. Generally, a pod is the size of a non-oversubscribed L2 domain.
Hardware for this example
+++++++++++++++++++++++++
In this example, you are looking at:
* 2 data switches (for HA), each with a minimum of 12 ports for data
(2 x 1 Gb links per server x 6 servers)
* 1 x 1 Gb switch for IPMI (1 port per server x 6 servers)
* Optional Cluster Management switch, plus a second for HA
Because your network will in all likelihood grow, it's best to choose 48 port
switches. Also, as your network grows, you will need to consider uplinks and
aggregation switches.
Summary
-------
In general, your best bet is to choose a 2 socket server with a balance of I/O,
CPU, memory, and disk that meets your project requirements.
Look for 1U R-class or 2U high-density C-class servers. Some good options
from Dell for compute nodes include:
* Dell PowerEdge R620
* Dell PowerEdge C6220 Rack Server
* Dell PowerEdge R720XD (for high disk or IOPS requirements)
You may also want to consider systems from HP (http://www.hp.com/servers) or
from a smaller systems builder like Aberdeen, a manufacturer that specializes
in powerful, low-cost systems and storage servers (http://www.aberdeeninc.com).

View File

@ -0,0 +1,9 @@
.. raw:: pdf
PageBreak
.. index:: Large Scale Deployments
.. _Large_Scale_Deployments:
.. TODO(mihgen): Fill in this section. It needs to be completely rewritten.

View File

@ -0,0 +1,23 @@
.. index:: Reference Architectures
.. _Reference-Architectures:
=======================
Reference Architectures
=======================
.. contents:: :local:
:depth: 2
.. include:: /pages/reference-architecture/0010-overview.rst
.. include:: /pages/reference-architecture/0012-simple.rst
.. include:: /pages/reference-architecture/0014-compact.rst
.. include:: /pages/reference-architecture/0016-full.rst
.. include:: /pages/reference-architecture/0015-closer-look.rst
.. include:: /pages/reference-architecture/0018-red-hat-differences.rst
.. include:: /pages/reference-architecture/0020-logical-setup.rst
.. include:: /pages/reference-architecture/0030-cluster-sizing.rst
.. include:: /pages/reference-architecture/0040-network-setup.rst
.. include:: /pages/reference-architecture/0050-technical-considerations-overview.rst
.. include:: /pages/reference-architecture/0060-quantum-vs-nova-network.rst
.. include:: /pages/reference-architecture/0080-swift-notes.rst

View File

@ -0,0 +1,41 @@
.. index:: Introduction
.. _Introduction:
Introducing Fuel™ for OpenStack
===============================
OpenStack is an extensible, versatile, and flexible cloud management
platform. By exposing its portfolio of cloud infrastructure services
(compute, storage, networking and other core resources) through ReST APIs,
OpenStack enables a wide range of control over these services, both from the
perspective of an integrated Infrastructure as a Service (IaaS) controlled
by applications and through automated manipulation of the infrastructure
itself.
This architectural flexibility doesn't set itself up magically. It asks you,
the user and cloud administrator, to organize and manage an extensive array
of configuration options. Consequently, getting the most out of your
OpenStack cloud over time in terms of flexibility, scalability, and
manageability requires a thoughtful combination of complex configuration
choices. This can be very time consuming and requires that you become
familiar with a lot of documentation from a number of different projects.
Mirantis Fuel™ for OpenStack was created to eliminate exactly these problems.
This step-by-step guide takes you through this process of:
* Configuring OpenStack and its supporting components into a robust cloud
architecture
* Deploying that architecture through an effective, well-integrated automation
package that sets up and maintains the components and their configurations
* Providing access to a well-integrated, up-to-date set of components known to
work together
Fuel™ for OpenStack can be used to create virtually any OpenStack
configuration. To make things easier, the installation includes several
pre-defined architectures. For the sake of simplicity, this guide emphasizes
a single, common reference architecture: the multi-node, high-availability
configuration. We begin with an explanation of this architecture, then move
on to the details of creating the configuration in a test environment using
VirtualBox. Finally, we give you the information you need to know to create
this and other OpenStack architectures in a production environment.

View File

@ -0,0 +1,3 @@
This section covers subjects that go beyond the standard OpenStack cluster,
from configuring OpenStack Networking for high-availability to adding your own
custom components to your cluster using Fuel.

View File

@ -0,0 +1,333 @@
.. raw:: pdf
PageBreak
.. index:: Advanced Configurations
Advanced Configurations
==========================================
This section covers subjects that go beyond the standard OpenStack cluster,
from configuring OpenStack Networking for high-availability to adding your own
custom components to your cluster using Fuel.
Adding And Configuring Custom Services
--------------------------------------
Fuel is designed to help you easily install a standard OpenStack cluster, but what do you do if your cluster is not standard? What if you need services or components that are not included with the standard Fuel distribution? This document gives you all of the information you need to add custom services and packages to a Fuel-deployed cluster.
Fuel usage scenarios and how they affect installation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Two basic Fuel usage scenarios exist:
* In the first scenario, a deployment engineer uses the Fuel ISO image to create a master node, make necessary changes to configuration files, and deploy OpenStack. In this scenario, each node gets a clean OpenStack installation.
* In the second scenario, the master node and other nodes in the cluster have already been installed, and the deployment engineer has to deploy OpenStack to an existing configuration.
For this discussion, the first scenario requires that any needed customizations be applied during the deployment, while in the second scenario customizations have already been applied.
In most cases, best practices dictate that you deploy and test OpenStack first, later adding any custom services. Fuel works using puppet manifests, so the simplest way to install a new service is to edit the current site.pp file on the Puppet Master to add any additional deployment paths for the target nodes. There are, however, certain components that must be installed prior to the installation of OpenStack (i.e., hardware drivers, management software, etc...). In cases like these, Puppet can only be used to perform these installations using a separate, custom site.pp file that prepares the target system(s) for OpenStack installation. An advantage to this method, however, is that it helps isolate version mismatches and the various OpenStack dependencies.
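For illustration only, a post-deployment ``site.pp`` addition might look like the following sketch; the ``my_monitoring_agent`` class, its parameter, and the node name pattern are hypothetical::

   # Hypothetical sketch: apply a custom class to all compute nodes by
   # adding (or extending) a node definition in site.pp.
   node /fuel-compute-[\d+]/ {
     include stdlib
     class { 'my_monitoring_agent':      # custom module placed in /etc/puppet/modules
       server_address => '10.20.0.2',    # hypothetical management server address
     }
   }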
If a pre-deployment site.pp approach is not an option, you can inject a custom component installation into the existing Fuel manifests. If you elect to go this route, you'll need to be aware of software source compatibility issues, as well as installation stages, component versions, incompatible dependencies, and declared resource names.
In short, simple custom component installation may be accomplished by editing the site.pp file, but more complex components should be added as new Fuel components.
In the next section we take a closer look at what you need to know.
Installing the new service along with Fuel
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When it comes to installing your new service or component alongside Fuel, you have several options. How you go about it depends on where in the process the component needs to be available. Let's look at each step and how it can impact your installation.
**Boot the master node**
In most cases, you will be installing the master node from the Fuel ISO. This is a semi-automated step, and doesn't allow for any custom components. If for some reason you need to install a node at this level, you will need to use the manual Fuel installation procedure.
**Cobbler configuration**
If your customizations need to take place before the install of the operating system, or even as part of the operating system install, this is where you will add them to the configuration process. This is also where you would make customizations to other services. At this stage, you are making changes to the operating system kickstart/pre-seed files, and may include any custom software source and components required to install the operating system for a node. Anything that needs to be installed before OpenStack should be configured during this step.
**OpenStack installation**
It is during this stage that you perform any Puppet, Astute, or mCollective configuration. In most cases, this means customizing the Puppet site.pp file to add any custom components during the actual OpenStack installation.
This step actually includes several different stages. (In fact, Puppet stdlib defines several additional default stages that Fuel does not use.) These stages include the following; a short stage-assignment sketch follows the list:
0. ``Puppetlabs-repo``. mCollective uses this stage to add the Puppetlabs repositories during operating system and Puppet deployment.
1. ``Openstack-custom-repo``. Additional repositories required by OpenStack are configured at this stage. Additionally, to avoid compatibility issues, the Puppetlabs repositories are switched off at this stage. As a general rule, it is a good idea to turn off any unnecessary software repositories defined for operating system installation.
2. ``FUEL``. During this stage, Fuel performs any actions defined for the current operating system.
3. ``Netconfig``. During this stage, Fuel performs all network configuration actions. This means that you should include any custom components that are related to the network in this stage.
4. ``Main``. The actual OpenStack installation process happens during this stage. Install any remaining non-network-related components during or after this stage.
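As a rough illustration (the ``my_bonding_tweaks`` class name is hypothetical), a network-related custom class could be pinned to the ``netconfig`` stage like this::

   # Hypothetical sketch: run a custom network-related class during the
   # 'netconfig' stage so it is applied before the main OpenStack installation.
   class { 'my_bonding_tweaks':
     stage => 'netconfig',
   }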
**Post-OpenStack install**
At this point, OpenStack is installed and you may add any components you like; however, take care not to break OpenStack. This is a good place to make an image of the nodes so you have a roll-back point in case of any catastrophic errors that render OpenStack or any other components inoperable. If you are preparing to deploy a large-scale environment, you may want to perform a small-scale test first to familiarize yourself with the entire process and make yourself aware of any potential gotchas that are specific to your infrastructure. You should perform this small-scale test using the same hardware that the large-scale deployment will use, not VirtualBox, because VirtualBox does not offer the ability to test any custom hardware driver installations your physical hardware may require.
Defining a new component
^^^^^^^^^^^^^^^^^^^^^^^^
In general, we recommend you follow these steps to define a new component:
#. **Custom stages. Optional.**
Declare a custom stage or stages to help Puppet understand the required installation sequence. Stages are special markers indicating the sequence of actions. The best practice is to use the ``before`` input parameter for every stage, to help define the correct sequence. The default built-in stage is "main". Every Puppet action is automatically assigned to the main stage if no stage is explicitly specified for the action.
Note that since Fuel installs almost all of OpenStack during the main stage, custom stages may not help, so future plans include breaking the OpenStack installation into several sub-stages.
Don't forget to take into account other existing stages; defining several parallel sequences of stages increases the chances that Puppet will order them incorrectly if you do not explicitly specify the order.
*Example*::
stage {'Custom stage 1':
before => Stage['Custom stage 2'],
}
stage {'Custom stage 2':
before => Stage['main'],
}
Note that there are several limitations to stages, and they should be used with caution and only with the simplest of classes. You can find more information regarding stages and limitations here: http://docs.puppetlabs.com/puppet/2.7/reference/lang_run_stages.html.
#. **Custom repositories. Optional.**
If the custom component requires a custom software source, you may declare a new repository and add it during one of the early stages of the installation.
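*Example* (a hedged sketch; the repository name, URL, and stage name are hypothetical, and the ``yumrepo`` type shown applies to RPM-based systems -- on Debian-based systems you would declare an APT source instead)::

   # Declare an early stage for repository setup.
   stage {'Custom repo stage':
     before => Stage['main'],
   }

   # Hypothetical class that registers a custom package repository.
   class my_custom_repo {
     yumrepo { 'my-custom-repo':
       descr    => 'Hypothetical repository with custom packages',
       baseurl  => 'http://repo.example.com/centos/6/x86_64/',
       gpgcheck => '0',
       enabled  => '1',
     }
   }

   # Run the repository class during the early stage.
   class { 'my_custom_repo': stage => 'Custom repo stage' }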
#. **Common variable definition**
It is a good idea to have all common variables defined in a single place. Unlike variables in many other languages, Puppet variables are actually constants, and may be assigned only once inside a given scope.
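*Example* (a minimal sketch; the values are placeholders)::

   # Common variables, defined once in a single place.
   # Puppet variables are constants: assigning $custom_package_download_url
   # a second time in the same scope would cause an error.
   $internal_virtual_ip         = '10.0.0.110'
   $custom_package_download_url = 'http://repo.example.com/packages'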
#. **OS and condition-dependent variable definition**
We suggest that you assign all common operating system or condition-dependent variables to a single location, preferably near the other common variables. Also, be sure to always use a ``default`` section when defining conditional operators or you could experience configuration issues.
*Example*::
case $::osfamily {
# RedHat in most cases should work for CentOS and Fedora as well
'RedHat': {
# List of packages to get from URL/path.
# Separate list should be defined for each separate URL!
$custom_package_list_from_url = ['qpid-cpp-server-0.14-16.el6.x86_64.rpm']
}
'Debian': {
# List of packages to get from URL/path.
# Separate list should be defined for each separate URL!
$custom_package_list_from_url = [ "qpidd_0.14-2_amd64.deb" ]
}
default: {
fail("Module install_custom_package does not support ${::operatingsystem}")
}
}
#. **Define installation procedures for independent custom components as classes**
You can think of public classes as singleton collections, or as a named block of code with its own namespace. Each class should be defined only once, but every class may be used with different input variable sets. The best practice is to define a separate class for every component, define required sub-classes for sub-components, and include class-dependent required resources within the actual class/subclass.
*Example*::
class add_custom_service (
# Input parameter definitions:
# Name of the service to place behind HAProxy. **Mandatory**.
# This name appears as a new HAProxy configuration block in /etc/haproxy/haproxy.cfg.
$service_name_in_haproxy_config,
$custom_package_download_url,
$custom_package_list_from_url,
#The list of remaining input parameters
...
) {
# HAProxy::params is a container class holding default parameters for the haproxy class. It adds and populates the Global and Default sections in /etc/haproxy/haproxy.cfg.
# If you install a custom service over the already deployed HAProxy configuration, it is probably better to comment out the following string:
include haproxy::params
#Class resources definitions:
# Define the list of package names to be installed
define install_custom_package_from_url (
$custom_package_download_url,
$package_provider = undef
) {
exec { "download-${name}" :
command => "/usr/bin/wget -P/tmp ${custom_package_download_url}/${name}",
creates => "/tmp/${name}",
} ->
install_custom_package { "${name}" :
package_provider => $package_provider,
package_source => "/tmp/${name}",
}
}
define install_custom_package (
$package_provider = undef,
$package_source = undef
) {
package { "custom-${name}" :
ensure => present,
provider => $package_provider,
source => $package_source
}
}
#Here we actually install all the packages from a single URL.
if is_array($custom_package_list_from_url) {
install_custom_package_from_url { $custom_package_list_from_url :
package_provider => $package_provider,
custom_package_download_url => $custom_package_download_url,
}
}
}
#. **Target nodes**
Every component should be explicitly assigned to a particular target node or nodes. To do that, declare the node or nodes within site.pp. When Puppet runs the manifest for each node, it compares each node definition with the name of the current hostname and applies only the classes assigned to the current node. Node definitions may include regular expressions. For example, you can apply the class 'add_custom_service' to all controller nodes with hostnames from fuel-controller-00 to fuel-controller-xxx, where xxx is any integer value, using the following definition:
*Example*::
node /fuel-controller-[\d+]/ {
include stdlib
class { 'add_custom_service':
stage => 'Custom stage 1',
service_name_in_haproxy_config => $service_name_in_haproxy_config,
custom_package_download_url => $custom_package_download_url,
custom_package_list_from_url => $custom_package_list_from_url,
}
}
Fuel API Reference
^^^^^^^^^^^^^^^^^^
**add_haproxy_service**
Location: Top level
As the name suggests, this function enables you to create a new HAProxy service. The service is defined in the ``/etc/haproxy/haproxy.cfg`` file, and generally looks something like this::
listen keystone-2
bind 10.0.74.253:35357
bind 10.0.0.110:35357
balance roundrobin
option httplog
server fuel-controller-01.example.com 10.0.0.101:35357 check
server fuel-controller-02.example.com 10.0.0.102:35357 check
To accomplish this, you might create a Fuel statement such as::
add_haproxy_service { 'keystone-2' :
order => 30,
balancers => {'fuel-controller-01.example.com' => '10.0.0.101',
'fuel-controller-02.example.com' => '10.0.0.102'},
virtual_ips => {'10.0.74.253', '10.0.0.110'},
port => '35357',
haproxy_config_options => { 'option' => ['httplog'], 'balance' => 'roundrobin' },
balancer_port => '35357',
balancermember_options => 'check',
mode => 'tcp',
define_cookies => false,
define_backend => false,
collect_exported => false
}
Let's look at how this command works.
**Usage:** ::
add_haproxy_service { '<SERVICE_NAME>' :
order => $order,
balancers => $balancers,
virtual_ips => $virtual_ips,
port => $port,
haproxy_config_options => $haproxy_config_options,
balancer_port => $balancer_port,
balancermember_options => $balancermember_options,
mode => $mode, #Optional. Default is 'tcp'.
define_cookies => $define_cookies, #Optional. Default false.
define_backend => $define_backend,#Optional. Default false.
collect_exported => $collect_exported, #Optional. Default false.
}
**Parameters:**
``<'Service name'>``
The name of the new HAProxy listener section. In our example it was ``keystone-2``. If you want to include an IP address or port in the listener name, you have the option to use a name such as::
'stats 0.0.0.0:9000 #Listen on all IP's on port 9000'
``order``
This parameter determines the order of the file fragments. It is optional, but we strongly recommend setting it manually. Fuel already has several different order values from 1 to 100 hardcoded for HAProxy configuration. If your HAProxy configuration fragments appear in the wrong places in ``/etc/haproxy/haproxy.cfg`` this is likely due to an incorrect order value. It is acceptable to set order values greater than 100 in order to place your custom configuration block at the end of ``haproxy.cfg``.
Puppet assembles configuration files from fragments. First it creates several configuration fragments and temporarily stores all of them as separate files. Every fragment has a name such as ``${order}-${fragment_name}``, so the order determines the number of the current fragment in the fragment sequence. After all the fragments are created, Puppet reads the fragment names and sorts them in ascending order, concatenating all the fragments in that order. In other words, a fragment with a smaller order value always goes before all fragments with a greater order value.
The ``keystone-2`` fragment from the example above has ``order = 30``, so it is placed after the ``keystone-1`` section (``order = 20``) and before the ``nova-api-1`` section (``order = 40``).
``balancers``
Balancers (or **Backends** in HAProxy terms) are a hash of ``{ "$::hostname" => $::ipaddress }`` values.
The default is ``{ "<current hostname>" => <current ipaddress> }``, but that value is set for compatibility only, and may not work correctly in HA mode. Instead, the default for HA mode is to explicitly set the Balancers as ::
Haproxy_service {
balancers => $controller_internal_addresses
}
where ``$controller_internal_addresses`` represents a hash of all the controllers with a corresponding internal IP address; this value is set in ``site.pp``.
The ``balancers`` parameter is a list of HAProxy listener balance members (hostnames) with corresponding IP addresses. The following strings from the ``keystone-2`` listener example represent balancers::
server fuel-controller-01.example.com 10.0.0.101:35357 check
server fuel-controller-02.example.com 10.0.0.102:35357 check
Every key pair in the ``balancers`` hash adds a new string to the list of balancers defined in the listener section. Different options may be set for every string.
``virtual_ips``
This parameter represents an array of IP addresses (or **Frontends** in HAProxy terms) for the current listener. Every IP address in this array adds a new string to the bind section of the current listener. The following strings from the ``keystone-2`` listener example represent virtual IPs::
bind 10.0.74.253:35357
bind 10.0.0.110:35357
``port``
This parameter specifies the frontend port for the listener. Currently, all frontends must use the same port.
The following strings from the ``keystone-2`` listener example represent the frontend port, where the port is 35357::
bind 10.0.74.253:35357
bind 10.0.0.110:35357
``haproxy_config_options``
This parameter represents a hash of key pairs of HAProxy listener options in the form ``{ 'option name' => 'option value' }``. Every key pair from this hash adds a new string to the listener options.
**NOTE** Every HAProxy option may require a different input value type, such as strings or a list of multiple options per single string.
The ``keystone-2`` listener example has the ``{ 'option' => ['httplog'], 'balance' => 'roundrobin' }`` option array, which is represented as follows in the resulting ``/etc/haproxy/haproxy.cfg``::

   balance roundrobin
   option httplog
``balancer_port``
This parameter represents the balancer (backend) port. By default, the balancer_port is the same as the frontend ``port``. The following strings from the ``keystone-2`` listener example represent ``balancer_port``, where port is ``35357``::
server fuel-controller-01.example.com 10.0.0.101:35357 check
server fuel-controller-02.example.com 10.0.0.102:35357 check
``balancermember_options``
This is a string of options added to each balancer (backend) member. The ``keystone-2`` listener example has the single ``check`` option::
server fuel-controller-01.example.com 10.0.0.101:35357 check
server fuel-controller-02.example.com 10.0.0.102:35357 check
``mode``
This optional parameter represents the HAProxy listener mode. The default value is ``tcp``, but Fuel writes ``mode http`` to the defaults section of ``/etc/haproxy/haproxy.cfg``. You can set the same option via ``haproxy_config_options``; a separate ``mode`` parameter exists so that a default mode can be set for every new listener addition. The ``keystone-2`` listener example has no ``mode`` option, so it works in the default Fuel-configured HTTP mode.
``define_cookies``
This optional boolean parameter is a Fuel-only feature. The default is ``false``, but if set to ``true``, Fuel directly adds ``cookie ${hostname}`` to every balance member (backend).
The ``keystone-2`` listener example has no ``define_cookies`` option. Typically, frontend cookies are added with ``haproxy_config_options`` and backend cookies with ``balancermember_options``.
``collect_exported``
This optional boolean parameter has a default value of ``false``. True means 'collect exported @@balancermember resources' (when every balancermember node exports itself), while false means 'rely on the existing declared balancermember resources' (for when you know the full set of balancermembers in advance and use ``haproxy::balancermember`` with array arguments, which allows you to deploy everything in one run).

View File

@ -0,0 +1,9 @@
OpenStack Networking HA
-----------------------
NOTE: THIS DOCUMENT HAS NOT BEEN EDITED AND IS NOT READY FOR PUBLIC CONSUMPTION.
Fuel 2.1 introduced support for OpenStack Networking utilizing a high-availability configuration. To accomplish this, Fuel uses a combination of Pacemaker and Corosync to ensure that if the networking service goes down, it will be restarted either on the existing node or on a separate node.
This document explains how to configure these options in your own installation.

View File

@ -0,0 +1,283 @@
L23network
----------
NOTE: THIS DOCUMENT HAS NOT BEEN EDITED AND IS NOT READY FOR PUBLIC CONSUMPTION.
This Puppet module configures network interfaces at the 2nd and 3rd layers (802.1q VLANs, access ports, NIC bonding, IP address assignment, DHCP, and interfaces without IP addresses).
It can work together with Open vSwitch or with the standard Linux networking tools.
At this moment, CentOS 6.3 (RHEL6) and Ubuntu 12.04 or above are supported.
Usage
^^^^^
Place this module at /etc/puppet/modules or on another path that contains your puppet modules.
Include the L23network module and initialize it. I recommend doing this in an early stage::
#Network configuration
stage {'netconfig':
before => Stage['main'],
}
class {'l23network': stage=> 'netconfig'}
If you do not plan to use Open vSwitch -- you can disable it::
class {'l23network': use_ovs=>false, stage=> 'netconfig'}
L2 network configuration (Open vSwitch only)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Current layout is:
* *bridges* -- A "Bridge" is a virtual ethernet L2 switch. You can plug ports into it.
* *ports* -- A "Port" is a virtual interface that you plug into the bridge (switch).
* *interface* -- The physical network interface that implements a port.
Then in your manifest you can use these resources as follows::
class {"l23network": }
l23network::l2::bridge{"br-mgmt": }
l23network::l2::port{"eth0": bridge => "br-mgmt"}
l23network::l2::port{"mmm0": bridge => "br-mgmt"}
l23network::l2::port{"mmm1": bridge => "br-mgmt"}
l23network::l2::bridge{"br-ex": }
l23network::l2::port{"eth0": bridge => "br-ex"}
l23network::l2::port{"eth1": bridge => "br-ex", ifname_order_prefix='ovs'}
l23network::l2::port{"eee0": bridge => "br-ex", skip_existing => true}
l23network::l2::port{"eee1": bridge => "br-ex", type=>'internal'}
You can define a type for the port. The port type can be
'system', 'internal', 'tap', 'gre', 'ipsec_gre', 'capwap', 'patch', 'null'.
If you do not define a type for the port (or define it as '') -- ovs-vsctl will use its default behavior
(see http://openvswitch.org/cgi-bin/ovsman.cgi?page=utilities%2Fovs-vsctl.8).
You can use the *skip_existing* option if you do not want the configuration to be interrupted when adding a port or bridge that already exists.
L3 network configuration
^^^^^^^^^^^^^^^^^^^^^^^^
::
### Simple IP address definition, DHCP or address-less interfaces
l23network::l3::ifconfig {"eth0": ipaddr=>'192.168.1.1/24'}
l23network::l3::ifconfig {"xXxXxXx":
interface => 'eth1',
ipaddr => '192.168.2.1',
netmask => '255.255.255.0'
}
l23network::l3::ifconfig {"eth2": ipaddr=>'dhcp'}
l23network::l3::ifconfig {"eth3": ipaddr=>'none'}
The *ipaddr* option can contain an IP address, 'dhcp', or the 'none' string. In this example we describe the configuration of 4 network interfaces:
* Interface *eth0* uses the short CIDR-notated form of IP address definition.
* Interface *eth1* is configured with the classic *ipaddr* and *netmask* parameters (note that the resource title does not have to match the interface name when the *interface* parameter is given explicitly).
* Interface *eth2* will be configured to use the DHCP protocol.
* Interface *eth3* will be configured as an interface without an IP address. This is often needed for a "master" interface for 802.1q vlans (in the native linux implementation) or for a slave interface for bonding.
The CIDR-notated form of the IP address takes priority over the classic *ipaddr* and *netmask* definition.
If you omit *netmask* and do not use the CIDR-notated form -- the default *netmask* value of '255.255.255.0' will be used. ::
### Multiple IP addresses for one interface (aliases)
l23network::l3::ifconfig {"eth0":
ipaddr => ['192.168.0.1/24', '192.168.1.1/24', '192.168.2.1/24']
}
You can pass a list of CIDR-notated IP addresses to the *ipaddr* parameter to assign many IP addresses to one interface. This will create aliases (not subinterfaces). Array can contain one or more elements. ::
### UP and DOWN interface order
l23network::l3::ifconfig {"eth1":
ipaddr=>'192.168.1.1/24'
}
l23network::l3::ifconfig {"br-ex":
ipaddr=>'192.168.10.1/24',
ifname_order_prefix='ovs'
}
l23network::l3::ifconfig {"aaa0":
ipaddr=>'192.168.20.1/24',
ifname_order_prefix='zzz'
}
At OS startup, CentOS and Ubuntu start and configure network interfaces in alphabetical order
of their configuration file names. In the example above we change the configuration order with the *ifname_order_prefix* keyword. We will have this order::
ifcfg-eth1
ifcfg-ovs-br-ex
ifcfg-zzz-aaa0
And the OS will configure the interfaces br-ex and aaa0 after eth1::
### Default gateway
l23network::l3::ifconfig {"eth1":
ipaddr => '192.168.2.5/24',
gateway => '192.168.2.1',
check_by_ping => '8.8.8.8',
check_by_ping_timeout => '30'
}
In this example we define the default *gateway* and options for checking that the network is up.
The *check_by_ping* parameter defines an IP address that will be pinged. Puppet will block and wait for a response for up to *check_by_ping_timeout* seconds.
The *check_by_ping* parameter can be an IP address, 'gateway', or the 'none' string to disable the check.
By default the gateway will be pinged. ::
### DNS-specific options
l23network::l3::ifconfig {"eth1":
ipaddr => '192.168.2.5/24',
dns_nameservers => ['8.8.8.8','8.8.4.4'],
dns_search => ['aaa.com','bbb.com'],
dns_domain => 'qqq.com'
}
We can also specify DNS nameservers and a search list that will be inserted (by the resolvconf lib) into /etc/resolv.conf.
The *dns_domain* option is implemented only in Ubuntu. ::
### DHCP-specific options
l23network::l3::ifconfig {"eth2":
ipaddr => 'dhcp',
dhcp_hostname => 'compute312',
dhcp_nowait => false,
}
Bonding
^^^^^^^
### Using standard linux bond (ifenslave)
For bonding two interfaces you need to:
* Specify these interfaces as interfaces without IP addresses
* Specify that the interfaces depend on the master-bond-interface
* Assign IP address to the master-bond-interface.
* Specify bond-specific properties for master-bond-interface (if defaults are not suitable for you)
for example (defaults included)::
l23network::l3::ifconfig {'eth1': ipaddr=>'none', bond_master=>'bond0'} ->
l23network::l3::ifconfig {'eth2': ipaddr=>'none', bond_master=>'bond0'} ->
l23network::l3::ifconfig {'bond0':
ipaddr => '192.168.232.1',
netmask => '255.255.255.0',
bond_mode => 0,
bond_miimon => 100,
bond_lacp_rate => 1,
}
You can find more information about bonding network interfaces in the documentation for your operating system:
* https://help.ubuntu.com/community/UbuntuBonding
* http://wiki.centos.org/TipsAndTricks/BondingInterfaces
### Using Open vSwitch
For bonding two interfaces you need to:
* Specify an OVS bridge
* Specify the special resource "bond", add it to the bridge, and specify bond-specific parameters.
* Assign an IP address to the newly created network interface (if needed).
In this example we add "eth1" and "eth2" interfaces to bridge "bridge0" as bond "bond1". ::
l23network::l2::bridge{'bridge0': } ->
l23network::l2::bond{'bond1':
bridge => 'bridge0',
ports => ['eth1', 'eth2'],
properties => [
'lacp=active',
'other_config:lacp-time=fast'
],
} ->
l23network::l3::ifconfig {'bond1':
ipaddr => '192.168.232.1',
netmask => '255.255.255.0',
}
Open vSwitch provides a lot of parameters for different configurations.
We can specify them in the "properties" option as a list of parameter=value
(or parameter:key=value) strings.
Most of them are described in the `Open vSwitch documentation <http://openvswitch.org/support/>`_.
802.1q vlan access ports
^^^^^^^^^^^^^^^^^^^^^^^^
### Using standard linux way
We can use tagged VLANs over ordinary network interfaces (or over bonds).
L23network supports two variants of naming VLAN interfaces:
* *vlanXXX* -- the 802.1q tag is taken from the VLAN interface name, but you need to specify
the parent interface name in the **vlandev** parameter.
* *eth0.101* -- both the 802.1q tag and the parent interface name are taken from the VLAN interface name.
If you need to use 802.1q VLANs over bonds -- you can use only the first variant.
In this example we can see both variants: ::
l23network::l3::ifconfig {'vlan6':
ipaddr => '192.168.6.1',
netmask => '255.255.255.0',
vlandev => 'bond0',
}
l23network::l3::ifconfig {'vlan5':
ipaddr => 'none',
vlandev => 'bond0',
}
L23network::L3::Ifconfig['bond0'] -> L23network::L3::Ifconfig['vlan6'] -> L23network::L3::Ifconfig['vlan5']
l23network::l3::ifconfig {'eth0':
ipaddr => '192.168.0.5',
netmask => '255.255.255.0',
gateway => '192.168.0.1',
} ->
l23network::l3::ifconfig {'eth0.101':
ipaddr => '192.168.101.1',
netmask => '255.255.255.0',
} ->
l23network::l3::ifconfig {'eth0.102':
ipaddr => 'none',
}
### Using Open vSwitch
In Open vSwitch, all internal traffic is virtually tagged.
To create an 802.1q tagged access port you need to specify the VLAN tag when adding a port to a bridge.
In this example we create two ports with tags 10 and 20, and assign an IP address to the interface with tag 10::
l23network::l2::bridge{'bridge0': } ->
l23network::l2::port{'vl10':
bridge => 'bridge0',
type => 'internal',
port_properties => [
'tag=10'
],
} ->
l23network::l2::port{'vl20':
bridge => 'bridge0',
type => 'internal',
port_properties => [
'tag=20'
],
} ->
l23network::l3::ifconfig {'vl10':
ipaddr => '192.168.101.1/24',
} ->
l23network::l3::ifconfig {'vl20':
ipaddr => 'none',
}
You can find more information about VLANs in Open vSwitch in the `Open vSwitch VLAN configuration cookbook <http://openvswitch.org/support/config-cookbooks/vlan-configuration-cookbook/>`_.
**IMPORTANT:** Do not use VLAN interface names like vlanXXX here unless you want your network traffic to be double-tagged.
---
When I began to write this module, I checked https://github.com/ekarlso/puppet-vswitch. Elcarso, big thanks...

View File

@ -0,0 +1,431 @@
.. raw:: pdf
PageBreak
.. index:: Fuel UI: Post-Deployment Check
.. _Post-Deployment-Check:
Post-Deployment Check
=====================
.. contents:: :local:
On occasion, even a successful deployment may result in some OpenStack
components not working correctly. If this happens, Fuel offers the ability
to perform post-deployment checks to verify operations. Part of Fuel's goal
is to provide easily accessible status information about the most commonly
used components and the most recently performed actions. To perform these
checks you will use Sanity and Smoke checks, as described below:
**Sanity Checks**
Reveal whether the overall system is functional. If it fails, you will most
likely need to restart some services to operate OpenStack.
**Smoke Checks**
Dive in a little deeper and reveal networking, system-requirements, and
functionality issues.
Sanity Checks will likely be the point on which the success of your
deployment pivots, but it is critical to pay close attention to all
information collected from these tests. Another way to look at these tests
is by their names. Sanity Checks are intended to assist in maintaining your
sanity. Smoke Checks tell you where the fires are so you can put them out
strategically instead of firehosing the entire installation.
Benefits
--------
* Using post-deployment checks helps you identify potential issues which
may impact the health of a deployed system.
* All post-deployment checks provide detailed descriptions about failed
operations and tell you which component or components are not working
properly.
* Previously, performing these checks manually would have consumed a
great deal of time. Now, with these checks the process will take only a
few minutes.
* Aside from verifying that everything is working correctly, the process
will also determine how quickly your system works.
* Post-deployment checks continue to be useful, for example after
sizable changes are made in the system you can use the checks to
determine if any new failure points have been introduced.
Running Post-Deployment Checks
------------------------------
Now, let's take a closer look at what should be done to execute the tests and
to understand whether something is wrong with your OpenStack cluster.
.. image:: /_images/healthcheck_tab.jpg
:align: center
As you can see on the image above, the Fuel UI now contains a ``Healthcheck``
tab, indicated by the Heart icon.
All of the post-deployment checks are displayed on this tab. If your
deployment was successful, you will see a list of tests that show a green
Thumbs Up in the last column. The Thumbs Up indicates the status of the
component. If you see a detailed message and a Thumbs Down, that
component has failed in some manner, and the details will indicate where the
failure was detected. All tests can be run on different environments, which
you select on the main page of the Fuel UI. You can run checks in parallel on
different environments.
Each test contains information on its estimated and actual duration. We have
included information about test processing time from our own tests and
indicate this in each test. Note that we show average times from the slowest
to the fastest systems we have tested, so your results will vary.
Once a test is complete the results will appear in the Status column. If
there was an error during the test the UI will display the error message
below the test name. To assist in the troubleshooting process, the test
scenario is displayed under the failure message and the failed step is
highlighted. You will find more detailed information on these tests later in
this section.
An actual test run looks like this:
.. image:: /_images/ostf_screen.jpg
:align: center
What To Do When a Test Fails
----------------------------
If a test fails, there are several ways to investigate the problem. You may
prefer to start in the Fuel UI since its feedback is directly related to the
health of the deployment. To do so, start by checking the following:
* Under the `Healthcheck` tab
* In the OpenStack Dashboard
* In the test execution logs (/var/log/ostf-stdout.log)
* In the individual OpenStack components logs
Of course, there are many different conditions that can lead to system
breakdowns, but there are some simple things that can be examined before you
dig deep. The most common issues are:
* Not all OpenStack services are running
* Any defined quota has been exceeded
* Something has been broken in the network configuration
* There is a general lack of resources (memory/disk space)
The first thing to be done is to ensure all OpenStack services are up and
running. To do this you can run the sanity test set, or execute the following
command on your Controller node::
nova-manage service list
If any service is off (has “XXX” status), you can restart it using this command::
service openstack-<service name> restart
If all services are on but you're still experiencing some issues, you can
gather information from the OpenStack Dashboard (number of instances exceeded,
fixed IPs exhausted, etc.). You may also read the logs generated by the tests, which
are stored at ``/var/log/ostf-stdout.log``, or go to ``/var/log/<component>`` and check
whether any operation has ERROR status. If it looks like the last item, you may
have underprovisioned your environment and should check your math and your
project requirements.
Sanity Tests Description
------------------------
Sanity checks work by sending a query to all OpenStack components to get a
response back from them. Many of these tests are simple in that they ask
each service for a list of its associated objects and wait for a response.
The response can be something, nothing, an error, or a timeout, so there
are several ways to determine if a service is up. The following list shows
what test is used for each service:
.. topic:: Instances list availability
Test checks that Nova component can return list of instances.
Test scenario:
1. Request list of instances.
2. Check returned list is not empty.
.. topic:: Images list availability
Test checks that Glance component can return list of images.
Test scenario:
1. Request list of images.
2. Check returned list is not empty.
.. topic:: Volumes list availability
Test checks that Cinder component can return list of volumes.
Test scenario:
1. Request list of volumes.
2. Check returned list is not empty.
.. topic:: Snapshots list availability
Test checks that Glance component can return list of snapshots.
Test scenario:
1. Request list of snapshots.
2. Check returned list is not empty.
.. topic:: Flavors list availability
Test checks that Nova component can return list of flavors.
Test scenario:
1. Request list of flavors.
2. Check returned list is not empty.
.. topic:: Limits list availability
Test checks that Nova component can return list of absolute limits.
Test scenario:
1. Request list of limits.
2. Check response.
.. topic:: Services list availability
Test checks that Nova component can return list of services.
Test scenario:
1. Request list of services.
2. Check returned list is not empty.
.. topic:: User list availability
Test checks that Keystone component can return list of users.
Test scenario:
1. Request list of users.
2. Check returned list is not empty.
.. topic:: Services execution monitoring
Test checks that all of the expected services are on, meaning the test will
fail if any of the listed services is in “XXX” status.
Test scenario:
1. Connect to a controller via SSH.
2. Execute nova-manage service list command.
3. Check there are no failed services.
.. topic:: DNS availability
Test checks that DNS is available.
Test scenario:
1. Connect to a Controller node via SSH.
2. Execute host command for the controller IP.
3. Check DNS name can be successfully resolved.
.. topic:: Networks availability
Test checks that Nova component can return list of available networks.
Test scenario:
1. Request list of networks.
2. Check returned list is not empty.
.. topic:: Ports availability
Test checks that Nova component can return list of available ports.
Test scenario:
1. Request list of ports.
2. Check returned list is not empty.
For more information refer to nova cli reference.
Smoke Tests Description
-----------------------
Smoke tests verify how your system handles basic OpenStack operations under
normal circumstances. The Smoke test series uses timeout tests for
operations that have a known completion time to determine if there is any
smoke, and thus fire. An additional benefit of the Smoke test series is
that you get to see how fast your environment is the first time you run them.
All tests use basic OpenStack services (Nova, Glance, Keystone, Cinder, etc.),
therefore if any of them is off, the test using it will fail. It is
recommended to run all sanity checks prior to your smoke checks to confirm that
all services are alive. This helps ensure that you don't get any false
negatives. The following is a description of each smoke test available:
.. topic:: Flavor creation
Test checks that low requirements flavor can be created.
Target component: Nova
Scenario:
1. Create small-size flavor.
2. Check created flavor has expected name.
3. Check flavor disk has expected size.
For more information refer to nova cli reference.
.. topic:: Volume creation
Test checks that a small-sized volume can be created.
Target component: Compute
Scenario:
1. Create a new small-size volume.
2. Wait for "available" volume status.
3. Check response contains "display_name" section.
4. Create instance and wait for "Active" status
5. Attach volume to instance.
6. Check volume status is "in use".
7. Get created volume information by its id.
8. Detach volume from instance.
9. Check volume has "available" status.
10. Delete volume.
If you see that the created volume is in ERROR status, it can mean that you've
exceeded the maximum number of volumes that can be created. You can check this
in the OpenStack Dashboard. For more information refer to the volume management
instructions.
.. topic:: Instance booting and snapshotting
Test creates a keypair, checks that instance can be booted from default
image, then a snapshot can be created from it and a new instance can be
booted from a snapshot. Test also verifies that instances and images reach
ACTIVE state upon their creation.
Target component: Glance
Scenario:
1. Create new keypair to boot an instance.
2. Boot default image.
3. Make snapshot of created server.
4. Boot another instance from created snapshot.
If you see that the created instance is in ERROR status, it can mean that you've
exceeded a system requirements limit. The test uses a nano flavor with the
following parameters: 64 MB RAM, 1 GB disk space, 1 virtual CPU. For more
information refer to the nova cli reference and image management instructions.
.. topic:: Keypair creation
Target component: Nova.
Scenario:
1. Create a new keypair, check if it was created successfully
(check name is expected, response status is 200).
For more information refer to nova cli reference.
.. topic:: Security group creation
Target component: Nova
Scenario:
1. Create security group, check if it was created correctly
(check name is expected, response status is 200).
For more information refer to nova cli reference.
.. topic:: Network parameters check
Target component: Nova
Scenario:
1. Get list of networks.
2. Check seen network labels equal to expected ones.
3. Check seen network ids equal to expected ones.
For more information refer to nova cli reference.
.. topic:: Instance creation
Target component: Nova
Scenario:
1. Create a new keypair (if it does not exist yet).
2. Create a new security group (if it does not exist yet).
3. Create an instance using the created security group and keypair.
For more information refer to nova cli reference, instance management
instructions.
.. topic:: Floating IP assignment
Target component: Nova
Scenario:
1. Create a new keypair (if it does not exist yet).
2. Create a new security group (if it does not exist yet).
3. Create an instance using the created security group and keypair.
4. Create new floating IP.
5. Assign floating IP to created instance.
For more information refer to nova cli reference, floating ips management
instructions.
.. topic:: Network connectivity check through floating IP
Target component: Nova
Scenario:
1. Create a new keypair (if it does not exist yet).
2. Create a new security group (if it does not exist yet).
3. Create an instance using the created security group and keypair.
4. Check connectivity for all floating IPs using ping command.
If this test fails, it's better to run a network check and verify that all
connections are correct. For more information refer to the Nova CLI reference's
floating IPs management instructions.
.. topic:: User creation and authentication in Horizon
Test creates new user, tenant, user role with admin privileges and logs in
to dashboard.
Target components: Nova, Keystone
Scenario:
1. Create a new tenant.
2. Check tenant was created successfully.
3. Create a new user.
4. Check user was created successfully.
5. Create a new user role.
6. Check user role was created successfully.
7. Perform token authentication.
8. Check authentication was successful.
9. Send authentication request to Horizon.
10. Verify response status is 200.
If this test fails on the authentication step, you should first try opening
the dashboard - it may be unreachable for some reason - and then you should
check your network configuration. For more information refer to the nova cli
reference.

View File

@ -0,0 +1,191 @@
.. raw:: pdf
PageBreak
.. index:: Red Hat OpenStack
Red Hat OpenStack Deployment Notes
==================================
.. contents:: :local:
Overview
--------
Fuel can deploy OpenStack using Red Hat OpenStack packages and Red Hat
Enterprise Linux Server as a base operating system. Because Red Hat has
exclusive distribution rights for its products, Fuel cannot be bundled with
Red Hat OpenStack directly. To work around this issue, you can enter your
Red Hat account credentials in order to download Red Hat OpenStack Platform.
The necessary components will be prepared and loaded into Cobbler. There are
two methods Fuel supports for obtaining Red Hat OpenStack packages:
* :ref:`RHSM` (default)
* :ref:`RHN_Satellite`
.. index:: Red Hat OpenStack: Deployment Requirements
Deployment Requirements
-----------------------
Minimal Requirements
++++++++++++++++++++
* Red Hat account (https://access.redhat.com)
* Red Hat OpenStack entitlement (one per node)
* Internet access for the Fuel Master node
Optional requirements
+++++++++++++++++++++
* Red Hat Satellite Server
* Configured Satellite activation key
.. _RHSM:
Red Hat Subscription Management (RHSM)
--------------------------------------
Benefits
++++++++
* No need to handle large ISOs or physical media.
* Register all your clients with just a single username and password.
* Automatically register the necessary products required for installation and
download a full cache.
* Download only the latest packages.
* Download only necessary packages.
Considerations
++++++++++++++
* Must observe Red Hat licensing requirements after deployment
* Package download time is dependent on network speed (20-60 minutes)
.. seealso::
`Overview of Subscription Management - Red Hat Customer
Portal <https://access.redhat.com/site/articles/143253>`_
.. _RHN_Satellite:
Red Hat RHN Satellite
---------------------
Benefits
++++++++
* Faster download of Red Hat OpenStack packages
* Register all your clients with an activation key
* More granular control of package set for your installation
* Registered OpenStack hosts don't need external network access
* Easier to consume for large enterprise customers
Considerations
++++++++++++++
* Red Hat RHN Satellite is a separate offering from Red Hat and requires
dedicated hardware
* Still requires Red Hat Subscription Manager and Internet access to download
registration packages (just for Fuel Master host)
What you need
+++++++++++++
* Red Hat account (https://access.redhat.com)
* Red Hat OpenStack entitlement (one per host)
* Internet access for Fuel master host
* Red Hat Satellite Server
* Configured Satellite activation key
Your RHN Satellite activation key must be configured with the following channels
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
* RHEL Server High Availability
* RHEL Server Load Balancer
* RHEL Server Optional
* RHEL Server Resilient Storage
* RHN Tools for RHEL
* Red Hat OpenStack 3.0
.. seealso::
`Red Hat | Red Hat Network Satellite <http://www.redhat.com/products/enterprise-linux/rhn-satellite/>`_
.. _rhn_sat_channels:
Fuel looks for the following RHN Satellite channels:
* rhel-x86_64-server-6
* rhel-x86_64-server-6-ost-3
* rhel-x86_64-server-ha-6
* rhel-x86_64-server-lb-6
* rhel-x86_64-server-rs-6
.. note:: If you create cloned channels, leave these channel strings intact.
.. index:: Red Hat OpenStack: Troubleshooting
Troubleshooting Red Hat OpenStack Deployment
--------------------------------------------
Issues downloading from Red Hat Subscription Manager
++++++++++++++++++++++++++++++++++++++++++++++++++++
If you receive an error from Fuel UI regarding Red Hat OpenStack download
issues, ensure that you have a valid subscription to the Red Hat OpenStack
3.0 product. This product is separate from standard Red Hat Enterprise
Linux. You can check by going to https://access.redhat.com and checking
Active Subscriptions. Contact your `Red Hat sales representative
<https://access.redhat.com/site/solutions/368643>`_ to get the proper
subscriptions associated with your account.
If you are still encountering issues, `contact Mirantis
Support <http://www.mirantis.com/contact/>`_.
Issues downloading from Red Hat RHN Satellite
+++++++++++++++++++++++++++++++++++++++++++++
If you receive an error from Fuel UI regarding Red Hat OpenStack download
issues, ensure that you have all the necessary channels available on your
RHN Satellite Server. The correct list is :ref:`here <rhn_sat_channels>`.
If you are missing these channels, please contact your `Red Hat sales
representative <https://access.redhat.com/site/solutions/368643>`_ to get
the proper subscriptions associated with your account.
RHN Satellite error: "rhel-x86_64-server-rs-6 not found"
++++++++++++++++++++++++++++++++++++++++++++++++++++++++
This means your Red Hat Satellite Server has run out of available entitlements
or your licenses have expired. Check your RHN Satellite to ensure there is at
least one available entitlement for each of the required channels.
If any of these channels are missing or you need to make changes to your
account, please contact your `Red Hat sales representative
<https://access.redhat.com/site/solutions/368643>`_ to get the proper
subscriptions associated with your account.
Yum Error: Cannot retrieve repository metadata (repomd.xml) for repository: rhel-x86_64-server-6.
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
This can be caused by many problems. It can happen if your SSL
certificate does not match the hostname of your RHN Satellite Server, or if
you configured Fuel to use an IP address during deployment. The latter is not
recommended; you should use a fully qualified domain name for your RHN
Satellite Server.
You may find solutions to your issues with ``repomd.xml`` at the
`Red Hat Knowledgebase <https://access.redhat.com/>`_ or contact
`Red Hat Support. <https://access.redhat.com/support/>`_.
GPG Key download failed. Looking for URL your-satellite-server/pub/RHN-ORG-TRUSTED-SSL-CERT
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
This issue has two known causes. If you are using VirtualBox, DNS may not
be properly configured. Ensure that your upstream DNS resolver is correct
in ``/etc/dnsmasq.upstream``. This setting is configured during the bootstrap
process, but it is not possible to validate resolution of internal DNS names
at that time. The problem may also be caused by other DNS issues, local network
problems, or an incorrectly spelled RHN Satellite Server hostname. Check your
local network settings and try again.

View File

@ -0,0 +1,103 @@
.. raw:: pdf
PageBreak
.. index:: Fuel UI: Network Issues
Network Issues
==============
Fuel has a built-in capability to run a network check before or after OpenStack
deployment. Currently it can check connectivity between nodes within
configured VLANs on configured server interfaces. The image below shows a sample
result of such a check. Using this simple table, it is easy to see which
interfaces do not receive certain VLAN IDs. Usually it means that a switch or
multiple switches are not configured correctly and do not allow certain
tagged traffic to pass through.
.. image:: /_images/net_verify_failure.jpg
:align: center
On VirtualBox
-------------
The scripts provided for quick Fuel setup create 3 host-only interface
adapters. Networking essentially works as 3 bridges, and each bridge is
connected to only one interface of each VM. It means there is only L2
connectivity between VM interfaces with the same name. If you try to
move, for example, the management network to `eth1` on the Controller node, and the
same network to `eth2` on the Compute node, then there will be no connectivity
between OpenStack services in spite of being configured to live on the same
VLAN. It is very easy to validate network settings before deployment by
clicking the "Verify Networks" button.
If you need to access the OpenStack REST API over the Public network, the VNC console of VMs,
Horizon in HA mode, or the VMs themselves, refer to this section: :ref:`access_to_public_net`.
Timeout In Connection to OpenStack API From Client Applications
---------------------------------------------------------------
If you use Java, Python, or any other code to work with the OpenStack API, all
connections should be made over the OpenStack Public network. To explain why we
cannot use the Fuel network, let's try to run the nova client with the debug
option enabled::
[root@controller-6 ~]# nova --debug list
REQ: curl -i http://192.168.0.2:5000/v2.0/tokens -X POST -H "Content-Type: appli
cation/json" -H "Accept: application/json" -H "User-Agent: python-novaclient" -d
'{"auth": {"tenantName": "admin", "passwordCredentials": {"username": "admin",
"password": "admin"}}}'
INFO (connectionpool:191) Starting new HTTP connection (1): 192.168.0.2
DEBUG (connectionpool:283) "POST /v2.0/tokens HTTP/1.1" 200 2702
RESP: [200] {'date': 'Tue, 06 Aug 2013 13:01:05 GMT', 'content-type': 'applicati
on/json', 'content-length': '2702', 'vary': 'X-Auth-Token'}
RESP BODY: {"access": {"token": {"issued_at": "2013-08-06T13:01:05.616481", "exp
ires": "2013-08-07T13:01:05Z", "id": "c321cd823c8a4852aea4b870a03c8f72", "tenant
": {"description": "admin tenant", "enabled": true, "id": "8eee400f7a8a4f35b7a92
bc6cb54de42", "name": "admin"}}, "serviceCatalog": [{"endpoints": [{"adminURL":
"http://192.168.0.2:8774/v2/8eee400f7a8a4f35b7a92bc6cb54de42", "region": "Region
One", "internalURL": "http://192.168.0.2:8774/v2/8eee400f7a8a4f35b7a92bc6cb54de4
2", "id": "6b9563c1e37542519e4fc601b994f980", "publicURL": "http://172.16.1.2:87
74/v2/8eee400f7a8a4f35b7a92bc6cb54de42"}], "endpoints_links": [], "type": "compu
te", "name": "nova"}, {"endpoints": [{"adminURL": "http://192.168.0.2:8080", "re
gion": "RegionOne", "internalURL": "http://192.168.0.2:8080", "id": "4db0e11de35
74c889179f499f1e53c7e", "publicURL": "http://172.16.1.2:8080"}], "endpoints_link
s": [], "type": "s3", "name": "swift_s3"}, {"endpoints": [{"adminURL": "http://1
92.168.0.2:9292", "region": "RegionOne", "internalURL": "http://192.168.0.2:9292
", "id": "960a3ad83e4043bbbc708733571d433b", "publicURL": "http://172.16.1.2:929
2"}], "endpoints_links": [], "type": "image", "name": "glance"}, {"endpoints": [
{"adminURL": "http://192.168.0.2:8776/v1/8eee400f7a8a4f35b7a92bc6cb54de42", "reg
ion": "RegionOne", "internalURL": "http://192.168.0.2:8776/v1/8eee400f7a8a4f35b7
a92bc6cb54de42", "id": "055edb2aface49c28576347a8c2a5e35", "publicURL": "http://
172.16.1.2:8776/v1/8eee400f7a8a4f35b7a92bc6cb54de42"}], "endpoints_links": [], "
type": "volume", "name": "cinder"}, {"endpoints": [{"adminURL": "http://192.168.
0.2:8773/services/Admin", "region": "RegionOne", "internalURL": "http://192.168.
0.2:8773/services/Cloud", "id": "1e5e51a640f94e60aed0a5296eebdb51", "publicURL":
"http://172.16.1.2:8773/services/Cloud"}], "endpoints_links": [], "type": "ec2"
, "name": "nova_ec2"}, {"endpoints": [{"adminURL": "http://192.168.0.2:8080/",
"region": "RegionOne", "internalURL": "http://192.168.0.2:8080/v1/AUTH_8eee400f
7a8a4f35b7a92bc6cb54de42", "id": "081a50a3c9fa49719673a52420a87557", "publicURL
": "http://172.16.1.2:8080/v1/AUTH_8eee400f7a8a4f35b7a92bc6cb54de42"}], "endpoi
nts_links": [], "type": "object-store", "name": "swift"}, {"endpoints": [{"admi
nURL": "http://192.168.0.2:35357/v2.0", "region": "RegionOne", "internalURL": "
http://192.168.0.2:5000/v2.0", "id": "057a7f8e9a9f4defb1966825de957f5b", "publi
cURL": "http://172.16.1.2:5000/v2.0"}], "endpoints_links": [], "type": "identit
y", "name": "keystone"}], "user": {"username": "admin", "roles_links": [], "id"
: "717701504566411794a9cfcea1a85c1f", "roles": [{"name": "admin"}], "name": "ad
min"}, "metadata": {"is_admin": 0, "roles": ["90a1f4f29aef48d7bce3ada631a54261"
]}}}
REQ: curl -i http://172.16.1.2:8774/v2/8eee400f7a8a4f35b7a92bc6cb54de42/servers/
detail -X GET -H "X-Auth-Project-Id: admin" -H "User-Agent: python-novaclient" -
H "Accept: application/json" -H "X-Auth-Token: c321cd823c8a4852aea4b870a03c8f72"
INFO (connectionpool:191) Starting new HTTP connection (1): 172.16.1.2
Even though the initial connection was to 192.168.0.2, the client then tries to
access the Public network for the Nova API. The reason is that Keystone returns
the list of OpenStack service URLs, and for production-grade deployments it
is required to access services over the public network.
.. seealso:: :ref:`access_to_public_net` if you want to configure the installation
on VirtualBox so that these issues are resolved.

View File

@ -23,7 +23,9 @@
PageBreak oneColumn
.. contents:: Table of Contents
:depth: 2
.. toctree:: Table of Contents
:maxdepth: 2
.. include:: contents-install.rst

28
pdf_user.rst Normal file
View File

@ -0,0 +1,28 @@
.. header::
.. cssclass:: header-table
+-------------------------------------+-----------------------------------+
| Fuel™ for Openstack v3.1 | .. cssclass:: right|
| | |
| User Guide | ###Section### |
+-------------------------------------+-----------------------------------+
.. footer::
.. cssclass:: footer-table
+--------------------------+----------------------+
| | .. cssclass:: right|
| | |
| ©2013, Mirantis Inc. | Page ###Page### |
+--------------------------+----------------------+
.. raw:: pdf
PageBreak oneColumn
.. toctree:: Table of Contents
:maxdepth: 2
.. include:: contents-user.rst

View File

@ -8,8 +8,10 @@ User Guide
.. contents:: :local:
:depth: 2
.. include:: /pages/preface/preface.rst
.. include:: /pages/about-fuel/0070-introduction.rst
.. include:: /pages/installation-fuel-ui/red_hat_openstack.rst
.. include:: /pages/installation-fuel-ui/post-install-healthchecks.rst
.. include:: /pages/troubleshooting-ug-network-issues.rst
.. include:: /pages/user-guide/0070-introduction.rst
.. include:: /pages/user-guide/red_hat_openstack.rst
.. include:: /pages/user-guide/post-install-healthchecks.rst
.. include:: /pages/user-guide/troubleshooting-ug/network-issues.rst
.. include:: /pages/user-guide/advanced-topics/0020-custom-plug-ins.rst
.. include:: /pages/user-guide/advanced-topics/0030-quantum-HA.rst
.. include:: /pages/user-guide/advanced-topics/0040-bonding.rst