Reformatting, reimaging, HA notes, etc...

parent 5b929176b8
commit 751770e12a
@@ -4,11 +4,12 @@
About Fuel
============

.. _contents:: :local:
.. contents:: :local:
   :depth: 2

.. include /pages/about-fuel/0010-introduction.rst
.. include:: /pages/about-fuel/0020-what-is-fuel.rst
.. include:: /pages/about-fuel/0030-how-it-works.rst
.. include:: /pages/about-fuel/0040-reference-topologies.rst
.. include:: /pages/about-fuel/0050-supported-software.rst
.. include:: /pages/about-fuel/0060-download-fuel.rst
.. include:: /pages/about-fuel/0060-download-fuel.rst
@@ -5,6 +5,9 @@
=============
Release Notes
=============
..
   contents:: :local:
   :depth: 1

.. include:: /pages/release-notes/v3-1-grizzly.rst
.. include /pages/release-notes/v3-0-grizzly.rst
@@ -6,9 +6,15 @@
Reference Architectures
=======================

.. contents:: :local:
   :depth: 2

.. include:: /pages/reference-architecture/0010-overview.rst
.. include:: /pages/reference-architecture/0012-simple.rst
.. include:: /pages/reference-architecture/0014-compact.rst
.. include:: /pages/reference-architecture/0016-full.rst
.. include:: /pages/reference-architecture/0015-closer-look.rst
.. include:: /pages/reference-architecture/0016-red-hat-differences.rst
.. include:: /pages/reference-architecture/0018-red-hat-differences.rst
.. include:: /pages/reference-architecture/0020-logical-setup.rst
.. include:: /pages/reference-architecture/0030-cluster-sizing.rst
.. include:: /pages/reference-architecture/0040-network-setup.rst

@@ -16,4 +22,4 @@ Reference Architectures
.. include:: /pages/reference-architecture/0060-quantum-vs-nova-network.rst
.. include:: /pages/reference-architecture/0070-cinder-vs-nova-volume.rst
.. include:: /pages/reference-architecture/0080-swift-notes.rst

.. include:: /pages/reference-architecture/0090-ha-notes.rst
@@ -6,6 +6,9 @@
Create an OpenStack cluster using Fuel UI
============================================

.. contents:: :local:
   :depth: 2

.. include:: /pages/installation-fuel-ui/install.rst
.. include:: /pages/installation-fuel-ui/networks.rst
.. include:: /pages/installation-fuel-ui/network-issues.rst
@@ -6,7 +6,8 @@
Deploy an OpenStack cluster using Fuel CLI
==========================================

.. contents: :local:
.. contents:: :local:
   :depth: 2

.. include: /pages/installation-fuel-cli/0000-preamble.rst
.. include: /pages/installation-fuel-cli/0010-introduction.rst
@@ -6,7 +6,19 @@
Production Considerations
=========================

.. include:: /pages/production-considerations/0010-introduction.rst
Fuel simplifies the setup of an OpenStack cluster, affording you the ability
to dig in and fully understand how OpenStack works. You can deploy on test
hardware or in a virtualized environment and root around all you like, but
when it comes time to deploy to production there are a few things to take
into consideration.

In this section we will talk about such things, including how to size your
hardware and how to handle large-scale deployments.

.. contents:: :local:
   :depth: 2

.. include /pages/production-considerations/0010-introduction.rst
.. include:: /pages/production-considerations/0015-sizing-hardware.rst
.. include:: /pages/production-considerations/0020-deployment-pipeline.rst
.. include:: /pages/production-considerations/0030-large-deployments.rst
@@ -6,7 +6,8 @@
FAQ (Frequently Asked Questions)
================================

.. _contents:: :local:
.. contents:: :local:
   :depth: 2

.. Known Issues and Workarounds
Makefile (2 changes)

@@ -17,7 +17,7 @@ I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
IMAGEDIRS = _images
SVG2JPG = convert
# JPGs will be resized to 600px width
SVG2JPG_FLAGS = -resize 600x
SVG2JPG_FLAGS = -resize 600x -quality 100%

.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf pdf text man changes linkcheck doctest gettext
Binary file not shown (image changed; size: 33 KiB before, 115 KiB after).
Binary file not shown (image added; size: 80 KiB).
@@ -77,7 +77,9 @@ var ga_enabled = !$.cookie('disable-ga');
if(ga_enabled){
    var _gaq = _gaq || [];
    _gaq.push(['_setAccount', '{{ theme_googleanalytics_id }}']);

    _gaq.push(['_setCookiePath', '{{ theme_googleanalytics_path }}']);
    _gaq.push(['_setDomainName', 'mirantis.com']);
    _gaq.push(['_setDetectFlash', false]);
    _gaq.push(['_trackPageview']);
    (function() {
@@ -13,11 +13,16 @@

@import url("mirantis_style.css");

.highlight .hll {
    background-color: #C0FF7C !important;
}


div.highlight pre,
div.highlight-python pre
{
    border-width: 0px 3px;

    background-color: #F7FFE8;
    border-radius: 6px;
    -moz-border-radius: 6px;
    -webkit-border-radius: 6px;
@@ -3,8 +3,14 @@ li.toctree-l1, li.toctree-l2
    margin-top: 4px;
}

div.disqus_thread
{
    padding: 10px;
}

div.relbar-bottom div.related_hf
{
    background: #FFFFFF;
    border-radius: 0 0 .7em .7em;
    -moz-border-radius: 0 0 .7em .7em;
    -webkit-border-radius: 0 0 .7em .7em;
conf.py (15 changes)

@@ -75,7 +75,8 @@ release = '3.1'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_*', "pages"]
exclude_patterns = ['_*', "pages", 'rn_index.rst']
# exclude_patterns = ['_*', 'rn_index.rst']

# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None

@@ -142,7 +143,7 @@ html_static_path = ['_static']
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
html_use_smartypants = True
html_use_smartypants = False

# Custom sidebar templates, maps document names to template names.
html_sidebars = {

@@ -160,7 +161,7 @@ html_sidebars = {
html_use_index = True

# If true, the index is split into individual pages for each letter.
html_split_index = True
html_split_index = False

# If true, links to the reST sources are added to the pages.
html_show_sourcelink = False

@@ -293,18 +294,18 @@ pdf_fit_mode = "shrink"
# Section level that forces a break page.
# For example: 1 means top-level sections start in a new page
# 0 means disabled
#pdf_break_level = 0
pdf_break_level = 2

# When a section starts in a new page, force it to be 'even', 'odd',
# or just use 'any'
#pdf_breakside = 'any'
pdf_breakside = 'any'

# Insert footnotes where they are defined instead of
# at the end.
#pdf_inline_footnotes = True

# verbosity level. 0 1 or 2
pdf_verbosity = 2
pdf_verbosity = 0

# If false, no index is generated.
#pdf_use_index = True

@@ -337,7 +338,7 @@ pdf_cover_template = 'mirantiscover.tmpl'
pdf_page_template = 'oneColumn'

# Show Table Of Contents at the beginning?
pdf_use_toc = False
pdf_use_toc = True

# How many levels deep should the table of contents be?
pdf_toc_depth = 2
contents.rst (28 changes)

@@ -1,20 +1,16 @@
.. index:: Table of Contents
.. index Table of Contents

.. _ToC:

=================
Table of Contents
=================
.. toctree:: Table of Contents
   :maxdepth: 2

.. toctree::
   :maxdepth: 2

   index
   0020-about-fuel
   0030-release-notes
   0040-reference-architecture
   0055-production-considerations
   0045-installation-fuel-ui
   0050-installation-fuel-cli
   0060-frequently-asked-questions
   copyright
   index
   0020-about-fuel
   0030-release-notes
   0040-reference-architecture
   0055-production-considerations
   0045-installation-fuel-ui
   0050-installation-fuel-cli
   0060-frequently-asked-questions
   copyright
@@ -1,4 +1,4 @@
.. index:: What is Fuel?
.. index:: What is Fuel

.. _What_is_Fuel:

@@ -21,4 +21,4 @@ Simply put, Fuel is a way for you to easily configure and install an
OpenStack-based infrastructure in your own environment.

.. image:: /_images/FuelSimpleDiagram.jpg

   :align: center
@@ -12,24 +12,25 @@ configurable, reproducible, sharable installation process.
In practice, that means that the process of using Fuel looks like 1-2-3:

1. First, set up Fuel Master Node using the ISO. This process only needs to be
1. First, set up Fuel Master node using the ISO. This process only needs to be
   completed once per installation.

2. Next, discover your virtual or phisical nodes and configure your OpenStack
   cluster using the Fuel UI.
2. Next, discover your virtual or bare-metal nodes and configure your OpenStack
   cluster using the Fuel UI or CLI.

3. Finally, deploy your OpenStack cluster on discovered nodes. Fuel will do all
   deployment magic for you by applying pre-configured Puppet manifests.

All of this is desgined to enable you to maintain your cluster while giving
All of this is designed to enable you to maintain your cluster while giving
you the flexibility to adapt it to your own configuration.

.. image:: /_images/how-it-works_svg.jpg
   :align: center

Fuel comes with several pre-defined deployment configurations, some of them
include additional configuration options that allow you to adapt OpenStack
deployment to your environment.

Fuel UI integrates all of the deployments scripts into a unified,
web-based graphical user interface that walks administrators through the
Web-based Graphical User Interface that walks administrators through the
process of installing and configuring a fully functional OpenStack environment.
@@ -7,43 +7,53 @@ Fuel has been tested and is guaranteed to work with the following software
components:

* Operating Systems
  * CentOS 6.4 (x86_64 architecture only)
  * RHEL 6.4 (x86_64 architecture only)

  * CentOS 6.4 (x86_64 architecture only)

  * RHEL 6.4 (x86_64 architecture only)

* Puppet (IT automation tool)
  * 2.7.19

  * 2.7.19

* MCollective
  * 2.3.1

  * 2.3.1

* Cobbler (bare-metal provisioning tool)
  * 2.2.3

  * 2.2.3

* OpenStack
  * Grizzly release 2013.1

  * Grizzly release 2013.1.2

* Hypervisor
  * KVM
  * QEMU

  * KVM

  * QEMU

* Open vSwitch
  * 1.10.0

  * 1.10.0

* HA Proxy
  * 1.4.19

  * 1.4.19

* Galera
  * 23.2.2

  * 23.2.2

* RabbitMQ
  * 2.8.7

  * 2.8.7

* Pacemaker
  * 1.1.8

  * 1.1.8

* Corosync
  * 1.4.3

* Nagios
  * 3.4.4

  * 1.4.3
@@ -41,7 +41,7 @@ and Astute orchestrator passes to the next node in deployment sequence.
Deploying OpenStack Cluster Using CLI
=====================================

.. contents:: :local:
.. contents :local:

Once you understand how the deployment workflow is traversed, you can finally start.
Connect the nodes to the Master node and power them on. You should also plan your
@@ -111,6 +111,9 @@ Sample YAML configuration for provisioning is listed below:
  # == id
  # MCollective node id in mcollective server.cfg.
  - id: 1
    # == uid
    # UID of the node for deployment engine. Should be equal to `id`
    uid: 1
    # == mac
    # MAC address of the interface being used for network boot.
    mac: 64:43:7B:CA:56:DD
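The constraints spelled out in the YAML comments above (``uid`` should equal ``id``, and ``mac`` should be a colon-separated MAC address) can be checked with a short sketch. This is a hypothetical helper for illustration only, not part of the Fuel CLI:

```python
import re

def validate_node(node):
    """Sanity-check one provisioning entry: `uid` must equal `id`,
    and `mac` must look like a colon-separated MAC address."""
    mac_ok = re.fullmatch(r"(?:[0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}",
                          node["mac"]) is not None
    return node["uid"] == node["id"] and mac_ok

# Entry shaped like the sample configuration above
print(validate_node({"id": 1, "uid": 1, "mac": "64:43:7B:CA:56:DD"}))  # True
```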
@@ -333,7 +336,7 @@ Wait for command to finish. Now you can start configuring OpenStack cluster parameters.
Configuring Nodes for Deployment
================================

.. contents:: :local:
.. contents :local:

Node Configuration
------------------

@@ -384,9 +387,6 @@ section of the file with data related to deployment.
  # == internal_br
  # Name of the internal bridge for Quantum-enabled configuration
  internal_br: br-mgmt
  # == id
  # UID of the node for deployment engine. Should be equal to `id`
  uid: 1

General Parameters
------------------
@@ -3,7 +3,7 @@
Deploying via Orchestration
===========================

.. contents:: :local:
.. contents :local:

Performing a small series of manual installs may be an acceptable approach in
some circumstances, but if you plan on deploying to a large number of servers
@@ -11,7 +11,7 @@ for a drive around the block. Follow these steps:
2. Click the ``Project`` tab in the left-hand column.

3. Under ``Manage Compute``, choose ``Access & Security`` to set security
settings:
settings:

   - Click ``Create Keypair`` and enter a name for the new keypair. The
     private key should download automatically; make sure to keep it safe.

@@ -29,7 +29,7 @@ settings:
   -1 to the default Security Group and click ``Add Rule`` to save.

4. Click ``Allocate IP To Project`` and add two new floating IPs. Notice that
they come from the pool specified in ``config.yaml`` and ``site.pp``.
they come from the pool specified in ``config.yaml`` and ``site.pp``.

5. Click ``Images & Snapshots``, then ``Create Image``.
@@ -3,7 +3,7 @@
Installing Fuel Master Node
===========================

.. contents:: :local:
.. contents :local:

Fuel is distributed as both ISO and IMG images, each of them contains
an installer for Fuel Master node. The ISO image is used for CD media devices,

@@ -67,7 +67,7 @@ simple, single-click installation.
The requirements for running Fuel on VirtualBox are:

A host machine with Linux or Mac OS.
The scripts have been tested on Mac OS 10.7.5, Mac OS 10.8.3, and Ubuntu 12.04.
The scripts have been tested on Mac OS 10.7.5, Mac OS 10.8.3, Ubuntu 12.04 and Ubuntu 12.10.

VirtualBox 4.2.12 (or later) must be installed with the extension pack. Both
can be downloaded from `<http://www.virtualbox.org/>`_.
@@ -81,7 +81,7 @@ can be downloaded from `<http://www.virtualbox.org/>`_.
.. _Install_Automatic:

Automatic Mode
^^^^^^^^^^^^^^
++++++++++++++

When you unpack the scripts, you will see the following important files and
folders:
@@ -97,16 +97,49 @@ folders:
`launch.sh`
  Once executed, this script will pick up an image from the ``iso`` directory,
  create a VM, mount the image to this VM, and automatically install the Fuel
  Master Тode.
  After installation of the master node, the script will create slave nodes for
  OpenStack and boot them via PXE from the Master Node.
  Master node.
  After installation of the Master node, the script will create Slave nodes for
  OpenStack and boot them via PXE from the Master node.
  Finally, the script will give you the link to access the Web-based UI for the
  Master Node so you can start installation of an OpenStack cluster.

Networking Notes for Slave Nodes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Helper scripts for VirtualBox create network adapters eth0, eth1, eth2, assigned
to vboxnet0, vboxnet1, vboxnet2 correspondingly. vboxnet0 is dedicated to the Fuel
network, so it is impossible to use it for any other untagged networks.

These scripts also assign IP addresses to the adapters: vboxnet0 - 10.20.0.1/24,
vboxnet1 - 172.16.1.1/24, vboxnet2 - 172.16.0.1/24.

To access guest VMs from the host machine, the Public and/or Management networks
must be untagged and assigned to the vboxnet1 and vboxnet2 adapters with IP
addresses from the ranges specified earlier.
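The adapter-to-subnet mapping above can be sketched with the standard ``ipaddress`` module; this is an illustrative helper (the role comments restate this section, not any Fuel API):

```python
import ipaddress

# Host-only networks as assigned by the VirtualBox helper scripts above
ADAPTERS = {
    "vboxnet0": ipaddress.ip_network("10.20.0.0/24"),   # Fuel admin/PXE network
    "vboxnet1": ipaddress.ip_network("172.16.1.0/24"),  # Public network
    "vboxnet2": ipaddress.ip_network("172.16.0.0/24"),  # Management network
}

def adapter_for(ip):
    """Return the host-only adapter whose subnet contains `ip`, or None."""
    addr = ipaddress.ip_address(ip)
    for name, net in ADAPTERS.items():
        if addr in net:
            return name
    return None

print(adapter_for("10.20.0.2"))   # vboxnet0
print(adapter_for("172.16.1.2"))  # vboxnet1
```

A guest address outside all three /24 ranges maps to no adapter, which is exactly the case the masquerading rule below is needed for.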
During installation the Slave nodes access the Internet via the Master node,
but when installation is done the Slave nodes on guest VMs shall access the
Internet via the Public network. To make that happen, the following command must
be executed on the host::

  sudo iptables -t nat -A POSTROUTING -s 172.16.1.0/24 \! -d 172.16.1.0/24 -j MASQUERADE

To access VMs managed by OpenStack you need to provide IP addresses from the
floating IP range. When the OpenStack cluster is deployed, you need to create
security groups to provide access to the guest VMs using the following commands::

  nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
  nova secgroup-add-rule default tcp 22 22 0.0.0.0/0

You can also add these security groups from the Horizon UI.

IP ranges for the Public and Management networks (172.16.*.*) are defined in the
`config.sh` script. You can reassign these IP ranges only before running the
VirtualBox scripts.

.. _Install_Manual:

Manual mode
^^^^^^^^^^^
+++++++++++

If you cannot or would rather not run our helper scripts, you can still run
Fuel on VirtualBox by following these steps.
|
|||
helper scripts or install Fuel :ref:`Install_Bare-Metal`.
|
||||
|
||||
Master Node deployment
|
||||
~~~~~~~~~~~~~~~~~~~~~~
|
||||
^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
First, create the Master Node VM.
|
||||
|
||||
|
@ -144,7 +177,7 @@ First, create the Master Node VM.
|
|||
of Fuel.
|
||||
|
||||
Adding Slave Nodes
|
||||
~~~~~~~~~~~~~~~~~~
|
||||
^^^^^^^^^^^^^^^^^^
|
||||
|
||||
Next, create Slave nodes where OpenStack needs to be installed.
|
||||
|
||||
|
@@ -158,12 +191,12 @@ Next, create Slave nodes where OpenStack needs to be installed.
2. Set priority for the network boot:

   .. image:: /_images/vbox-image1.jpg
      :width: 600px
      :align: center

3. Configure the network adapter on each VM:

   .. image:: /_images/vbox-image2.jpg
      :width: 600px
      :align: center

Changing network parameters before the installation
---------------------------------------------------
@@ -178,12 +211,13 @@ as the gateway and DNS server you should change the parameters to those shown
in the image below:

.. image:: /_images/network-at-boot.jpg
   :align: center

When you're finished making changes, press the <ENTER> key and wait for the
installation to complete.

Changing network parameters after installation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
----------------------------------------------

.. warning::
@@ -234,7 +268,7 @@ Now you should be able to connect to Fuel UI from your network at
http://172.18.0.5:8000/

Name resolution (DNS)
^^^^^^^^^^^^^^^^^^^^^
---------------------

During Master Node installation, it is assumed that there is a recursive DNS
service on 10.20.0.1.

@@ -247,7 +281,7 @@ your actual DNS)::
  echo "nameserver 172.0.0.1" > /etc/dnsmasq.upstream

PXE booting settings
^^^^^^^^^^^^^^^^^^^^
--------------------

By default, `eth0` on Fuel Master node serves PXE requests. If you are planning
to use another interface, then it is required to modify dnsmasq settings (which

@@ -272,7 +306,7 @@ therefore they will not be able to boot. For example, CentOS 6.4 uses gPXE
implementation instead of more advanced iPXE by default.

When Master Node installation is done
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-------------------------------------

Once the Master node is installed, power on all other nodes and log in to the
Fuel UI.
@@ -1,3 +1,5 @@
.. index:: Fuel UI; Network Issues

Network Issues
==============

@@ -10,6 +12,7 @@ multiple switches are not configured correctly and do not allow certain
tagged traffic to pass through.

.. image:: /_images/net_verify_failure.jpg
   :align: center

On VirtualBox
-------------
@@ -29,64 +32,64 @@ Timeout in connection to OpenStack API from client applications

If you use Java, Python or any other code to work with OpenStack API, all
connections should be done over OpenStack public network. To explain why we
can not use Fuel admin network, let's try to run nova client with debug
can not use Fuel network, let's try to run nova client with debug
option enabled::

  [root@controller-6 ~]# nova --debug list

  REQ: curl -i http://192.168.0.5:5000/v2.0/tokens -X POST -H "Content-Type: appli
  REQ: curl -i http://192.168.0.2:5000/v2.0/tokens -X POST -H "Content-Type: appli
  cation/json" -H "Accept: application/json" -H "User-Agent: python-novaclient" -d
  '{"auth": {"tenantName": "admin", "passwordCredentials": {"username": "admin", "
  password": "admin"}}}'
  '{"auth": {"tenantName": "admin", "passwordCredentials": {"username": "admin",
  "password": "admin"}}}'

  INFO (connectionpool:191) Starting new HTTP connection (1): 192.168.0.5
  DEBUG (connectionpool:283) "POST /v2.0/tokens HTTP/1.1" 200 2695
  RESP: [200] {'date': 'Thu, 06 Jun 2013 09:50:22 GMT', 'content-type': 'applicati
  on/json', 'content-length': '2695', 'vary': 'X-Auth-Token'}
  RESP BODY: {"access": {"token": {"issued_at": "2013-06-06T09:50:21.950681", "exp
  ires": "2013-06-07T09:50:21Z", "id": "d9ab5c927bcb410d9e9ee5bdea3ea020", "tenant
  ": {"description": "admin tenant", "enabled": true, "id": "1a491e1416d041da93daa
  e1dc8af6d07", "name": "admin"}}, "serviceCatalog": [{"endpoints": [{"adminURL":
  "http://192.168.0.5:8774/v2/1a491e1416d041da93daae1dc8af6d07", "region": "Region
  One", "internalURL": "http://192.168.0.5:8774/v2/1a491e1416d041da93daae1dc8af6d0
  7", "id": "0281b33145d0417a976b8d0e9bab08b8", "publicURL": "http://240.0.1.5:877
  4/v2/1a491e1416d041da93daae1dc8af6d07"}], "endpoints_links": [], "type": "comput
  e", "name": "nova"}, {"endpoints": [{"adminURL": "http://192.168.0.5:8080", "reg
  ion": "RegionOne", "internalURL": "http://192.168.0.5:8080", "id": "3c8dea92d2e0
  46c8bf188b2d357425a1", "publicURL": "http://240.0.1.5:8080"}], "endpoints_links"
  : [], "type": "s3", "name": "swift_s3"}, {"endpoints": [{"adminURL": "http://192
  .168.0.5:9292", "region": "RegionOne", "internalURL": "http://192.168.0.5:9292",
  "id": "d9a08cc4f1294e4c8748966363468089", "publicURL": "http://240.0.1.5:9292"}]
  , "endpoints_links": [], "type": "image", "name": "glance"}, {"endpoints": [{"ad
  minURL": "http://192.168.0.5:8776/v1/1a491e1416d041da93daae1dc8af6d07", "region"
  : "RegionOne", "internalURL": "http://192.168.0.5:8776/v1/1a491e1416d041da93daae
  1dc8af6d07", "id": "7563a55f46584e149b822507811b868c", "publicURL": "http://240.
  0.1.5:8776/v1/1a491e1416d041da93daae1dc8af6d07"}], "endpoints_links": [], "type"
  : "volume", "name": "cinder"}, {"endpoints": [{"adminURL": "http://192.168.0.5:8
  773/services/Admin", "region": "RegionOne", "internalURL": "http://192.168.0.5:8
  773/services/Cloud", "id": "2f5d062c52b24f85a193306809c9600c", "publicURL": "htt
  p://240.0.1.5:8773/services/Cloud"}], "endpoints_links": [], "type": "ec2", "nam
  e": "nova_ec2"}, {"endpoints": [{"adminURL": "http://192.168.0.5:8080/", "region
  ": "RegionOne", "internalURL": "http://192.168.0.5:8080/v1/AUTH_1a491e1416d041da
  93daae1dc8af6d07", "id": "2bb237d0db004cd08f1be57fd14e2892", "publicURL": "http:
  //240.0.1.5:8080/v1/AUTH_1a491e1416d041da93daae1dc8af6d07"}], "endpoints_links":
  [], "type": "object-store", "name": "swift"}, {"endpoints": [{"adminURL": "http:
  //192.168.0.5:35357/v2.0", "region": "RegionOne", "internalURL": "http://192.168
  .0.5:5000/v2.0", "id": "2fa7c6deb7ad42aabab7935bc269bb4e", "publicURL": "http://
  240.0.1.5:5000/v2.0"}], "endpoints_links": [], "type": "identity", "name": "keys
  tone"}], "user": {"username": "admin", "roles_links": [], "id": "d9321ac604694ff
  b9e4a8517292f55d6", "roles": [{"name": "admin"}], "name": "admin"}, "metadata":
  {"is_admin": 0, "roles": ["c80a3ab61b2c42b4bcacb4b316856618"]}}}
  INFO (connectionpool:191) Starting new HTTP connection (1): 192.168.0.2
  DEBUG (connectionpool:283) "POST /v2.0/tokens HTTP/1.1" 200 2702
  RESP: [200] {'date': 'Tue, 06 Aug 2013 13:01:05 GMT', 'content-type': 'applicati
  on/json', 'content-length': '2702', 'vary': 'X-Auth-Token'}
  RESP BODY: {"access": {"token": {"issued_at": "2013-08-06T13:01:05.616481", "exp
  ires": "2013-08-07T13:01:05Z", "id": "c321cd823c8a4852aea4b870a03c8f72", "tenant
  ": {"description": "admin tenant", "enabled": true, "id": "8eee400f7a8a4f35b7a92
  bc6cb54de42", "name": "admin"}}, "serviceCatalog": [{"endpoints": [{"adminURL":
  "http://192.168.0.2:8774/v2/8eee400f7a8a4f35b7a92bc6cb54de42", "region": "Region
  One", "internalURL": "http://192.168.0.2:8774/v2/8eee400f7a8a4f35b7a92bc6cb54de4
  2", "id": "6b9563c1e37542519e4fc601b994f980", "publicURL": "http://172.16.1.2:87
  74/v2/8eee400f7a8a4f35b7a92bc6cb54de42"}], "endpoints_links": [], "type": "compu
  te", "name": "nova"}, {"endpoints": [{"adminURL": "http://192.168.0.2:8080", "re
  gion": "RegionOne", "internalURL": "http://192.168.0.2:8080", "id": "4db0e11de35
  74c889179f499f1e53c7e", "publicURL": "http://172.16.1.2:8080"}], "endpoints_link
  s": [], "type": "s3", "name": "swift_s3"}, {"endpoints": [{"adminURL": "http://1
  92.168.0.2:9292", "region": "RegionOne", "internalURL": "http://192.168.0.2:9292
  ", "id": "960a3ad83e4043bbbc708733571d433b", "publicURL": "http://172.16.1.2:929
  2"}], "endpoints_links": [], "type": "image", "name": "glance"}, {"endpoints": [
  {"adminURL": "http://192.168.0.2:8776/v1/8eee400f7a8a4f35b7a92bc6cb54de42", "reg
  ion": "RegionOne", "internalURL": "http://192.168.0.2:8776/v1/8eee400f7a8a4f35b7
  a92bc6cb54de42", "id": "055edb2aface49c28576347a8c2a5e35", "publicURL": "http://
  172.16.1.2:8776/v1/8eee400f7a8a4f35b7a92bc6cb54de42"}], "endpoints_links": [], "
  type": "volume", "name": "cinder"}, {"endpoints": [{"adminURL": "http://192.168.
  0.2:8773/services/Admin", "region": "RegionOne", "internalURL": "http://192.168.
  0.2:8773/services/Cloud", "id": "1e5e51a640f94e60aed0a5296eebdb51", "publicURL":
  "http://172.16.1.2:8773/services/Cloud"}], "endpoints_links": [], "type": "ec2"
  , "name": "nova_ec2"}, {"endpoints": [{"adminURL": "http://192.168.0.2:8080/",
  "region": "RegionOne", "internalURL": "http://192.168.0.2:8080/v1/AUTH_8eee400f
  7a8a4f35b7a92bc6cb54de42", "id": "081a50a3c9fa49719673a52420a87557", "publicURL
  ": "http://172.16.1.2:8080/v1/AUTH_8eee400f7a8a4f35b7a92bc6cb54de42"}], "endpoi
  nts_links": [], "type": "object-store", "name": "swift"}, {"endpoints": [{"admi
  nURL": "http://192.168.0.2:35357/v2.0", "region": "RegionOne", "internalURL": "
  http://192.168.0.2:5000/v2.0", "id": "057a7f8e9a9f4defb1966825de957f5b", "publi
  cURL": "http://172.16.1.2:5000/v2.0"}], "endpoints_links": [], "type": "identit
  y", "name": "keystone"}], "user": {"username": "admin", "roles_links": [], "id"
  : "717701504566411794a9cfcea1a85c1f", "roles": [{"name": "admin"}], "name": "ad
  min"}, "metadata": {"is_admin": 0, "roles": ["90a1f4f29aef48d7bce3ada631a54261"
  ]}}}

  REQ: curl -i http://172.16.1.2:8774/v2/8eee400f7a8a4f35b7a92bc6cb54de42/servers/
  detail -X GET -H "X-Auth-Project-Id: admin" -H "User-Agent: python-novaclient" -
  H "Accept: application/json" -H "X-Auth-Token: c321cd823c8a4852aea4b870a03c8f72"

  REQ: curl -i http://240.0.1.5:8774/v2/1a491e1416d041da93daae1dc8af6d07/servers/d
  etail -X GET -H "X-Auth-Project-Id: admin" -H "User-Agent: python-novaclient" -H
  "Accept: application/json" -H "X-Auth-Token: d9ab5c927bcb410d9e9ee5bdea3ea020"

  INFO (connectionpool:191) Starting new HTTP connection (1): 240.0.1.5
  INFO (connectionpool:191) Starting new HTTP connection (1): 172.16.1.2
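The endpoint selection seen in the debug output above can be sketched as follows. The catalog here is a simplified, abridged stand-in for the Keystone response (the ``TENANT`` path segment is a placeholder, not real data):

```python
# Simplified service catalog shaped like the Keystone token response above
catalog = [
    {"type": "compute", "name": "nova",
     "endpoints": [{
         "adminURL": "http://192.168.0.2:8774/v2/TENANT",
         "internalURL": "http://192.168.0.2:8774/v2/TENANT",
         "publicURL": "http://172.16.1.2:8774/v2/TENANT",
     }]},
]

def endpoint(catalog, service_type, interface="publicURL"):
    """Pick a service endpoint by type; clients such as python-novaclient
    default to the public URL, which is why the follow-up request above
    targets the Public network rather than the address used for auth."""
    for service in catalog:
        if service["type"] == service_type:
            return service["endpoints"][0][interface]
    return None

print(endpoint(catalog, "compute"))  # http://172.16.1.2:8774/v2/TENANT
```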
Even though the initial connection was to 192.168.0.5, the client then tries to
access public network for Nova API. The reason is because Keystone returns
access Public network for Nova API. The reason is because Keystone returns
the list of OpenStack services URLs, and for production-grade deployments it
is required to access services over the public network. If you still need to
work with OpenStack API without routing configured, tell us your use case on
@@ -1,10 +1,12 @@
Understanding and configuring the network
.. index:: Fuel UI; Network Configuration

Understanding and Configuring the Network
=========================================

.. contents:: :local:
.. contents :local:

OpenStack clusters use several types of network managers: FlatDHCPManager,
VlanManager and Neutron (formerly Quantum). The current version of Fuel UI
VLANManager and Neutron (formerly Quantum). The current version of Fuel UI
supports only two (FlatDHCP and VlanManager), but Fuel CLI supports all
three. For more information about how the first two network managers work,
you can read these two resources:

@@ -15,7 +17,7 @@ you can read these two resources:
<http://www.mirantis.com/blog/openstack-networking-vlanmanager/>`_

FlatDHCPManager (multi-host scheme)
------------------------------------
-----------------------------------

The main idea behind the flat network manager is to configure a bridge
(i.e. **br100**) on every compute node and have one of the machine's host
@ -31,49 +33,7 @@ interface is used to give network access to virtual machines, while **eth0**
interface is the management network interface.

.. image:: /_images/flatdhcpmanager-mh_scheme.jpg
   :width: 600px
   :align: center

Fuel deploys OpenStack in FlatDHCP mode with the so-called **multi-host**
feature enabled. Without this feature enabled, network traffic from each VM

@ -89,53 +49,10 @@ the physical network interfaces that are connected to the bridge, but the
VLAN interface (i.e. **eth0.102**).

FlatDHCPManager (single-interface scheme)
-----------------------------------------

.. image:: /_images/flatdhcpmanager-mh_scheme.jpg
   :width: 600px
   :align: center

Therefore all switch ports where compute nodes are connected must be
configured as tagged (trunk) ports with required vlans allowed (enabled,

@ -157,59 +74,12 @@ VMs of other projects. Switch ports must be configured as tagged (trunk)
ports to allow this scheme to work.
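
As a rough illustration of the naming involved (a sketch only; the real
VlanManager derives these values from Nova's network allocations), each
project's VLAN id determines a tagged interface and a bridge created on every
compute node:

```python
def vlan_plan(first_vlan_id, projects):
    """Sketch the per-project VLAN id, tagged interface, and bridge names
    that a VlanManager-style setup creates on each compute node."""
    plan = {}
    for offset, project in enumerate(projects):
        vlan_id = first_vlan_id + offset
        plan[project] = {
            "vlan_id": vlan_id,
            "interface": "vlan%d" % vlan_id,   # e.g. vlan102 on top of eth0
            "bridge": "br%d" % vlan_id,        # e.g. br102, where the VMs plug in
        }
    return plan

print(vlan_plan(102, ["project-a", "project-b"]))
```

This mirrors the vlan102/br102 and vlan103/br103 pairs shown in the scheme
below.
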

.. image:: /_images/flatdhcpmanager-mh_scheme.jpg
   :width: 600px
   :align: center

.. index:: Fuel UI; Deployment Schema

Fuel Deployment Schema
======================

One of the physical interfaces on each host has to be chosen to carry
VM-to-VM traffic (fixed network), and switch ports must be configured to
@ -225,7 +95,8 @@ Once you choose a networking mode (FlatDHCP/VLAN), you must configure equipment
accordingly. The diagram below shows an example configuration.

.. image:: /_images/physical-network.jpg
   :width: 100%
   :align: center

Fuel operates with the following logical networks:

@ -251,7 +122,7 @@ Fuel operates with following logical networks:
networks (vlan 104 on the scheme).

Mapping logical networks to physical interfaces on servers
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Fuel allows you to use different physical interfaces to handle different
types of traffic. When a node is added to the environment, click at the bottom

@ -259,7 +130,7 @@ line of the node icon. In the detailed information window, click the "Network
Configuration" button to open the physical interfaces configuration screen.

.. image:: /_images/network-settings.jpg
   :width: 600px
   :align: center

On this screen you can drag-and-drop logical networks to physical interfaces
according to your network setup.

@ -272,7 +143,7 @@ you may not modify network settings, even to move a logical network to another
physical interface or VLAN number.

Switch
++++++

Fuel can configure hosts; however, switch configuration is still manual work.
Unfortunately, the set of configuration steps, and even the terminology used,

@ -347,7 +218,7 @@ Example configuration for one of the ports on a Cisco switch::

    vlan 262,100,102,104 # Might be needed for enabling VLANs
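
Since every compute-node port needs the same trunk settings, generating them
can help; the following sketch emits Cisco-IOS-style lines. The port name and
VLAN numbers are only examples, and exact syntax varies by switch vendor and
model.

```python
def trunk_port_config(port, vlans):
    """Render illustrative Cisco-IOS-style trunk configuration for one port."""
    allowed = ",".join(str(v) for v in sorted(vlans))
    return [
        "interface %s" % port,
        " switchport mode trunk",
        " switchport trunk allowed vlan %s" % allowed,
    ]

for line in trunk_port_config("GigabitEthernet0/1", [100, 102, 104, 262]):
    print(line)
```

Run it once per compute-node port, then paste (and adapt) the result into your
switch configuration session.
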

Router
++++++

To make it possible for VMs to access the outside world, you must have an IP
address set on a router in the public network. In the examples provided,

@ -1,9 +1,11 @@
.. index:: Fuel UI; Post-Deployment Check

.. _Post-Deployment-Check:

Post-Deployment Check
=====================

.. contents:: :local:

On occasion, even a successful deployment may result in some OpenStack
components not working correctly. If this happens, Fuel offers the
@ -28,7 +30,6 @@ is by their names. Sanity Checks are intended to assist in maintaining your
sanity. Smoke Checks tell you where the fires are so you can put them out
strategically instead of firehosing the entire installation.

Benefits
--------

@ -57,6 +58,7 @@ Now, let`s take a closer look on what should be done to execute the tests and
to understand if something is wrong with your OpenStack cluster.

.. image:: /_images/healthcheck_tab.jpg
   :align: center

As you can see in the image above, the Fuel UI now contains a ``Healthcheck``
tab, indicated by the Heart icon.

@ -85,9 +87,10 @@ this section.
An actual test run looks like this:

.. image:: /_images/ostf_screen.jpg
   :align: center

What should be done when a test fails
--------------------------------------

If a test fails, there are several ways to investigate the problem. You may
prefer to start in the Fuel UI, since its feedback is directly related to the

@ -111,11 +114,11 @@ The first thing to be done is to ensure all OpenStack services are up and runnin
To do this, you can run the sanity test set, or execute the following command
on your controller node::

    nova-manage service list

If any service is off (has "XXX" status), you can restart it using this command::

    service openstack-<service name> restart

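
If you capture that output, spotting dead services can be scripted. The
following is a quick hypothetical sketch; it assumes the usual column layout
(Binary, Host, Zone, Status, State, Updated_At) and uses sample output, not a
live listing.

```python
# Sample nova-manage output (illustrative; hosts and timestamps are made up).
SAMPLE = """\
Binary           Host        Zone  Status   State  Updated_At
nova-scheduler   controller  nova  enabled  :-)    2013-07-01 12:00:02
nova-compute     compute-1   nova  enabled  XXX    2013-07-01 11:48:55
"""

def failed_services(listing):
    """Return (binary, host) pairs whose State column reads XXX."""
    failed = []
    for line in listing.splitlines()[1:]:  # skip the header row
        columns = line.split()
        if len(columns) >= 5 and columns[4] == "XXX":
            failed.append((columns[0], columns[1]))
    return failed

print(failed_services(SAMPLE))
```

Each pair returned names a service worth restarting with the command above.
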

If all services are on, but you're still experiencing some issues, you can
gather information on the OpenStack Dashboard (exceeded number of instances,
@ -126,7 +129,7 @@ have underprovisioned your environment and should check your math and your
project requirements.

Sanity tests description
------------------------

Sanity checks work by sending a query to all OpenStack components to get a
response back from them. Many of these tests are simple in that they ask

@ -137,119 +140,119 @@ what test is used for each service:
.. topic:: Instances list availability

    Test checks that the Nova component can return a list of instances.

    Test scenario:

    1. Request the list of instances.
    2. Check the returned list is not empty.

.. topic:: Images list availability

    Test checks that the Glance component can return a list of images.

    Test scenario:

    1. Request the list of images.
    2. Check the returned list is not empty.

.. topic:: Volumes list availability

    Test checks that the volume service can return a list of volumes.

    Test scenario:

    1. Request the list of volumes.
    2. Check the returned list is not empty.

.. topic:: Snapshots list availability

    Test checks that the Glance component can return a list of snapshots.

    Test scenario:

    1. Request the list of snapshots.
    2. Check the returned list is not empty.

.. topic:: Flavors list availability

    Test checks that the Nova component can return a list of flavors.

    Test scenario:

    1. Request the list of flavors.
    2. Check the returned list is not empty.

.. topic:: Limits list availability

    Test checks that the Nova component can return a list of absolute limits.

    Test scenario:

    1. Request the list of limits.
    2. Check the response.

.. topic:: Services list availability

    Test checks that the Nova component can return a list of services.

    Test scenario:

    1. Request the list of services.
    2. Check the returned list is not empty.

.. topic:: User list availability

    Test checks that the Keystone component can return a list of users.

    Test scenario:

    1. Request the list of users.
    2. Check the returned list is not empty.

.. topic:: Services execution monitoring

    Test checks that all of the expected services are on, meaning the test
    will fail if any of the listed services is in "XXX" status.

    Test scenario:

    1. Connect to a controller via SSH.
    2. Execute the ``nova-manage service list`` command.
    3. Check there are no failed services.

.. topic:: DNS availability

    Test checks that DNS is available.

    Test scenario:

    1. Connect to a controller node via SSH.
    2. Execute the ``host`` command for the controller IP.
    3. Check the DNS name can be successfully resolved.

.. topic:: Networks availability

    Test checks that the Nova component can return a list of available networks.

    Test scenario:

    1. Request the list of networks.
    2. Check the returned list is not empty.

.. topic:: Ports availability

    Test checks that the Nova component can return a list of available ports.

    Test scenario:

    1. Request the list of ports.
    2. Check the returned list is not empty.

For more information refer to the nova CLI reference.

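
All of the "list availability" checks above share one shape: request a list,
then verify the list came back non-empty. As an illustrative sketch of that
pattern (not the actual test-suite code):

```python
def check_list_availability(fetch, name):
    """Run one request-a-list sanity check and report the outcome."""
    try:
        items = fetch()
    except Exception as exc:
        return (name, "FAIL", "request error: %s" % exc)
    if not items:
        return (name, "FAIL", "empty list returned")
    return (name, "OK", "%d item(s)" % len(items))

# Stand-in for an API call such as listing Nova instances.
fake_instance_list = lambda: ["vm-1", "vm-2"]
print(check_list_availability(fake_instance_list, "Instances list availability"))
```

A real check would pass an API client method as ``fetch``; any exception or an
empty result is reported as a failure.
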
Smoke tests description
-----------------------

Smoke tests verify how your system handles basic OpenStack operations under
normal circumstances. The Smoke test series uses timeout tests for

@ -265,158 +268,161 @@ negatives. The following is a description of each sanity test available:
.. topic:: Flavor creation

    Test checks that a low-requirements flavor can be created.

    Target component: Nova

    Scenario:

    1. Create a small-size flavor.
    2. Check the created flavor has the expected name.
    3. Check the flavor disk has the expected size.

    For more information refer to the nova CLI reference.

.. topic:: Volume creation

    Test checks that a small-sized volume can be created.

    Target component: Compute

    Scenario:

    1. Create a new small-size volume.
    2. Wait for "available" volume status.
    3. Check the response contains a "display_name" section.
    4. Create an instance and wait for "Active" status.
    5. Attach the volume to the instance.
    6. Check the volume status is "in use".
    7. Get the created volume's information by its id.
    8. Detach the volume from the instance.
    9. Check the volume has "available" status.
    10. Delete the volume.

    If you see that the created volume is in ERROR status, it can mean that
    you've exceeded the maximum number of volumes that can be created. You can
    check this on the OpenStack dashboard. For more information refer to the
    volume management instructions.
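
Several smoke scenarios above hinge on "wait for status" steps bounded by a
timeout. As a sketch of that polling pattern (not the actual test code; the
status names are illustrative):

```python
import time

def wait_for_status(get_status, wanted, timeout=60, interval=2):
    """Poll get_status() until it returns `wanted`, an error state appears,
    or `timeout` seconds elapse."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = get_status()
        if status == wanted:
            return True
        if status == "error":
            raise RuntimeError("resource entered error state")
        time.sleep(interval)
    return False

# Stub that becomes "available" on the second poll.
states = iter(["creating", "available"])
print(wait_for_status(lambda: next(states), "available", timeout=5, interval=0))
```

A real run would pass a closure that queries the API for the volume or
instance status on each poll.
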
.. topic:: Instance booting and snapshotting

    Test creates a keypair, checks that an instance can be booted from the
    default image, then that a snapshot can be created from it and a new
    instance can be booted from that snapshot. The test also verifies that
    instances and images reach ACTIVE state upon their creation.

    Target component: Glance

    Scenario:

    1. Create a new keypair to boot an instance.
    2. Boot the default image.
    3. Make a snapshot of the created server.
    4. Boot another instance from the created snapshot.

    If you see that the created instance is in ERROR status, it can mean that
    you've exceeded a system requirements limit. The test uses a nano-flavor
    with these parameters: 64 MB RAM, 1 GB disk space, 1 virtual CPU. For more
    information refer to the nova CLI reference and the image management
    instructions.

.. topic:: Keypair creation

    Target component: Nova

    Scenario:

    1. Create a new keypair, check it was created successfully
       (check the name is as expected, response status is 200).

    For more information refer to the nova CLI reference.

.. topic:: Security group creation

    Target component: Nova

    Scenario:

    1. Create a security group, check it was created correctly
       (check the name is as expected, response status is 200).

    For more information refer to the nova CLI reference.

.. topic:: Network parameters check

    Target component: Nova

    Scenario:

    1. Get the list of networks.
    2. Check the seen network labels equal the expected ones.
    3. Check the seen network ids equal the expected ones.

    For more information refer to the nova CLI reference.

.. topic:: Instance creation

    Target component: Nova

    Scenario:

    1. Create a new keypair (if it does not exist yet).
    2. Create a new security group (if it does not exist yet).
    3. Create an instance using the created security group and keypair.

    For more information refer to the nova CLI reference and the instance
    management instructions.

.. topic:: Floating IP assignment

    Target component: Nova

    Scenario:

    1. Create a new keypair (if it does not exist yet).
    2. Create a new security group (if it does not exist yet).
    3. Create an instance using the created security group and keypair.
    4. Create a new floating IP.
    5. Assign the floating IP to the created instance.

    For more information refer to the nova CLI reference and the floating IP
    management instructions.

.. topic:: Network connectivity check through floating IP

    Target component: Nova

    Scenario:

    1. Create a new keypair (if it does not exist yet).
    2. Create a new security group (if it does not exist yet).
    3. Create an instance using the created security group and keypair.
    4. Check connectivity for all floating IPs using the ping command.

    If this test fails, it's best to run a network check and verify that all
    connections are correct. For more information refer to the Nova CLI
    reference's floating IP management instructions.

.. topic:: User creation and authentication in Horizon

    Test creates a new user, tenant, and user role with admin privileges, and
    logs in to the dashboard.

    Target components: Nova, Keystone

    Scenario:

    1. Create a new tenant.
    2. Check the tenant was created successfully.
    3. Create a new user.
    4. Check the user was created successfully.
    5. Create a new user role.
    6. Check the user role was created successfully.
    7. Perform token authentication.
    8. Check authentication was successful.
    9. Send an authentication request to Horizon.
    10. Verify the response status is 200.

    If this test fails on the authentication step, first try opening the
    dashboard (it may be unreachable for some reason), and then check your
    network configuration. For more information refer to the nova CLI
    reference.
@ -1,7 +1,9 @@
.. index:: Red Hat OpenStack

Red Hat OpenStack Notes
=======================

.. contents:: :local:

Overview
--------

@ -14,33 +16,34 @@ Red Hat account credentials in order to download Red Hat OpenStack Platform.
The necessary components will be prepared and loaded into Cobbler. There are
two methods Fuel supports for obtaining Red Hat OpenStack packages:

* :ref:`RHSM` (default)
* :ref:`RHN_Satellite`

.. index:: Red Hat OpenStack; Deployment Requirements

Deployment Requirements
-----------------------

Minimal Requirements
++++++++++++++++++++

* Red Hat account (https://access.redhat.com)
* Red Hat OpenStack entitlement (one per node)
* Internet access for the Fuel Master node

Optional requirements
+++++++++++++++++++++

* Red Hat Satellite Server
* Configured Satellite activation key

.. _RHSM:

Red Hat Subscription Management (RHSM)
--------------------------------------

Benefits
++++++++

* No need to handle large ISOs or physical media.
* Register all your clients with just a single username and password.

@ -50,16 +53,22 @@ Benefits
* Download only necessary packages.

Considerations
++++++++++++++

* Must observe Red Hat licensing requirements after deployment
* Package download time is dependent on network speed (20-60 minutes)

.. seealso::

    `Overview of Subscription Management - Red Hat Customer Portal <https://access.redhat.com/site/articles/143253>`_

.. _RHN_Satellite:

Red Hat RHN Satellite
---------------------

Benefits
++++++++

* Faster download of Red Hat OpenStack packages
* Register all your clients with an activation key

@ -68,7 +77,7 @@ Benefits
* Easier to consume for large enterprise customers

Considerations
++++++++++++++

* Red Hat RHN Satellite is a separate offering from Red Hat and requires
  dedicated hardware

@ -76,7 +85,7 @@ Considerations
  registration packages (just for the Fuel Master host)

What you need
+++++++++++++

* Red Hat account (https://access.redhat.com)
* Red Hat OpenStack entitlement (one per host)

@ -85,7 +94,7 @@ What you need
* Configured Satellite activation key

Your RHN Satellite activation key must be configured with the following channels
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

* RHEL Server High Availability
* RHEL Server Load Balancer

@ -94,6 +103,10 @@ Your RHN Satellite activation key must be configured the following channels
* RHN Tools for RHEL
* Red Hat OpenStack 3.0

.. seealso::

    `Red Hat | Red Hat Network Satellite <http://www.redhat.com/products/enterprise-linux/rhn-satellite/>`_

.. _rhn_sat_channels:

Fuel looks for the following RHN Satellite channels.

@ -106,11 +119,13 @@ Fuel looks for the following RHN Satellite channels.
.. note:: If you create cloned channels, leave these channel strings intact.

.. index:: Red Hat OpenStack; Troubleshooting

Troubleshooting
---------------

Issues downloading from Red Hat Subscription Manager
++++++++++++++++++++++++++++++++++++++++++++++++++++

If you receive an error from Fuel UI regarding Red Hat OpenStack download
issues, ensure that you have a valid subscription to the Red Hat OpenStack

@ -123,7 +138,7 @@ subscriptions associated with your account.
If you are still encountering issues, contact Mirantis Support.

Issues downloading from Red Hat RHN Satellite
+++++++++++++++++++++++++++++++++++++++++++++

If you receive an error from Fuel UI regarding Red Hat OpenStack download
issues, ensure that you have all the necessary channels available on your

@@ -134,7 +149,7 @@ representative <https://access.redhat.com/site/solutions/368643>`_ to get

the proper subscriptions associated with your account.

RHN Satellite error: "rhel-x86_64-server-rs-6 not found"
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++++++++++++++++++++++++++++++++++++++++++++++++++++++++

This means your Red Hat Satellite Server has run out of available entitlements
or your licenses have expired. Check your RHN Satellite to ensure there is at
@@ -146,7 +161,7 @@ account, please contact your `Red Hat sales representative

subscriptions associated with your account.

Yum Error: Cannot retrieve repository metadata (repomd.xml) for repository: rhel-x86_64-server-6.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

This can be caused by many problems. This could happen if your SSL
certificate does not match the hostname of your RHN Satellite Server or if
@@ -159,7 +174,7 @@ You may find solutions to your issues with repomd.xml at the

`Red Hat Support <https://access.redhat.com/support/>`_.

GPG Key download failed. Looking for URL your-satellite-server/pub/RHN-ORG-TRUSTED-SSL-CERT
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

There are two known causes for this issue. If you are using VirtualBox, it may
not be configured properly. Ensure that your upstream DNS resolver is correct
@@ -1,3 +1,8 @@

Fuel simplifies the set up of an OpenStack cluster, affording you the ability to dig in and fully understand how OpenStack works. You can deploy on test hardware or in a virtualized environment and root around all you like, but when it comes time to deploy to production there are a few things to take into consideration.
Fuel simplifies the set up of an OpenStack cluster, affording you the ability
to dig in and fully understand how OpenStack works. You can deploy on test
hardware or in a virtualized environment and root around all you like, but
when it comes time to deploy to production there are a few things to take
into consideration.

In this section we will talk about such things including how to size your hardware and how to handle large-scale deployments.
In this section we will talk about such things including how to size your
hardware and how to handle large-scale deployments.

@@ -3,9 +3,9 @@

.. _Sizing_Hardware:

Sizing Hardware
---------------
===============

.. contents:: :local:
.. contents :local:

One of the first questions people ask when planning an OpenStack deployment is
"what kind of hardware do I need?" There is no such thing as a one-size-fits-all
@@ -28,7 +28,7 @@ Your needs in each of these areas are going to determine your overall hardware

requirements.

Processing
^^^^^^^^^^
----------

In order to calculate how much processing power you need to acquire you will
need to determine the number of VMs your cloud will support. You must also
@@ -61,7 +61,7 @@ You will also need to take into account the following:

* Choose a good value CPU that supports the technologies you require.

Memory
^^^^^^
------

Continuing to use the example from the previous section, we need to determine
how much RAM will be required to support 17 VMs per server. Let's assume that
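The arithmetic behind such a memory estimate can be sketched as follows; the per-VM RAM figure and host reserve below are illustrative assumptions, not values prescribed by this guide:

```python
# Back-of-the-envelope RAM sizing for one compute node.
VMS_PER_SERVER = 17    # VM density carried over from the processing example
RAM_PER_VM_GB = 4      # assumed average flavor size (illustrative)
HOST_RESERVE_GB = 16   # assumed reserve for the host OS and services

required_gb = VMS_PER_SERVER * RAM_PER_VM_GB + HOST_RESERVE_GB
print(required_gb)  # 84 GB with these assumptions
```

Swap in your own flavor mix and reserve; the point is only that per-VM RAM times density, plus host overhead, gives the minimum per-node RAM.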
@@ -86,7 +86,7 @@ all core OS requirements.

You can adjust this calculation based on your needs.

Storage Space
^^^^^^^^^^^^^
-------------

When it comes to disk space there are several types that you need to consider:

@@ -125,7 +125,7 @@ it critical, a single server would need 18 drives, most likely 2.5" 15,000RPM

146GB SAS drives.

Throughput
~~~~~~~~~~
++++++++++

As far as throughput goes, that's going to depend on what kind of storage you choose.
In general, you calculate IOPS based on the packing density (drive IOPS * drives
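The packing-density calculation mentioned above can be sketched like this; the per-drive IOPS figure is an assumed example value for a 15,000 RPM SAS drive, not a quoted specification:

```python
# Rough IOPS estimate based on packing density (drive IOPS * drives per server).
DRIVE_IOPS = 175         # assumed per-drive IOPS for a 15,000 RPM SAS drive
DRIVES_PER_SERVER = 18   # drive count from the storage-space example
VMS_PER_SERVER = 17      # VM density assumed earlier in this section

node_iops = DRIVE_IOPS * DRIVES_PER_SERVER
iops_per_vm = node_iops / VMS_PER_SERVER
print(node_iops, round(iops_per_vm, 1))
```

Dividing node IOPS by VM density gives a quick sanity check on whether each VM gets an acceptable share of disk throughput.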
@@ -161,7 +161,7 @@ replace the drive and push a new node copy. The remaining VMs carry whatever

additional load is present due to the temporary loss of one node.

Remote storage
~~~~~~~~~~~~~~
++++++++++++++

IOPS will also be a factor in determining how you plan to handle persistent
storage. For example, consider these options for laying out your 50 TB of remote
@@ -185,7 +185,7 @@ You can accomplish the same thing with a single 36 drive frame using 3 TB

drives, but this becomes a single point of failure in your cluster.

Object storage
~~~~~~~~~~~~~~
++++++++++++++

When it comes to object storage, you will find that you need more space than
you think. For example, this section specifies 50 TB of object storage.
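A quick way to see why object storage needs more raw space than the usable target is to account for replication; Swift's default of three replicas is real, but the ~20% free-space headroom below is an assumed margin for illustration:

```python
# Raw capacity needed to provide a usable object-storage target under replication.
USABLE_TB = 50     # usable target from this example
REPLICAS = 3       # Swift's default replica count
HEADROOM = 1.2     # assumed ~20% free-space margin for rebalancing (illustrative)

raw_tb = USABLE_TB * REPLICAS * HEADROOM
print(raw_tb)
```

With these assumptions, 50 TB usable works out to 180 TB of raw disk, which is why object-storage clusters always look oversized relative to the quoted capacity.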
@@ -209,7 +209,7 @@ each, but its not recommended due to the high cost of failure to replication

and capacity issues.

Networking
^^^^^^^^^^
----------

Perhaps the most complex part of designing an OpenStack cluster is the
networking.
@@ -235,7 +235,7 @@ decrease latency by using two 10 Gb links, bringing the bandwidth per VM to

consider.

Scalability and oversubscription
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
++++++++++++++++++++++++++++++++

It is one of the ironies of networking that 1Gb Ethernet generally scales
better than 10Gb Ethernet -- at least until 100Gb switches are more commonly
@@ -250,7 +250,7 @@ racks, so plan to create "pods", each of which includes both storage and

compute nodes. Generally, a pod is the size of a non-oversubscribed L2 domain.

Hardware for this example
~~~~~~~~~~~~~~~~~~~~~~~~~
+++++++++++++++++++++++++

In this example, you are looking at:

@@ -264,7 +264,7 @@ switches. Also, as your network grows, you will need to consider uplinks and

aggregation switches.

Summary
^^^^^^^
-------

In general, your best bet is to choose a 2 socket server with a balance in I/O,
CPU, memory, and disk that meets your project requirements.

@@ -3,9 +3,9 @@

.. _Redeploying_An_Environment:

Redeploying An Environment
--------------------------
==========================

.. contents:: :local:
.. contents :local:

Because Puppet is additive only, there is no ability to revert changes as you
would in a typical application deployment. If a change needs to be backed out,
@@ -19,13 +19,13 @@ and minimizes the headaches associated with maintaining multiple configurations

through a single Puppet Master by creating what are called environments.

Environments
^^^^^^^^^^^^
------------

Puppet supports assigning nodes 'environments'. These environments can be
mapped directly to your development, QA and production life cycles, so it’s a
way to distribute code to nodes that are assigned to those environments.

* On the Master Node
* On the Master node

  The Puppet Master tries to find modules using its ``modulepath`` setting,
  which by default is ``/etc/puppet/modules``. It is common practice to set
@@ -35,16 +35,18 @@ way to distribute code to nodes that are assigned to those environments.

  For example, you can specify several search paths. The following example
  dynamically sets the ``modulepath`` so Puppet will check a per-environment
  folder for a module before serving it from the main set::
  folder for a module before serving it from the main set:

     [master]
     modulepath = $confdir/$environment/modules:$confdir/modules
  .. code-block:: ini

     [production]
     manifest = $confdir/manifests/site.pp
     [master]
     modulepath = $confdir/$environment/modules:$confdir/modules

     [development]
     manifest = $confdir/$environment/manifests/site.pp
     [production]
     manifest = $confdir/manifests/site.pp

     [development]
     manifest = $confdir/$environment/manifests/site.pp

* On the Slave Node

@@ -53,13 +55,15 @@ way to distribute code to nodes that are assigned to those environments.

  ``production`` environment.

  To set a slave-side environment, just specify the environment setting in the
  ``[agent]`` block of ``puppet.conf``::
  ``[agent]`` block of ``puppet.conf``:

     [agent]
     environment = development
  .. code-block:: ini

     [agent]
     environment = development

Deployment pipeline
^^^^^^^^^^^^^^^^^^^
-------------------

* Deploy

@@ -3,13 +3,11 @@

.. _Large_Scale_Deployments:

Large Scale Deployments
-----------------------
=======================

When deploying large clusters (of 100 nodes or more) there are two basic
bottlenecks:

.. contents:: :local:

Careful planning is key to eliminating these potential problem areas, but
there's another way.

@@ -18,7 +16,7 @@ however, that it's always good to have a sense of how to solve these problems

should they appear.

Certificate signing requests and Puppet Master/Cobbler capacity
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
---------------------------------------------------------------

When deploying a large cluster, you may find that Puppet Master begins to have
difficulty when you start exceeding 20 or more simultaneous requests. Part of
@@ -52,9 +50,10 @@ combination of rsync (for modules, manifests, and SSL data) and database

replication.

.. image:: /_images/cobbler-puppet-ha.jpg
   :align: center

Downloading of operating systems and other software
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
---------------------------------------------------

Large deployments can also suffer from a bottleneck in terms of the additional
traffic created by downloading software from external sources. One way to avoid

@@ -1,7 +1,9 @@

.. index:: Deployment Configurations

Overview
========

.. contents:: :local:
.. contents :local:

Before you install any hardware or software, you must know what
you're trying to achieve. This section looks at the basic components of
@@ -11,91 +13,38 @@ basis for installing OpenStack in the next section.

As you know, OpenStack provides the following basic services:

.. topic:: Compute:
**Compute:**
    Compute servers are the workhorses of your installation; they're
    the servers on which your users' virtual machines are created.
    `nova-scheduler` controls the life-cycle of these VMs.

    Compute servers are the workhorses of your installation; they're
    the servers on which your users' virtual machines are created.
    `nova-scheduler` controls the life-cycle of these VMs.
**Networking:**
    Because an OpenStack cluster (virtually) always includes
    multiple servers, the ability for them to communicate with each other and with
    the outside world is crucial. Networking was originally handled by the
    `nova-network` service, but it has given way to the newer Neutron (formerly
    Quantum) networking service. Authentication and authorization for these
    transactions are handled by `keystone`.

.. topic:: Networking:
**Storage:**
    OpenStack provides for two different types of storage: block
    storage and object storage. Block storage is traditional data storage, with
    small, fixed-size blocks that are mapped to locations on storage media. At its
    simplest level, OpenStack provides block storage using `nova-volume`, but it
    is common to use `cinder`.

    Because an OpenStack cluster (virtually) always includes
    multiple servers, the ability for them to communicate with each other and with
    the outside world is crucial. Networking was originally handled by the
    `nova-network` service, but it has given way to the newer Neutron (formerly
    Quantum) networking service. Authentication and authorization for these
    transactions are handled by `keystone`.
    Object storage, on the other hand, consists of single variable-size objects
    that are described by system-level metadata, and you can access this capability
    using `swift`.

.. topic:: Storage:

    OpenStack provides for two different types of storage: block
    storage and object storage. Block storage is traditional data storage, with
    small, fixed-size blocks that are mapped to locations on storage media. At its
    simplest level, OpenStack provides block storage using `nova-volume`, but it
    is common to use `cinder`.

    Object storage, on the other hand, consists of single variable-size objects
    that are described by system-level metadata, and you can access this capability
    using `swift`.

    OpenStack storage is used for your users' objects, but it is also used for
    storing the images used to create new VMs. This capability is handled by `glance`.
OpenStack storage is used for your users' objects, but it is also used for
storing the images used to create new VMs. This capability is handled by `glance`.

These services can be combined in many different ways. Out of the box,
Fuel supports the following deployment configurations:

.. index:: Deployment Configurations; Simple (non-HA)

Simple (non-HA) deployment
--------------------------

In a production environment, you will never have a Simple non-HA
deployment of OpenStack, partly because it forces you to make a number
of compromises as to the number and types of services that you can
deploy. It is, however, extremely useful if you just want to see how
OpenStack works from a user's point of view.

.. image:: /_images/deployment-simple_svg.jpg

More commonly, your OpenStack installation will consist of multiple
servers. Exactly how many is up to you, of course, but the main idea
is that your controller(s) are separate from your compute servers, on
which your users' VMs will actually run. One arrangement that will
enable you to achieve this separation while still keeping your
hardware investment relatively modest is to house your storage on your
controller nodes.

.. index:: Deployment Configurations; Compact HA

Multi-node (HA) deployment (Compact)
------------------------------------

Production environments typically require high availability, which
involves several architectural requirements. Specifically, you will
need at least three controllers, and
certain components will be deployed in multiple locations to prevent
single points of failure. That's not to say, however, that you can't
reduce hardware requirements by combining your storage, network, and controller
nodes:

.. image:: /_images/deployment-ha-compact_svg.jpg

.. index:: Deployment Configurations; Full HA

Multi-node (HA) deployment (Full)
---------------------------------

For large production deployments, it's more common to provide
dedicated hardware for storage. This architecture gives you the advantages of
high availability, and this clean separation makes your cluster more
maintainable by separating storage and controller functionality:

.. image:: /_images/deployment-ha-full_svg.jpg

Where Fuel really shines is in the creation of more complex architectures, so
in this document you'll learn how to use Fuel to easily create a multi-node HA
OpenStack cluster. To reduce the amount of hardware you'll need to follow the
installation, however, the guide focuses on the Multi-node HA Compact
architecture.

Let's take a closer look at the details of this deployment configuration.

- :ref:`Non-HA Simple <Simple>`
- :ref:`HA Compact <HA_Compact>`
- :ref:`HA Full <HA_Full>`
- :ref:`RHOS Non-HA Simple <RHOS_Simple>`
- :ref:`RHOS HA Compact <RHOS_Compact>`

@@ -0,0 +1,23 @@

.. index:: Deployment Configurations; Non-HA Simple

.. _Simple:

Simple (non-HA) deployment
==========================

In a production environment, you will never have a Simple non-HA
deployment of OpenStack, partly because it forces you to make a number
of compromises as to the number and types of services that you can
deploy. It is, however, extremely useful if you just want to see how
OpenStack works from a user's point of view.

.. image:: /_images/deployment-simple_svg.jpg
   :align: center

More commonly, your OpenStack installation will consist of multiple
servers. Exactly how many is up to you, of course, but the main idea
is that your controller(s) are separate from your compute servers, on
which your users' VMs will actually run. One arrangement that will
enable you to achieve this separation while still keeping your
hardware investment relatively modest is to house your storage on your
controller nodes.

@@ -0,0 +1,20 @@

.. index:: Deployment Configurations; HA Compact

.. _HA_Compact:

Multi-node (HA) deployment (Compact)
====================================

Production environments typically require high availability, which
involves several architectural requirements. Specifically, you will
need at least three controllers, and
certain components will be deployed in multiple locations to prevent
single points of failure. That's not to say, however, that you can't
reduce hardware requirements by combining your storage, network, and controller
nodes:

.. image:: /_images/deployment-ha-compact_svg.jpg
   :align: center

We'll take a closer look at the details of this deployment configuration in
the :ref:`Close_look_Compact` section.

@@ -1,3 +1,7 @@

.. index:: Deployment Configurations; HA Compact Details

.. _Close_look_Compact:

A closer look at the Multi-node (HA) Compact deployment
=======================================================

@@ -6,6 +10,7 @@ deployment configuration and how it achieves high availability. As you may

recall, this configuration looks something like this:

.. image:: /_images/deployment-ha-compact_svg.jpg
   :align: center

OpenStack services are interconnected by RESTful HTTP-based APIs and
AMQP-based RPC messages. So redundancy for stateless OpenStack API

@@ -17,6 +22,7 @@ For example, RabbitMQ uses built-in clustering capabilities, while the

database uses MySQL/Galera replication.

.. image:: /_images/ha-overview_svg.jpg
   :align: center

Let's take a closer look at what an OpenStack deployment looks like, and
what it will take to achieve high availability for an OpenStack

@@ -0,0 +1,20 @@

.. index:: Deployment Configurations; HA Full

.. _HA_Full:

Multi-node (HA) deployment (Full)
=================================

For large production deployments, it's more common to provide
dedicated hardware for storage. This architecture gives you the advantages of
high availability, and this clean separation makes your cluster more
maintainable by separating storage and controller functionality:

.. image:: /_images/deployment-ha-full_svg.jpg
   :align: center

Where Fuel really shines is in the creation of more complex architectures, so
in this document you'll learn how to use Fuel to easily create a multi-node HA
OpenStack cluster. To reduce the amount of hardware you'll need to follow the
installation, however, the guide focuses on the Multi-node HA Compact
architecture.

@@ -1,7 +1,9 @@

Red Hat OpenStack Overview
==========================
.. index:: Deployment Configurations; Red Hat OpenStack

.. contents:: :local:
Red Hat OpenStack Architectures
===============================

.. contents :local:

Red Hat has partnered with Mirantis to offer an end-to-end supported
distribution of OpenStack powered by Fuel. Because Red Hat offers support

@@ -11,27 +13,27 @@ a highly available OpenStack cluster.

Below is the list of modifications:

.. topic:: Database backend:
**Database backend:**
    MySQL with Galera has been replaced with native replication in a
    Master/Slave configuration. The MySQL master is elected via Corosync,
    and master and slave status is managed via Pacemaker.

    MySQL with Galera has been replaced with native replication in a
    Master/Slave configuration. The MySQL master is elected via Corosync,
    and master and slave status is managed via Pacemaker.
**Messaging backend:**
    RabbitMQ has been replaced with QPID. Qpid is an AMQP provider that Red
    Hat offers, but it cannot be clustered in Red Hat's offering. As a result,
    Fuel configures three non-clustered, independent QPID brokers. Fuel still
    offers HA for the messaging backend via virtual IP management provided by
    Corosync.

.. topic:: Messaging backend:
**Nova networking:**
    Quantum is not available for Red Hat OpenStack because the Red Hat kernel
    lacks GRE tunneling support for OpenVSwitch. This issue should be
    fixed in a future release. As a result, Fuel for Red Hat OpenStack
    Platform will only support Nova networking.

    RabbitMQ has been replaced with QPID. Qpid is an AMQP provider that Red
    Hat offers, but it cannot be clustered in Red Hat's offering. As a result,
    Fuel configures three non-clustered, independent QPID brokers. Fuel still
    offers HA for the messaging backend via virtual IP management provided by
    Corosync.

.. topic:: Nova networking:

    Quantum is not available for Red Hat OpenStack because the Red Hat kernel
    lacks GRE tunneling support for OpenVSwitch. This issue should be
    fixed in a future release. As a result, Fuel for Red Hat OpenStack
    Platform will only support Nova networking.

.. index:: Deployment Configurations; Red Hat OpenStack; RHOS Non-HA Simple

.. _RHOS_Simple:

Simple (non-HA) Red Hat OpenStack deployment
--------------------------------------------

@@ -43,6 +45,7 @@ deploy. It is, however, extremely useful if you just want to see how

OpenStack works from a user's point of view.

.. image:: /_images/deployment-simple_svg.jpg
   :align: center

More commonly, your OpenStack installation will consist of multiple
servers. Exactly how many is up to you, of course, but the main idea
@@ -52,6 +55,10 @@ enable you to achieve this separation while still keeping your

hardware investment relatively modest is to house your storage on your
controller nodes.

.. index:: Deployment Configurations; Red Hat OpenStack; RHOS HA Compact

.. _RHOS_Compact:

Multi-node (HA) Red Hat OpenStack deployment (Compact)
------------------------------------------------------

@@ -64,6 +71,7 @@ reduce hardware requirements by combining your storage, network, and controller

nodes:

.. image:: /_images/deployment-ha-compact-red-hat_svg.jpg
   :align: center

OpenStack services are interconnected by RESTful HTTP-based APIs and AMQP-based
RPC messages. So redundancy for stateless OpenStack API services is implemented
@@ -75,3 +83,4 @@ the help of Pacemaker), while QPID is offered in three independent brokers with

virtual IP management to provide high availability.

.. image:: /_images/ha-overview-red-hat_svg.jpg
   :align: center

@@ -1,12 +1,12 @@

Logical Setup
=============

.. contents:: :local:
.. contents :local:

An OpenStack HA cluster involves, at a minimum, three types of nodes:
controller nodes, compute nodes, and storage nodes.

Controller Nodes
^^^^^^^^^^^^^^^^
----------------

The first order of business in achieving high availability (HA) is
redundancy, so the first step is to provide multiple controller nodes.

@@ -15,6 +15,7 @@ achieve HA, and Galera is a quorum-based system. That means that you must provide

at least 3 controller nodes.

.. image:: /_images/logical-diagram-controller_svg.jpg
   :align: center

Every OpenStack controller runs HAProxy, which manages a single External
Virtual IP (VIP) for all controller nodes and provides HTTP and TCP load
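To make the VIP idea concrete, here is a minimal, illustrative HAProxy fragment, not the configuration Fuel actually generates; the addresses, port, and server names are invented for the example. It shows one API service exposed on the VIP and balanced across three controllers:

```
# Hypothetical HAProxy listener for one OpenStack API behind the external VIP.
listen nova-api
    bind 10.0.0.10:8774          # external VIP (example address)
    balance roundrobin           # spread requests across controllers
    option httpchk               # drop a controller whose API stops answering
    server controller-1 192.168.0.2:8774 check
    server controller-2 192.168.0.3:8774 check
    server controller-3 192.168.0.4:8774 check
```

The pattern is the same for each stateless API service: clients only ever see the VIP, and HAProxy health checks decide which controllers receive traffic.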
@@ -42,7 +43,7 @@ mechanism for achieving HA:

* Quantum agents are managed by Pacemaker.

Compute Nodes
^^^^^^^^^^^^^
-------------

OpenStack compute nodes are, in many ways, the foundation of your
cluster; they are the servers on which your users will create their

@@ -53,9 +54,10 @@ redundancy to the end-users of Horizon and REST APIs, reaching out to

controller nodes using the VIP and going through HAProxy.

.. image:: /_images/logical-diagram-compute_svg.jpg
   :align: center

Storage Nodes
^^^^^^^^^^^^^
-------------

In this OpenStack cluster reference architecture, shared storage acts
as a backend for Glance, so that multiple Glance instances running on

@@ -65,3 +67,4 @@ it not only for storing VM images, but also for any other objects such

as user files.

.. image:: /_images/logical-diagram-storage_svg.jpg
   :align: center

@@ -16,6 +16,7 @@ deployment is to allocate 4 nodes:

- 1 compute node

.. image:: /_images/deployment-ha-compact_svg.jpg
   :align: center

If you want to run storage separately from the controllers, you can do that as
well by raising the bar to 9 nodes:

@@ -29,6 +30,7 @@ well by raising the bar to 9 nodes:

- 1 Compute node

.. image:: /_images/deployment-ha-full_svg.jpg
   :align: center

Of course, you are free to choose how to deploy OpenStack based on the
amount of available hardware and on your goals (such as whether you

@@ -3,18 +3,21 @@

Network Architecture
====================

.. contents:: :local:
.. contents :local:

The current architecture assumes the presence of 3 NICs, but it can be
customized for two or 4+ network interfaces. Most servers are built with at least
two network interfaces. In this case, let's consider a typical example of three
NIC cards. They're utilized as follows:

- **eth0**: the internal management network, used for communication with Puppet & Cobbler
**eth0**:
    The internal management network, used for communication with Puppet & Cobbler

- **eth1**: the public network, and floating IPs assigned to VMs
**eth1**:
    The public network, and floating IPs assigned to VMs

- **eth2**: the private network, for communication between OpenStack VMs, and the
**eth2**:
    The private network, for communication between OpenStack VMs, and the
    bridge interface (VLANs)

In the multi-host networking mode, you can choose between the FlatDHCPManager
@@ -22,11 +25,12 @@ and VlanManager network managers in OpenStack. The figure below illustrates the

relevant nodes and networks.

.. image:: /_images/080-networking-diagram_svg.jpg
   :align: center

Let's take a closer look at each network and how it's used within the cluster.

Public Network
^^^^^^^^^^^^^^
--------------

This network allows inbound connections to VMs from the outside world (allowing
users to connect to VMs from the Internet). It also allows outbound connections
@@ -54,7 +58,7 @@ The public network also provides VIPs for Endpoint nodes, which are used to

connect to OpenStack services APIs.

Internal (Management) Network
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-----------------------------

The internal network connects all OpenStack nodes in the cluster. All components
of an OpenStack cluster communicate with each other using this network. This
@@ -68,7 +72,7 @@ This network usually is a single C class network from your private, non-globally

routed IP address range.

Private Network
^^^^^^^^^^^^^^^
---------------

The private network facilitates communication between each tenant's VMs. Private
network address spaces are part of the enterprise network address space. Fixed

@@ -1,7 +1,7 @@

Neutron vs. nova-network
^^^^^^^^^^^^^^^^^^^^^^^^
========================

Neutron (formerly Quantum) is a service which provides networking-as-a-service
Neutron (formerly Quantum) is a service which provides Networking-as-a-Service
functionality in OpenStack. It has a rich tenant-facing API for defining network
connectivity and addressing in the cloud, and gives operators the ability to
leverage different networking technologies to power their cloud networking.

@@ -1,5 +1,5 @@

Cinder vs. nova-volume
^^^^^^^^^^^^^^^^^^^^^^
======================

Cinder is a persistent storage management service, also known as
block-storage-as-a-service. It was created to replace nova-volume, and

@@ -2,24 +2,21 @@

.. _Swift-and-object-storage-notes:

Object storage deployment notes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Object Storage Deployment
=========================

Fuel currently supports several scenarios for deploying object storage:

* Glance + filesystem

**Glance + filesystem**
    By default, Glance uses the file system backend to store virtual machine images.
    In this case, you can use any of the shared file systems supported by Glance.

* Swift on controllers

**Swift on controllers**
    In this mode the roles of swift-storage and swift-proxy are combined with a
    nova-controller. Use it only for testing in order to save nodes. It's not
    suitable for production environments.

* Swift on dedicated nodes

**Swift on dedicated nodes**
    In this case the Proxy service and Storage (account/container/object) services
    reside on separate nodes, with two proxy nodes and a minimum of three storage
    nodes.

@ -0,0 +1,404 @@

.. index:: HA with Pacemaker and Corosync

HA with Pacemaker and Corosync
==============================

.. index:: HA with Pacemaker and Corosync; Corosync settings

Corosync settings
-----------------

Corosync uses the Totem protocol, an implementation of the Virtual Synchrony
protocol. It uses it to provide connectivity between cluster nodes, to decide
whether the cluster is quorate to provide services, and to provide a data layer
for services that want to use the features of Virtual Synchrony.

Corosync is used in Fuel as the communication and quorum service for the
Pacemaker cluster resource manager. Its main configuration file is located in
``/etc/corosync/corosync.conf``.

The main section of ``corosync.conf`` is the ``totem`` section, which describes
how cluster nodes should communicate::

  totem {
    version: 2
    token: 3000
    token_retransmits_before_loss_const: 10
    join: 60
    consensus: 3600
    vsftype: none
    max_messages: 20
    clear_node_high_bit: yes
    rrp_mode: none
    secauth: off
    threads: 0
    interface {
      ringnumber: 0
      bindnetaddr: 10.107.0.8
      mcastaddr: 239.1.1.2
      mcastport: 5405
    }
  }

Corosync usually uses multicast UDP transport and sets up a "redundant ring" for
communication. Currently Fuel deploys controllers with one redundant ring. Each
ring has its own multicast address and a bind net address that specifies on which
interface corosync should join the corresponding multicast group. Fuel uses the
default corosync configuration, which can also be altered in the Fuel manifests.

.. seealso:: ``man corosync.conf`` or the corosync documentation at
   http://clusterlabs.org/doc/ if you want to know how to tune the installation
   completely.
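The ``totem`` block above uses corosync's simple ``section { key: value }``
syntax. As an illustration of how the settings nest, here is a minimal parser
sketch in Python (a hypothetical inspection helper, not part of Fuel or
corosync, which parse this file themselves):

```python
# Minimal parser for corosync.conf-style "section { key: value }" syntax.
# Illustrative only -- shows how totem/interface settings nest.

def parse_corosync(text):
    stack = [{}]
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith('#'):
            continue
        if line.endswith('{'):                 # open a nested section
            section = {}
            stack[-1][line[:-1].strip()] = section
            stack.append(section)
        elif line == '}':                      # close the current section
            stack.pop()
        else:                                  # a "key: value" pair
            key, _, value = line.partition(':')
            stack[-1][key.strip()] = value.strip()
    return stack[0]

conf = parse_corosync("""
totem {
  version: 2
  token: 3000
  interface {
    ringnumber: 0
    bindnetaddr: 10.107.0.8
    mcastaddr: 239.1.1.2
    mcastport: 5405
  }
}
""")
```

For example, ``conf["totem"]["interface"]["bindnetaddr"]`` yields the ring's
bind net address.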

.. index:: HA with Pacemaker and Corosync; Pacemaker settings

Pacemaker settings
------------------

Pacemaker is the cluster resource manager used by Fuel to manage Quantum
resources, HAProxy, virtual IP addresses and the MySQL Galera cluster (or simple
MySQL master/slave replication in the case of a RHOS installation). It does so
through Open Cluster Framework (see http://linux-ha.org/wiki/OCF_Resource_Agents)
agent scripts, which are deployed to start/stop/monitor the Quantum services and
to manage HAProxy, virtual IP addresses and MySQL replication. These are located
at ``/usr/lib/ocf/resource.d/mirantis/quantum-agent-[ovs|dhcp|l3]``,
``/usr/lib/ocf/resource.d/mirantis/mysql`` and ``/usr/lib/ocf/resource.d/ocf/haproxy``.
First, the MySQL agent is started and HAProxy and the virtual IP addresses are
set up. Then the Open vSwitch and metadata agents are cloned on all the nodes,
and finally the DHCP and L3 agents are started and tied together by Pacemaker
constraints called "colocation".

.. seealso:: `Using Rules to Determine Resource
   Location <http://clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Pacemaker_Explained/_using_rules_to_determine_resource_location.html>`_

The MySQL HA script primarily targets cluster rebuild after a power failure or
a similar disaster: it needs a working corosync, in which it forms a quorum of
replication epochs and then elects the master from the node with the newest
epoch. Be aware of the default five-minute interval within which every cluster
member must boot in order to participate in the election. Every node is
self-aware: if nobody pushes an epoch higher than the one it retrieved from
corosync, it will simply elect itself as master.
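Conceptually, the election reduces to picking the member that reports the newest
replication epoch; a node that sees no higher epoch than its own elects itself.
A toy Python model of that rule (illustrative only, not the logic of the actual
OCF script):

```python
# Toy model of epoch-based master election: every node reports the newest
# replication epoch it has seen, gathered over corosync, and the node with
# the highest epoch becomes master.

def elect_master(epochs):
    """epochs: {node_name: replication_epoch}; returns the winning node."""
    return max(epochs, key=epochs.get)

# controller-02 holds the newest epoch, so it wins the election.
master = elect_master({"controller-01": 17, "controller-02": 21,
                       "controller-03": 19})

# A node that sees no higher epoch than its own (e.g. a lone survivor)
# elects itself.
lone = elect_master({"controller-01": 17})
```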

.. index:: HA with Pacemaker and Corosync; How Fuel Deploys HA

How Fuel Deploys HA
-------------------

Fuel installs the corosync service, configures ``corosync.conf`` and includes
the pacemaker service plugin in ``/etc/corosync/service.d``. The corosync
service then starts and spawns the corresponding pacemaker processes. Fuel
configures the cluster properties of pacemaker and then injects the resource
configuration for virtual IPs, haproxy, mysql and quantum agent resources::

  primitive p_haproxy ocf:pacemaker:haproxy \
    op monitor interval="20" timeout="30" \
    op start interval="0" timeout="30" \
    op stop interval="0" timeout="30"
  primitive p_mysql ocf:mirantis:mysql \
    op monitor interval="60" timeout="30" \
    op start interval="0" timeout="450" \
    op stop interval="0" timeout="150"
  primitive p_quantum-dhcp-agent ocf:mirantis:quantum-agent-dhcp \
    op monitor interval="20" timeout="30" \
    op start interval="0" timeout="360" \
    op stop interval="0" timeout="360" \
    params tenant="services" password="quantum" username="quantum" \
      os_auth_url="http://10.107.2.254:35357/v2.0" \
    meta is-managed="true"
  primitive p_quantum-l3-agent ocf:mirantis:quantum-agent-l3 \
    op monitor interval="20" timeout="30" \
    op start interval="0" timeout="360" \
    op stop interval="0" timeout="360" \
    params tenant="services" password="quantum" syslog="true" username="quantum" \
      debug="true" os_auth_url="http://10.107.2.254:35357/v2.0" \
    meta is-managed="true" target-role="Started"
  primitive p_quantum-metadata-agent ocf:mirantis:quantum-agent-metadata \
    op monitor interval="60" timeout="30" \
    op start interval="0" timeout="30" \
    op stop interval="0" timeout="30"
  primitive p_quantum-openvswitch-agent ocf:pacemaker:quantum-agent-ovs \
    op monitor interval="20" timeout="30" \
    op start interval="0" timeout="480" \
    op stop interval="0" timeout="480"
  primitive vip__management_old ocf:heartbeat:IPaddr2 \
    op monitor interval="2" timeout="30" \
    op start interval="0" timeout="30" \
    op stop interval="0" timeout="30" \
    params nic="br-mgmt" iflabel="ka" ip="10.107.2.254"
  primitive vip__public_old ocf:heartbeat:IPaddr2 \
    op monitor interval="2" timeout="30" \
    op start interval="0" timeout="30" \
    op stop interval="0" timeout="30" \
    params nic="br-ex" iflabel="ka" ip="172.18.94.46"
  clone clone_p_haproxy p_haproxy \
    meta interleave="true"
  clone clone_p_mysql p_mysql \
    meta interleave="true" is-managed="true"
  clone clone_p_quantum-metadata-agent p_quantum-metadata-agent \
    meta interleave="true" is-managed="true"
  clone clone_p_quantum-openvswitch-agent p_quantum-openvswitch-agent \
    meta interleave="true"

Fuel then ties them together with pacemaker colocation and ordering constraints::

  colocation dhcp-with-metadata inf: p_quantum-dhcp-agent \
    clone_p_quantum-metadata-agent
  colocation dhcp-with-ovs inf: p_quantum-dhcp-agent \
    clone_p_quantum-openvswitch-agent
  colocation dhcp-without-l3 -100: p_quantum-dhcp-agent p_quantum-l3-agent
  colocation l3-with-metadata inf: p_quantum-l3-agent clone_p_quantum-metadata-agent
  colocation l3-with-ovs inf: p_quantum-l3-agent clone_p_quantum-openvswitch-agent
  order dhcp-after-metadata inf: clone_p_quantum-metadata-agent p_quantum-dhcp-agent
  order dhcp-after-ovs inf: clone_p_quantum-openvswitch-agent p_quantum-dhcp-agent
  order l3-after-metadata inf: clone_p_quantum-metadata-agent p_quantum-l3-agent
  order l3-after-ovs inf: clone_p_quantum-openvswitch-agent p_quantum-l3-agent

.. index:: HA with Pacemaker and Corosync; How To Troubleshoot Corosync/Pacemaker

How To Troubleshoot Corosync/Pacemaker
--------------------------------------

Pacemaker and Corosync come with several CLI utilities that can help you
troubleshoot and understand what is going on.

crm - Cluster Resource Manager
++++++++++++++++++++++++++++++

This is the main pacemaker utility; it shows you the state of the pacemaker
cluster. The most useful commands for checking whether your cluster is
consistent are the following:

**crm status**

This command shows the main information about the pacemaker cluster and the
state of the resources being managed::

  crm(live)# status
  ============
  Last updated: Tue May 14 15:13:47 2013
  Last change: Mon May 13 18:36:56 2013 via cibadmin on fuel-controller-01
  Stack: openais
  Current DC: fuel-controller-01 - partition with quorum
  Version: 1.1.6-9971ebba4494012a93c03b40a2c58ec0eb60f50c
  5 Nodes configured, 5 expected votes
  3 Resources configured.
  ============

  Online: [ fuel-controller-01 fuel-controller-02 fuel-controller-03
    fuel-controller-04 fuel-controller-05 ]

  p_quantum-plugin-openvswitch-agent (ocf::pacemaker:quantum-agent-ovs): Started fuel-controller-01
  p_quantum-dhcp-agent (ocf::pacemaker:quantum-agent-dhcp): Started fuel-controller-01
  p_quantum-l3-agent (ocf::pacemaker:quantum-agent-l3): Started fuel-controller-01

**crm(live)# resource**

Here you can enter resource-specific commands::

  crm(live)resource# status

  p_quantum-plugin-openvswitch-agent (ocf::pacemaker:quantum-agent-ovs) Started
  p_quantum-dhcp-agent (ocf::pacemaker:quantum-agent-dhcp) Started
  p_quantum-l3-agent (ocf::pacemaker:quantum-agent-l3) Started

**crm(live)resource# start|restart|stop|cleanup <resource_name>**

These commands let you start, restart, stop and clean up resources, respectively.

**cleanup**

The cleanup command resets the state of a resource on the nodes after a failure
or an unexpected operation (e.g. residuals of a SysVInit operation on a resource
that pacemaker manages by itself), letting pacemaker decide on which node to run
the resource.

E.g.::

  3 Nodes configured, 3 expected votes
  3 Resources configured.
  ============

  3 Nodes configured, 3 expected votes
  16 Resources configured.


  Online: [ controller-01 controller-02 controller-03 ]

  vip__management_old (ocf::heartbeat:IPaddr2): Started controller-01
  vip__public_old (ocf::heartbeat:IPaddr2): Started controller-02
  Clone Set: clone_p_haproxy [p_haproxy]
      Started: [ controller-01 controller-02 controller-03 ]
  Clone Set: clone_p_mysql [p_mysql]
      Started: [ controller-01 controller-02 controller-03 ]
  Clone Set: clone_p_quantum-openvswitch-agent [p_quantum-openvswitch-agent]
      Started: [ controller-01 controller-02 controller-03 ]
  Clone Set: clone_p_quantum-metadata-agent [p_quantum-metadata-agent]
      Started: [ controller-01 controller-02 controller-03 ]
  p_quantum-dhcp-agent (ocf::mirantis:quantum-agent-dhcp): Started controller-01
  p_quantum-l3-agent (ocf::mirantis:quantum-agent-l3): Started controller-03

In this case there were residual OpenStack agent processes that had been started
by pacemaker during a network failure and cluster partitioning. After
connectivity was restored, pacemaker saw these duplicate resources running on
different nodes. You can let it clean up the situation automatically or, if you
do not want to wait, clean them up manually.

.. seealso::

   crm interactive help and documentation resources for Pacemaker
   (e.g. http://doc.opensuse.org/products/draft/SLE-HA/SLE-ha-guide_sd_draft/cha.ha.manual_config.html).

In some network scenarios the cluster can be split into several parts, with
``crm status`` showing something like this::

  On ctrl1
  ============
  ….
  Online: [ ctrl1 ]

  On ctrl2
  ============
  ….
  Online: [ ctrl2 ]

  On ctrl3
  ============
  ….
  Online: [ ctrl3 ]

You can troubleshoot this by checking corosync connectivity between the nodes.
There are several points to check:

1) Multicast should be enabled in the network, the IP address configured as
   multicast should not be filtered, and the ``mcastport`` and ``mcastport - 1``
   UDP ports should be accepted on the management network between the
   controllers.

2) corosync should start after the network interfaces are configured.

3) ``bindnetaddr`` should be in the management network, or at least in the same
   multicast-reachable segment.
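Point 1 implies that two UDP ports per ring must pass between the controllers:
``mcastport`` itself and ``mcastport - 1`` (corosync sends from the lower port).
A small Python sketch (a hypothetical helper, for illustration only) derives
both ports from the ``totem`` settings shown earlier and renders example
iptables ACCEPT rules:

```python
# Corosync receives multicast on mcastport and sends from mcastport - 1,
# so both UDP ports must be accepted between controllers on the
# management network.

def corosync_udp_ports(mcastport=5405):
    """Return the pair of UDP ports corosync needs for one ring."""
    return (mcastport, mcastport - 1)

def iptables_rules(mcastaddr="239.1.1.2", mcastport=5405):
    # Hypothetical helper emitting example ACCEPT rules as strings;
    # the defaults mirror the totem interface block shown above.
    return [
        "-A INPUT -p udp -d %s --dport %d -j ACCEPT" % (mcastaddr, port)
        for port in corosync_udp_ports(mcastport)
    ]

ports = corosync_udp_ports(5405)   # both 5405 and 5404 must be open
```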

You can check multicast group membership in the output of ``ip maddr show``:

.. code-block:: none
   :emphasize-lines: 1,8

   5: br-mgmt
       link 33:33:00:00:00:01
       link 01:00:5e:00:00:01
       link 33:33:ff:a3:e2:57
       link 01:00:5e:01:01:02
       link 01:00:5e:00:00:12
       inet 224.0.0.18
       inet 239.1.1.2
       inet 224.0.0.1
       inet6 ff02::1:ffa3:e257
       inet6 ff02::1

**corosync-objctl**

This command is used to get/set runtime corosync configuration values, including
the status of the corosync redundant ring members::

  runtime.totem.pg.mrp.srp.members.134245130.ip=r(0) ip(10.107.0.8)
  runtime.totem.pg.mrp.srp.members.134245130.join_count=1
  ...
  runtime.totem.pg.mrp.srp.members.201353994.ip=r(0) ip(10.107.0.12)
  runtime.totem.pg.mrp.srp.members.201353994.join_count=1
  runtime.totem.pg.mrp.srp.members.201353994.status=joined

If the IP of the node is 127.0.0.1, it means that corosync started when only the
loopback interface was available and bound to it.

If there is only one IP in the members list, there is a corosync connectivity
issue, because the node does not see the other members. The same applies when
the members list is incomplete.
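A quick sanity check can be scripted against this output: collect the member
ring IPs and flag the loopback and incomplete-membership cases described above.
The helper below is hypothetical and assumes the ``corosync-objctl`` output
format shown:

```python
import re

# Extract member ring IPs from `corosync-objctl` runtime output and flag
# the two failure cases described above: corosync bound to loopback, or a
# members list that does not contain every expected controller.

MEMBER_IP = re.compile(r'^runtime\.totem\.pg\.mrp\.srp\.members\.\d+\.ip='
                       r'r\(\d+\) ip\(([\d.]+)\)$')

def member_ips(objctl_output):
    ips = []
    for line in objctl_output.splitlines():
        m = MEMBER_IP.match(line.strip())
        if m:
            ips.append(m.group(1))
    return ips

def diagnose(objctl_output, expected_members):
    ips = set(member_ips(objctl_output))
    if "127.0.0.1" in ips:
        return "corosync bound to loopback"
    if len(ips) < expected_members:
        return "incomplete membership: connectivity issue"
    return "ok"

sample = """\
runtime.totem.pg.mrp.srp.members.134245130.ip=r(0) ip(10.107.0.8)
runtime.totem.pg.mrp.srp.members.201353994.ip=r(0) ip(10.107.0.12)
"""
```

For a three-controller cluster, ``diagnose(sample, 3)`` reports the incomplete
membership, since only two ring IPs are visible.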

.. index:: HA with Pacemaker and Corosync; How To smoke test HA

How To Smoke Test HA
--------------------

To test that Quantum HA is working, simply shut down the node hosting the
Quantum agents (either gracefully or forcefully). You should see the agents
start on another node::

  # crm status

  Online: [ fuel-controller-02 fuel-controller-03 fuel-controller-04 fuel-controller-05 ]
  OFFLINE: [ fuel-controller-01 ]

  p_quantum-plugin-openvswitch-agent (ocf::pacemaker:quantum-agent-ovs): Started fuel-controller-02
  p_quantum-dhcp-agent (ocf::pacemaker:quantum-agent-dhcp): Started fuel-controller-02
  p_quantum-l3-agent (ocf::pacemaker:quantum-agent-l3): Started fuel-controller-02

and see the corresponding Quantum interfaces on the new Quantum node::

  # ip link show

  11: tap7b4ded0e-cb: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
  12: qr-829736b7-34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
  13: qg-814b8c84-8f: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc

You can also check the ``ovs-vsctl show`` output to see that all corresponding
tunnels/bridges/interfaces are created and connected properly::

  ce754a73-a1c4-4099-b51b-8b839f10291c
      Bridge br-mgmt
          Port br-mgmt
              Interface br-mgmt
                  type: internal
          Port "eth1"
              Interface "eth1"
      Bridge br-ex
          Port br-ex
              Interface br-ex
                  type: internal
          Port "eth0"
              Interface "eth0"
          Port "qg-814b8c84-8f"
              Interface "qg-814b8c84-8f"
                  type: internal
      Bridge br-int
          Port patch-tun
              Interface patch-tun
                  type: patch
                  options: {peer=patch-int}
          Port br-int
              Interface br-int
                  type: internal
          Port "tap7b4ded0e-cb"
              tag: 1
              Interface "tap7b4ded0e-cb"
                  type: internal
          Port "qr-829736b7-34"
              tag: 1
              Interface "qr-829736b7-34"
                  type: internal
      Bridge br-tun
          Port "gre-1"
              Interface "gre-1"
                  type: gre
                  options: {in_key=flow, out_key=flow, remote_ip="10.107.0.8"}
          Port "gre-2"
              Interface "gre-2"
                  type: gre
                  options: {in_key=flow, out_key=flow, remote_ip="10.107.0.5"}
          Port patch-int
              Interface patch-int
                  type: patch
                  options: {peer=patch-tun}
          Port "gre-3"
              Interface "gre-3"
                  type: gre
                  options: {in_key=flow, out_key=flow, remote_ip="10.107.0.6"}
          Port "gre-4"
              Interface "gre-4"
                  type: gre
                  options: {in_key=flow, out_key=flow, remote_ip="10.107.0.7"}
          Port br-tun
              Interface br-tun
                  type: internal
      ovs_version: "1.4.0+build0"
@ -8,7 +8,7 @@ v3.1-grizzly
New Features in Fuel 3.1
-------------------------

.. contents:: :local:
.. contents :local:

* Combined Fuel library and Fuel Web products
* Option to deploy Red Hat Enterprise Linux® OpenStack® Platform
@ -18,11 +18,11 @@ New Features in Fuel 3.1
* Improved High Availability resiliency
* Horizon password entry can be hidden

Fuel 3.1 with Integrated graphical and command line controls
Fuel 3.1 with Integrated Graphical and Command Line Controls
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In earlier releases, Fuel was distributed as two packages – “Fuel Web” for
graphical workflow, and “Fuel library” for command-line based manipulation.
graphical workflow, and “Fuel Library” for command-line based manipulation.
Starting with this 3.1 release, we’ve integrated these two capabilities into
a single offering, referred to simply as Fuel. If you used Fuel Web, you’ll
see that capability along with its latest improvements to that capability in
@ -32,7 +32,7 @@ configurations. Advanced users with more complex environmental needs can
still get command-line access to the underlying deployment engine (aka “Fuel
Library”).

Option to deploy Red Hat Enterprise Linux® OpenStack® Platform
Option to Deploy Red Hat Enterprise Linux® OpenStack® Platform
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Mirantis Fuel now includes the ability to deploy the Red Hat Enterprise

@ -44,8 +44,8 @@ nodes or installing the Red Hat provided OpenStack distribution onto Red Hat
Enterprise Linux powered nodes.

.. note:: A Red Hat subscription is required to download and deploy Red Hat
   Enterprise Linux OpenStack Platform.

   Enterprise Linux OpenStack Platform.

Mirantis OpenStack Health Check
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@ -78,7 +78,7 @@ Improved High Availability resiliency

To improve the resiliency of the Mirantis OpenStack High Availability
reference architecture, Fuel now deploys all HA services under Pacemaker, a
scalable cluster resource manager developed by Clusterlabs. Additional
scalable cluster resource manager developed by ClusterLabs. Additional
options in the Fuel Library have also been added to Corosync to enable more
granular tuning.
@ -1,81 +1,29 @@

.. header::

   .. cssclass:: header-table

   +-------------------------------------+-----------------------------------+
   | Fuel™ for Openstack v3.1            | .. cssclass:: right               |
   |                                     |                                   |
   | User Guide                          | ###Section###                     |
   +-------------------------------------+-----------------------------------+
   .. cssclass:: header-table

   +-------------------------------------+-----------------------------------+
   | Fuel™ for Openstack v3.1            | .. cssclass:: right               |
   |                                     |                                   |
   | User Guide                          | ###Section###                     |
   +-------------------------------------+-----------------------------------+

.. footer::

   .. cssclass:: footer-table

   +--------------------------+----------------------+
   |                          | .. cssclass:: right  |
   |                          |                      |
   | ©2013, Mirantis Inc.     | Page ###Page###      |
   +--------------------------+----------------------+
   .. cssclass:: footer-table

   +--------------------------+----------------------+
   |                          | .. cssclass:: right  |
   |                          |                      |
   | ©2013, Mirantis Inc.     | Page ###Page###      |
   +--------------------------+----------------------+

.. raw:: pdf

   PageBreak oneColumn

.. include:: index.rst

.. raw:: pdf

   PageBreak

.. include:: copyright.rst

.. raw:: pdf

   PageBreak

.. contents:: Table of Contents
   :depth: 2
   :depth: 2

.. raw:: pdf

   PageBreak

.. include:: 0020-about-fuel.rst

.. raw:: pdf

   PageBreak

.. include:: 0030-release-notes.rst

.. raw:: pdf

   PageBreak

.. include:: 0040-reference-architecture.rst

.. raw:: pdf

   PageBreak

.. include:: 0045-installation-fuel-ui.rst

.. raw:: pdf

   PageBreak

.. include:: 0050-installation-fuel-cli.rst

.. raw:: pdf

   PageBreak

.. include:: 0055-production-considerations.rst

.. raw:: pdf

   PageBreak

.. include:: 0060-frequently-asked-questions.rst
.. include:: contents.rst
@ -1,3 +1,5 @@
:orphan:


.. header::