rearrange pages after removing duplicates in docs

This commit is contained in:
Alexey Abashkin 2012-12-24 19:04:54 +04:00
parent 5acf5de36b
commit 261b5b0679
21 changed files with 25 additions and 1147 deletions


@ -8,29 +8,12 @@ Table of contents
.. toctree::
:maxdepth: 2
pages/package-contents-2
pages/how-it-works-2
pages/supported-software-versions-2
pages/reference-architecture-2
pages/reference-architecture/overview-2
pages/reference-architecture/logical-setup-2
pages/reference-architecture/cluster-sizing-2
pages/reference-architecture/network-setup-3
pages/reference-architecture/cinder-vs-nova-volume
pages/reference-architecture/quantum-vs-nova-network
pages/installation-instructions
pages/installation-instructions/introduction-2
pages/installation-instructions/machines-2
pages/installation-instructions/network-setup-4
pages/installation-instructions/installing-configuring-puppet-master-2
pages/installation-instructions/installing-configuring-cobbler-2
pages/installation-instructions/deploying-openstack-2
pages/installation-instructions/common-technical-issues-2
pages/frequently-asked-questions
pages/release-notes-2
pages/release-notes/v0-2-0-folsom
pages/release-notes/v0-1-0-essex
pages/known-issues-and-workarounds
pages/known-issues-and-workarounds/rabbitmq
pages/known-issues-and-workarounds/galera
pages/0010-package-contents
pages/0020-how-it-works
pages/0030-supported-software-versions
pages/0040-reference-architecture
pages/0050-installation-instructions
pages/0060-frequently-asked-questions
pages/0070-release-notes
pages/0080-known-issues-and-workarounds


@ -3,165 +3,9 @@ Reference Architecture
.. contents:: :local:
This reference architecture, combined with Cobbler & Puppet automation, allows you to easily deploy OpenStack in a highly available mode. It means that the failure of a single service or even a whole controller machine will not affect your ability to control the cloud. High availability for OpenStack is provided by integrated open source components, including:
* keepalived
* HAProxy
* RabbitMQ
* MySQL/Galera
It is important to mention that the entire reference architecture is based on the active/active mode for all components. There are no active/standby elements, so the deployment can be easily scaled by adding new active nodes if/as needed: controllers, compute nodes, or storage nodes.
Overview
--------
OpenStack services are interconnected by RESTful HTTP-based APIs and AMQP-based RPC messages. So, redundancy for stateless OpenStack API services is implemented through the combination of VIP management (keepalived) and load balancing (HAProxy). Stateful OpenStack components, such as state database and messaging server, rely on their respective active/active modes for high availability -- RabbitMQ uses built-in clustering capabilities, while the database uses MySQL/Galera replication.
.. image:: https://docs.google.com/drawings/pub?id=1PzRBUaZEPMG25488mlb42fRdlFS3BygPwbAGBHudnTM&w=750&h=491
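For illustration only, keepalived's VRRP configuration for the controller VIP might look like the sketch below; the interface name, router ID, and the 10.0.0.110 VIP are assumptions rather than values taken from the Fuel manifests::

    vrrp_instance openstack_vip {
        state MASTER              # one controller is MASTER, the others are BACKUP
        interface eth0            # management interface that carries the VIP
        virtual_router_id 51
        priority 101              # use a different priority on each controller
        advert_int 1
        virtual_ipaddress {
            10.0.0.110/24 dev eth0
        }
    }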
Logical Setup
-------------
Controller Nodes
^^^^^^^^^^^^^^^^
Let us take a closer look at what an OpenStack deployment looks like and what it takes to achieve high availability:
* Every OpenStack cluster has multiple controller nodes in order to provide redundancy. To avoid split-brain in the Galera cluster (a quorum-based system), it is strongly advised to plan for at least 3 controller nodes
* Every OpenStack controller is running keepalived which manages a single VIP for all controller nodes
* Every OpenStack controller is running HAProxy for HTTP and TCP load balancing of requests going to OpenStack API services, RabbitMQ, and MySQL
* When end users access the OpenStack cloud using Horizon and/or the REST API, the request goes to a live controller node which currently holds the VIP, and the connection is terminated by HAProxy
* HAProxy performs load balancing and it may very well send the current request to a service on a different controller node. That includes services and components like nova-api, glance-api, keystone-api, quantum-api, nova-scheduler, Horizon, MySQL, and RabbitMQ
* Involved services and components have their own mechanisms for achieving HA
* nova-api, glance-api, keystone-api, quantum-api and nova-scheduler are stateless services that do not require any special attention besides load balancing
* Horizon, as a typical web application, requires sticky sessions to be enabled at the load balancer
* RabbitMQ provides active/active high availability using mirrored queues
* MySQL high availability is achieved through Galera active/active multi-master deployment
.. image:: https://docs.google.com/drawings/pub?id=1aftE8Yes7CdVSZgZD1A82T_2GqL2SMImtRYU914IMyQ&w=869&h=855
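As an illustration of the load-balancing layer described above, an HAProxy listener for nova-api might look roughly like the sketch below; the VIP and backend addresses are hypothetical, and the real configuration is generated by the Fuel Puppet manifests::

    listen nova-api
        bind 10.0.0.110:8774              # VIP managed by keepalived
        balance roundrobin
        option tcplog
        server fuel-01 10.0.0.101:8774 check
        server fuel-02 10.0.0.102:8774 check
        server fuel-03 10.0.0.103:8774 check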
Compute Nodes
^^^^^^^^^^^^^
OpenStack compute nodes need to "talk" to controller nodes and reach out to essential services such as RabbitMQ and MySQL. They use the same approach that provides redundancy to the end-users of Horizon and REST APIs, reaching out to controller nodes using VIP and going through HAProxy.
.. image:: https://docs.google.com/drawings/pub?id=16gsjc81Ptb5SL090XYAN8Kunrxfg6lScNCo3aReqdJI&w=873&h=801
Storage Nodes
^^^^^^^^^^^^^
This reference architecture requires shared storage to be present in an OpenStack cluster. The shared storage will act as a backend for Glance, so that multiple Glance instances running on controller nodes can store and retrieve images from it. Our recommendation is to deploy Swift and use it not only for storing VM images, but also for any other objects.
.. image:: https://docs.google.com/drawings/pub?id=1Xd70yy7h5Jq2oBJ12fjnPWP8eNsWilC-ES1ZVTFo0m8&w=777&h=778
Cluster Sizing
--------------
This reference architecture is well suited for production-grade OpenStack deployments at medium and large scale, when you can afford to allocate several servers for your OpenStack controller nodes in order to build a fully redundant and highly available environment.
The absolute minimum requirement for a highly-available OpenStack deployment is to allocate 4 nodes:
* 3 controller nodes, combined with storage
* 1 compute node
.. image:: https://docs.google.com/drawings/pub?id=19Dk1qD5V50-N0KX4kdG_0EhGUBP7D_kLi2dU6caL9AM&w=767&h=413
If you want to run storage separately from controllers, you can do that as well by raising the bar to 7 nodes:
* 3 controller nodes
* 3 storage nodes
* 1 compute node
.. image:: https://docs.google.com/drawings/pub?id=1xmGUrk2U-YWmtoS77xqG0tzO3A47p6cI3mMbzLKG8tY&w=769&h=594
Of course, you are free to choose how to deploy OpenStack based on the amount of available hardware and on your goals (whether you want a compute-oriented or storage-oriented cluster).
For a typical OpenStack compute deployment, you can use this table as high-level guidance to determine the number of controller, compute, and storage nodes you should have:
============= =========== ======= ==============
# of Machines Controllers Compute Storage
============= =========== ======= ==============
4-10          3           1-7     on controllers
11-40         3           5-34    3 (separate)
41-100        4           31-90   6 (separate)
>100          5           >86     9 (separate)
============= =========== ======= ==============
Network Setup
-------------
The current architecture assumes the presence of 3 NICs in the hardware, but it can be customized for a different number of NICs (fewer or more):
* eth0
* management network, communication with Puppet & Cobbler
* eth1
* public network, floating IPs
* eth2
* network for communication between OpenStack VMs, bridge interface (VLANs)
In the multi-host networking mode, you can choose between FlatDHCPManager and VlanManager network managers in OpenStack. Please see the figure below which illustrates all relevant nodes and networks.
.. image:: https://docs.google.com/drawings/pub?id=11KtrvPxqK3ilkAfKPSVN5KzBjnSPIJw-jRDc9fiYhxw&w=810&h=1060
Public Network
^^^^^^^^^^^^^^
This network allows inbound connections to VMs from the outside world (allowing users to connect to VMs from the Internet), as well as outbound connections from VMs to the outside world:
* it provides address space for Floating IPs assigned to individual VM instances. A Floating IP is assigned to a VM by the project administrator. The nova-network or Quantum service configures this address on the public network interface of the network controller node. If nova-network is in use, iptables is used to create a Destination NAT from this address to the Fixed IP of the corresponding VM instance through the appropriate virtual bridge interface on the network controller node
* it provides connectivity to the globally routed address space for VMs. The IP address from the public network assigned to a compute node is used as the source address for SNAT performed on traffic going from VM instances on that compute node to the Internet.
The public network also provides Virtual IPs (VIPs) for the endpoint node, which are used to connect to OpenStack service APIs.
The public network is usually isolated from the private networks and the management network. Typically, it is a single class C network from the customer's network range (either globally routed or from a private range).
Internal (Management) Network
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The management network connects all OpenStack nodes in the cluster. All components of an OpenStack cluster communicate with each other using this network. This network must be isolated from the private and public networks for security reasons.
The management network can also be used for iSCSI protocol exchange between compute and volume nodes.
This network is usually a single class C network from a private IP address range (not globally routed).
Private Network
^^^^^^^^^^^^^^^
The private network facilitates communication between the VMs of each tenant. Project network address spaces are part of the enterprise network address space. Fixed IPs of virtual instances are directly accessible from the rest of the enterprise network.
Private network can be segmented into separate isolated VLANs, which are managed by nova-network or Quantum services.
Cinder vs. nova-volume
----------------------
Cinder is a persistent storage management service, also known as "block storage as a service". It was created to replace nova-volume.
If you decide to use persistent storage, you will need to enable Cinder and supply the list of block devices to it. The block devices can be:
* created by Cobbler during the initial node installation
* attached manually (e.g. as additional virtual disks if you are using VirtualBox, or as additional physical RAID, SAN volumes)
Quantum vs. nova-network
------------------------
Quantum is a service which provides "networking as a service" functionality in OpenStack. It has a rich tenant-facing API for defining network connectivity and addressing in the cloud and gives operators the ability to leverage different networking technologies to power their cloud networking.
There are several common deployment use cases for Quantum. Fuel supports the most common of them, called "Provider Router with Private Networks". It provides each tenant with one or more private networks, which can communicate with the outside world via a Quantum router.
In order to deploy Quantum, you need to enable it in the Fuel configuration, and Fuel will set up an additional node in the OpenStack installation that will act as an L3 router.
.. include:: reference-architecture/0010-overview.rst
.. include:: reference-architecture/0020-logical-setup.rst
.. include:: reference-architecture/0030-cluster-sizing.rst
.. include:: reference-architecture/0040-network-setup.rst
.. include:: reference-architecture/0050-cinder-vs-nova-volume.rst
.. include:: reference-architecture/0060-quantum-vs-nova-network.rst


@ -3,589 +3,11 @@ Installation Instructions
.. contents:: :local:
Introduction
------------
.. include:: installation-instructions/0010-introduction.rst
.. include:: installation-instructions/0020-machines.rst
.. include:: installation-instructions/0030-network-setup.rst
.. include:: installation-instructions/0040-installing-configuring-puppet-master.rst
.. include:: installation-instructions/0050-installing-configuring-cobbler.rst
.. include:: installation-instructions/0060-deploying-openstack.rst
.. include:: installation-instructions/0070-common-technical-issues.rst
You can follow these instructions to get a production-grade OpenStack installation on hardware, or you can just do a dry run using VirtualBox to get a feel for how Fuel works.
If you decide to give Fuel a try using VirtualBox, you will need the latest stable VirtualBox (version 4.2.4 at the moment), as well as a stable host system, preferably Mac OS 10.7.x, CentOS 6.3, or Ubuntu 12.04. Windows 8 also works, but is not recommended. VirtualBox has to be installed with the "extension pack", which enables PXE boot capabilities on Intel network adapters.
A list of certified hardware configurations will be published in one of the next versions of Fuel.
If you run into any issues during the installation, please check :ref:`common-technical-issues` for common problems and resolutions.
Machines
--------
At the very minimum, you need to have the following machines in your data center:
* 1x Puppet master and Cobbler server (called "fuel-pm", where "pm" stands for puppet master). You can also choose to have Puppet master and Cobbler server on different nodes
* 3x for OpenStack controllers (called "fuel-01", "fuel-02", and "fuel-03")
* 1x for OpenStack compute (called "fuel-04")
In the case of a VirtualBox environment, allocate the following resources for these machines:
* 1+ vCPU
* 512+ MB of RAM for controller nodes
* 1024+ MB of RAM for compute nodes
* 8+ GB of HDD (enable dynamic virtual drive expansion in order to save some disk space)
Network Setup
-------------
The current architecture assumes deployment with 3 network interfaces, for clarity. However, it can be tuned to support different scenarios, for example, deployment with only 2 NICs. The default set of interfaces is defined as follows:
#. eth0 - management network for communication between Puppet master and Puppet clients, as well as PXE/TFTP/DHCP for Cobbler
* every machine will have a static IP address there
* you can configure network addresses/network mask according to your needs, but we will give instructions using the following network settings on this interface:
* 10.0.0.100 for Puppet master
* 10.0.0.101-10.0.0.103 for controller nodes
* 10.0.0.104 for compute nodes
* 255.255.255.0 network mask
* in the case of VirtualBox environment, host machine will be 10.0.0.1
#. eth1 - public network, with access to Internet
* we will assume that DHCP is enabled and every machine gets its IP address on this interface automatically through DHCP
#. eth2 - for communication between OpenStack VMs
* without IP address
* with promiscuous mode enabled
If you are on VirtualBox, create the following host-only adapters:
* VirtualBox -> Preferences...
* Network -> Add host-only network (vboxnet0)
* IPv4 address: 10.0.0.1
* IPv4 mask: 255.255.255.0
* DHCP server: disabled
* Network -> Add host-only network (vboxnet1)
* IPv4 address: 0.0.0.0
* IPv4 mask: 255.255.255.0
* DHCP server: disabled
* If your host operating system is Windows, you need to perform an additional step: set the IP address and network mask for the "Virtual Host-Only Network" adapter under "Control Panel -> Network and Internet -> Network and Sharing Center".
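If you prefer the command line, the same host-only networks can be created with VBoxManage; the sketch below shows the equivalent steps (interface names may differ on your system, and the DHCP commands only apply if VirtualBox created a DHCP server for the network)::

    VBoxManage hostonlyif create                    # creates vboxnet0
    VBoxManage hostonlyif ipconfig vboxnet0 --ip 10.0.0.1 --netmask 255.255.255.0
    VBoxManage dhcpserver modify --ifname vboxnet0 --disable

    VBoxManage hostonlyif create                    # creates vboxnet1
    VBoxManage hostonlyif ipconfig vboxnet1 --ip 0.0.0.0 --netmask 255.255.255.0
    VBoxManage dhcpserver modify --ifname vboxnet1 --disable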
Installing & Configuring Puppet Master
--------------------------------------
If you already have Puppet master installed, you can skip this installation step and go directly to :ref:`puppet-master-stored-config`
Installing Puppet master is a one-time procedure for the entire infrastructure. Once done, Puppet master will act as a single point of control for all of your servers, and you will never have to return to these installation steps again.
Initial Setup
~~~~~~~~~~~~~
If you plan to provision the Puppet master on hardware, you need to make sure that you can boot your server from an ISO.
For VirtualBox, follow these steps to create virtual hardware:
* Machine -> New...
* Name: fuel-pm
* Type: Linux
* Version: Red Hat (64 Bit) or Ubuntu (64 Bit)
* Machine -> Settings... -> Network
* Adapter 1
* Enable Network Adapter
* Attached to: Host-only Adapter
* Name: vboxnet0
* Adapter 2
* Enable Network Adapter
* Attached to: Bridged Adapter
* Name: en1 (Wi-Fi Airport), or whatever network interface of the host machine has Internet access
* It is important that host-only "Adapter 1" goes first, as Cobbler will use vboxnet0 for PXE, and VirtualBox boots from LAN on the first available network adapter.
* A third adapter is not needed for the Puppet master, as it is only required on OpenStack hosts for communication between tenant VMs.
OS Installation
~~~~~~~~~~~~~~~~~~~
* Pick and download an operating system image. It will be used as a base OS for the Puppet master node:
* `CentOS 6.3 <http://isoredirect.centos.org/centos/6/isos/x86_64/>`_: download CentOS-6.3-x86_64-minimal.iso
* `RHEL 6.3 <https://access.redhat.com/home>`_: download rhel-server-6.3-x86_64-boot.iso
* `Ubuntu 12.04 <https://help.ubuntu.com/community/Installation/MinimalCD>`_: download "Precise Pangolin" Minimal CD
* Mount it to the server CD/DVD drive. In case of VirtualBox, mount it to the fuel-pm virtual machine
* Machine -> Settings... -> Storage -> CD/DVD Drive -> Choose a virtual CD/DVD disk file...
* Boot server (or VM) from CD/DVD drive and install the chosen OS
* Choose root password carefully
* Set up eth0 interface. It will be used for communication between Puppet master and Puppet clients, as well as for Cobbler:
* CentOS/RHEL
* ``vi /etc/sysconfig/network-scripts/ifcfg-eth0``::
DEVICE="eth0"
BOOTPROTO="static"
IPADDR="10.0.0.100"
NETMASK="255.255.255.0"
ONBOOT="yes"
TYPE="Ethernet"
PEERDNS="no"
* Apply network settings::
/etc/sysconfig/network-scripts/ifup eth0
* Ubuntu
* ``vi /etc/network/interfaces`` and add the configuration for the eth0 interface::
auto eth0
iface eth0 inet static
address 10.0.0.100
netmask 255.255.255.0
network 10.0.0.0
* Apply network settings::
/etc/init.d/networking restart
* Add DNS for Internet hostname resolution. Both CentOS/RHEL and Ubuntu: ``vi /etc/resolv.conf`` (replace "your-domain-name.com" with your domain name, replace "8.8.8.8" with your DNS IP). Note: you can look up your DNS server on your host machine using ``ipconfig /all`` on Windows, or ``cat /etc/resolv.conf`` on Linux ::
search your-domain-name.com
nameserver 8.8.8.8
* Check that ping to your host machine works. This confirms that the management segment is reachable::
ping 10.0.0.1
* Set up eth1 interface. It will provide Internet access for Puppet master:
* CentOS/RHEL
* ``vi /etc/sysconfig/network-scripts/ifcfg-eth1``::
DEVICE="eth1"
BOOTPROTO="dhcp"
ONBOOT="yes"
TYPE="Ethernet"
* Apply network settings::
/etc/sysconfig/network-scripts/ifup eth1
* Ubuntu
* ``vi /etc/network/interfaces`` and add the configuration for the eth1 interface::
auto eth1
iface eth1 inet dhcp
* Apply network settings::
/etc/init.d/networking restart
* Check that Internet access works::
ping google.com
* Set up the package repository
* CentOS/RHEL
* ``vi /etc/yum.repos.d/puppet.repo``::
[puppetlabs]
name=Puppet Labs Packages
baseurl=http://yum.puppetlabs.com/el/$releasever/products/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://yum.puppetlabs.com/RPM-GPG-KEY-puppetlabs
* Ubuntu
* from command line run::
wget http://apt.puppetlabs.com/puppetlabs-release-precise.deb
sudo dpkg -i puppetlabs-release-precise.deb
* Install Puppet master
* CentOS/RHEL::
rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
yum upgrade
yum install puppet-server-2.7.19
service puppetmaster start
chkconfig puppetmaster on
service iptables stop
chkconfig iptables off
* Ubuntu::
sudo apt-get update
apt-get install puppet puppetmaster
update-rc.d puppetmaster defaults
* Set hostname
* CentOS/RHEL
* ``vi /etc/sysconfig/network``::
HOSTNAME=fuel-pm
* Ubuntu
* ``vi /etc/hostname``::
fuel-pm
* Both CentOS/RHEL and Ubuntu ``vi /etc/hosts`` (replace "your-domain-name.com" with your domain name)::
127.0.0.1 localhost fuel-pm
10.0.0.100 fuel-pm.your-domain-name.com fuel-pm
10.0.0.101 fuel-01.your-domain-name.com fuel-01
10.0.0.102 fuel-02.your-domain-name.com fuel-02
10.0.0.103 fuel-03.your-domain-name.com fuel-03
10.0.0.104 fuel-04.your-domain-name.com fuel-04
* Run ``hostname fuel-pm`` or reboot to apply hostname
.. _puppet-master-stored-config:
Enabling Stored Configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This section shows how to configure Puppet to use a technique called stored configuration. It is required by the Puppet manifests supplied with Fuel so that they can store exported resources in the Puppet database. This is implemented using PuppetDB.
* Install and configure PuppetDB
* CentOS/RHEL::
yum install puppetdb puppetdb-terminus
chkconfig puppetdb on
* Ubuntu::
apt-get install puppetdb puppetdb-terminus
update-rc.d puppetdb defaults
* Disable selinux on CentOS/RHEL (otherwise Puppet will not be able to connect to PuppetDB)::
sed -i s/SELINUX=.*/SELINUX=disabled/ /etc/sysconfig/selinux
setenforce 0
* Configure Puppet master to use storeconfigs
* ``vi /etc/puppet/puppet.conf`` and add the following to the [master] section::
storeconfigs = true
storeconfigs_backend = puppetdb
* Configure PuppetDB to use the correct hostname and port
* ``vi /etc/puppet/puppetdb.conf`` and add the following to the [main] section (replace "your-domain-name.com" with your domain name; if this file does not exist, it will be created)::
server = fuel-pm.your-domain-name.com
port = 8081
* Restart Puppet master to apply settings (Note: these operations may take about two minutes. You can ensure that PuppetDB is running by executing ``telnet fuel-pm.your-domain-name.com 8081``)::
service puppetmaster restart
puppetdb-ssl-setup
service puppetmaster restart
service puppetdb restart
Troubleshooting PuppetDB and SSL
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* If you have a problem with SSL and PuppetDB::
service puppetdb stop
rm -rf /etc/puppetdb/ssl
puppetdb-ssl-setup
service puppetdb start
Testing Puppet
~~~~~~~~~~~~~~
* Put a simple configuration into Puppet (replace "your-domain-name.com" with your domain name), so that when you run puppet from any node, it will display the corresponding "Hello world" message
* ``vi /etc/puppet/manifests/site.pp``::
node /fuel-pm.your-domain-name.com/ {
notify{"Hello world from fuel-pm": }
}
node /fuel-01.your-domain-name.com/ {
notify{"Hello world from fuel-01": }
}
node /fuel-02.your-domain-name.com/ {
notify{"Hello world from fuel-02": }
}
node /fuel-03.your-domain-name.com/ {
notify{"Hello world from fuel-03": }
}
node /fuel-04.your-domain-name.com/ {
notify{"Hello world from fuel-04": }
}
* If you are planning to install Cobbler on Puppet master node as well, make configuration changes on Puppet master so that it actually knows how to provision software onto itself (replace "your-domain-name.com" with your domain name)
* ``vi /etc/puppet/puppet.conf``::
[main]
# server
server = fuel-pm.your-domain-name.com
# enable plugin sync
pluginsync = true
* Run puppet agent and observe the "Hello World from fuel-pm" output
* ``puppet agent --test``
Installing Fuel
~~~~~~~~~~~~~~~
First of all, copy the complete Fuel package onto your Puppet master machine. Then unpack the archive and supply the Fuel manifests to Puppet::
tar -xzf <fuel-archive-name>.tar.gz
cd fuel
cp -Rf fuel/deployment/puppet/* /etc/puppet/modules/
service puppetmaster restart
Installing & Configuring Cobbler
--------------------------------
Cobbler is a bare-metal provisioning system that performs the initial installation of Linux on OpenStack nodes. Luckily, we already have a Puppet master installed, so we can install Cobbler using Puppet in a few seconds instead of doing it manually.
Using Puppet to install Cobbler
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
On Puppet master:
* ``vi /etc/puppet/manifests/site.pp``
* Copy the content of "fuel/deployment/puppet/cobbler/examples/site.pp" into "/etc/puppet/manifests/site.pp":
.. literalinclude:: ../../deployment/puppet/cobbler/examples/site_fordocs.pp
* Make the following changes in that file:
* Replace IP addresses and ranges according to your network setup. Replace "your-domain-name.com" with your domain name.
* Uncomment the required OS distributions. They will be downloaded and imported into Cobbler during Cobbler installation.
* Change the location of ISO image files to either a local mirror or the fastest available Internet mirror.
* Once the configuration is there, Puppet will know that Cobbler must be installed on the fuel-pm machine. After Cobbler is installed, the right distro and profile will be added to it automatically. The OS image will be downloaded from the mirror and put into Cobbler as well.
* Note that, in the proposed network configuration, the snippet above includes Puppet commands that configure forwarding on the Cobbler node, making external resources available via the 10.0.0.0/24 network used during the installation process (see "enable_nat_all" and "enable_nat_filter")
* run puppet agent to actually install Cobbler on fuel-pm
* ``puppet agent --test``
Testing Cobbler
~~~~~~~~~~~~~~~
* You can check that Cobbler is installed successfully by opening the following URL from your host machine:
* http://fuel-pm/cobbler_web/ (u: cobbler, p: cobbler)
* You now have a fully working instance of Cobbler, fully configured and capable of installing the chosen OS (CentOS 6.3, RHEL 6.3, or Ubuntu 12.04) on the target OpenStack nodes
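You can also run a quick sanity check from the command line on fuel-pm (optional)::

    cobbler check          # reports potential configuration problems
    cobbler distro list    # should show the distros imported during installation
    cobbler profile list   # should show the corresponding profiles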
Deploying OpenStack
-------------------
Initial setup
~~~~~~~~~~~~~
If you are using hardware, make sure it is capable of PXE booting over the network from Cobbler.
In case of VirtualBox, create the corresponding virtual machines for your OpenStack nodes. Do not start them yet.
* Machine -> New...
* Name: fuel-01 (will need to repeat for fuel-02, fuel-03, and fuel-04)
* Type: Linux
* Version: Red Hat (64 Bit) or Ubuntu (64 Bit)
* Machine -> System -> Motherboard...
* Check "Network" in "Boot sequence"
* Machine -> Settings... -> Network
* Adapter 1
* Enable Network Adapter
* Attached to: Host-only Adapter
* Name: vboxnet0
* Adapter 2
* Enable Network Adapter
* Attached to: Bridged Adapter
* Name: en1 (Wi-Fi Airport), or whatever network interface of the host machine with Internet access
* Adapter 3
* Enable Network Adapter
* Attached to: Host-only Adapter
* Name: vboxnet1
* Advanced -> Promiscuous mode: Allow All
* It is important that host-only "Adapter 1" goes first, as Cobbler will use vboxnet0 for PXE, and VirtualBox boots from LAN on the first available network adapter.
Configuring nodes in Cobbler
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Now you need to define nodes in the Cobbler configuration, so that it knows what OS to install, where to install it, and what configuration actions to take.
On Puppet master, create a directory for configuration (wherever you like) and copy the sample config file for Cobbler from Fuel repository:
* ``mkdir cobbler_config``
* ``cd cobbler_config``
* ``cp /etc/puppet/modules/cobbler/examples/cobbler_system.py .``
* ``cp /etc/puppet/modules/cobbler/examples/nodes.yaml .``
Edit configuration for bare metal provisioning of nodes (nodes.yaml):
* There is essentially a section for every node, and you have to define all OpenStack nodes there (fuel-01, fuel-02, fuel-03, and fuel-04 by default). The config for a single node is provided below. The config for the remaining nodes is very similar
* It is important to get the following parameters correctly specified (they are different for every node):
* name of the system in Cobbler, the very first line
* hostname and DNS name (do not forget to replace "your-domain-name.com" with your domain name)
* MAC addresses for every network interface (you can look them up in VirtualBox by using Machine -> Settings... -> Network -> Adapters)
* static IP address on management interface eth0
* version of Puppet according to the target OS
* vi nodes.yaml
.. literalinclude:: ../../deployment/puppet/cobbler/examples/nodes.yaml
* For convenience, the "./cobbler_system.py" script is provided. It reads the system definitions from the YAML file and calls the Cobbler API to insert these systems into the configuration. Run it using the following command:
* ``./cobbler_system.py -f nodes.yaml -l DEBUG``
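After the script finishes, you can confirm that the systems were registered in Cobbler (the node names are the defaults used throughout this guide)::

    cobbler system list                    # should list fuel-01 .. fuel-04
    cobbler system report --name=fuel-01   # shows the full definition of a single node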
Installing OS on the nodes using Cobbler
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Now, when Cobbler has the correct configuration, the only thing you need to do is to PXE-boot your nodes. They will boot over the network from DHCP/TFTP provided by Cobbler and will be provisioned accordingly, with the specified operating system and configuration.
In case of VirtualBox, here is what you have to do for every virtual machine (fuel-01, fuel-02, fuel-03, fuel-04):
* Start VM
* Press F12 immediately and select "l" (LAN) as a bootable media
* Wait for the installation to complete
* Check that network is set up correctly and machine can reach package repositories as well as Puppet master
* ``ping download.mirantis.com``
* ``ping fuel-pm.your-domain-name.com``
It is important to note that if you use VLANs in your network configuration, you always have to keep in mind the fact that PXE booting does not work on tagged interfaces. Therefore, all your nodes including the one where the Cobbler service resides must share one untagged VLAN (also called "native VLAN"). You can use the ``dhcp_interface`` parameter of the ``cobbler::server`` class to bind the DHCP service to a certain interface.
Now the OS is installed and configured on all nodes. Puppet is installed on the nodes as well, and its configuration points to our Puppet master. Therefore, the nodes are almost ready for deploying OpenStack. As the last step, you need to register the nodes with the Puppet master:
* ``puppet agent --test``
* it will generate a certificate, send it to the Puppet master for signing, and then fail
* switch to Puppet master and execute:
* ``puppet cert list``
* ``puppet cert sign --all``
* alternatively, you can sign only a single certificate using "puppet cert sign fuel-XX.your-domain-name.com"
* ``puppet agent --test``
* it should successfully complete and result in the "Hello World from fuel-XX" message
Configuring OpenStack cluster in Puppet
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In the case of VirtualBox, it is recommended to save the current state of every virtual machine using snapshots. It is helpful to have a point to revert to, so that you can install OpenStack using Puppet and then revert and try again, if necessary.
* On Puppet master
* create a file with the definition of networks, nodes, and roles. Assume you are deploying a compact configuration, with Controllers and Swift combined:
* ``cp /etc/puppet/modules/openstack/examples/site_openstack_swift_compact.pp /etc/puppet/manifests/site.pp``
* ``vi /etc/puppet/manifests/site.pp`` and edit settings accordingly (see "Configuring Network", "Enabling Quantum", "Enabling Cinder" below):
.. literalinclude:: ../../deployment/puppet/openstack/examples/site_openstack_swift_compact_fordocs.pp
* create a directory for the keys, give it appropriate permissions, and generate the keys themselves
* ``mkdir /var/lib/puppet/ssh_keys``
* ``cd /var/lib/puppet/ssh_keys``
* ``ssh-keygen -f openstack``
* ``chown -R puppet:puppet /var/lib/puppet/ssh_keys/``
* edit file ``/etc/puppet/fileserver.conf`` and append the following lines: ::
[ssh_keys]
path /var/lib/puppet/ssh_keys
allow *
Configuring Network
^^^^^^^^^^^^^^^^^^^
* You will need to change the following parameters:
* Change IP addresses for "public" and "internal" according to your networking requirements
* Define "$floating_range" and "$fixed_range" accordingly
Enabling Quantum
^^^^^^^^^^^^^^^^
* In order to deploy OpenStack with Quantum, you need to set up an additional node that will act as an L3 router. This node is defined in the configuration as the ``fuel-quantum`` node. You will need to set the following options in order to enable Quantum::
# Network mode: quantum(true) or nova-network(false)
$quantum = true
# API service location
$quantum_host = $internal_virtual_ip
# Keystone and DB user password
$quantum_user_password = 'quantum_pass'
$quantum_db_password = 'quantum_pass'
# DB user name
$quantum_db_user = 'quantum'
# Type of network to allocate for tenant networks.
# You MUST either change this to 'vlan' or change this to 'gre'
# in order for tenant networks to provide connectivity between hosts
# Sometimes it can be handy to use GRE tunnel mode since you don't have to configure your physical switches for VLANs
$tenant_network_type = 'gre'
# For VLAN networks, the VLAN VID on the physical network that realizes the virtual network.
# Valid VLAN VIDs are 1 through 4094.
# For GRE networks, the tunnel ID.
# Valid tunnel IDs are any 32 bit unsigned integer.
$segment_range = '1500:1999'
Enabling Cinder
^^^^^^^^^^^^^^^
* In order to deploy OpenStack with Cinder, simply set ``$cinder = true`` in your site.pp file.
* Then, specify the list of physical devices in ``$nv_physical_volume``. They will be aggregated into the "cinder-volumes" volume group.
* Alternatively, you can leave this field blank and create an LVM volume group called "cinder-volumes" on every controller node yourself.
* The available manifests under "examples" assume that you have the same collection of physical devices for the "cinder-volumes" volume group across all of your volume nodes.
* Be careful not to add block devices containing useful data to the list (e.g. block devices on which your OS resides), as they will be destroyed when you allocate them to Cinder.
* For example::
# Volume manager: cinder(true) or nova-volume(false)
$cinder = true
# Whether cinder/nova-volume (iscsi volume driver) should be enabled
$manage_volumes = true
# Disk or partition for use by cinder/nova-volume
# Each physical volume can be a disk partition, whole disk, meta device, or loopback file
$nv_physical_volume = ['/dev/sdz', '/dev/sdy', '/dev/sdx']
Installing OpenStack on the nodes using Puppet
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* Install OpenStack controller nodes sequentially, one by one
* run "``puppet agent --test``" on fuel-01
* wait for the installation to complete
* repeat the same for fuel-02 and fuel-03
* .. important:: The cluster of OpenStack controllers must be established in a sequential fashion, due to the way the Galera-based MySQL cluster is assembled
* Install OpenStack compute nodes. You can do it in parallel if you wish.
* run "``puppet agent --test``" on fuel-04
* wait for the installation to complete
* Your OpenStack cluster is ready to go.
Note: Due to the specifics of the Swift setup, a single Puppet run is not enough. To complete the deployment, perform 3 Puppet runs on each node.
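One convenient way to perform the repeated runs is a small shell loop executed on each node, in the same order as described above (a sketch; running ``puppet agent --test`` three times by hand works equally well)::

    for i in 1 2 3; do
        puppet agent --test
    done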
.. _common-technical-issues:
Common Technical Issues
-----------------------
#. Puppet fails with "err: Could not retrieve catalog from remote server: Error 400 on SERVER: undefined method 'fact_merge' for nil:NilClass"
* bug: http://projects.puppetlabs.com/issues/3234
* workaround: ``service puppetmaster restart``
#. The Puppet client never resends the certificate to the Puppet master, so the certificate cannot be signed and verified.
* bug: http://projects.puppetlabs.com/issues/4680
* workaround:
* on puppet client: "``rm -f /etc/puppet/ssl/certificate_requests/\*.pem``", and "``rm -f /etc/puppet/ssl/certs/\*.pem``"
* on puppet master: "``rm -f /var/lib/puppet/ssl/ca/requests/\*.pem``"
#. The manifests are up-to-date under ``/etc/puppet/manifests``, but Puppet master keeps serving the previous version of manifests to the clients. Manifests seem to be cached by Puppet master.
* issue: https://groups.google.com/forum/?fromgroups=#!topic/puppet-users/OpCBjV1nR2M
* workaround: "``service puppetmaster restart``"
#. Timeout error for fuel-0x when running "``puppet agent --test``" to install OpenStack when using an HDD instead of an SSD
* | Sep 26 17:56:15 fuel-02 puppet-agent[1493]: Could not retrieve catalog from remote server: execution expired
| Sep 26 17:56:15 fuel-02 puppet-agent[1493]: Not using cache on failed catalog
| Sep 26 17:56:15 fuel-02 puppet-agent[1493]: Could not retrieve catalog; skipping run
* workaround: ``vi /etc/puppet/puppet.conf``
* add: ``configtimeout = 1200``
#. On running "``puppet agent --test``", the error messages below occur:
* | err: /File[/var/lib/puppet/lib]: Could not evaluate: Could not retrieve information from environment production source(s) puppet://fuel-pm.your-domain-name.com/plugins
and
| err: Could not retrieve catalog from remote server: Error 400 on SERVER: stack level too deep
| warning: Not using cache on failed catalog
| err: Could not retrieve catalog; skipping run
* The first problem can be solved as described here: http://projects.reductivelabs.com/issues/2244
* The second problem can be solved by rebooting Puppet master.


@ -3,25 +3,6 @@ Release Notes
.. contents:: :local:
v0.2.0-folsom
-------------
* Puppet manifests for deploying OpenStack Folsom in HA mode
* Active/Active HA architecture for Folsom, based on RabbitMQ / MySQL Galera / HAProxy / keepalived
* Added support for Ubuntu 12.04 in addition to CentOS 6.3 and RHEL 6.3 (includes bare metal provisioning, Puppet manifests, and OpenStack packages)
* Supports deploying Folsom with Quantum/OVS
* Supports deploying Folsom with Cinder
* Supports Puppet 2.7 and 3.0
v0.1.0-essex
------------
* Puppet manifests for deploying OpenStack Essex in HA mode
* Active/Active HA architecture for Essex, based on RabbitMQ / MySQL Galera / HAProxy / keepalived
* Cobbler-based bare-metal provisioning for CentOS 6.3 and RHEL 6.3
* Access to the mirror with OpenStack packages (http://download.mirantis.com/epel-fuel/)
* Configuration templates for different OpenStack cluster setups
* User Guide
.. include:: release-notes/v0-2-0-folsom.rst
.. include:: release-notes/v0-1-0-essex.rst


@ -3,314 +3,5 @@ Known Issues and Workarounds
.. contents:: :local:
RabbitMQ
^^^^^^^^
At least one RabbitMQ node must remain operational
--------------------------------------------------
**Issue:**
Do not shut down all RabbitMQ nodes simultaneously. RabbitMQ requires
that, after a full shutdown of the cluster, the first node to be brought back up
is the last one that was shut down.
**Workaround:**
If you experience a complete power loss, it is recommended to power up all nodes
and then manually start RabbitMQ on all of them within 30 seconds, e.g. using an
SSH script such as the sketch below. If that fails, stop all RabbitMQ instances
(you might need to do so with ``kill -9``, as ``rabbitmqctl stop`` may hang after
such a failure) and try starting them in a different order.
There is no easy automatic way to determine which node terminated last and so
should be brought up first; it is just trial and error.
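Such an SSH script might look like the sketch below; it assumes passwordless SSH access to the controllers and the default ``rabbitmq-server`` init script::

    #!/bin/bash
    # start RabbitMQ on all controllers nearly simultaneously
    for host in fuel-01 fuel-02 fuel-03; do
        ssh "$host" 'service rabbitmq-server start' &
    done
    wait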
**Background:** See http://comments.gmane.org/gmane.comp.networking.rabbitmq.general/19792.
Galera
^^^^^^
Galera cluster has no built-in restart or shutdown mechanism
------------------------------------------------------------
**Issue:**
Galera cluster cannot be simply started or stopped. It is supposed to work continuously.
**Workaround:**
The right way to get Galera up and working
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Galera, as high availability software, does not include any built-in full cluster shutdown or restart sequence.
It is supposed to be running on a 24/7/365 basis.
On the other hand, deploying, updating, or restarting Galera may lead to various issues.
This guide is intended to help avoid some of these issues.
Regular Galera cluster startup includes a combination of the procedures described below.
These procedures, with some differences, are performed by Fuel manifests.
**1. Stop a single Galera node**
There is no dedicated Galera process - Galera works inside the MySQL server process.
The MySQL server must be patched with the Galera WSREP patch to be able to work as a Galera cluster node.
All of the Galera stop steps listed below are performed automatically by the mysql init script
supplied by the Fuel installation manifests, so in most cases it is enough to perform the first step only.
If the init script fails under some (hopefully rare) circumstances, perform step 2 manually.
#. Run ``service mysql stop``.
Wait some time to ensure all MySQL processes are shut down.
#. Run ``ps -ef | grep mysql`` and stop ALL(!) **mysqld** and **mysqld_safe** processes.
* Wait 20 seconds and check if the mysqld processes are running again.
* Stop or kill any new mysqld or mysqld_safe processes.
It is very important to stop all MySQL processes: Galera uses ``mysqld_safe``,
which may start additional MySQL processes.
So even if no running processes are currently visible, additional processes may already be starting.
That is why we check the running processes twice; ``mysqld_safe`` has a default timeout of 15 seconds before restarting the process.
If no ``mysqld`` processes are running, the node may be considered shut down.
If there was nothing to kill and all MySQL processes stopped after the ``service mysql stop`` command -
the node may be considered shut down gracefully.
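Put together, the manual stop-and-verify sequence might look like this (a sketch of the steps above, not a replacement for the init script)::

    service mysql stop
    sleep 20                                          # give mysqld_safe a chance to respawn mysqld
    ps -ef | grep -E 'mysqld(_safe)?' | grep -v grep
    # if anything is still listed, stop or kill those processes and check again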
**2. Stop the Galera cluster**
Galera cluster is a master-master replication cluster. Therefore, it is always in the process of synchronization.
The recommended way to stop the cluster includes the following steps:
#. Stop all requests to the cluster from outside
* The default Galera non-synchronized cache size under heavy load may be up to 1 GiB - you may have to wait until every node is fully synced.
Select the first node to shut down - it is better to start with non-primary nodes.
Connect to this node with mysql console.
* Run ``show status like 'wsrep_local_state%';``
If it is "Synced", then it is OK to start the shutdown node procedure.
If the node is non-synchronized, you may shut it down anyway, but avoid to start a new cluster operation
from this node in the future.
* In mysql console, run the following command:
* ``SET GLOBAL wsrep_on='OFF';``
Replication stops immediately after the ``wsrep_on`` variable is set to "OFF".
So avoid making any changes to the node after this setting is applied.
* Exit the mysql console.
The selected node has now left the cluster.
#. Follow the steps described in the **Stop a single Galera node** section.
#. Repeat steps 1 and 2 for each remaining node.
* Remember which node you shut down last - ideally, it should be the primary node in the synced state. This node is the best and recommended candidate to start up first when you decide to resume cluster operation.
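To recap, the per-node part of the shutdown sequence looks roughly like this in the mysql console (an illustrative summary of the steps above)::

    -- wait until the node reports Synced
    show status like 'wsrep_local_state%';
    -- detach the node from replication; make no further changes after this
    SET GLOBAL wsrep_on='OFF';

After exiting the mysql console, stop MySQL on the node as described in the **Stop a single Galera node** section.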
**3. Start Galera and create a new cluster**
Galera writes its state to the file ``grastate.dat``, which resides in the location specified by the
``wsrep_data_home_dir`` variable, defaulting to ``mysql_real_data_home``.
The Fuel OpenStack deployment manifests also place this file in the default location: ``/var/lib/mysql/grastate.dat``.
This file is very useful for finding the node with the most recent commit after an unexpected cluster shutdown.
Simply compare the ``seqno`` values in ``grastate.dat`` on every node.
The greatest ``seqno`` value indicates which node has the latest commit.
If the cluster was shut down gracefully and the last node to shut down is known,
simply perform the steps below to start up the cluster.
Otherwise, find the node with the most recent commit using the ``grastate.dat`` files
and start the cluster from that node.
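A quick way to inspect the saved state on every node (assuming the default file location) is::

    grep -E 'uuid|seqno' /var/lib/mysql/grastate.dat

Compare the output across the nodes; the node with the greatest ``seqno`` holds the most recent commit.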
#. Ensure that all Galera nodes are shut down
If one or more nodes are still running, they will remain outside the new cluster until they are restarted.
Note that data integrity is not guaranteed in this case.
#. Select the primary node
This node is supposed to start first. It creates a new cluster ID and a new last commit UUID
(the ``wsrep_cluster_state_uuid`` variable represents this UUID inside the MySQL process).
Fuel deployment manifests with default settings set up ``fuel-01`` to be both the primary Galera cluster node
and the first deployed OpenStack controller.
* Open ``/etc/mysql/conf.d/wsrep.cnf``
* Set empty cluster address as follows (including quotation marks):
``wsrep_cluster_address="gcomm://"``
* Save changes to the config file.
#. Run the ``service mysql start`` command on the first primary node or restart MySQL
if there were configuration changes to ``wsrep.cnf``.
* Connect to MySQL server.
* Run the ``SET GLOBAL wsrep_on='ON';`` to start replication within the new cluster. This variable can also be set by editing the ``wsrep.cnf`` file.
* Check the new cluster status by running the following command: ``show status like 'wsrep%';``
* ``wsrep_local_state_comment`` should be "Synced"
* ``wsrep_cluster_status`` should be "Primary"
* ``wsrep_cluster_size`` should be "1", since no more cluster nodes have been started so far.
* ``wsrep_incoming_addresses`` should include only the address of the current node.
#. Select one of the secondary nodes
* Check its ``/etc/mysql/conf.d/wsrep.cnf`` file.
* The ``wsrep_cluster_address="gcomm://node1,node2"`` variable should include the name or IP address
of already started primary node. Otherwise, this node will definitely fail to start.
In case of OpenStack deployed by Fuel manifests with default settings (2 controllers),
this parameter should look like
``wsrep_cluster_address="gcomm://fuel-01:4567,fuel-02:4567"``
* If ``wsrep_cluster_address`` is set correctly, run ``rm -f /var/lib/mysql/grastate.dat`` and then ``service mysql start`` on this node.
#. Connect to any node with mysql and run ``show status like 'wsrep%';`` again.
* ``wsrep_local_state_comment`` should finally change from "Donor/Synced" or other statuses to "Synced".
Time to sync may vary depending on the database size and connection speed.
* ``wsrep_cluster_status`` should be "Primary" on both nodes.
Galera is a master-master replication cluster and every node becomes primary by default (i.e. master).
Galera also supports master-slave configuration for special purposes.
Slave nodes have the "Non-Primary" value for ``wsrep_cluster_status``.
* ``wsrep_cluster_size`` should be "2", since we have just added one more node to the cluster.
* ``wsrep_incoming_addresses`` should include the addresses of both started nodes.
**Note:**
State transfer is a heavy operation, not only on the joining node, but also on the donor.
In particular, the donor may not be able to serve client requests, or may simply be slow.
#. Repeat step 4 on all remaining controllers
If all secondary controllers started successfully and became synced, and you do not plan to restart the cluster
in the near future, it is strongly recommended to change the ``wsrep`` configuration setting on the first controller.
* Open file ``/etc/mysql/conf.d/wsrep.cnf``.
* Set ``wsrep_cluster_address=`` to the same value (node list) that is used for every secondary controller.
In case of OpenStack deployed by Fuel manifests with default settings (2 controllers),
on every operating controller this parameter should finally look like
``wsrep_cluster_address="gcomm://fuel-01:4567,fuel-02:4567"``
This step is important for handling future failures or maintenance procedures.
If the Galera primary controller node is restarted with an empty "gcomm" value
(i.e. ``wsrep_cluster_address="gcomm://"``), it creates a new cluster and leaves the existing one.
The existing cluster nodes may also stop accepting requests and halt synchronization to prevent data
de-synchronization issues.
**Note:**
Starting from MySQL version 5.5.28_wsrep23.7 (Galera version 2.2), the Galera cluster supports an additional start mode.
Instead of setting ``wsrep_cluster_address="gcomm://"``, on the first node one can set the following URL
as the cluster address:
``wsrep_cluster_address="gcomm://node1,node2:port2,node3?pc.wait_prim=yes"``,
where ``nodeX`` is the name or IP address of one of the available nodes, with an optional port.
Therefore, every Galera node may have the same configuration file with the list of all nodes.
This is designed to eliminate configuration file changes on the first node after the cluster is started.
After the nodes are started, one can use mysql to set the ``pc.bootstrap=1`` flag on the node
that should start the new cluster and become the primary node.
All other nodes should automatically perform initial synchronization with this new primary node.
This flag may be also provided for a single selected node via the ``wsrep.cnf`` configuration file as follows:
``wsrep_cluster_address="gcomm://node1,node2:port2,node3?pc.wait_prim=yes&pc.bootstrap=1"``
Unfortunately, due to a bug in the mysql init script (<https://bugs.launchpad.net/codership-mysql/+bug/1087368>),
the bootstrap flag is completely ignored in Galera 2.2 (wsrep_2.7). So, to start a new cluster, one should use
the old way with an empty ``gcomm://`` URL.
All other nodes may have either a single node or a list of several nodes in the ``gcomm`` URL;
the bug affects only the first node - the one that starts the new cluster.
Please note also that nodes with non-empty ``gcomm`` URL may start only if at least one of the nodes
listed in ``gcomm://node1,node2:port2,node3`` is already started and is available for initial synchronization.
For every starting Galera node it is enough to have at least one working node name/address to get full
information about the cluster structure and to perform initial synchronization.
In practice, Fuel deployment manifests with default settings may set (or may not set!)
``wsrep_cluster_address="gcomm://"``
on the primary node (first deployed OpenStack controller) and node list like
``wsrep_cluster_address="gcomm://fuel-01:4567,fuel-02:4567"``
on every secondary controller. Therefore, it is a good idea to check these parameters after the deployment is finished.
**Note:**
The Galera cluster is a very democratic system. As it is a master-master cluster,
every primary node is equal to the other primary nodes.
Primary nodes with the same sync state (same ``wsrep_cluster_state_uuid`` value) form the so called quorum -
the majority of primary nodes with the same ``wsrep_cluster_state_uuid``.
Normally, one of the controllers gets a new commit, increases its ``wsrep_cluster_state_uuid`` value
and performs synchronization with other nodes.
If one of the primary controllers fails, the Galera cluster continues serving requests as long as the quorum exists.
A primary controller leaving the cluster is equivalent to a failure, since after leaving this controller
has a new cluster ID and a ``wsrep_cluster_state_uuid`` value lower than that of the long-running nodes.
So 3 working primary controllers is the minimal Galera cluster size. The recommended Galera cluster size is
6 controllers.
Note that Fuel deployment manifests with default settings deploy a non-recommended Galera configuration
with only 2 controllers. This is suitable for testing purposes, but not for production deployments.
**4. Continue existing cluster after failure**
Continuing Galera cluster after power breakdown or other types of failure basically consists of two steps:
backing up every node and finding out the node with the most recent non-damaged replica.
* Helpful tip: add ``wsrep_provider_options="wsrep_on = off;"`` to the ``/etc/mysql/conf.d/wsrep.cnf`` configuration file.
After these steps simply perform the **Start Galera and create a new cluster** procedure,
starting from this found node - the one with the most recent non-damaged replica.
Useful links
~~~~~~~~~~~~
* Galera documentation from Galera authors:
* http://www.codership.com/wiki/doku.php
* Actual Galera and WSREP patch bug list and official Galera/WSREP bug tracker:
* https://launchpad.net/codership-mysql
* https://launchpad.net/galera
* One of the recommended robust Galera cluster configurations:
* http://wiki.vps.net/vps-net-features/cloud-servers/template-information/galeramysql-recommended-cluster-configuration/
* Why we use Galera:
* http://openlife.cc/blogs/2011/july/ultimate-mysql-high-availability-solution
* Other questions (seriously, sometimes there is not enough info about Galera available in official Galera docs):
* http://www.google.com
.. include:: known-issues-and-workarounds/0010-rabbitmq.rst
.. include:: known-issues-and-workarounds/0020-galera.rst


@ -1,5 +1,3 @@
Installation Instructions
=========================
Introduction
------------


@ -1,5 +1,3 @@
Installation Instructions
=========================
Machines
--------


@ -1,6 +1,3 @@
Installation Instructions
=========================
Network Setup
-------------


@ -1,5 +1,3 @@
Installation Instructions
=========================
Installing & Configuring Puppet Master
--------------------------------------


@ -1,5 +1,3 @@
Installation Instructions
=========================
Installing & Configuring Cobbler
--------------------------------


@ -1,6 +1,3 @@
Installation Instructions
=========================
Deploying OpenStack
-------------------


@ -1,5 +1,3 @@
Installation Instructions
=========================
.. _common-technical-issues:


@ -1,6 +1,3 @@
Known Issues and Workarounds
============================
RabbitMQ
^^^^^^^^


@ -1,6 +1,3 @@
Known Issues and Workarounds
============================
Galera
^^^^^^


@ -1,6 +1,3 @@
Reference Architecture
======================
Logical Setup
-------------


@ -1,6 +1,3 @@
Reference Architecture
======================
Cluster Sizing
--------------


@ -1,6 +1,3 @@
Reference Architecture
======================
Network Setup
-------------


@ -1,6 +1,3 @@
Reference Architecture
======================
Cinder vs. nova-volume
----------------------


@ -1,6 +1,3 @@
Reference Architecture
======================
Quantum vs. nova-network
------------------------


@ -1,6 +1,3 @@
Release Notes
=============
v0.1.0-essex
------------


@ -1,6 +1,3 @@
Release Notes
=============
v0.2.0-folsom
-------------