major documentation rewrite
This commit is contained in:
parent 5ad99b4807
commit 23b4e22dee
@ -9,11 +9,9 @@ Table of contents
   :maxdepth: 2

   pages/0010-package-contents
   pages/0020-how-it-works
   pages/0030-supported-software-versions
   pages/0020-introduction
   pages/0040-reference-architecture
   pages/0050-installation-instructions
   pages/0055-production-considerations
   pages/0060-frequently-asked-questions
   pages/0070-release-notes
   pages/0080-known-issues-and-workarounds
   pages/0090-creating-fuel-pm-from-scratch
@ -3,5 +3,5 @@ Package Contents
.. contents:: :local:

.. include:: package-contents/0010-package-contents.rst
.. include:: /pages/package-contents/0010-package-contents.rst
@ -1,6 +0,0 @@
How It Works
============

.. contents:: :local:

.. include:: how-it-works/0010-how-it-works.rst
@ -0,0 +1,14 @@
.. _Introduction:

Introduction
============

.. contents:: :local:

.. include:: /pages/introduction/0010-introduction.rst
.. include:: /pages/introduction/0020-what-is-fuel.rst
.. include:: /pages/introduction/0030-how-it-works.rst
.. include:: /pages/introduction/0040-reference-topologies.rst
.. include:: /pages/introduction/0050-supported-software.rst
.. include:: /pages/introduction/0060-download-fuel.rst
.. include:: /pages/introduction/0070-release-notes.rst
@ -3,5 +3,5 @@ Supported Software & Versions
.. contents:: :local:

.. include:: supported-software-versions/0010-supported-software-versions.rst
.. include:: /pages/supported-software-versions/0010-supported-software-versions.rst
@ -1,12 +1,13 @@
.. _Reference-Archiecture:

Reference Architecture
======================

.. contents:: :local:

.. include:: reference-architecture/0010-overview.rst
.. include:: reference-architecture/0020-logical-setup.rst
.. include:: reference-architecture/0030-cluster-sizing.rst
.. include:: reference-architecture/0040-network-setup.rst
.. include:: reference-architecture/0050-cinder-vs-nova-volume.rst
.. include:: reference-architecture/0060-quantum-vs-nova-network.rst
.. include:: reference-architecture/0070-swift-notes.rst
.. include:: /pages/reference-architecture/0010-overview.rst
.. include:: /pages/reference-architecture/0020-logical-setup.rst
.. include:: /pages/reference-architecture/0030-cluster-sizing.rst
.. include:: /pages/reference-architecture/0040-network-setup.rst
.. include:: /pages/reference-architecture/0045-technological-considerations.rst
@ -1,13 +1,16 @@
Installation Instructions
=========================
.. _Create-Cluster:

Create an example multi-node OpenStack cluster using Fuel
=========================================================

.. contents:: :local:

.. include:: installation-instructions/0010-introduction.rst
.. include:: installation-instructions/0020-machines.rst
.. include:: installation-instructions/0030-network-setup.rst
.. include:: installation-instructions/0040-installing-configuring-puppet-master.rst
.. include:: installation-instructions/0050-installing-configuring-cobbler.rst
.. include:: installation-instructions/0060-deploying-openstack.rst
.. include:: installation-instructions/0070-common-technical-issues.rst
.. include:: /pages/installation-instructions/0010-introduction.rst
.. include:: /pages/installation-instructions/0020-machines.rst
.. include:: /pages/installation-instructions/0040-installing-configuring-puppet-master.rst
.. include:: /pages/installation-instructions/0045-configuring-fuel-pm.rst
.. include:: /pages/installation-instructions/0050-installing-configuring-cobbler.rst
.. include:: /pages/installation-instructions/0055-installing-os-using-cobbler.rst
.. include:: /pages/installation-instructions/0060-deploying-openstack.rst
.. include:: /pages/installation-instructions/0065-testing-openstack.rst
@ -0,0 +1,6 @@
Redeployment of the same environment
====================================

.. contents:: :local:

.. include:: /pages/deployment-pipeline/0010-deployment-pipeline.rst
@ -0,0 +1,12 @@
.. _Production:

Production Considerations
=========================

.. contents:: :local:

.. include:: /pages/production-considerations/0010-introduction.rst
.. include:: /pages/production-considerations/0015-sizing-hardware.rst
.. include:: /pages/production-considerations/0020-deployment-pipeline.rst
.. include:: /pages/production-considerations/0030-large-deployments.rst
.. include:: /pages/production-considerations/0040-orchestration-with-mcollective.rst
@ -1,6 +1,11 @@
.. _FAQ:

FAQ (Frequently Asked Questions)
==========================
================================

.. contents:: :local:

.. include:: frequently-asked-questions/0010-frequently-asked-questions.rst
.. include:: /pages/frequently-asked-questions/0005-known-issues-and-workarounds.rst
.. include:: /pages/frequently-asked-questions/0070-common-technical-issues.rst
.. include:: /pages/frequently-asked-questions/0020-other-questions.rst
@ -1,8 +0,0 @@
Release Notes
=============

.. contents:: :local:

.. include:: release-notes/v2-0-folsom.rst
.. include:: release-notes/v1-0-essex.rst
@ -1,8 +0,0 @@
Known Issues and Workarounds
============================

.. contents:: :local:

.. include:: known-issues-and-workarounds/0010-rabbitmq.rst
.. include:: known-issues-and-workarounds/0020-galera.rst
.. include:: known-issues-and-workarounds/0030-deployment-pipeline.rst
@ -0,0 +1,8 @@
.. _Create-PM:

Appendix A: Creating Fuel-pm from scratch
==========================================

.. contents:: :local:

.. include:: /pages/creating-fuel-pm/0010-creating-fuel-pm-from-scratch.rst
@ -0,0 +1,204 @@
If you already have Puppet Master installed, you can skip this
installation step and go directly to :ref:`Configuring fuel-pm <Configuring-Fuel-PM>`.

Installing Puppet master is a one-time procedure for the entire
infrastructure. Once done, Puppet master will act as a single point of
control for all of your servers, and you will never have to return to
these installation steps again.

Initial Setup
-------------

If you plan to provision the Puppet Master on hardware, you need to
make sure that you can boot your server from an ISO.

For VirtualBox, follow these steps to create the virtual hardware:

* Machine -> New

  * Name: fuel-pm
  * Type: Linux
  * Version: Red Hat (64 Bit) or Ubuntu (64 Bit)

* Machine -> Settings -> Network

  * Adapter 1

    * Enable Network Adapter
    * Attached to: Host-only Adapter
    * Name: vboxnet0

  * Adapter 2

    * Enable Network Adapter
    * Attached to: Bridged Adapter
    * Name: epn1 (Wi-Fi Airport), or whichever network interface of the host machine has Internet access

It is important that the host-only Adapter 1 goes first, as Cobbler will use vboxnet0 for PXE, and VirtualBox boots from the LAN on the first available network adapter.

The Puppet Master doesn't need a third adapter; that one is used for OpenStack hosts and communication between tenant VMs.
OS Installation
---------------

* Pick and download an operating system image. This image will be used as the base OS for the Puppet master node. These instructions assume that you are using CentOS 6.3, but you can also use Ubuntu 12.04 or RHEL 6.3.

  **PLEASE NOTE**: These are the only operating systems on which Fuel has been certified. Using other operating systems can, and in many cases will, produce unpredictable results.

  * `CentOS 6.3 <http://isoredirect.centos.org/centos/6/isos/x86_64/>`_: download CentOS-6.3-x86_64-minimal.iso

* Mount the downloaded ISO to the machine's CD/DVD drive. In the case of VirtualBox, mount it to the fuel-pm virtual machine:

  * Machine -> Settings -> Storage -> CD/DVD Drive -> Choose a virtual CD/DVD disk file

* Boot the server (or VM) from the CD/DVD drive and install the chosen OS.

* Choose the root password carefully.

* Set up the eth0 interface. This will be the public interface:

  ``vi /etc/sysconfig/network-scripts/ifcfg-eth0``::

    DEVICE="eth0"
    BOOTPROTO="dhcp"
    ONBOOT="no"
    TYPE="Ethernet"

  Apply network settings::

    /etc/sysconfig/network-scripts/ifup eth0

* Set up the eth1 interface. This interface will be used for communication between the Puppet Master and Puppet clients, as well as for Cobbler:

  ``vi /etc/sysconfig/network-scripts/ifcfg-eth1``::

    DEVICE="eth1"
    BOOTPROTO="static"
    IPADDR="10.0.0.100"
    NETMASK="255.255.255.0"
    ONBOOT="yes"
    TYPE="Ethernet"
    PEERDNS="no"

  Apply network settings::

    /etc/sysconfig/network-scripts/ifup eth1
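The ``ifcfg-*`` files above are plain ``KEY="value"`` shell-style assignments, so you can sanity-check one by sourcing it in a subshell before running ``ifup``. A minimal sketch that works on a temporary copy, so nothing under the real ``/etc/sysconfig/network-scripts`` is touched:

```shell
# Validate an ifcfg-style file by sourcing it in a subshell and
# echoing the fields configured above. Uses a temp file, so the
# real network-scripts directory is untouched.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
DEVICE="eth1"
BOOTPROTO="static"
IPADDR="10.0.0.100"
NETMASK="255.255.255.0"
ONBOOT="yes"
TYPE="Ethernet"
PEERDNS="no"
EOF

# Source in a subshell so the variables don't leak into this shell.
( . "$cfg" && echo "$DEVICE $BOOTPROTO $IPADDR" )   # eth1 static 10.0.0.100
rm -f "$cfg"
```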
* Add DNS for Internet hostname resolution:

  ``vi /etc/resolv.conf``

  Replace your-domain-name.com with your domain name, and replace 8.8.8.8 with your DNS IP. Note: you can look up your DNS server on your host machine using ``ipconfig /all`` on Windows, or ``cat /etc/resolv.conf`` under Linux. ::

    search your-domain-name.com
    nameserver 8.8.8.8
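Since ``resolv.conf`` is plain text, a short ``awk`` filter confirms which resolvers a file configures. A sketch against a sample copy (on the node itself you would point it at ``/etc/resolv.conf``):

```shell
# List the nameserver entries in a resolv.conf-style file
# (sample written to a temp path for illustration).
rc=$(mktemp)
printf 'search your-domain-name.com\nnameserver 8.8.8.8\n' > "$rc"

awk '$1 == "nameserver" {print $2}' "$rc"   # prints: 8.8.8.8
rm -f "$rc"
```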
* Check that a ping to your host machine works. This means that the management segment is available::

    ping 10.0.0.1

* Now check to make sure that Internet access is working properly::

    ping google.com

* Next, set up the packages repository:

  ``vi /etc/yum.repos.d/puppet.repo``::

    [puppetlabs]
    name=Puppet Labs Packages
    baseurl=http://yum.puppetlabs.com/el/$releasever/products/$basearch/
    enabled=1
    gpgcheck=1
    gpgkey=http://yum.puppetlabs.com/RPM-GPG-KEY-puppetlabs

* Install Puppet Master::

    rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
    yum upgrade
    yum install puppet-server-2.7.19
    service puppetmaster start
    chkconfig puppetmaster on
    service iptables stop
    chkconfig iptables off
* Finally, make sure to turn off SELinux::

    sed -i s/SELINUX=.*/SELINUX=disabled/ /etc/selinux/config
    setenforce 0
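The ``sed`` command above rewrites whatever ``SELINUX=`` value is currently present. If you want to see exactly what it does before touching the real ``/etc/selinux/config``, you can try it on a throwaway copy first:

```shell
# Demonstrate the SELINUX= rewrite on a temp copy, leaving the real
# /etc/selinux/config untouched. The SELINUXTYPE= line is not matched,
# because the pattern requires a literal "SELINUX=".
tmp=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$tmp"

sed -i 's/SELINUX=.*/SELINUX=disabled/' "$tmp"

grep '^SELINUX=' "$tmp"   # SELINUX=disabled
rm -f "$tmp"
```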
@ -0,0 +1,84 @@
Overview
--------

Because Puppet is additive only, there is no built-in ability to revert changes as you would in a typical application deployment.
If a change needs to be backed out, you must explicitly add configuration to reverse it, check this configuration in,
and promote it to production using the pipeline. This means that if a breaking change does get deployed into production,
typically a manual fix is applied, with the proper fix checked into version control afterwards.

Fuel gives you the ability to isolate code changes while developing, and minimizes the headaches
of maintaining multiple environments serviced by one Puppet server.

Environments
------------

Puppet supports putting nodes into environments. This maps cleanly to your development, QA, and production life cycles,
and it's a way to hand out different code to different nodes.

* On the Master/Server Node

  The Puppet master tries to find modules using its modulepath setting, typically something like ``/etc/puppet/modules``.
  You usually just set this value once in your ``/etc/puppet/puppet.conf`` and that's it.
  Environments expand on this and give you the ability to use different settings for different environments.

  You can specify several search paths. The following example dynamically sets the modulepath
  so Puppet will check a per-environment folder for a module before serving it from the main set::

    [master]
    modulepath = $confdir/$environment/modules:$confdir/modules

    [production]
    manifest = $confdir/manifests/site.pp

    [development]
    manifest = $confdir/$environment/manifests/site.pp

* On the Agent Node

  Once an agent node makes a request, the Puppet master is informed of its environment.
  If you don't specify an environment, the agent uses the default ``production`` environment.

  To set an environment agent-side, just specify the environment setting in the ``[agent]`` block of ``puppet.conf``::

    [agent]
    environment = development
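Since ``puppet.conf`` is a simple INI-style file, you can check which environment the ``[agent]`` section sets with a short ``awk`` filter. A sketch against a sample file (the temp path is illustrative; on a live node, ``puppet agent --configprint environment`` should report the same value):

```shell
# Extract the "environment" value from the [agent] section of a
# puppet.conf-style INI file (sample file written to a temp path).
conf=$(mktemp)
cat > "$conf" <<'EOF'
[master]
modulepath = $confdir/$environment/modules:$confdir/modules

[agent]
environment = development
EOF

# Track the current [section]; print the value only inside [agent].
awk -F' *= *' '/^\[/{sect=$0} sect=="[agent]" && $1=="environment" {print $2}' "$conf"
# prints: development
rm -f "$conf"
```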
Deployment pipeline
-------------------

* Deploy

  In order to deploy multiple environments that don't interfere with each other,
  you should specify the ``$deployment_id`` option in ``/etc/puppet/manifests/site.pp``. Set it to an even integer value (the valid range is 0..254).

  First of all, it is involved in the dynamic environment-based tag generation, and that tag is applied globally to all resources on each node.
  It is also used for the keepalived daemon, for which a unique virtual_router_id is evaluated from it.

* Clean/Revert

  At this stage you just need to make sure the environment is returned to its original state.

* Puppet node deactivate

  This will ensure that any resources exported by that node stop appearing in the catalogs served to the agent nodes:

  ``puppet node deactivate <node>``

  where <node> is the fully qualified domain name as shown by ``puppet cert list --all``.

  You can deactivate nodes manually one by one, or execute the following command to deactivate all of them at once:

  ``puppet cert list --all | awk '! /DNS:puppet/ { gsub(/"/, "", $2); print $2}' | xargs puppet node deactivate``
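The awk filter in that pipeline strips the quotes from each certificate name and skips the master's own cert (the line containing ``DNS:puppet``). You can preview which nodes it would select by feeding it sample cert-list output; the hostnames below are made up for illustration:

```shell
# Preview the node names the awk filter would hand to
# "puppet node deactivate". The cert-list lines are fabricated
# samples in the usual '+ "<fqdn>" (<fingerprint>)' shape.
sample='+ "fuel-controller-01.example.com" (AA:BB)
+ "puppet.example.com" (DNS:puppet, DNS:puppet.example.com) (CC:DD)
+ "fuel-compute-01.example.com" (EE:FF)'

echo "$sample" | awk '! /DNS:puppet/ { gsub(/"/, "", $2); print $2}'
# prints:
# fuel-controller-01.example.com
# fuel-compute-01.example.com
```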
* Redeploy

  Fire up the Puppet agent again to apply the desired node configuration.

Links
-----

* http://puppetlabs.com/blog/a-deployment-pipeline-for-infrastructure/
* http://docs.puppetlabs.com/guides/environment.html
@ -0,0 +1,5 @@
Known Issues and Workarounds
----------------------------

.. include:: /pages/frequently-asked-questions/known-issues-and-workarounds/0010-rabbitmq.rst
.. include:: /pages/frequently-asked-questions/known-issues-and-workarounds/0020-galera.rst
@ -1,6 +1,4 @@
Common Technical Issues
-----------------------

#. **[Q]** Why did you decide to provide OpenStack packages through your own repository at http://download.mirantis.com?

   **[A]** We are fully committed to providing our customers with working and stable bits and pieces in order to make successful OpenStack deployments. It is important to mention that we do not distribute our own version of OpenStack; rather, we provide a plain vanilla distribution. So there is no vendor lock-in, and our repository just keeps the history of OpenStack packages certified to work with our Puppet manifests.

   The benefit is that at any moment in time you can install any OpenStack version you want. If you are running Essex, you just need to use the Puppet manifests which reference OpenStack packages for Essex from our repository. Once Folsom came out, we added new OpenStack packages for Folsom to our repository and created a separate branch with the corresponding Puppet manifests (which, in turn, reference these packages). With EPEL this would not be possible, as it only keeps the latest version of OpenStack packages.

[MISSING CONTENT]
@ -0,0 +1,8 @@
Other Questions
---------------

#. **[Q]** Why did you decide to provide OpenStack packages through your own repository at http://download.mirantis.com?

   **[A]** We are fully committed to providing our customers with working and stable bits and pieces in order to make successful OpenStack deployments. It is important to mention that we do not distribute our own version of OpenStack; rather, we provide a plain vanilla distribution. So there is no vendor lock-in, and our repository just keeps the history of OpenStack packages certified to work with our Puppet manifests.

   The benefit is that at any moment in time you can install any OpenStack version you want. If you are running Essex, you just need to use the Puppet manifests which reference OpenStack packages for Essex from our repository. Once Folsom came out, we added new OpenStack packages for Folsom to our repository and created a separate branch with the corresponding Puppet manifests (which, in turn, reference these packages). With EPEL this would not be possible, as it only keeps the latest version of OpenStack packages.
@ -0,0 +1,91 @@
RabbitMQ
^^^^^^^^

**At least one RabbitMQ node must remain operational**

**Issue:**
All RabbitMQ nodes must not be shut down simultaneously. RabbitMQ requires
that, after a full shutdown of the cluster, the first node brought up should
be the last one that was shut down.

**Workaround:**
There are 2 possible scenarios, depending on the shutdown results.

**1. The RabbitMQ master node is alive and can be started.**

The FUEL installation replaces the ``/etc/init.d/rabbitmq-server`` init scripts for RHEL/CentOS and Ubuntu with customized versions. These scripts attempt to start RabbitMQ 5 times, which gives the RabbitMQ master node the time it needs to start
after a complete power loss.
It is recommended to power up all nodes and then check that the RabbitMQ server started on all of them. All nodes should start automatically.

**2. It is impossible to start the RabbitMQ master node (hardware or system failure)**

There is no easy automatic way to resolve this situation.
The proposed solution is to delete the mirrored queues directly from mnesia (the RabbitMQ database):

1. Select any alive node. Run

   ``erl -mnesia dir '"/var/lib/rabbitmq/mnesia/rabbit\@<failed_controller_name>"'``

2. Run ``mnesia:start().`` in the Erlang console.

3. Compile and run the following Erlang script::

     AllTables = mnesia:system_info(tables),
     DataTables = lists:filter(fun(Table) -> Table =/= schema end,
                               AllTables),
     RemoveTableCopy = fun(Table,Node) ->
         Nodes = mnesia:table_info(Table,ram_copies) ++
                 mnesia:table_info(Table,disc_copies) ++
                 mnesia:table_info(Table,disc_only_copies),
         case lists:member(Node,Nodes) of
             true -> mnesia:del_table_copy(Table,Node);
             false -> ok
         end
     end,
     [RemoveTableCopy(Tbl,'rabbit@<failed_controller_name>') || Tbl <- DataTables],
     rpc:call('rabbit@<failed_controller_name>',mnesia,stop,[]),
     rpc:call('rabbit@<failed_controller_name>',mnesia,delete_schema,[SchemaDir]),
     RemoveTableCopy(schema,'rabbit@<failed_controller_name>').

4. Exit the Erlang console with ``halt().``

5. Run ``service rabbitmq-server start``

**Background:** See http://comments.gmane.org/gmane.comp.networking.rabbitmq.general/19792.
@ -1,9 +1,6 @@
Galera
^^^^^^

Galera cluster has no built-in restart or shutdown mechanism
------------------------------------------------------------
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

**Issue:**
Galera cluster cannot be simply started or stopped. It is supposed to work continuously.
@ -12,7 +9,7 @@ Galera cluster cannot be simply started or stopped. It is supposed to work conti
The right way to get Galera up and working
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Galera, as high availability software, does not include any built-in full cluster shutdown or restart sequence.
@ -271,7 +268,7 @@ starting from this found node - the one with the most recent non-damaged replica
Useful links
~~~~~~~~~~~~
^^^^^^^^^^^^

* Galera documentation from Galera authors:
@ -1,11 +0,0 @@
Fuel provides the following important bits in order to streamline the process of installing and managing OpenStack:

* Automation & instructions to install a master node with Puppet Master and Cobbler
* Snippets, kickstart, and preseed files for Cobbler
* Puppet manifests for all OpenStack components

In order to use Fuel, one should create a master node first. Then a configuration should be supplied for an OpenStack installation: the description of the physical nodes, the layout of OpenStack components, as well as the desired OpenStack settings. After that, Fuel automatically performs the deployment procedure according to the reference architecture, with built-in high availability for OpenStack components. It performs bare-metal provisioning of the hardware nodes first, and then does the installation and setup of an OpenStack cloud:

.. image:: https://docs.google.com/drawings/pub?id=15vTTG2_575M7-kOzwsYyDmQrMgCPT2joLF2Cgiyzv7Q&w=678&h=617
@ -1,11 +1,49 @@
In this section, you'll learn how to do an actual installation of
OpenStack using Fuel. In addition to getting a feel for the steps
involved, you'll also gain some familiarity with some of your
customization options. While Fuel does provide several different
deployment topologies out of the box, it's common to want to tweak
those architectures for your own situation, so you'll get some practice
moving certain features around from the standard installation.

Introduction
------------
The first step, however, is to commit to a deployment template. A
fairly balanced small size, yet fully featured, deployment is the
Multi-node (HA) Swift Compact deployment, so that's what we'll be using
through the rest of this guide.

You can follow these instructions in order to get a production-grade OpenStack installation on hardware, or you can just do a dry run using VirtualBox to get a feel for how Fuel works.

If you decide to give Fuel a try using VirtualBox, you will need the latest stable VirtualBox (version 4.2.4 at the moment), as well as a stable host system, preferably Mac OS 10.7.x, CentOS 6.3, or Ubuntu 12.04. Windows 8 also works, but is not recommended. VirtualBox has to be installed with the "extension pack", which enables PXE boot capabilities on Intel network adapters.

The list of certified hardware configurations is coming up in one of the next versions of Fuel.
Real world installations require a physical hardware infrastructure,
but you can easily deploy a small simulation cloud on a single
physical machine using VirtualBox. You can follow these instructions
in order to install an OpenStack cloud into a test environment using
VirtualBox, or to get a production-grade installation using actual
hardware.

Before you start
----------------

Before you begin your installation, you will need to make several
decisions:

* **OpenStack features.** You must choose which of the optional OpenStack features you want. For example, you must decide whether you want to install Swift, whether you want Glance to use Swift for image storage, whether you want Cinder for block storage, and whether you want nova-network or Quantum to handle your network connectivity. In the case of this example, we will be installing Swift, and Glance will be using it. We'll also be using Cinder for block storage. To simplify the installation, we'll stick with nova-network over Quantum.
* **Deployment topology.** The first decision is whether your deployment requires high availability. If you do choose to do an HA deployment, you have a choice regarding the number of controllers you want to have. Following the recommendations in the previous section for a typical HA topology, we will use 3 OpenStack controllers.
* **Cobbler server and Puppet Master.** The heart of a Fuel install is the combination of Puppet Master and Cobbler used to create your resources. Although Cobbler and Puppet Master can be installed on separate machines, it is common practice to install both on a single machine for small to medium size clouds, and that's what we'll be doing in this example.
* **Domain name.** Puppet clients generate a Certificate Signing Request (CSR), which is then signed by Puppet Master. The signed certificate can then be used to authenticate the client during provisioning. Certificate generation requires a fully qualified hostname, so you must choose a domain name to be used in your installation. We'll leave this up to you.
* **Network addresses.** OpenStack requires a minimum of three networks. If you are deploying on physical hardware, two of them -- the public network and the internal, or management, network -- must be routable in your networking infrastructure. Additionally, a set of private network addresses should be selected for automatic assignment to guest VMs. (These are fixed IPs for the private network.) In our case, we are allocating network addresses as follows:

  * Public network: 10.0.1.0/24
  * Internal network: 10.0.0.0/24
  * Private network: 192.168.0.0/16

* **Network interfaces.** All of those networks need to be assigned to the available NIC cards on the allocated machines. Additionally, if a fourth NIC is available, Cinder or block storage traffic can also be separated and delegated to the fourth NIC. In our case, we're assigning networks as follows:

  * Public network: eth1
  * Internal network: eth0
  * Private network: eth2
  * Networking for Cinder: eth3

If you run into any issues during the installation, please check :ref:`common-technical-issues` for common problems and resolutions.
@ -1,17 +1,109 @@
|
|||
Infrastructure allocation
-------------------------

Machines
^^^^^^^^

The next step is to make sure that you have all of the required hardware and software in place.

At the very minimum, you need the following machines in your data center:

* 1x Puppet Master and Cobbler server (called "fuel-pm", where "pm" stands for Puppet Master). You can also choose to run the Puppet Master and Cobbler server on separate nodes.
* 3x OpenStack controllers (called "fuel-controller-01", "fuel-controller-02", and "fuel-controller-03")
* 1x OpenStack compute node (called "fuel-compute-01")

In a VirtualBox environment, allocate the following resources for these machines:

* 1+ vCPU
* 512+ MB of RAM for controller nodes
* 1024+ MB of RAM for compute nodes
* 1024+ MB of RAM for the Puppet Master and Cobbler server node
* 8+ GB of HDD (enable dynamic virtual drive expansion in order to save some disk space)

Software
^^^^^^^^

You can download the latest release of Fuel here:

[LINK HERE]

Additionally, you can download a pre-built Puppet Master/Cobbler ISO, which will cut down the amount of time you'll need to spend getting Fuel up and running. You can download the ISO here:

[LINK HERE]
Hardware for a virtual installation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

For a virtual installation, you need only a single machine. You can get by with 8 GB of RAM, but 16 GB is better. To actually perform the installation, you need a way to create virtual machines. This guide assumes that you are using the latest version of VirtualBox (currently 4.2.6), which you can download from:

`https://www.virtualbox.org/wiki/Downloads <https://www.virtualbox.org/wiki/Downloads>`_

You'll need to run VirtualBox on a stable host system. Mac OS 10.7.x, CentOS 6.3, and Ubuntu 12.04 are preferred; results on other operating systems are unpredictable.

You will need to allocate the following resources:
* 1 server to host both Puppet Master and Cobbler. The minimum configuration for this server is:

  * 32-bit or 64-bit architecture
  * 1+ CPU or vCPU
  * 1024+ MB of RAM
  * 16+ GB of HDD for OS and Linux distro storage

* 3 servers to act as OpenStack controllers (called fuel-controller-01, fuel-controller-02, and fuel-controller-03). The minimum configuration for a controller in Swift Compact mode is:

  * 64-bit architecture
  * 1+ CPU
  * 1024+ MB of RAM
  * 8+ GB of HDD for base OS
  * 10+ GB of HDD for Swift

* 1 server to act as the OpenStack compute node (called fuel-compute-01). The minimum configuration for a compute node with Cinder deployed on it is:

  * 64-bit architecture
  * 2+ CPU, with Intel VT-x or AMD-V virtualization technology
  * 2048+ MB of RAM
  * 50+ GB of HDD for OS, instances, and ephemeral storage
  * 50+ GB of HDD for Cinder

Instructions for creating these resources will be provided in :ref:`Installing the OS using Fuel <Install-OS-Using-Fuel>`.
Hardware for a physical infrastructure installation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The amount of hardware necessary for an installation depends on the choices you have made above. This sample installation requires the following hardware:

* 1 server to host both Puppet Master and Cobbler. The minimum configuration for this server is:

  * 32-bit or 64-bit architecture
  * 1+ CPU or vCPU
  * 1024+ MB of RAM
  * 16+ GB of HDD for OS and Linux distro storage

* 3 servers to act as OpenStack controllers (called fuel-controller-01, fuel-controller-02, and fuel-controller-03). The minimum configuration for a controller in Swift Compact mode is:

  * 64-bit architecture
  * 1+ CPU
  * 1024+ MB of RAM
  * 400+ GB of HDD

* 1 server to act as the OpenStack compute node (called fuel-compute-01). The minimum configuration for a compute node with Cinder deployed on it is:

  * 64-bit architecture
  * 2+ CPU, with Intel VT-x or AMD-V virtualization technology
  * 2048+ MB of RAM
  * 1+ TB of HDD

(If you choose to deploy Quantum on a separate node, you will need an additional server with specifications comparable to the controller nodes.)

For a list of certified hardware configurations, please contact the Mirantis Services team.
Network Setup
-------------
Installing & Configuring Puppet Master
--------------------------------------

Now that you know what you're going to install and where you're going to install it, it's time to begin putting the pieces together. To do that, you'll need to create the Puppet Master and Cobbler servers, which will actually provision and set up your OpenStack nodes.

If you already have a Puppet Master installed, you can skip this installation step and go directly to :ref:`puppet-master-stored-config`.

Initial Setup
~~~~~~~~~~~~~

Installing the Puppet Master is a one-time procedure for the entire infrastructure. Once done, the Puppet Master will act as a single point of control for all of your servers, and you will never have to return to these installation steps again.
The deployment of the Puppet Master server -- named fuel-pm in these instructions -- varies slightly between the physical and simulation environments. In a physical infrastructure, fuel-pm must have a network presence on the same network the physical machines will ultimately PXE boot from. In a simulation environment, fuel-pm only needs virtual (host-only) network connectivity.

The easiest way to create an instance of fuel-pm is to download the Mirantis custom ISO from:

[LINK HERE]

This ISO can be used to create fuel-pm on a physical or virtual machine based on CentOS-6.3-x86_64-minimal.iso. If for some reason you can't use this ISO, follow the instructions in :ref:`Creating the Puppet master <Create-PM>` to create your own fuel-pm, then skip ahead to :ref:`Configuring fuel-pm <Configuring-Fuel-PM>`.
Network setup
^^^^^^^^^^^^^

OpenStack requires a minimum of three distinct networks: internal (or management), public, and private. The simplest and best mapping is to assign each network to a different physical interface. However, not all machines have three NICs, and OpenStack can be configured and deployed with only two physical NICs, collapsing the internal and public traffic onto a single NIC.

If you are deploying to a simulation environment, however, it makes sense to just allocate three NICs to each VM in your OpenStack infrastructure. For VirtualBox, this means creating three host-only interfaces -- vboxnet0, vboxnet1, and vboxnet2 -- for the internal, public, and private networks, respectively.

Finally, we must assign subnets to the internal, public, and private networks, and IP addresses to the fuel-pm, fuel-controller, and fuel-compute nodes. For a real deployment using physical infrastructure, you must work with your IT department to determine which IPs to use; for the purposes of this exercise, we will assume the following network and IP assignments:

#. 10.0.0.0/24: management or internal network, for communication between the Puppet Master and Puppet clients, as well as PXE/TFTP/DHCP for Cobbler
#. 10.0.1.0/24: public network, for the High Availability (HA) Virtual IP (VIP), as well as floating IPs assigned to OpenStack guest VMs
#. 192.168.0.0/16: private network, for fixed IPs automatically assigned to guest VMs by OpenStack upon their creation (more on this later)
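If you substitute your own subnets for the ones above, they must not overlap. That check can be mechanized; a small sketch in plain POSIX shell, using this guide's CIDR blocks:

```shell
# Convert a dotted-quad address to a 32-bit integer.
ip2int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# Print "overlap" or "disjoint" for two CIDR blocks. Two blocks overlap
# exactly when their network addresses agree under the shorter prefix mask.
overlap() {
  n1=${1%/*}; p1=${1#*/}; n2=${2%/*}; p2=${2#*/}
  m1=$(( (0xFFFFFFFF << (32 - p1)) & 0xFFFFFFFF ))
  m2=$(( (0xFFFFFFFF << (32 - p2)) & 0xFFFFFFFF ))
  i1=$(ip2int "$n1"); i2=$(ip2int "$n2")
  if [ $(( i1 & m2 )) -eq $(( i2 & m2 )) ] || [ $(( i1 & m1 )) -eq $(( i2 & m1 )) ]; then
    echo overlap
  else
    echo disjoint
  fi
}

overlap 10.0.0.0/24  10.0.1.0/24      # -> disjoint
overlap 10.0.0.0/24  192.168.0.0/16   # -> disjoint
overlap 10.0.1.0/24  192.168.0.0/16   # -> disjoint
```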
Next we need to allocate a static IP address from the internal network to eth1 for fuel-pm, and to eth0 for the controller, compute, and Quantum nodes. For High Availability (HA) we must also choose and assign an IP address from the public network to HAProxy running on the controllers. You can configure network addresses and netmasks according to your needs, but our instructions will assume the following network settings on the interfaces:

#. eth0: internal management network, where each machine will have a static IP address

   * 10.0.0.100 for the Puppet Master
   * 10.0.0.101-10.0.0.103 for controller nodes
   * 10.0.0.110-10.0.0.126 for compute nodes
   * 255.255.255.0 network mask
   * for a VirtualBox environment, the host machine will be 10.0.0.1

#. eth1: public network

   * 10.0.1.10: public Virtual IP for access to the Horizon GUI (the OpenStack management interface)

#. eth2: for communication between OpenStack VMs; it carries no IP address and runs with promiscuous mode enabled
If you are on VirtualBox, please create the following host-only adapters, or make sure they already exist and are configured correctly:

* VirtualBox -> File -> Preferences...

  * Network -> Add host-only network (vboxnet0)

    * IPv4 address: 10.0.0.1
    * IPv4 mask: 255.255.255.0
    * DHCP server: disabled

  * Network -> Add host-only network (vboxnet1)

    * IPv4 address: 10.0.1.1
    * IPv4 mask: 255.255.255.0
    * DHCP server: disabled

  * Network -> Add host-only network (vboxnet2)

    * IPv4 address: 0.0.0.0
    * IPv4 mask: 255.255.255.0
    * DHCP server: disabled

After creating these interfaces, reboot VirtualBox to make sure that its DHCP server isn't running in the background.

Once this is set up, you can access the host machine from the VMs via any of these networks at the preassigned IP addresses (i.e., 10.0.0.1 from eth0, or 10.0.1.1 from eth1).

Installing on Windows isn't recommended, but if you're attempting it, you will also need to set up the IP address and network mask under Control Panel > Network and Internet > Network and Sharing Center for the VirtualBox Host-Only Network adapter.
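The same host-only setup can be scripted with ``VBoxManage`` instead of clicking through the GUI. This is a hedged sketch -- the subcommands are from the VirtualBox 4.2 era, and each ``hostonlyif create`` allocates the next vboxnetN name in sequence, so verify the behavior against your VirtualBox version. The block below only *writes* the script for review; run the generated file yourself.

```shell
# Generate (but do not run) a script that creates vboxnet0-2 and disables
# VirtualBox's DHCP on them, mirroring the GUI steps above.
cat > setup-hostonly.sh <<'EOF'
#!/bin/sh
set -e
# Each "hostonlyif create" allocates the next vboxnetN name in sequence.
VBoxManage hostonlyif create
VBoxManage hostonlyif create
VBoxManage hostonlyif create
VBoxManage hostonlyif ipconfig vboxnet0 --ip 10.0.0.1 --netmask 255.255.255.0
VBoxManage hostonlyif ipconfig vboxnet1 --ip 10.0.1.1 --netmask 255.255.255.0
VBoxManage hostonlyif ipconfig vboxnet2 --ip 0.0.0.0 --netmask 255.255.255.0
# Keep VirtualBox's own DHCP server out of the way on all three networks.
for n in vboxnet0 vboxnet1 vboxnet2; do
  VBoxManage dhcpserver remove --ifname "$n" || true
done
EOF
chmod +x setup-hostonly.sh
echo "wrote setup-hostonly.sh"
```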
Creating fuel-pm on a Physical Machine
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If you plan to provision the Puppet Master on hardware, you need to create a bootable DVD or USB disk from the downloaded ISO, then make sure that you can boot your server from the DVD or USB drive. Once you've booted from the fuel-pm ISO, follow the instructions in the installation routine.

Creating fuel-pm on a Virtual Machine
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The process of creating a virtual machine in VirtualBox depends on whether your deployment is purely virtual or consists of a virtual fuel-pm controlling physical hardware. If your deployment is purely virtual, then Adapter 2 should be a host-only adapter attached to vboxnet0; but if your deployment infrastructure consists of a virtual fuel-pm controlling physical machines, Adapter 2 must be a Bridged Adapter, connected to whatever network interface of the host machine is connected to your physical machines.

Start up VirtualBox and create a new machine as follows:

* Machine -> New...

  * Name: fuel-pm
  * Type: Linux
  * Version: Red Hat (64 Bit) or Ubuntu (64 Bit)
  * Memory: 1024 MB
  * Drive space: 16 GB HDD

* Machine -> Settings... -> Network

  * Adapter 1

    * Enable Network Adapter
    * Attached to: Host-only Adapter
    * Name: vboxnet0

  * Adapter 2

    * Enable Network Adapter
    * Attached to: Bridged Adapter
    * Name: epn1 (Wi-Fi Airport), or whatever network interface of the host machine has Internet access

* It is important that the host-only Adapter 1 goes first, as Cobbler will use vboxnet0 for PXE, and VirtualBox boots from LAN on the first available network adapter.
* A third adapter is not really needed for the Puppet Master, as it is only required on OpenStack hosts for communication between tenant VMs.

OS Installation
~~~~~~~~~~~~~~~
* Pick and download an operating system image. It will be used as the base OS for the Puppet Master node:

  * `CentOS 6.3 <http://isoredirect.centos.org/centos/6/isos/x86_64/>`_: download CentOS-6.3-x86_64-minimal.iso
  * `RHEL 6.3 <https://access.redhat.com/home>`_: download rhel-server-6.3-x86_64-boot.iso
  * `Ubuntu 12.04 <https://help.ubuntu.com/community/Installation/MinimalCD>`_: download the "Precise Pangolin" Minimal CD

* Mount the image to the server's CD/DVD drive. In the case of VirtualBox, mount it to the fuel-pm virtual machine:

  * Machine -> Settings... -> Storage -> CD/DVD Drive -> Choose a virtual CD/DVD disk file...

* Boot the server (or VM) from the CD/DVD drive and install the chosen OS:

  * Choose the root password carefully
* Set up the eth0 interface. It will be used for communication between the Puppet Master and Puppet clients, as well as for Cobbler:

  * CentOS/RHEL

    * ``vi /etc/sysconfig/network-scripts/ifcfg-eth0``::

        DEVICE="eth0"
        BOOTPROTO="static"
        IPADDR="10.0.0.100"
        NETMASK="255.255.255.0"
        ONBOOT="yes"
        TYPE="Ethernet"
        PEERDNS="no"

    * Apply network settings::

        /etc/sysconfig/network-scripts/ifup eth0

  * Ubuntu

    * ``vi /etc/network/interfaces`` and add the configuration for the eth0 interface::

        auto eth0
        iface eth0 inet static
        address 10.0.0.100
        netmask 255.255.255.0
        network 10.0.0.0

    * Apply network settings::

        /etc/init.d/networking restart

* Add DNS for Internet hostname resolution. On both CentOS/RHEL and Ubuntu: ``vi /etc/resolv.conf`` (replace "your-domain-name.com" with your domain name, and "8.8.8.8" with your DNS IP). Note: you can look up your DNS server on your host machine using ``ipconfig /all`` on Windows, or ``cat /etc/resolv.conf`` on Linux. ::

      search your-domain-name.com
      nameserver 8.8.8.8

* Check that a ping to your host machine works. This means that the management segment is reachable::

      ping 10.0.0.1
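Since every node gets the same ifcfg-eth0 apart from its IP address, a tiny templating helper keeps the files consistent across nodes. A sketch -- ``write_ifcfg`` is a hypothetical helper, not part of Fuel; it renders to whatever file you name, so write to a temp file for review and copy into ``/etc/sysconfig/network-scripts/`` once you're satisfied:

```shell
# Render the ifcfg-eth0 file shown above for a given device and static IP.
write_ifcfg() {
  # $1: device name, $2: static IP, $3: output file
  cat > "$3" <<EOF
DEVICE="$1"
BOOTPROTO="static"
IPADDR="$2"
NETMASK="255.255.255.0"
ONBOOT="yes"
TYPE="Ethernet"
PEERDNS="no"
EOF
}

out=$(mktemp)
write_ifcfg eth0 10.0.0.101 "$out"   # e.g. for fuel-controller-01
cat "$out"
```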
* Set up the eth1 interface. It will provide Internet access for the Puppet Master:

  * CentOS/RHEL

    * ``vi /etc/sysconfig/network-scripts/ifcfg-eth1``::

        DEVICE="eth1"
        BOOTPROTO="dhcp"
        ONBOOT="yes"
        TYPE="Ethernet"

    * Apply network settings::

        /etc/sysconfig/network-scripts/ifup eth1

  * Ubuntu

    * ``vi /etc/network/interfaces`` and add the configuration for the eth1 interface::

        auto eth1
        iface eth1 inet dhcp

    * Apply network settings::

        /etc/init.d/networking restart

* Check that Internet access works::

      ping google.com
* Set up the package repository:

  * CentOS/RHEL

    * ``vi /etc/yum.repos.d/puppet.repo``::

        [puppetlabs]
        name=Puppet Labs Packages
        baseurl=http://yum.puppetlabs.com/el/$releasever/products/$basearch/
        enabled=1
        gpgcheck=1
        gpgkey=http://yum.puppetlabs.com/RPM-GPG-KEY-puppetlabs

  * Ubuntu

    * From the command line, run::

        wget http://apt.puppetlabs.com/puppetlabs-release-precise.deb
        sudo dpkg -i puppetlabs-release-precise.deb

* Install the Puppet Master:

  * CentOS/RHEL::

      rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
      yum upgrade
      yum install puppet-server-2.7.19
      service puppetmaster start
      chkconfig puppetmaster on
      service iptables stop
      chkconfig iptables off

  * Ubuntu::

      sudo apt-get update
      apt-get install puppet puppetmaster
      update-rc.d puppetmaster defaults

* Set the hostname:

  * CentOS/RHEL

    * ``vi /etc/sysconfig/network``::

        HOSTNAME=fuel-pm

  * Ubuntu

    * ``vi /etc/hostname``::

        fuel-pm

* On both CentOS/RHEL and Ubuntu, ``vi /etc/hosts`` (replace "your-domain-name.com" with your domain name)::

    127.0.0.1         localhost fuel-pm
    10.0.0.100        fuel-pm.your-domain-name.com fuel-pm
    10.0.0.101        fuel-controller-01.your-domain-name.com fuel-controller-01
    10.0.0.102        fuel-controller-02.your-domain-name.com fuel-controller-02
    10.0.0.103        fuel-controller-03.your-domain-name.com fuel-controller-03
    10.0.0.110        fuel-compute-01.your-domain-name.com fuel-compute-01

* Run ``hostname fuel-pm`` or reboot to apply the hostname.
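Because the same /etc/hosts block has to be reproduced on every node, generating it once is less error-prone than retyping it. A sketch -- ``DOMAIN`` and the IP plan are the values assumed in this guide, and the output goes to a temp file for review rather than straight into /etc/hosts:

```shell
DOMAIN="your-domain-name.com"
HOSTS_FILE=$(mktemp)

{
  echo "127.0.0.1   localhost fuel-pm"
  echo "10.0.0.100  fuel-pm.$DOMAIN fuel-pm"
  i=1
  for ip in 10.0.0.101 10.0.0.102 10.0.0.103; do
    printf '%-11s fuel-controller-%02d.%s fuel-controller-%02d\n' "$ip" "$i" "$DOMAIN" "$i"
    i=$((i + 1))
  done
  echo "10.0.0.110  fuel-compute-01.$DOMAIN fuel-compute-01"
} > "$HOSTS_FILE"

cat "$HOSTS_FILE"   # review, then append to /etc/hosts on each node
```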
.. _puppet-master-stored-config:

Enabling Stored Configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This section shows how to configure Puppet to use a technique called stored configuration. It is required by the Puppet manifests supplied with Fuel, so that they can save exported resources in the Puppet database. This makes use of PuppetDB.

* Install and configure PuppetDB:

  * CentOS/RHEL::

      yum install puppetdb puppetdb-terminus
      chkconfig puppetdb on

  * Ubuntu::

      apt-get install puppetdb puppetdb-terminus
      update-rc.d puppetdb defaults

* Disable SELinux on CentOS/RHEL (otherwise Puppet will not be able to connect to PuppetDB)::

    sed -i s/SELINUX=.*/SELINUX=disabled/ /etc/selinux/config
    setenforce 0

* Configure the Puppet Master to use storeconfigs:

  * ``vi /etc/puppet/puppet.conf`` and add the following to the ``[master]`` section::

      storeconfigs = true
      storeconfigs_backend = puppetdb

* Configure PuppetDB to use the correct hostname and port:

  * ``vi /etc/puppet/puppetdb.conf`` and add the following to the ``[main]`` section (replace "your-domain-name.com" with your domain name; if this file does not exist, it will be created)::

      server = fuel-pm.your-domain-name.com
      port = 8081

* Restart the Puppet Master to apply the settings. (Note: these operations may take about two minutes. You can verify that PuppetDB is running by executing ``telnet fuel-pm.your-domain-name.com 8081``.) ::

    service puppetmaster restart
    puppetdb-ssl-setup
    service puppetmaster restart
    service puppetdb restart
Troubleshooting PuppetDB and SSL
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

* If you have a problem with SSL and PuppetDB, remove the SSL files and re-run the SSL setup::

    service puppetdb stop
    rm -rf /etc/puppetdb/ssl
    puppetdb-ssl-setup
    service puppetdb start
Testing Puppet
~~~~~~~~~~~~~~

* Put a simple configuration into Puppet (replace "your-domain-name.com" with your domain name), so that when you run puppet on any node, it will display the corresponding "Hello world" message:

  * ``vi /etc/puppet/manifests/site.pp``::

      node /fuel-pm.your-domain-name.com/ {
          notify{"Hello world from fuel-pm": }
      }
      node /fuel-controller-01.your-domain-name.com/ {
          notify{"Hello world from fuel-controller-01": }
      }
      node /fuel-controller-02.your-domain-name.com/ {
          notify{"Hello world from fuel-controller-02": }
      }
      node /fuel-controller-03.your-domain-name.com/ {
          notify{"Hello world from fuel-controller-03": }
      }
      node /fuel-compute-01.your-domain-name.com/ {
          notify{"Hello world from fuel-compute-01": }
      }

* If you are planning to install Cobbler on the Puppet Master node as well, make configuration changes on the Puppet Master so that it actually knows how to provision software onto itself (replace "your-domain-name.com" with your domain name):

  * ``vi /etc/puppet/puppet.conf``::

      [main]
      # server
      server = fuel-pm.your-domain-name.com

      # enable plugin sync
      pluginsync = true

* Run the puppet agent and observe the "Hello world from fuel-pm" output::

    puppet agent --test
Installing Fuel
~~~~~~~~~~~~~~~

Next, copy the complete Fuel package onto your Puppet Master machine. Once Fuel is there, unpack the archive and supply the Fuel manifests to Puppet::

    tar -xzf <fuel-archive-name>.tar.gz
    cd <fuel-archive-name>
    cp -Rf deployment/puppet/* /etc/puppet/modules/
    service puppetmaster restart
.. _Configuring-Fuel-PM:

Configuring fuel-pm
-------------------

Once the installation is complete, you will need to finish the configuration to adjust for your own local values.

* Check network settings and connectivity, and correct any errors:

  * Check host connectivity by pinging the host machine::

      ping 10.0.0.1

  * Check that Internet access works by pinging an outside host::

      ping google.com

  * Check the hostname. Running ::

      hostname

    should return ::

      fuel-pm

    If not, set the hostname:

    * CentOS/RHEL: ``vi /etc/sysconfig/network``::

        HOSTNAME=fuel-pm

    * Ubuntu: ``vi /etc/hostname``::

        fuel-pm
* Check the fully qualified domain name (FQDN) value. Running ::

    hostname -f

  should return ::

    fuel-pm.your-domain-name.com

  If not, correct the /etc/resolv.conf file, replacing "your-domain-name.com" below with your actual domain name and "8.8.8.8" with your actual DNS server. (Note: you can look up your DNS server on your host machine using ``ipconfig /all`` on Windows, or ``cat /etc/resolv.conf`` on Linux.) ::

    search your-domain-name.com
    nameserver 8.8.8.8

* Run ::

    hostname fuel-pm

  or reboot to apply changes to the hostname.

* Add the OpenStack hostnames to your domain. You can do this by actually adding them to DNS, or by simply editing the /etc/hosts file. In either case, replace "your-domain-name.com" with your domain name. ``vi /etc/hosts``::

    127.0.0.1         localhost
    10.0.0.100        fuel-pm.your-domain-name.com fuel-pm
    10.0.0.101        fuel-controller-01.your-domain-name.com fuel-controller-01
    10.0.0.102        fuel-controller-02.your-domain-name.com fuel-controller-02
    10.0.0.103        fuel-controller-03.your-domain-name.com fuel-controller-03
    10.0.0.110        fuel-compute-01.your-domain-name.com fuel-compute-01
Enabling Stored Configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Fuel's Puppet manifests call for storing exported resources in the Puppet database using PuppetDB, so the next step is to configure Puppet to use a technique called stored configuration.

* Configure the Puppet Master to use storeconfigs:

  ``vi /etc/puppet/puppet.conf`` and add the following to the ``[master]`` section::

      storeconfigs = true
      storeconfigs_backend = puppetdb

* Configure PuppetDB to use the correct hostname and port:

  ``vi /etc/puppet/puppetdb.conf`` to create the ``puppetdb.conf`` file and add the following (replace ``your-domain-name.com`` with your domain name)::

      [main]
      server = fuel-pm.your-domain-name.com
      port = 8081

* Configure the Puppet Master's file server capability:

  ``vi /etc/puppet/fileserver.conf`` and append the following lines::

      [ssh_keys]
      path /var/lib/puppet/ssh_keys
      allow *

* Create a directory for the keys, generate the keys themselves, and give the directory appropriate permissions::

      mkdir /var/lib/puppet/ssh_keys
      cd /var/lib/puppet/ssh_keys
      ssh-keygen -f openstack
      chown -R puppet:puppet /var/lib/puppet/ssh_keys/

* Finally, set up SSL for PuppetDB and restart the puppetmaster and puppetdb services::

      service puppetmaster restart
      puppetdb-ssl-setup
      service puppetmaster restart
      service puppetdb restart

* **IMPORTANT**: Note that while these operations appear to finish quickly, it can actually take several minutes for puppetdb to complete its startup process. You'll know it has finished starting up when you can successfully telnet to port 8081::

      telnet fuel-pm.your-domain-name.com 8081
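Before restarting the services, it's worth confirming that the config edits from this section actually landed. A sketch -- ``check_stored_config`` is a hypothetical helper, demonstrated here against temp copies of the files; on a real system you would point it at the files under /etc/puppet:

```shell
# Verify that puppet.conf and puppetdb.conf carry the settings from this
# section, and print a one-line verdict.
check_stored_config() {
  # $1: puppet.conf path, $2: puppetdb.conf path
  if grep -q 'storeconfigs = true' "$1" &&
     grep -q 'storeconfigs_backend = puppetdb' "$1" &&
     grep -q 'port = 8081' "$2"; then
    echo "stored configuration: OK"
  else
    echo "stored configuration: INCOMPLETE"
  fi
}

tmpdir=$(mktemp -d)   # stand-ins for the real files under /etc/puppet
printf '[master]\nstoreconfigs = true\nstoreconfigs_backend = puppetdb\n' > "$tmpdir/puppet.conf"
printf '[main]\nserver = fuel-pm.your-domain-name.com\nport = 8081\n' > "$tmpdir/puppetdb.conf"
check_stored_config "$tmpdir/puppet.conf" "$tmpdir/puppetdb.conf"   # -> stored configuration: OK
```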
Testing Puppet
^^^^^^^^^^^^^^

Put a simple configuration into Puppet -- replace your-domain-name.com with your domain name -- so that when you run puppet on the various nodes, it will display the appropriate Hello world message:

``vi /etc/puppet/manifests/site.pp``::

    node /fuel-pm.your-domain-name.com/ {
        notify{"Hello world from fuel-pm": }
    }
    node /fuel-controller-01.your-domain-name.com/ {
        notify{"Hello world from fuel-controller-01": }
    }
    node /fuel-controller-02.your-domain-name.com/ {
        notify{"Hello world from fuel-controller-02": }
    }
    node /fuel-controller-03.your-domain-name.com/ {
        notify{"Hello world from fuel-controller-03": }
    }
    node /fuel-compute-01.your-domain-name.com/ {
        notify{"Hello world from fuel-compute-01": }
    }

If you are planning to install Cobbler on the Puppet Master node as well (as we are in this example), make configuration changes on the Puppet Master so that it actually knows how to provision software onto itself (replace your-domain-name.com with your domain name):

``vi /etc/puppet/puppet.conf``::

    [main]
    # server
    server = fuel-pm.your-domain-name.com

    # enable plugin sync
    pluginsync = true

Finally, to make sure everything is working properly, run the puppet agent and look for the "Hello world from fuel-pm" output::

    puppet agent --test
Troubleshooting PuppetDB and SSL
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The first time you run puppet, it's not unusual to have difficulties with the SSL setup. If so, remove the original files and start again, like so::

    service puppetmaster stop
    service puppetdb stop
    rm -rf /etc/puppetdb/ssl
    puppetdb-ssl-setup
    service puppetdb start
    service puppetmaster start

Again, remember that it may take several minutes before puppetdb is fully running, despite appearances to the contrary.
Installing & Configuring Cobbler
--------------------------------

Cobbler performs bare metal provisioning and initial installation of Linux on OpenStack nodes. Luckily, you already have a Puppet Master installed and Fuel includes instructions for installing Cobbler, so you can install Cobbler using Puppet in a few seconds, rather than doing it manually.

Installing Fuel
^^^^^^^^^^^^^^^

Installing Fuel is a simple matter of copying the complete Fuel package to fuel-pm and unpacking it in the proper location in order to supply the Fuel manifests to Puppet::

    tar -xzf <fuel-archive-name>.tar.gz
    cd <fuel-archive-name>
    cp -Rf deployment/puppet/* /etc/puppet/modules/
    service puppetmaster restart

From here, using Fuel is a matter of making sure it has the appropriate site.pp file from the Fuel distribution.
Using Puppet to install Cobbler
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

On fuel-pm, copy the contents of ::

    <FUEL_DIR>/deployment/puppet/cobbler/examples/site.pp

into your existing ::

    /etc/puppet/manifests/site.pp

file. The file has its own documentation, so it's a good idea to look through it to get a feel for the big picture and understand what's going on. The general idea is that this file sets certain parameters, such as networking information, then defines the OS distributions Cobbler will serve so they can be imported into Cobbler as it's installed. As you work through it, you will need to:

* Replace IP addresses and ranges according to your network setup, and replace "your-domain-name.com" with your domain name.
* Uncomment the required OS distributions. They will be downloaded and imported into Cobbler during the Cobbler installation.
* Change the location of the ISO image files to either a local mirror or the fastest available Internet mirror.
Let's take a look at some of the major points, and highlight where you
will need to make changes::

    ...
    # [server] IP address that will be used as address of cobbler server.
    # It is needed to download kickstart files, call cobbler API and
    # so on. Required.
    $server = '10.0.0.100'
This, remember, is the fuel-pm server, which is acting as both the
Puppet Master and the Cobbler server. ::

    # Interface for cobbler instances
    $dhcp_interface = 'eth1'

The Cobbler instance needs to provide DHCP to each of the new nodes,
so you will need to specify which interface will handle that. ::

    $dhcp_start_address = '10.0.0.201'
    $dhcp_end_address = '10.0.0.254'

Change the $dhcp_start_address and $dhcp_end_address to match the
network allocations you made earlier. The important thing is to make
sure there are no conflicts with the static IPs you are allocating.
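One quick way to catch such conflicts is to script the check. This is only an illustrative sketch; the addresses are the example values used throughout this guide, so substitute your own allocations:

```python
import ipaddress

# Example values from this guide; substitute your own allocations.
dhcp_start = ipaddress.ip_address("10.0.0.201")
dhcp_end = ipaddress.ip_address("10.0.0.254")
static_ips = ["10.0.0.100", "10.0.0.101", "10.0.0.102",
              "10.0.0.103", "10.0.0.110"]

# Any static IP that falls inside the DHCP pool is a conflict.
conflicts = [ip for ip in static_ips
             if dhcp_start <= ipaddress.ip_address(ip) <= dhcp_end]
print(conflicts)  # an empty list means the static IPs are safe
```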
    $dhcp_netmask = '255.255.255.0'
    $dhcp_gateway = '10.0.0.100'
    $domain_name = 'your-domain-name.com'

Change the $domain_name to your own domain name. ::
    $name_server = '10.0.0.100'
    $next_server = '10.0.0.100'
    $cobbler_user = 'cobbler'
    $cobbler_password = 'cobbler'
    $pxetimeout = '0'

    # Predefined mirror type to use: internal or external (should be removed soon)
    $mirror_type = 'external'

Change the $mirror_type to be external so Fuel knows to request
resources from Internet sources rather than having to set up your own
internal repositories.

The next step is to define the node itself, and the distributions it
will serve. ::
    ...
        type => $mirror_type,
      }

    node fuel-pm {

      class { 'cobbler::nat': nat_range => $nat_range }
    ...

The file assumes that you're installing Cobbler on a separate machine.
Since you're installing it on fuel-pm, change the node name here.

Next, you will need to uncomment the required OS distributions so that
they can be downloaded and imported into Cobbler during Cobbler
installation.

In this example we'll focus on CentOS, so uncomment these lines and
change the location of the ISO image files to either a local mirror or
the fastest available Internet mirror for
CentOS-6.3-x86_64-minimal.iso::
    ...
    # CentOS distribution
    # Uncomment the following section if you want CentOS image to be downloaded and imported into Cobbler
    # Replace "http://address/of" with valid hostname and path to the mirror where the image is stored

    Class[cobbler::distro::centos63_x86_64] ->
    Class[cobbler::profile::centos63_x86_64]

    class { cobbler::distro::centos63_x86_64:
      http_iso => "http://address/of/CentOS-6.3-x86_64-minimal.iso",
      ks_url   => "cobbler",
      require  => Class[cobbler],
    }

    class { cobbler::profile::centos63_x86_64: }

    # Ubuntu distribution
    # Uncomment the following section if you want Ubuntu image to be downloaded and imported into Cobbler
    # Replace "http://address/of" with valid hostname and path to the mirror where the image is stored
    ...

If you want Cobbler to serve Ubuntu or Red Hat distributions in
addition to CentOS, perform the same actions for those sections.

With those changes in place, Puppet knows that Cobbler must be
installed on the fuel-pm machine, and will also add the right distro
and profile. The CentOS image will be downloaded from the mirror and
imported into Cobbler as well.
Note that while we've set up the network so that external resources are
accessed through the 10.0.1.0/24 network, this configuration includes
Puppet commands to configure forwarding on the Cobbler node to make
external resources available via the 10.0.0.0/24 network, which is used
during the installation process (see enable_nat_all and
enable_nat_filter).

Finally, run the Puppet agent to actually install Cobbler on fuel-pm::

    puppet agent --test
Testing Cobbler
^^^^^^^^^^^^^^^

You can check that Cobbler is installed successfully by opening the
following URL from your host machine:

    http://fuel-pm/cobbler_web/ (u: cobbler, p: cobbler)

If fuel-pm doesn't resolve on your host machine, you can access the
Cobbler dashboard at:

    http://10.0.0.100/cobbler_web

At this point you should have a fully working instance of Cobbler,
fully configured and capable of installing the chosen OS (CentOS 6.3,
RHEL 6.3, or Ubuntu 12.04) on the target OpenStack nodes.

.. _Install-OS-Using-Fuel:

Installing the OS using Fuel
----------------------------

Now you're ready to start creating the OpenStack servers themselves.
The first step is to let Fuel's Cobbler kickstart and preseed files
assist in the installation of operating systems on the target servers.

Initial setup
^^^^^^^^^^^^^
If you are using hardware, make sure it is capable of PXE booting over
the network from Cobbler. You'll also need each server's MAC address.

If you're using VirtualBox, you will need to create the corresponding
virtual machines for your OpenStack nodes. Follow these instructions
to create machines named fuel-controller-01, fuel-controller-02,
fuel-controller-03, and fuel-compute-01, but do not start them yet.

As you create each network adapter, click Advanced to expose and
record the corresponding MAC address.
* Machine -> New...

  * Name: fuel-controller-01 (you will need to repeat these steps for fuel-controller-02, fuel-controller-03, and fuel-compute-01)
  * Type: Linux
  * Version: Red Hat (64 Bit)

* Machine -> System -> Motherboard...

  * Check Network in Boot sequence

* Machine -> Settings... -> Network

  * Adapter 1

    * Enable Network Adapter
    * Attached to: Host-only Adapter
    * Name: vboxnet0

  * Adapter 2

    * Enable Network Adapter
    * Attached to: Host-only Adapter
    * Name: vboxnet1

  * Adapter 3

    * Enable Network Adapter
    * Attached to: Host-only Adapter
    * Name: vboxnet2
    * Advanced -> Promiscuous mode: Allow All

  * Adapter 4

    * Enable Network Adapter
    * Attached to: NAT

* Machine -> Settings -> Storage

  * Controller: SATA

    * Click the Add icon at the bottom of the Storage Tree pane
    * Add a second VDI disk of 10 GB for storage
It is important that Host-only Adapter 1 goes first, as Cobbler will
use vboxnet0 for PXE, and VirtualBox boots from LAN on the first
available network adapter.

Adapter 4 is not strictly necessary, and can be thought of as an
implementation detail. Its role is to bypass a limitation of Host-only
interfaces and simplify Internet access from the VM. It is possible to
accomplish the same without using Adapter 4, but that requires bridged
adapters or manipulating the iptables routes of the host, so using
Adapter 4 is much easier.

Also, the additional drive volume will be used as storage space by
Cinder, and configured later in the process.
Configuring nodes in Cobbler
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Now you need to define nodes in the Cobbler configuration, so that it
knows what OS to install, where to install it, and what configuration
actions to take. On fuel-pm, create a directory for the configuration
(wherever you like) and copy the sample config file for Cobbler from
Fuel::

    mkdir cobbler_config
    cd cobbler_config
    cp /etc/puppet/modules/cobbler/examples/cobbler_system.py .
    cp /etc/puppet/modules/cobbler/examples/nodes.yaml .
This configuration file contains definitions for all of the OpenStack
nodes in your cluster. You can either keep them together in one file,
or create a separate file for each node. In either case, let's look at
the configuration for a single node. You will need to check and/or
edit the following values **for every single node**:

* The name of the system in Cobbler
* The profile -- switch to ubuntu_1204_x86_64 if necessary
* The correct version of Puppet for your target OS
* Your domain name
* The hostname and DNS IP
* The MAC addresses for every network interface
* The static IP address on the management interface eth0
* The default gateway for your network
* The MAC address for eth3, **which doesn't exist** in the default configuration
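As a quick sanity check before loading a node into Cobbler, the checklist above can be scripted. This is only an illustrative sketch: the key names follow the sample nodes.yaml shown below, and the ``validate_node`` helper is hypothetical, not part of Fuel:

```python
# Hypothetical helper: verify that a node definition covers the checklist above.
REQUIRED_KEYS = {"profile", "hostname", "name-servers", "interfaces"}

def validate_node(name, node):
    missing = REQUIRED_KEYS - node.keys()
    if missing:
        raise ValueError(f"{name}: missing keys {sorted(missing)}")
    for iface, cfg in node["interfaces"].items():
        if "mac" not in cfg:
            raise ValueError(f"{name}/{iface}: no MAC address")
    return True

# A trimmed-down node, mirroring the sample definition below.
node = {
    "profile": "centos63_x86_64",
    "hostname": "fuel-controller-01",
    "name-servers": "10.0.0.100",
    "interfaces": {"eth0": {"mac": "52:54:00:0a:39:ec",
                            "ip-address": "10.0.0.101"}},
}
print(validate_node("fuel-controller-01", node))  # True
```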
Here's what the file should look like for fuel-controller-01. Replace
your-domain-name.com and the MAC addresses with your own values to
complete the changes::
    fuel-controller-01:
      # for CentOS
      profile: "centos63_x86_64"
      # for Ubuntu
      # profile: "ubuntu_1204_x86_64"
      netboot-enabled: "1"
      # for Ubuntu
      # ksmeta: "puppet_version=2.7.19-1puppetlabs2 \
      # for CentOS
      ksmeta: "puppet_version=2.7.19-1.el6 \
        puppet_auto_setup=1 \
        puppet_master=fuel-pm.your-domain-name.com \
        puppet_enable=0 \
        ntp_enable=1 \
        mco_auto_setup=1 \
        mco_pskey=un0aez2ei9eiGaequaey4loocohjuch4Ievu3shaeweeg5Uthi \
        mco_stomphost=10.0.0.100 \
        mco_stompport=61613 \
        mco_stompuser=mcollective \
        mco_stomppassword=AeN5mi5thahz2Aiveexo \
        mco_enable=1"
      # If you need the 'cinder-volumes' VG created at OS install time, uncomment
      # the following line and move it up into the ksmeta section. It lists the
      # block devices that should be included in that volume group.
      # cinder_bd_for_vg=/dev/sdb,/dev/sdc \
      hostname: "fuel-controller-01"
      name-servers: "10.0.0.100"
      name-servers-search: "your-domain-name.com"
      interfaces:
        eth0:
          mac: "52:54:00:0a:39:ec"
          static: "1"
          ip-address: "10.0.0.101"
          netmask: "255.255.255.0"
          dns-name: "fuel-controller-01.your-domain-name.com"
          management: "1"
        eth1:
          mac: "52:54:00:e6:dc:c9"
          static: "0"
        eth2:
          mac: "52:54:00:ae:22:04"
          static: "1"
        eth3:
          mac: "52:54:00:ae:44:42"
      interfaces_extra:
        eth0:
          peerdns: "no"
        eth1:
          peerdns: "no"
        eth2:
          promisc: "yes"
          userctl: "yes"
          peerdns: "no"
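Note that the ksmeta value is ultimately one space-separated string of key=value pairs; the backslashes are only line continuations, so each continued line must still be separated from the next pair by a space. A small illustrative sketch of how such a string collapses (not Fuel code):

```python
# Build a ksmeta string from key=value pairs; a single space separates each
# pair, which is what the backslash-continued lines above collapse to.
pairs = {
    "puppet_version": "2.7.19-1.el6",
    "puppet_auto_setup": "1",
    "puppet_master": "fuel-pm.your-domain-name.com",
    "mco_enable": "1",
}
ksmeta = " ".join(f"{k}={v}" for k, v in pairs.items())
print(ksmeta)
```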
Next you need to load these values into Cobbler. For the sake of
convenience, Fuel includes the cobbler_system.py script, which reads
the definitions of the systems from the YAML file and makes calls to
the Cobbler API to insert those systems into the configuration. Run it
using the following command::

    ./cobbler_system.py -f nodes.yaml -l DEBUG
If you've separated the configuration for your nodes into multiple
files, be sure to run this once for each file.
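If you do split the definitions, a short wrapper saves retyping. This sketch assumes the per-node files follow a hypothetical ``nodes-*.yaml`` naming scheme; adjust the glob to whatever names you actually use:

```python
import glob
import subprocess

# Run the documented cobbler_system.py import once per node file.
for path in sorted(glob.glob("nodes-*.yaml")):
    subprocess.run(["./cobbler_system.py", "-f", path, "-l", "DEBUG"],
                   check=True)
```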
Installing OS on the nodes using Cobbler
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Now that Cobbler has the correct configuration, the only thing you
need to do is to PXE-boot your nodes. This means that they will boot
over the network, with DHCP/TFTP provided by Cobbler, and will be
provisioned accordingly, with the specified operating system and
configuration.

In the case of VirtualBox, start each virtual machine
(fuel-controller-01, fuel-controller-02, fuel-controller-03,
fuel-compute-01) as follows:
#. Start the VM.
#. Press F12 immediately and select l (LAN) as the bootable media.
#. Wait for the installation to complete.
#. Log into the new machine using root/r00tme.
#. Check that networking is set up correctly and the machine can reach
   the Puppet Master and package repositories::

       ping fuel-pm.your-domain-name.com
       ping download.mirantis.com
**It is important to note** that if you use VLANs in your network
configuration, you always have to keep in mind the fact that PXE
booting does not work on tagged interfaces. Therefore, all your nodes,
including the one where the Cobbler service resides, must share one
untagged VLAN (also called the native VLAN). You can use the
dhcp_interface parameter of the cobbler::server class to bind the DHCP
service to a certain interface.
Register the nodes with the Puppet Master
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

At this point you have the OS installed and configured on all nodes.
Fuel has also made sure that these nodes have Puppet installed and
pointing to the Puppet Master, so the nodes are almost ready for
deploying OpenStack. As the last step, you need to register the nodes
with the Puppet Master. Do this by running the Puppet agent::

    puppet agent --test
This action generates a certificate, sends it to the Puppet Master for
signing, and then fails. That's fine; it's exactly what we want to
happen, because we just want to send the certificate request to the
Puppet Master.

Once you've done this on all four nodes, switch to the Puppet Master
and sign the certificate requests::

    puppet cert list
    puppet cert sign --all

Alternatively, you can sign only a single certificate using::

    puppet cert sign fuel-XX.your-domain-name.com

Now return to the newly installed node and run the Puppet agent again::

    puppet agent --test
This time the process should successfully complete and result in the
"Hello World from fuel-XX" message you defined earlier.

The last step before installing OpenStack is to prepare the partitions
on which Swift and Cinder will store their data. Later versions of
Fuel will do this for you, but for now, manually prepare the volume
with fdisk and initialize it. To do that, follow these steps:
#. Create the partition itself::

       fdisk /dev/sdb
       n (for new)
       p (for partition)
       <enter> (to accept the defaults)
       <enter> (to accept the defaults)
       w (to save changes)

#. Initialize the XFS partition::

       mkfs.xfs -i size=1024 -f /dev/sdb1

#. For a standard Swift install, all data drives are mounted directly
   under /srv/node, so first create the mount point::

       mkdir -p /srv/node/sdb1

#. Finally, add the new partition to fstab so it mounts automatically,
   then mount all current partitions::

       echo "/dev/sdb1 /srv/node/sdb1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 0" >> /etc/fstab
       mount -a

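The fstab entry must be a single line of six whitespace-separated fields (device, mount point, filesystem type, options, dump flag, fsck pass). A small sketch of that layout, using the values from the step above:

```python
# Compose the fstab entry for the new Swift partition: six fields on one line.
device, mount_point, fstype = "/dev/sdb1", "/srv/node/sdb1", "xfs"
options = "noatime,nodiratime,nobarrier,logbufs=8"
entry = f"{device} {mount_point} {fstype} {options} 0 0"
print(entry)
# -> /dev/sdb1 /srv/node/sdb1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 0
```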
Deploying OpenStack
-------------------

Initial setup
^^^^^^^^^^^^^

At this point you have functioning servers that are ready to have
OpenStack installed. If you're using VirtualBox, save the current
state of every virtual machine by taking a snapshot. (To do that while
the machine is running, highlight the machine in the main VirtualBox
console, click the Snapshots button, then click the camera icon.) This
way you can go back to this point and try again if necessary.
To install the new cluster, the Puppet Master needs a configuration
file that defines all of the appropriate networks, nodes, and roles.
Fortunately, Fuel provides several different configurations; as we
have discussed, we are going to use the Multi-node (HA) Swift Compact
architecture. To configure the Puppet Master to use the Swift Compact
topology, copy that configuration file into place on the Puppet
Master::

    cp /etc/puppet/modules/openstack/examples/site_openstack_swift_compact.pp /etc/puppet/manifests/site.pp
The next step is to go through the site.pp file and make any necessary
customizations. In our case, we're going to do three things:

#. Customize the network settings to match our actual machines.
#. Turn off Quantum, since we decided not to use it in order to simplify matters.
#. Set up Cinder so that scheduling is handled by the controllers (as usual) but the actual storage happens on the compute node.

Let's start with the basic network customization::

    ### GENERAL CONFIG ###
    # This section sets main parameters such as hostnames and IP addresses of different nodes

    # This is the name of the public interface. The public network provides address space for Floating IPs, as well as public IP accessibility to the API endpoints.
    $public_interface = 'eth1'
    $public_br = 'br-ex'
    # This is the name of the internal interface. It will be attached to the management network, where data exchange between components of the OpenStack cluster will happen.
    $internal_interface = 'eth0'
    $internal_br = 'br-mgmt'

    # This is the name of the private interface. All traffic within OpenStack tenants' networks will go through this interface.
    $private_interface = 'eth2'
In this case, we don't need to make any changes to the interface
settings, because they match what we've already set up. ::

    # Public and Internal VIPs. These virtual addresses are required by HA topology and will be managed by keepalived.
    $internal_virtual_ip = '10.0.0.10'
    # Change this IP to an IP routable from your 'public' network,
    # e.g. the Internet or your office LAN, in which your public
    # interface resides
    $public_virtual_ip = '10.0.1.10'
The Virtual IPs, however, are not correct for our setup. The host IPs
specified are in use elsewhere in the configuration, and the
$public_virtual_ip needs to be on the public network we've already
specified, so make the changes you see here to sync up with our actual
setup. ::

    # Array containing key/value pairs of controllers and IP addresses for their internal interfaces. Must have an entry for every controller node.
    # Fully Qualified domain names are allowed here along with short hostnames.
    $controller_internal_addresses = {'fuel-controller-01' => '10.0.0.101','fuel-controller-02' => '10.0.0.102','fuel-controller-03' => '10.0.0.103'}

    # Set hostname of swift_master.
    ...
Next, fix the internal IP addresses for the controllers to match the
addresses they were given earlier.

You'll need to make similar adjustments to the actual node definitions::
    ...
      $primary_controller = false
    }

    $addresses_hash = {
      'fuel-controller-01' => {
        'internal_address' => '10.0.0.101',
        'public_address'   => '10.0.1.101',
      },
      'fuel-controller-02' => {
        'internal_address' => '10.0.0.102',
        'public_address'   => '10.0.1.102',
      },
      'fuel-controller-03' => {
        'internal_address' => '10.0.0.103',
        'public_address'   => '10.0.1.103',
      },
      'fuel-compute-01' => {
        'internal_address' => '10.0.0.110',
        'public_address'   => '10.0.1.110',
      },
      'fuel-compute-02' => {
    ...
Again, the internal and public addresses need to match what has
already been set. Don't worry about the fuel-compute-02 and
fuel-quantum nodes; we're not using them in this setup. (You can
delete them if you want, but it's not necessary.)
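A quick way to confirm the address plan is consistent is to check every node's address pair against the two subnets used throughout this guide. This is a sketch with the example values; substitute your own networks:

```python
import ipaddress

# Example address plan from this guide: management on 10.0.0.0/24,
# public on 10.0.1.0/24.
addresses = {
    "fuel-controller-01": ("10.0.0.101", "10.0.1.101"),
    "fuel-controller-02": ("10.0.0.102", "10.0.1.102"),
    "fuel-controller-03": ("10.0.0.103", "10.0.1.103"),
    "fuel-compute-01":    ("10.0.0.110", "10.0.1.110"),
}
mgmt = ipaddress.ip_network("10.0.0.0/24")
public = ipaddress.ip_network("10.0.1.0/24")
for node, (internal, pub) in addresses.items():
    assert ipaddress.ip_address(internal) in mgmt, node
    assert ipaddress.ip_address(pub) in public, node
print("address plan is consistent")
```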
Finally, you need to correct the gateway and DNS values::

    ...
    'fuel-quantum' => {
      'internal_address' => '10.0.0.108',
      'public_address'   => '10.0.204.108',
    },
    }
    $addresses = $addresses_hash
    $default_gateway = '10.0.1.1'
    $dns_nameservers = [$addresses['fuel-pm']['internal_address'],]
    # Needs to point to the Cobbler node's IP in the default use case.

    # Set internal address on which services should listen.
    # We assume that this IP will be equal to one of the haproxy
    ...
The default gateway is the host machine, or more specifically, the
first Host-only adapter we specified in VirtualBox, which we set to
10.0.1.1.

Finally, make sure that the $dns_nameservers value is looking for
fuel-pm rather than fuel-cobbler, because we've combined them into one
machine.

Now that the network is configured for the servers, let's look at the
network services.
Enabling Quantum
^^^^^^^^^^^^^^^^

In order to deploy OpenStack with Quantum you need to set up an
additional node that will act as an L3 router, or run Quantum on one
of the existing nodes. In our case, we've opted to turn off Quantum::

    ...
    ### GENERAL CONFIG END ###
    ### NETWORK/QUANTUM ###
    # Specify network/quantum specific settings

    # Should we use quantum or nova-network (deprecated).
    # Consult OpenStack documentation for differences between them.
    $quantum = false
    $quantum_netnode_on_cnt = false

Notice that if we were going to keep Quantum on, $quantum_netnode_on_cnt
would let us specify whether we want Quantum to run on the controllers. ::
    # Specify network creation criteria:
    # Should puppet automatically create networks?
    $create_networks = true
    # Fixed IP addresses are typically used for communication between VM instances.
    $fixed_range = '10.0.198.128/27'
    # Floating IP addresses are used for communication of VM instances with the outside world (e.g. Internet).
    $floating_range = '10.0.1.128/28'
The Floating IPs will be assigned to OpenStack VMs, and will be the
|
||||
way in which they will be accessed from the Internet, so the
|
||||
$floating_range needs to be on the public network. (Notice also that
|
||||
this range includes 10.0.1.253; that's why we had to move the
|
||||
$public_virtual_ip to 10.0.1.10.) ::
|
||||
|

* Specify the network manager. It can be 'nova.network.manager.FlatDHCPManager', 'nova.network.manager.FlatManager' or 'nova.network.manager.VlanManager'::

    $network_manager = 'nova.network.manager.FlatDHCPManager'

* Define how many networks are to be created at once::

    # These parameters are passed to the previously specified network manager, e.g. nova-manage network create.
    # Not used in Quantum.
    # Consult the OpenStack docs for the corresponding network manager.
    # https://fuel-dev.mirantis.com/docs/0.2/pages/0050-installation-instructions.html#network-setup
    $num_networks = 1    # Number of networks to create
    $network_size = 255  # Number of IPs per network
    $vlan_start   = 300  # VLAN ID to start with

  VLAN IDs from vlan_start to (vlan_start + num_networks - 1) are generated automatically. ::

    # Quantum
    # Segmentation type for isolating traffic between tenants
    ...

**Note:** The last options are specific to nova-network and will be ignored if the Quantum service is enabled.

A Note on VLANs
^^^^^^^^^^^^^^^

Finally, just as a note, you don't need to change anything here, but
since this example uses nova-network it's good to note these values. You
have the option to create multiple VLANs, and the IDs for those VLANs
run from vlan_start to (vlan_start + num_networks - 1), and are
generated automatically.
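The VLAN ID and address-range arithmetic described above is easy to sanity-check with a short script. This is an illustrative sketch using Python's standard ``ipaddress`` module, not part of Fuel; the values are the ones used in this example:

```python
import ipaddress

# Values from the example site.pp
num_networks = 1
vlan_start = 300

# VLAN IDs run from vlan_start to (vlan_start + num_networks - 1)
vlan_ids = list(range(vlan_start, vlan_start + num_networks))
print(vlan_ids)  # -> [300]

# A virtual IP must not collide with the floating range
floating_range = ipaddress.ip_network('10.0.1.128/28')   # .128 - .143
public_virtual_ip = ipaddress.ip_address('10.0.1.10')
print(public_virtual_ip in floating_range)  # -> False, so the VIP is safe
```

Running a check like this before deployment is a cheap way to catch a virtual IP that accidentally lands inside the DHCP or floating range.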

Enabling Cinder
^^^^^^^^^^^^^^^

While this example doesn't use Quantum, it does use Cinder, with
some very specific variations from the default. Specifically, as we
said before, while the Cinder scheduler will continue to run on the
controllers, the actual storage takes place on the compute nodes, on
the /dev/sdb1 partition you created earlier. Cinder will be activated
on any node that contains the specified block devices (unless
specified otherwise), so let's look at what all of that means for the
configuration. ::

    ...
    ### CINDER/VOLUME ###

    # Should we use cinder or nova-volume(obsolete)
    # Consult the OpenStack docs for differences between them
    $cinder = true

    # Should we install cinder on compute nodes?
    $cinder_on_computes = true

We want Cinder to be on the compute nodes, so set this value to true. ::

    # Set to true if you want cinder-volume to be installed on the host;
    # otherwise only the API and scheduler services will be installed.
    $manage_volumes = true

    # Set the network interface Cinder uses to export iSCSI targets.
    # This interface defines which IP to listen on (iSCSI port) for
    # incoming connections from initiators.
    $cinder_iscsi_bind_iface = 'eth3'

Here you have the opportunity to specify which network interface
Cinder uses for its own traffic. As you may recall, we set up a fourth
NIC, and we can specify that here now, rather than using the default
internal interface. ::

    # Below you can add physical volumes to cinder. Please replace the values with the actual names of devices.
    # This parameter defines which partitions to aggregate into the cinder-volumes or nova-volumes LVM VG
    # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
    # USE EXTREME CAUTION WITH THIS SETTING! IF THIS PARAMETER IS DEFINED,
    # IT WILL AGGREGATE THE VOLUMES INTO AN LVM VOLUME GROUP
    # AND ALL THE DATA THAT RESIDES ON THESE VOLUMES WILL BE LOST!
    # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
    # Leave this parameter empty if you want to create the [cinder|nova]-volumes VG yourself
    $nv_physical_volume = ['/dev/sdb']

    ### CINDER/VOLUME END ###
    ...

We only want to allocate the /dev/sdb device to Cinder, so adjust
``$nv_physical_volume`` accordingly. Note, however, that this is a global
value; it will apply to all servers, including the controllers,
unless we specify otherwise, which we will in a moment.
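The global-versus-override behavior just described can be sketched in a few lines. This is purely illustrative Python, not Fuel code; the dictionary names are hypothetical:

```python
# The global $nv_physical_volume applies to every node unless a node's
# class overrides it, as the controllers will do below.
global_config = {'nv_physical_volume': ['/dev/sdb'], 'manage_volumes': True}
controller_override = {'nv_physical_volume': None, 'manage_volumes': False}

def node_config(overrides=None):
    """Merge node-specific overrides over the global defaults."""
    cfg = dict(global_config)
    cfg.update(overrides or {})
    return cfg

print(node_config()['manage_volumes'])                     # -> True  (compute node)
print(node_config(controller_override)['manage_volumes'])  # -> False (controller)
```

This is the same defaults-plus-overrides pattern Puppet applies when a node definition passes its own parameter values to a class.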

**Be careful** not to add block devices to the list that contain useful
data (e.g. block devices on which your OS resides), as they will be
destroyed after you allocate them for Cinder.

Now let's look at the other storage-based service: Swift.

Enabling Swift
^^^^^^^^^^^^^^

There aren't many changes that you will need to make to the default
configuration in order for Swift to work properly in Swift
Compact mode, but you will need to adjust for the fact that we are
running Swift on physical partitions::

    ...
    ### GLANCE and SWIFT ###

    # Which backend to use for glance
    # Supported backends are "swift" and "file"
    $glance_backend = 'swift'

    # Use loopback device for swift:
    # set 'loopback' or false
    # This parameter controls where swift partitions are located:
    # on physical partitions or inside loopback devices.
    $swift_loopback = false

The default value is ``loopback``, which tells Swift to use a loopback
storage device (basically a file that acts like a drive) rather than an
actual physical drive. ::

    # Which IP address to bind swift components to: e.g., which IP swift-proxy should listen on
    $swift_local_net_ip = $internal_address

    # IP of the controller node used during swift installation
    # and put into the swift configs
    $controller_node_public = $internal_virtual_ip

    # Set the hostname of the swift master.
    # It tells on which swift proxy node to build the
    # *ring.gz files. Other swift proxies/storages
    # will rsync them.
    # Only short hostnames are allowed. No FQDNs.

    # Hash of proxy hostname|fqdn => ip mappings.
    # This is used by the controller_ha.pp manifests for the haproxy setup
    # of swift_proxy backends
    $swift_proxies = $controller_internal_addresses

    ### Glance and swift END ###
    ...

Now we just need to make sure that all of our nodes get the proper
values.

Defining the node configurations
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Now that we've set all of the global values, it's time to make sure that
the actual node definitions are correct. For example, by default all
nodes will enable Cinder on /dev/sdb, but we don't want that for the
controllers, so set ``nv_physical_volume`` to null, and ``manage_volumes``
to false. ::

    ...
    class compact_controller (
      $quantum_network_node = false
    ) {
      class { 'openstack::controller_ha':
        controller_public_addresses   => $controller_public_addresses,
        controller_internal_addresses => $controller_internal_addresses,
        internal_address              => $internal_address,
        ...
        tenant_network_type           => $tenant_network_type,
        segment_range                 => $segment_range,
        cinder                        => $cinder,
        cinder_iscsi_bind_iface       => $cinder_iscsi_bind_iface,
        manage_volumes                => false,
        galera_nodes                  => $controller_hostnames,
        nv_physical_volume            => null,
        use_syslog                    => $use_syslog,
        nova_rate_limits              => $nova_rate_limits,
        cinder_rate_limits            => $cinder_rate_limits,
        horizon_use_ssl               => $horizon_use_ssl,
      }
      class { 'swift::keystone::auth':
        password         => $swift_user_password,
        public_address   => $public_virtual_ip,
        internal_address => $internal_virtual_ip,
        admin_address    => $internal_virtual_ip,
      }
    }
    ...

Fortunately, Fuel includes a class for the controllers, so you don't
have to make these changes for each individual controller. As you can
see, the controllers generally use the global values, but in this case
you're telling the controllers not to manage volumes, and not to use
/dev/sdb for Cinder.

If you look down a little further, this class then goes on to help
specify the individual controllers::

    ...
    # Definition of the first OpenStack controller.
    node /fuel-controller-01/ {
      class { '::node_netconfig':
        mgmt_ipaddr    => $::internal_address,
        mgmt_netmask   => $::internal_netmask,
        public_ipaddr  => $::public_address,
        public_netmask => $::public_netmask,
        stage          => 'netconfig',
      }
      class { 'nagios':
        proj_name => $proj_name,
        services  => [
          'host-alive', 'nova-novncproxy', 'keystone', 'nova-scheduler',
          'nova-consoleauth', 'nova-cert', 'haproxy', 'nova-api', 'glance-api',
          'glance-registry', 'horizon', 'rabbitmq', 'mysql', 'swift-proxy',
          'swift-account', 'swift-container', 'swift-object',
        ],
        whitelist => ['127.0.0.1', $nagios_master],
        hostgroup => 'controller',
      }

      class { compact_controller: }
      $swift_zone = 1

      class { 'openstack::swift::storage_node':
        storage_type       => $swift_loopback,
        swift_zone         => $swift_zone,
        swift_local_net_ip => $internal_address,
      }

      class { 'openstack::swift::proxy':
        swift_user_password     => $swift_user_password,
        swift_proxies           => $swift_proxies,
        primary_proxy           => $primary_proxy,
        controller_node_address => $internal_virtual_ip,
        swift_local_net_ip      => $internal_address,
      }
    }
    ...

Notice also that each controller has a ``swift_zone`` specified, so each
of the three controllers represents one of the three Swift zones.
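The zone layout amounts to a simple one-to-one mapping from controller to zone, so that each replica of an object can land in a different failure domain. A hypothetical sketch (not Fuel code) of that mapping:

```python
# Each of the three controllers is pinned to its own Swift zone.
controllers = ['fuel-controller-01', 'fuel-controller-02', 'fuel-controller-03']

swift_zones = {host: zone for zone, host in enumerate(controllers, start=1)}
print(swift_zones)
# -> {'fuel-controller-01': 1, 'fuel-controller-02': 2, 'fuel-controller-03': 3}
```

In the manifests this mapping is written out by hand: each ``node /fuel-controller-XX/`` block sets its own ``$swift_zone``.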

One final fix
^^^^^^^^^^^^^

Although the ``$controller_public_addresses`` value is deprecated, it must
be specified correctly or your cluster will not function properly. You
can find this value at the very bottom of the site.pp file::

    ...
    # This configuration option is deprecated and will be removed in future releases. It's currently kept for backward compatibility.
    $controller_public_addresses = {'fuel-controller-01' => '10.0.1.101', 'fuel-controller-02' => '10.0.1.102', 'fuel-controller-03' => '10.0.1.103'}

Now you're ready to perform the actual installation.

Installing OpenStack on the nodes using Puppet
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Now that you've set all of your configurations, all that's left to stand
up your OpenStack cluster is to run Puppet on each of your nodes; the
Puppet Master knows what to do for each of them.

Start by logging in to fuel-controller-01 and running the Puppet
agent. One optional step is to use the ``script`` command to log all
of the output so you can check for errors if necessary::

    script agent-01.log
    puppet agent --test

You will see a great number of messages scroll by, and the
installation will take a significant amount of time. When the process
has completed, press CTRL-D to stop logging and grep for errors::

    grep err: agent-01.log

If you find any errors relating to other nodes, ignore them for now.

Now you can run the same installation procedure on fuel-controller-02
and fuel-controller-03, as well as fuel-compute-01.

Note that the controllers must be installed sequentially due to the
nature of assembling a MySQL cluster based on Galera: one controller
must complete its installation before the next begins. Compute nodes,
however, can be installed concurrently once the controllers are in
place.

In some cases, you may find errors related to resources that are not
yet available when the installation takes place. To solve that
problem, simply re-run the Puppet agent on the affected node, and
again grep for error messages.

When you see no errors on any of your nodes, your OpenStack cluster is
ready to go.
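The error check above is just a substring scan of the agent log. A minimal sketch of the same check in Python (illustrative only; the sample log line is invented, and on a real node you would read ``agent-01.log`` from disk):

```python
# Scan a Puppet agent log for error lines, the same check as
# `grep err: agent-01.log` above.
sample_log = """notice: Finished catalog run in 1200.42 seconds
err: /Stage[main]/Example/Service[foo]: Could not evaluate
"""

errors = [line for line in sample_log.splitlines() if 'err:' in line]
print(len(errors))  # -> 1
for line in errors:
    print(line)
```

If any ``err:`` lines remain after a repeated run, that node needs attention before the cluster can be considered healthy.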

Configuring OpenStack to use syslog
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you want to use a syslog server, you need to:

* Set the ``$use_syslog`` variable to ``true`` in site.pp.
* Adjust the corresponding variables in the ``if $use_syslog`` clause::

    $use_syslog = true
    if $use_syslog {
        class { "::rsyslog::client":
            log_local => true,
            log_auth_local => true,
            ...
        }
    }

For remote logging::

    server => <syslog server hostname or ip>

For local logging, set ``log_local`` and ``log_auth_local`` to ``true``.

Setting the mirror type
^^^^^^^^^^^^^^^^^^^^^^^

To tell Fuel to download packages from external repos provided by Mirantis and your distribution vendors, set the ``$mirror_type`` variable to "external"::

    ...
    # If you want to set up a local repository, you will need to manually adjust mirantis_repos.pp,
    # though it is NOT recommended.
    $mirror_type = 'external'
    $enable_test_repo = false
    ...

Future versions of Fuel will enable you to use your own internal repositories.

Configuring Rate-Limits
^^^^^^^^^^^^^^^^^^^^^^^

OpenStack has predefined limits on different HTTP queries for the nova-compute and cinder services. Sometimes (e.g. for big clouds or test scenarios) these limits are too strict (see http://docs.openstack.org/folsom/openstack-compute/admin/content/configuring-compute-API.html), in which case you can change them to appropriate values.

There are two hashes describing these limits: ``$nova_rate_limits`` and ``$cinder_rate_limits``. ::

    $nova_rate_limits = { 'POST' => '10',
                          'POST_SERVERS' => '50',
                          ...
                          'PUT' => 10, 'GET' => 3,
                          'DELETE' => 100 }
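As a rough illustration of how such per-verb limits behave, the sketch below models a request counter checked against a limits hash like the one above. This is illustrative Python, not OpenStack code; real rate limiting is enforced per time window by the API middleware:

```python
# Model per-HTTP-verb limits like $nova_rate_limits.
nova_rate_limits = {'POST': 10, 'POST_SERVERS': 50, 'PUT': 10, 'GET': 3, 'DELETE': 100}

def allowed(verb, requests_so_far, limits):
    """Return True while the verb is still under its limit for the window."""
    return requests_so_far < limits.get(verb, 0)

print(allowed('GET', 2, nova_rate_limits))  # -> True  (under the limit of 3)
print(allowed('GET', 3, nova_rate_limits))  # -> False (limit reached)
```

Raising a value in the hash simply widens the window before the API starts returning "over limit" responses for that verb.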

Enabling Quantum
^^^^^^^^^^^^^^^^

* In order to deploy OpenStack with Quantum you need to set up an additional node that will act as an L3 router. This node is defined in the configuration as the ``fuel-quantum`` node. You will need to set the following options in order to enable Quantum::

    # Network mode: quantum(true) or nova-network(false)
    $quantum = true

    # API service location
    $quantum_host = $internal_virtual_ip

    # Keystone and DB user password
    $quantum_user_password = 'quantum_pass'
    $quantum_db_password = 'quantum_pass'

    # DB user name
    $quantum_db_user = 'quantum'

    # Type of network to allocate for tenant networks.
    # You MUST either change this to 'vlan' or change this to 'gre'
    # in order for tenant networks to provide connectivity between hosts.
    # Sometimes it can be handy to use GRE tunnel mode, since you don't have to configure your physical switches for VLANs.
    $tenant_network_type = 'gre'

    # For VLAN networks, the VLAN VID on the physical network that realizes the virtual network.
    # Valid VLAN VIDs are 1 through 4094.
    # For GRE networks, the tunnel ID.
    # Valid tunnel IDs are any 32-bit unsigned integer.
    $segment_range = '1500:1999'

Mirror choosing
^^^^^^^^^^^^^^^

At present there are several types of mirrors for package downloading. You can either use external repos provided by Mirantis and your distribution vendors, or use internal repos. This behavior is controlled by the ``$mirror_type`` variable in site.pp. Set it to 'external'; as of version 2.0 it is not possible to define a custom internal repo, but it will be possible in future versions. In the meantime, you can modify mirantis_repos.pp to run with your internal repo.

Enabling Cinder
^^^^^^^^^^^^^^^

* In order to deploy OpenStack with Cinder, simply set ``$cinder = true`` in your site.pp file.
* If you need to export Cinder volumes from compute nodes (not only from controller nodes), set ``$cinder_on_computes = true`` in your site.pp file.
* Then, specify the list of physical devices in ``$nv_physical_volume``. They will be aggregated into the "cinder-volumes" volume group.
* Alternatively, you can leave this field blank and create an LVM volume group called "cinder-volumes" on every controller node yourself. Cobbler automation allows you to create this volume group during the bare-metal provisioning phase through the "cinder_bd_for_vg" parameter in the nodes.yaml file.
* The available manifests under "examples" assume that you have the same collection of physical devices for the "cinder-volumes" volume group across all of your volume nodes.
* Cinder will be activated on any node that contains the ``$nv_physical_volume`` block device(s) or the "cinder-volumes" volume group, including both controller and compute nodes.
* Be careful not to add block devices to the list that contain useful data (e.g. block devices on which your OS resides), as they will be destroyed after you allocate them for Cinder.
* You can specify the network interface that will be used to export Cinder volumes (by default the management network interface is used). To do so, set the ``$cinder_iscsi_bind_iface = 'ethX'`` option.
* For example::

    # Volume manager: cinder(true) or nova-volume(false)
    $cinder = true
    $cinder_on_computes = true

    # Set the network interface Cinder uses to export iSCSI targets.
    $cinder_iscsi_bind_iface = 'ethX'

    # Whether cinder/nova-volume (the iSCSI volume driver) should be enabled
    $manage_volumes = true

    # Disk or partition for use by cinder/nova-volume
    # Each physical volume can be a disk partition, whole disk, meta device, or loopback file
    $nv_physical_volume = ['/dev/sdz', '/dev/sdy', '/dev/sdx']

.. _create-the-XFS-partition:

Enabling Swift
^^^^^^^^^^^^^^

The following options should be changed if necessary::

    # make a backend selection (file or swift)
    $glance_backend = 'swift'

    # 'loopback' for testing (it creates two loopback devices on every node)
    # false for pre-built devices
    $swift_loopback = 'loopback'

    # defines where to place the ringbuilder
    $swift_master = 'fuel-swiftproxy-01'

    # all of the swift services and the rsync daemon on the storage nodes listen on their local net IP addresses
    $swift_local_net_ip = $internal_address

In the ``openstack/examples/site_openstack_swift_standalone.pp`` example, the following nodes are specified:

* fuel-swiftproxy, used as the ringbuilder + proxy node
* fuel-swift-01, fuel-swift-02 and fuel-swift-03, used as the storage nodes

In the ``openstack/examples/site_openstack_swift_compact.pp`` example, the swift-storage and swift-proxy roles are combined with the controllers.

For more realistic use cases, you should manually prepare the volumes with fdisk and initialize them:

* create the XFS partition:

  ``mkfs.xfs -i size=1024 -f /dev/sdx1``

* mount the device/partition:

  For a standard Swift install, all data drives are mounted directly under ``/srv/node``:

  ``mount -t xfs -o noatime,nodiratime,nobarrier,logbufs=8 /dev/sdx1 /srv/node/sdx``

Installing OpenStack on the nodes using Puppet
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

* Install the OpenStack controller nodes sequentially, one by one:

  * run ``puppet agent --test`` on fuel-controller-01
  * wait for the installation to complete
  * repeat the same for fuel-controller-02 and fuel-controller-03

  .. important:: It is important to establish the cluster of OpenStack controllers in sequential fashion, due to the nature of assembling a MySQL cluster based on Galera.

* Install the OpenStack compute nodes. You can do this in parallel if you wish:

  * run ``puppet agent --test`` on fuel-compute-01
  * wait for the installation to complete

* Install the Swift nodes in standalone/compact mode.

  To fully configure a Swift environment, the nodes must be configured in the following order:

  * First, the storage nodes need to be configured.
    This creates the storage services (object, container, account) and exports all of the storage endpoints
    for the ring builder into PuppetDB.
    **Note:** The replicator service fails to start in this initial configuration. This is OK.

  * Next, the ring builder and the Swift proxy must be configured.
    The ring builder needs to collect the storage endpoints and create the ring database before the proxy
    can be installed. It also sets up an rsync server which is used to host the ring database.
    Resources are exported that are used to rsync the ring database from this server.

  * Finally, the storage nodes should be run again so that they can rsync the ring databases.

  **Note:** In compact mode, as the storage and proxy services are on the same node, to complete the deployment you should perform two Puppet runs on each node (run it once on all three controllers, then a second time on each controller). If you are using loopback devices, a third run is required.

* Your OpenStack cluster is ready to go.

Installing Nagios Monitoring using Puppet
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Installing the Nagios NRPE agent on a compute or controller node::

Examples of OpenStack installation sequences
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

**First, please see the link below for details about different deployment scenarios.**

* Run an additional deployment pass on Controller 1 only (``fuel-controller-01``) to finalize the Galera cluster configuration.
* Run a deployment pass on the Quantum node (``fuel-quantum``) to install the Quantum router.
* Run a deployment pass on every compute node (``fuel-compute-01`` ... ``fuel-compute-xx``); unlike the controllers, these nodes may be deployed in parallel.
* Sequentially run a deployment pass on every storage node (``fuel-swift-01`` ... ``fuel-swift-xx``). By default these nodes are named ``fuel-swift-xx``. Errors in Swift storage like */Stage[main]/Swift::Storage::Container/Ring_container_device[<device address>]: Could not evaluate: Device not found check device on <device address>* are expected on storage nodes during the deployment passes until the very final pass.
* In case loopback devices are used on storage nodes (``$swift_loopback = 'loopback'`` in ``site.pp``), run a deployment pass on every storage node (``fuel-swift-01`` ... ``fuel-swift-xx``) one more time. Skip this step in case loopback is off (``$swift_loopback = false`` in ``site.pp``). Again, ignore errors in *Swift::Storage::Container* during this deployment pass.
* Run a deployment pass on every SwiftProxy node (``fuel-swiftproxy-01`` ... ``fuel-swiftproxy-02``). Node names are set by the ``$swift_proxies`` variable in ``site.pp``. There are two Swift proxies by default.
* Repeat the deployment pass on every storage node (``fuel-swift-01`` ... ``fuel-swift-xx``). No Swift storage errors should appear during this deployment pass!

**Example 2: Compact OpenStack deployment with storage and swift-proxy combined with nova-controller on the same nodes**

Testing OpenStack
-----------------

Now that you've installed OpenStack, it's time to take your new
OpenStack cloud for a drive. Follow these steps:

#. On the host machine, open your browser to http://10.0.1.10/ and log
   in as nova/nova (unless you changed this information in site.pp).

#. In the network and security groups tab:

   #. Create a new key pair for future use.
   #. Add "tcp 22 22" to the default network settings.
   #. Add "icmp -1 -1" to the default network settings.
   #. Allocate 2 floating IPs for future use.

#. The next step is to upload an image to use for creating VMs, but an
   OpenStack bug prevents you from doing this in the browser. Instead,
   log in to any of the controllers as root and execute the following
   commands::

       source ~/openrc
       glance image-create --name cirros --container-format bare --disk-format qcow2 --is-public yes --location https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img

#. Go back to the browser and launch a new instance of this image
   using the tiny flavor.

#. On the instances page:

   #. Click the image and look at the settings.
   #. Click the logs tab to look at the logs.
   #. Click the VNC tab to log in. If you see just a big black rectangle, the machine is in screensaver mode; click the grey area and press the space bar to wake it up, then log in as cirros/cubswin:).
   #. Run ``ifconfig -a | more`` and see the assigned IP address.
   #. Run ``sudo fdisk -l`` to see the disks. Notice that there aren't any; no volume has yet been assigned to this VM.

#. Assign a floating IP address to your instance.
#. From your host machine, ping the floating IP assigned to this VM.
#. If that works, you can try to ``ssh cirros@<floating-ip>`` from the host machine.
#. Back in the browser, go to the volumes tab and create a new volume, then attach it to the instance.
#. Go back to the VNC tab, repeat ``fdisk -l``, and see the new unpartitioned disk attached.

From here, your new VM is ready to be used.

This document explains how to use Fuel to more easily create and
maintain an OpenStack cloud infrastructure.

Fuel can be used to create virtually any OpenStack topology, but the
installation includes several pre-defined configurations. To simplify
matters, the guide emphasizes a single common reference architecture,
the multi-node, high-availability topology. It begins by explaining
that architecture, then moves on to the details of creating that
architecture in a development setting using VirtualBox. Finally, it
gives you the information you need to create this and other
OpenStack architectures in a production environment.

This document assumes that you are familiar with general Linux
commands and administration concepts, as well as general networking
concepts. You should have some familiarity with grid or virtualization
systems such as Amazon Web Services or VMware, as well as OpenStack
itself, but you don't need to be an expert.

The Fuel User's Guide is organized as follows:

This section, :ref:`Introduction <Introduction>`, explains what Fuel is and gives you a
general idea of how it works.

Section 2, :ref:`OpenStack Reference Architecture <Reference-Archiecture>`, provides a general
look at the components that make up OpenStack, and describes the
reference architecture to be instantiated in Section 3.

Section 3, :ref:`Create an example multi-node OpenStack cluster using
Fuel <Create-Cluster>`, takes you step-by-step through the process of creating a
high-availability OpenStack cluster on test hardware.

Section 4, :ref:`Production Considerations <Production>`, looks at the real-world
questions and problems involved in creating an OpenStack cluster for
production use. It discusses issues such as network layout and
hardware requirements, and provides tips and tricks for creating a
cluster of up to 100 nodes.

Even with a utility as powerful as Fuel, creating an OpenStack cluster
can be complex, and Section 5, :ref:`Frequently Asked Questions <FAQ>`, covers
many of the issues that tend to arise during that process.

Finally, the User's Guide assumes that you are taking advantage of
certain shortcuts, such as using a pre-built Puppet master; if you
prefer not to go that route, see Appendix A, :ref:`Creating the Puppet
master <Create-PM>`.

Let's start off by taking a look at Fuel itself. We'll start by
explaining what it is and how it works, then get you set up and ready
to start using it.
|
|
@ -0,0 +1,18 @@

What is Fuel?
-------------

Fuel is a ready-to-install collection of all of the packages and
scripts you need to create a robust, configurable, vendor-independent
OpenStack cloud in your own environment.

A single OpenStack cloud consists of packages from many different open
source projects, each with its own requirements, installation
procedures, and configuration management. Fuel brings all of these
projects together into a single open source distribution, with
components that have been tested and are guaranteed to work together,
all wrapped up using scripts to help you work through a single
installation rather than multiple smaller installations.

Simply put, Fuel is a way for you to easily configure and install an
OpenStack-based infrastructure in your own environment.

.. image:: introduction/FuelSimpleDiagramv.gif
@ -0,0 +1,32 @@

How Fuel Works
--------------

Fuel works on the premise that rather than installing each of the
myriad components that make up OpenStack directly, you can instead use
a configuration management system such as Puppet to create scripts
that provide a configurable, repeatable, sharable installation
process.

In practice, that means that the process of using Fuel is as follows:

First, use Fuel's automation tools and instructions to set up a master
node with Puppet master and Cobbler. This process only needs to be
completed once per installation.

Next, use Fuel's snippets, kickstart files, and preseed files for
Cobbler to boot the appropriate servers from bare metal and
automatically install the appropriate operating systems. These virtual
or physical servers boot up already prepared to call on the Puppet
master to receive their respective OpenStack components.

Finally, to complete the basic OpenStack install, use Fuel's Puppet
manifests to install OpenStack on the newly created servers. These
manifests are completely customizable, enabling you to start with one
of the included OpenStack architectures and adapt it to your own
situation as necessary.

.. image:: https://docs.google.com/drawings/pub?id=15vTTG2_575M7-kOzwsYyDmQrMgCPT2joLF2Cgiyzv7Q&w=678&h=617

Fuel comes with several pre-defined topologies, some of which include
additional options from which you can choose.
@ -0,0 +1,38 @@

Reference Topologies Provided By Fuel
-------------------------------------

One of the advantages of Fuel is that it comes with several pre-built
reference topologies that you can use to immediately deploy your own
OpenStack cloud infrastructure. Reference topologies are well-specified configurations of OpenStack and its constituent components,
tailored to one or more cloud use cases. As of version 2.0, Fuel
provides the ability to create the following cluster types without
requiring extensive customization:

**Single node**: Perfect for getting a basic feel for how OpenStack works, the single-node installation is the simplest way to get OpenStack up and running. It provides an easy way to install an entire OpenStack cluster on a single physical or virtual machine.

**Multi-node (non-HA)**: The Multi-node (non-HA) installation enables you to try out additional OpenStack services such as Cinder, Quantum, and Swift without requiring the increased hardware involved in ensuring high availability. In addition to the ability to independently specify which services to activate, you also have the following options:

**Compact Swift**: When you choose this option, Swift will be installed on your controllers, reducing your hardware requirements by eliminating the need for additional Swift servers.

**Standalone Swift**: This option enables you to install independent Swift nodes, so that you can separate their operation from your controller nodes.

[INSERT GRAPHIC, ALIGNED TO THE SIDE TO PREVENT TEXT BREAK,
DEMONSTRATING THE DIFFERENCE BETWEEN COMPACT SWIFT AND STANDALONE
SWIFT IN TERMS OF SERVERS REQUIRED.]

**Multi-node (HA)**: When you're ready to begin your move to production, the Multi-node (HA) topology is a straightforward way to create an OpenStack cluster that provides high availability. With three controller nodes and the ability to individually specify services such as Cinder, Quantum, and Swift, Fuel provides the following variations of the Multi-node (HA) topology:

**Compact Swift**: When you choose this variation, Swift will be installed on your controllers, reducing your hardware requirements by eliminating the need for additional Swift servers while still addressing high availability requirements.

**Standalone Swift**: This variation enables you to install independent Swift nodes, so that you can separate their operation from your controller nodes.

**Compact Quantum**: If you don't need the flexibility of a separate Quantum node, Fuel Library provides the option to combine your Quantum node with one of your controllers.

In addition to these configurations, Fuel is designed to be completely
customizable. Upcoming editions of this guide discuss techniques for
creating additional OpenStack reference topologies.

.. Need the correct location for the "contact Mirantis" link; do we have a special sales link?

To configure Fuel immediately for more extensive variations on these
use cases, you can `contact Mirantis for further assistance <http://www.mirantis.com>`_.
@ -0,0 +1,19 @@

Supported Software
------------------

Fuel has been tested and is guaranteed to work with the following software components:

* Operating Systems

  * CentOS 6.3 (x86_64 architecture only)
  * RHEL 6.3 (x86_64 architecture only)
  * Ubuntu 12.04 (x86_64 architecture only)

* Puppet (IT automation tool)

  * 2.7.19
  * 3.0.0

* Cobbler (bare-metal provisioning tool)

  * 2.2.3

* OpenStack

  * Folsom release
@ -0,0 +1,20 @@

Download Fuel
-------------

The first step in installing Fuel is to download the version
appropriate to your environment. Fuel is available for both Essex and
Folsom OpenStack installations, and will be available for Grizzly
shortly after Grizzly's release.

[FUEL DOWNLOAD]

[ESSEX DOWNLOAD]

To make your installation easier, we also offer a pre-built Puppet
Master ISO:

[PM ISO]

You can mount this ISO in a physical or VirtualBox machine in order to
easily create your master node. (Instructions for performing this step
without the ISO are given in Appendix A.)
@ -0,0 +1,6 @@

Release Notes
-------------

.. include:: /pages/introduction/release-notes/v2-0-folsom.rst
.. include:: /pages/introduction/release-notes/v1-0-essex.rst

Binary file not shown.
After Width: | Height: | Size: 26 KiB |
@ -1,6 +1,6 @@

v1.0-essex
^^^^^^^^^^

* Puppet manifests for deploying OpenStack Essex in HA mode
* Active/Active HA architecture for Essex, based on RabbitMQ / MySQL Galera / HAProxy / keepalived

@ -1,6 +1,6 @@

v2.0-folsom
^^^^^^^^^^^

* Puppet manifests for deploying OpenStack Folsom in HA mode
* Active/Active HA architecture for Folsom, based on RabbitMQ / MySQL Galera / HAProxy / keepalived
@ -1,56 +0,0 @@

RabbitMQ
^^^^^^^^

At least one RabbitMQ node must remain operational
--------------------------------------------------

**Issue:**
All RabbitMQ nodes must not be shut down simultaneously. RabbitMQ requires
that, after a full shutdown of the cluster, the first node brought back up
should be the last one that was shut down.

**Workaround:**
There are 2 possible scenarios, depending on shutdown results.

**1. The RabbitMQ master node is alive and can be started.**

The Fuel installation replaces the ``/etc/init.d/rabbitmq-server`` init scripts for RHEL/CentOS and Ubuntu with customized versions. These scripts attempt to start RabbitMQ 5 times, which gives the RabbitMQ master node the time it needs to start
after a complete power loss.
It is recommended to power up all nodes and then check whether the RabbitMQ server started on all of them. All nodes should start automatically.

**2. It is impossible to start the RabbitMQ master node (hardware or system failure)**

There is no easy automatic way to resolve this situation.
The proposed solution is to delete the mirrored queues directly from mnesia (the RabbitMQ database):

1. Select any alive node. Run

   ``erl -mnesia dir '"/var/lib/rabbitmq/mnesia/rabbit\@<failed_controller_name>"'``

2. Run ``mnesia:start().`` in the Erlang console.

3. Compile and run the following Erlang script::

    AllTables = mnesia:system_info(tables),
    DataTables = lists:filter(fun(Table) -> Table =/= schema end,
                              AllTables),
    RemoveTableCopy = fun(Table,Node) ->
        Nodes = mnesia:table_info(Table,ram_copies) ++
                mnesia:table_info(Table,disc_copies) ++
                mnesia:table_info(Table,disc_only_copies),
        case lists:member(Node,Nodes) of
            true -> mnesia:del_table_copy(Table,Node);
            false -> ok
        end
    end,
    [RemoveTableCopy(Tbl,'rabbit@<failed_controller_name>') || Tbl <- DataTables],
    rpc:call('rabbit@<failed_controller_name>',mnesia,stop,[]),
    rpc:call('rabbit@<failed_controller_name>',mnesia,delete_schema,[SchemaDir]),
    RemoveTableCopy(schema,'rabbit@<failed_controller_name>').

4. Exit the Erlang console with ``halt().``

5. Run ``service rabbitmq-server start``

**Background:** See http://comments.gmane.org/gmane.comp.networking.rabbitmq.general/19792.
@ -0,0 +1 @@

[NEED CONTENT HERE]

@ -0,0 +1,4 @@

Sizing Hardware
---------------

[CONTENT TO BE ADDED]
@ -0,0 +1,83 @@

Redeploying an environment
--------------------------

Because Puppet is additive only, there is no ability to revert changes as you would in a typical application deployment.
If a change needs to be backed out, you must explicitly add configuration to reverse it, check this configuration in,
and promote it to production using the pipeline. This means that if a breaking change does get deployed into production,
a manual fix is typically applied, with the proper fix checked into version control subsequently.

Fuel gives you the ability to isolate code changes while developing, combined with minimizing the headaches
of maintaining multiple environments serviced by one Puppet server.


Environments
^^^^^^^^^^^^

Puppet supports putting nodes into environments. This maps cleanly to your development, QA, and production life cycles,
and it's a way to hand out different code to different nodes.

* On the Master/Server Node

The Puppet master tries to find modules using its modulepath setting, typically something like ``/etc/puppet/modules``.
You usually just set this value once in your ``/etc/puppet/puppet.conf`` and that's it, all done.
Environments expand on this and give you the ability to use different settings for different environments.

You can specify several search paths. The following example dynamically sets the modulepath
so Puppet will check a per-environment folder for a module before serving it from the main set::

    [master]
    modulepath = $confdir/$environment/modules:$confdir/modules

    [production]
    manifest = $confdir/manifests/site.pp

    [development]
    manifest = $confdir/$environment/manifests/site.pp
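With a modulepath like this, a module is resolved from the per-environment tree first and falls back to the shared tree. A minimal shell illustration of that lookup order (the directory layout below is a hypothetical stand-in for ``/etc/puppet``, not part of Fuel itself):

```shell
# Create a stand-in confdir with one module overridden per-environment
confdir=$(mktemp -d)
mkdir -p "$confdir/development/modules/nova" \
         "$confdir/modules/nova" \
         "$confdir/modules/glance"

# lookup <environment> <module>: the first hit in the modulepath wins
lookup() {
  for d in "$confdir/$1/modules/$2" "$confdir/modules/$2"; do
    if [ -d "$d" ]; then
      echo "$d"
      return 0
    fi
  done
  return 1
}

lookup development nova    # served from the per-environment tree
lookup development glance  # falls back to the shared tree
```

This mirrors the first-match semantics of a colon-separated modulepath: an environment only needs to carry the modules it actually changes.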
* On the Agent Node

Once an agent node makes a request, the Puppet master gets informed of its environment.
If you don't specify an environment, the agent uses the default ``production`` environment.

To set an environment agent-side, just specify the environment setting in the ``[agent]`` block of ``puppet.conf``::

    [agent]
    environment = development
Deployment pipeline
^^^^^^^^^^^^^^^^^^^

* Deploy

In order to deploy multiple environments that don't interfere with each other,
you should specify the ``$deployment_id`` option in ``/etc/puppet/manifests/site.pp``. Set it to an even integer value (the valid range is 0..254).

First of all, this value is involved in the dynamic environment-based tag generation, and the resulting tag is applied globally to all resources on each node.
It is also used by the keepalived daemon, for which a unique virtual_router_id is evaluated.
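To make the keepalived side of this concrete, a fragment for one environment might look like the following sketch (the interface, addresses, and router id here are illustrative assumptions; in this architecture the ``virtual_router_id`` is derived from ``$deployment_id``):

```
vrrp_instance openstack_vip {
    state MASTER
    interface eth0
    # Must be unique among environments sharing the same L2 segment;
    # derived from $deployment_id so two deployments do not collide
    virtual_router_id 42
    priority 101
    virtual_ipaddress {
        192.168.0.10
    }
}
```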
* Clean/Revert

At this stage you just need to make sure the environment is back in its original/virgin state.

* Puppet node deactivate

Deactivating a node ensures that any resources exported by that node will stop appearing in the catalogs served to the agent nodes:

``puppet node deactivate <node>``

where ``<node>`` is the fully qualified domain name as shown by ``puppet cert list --all``.

You can deactivate nodes manually one by one, or execute the following command to do the same automatically for all of them:

``puppet cert list --all | awk '! /DNS:puppet/ { gsub(/"/, "", $2); print $2}' | xargs puppet node deactivate``

* Redeploy

Fire up the Puppet agent again to apply the desired node configuration.


Links
^^^^^

* http://puppetlabs.com/blog/a-deployment-pipeline-for-infrastructure/
* http://docs.puppetlabs.com/guides/environment.html
@ -0,0 +1,4 @@

Large Scale Deployments
-----------------------

[NEED CONTENT]

@ -0,0 +1,2 @@

Deployment orchestration based on mcollective
---------------------------------------------
@ -2,6 +2,188 @@ Overview
--------

Before you install any hardware or software, you must know what it is
you're trying to achieve. This section looks at the basic components of
an OpenStack infrastructure and organizes them into one of the more
common reference architectures. You'll then use that architecture as a
basis for installing OpenStack in the next section.

As you know, OpenStack provides the following basic services:

**Compute**

Compute servers are the workhorses of your installation; they're the
servers on which your users' virtual machines are created. **nova-scheduler** controls the life cycle of these VMs.

**Networking**

Because an OpenStack cluster (virtually) always includes multiple
servers, the ability for them to communicate with each other and with
the outside world is crucial. Networking was originally handled by the
**nova-network** service, but it is slowly giving way to the newer **Quantum** networking service. Authentication and
authorization for these transactions are handled by **Keystone**.

**Storage**

OpenStack provides for two different types of storage: block storage
and object storage. Block storage is traditional data storage, with
small, fixed-size blocks that are mapped to locations on storage media. At
its simplest level, OpenStack provides block storage using **nova-volume**, but it is common to use **Cinder**.

Object storage, on the other hand, consists of single variable-size
objects that are described by system-level metadata, and you can
access this capability using **Swift**.

OpenStack storage is used for your users' objects, but it is also used
for storing the images used to create new VMs. This capability is
handled by **Glance**.

These services can be combined in many different ways. Out of the box,
Fuel supports the following topologies:

Single node deployment
^^^^^^^^^^^^^^^^^^^^^^

In a production environment, you will never have a single-node
deployment of OpenStack, partly because it forces you to make a number
of compromises as to the number and types of services that you can
deploy. It is, however, extremely useful if you just want to see how
OpenStack works from a user's point of view. In this case, all of the
essential services run on a single server:

[INSERT DIAGRAM HERE]

Multi-node (non-HA) deployment (compact Swift)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

More commonly, your OpenStack installation will consist of multiple
servers. Exactly how many is up to you, of course, but the main idea
is that your controller(s) are separate from your compute servers, on
which your users' VMs will actually run. One arrangement that will
enable you to achieve this separation while still keeping your
hardware investment relatively modest is to house your storage on your
controller nodes:

[INSERT DIAGRAM HERE]

Multi-node (non-HA) deployment (standalone Swift)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

A more common arrangement is to provide separate servers for storage.
This has the advantage of reducing the number of controllers you must
provide; because Swift runs on its own servers, you can reduce the
number of controllers from three (or five, for a full Swift implementation) to one, if desired:

[INSERT DIAGRAM HERE]

Multi-node (HA) deployment (Compact Swift)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Production environments typically require high availability, which
involves several architectural requirements. Specifically, you will
need at least three controllers (to prevent split-brain problems), and
certain components will be deployed in multiple locations to prevent
single points of failure. That's not to say, however, that you can't
reduce hardware requirements by combining your storage and controller
nodes:

[INSERT DIAGRAM HERE]

Multi-node (HA) deployment (Compact Quantum)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Another way you can add functionality to your cluster without
increasing hardware requirements is to install Quantum on your
controller nodes. This architecture still provides high availability,
but avoids the need for a separate Quantum node:

[INSERT DIAGRAM HERE]

Multi-node (HA) deployment (Standalone)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

For larger production deployments, it's more common to provide
dedicated hardware for storage and networking. This architecture still
gives you the advantages of high availability, but this clean
separation makes your cluster more maintainable by separating storage,
networking, and controller functionality:

[INSERT DIAGRAM HERE]
Where Fuel really shines is in the creation of more complex
architectures, so in this document you'll learn how to use Fuel to
easily create a multi-node HA OpenStack cluster. To reduce the amount
of hardware you'll need to follow the installation in section 3,
however, the guide focuses on the Multi-node HA Compact Swift
architecture.

Let's take a closer look at the details of this topology.

A closer look at the Multi-node (HA) deployment (Compact Swift)
---------------------------------------------------------------

In this section, you'll learn more about the Multi-node (HA) Compact
Swift topology and how it achieves high availability in preparation
for installing this cluster in section 3. As you may recall, this
topology looks something like this:

[INSERT DIAGRAM HERE]

OpenStack services are interconnected by RESTful HTTP-based APIs and
AMQP-based RPC messages. So, redundancy for stateless OpenStack API
services is implemented through the combination of Virtual IP (VIP)
management using keepalived and load balancing using HAProxy. Stateful
OpenStack components, such as the state database and messaging server,
rely on their respective active/active modes for high availability.
For example, RabbitMQ uses built-in clustering capabilities, while the
database uses MySQL/Galera replication.

.. image:: https://docs.google.com/drawings/pub?id=1PzRBUaZEPMG25488mlb42fRdlFS3BygPwbAGBHudnTM&w=750&h=491

Let's take a closer look at what an OpenStack deployment looks like, and
what it will take to achieve high availability for an OpenStack
deployment.
@ -2,29 +2,59 @@
Logical Setup
-------------

An OpenStack HA cluster involves, at a minimum, three types of nodes:
controller nodes, compute nodes, and storage nodes.

Controller Nodes
^^^^^^^^^^^^^^^^

The first order of business in achieving high availability (HA) is
redundancy, so the first step is to provide multiple controller nodes.
You must keep in mind, however, that the database uses Galera to
achieve HA, and Galera is a quorum-based system. That means that in
order to avoid split-brain problems, you must provide at least 3
controller nodes.

.. image:: https://docs.google.com/drawings/pub?id=1aftE8Yes7CdVSZgZD1A82T_2GqL2SMImtRYU914IMyQ&w=869&h=855

Every OpenStack controller runs keepalived, which manages a single
Virtual IP (VIP) for all controller nodes, and HAProxy, which manages
HTTP and TCP load balancing of requests going to OpenStack API
services, RabbitMQ, and MySQL.

When an end user accesses the OpenStack cloud using Horizon or makes a
request to the REST API for services such as nova-api, glance-api,
keystone-api, quantum-api, nova-scheduler, MySQL or RabbitMQ, the
request goes to the live controller node currently holding the VIP,
and the connection gets terminated by HAProxy. When the next request
comes in, HAProxy handles it, and may send it to the original
controller or another in the cluster, depending on load conditions.

Each of the services housed on the controller nodes has its own
mechanism for achieving HA:

* nova-api, glance-api, keystone-api, quantum-api and nova-scheduler are stateless services that do not require any special attention besides load balancing.
* Horizon, as a typical web application, requires sticky sessions to be enabled at the load balancer.
* RabbitMQ provides active/active high availability using mirrored queues.
* MySQL high availability is achieved through Galera active/active multi-master deployment.
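To make the load-balancing behavior concrete, here is a sketch of what an HAProxy fragment for two of these services might look like (the VIP, backend addresses, and server names are purely illustrative assumptions, not values taken from this reference architecture):

```
# Stateless API service: plain round-robin balancing is sufficient
listen nova-api 192.168.0.10:8774
    balance roundrobin
    option httpchk
    server controller-01 192.168.0.2:8774 check
    server controller-02 192.168.0.3:8774 check
    server controller-03 192.168.0.4:8774 check

# Horizon needs sticky sessions, so a cookie pins each browser
# session to a single backend controller
listen horizon 192.168.0.10:80
    balance roundrobin
    cookie SERVERID insert indirect nocache
    server controller-01 192.168.0.2:80 check cookie controller-01
    server controller-02 192.168.0.3:80 check cookie controller-02
    server controller-03 192.168.0.4:80 check cookie controller-03
```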
Compute Nodes
^^^^^^^^^^^^^

OpenStack compute nodes are, in many ways, the foundation of your
cluster; they are the servers on which your users will create their
Virtual Machines (VMs) and host their applications. Compute nodes need
to talk to controller nodes and reach out to essential services such
as RabbitMQ and MySQL. They use the same approach that provides
redundancy to the end-users of Horizon and REST APIs, reaching out to
controller nodes using the VIP and going through HAProxy.

.. image:: https://docs.google.com/drawings/pub?id=16gsjc81Ptb5SL090XYAN8Kunrxfg6lScNCo3aReqdJI&w=873&h=801

@ -33,7 +63,13 @@ OpenStack compute nodes need to "talk" to controller nodes and reach out to esse
Storage Nodes
^^^^^^^^^^^^^

In this OpenStack cluster reference architecture, shared storage acts
as a backend for Glance, so that multiple Glance instances running on
controller nodes can store images and retrieve images from it. To
achieve this, you are going to deploy Swift. This enables you to use
it not only for storing VM images, but also for any other objects such
as user files.

.. image:: https://docs.google.com/drawings/pub?id=1Xd70yy7h5Jq2oBJ12fjnPWP8eNsWilC-ES1ZVTFo0m8&w=777&h=778
@ -2,9 +2,16 @@
Cluster Sizing
--------------

This reference architecture is well suited for production-grade
OpenStack deployments on a medium and large scale, when you can afford
to allocate several servers for your OpenStack controller nodes in
order to build a fully redundant and highly available environment.

The absolute minimum requirement for a highly-available OpenStack
deployment is to allocate 4 nodes:

* 3 controller nodes, combined with storage
* 1 compute node

@ -13,7 +20,7 @@ The absolute minimum requirement for a highly-available OpenStack deployment is
.. image:: https://docs.google.com/drawings/pub?id=19Dk1qD5V50-N0KX4kdG_0EhGUBP7D_kLi2dU6caL9AM&w=767&h=413

If you want to run storage separately from the controllers, you can do that as well by raising the bar to 7 nodes:

* 3 controller nodes
* 3 storage nodes

@ -23,9 +30,15 @@ If you want to run storage separately from controllers, you can do that as well
.. image:: https://docs.google.com/drawings/pub?id=1xmGUrk2U-YWmtoS77xqG0tzO3A47p6cI3mMbzLKG8tY&w=769&h=594

Of course, you are free to choose how to deploy OpenStack based on the
amount of available hardware and on your goals (such as whether you
want a compute-oriented or storage-oriented cluster).

For a typical OpenStack compute deployment, you can use this table as
high-level guidance to determine the number of controllers, compute,
and storage nodes you should have:

@ -34,4 +47,4 @@
============= =========== ======= ==============
# of Machines Controllers Compute Storage
============= =========== ======= ==============
11-40         3           5-34    3 (separate)
41-100        4           31-90   6 (separate)
>100          5           >86     9 (separate)
============= =========== ======= ==============
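The sizing table above can be read mechanically; as a purely illustrative sketch, a small helper that mirrors only the rows shown in the table might look like this:

```python
def recommended_controllers(total_machines: int) -> int:
    """Suggested number of controller nodes for a deployment of the
    given total size, following the rows of the sizing table."""
    if 11 <= total_machines <= 40:
        return 3
    if 41 <= total_machines <= 100:
        return 4
    if total_machines > 100:
        return 5
    # Rows for smaller deployments are not reproduced here
    raise ValueError("no table row shown for fewer than 11 machines")
```

Note that the controller count grows much more slowly than the compute count: past the quorum minimum of three, extra controllers are added only to absorb API and database load.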

Network Architecture
--------------------

The current architecture assumes the presence of 3 NIC cards in hardware, but can be customized to a different number of NICs (less, or more). For example, in section 3 you will see how to deploy your cluster with four NICs.

In this case, however, let's consider a typical example of 3 NIC cards. They're utilized as follows:

* **eth0**: the internal management network, used for communication with Puppet & Cobbler
* **eth1**: the public network, and floating IPs assigned to VMs
* **eth2**: the private network, for communication between OpenStack VMs, and the bridge interface (VLANs)

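As an illustrative sketch of this layout (interface roles from the list above; all addresses are hypothetical, and Fuel configures the real nodes for you), a Debian-style interfaces file on a node might look like:

```
# /etc/network/interfaces -- hypothetical example, not generated verbatim
auto eth0
iface eth0 inet static
    address 10.20.0.12        # internal management network (Puppet/Cobbler)
    netmask 255.255.255.0

auto eth1
iface eth1 inet static
    address 172.16.1.12       # public network / floating IP range
    netmask 255.255.255.0
    gateway 172.16.1.1

auto eth2
iface eth2 inet manual        # private network; carries tenant traffic,
    up ip link set eth2 up    # bridged/tagged by nova-network or Quantum
```
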
In the multi-host networking mode, you can choose between FlatDHCPManager and VlanManager network managers in OpenStack. The figure below illustrates the relevant nodes and networks.

.. image:: https://docs.google.com/drawings/pub?id=11KtrvPxqK3ilkAfKPSVN5KzBjnSPIJw-jRDc9fiYhxw&w=810&h=1060
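
As a hedged sketch of how the network-manager choice is expressed (option names are taken from nova-network's configuration of this era and should be verified against your OpenStack release), a nova.conf fragment might read:

```
# /etc/nova/nova.conf -- illustrative fragment, values are assumptions
network_manager=nova.network.manager.VlanManager   # or FlatDHCPManager
multi_host=True                # run nova-network on every compute node
public_interface=eth1          # public network / floating IPs
vlan_interface=eth2            # private network carrying tenant VLANs
vlan_start=100                 # first VLAN tag handed out to tenants
```
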

Let's take a closer look at each network and how it's used within the cluster.

Public Network
^^^^^^^^^^^^^^

This network allows inbound connections to VMs from the outside world (allowing users to connect to VMs from the Internet). It also allows outbound connections from VMs to the outside world.

For security reasons, the public network is usually isolated from the private network and internal (management) network. Typically, it's a single C class network from your globally routed or private network range.

To enable Internet access to VMs, the public network provides the address space for the floating IPs assigned to individual VM instances by the project administrator. Nova-network or Quantum services can then configure this address on the public network interface of the Network controller node. If the cluster uses nova-network, nova-network uses iptables to create a Destination NAT from this address to the fixed IP of the corresponding VM instance through the appropriate virtual bridge interface on the Network controller node.

In the other direction, the public network provides connectivity to the globally routed address space for VMs. The IP address from the public network that has been assigned to a compute node is used as the source for the Source NAT performed for traffic going from VM instances on the compute node to the Internet.

The public network also provides VIPs for Endpoint nodes, which are used to connect to OpenStack services APIs.
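
The NAT behavior described above can be illustrated with iptables rules like the following (a hand-written sketch with hypothetical addresses, not the exact rules nova-network emits; nova-network manages its equivalents automatically):

```
# Destination NAT: traffic to the floating IP 203.0.113.10 is rewritten
# to the VM's fixed IP 10.0.0.5 behind the tenant bridge.
iptables -t nat -A PREROUTING -d 203.0.113.10 -j DNAT --to-destination 10.0.0.5

# Source NAT: traffic leaving VMs on the 10.0.0.0/24 fixed range is
# rewritten to the compute node's public address 203.0.113.2.
iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -j SNAT --to-source 203.0.113.2
```
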

Internal (Management) Network
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The internal network connects all OpenStack nodes in the cluster. All components of an OpenStack cluster communicate with each other using this network. This network must be isolated from both the private and public networks for security reasons.

The internal network can also be used for serving iSCSI protocol exchanges between Compute and Storage nodes.

This network is usually a single C class network from your private, non-globally routed IP address range.

Private Network
^^^^^^^^^^^^^^^

The private network facilitates communication between each tenant's VMs. Private network address spaces are part of the enterprise network address space. Fixed IPs of virtual instances are directly accessible from the rest of the enterprise network.

The private network can be segmented into separate isolated VLANs, which are managed by nova-network or Quantum services.

Technical Considerations
------------------------

.. contents:: :local:

.. include:: /pages/reference-architecture/technical-considerations/0010-overview.rst
.. include:: /pages/reference-architecture/technical-considerations/0060-quantum-vs-nova-network.rst
.. include:: /pages/reference-architecture/technical-considerations/0050-cinder-vs-nova-volume.rst
.. include:: /pages/reference-architecture/technical-considerations/0070-swift-notes.rst

Before performing any installations, you'll need to make a number of decisions about which services to deploy. From a general architectural perspective, it's important to think about how you want to handle both networking and block storage.

Cinder vs. nova-volume
----------------------

Cinder is a persistent storage management service, also known as block-storage-as-a-service. It was created to replace nova-volume, and provides persistent storage for VMs.

If you decide to use Cinder for persistent storage, you will need to both enable Cinder and create the block devices on which it will store data. You will then provide information about those block devices during the Fuel install. (You'll see an example of how to do this in section 3.)

Cinder block devices can be:

* created by Cobbler during the initial node installation, or
* attached manually (e.g. as additional virtual disks if you are using VirtualBox, or as additional physical RAID or SAN volumes)
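
For the VirtualBox case, attaching an extra disk that Cinder can later use might look like this (the VM name, storage-controller name, and disk size are hypothetical placeholders, not commands Fuel runs for you):

```
# Hypothetical VirtualBox example -- names and sizes are placeholders.
# Create a 10 GB disk image and attach it to the node's SATA controller.
VBoxManage createhd --filename cinder-vol.vdi --size 10240
VBoxManage storageattach fuel-node-1 \
    --storagectl "SATA Controller" --port 1 --device 0 \
    --type hdd --medium cinder-vol.vdi
```
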

Quantum vs. nova-network
------------------------

Quantum is a service which provides networking-as-a-service functionality in OpenStack. It has a rich tenant-facing API for defining network connectivity and addressing in the cloud, and gives operators the ability to leverage different networking technologies to power their cloud networking.

There are several common deployment use cases for Quantum. Fuel supports the most common of them, called Provider Router with Private Networks. It provides each tenant with one or more private networks, which can communicate with the outside world via a Quantum router.

Quantum is not, however, required in order to run an OpenStack cluster; if you don't need (or want) this added functionality, it's perfectly acceptable to continue using nova-network.

In order to deploy Quantum, you need to enable it in the Fuel configuration. Fuel will then set up an additional node in the OpenStack installation to act as an L3 router or, depending on the configuration options you've chosen, install Quantum on the controllers.

.. _Swift-and-object-storage-notes:

Swift (object storage) notes
----------------------------

Fuel currently supports several ways to deploy the Swift service:

* Swift absent

  By default, Glance uses the filesystem backend to store virtual machine images. In this case, you can use any of the shared file systems Glance supports.

* Swift compact

  In this mode the roles of swift-storage and swift-proxy are combined with a nova-controller. Use it only for testing in order to save nodes; it's not suitable for production.

* Swift standalone

  In this case the Proxy service and Storage (account/container/object) services reside on separate nodes, with one proxy node and a minimum of three storage nodes. (For a production cluster, a minimum of five storage nodes is recommended.)

Now let's look at performing an actual OpenStack installation using Fuel.