Merge "Upgrade the rst convention of the User Guide"

This commit is contained in:
Zuul 2018-03-21 07:05:24 +00:00 committed by Gerrit Code Review
commit badf67ddf1
7 changed files with 470 additions and 290 deletions

View File

@@ -1,8 +1,9 @@
===========
User Guides
===========
.. toctree::
:maxdepth: 1
:maxdepth: 2
quickstart
multinode

View File

@@ -9,17 +9,17 @@ with Kolla. A basic multiple regions deployment consists of separate
OpenStack installation in two or more regions (RegionOne, RegionTwo, ...)
with a shared Keystone and Horizon. The rest of this documentation assumes
Keystone and Horizon are deployed in RegionOne, and other regions have
access to the admin endpoint (for example, ``kolla_internal_fqdn``) of
RegionOne.
It also assumes that the operator knows the names of all OpenStack regions
in advance, and runs one Kolla deployment per region.
There is a specification of multiple regions deployment at
`Multi Region Support for Heat
<https://wiki.openstack.org/wiki/Heat/Blueprints/Multi_Region_Support_for_Heat>`__.
Deployment of the first region with Keystone and Horizon
========================================================
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Deployment of the first region results in a typical Kolla deployment
whether it is an *all-in-one* or *multinode* deployment (see
@@ -27,27 +27,33 @@ whether it is an *all-in-one* or *multinode* deployment (see
``/etc/kolla/globals.yml`` configuration file. First of all, ensure that
Keystone and Horizon are enabled:
::
.. code-block:: console
enable_keystone: "yes"
enable_horizon: "yes"
.. end
Then, change the value of ``multiple_regions_names`` to add names of other
regions. In this example, we consider two regions: the current one,
formerly known as RegionOne, which is hidden behind the
``openstack_region_name`` variable, and RegionTwo:
::
.. code-block:: none
openstack_region_name: "RegionOne"
multiple_regions_names:
- "{{ openstack_region_name }}"
- "RegionTwo"
.. end
.. note::
Kolla uses these variables to create necessary endpoints into
Keystone so that services of other regions can access it. Kolla
also updates the Horizon ``local_settings`` to support multiple
regions.
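Once both variables are set and the first region is deployed, the endpoint
catalog can be inspected to confirm the expected entries were created. One
possible check, assuming the ``openstack`` CLI is installed and admin
credentials are sourced:

.. code-block:: console

   openstack endpoint list --os-region-name RegionOne

.. end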
Finally, note the value of ``kolla_internal_fqdn`` and run
``kolla-ansible``. The ``kolla_internal_fqdn`` value will be used by other
@@ -55,7 +61,7 @@ regions to contact Keystone. For the sake of this example, we assume the
value of ``kolla_internal_fqdn`` is ``10.10.10.254``.
Deployment of other regions
===========================
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Deployment of other regions follows a usual Kolla deployment except that
OpenStack services connect to RegionOne's Keystone. This requires you to
@@ -63,7 +69,7 @@ update the ``/etc/kolla/globals.yml`` configuration file to tell Kolla how
to reach Keystone. In the following, ``kolla_internal_fqdn_r1`` refers to
the value of ``kolla_internal_fqdn`` in RegionOne:
::
.. code-block:: none
kolla_internal_fqdn_r1: 10.10.10.254
@@ -77,32 +83,39 @@ the value of ``kolla_internal_fqdn`` in RegionOne:
project_name: "admin"
domain_name: "default"
.. end
Configuration files of cinder, nova, neutron, glance and so on have to be
updated to contact RegionOne's Keystone. Fortunately, Kolla allows you to
override all configuration files at once thanks to the
``node_custom_config`` variable (see :ref:`service-config`). This
implies creating a ``global.conf`` file with the following content:
::
.. code-block:: ini
[keystone_authtoken]
auth_uri = {{ keystone_internal_url }}
auth_url = {{ keystone_admin_url }}
.. end
The Placement API section inside the nova configuration file also has
to be updated to contact RegionOne's Keystone. So create, in the same
directory, a ``nova.conf`` file with the following content:
::
.. code-block:: ini
[placement]
auth_url = {{ keystone_admin_url }}
.. end
The Heat section inside the configuration file also
has to be updated to contact RegionOne's Keystone. So create, in the same
directory, a ``heat.conf`` file with the following content:
::
.. code-block:: ini
[trustee]
auth_uri = {{ keystone_internal_url }}
auth_url = {{ keystone_internal_url }}
@@ -113,33 +126,44 @@ directory, a ``heat.conf`` file with the following content:
[clients_keystone]
auth_uri = {{ keystone_internal_url }}
.. end
The Ceilometer section inside the configuration file also
has to be updated to contact RegionOne's Keystone. So create, in the same
directory, a ``ceilometer.conf`` file with the following content:
::
.. code-block:: ini
[service_credentials]
auth_url = {{ keystone_internal_url }}
.. end
Then reference the directory that contains these files in
``/etc/kolla/globals.yml``:
::
.. code-block:: none
node_custom_config: path/to/the/directory/of/global&nova_conf/
.. end
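At this point the override directory should contain one file per service to
reconfigure. A quick listing (the path is the illustrative placeholder used
above) would show something like:

.. code-block:: console

   ls 'path/to/the/directory/of/global&nova_conf/'
   ceilometer.conf  global.conf  heat.conf  nova.conf

.. end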
Also, change the name of the current region. For instance, RegionTwo:
::
.. code-block:: none
openstack_region_name: "RegionTwo"
.. end
Finally, disable the deployment of Keystone and Horizon, which are
unnecessary in this region, and run ``kolla-ansible``:
::
.. code-block:: none
enable_keystone: "no"
enable_horizon: "no"
.. end
The configuration is the same for any other region.
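After the additional regions are deployed, a simple way to confirm that all
regions are registered is to query Keystone, assuming an admin openrc is
sourced:

.. code-block:: console

   openstack region list

.. end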

View File

@@ -24,17 +24,21 @@ Edit the ``/etc/kolla/globals.yml`` and add the following where 192.168.1.100
is the IP address of the machine and 5000 is the port where the registry is
currently running:
::
.. code-block:: none
docker_registry: 192.168.1.100:5000
.. end
The Kolla community recommends using registry 2.3 or later. To deploy registry
with version 2.3 or later, do the following:
::
.. code-block:: console
cd kolla
tools/start-registry
.. end
The Docker registry can be configured as a pull through cache to proxy the
official Kolla images hosted in Docker Hub. In order to configure the local
@@ -42,75 +46,96 @@ registry as a pull through cache, in the host machine set the environment
variable ``REGISTRY_PROXY_REMOTEURL`` to the URL for the repository on
Docker Hub.
::
.. code-block:: console
export REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io
.. end
.. note::
Pushing to a registry configured as a pull-through cache is unsupported.
For more information, refer to the `Docker Documentation
<https://docs.docker.com/registry/configuration/>`__.
.. _configure_docker_all_nodes:
Configure Docker on all nodes
=============================
.. note::
As the subtitle for this section implies, these steps should be
applied to all nodes, not just the deployment node.
After starting the registry, it is necessary to instruct Docker that
it will be communicating with an insecure registry.
For example, to enable insecure registry communication on CentOS,
modify the ``/etc/sysconfig/docker`` file to contain the following where
``192.168.1.100`` is the IP address of the machine where the registry
is currently running:
.. path /etc/sysconfig/docker
.. code-block:: ini
INSECURE_REGISTRY="--insecure-registry 192.168.1.100:5000"
.. end
For Ubuntu, check whether it is using upstart or systemd.
::
.. code-block:: console
# stat /proc/1/exe
File: '/proc/1/exe' -> '/lib/systemd/systemd'
Edit ``/etc/default/docker`` and add the following configuration:
::
.. path /etc/default/docker
.. code-block:: ini
DOCKER_OPTS="--insecure-registry 192.168.1.100:5000"
.. end
If Ubuntu is using systemd, additional settings need to be configured.
Copy Docker's systemd unit file to ``/etc/systemd/system/`` directory:
::
.. code-block:: console
cp /lib/systemd/system/docker.service /etc/systemd/system/docker.service
.. end
Next, modify ``/etc/systemd/system/docker.service``, add an ``EnvironmentFile``
variable and add ``$DOCKER_OPTS`` to the end of ``ExecStart`` in the
``[Service]`` section.
For CentOS:
.. path /etc/systemd/system/docker.service
.. code-block:: ini
[Service]
MountFlags=shared
EnvironmentFile=/etc/sysconfig/docker
ExecStart=
ExecStart=/usr/bin/docker daemon $INSECURE_REGISTRY
.. end
For Ubuntu:
.. path /etc/systemd/system/docker.service
.. code-block:: ini
[Service]
MountFlags=shared
EnvironmentFile=-/etc/default/docker
ExecStart=
ExecStart=/usr/bin/docker daemon -H fd:// $DOCKER_OPTS
.. end
.. note::
@@ -120,14 +145,22 @@ section:
Restart Docker by executing the following commands:
For CentOS or Ubuntu with systemd:
.. code-block:: console
systemctl daemon-reload
systemctl restart docker
.. end
For Ubuntu with upstart or sysvinit:
.. code-block:: console
service docker restart
.. end
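After Docker has been restarted on a node, it is worth confirming that the
registry is reachable from it before deploying. One possible check, using the
registry address from the examples above, is to query the registry catalog:

.. code-block:: console

   curl http://192.168.1.100:5000/v2/_catalog

.. end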
.. _edit-inventory:
@@ -152,7 +185,7 @@ controls how ansible interacts with remote hosts.
information about SSH authentication, please refer to the
`Ansible documentation <http://docs.ansible.com/ansible/intro_inventory.html>`__.
::
.. code-block:: none
# These initial groups are the only groups required to be modified. The
# additional groups are for more control of the environment.
@@ -161,6 +194,8 @@ controls how ansible interacts with remote hosts.
control01 ansible_ssh_user=<ssh-username> ansible_become=True ansible_private_key_file=<path/to/private-key-file>
192.168.122.24 ansible_ssh_user=<ssh-username> ansible_become=True ansible_private_key_file=<path/to/private-key-file>
.. end
.. note::
Additional inventory parameters might be required according to your
@@ -173,7 +208,7 @@ For more advanced roles, the operator can edit which services will be
associated with each group. Keep in mind that some services have to be
grouped together and changing these around can break your deployment:
::
.. code-block:: none
[kibana:children]
control
@@ -184,6 +219,8 @@ grouped together and changing these around can break your deployment:
[haproxy:children]
network
.. end
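To double-check which hosts end up in a given group after editing, the
inventory can be queried directly. For example, assuming the inventory file
is named ``multinode``:

.. code-block:: console

   ansible -i multinode kibana --list-hosts

.. end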
Deploying Kolla
===============
@@ -203,9 +240,11 @@ Deploying Kolla
First, check that the deployment targets are in a state where Kolla may deploy
to them:
::
.. code-block:: console
kolla-ansible prechecks -i <path/to/multinode/inventory/file>
.. end
.. note::
@@ -215,8 +254,8 @@ to them:
Run the deployment:
::
.. code-block:: console
kolla-ansible deploy -i <path/to/multinode/inventory/file>
.. _Building Container Images: https://docs.openstack.org/kolla/latest/image-building.html
.. end

View File

@@ -5,7 +5,8 @@ Operating Kolla
===============
Upgrading
=========
~~~~~~~~~
Kolla's strategy for upgrades is to never make a mess and to follow consistent
patterns during deployment such that upgrades from one environment to the next
are simple to automate.
@@ -28,48 +29,68 @@ choosing.
If the alpha identifier is not used, Kolla will deploy or upgrade using the
version number information contained in the release. To customize the
version number uncomment ``openstack_release`` in ``globals.yml`` and specify
the version number desired.
For example, to deploy a custom-built ``Liberty`` version created with the
:command:`kolla-build --tag 1.0.0.0` operation, configure the ``globals.yml``
file:
.. code-block:: none
openstack_release: 1.0.0.0
.. end
Then run the following command to deploy:
.. code-block:: console
kolla-ansible deploy
.. end
If using Liberty and a custom alpha number of 0, and upgrading to 1,
configure the ``globals.yml`` file:
.. code-block:: none
openstack_release: 1.0.0.1
.. end
Then run the command to upgrade:
.. code-block:: console
kolla-ansible upgrade
.. end
.. note::
Varying degrees of success have been reported with upgrading
the libvirt container with a running virtual machine in it. The libvirt
upgrade still needs a bit more validation, but the Kolla community feels
confident this mechanism can be used with the correct Docker graph driver.
.. note::
The Kolla community recommends the btrfs or aufs graph drivers for
storing data as sometimes the LVM graph driver loses track of its reference
counting and results in an unremovable container.
.. note::
Because of system technical limitations, upgrade of a libvirt
container when using software emulation (``virt_type = qemu`` in
``nova.conf`` file), does not work at all. This is acceptable because
KVM is the recommended virtualization driver to use with Nova.
.. note::
Please note that when the ``use_preconfigured_databases`` flag is set to
``"yes"``, you need to have the ``log_bin_trust_function_creators``
set to ``1`` by your database administrator before performing the upgrade.
Tips and Tricks
===============
~~~~~~~~~~~~~~~
Kolla ships with several utilities intended to facilitate ease of operation.
``tools/cleanup-containers`` is used to remove deployed containers from the
@@ -113,21 +134,26 @@ Environment.
tests.
.. note::
In order to run smoke tests, ``kolla_enable_sanity_checks=yes`` is required.
``kolla-mergepwd --old OLD_PASSWDS --new NEW_PASSWDS --final FINAL_PASSWDS``
is used to merge passwords from an old installation with newly generated
passwords during the upgrade of a Kolla release. The workflow is:
#. Save old passwords from ``/etc/kolla/passwords.yml`` into
``passwords.yml.old``.
#. Generate new passwords via ``kolla-genpwd`` as ``passwords.yml.new``.
#. Merge ``passwords.yml.old`` and ``passwords.yml.new`` into
``/etc/kolla/passwords.yml``.
For example:
.. code-block:: console
mv /etc/kolla/passwords.yml passwords.yml.old
cp kolla-ansible/etc/kolla/passwords.yml passwords.yml.new
kolla-genpwd -p passwords.yml.new
kolla-mergepwd --old passwords.yml.old --new passwords.yml.new --final /etc/kolla/passwords.yml
.. end

View File

@@ -24,7 +24,7 @@ The host machine must satisfy the following minimum requirements:
.. note::
Root access to the deployment host machine is required.
Install dependencies
~~~~~~~~~~~~~~~~~~~~
@@ -34,34 +34,42 @@ before proceeding.
For CentOS, run:
::
.. code-block:: console
yum install epel-release
yum install python-pip
pip install -U pip
.. end
For Ubuntu, run:
::
.. code-block:: console
apt-get update
apt-get install python-pip
pip install -U pip
.. end
To build the code with the ``pip`` package manager, install the following
dependencies:
For CentOS, run:
::
.. code-block:: console
yum install python-devel libffi-devel gcc openssl-devel libselinux-python
.. end
For Ubuntu, run:
::
.. code-block:: console
apt-get install python-dev libffi-dev gcc libssl-dev python-selinux
.. end
Kolla deploys OpenStack using `Ansible <http://www.ansible.com>`__. Install
Ansible from distribution packaging if the distro packaging has recommended
@@ -76,17 +84,21 @@ repository to install via yum -- to do so, take a look at Fedora's EPEL `docs
On CentOS or RHEL systems, this can be done using:
::
.. code-block:: console
yum install ansible
.. end
Many DEB based systems do not meet Kolla's Ansible version requirements. It is
recommended to use pip to install Ansible >2.0. Finally, Ansible >2.0 may be
installed using:
::
.. code-block:: console
pip install -U ansible
.. end
.. note::
@@ -95,19 +107,24 @@ installed using:
If DEB based systems include a version of Ansible that meets Kolla's version
requirements it can be installed by:
::
.. code-block:: console
apt-get install ansible
.. end
It's beneficial to add the following options to the Ansible
configuration file ``/etc/ansible/ansible.cfg``:
::
.. path /etc/ansible/ansible.cfg
.. code-block:: ini
[defaults]
host_key_checking=False
pipelining=True
forks=100
.. end
Install Kolla-ansible
~~~~~~~~~~~~~~~~~~~~~
@@ -117,65 +134,80 @@ Install Kolla-ansible for deployment or evaluation
Install kolla-ansible and its dependencies using ``pip``.
::
.. code-block:: console
pip install kolla-ansible
.. end
Copy ``globals.yml`` and ``passwords.yml`` to the ``/etc/kolla`` directory.
For CentOS, run:
::
.. code-block:: console
cp -r /usr/share/kolla-ansible/etc_examples/kolla /etc/kolla/
.. end
For Ubuntu, run:
::
.. code-block:: console
cp -r /usr/local/share/kolla-ansible/etc_examples/kolla /etc/kolla/
.. end
Copy the ``all-in-one`` and ``multinode`` inventory files to
the current directory.
For CentOS, run:
::
.. code-block:: console
cp /usr/share/kolla-ansible/ansible/inventory/* .
.. end
For Ubuntu, run:
::
.. code-block:: console
cp /usr/local/share/kolla-ansible/ansible/inventory/* .
.. end
Install Kolla for development
-----------------------------
Clone the Kolla and Kolla-Ansible repositories from git.
::
.. code-block:: console
git clone https://github.com/openstack/kolla
git clone https://github.com/openstack/kolla-ansible
.. end
Kolla-ansible holds the configuration files (``globals.yml`` and
``passwords.yml``) in ``etc/kolla``. Copy the configuration
files to the ``/etc/kolla`` directory.
::
.. code-block:: console
cp -r kolla-ansible/etc/kolla /etc/kolla/
.. end
Kolla-ansible holds the inventory files (``all-in-one`` and ``multinode``)
in ``ansible/inventory``. Copy the inventory files to the current
directory.
::
.. code-block:: console
cp kolla-ansible/ansible/inventory/* .
.. end
Prepare initial configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -194,46 +226,50 @@ than one node, edit ``multinode`` inventory:
Edit the first section of ``multinode`` with connection details of your
environment, for example:
::
.. code-block:: none
[control]
10.0.0.[10:12] ansible_user=ubuntu ansible_password=foobar ansible_become=true
# Ansible supports syntax like [10:12] - that means 10, 11 and 12.
# The become clause means "use sudo".
[network:children]
control
# When you specify group_name:children, it uses the contents of the specified group.
[compute]
10.0.0.[13:14] ansible_user=ubuntu ansible_password=foobar ansible_become=true
[monitoring]
10.0.0.10
# This group is for the monitoring node.
# Fill it with one of the controllers' IP addresses or some other host.
[storage:children]
compute
[deployment]
localhost ansible_connection=local become=true
# use localhost and sudo
.. end
To learn more about inventory files, check the
`Ansible documentation <http://docs.ansible.com/ansible/latest/intro_inventory.html>`_.
To confirm that our inventory is correct, run:
::
.. code-block:: console
ansible -m ping all
.. end
.. note::
Ubuntu might not come with python pre-installed. That will cause
errors in the ping module. To quickly install python with ansible you
can run: ``ansible all -m raw -a "apt-get -y install python-dev"``.
Kolla passwords
---------------
@@ -244,16 +280,20 @@ manually or by running random password generator:
For deployment or evaluation, run:
::
.. code-block:: console
kolla-genpwd
.. end
For development, run:
::
.. code-block:: console
cd kolla-ansible/tools
./generate_passwords.py
.. end
Kolla globals.yml
-----------------
@@ -279,9 +319,11 @@ There are a few options that are required to deploy Kolla-Ansible:
For newcomers, we recommend using CentOS 7 or Ubuntu 16.04.
::
.. code-block:: console
kolla_base_distro: "centos"
.. end
Next "type" of installation needs to be configured.
Choices are:
@@ -301,16 +343,20 @@ There are a few options that are required to deploy Kolla-Ansible:
Source builds are proven to be slightly more reliable than binary.
::
.. code-block:: console
kolla_install_type: "source"
.. end
To use DockerHub images, the default image tag has to be overridden. Images are
tagged with release names. For example, to use stable Pike images set
::
.. code-block:: console
openstack_release: "pike"
.. end
It's important to use the same version of images as kolla-ansible. That
means if pip was used to install kolla-ansible, it is the latest stable
@@ -318,9 +364,11 @@ There are a few options that are required to deploy Kolla-Ansible:
master branch, DockerHub also provides daily builds of master branch (which is
tagged as ``master``):
::
.. code-block:: console
openstack_release: "master"
.. end
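One way to check which kolla-ansible version pip installed, and therefore
which image tag to match, is for instance:

.. code-block:: console

   pip show kolla-ansible | grep ^Version

.. end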
* Networking
@@ -330,18 +378,22 @@ There are a few options that are required to deploy Kolla-Ansible:
The first interface to set is "network_interface". This is the default
interface for multiple management-type networks.
::
.. code-block:: console
network_interface: "eth0"
.. end
The second interface is dedicated to Neutron external (or public)
networks; it can be vlan or flat, depending on how the networks are created.
This interface should be active without an IP address. If not, instances
won't be able to access the external networks.
::
.. code-block:: console
neutron_external_interface: "eth1"
.. end
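To bring the interface up without assigning an address, something like the
following can be run on the target hosts (the interface name here is just
the example used above):

.. code-block:: console

   ip link set eth1 up
   ip addr show eth1

.. end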
To learn more about network configuration, refer to the `Network overview
<https://docs.openstack.org/kolla-ansible/latest/admin/production-architecture-guide.html#network-configuration>`_.
@@ -351,9 +403,11 @@ There are a few options that are required to deploy Kolla-Ansible:
*not used* address in management network that is connected to our
``network_interface``.
::
.. code-block:: console
kolla_internal_vip_address: "10.1.0.250"
.. end
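A simple way to confirm that the chosen address is indeed unused is to ping
it before deploying; every request should time out, for example:

.. code-block:: console

   ping -c 3 10.1.0.250

.. end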
* Enable additional services
@@ -361,9 +415,11 @@ There are a few options that are required to deploy Kolla-Ansible:
support for a vast selection of additional services. To enable them, set
``enable_*`` to "yes". For example, to enable Block Storage service:
::
.. code-block:: console
enable_cinder: "yes"
.. end
Kolla now supports many OpenStack services; there is
`a list of available services
@@ -385,42 +441,54 @@ the correct versions.
#. Bootstrap servers with kolla deploy dependencies:
::
.. code-block:: console
kolla-ansible -i ./multinode bootstrap-servers
.. end
#. Do pre-deployment checks for hosts:
::
.. code-block:: console
kolla-ansible -i ./multinode prechecks
.. end
#. Finally, proceed to the actual OpenStack deployment:
::
.. code-block:: console
kolla-ansible -i ./multinode deploy
.. end
* For development, run:
#. Bootstrap servers with kolla deploy dependencies:
::
.. code-block:: console
cd kolla-ansible/tools
./kolla-ansible -i ./multinode bootstrap-servers
.. end
#. Do pre-deployment checks for hosts:
::
.. code-block:: console
./kolla-ansible -i ./multinode prechecks
.. end
#. Finally, proceed to the actual OpenStack deployment:
::
.. code-block:: console
./kolla-ansible -i ./multinode deploy
.. end
When this playbook finishes, OpenStack should be up, running and functional!
If an error occurs during execution, refer to
@@ -432,35 +500,44 @@ Using OpenStack
OpenStack requires an openrc file where credentials for the admin user
and so on are set. To generate this file, run:
::
.. code-block:: console
kolla-ansible post-deploy
. /etc/kolla/admin-openrc.sh
.. end
Install basic OpenStack CLI clients:
::
.. code-block:: console
pip install python-openstackclient python-glanceclient python-neutronclient
.. end
Depending on how you installed Kolla-Ansible, there is a script that will
create example networks, images, and so on.
For pip install and CentOS host:
::
.. code-block:: console
. /usr/share/kolla-ansible/init-runonce
.. end
For pip install and Ubuntu host:
::
.. code-block:: console
. /usr/local/share/kolla-ansible/init-runonce
.. end
For git pulled source:
::
.. code-block:: console
. kolla-ansible/tools/init-runonce
.. end

View File

@@ -5,52 +5,58 @@ Kolla Security
==============
Non Root containers
===================
~~~~~~~~~~~~~~~~~~~
The OpenStack services, with a few exceptions, run as non root inside
of Kolla's containers. Kolla uses the Docker provided ``USER`` flag to
set the appropriate user for each service.
SELinux
=======
~~~~~~~
The state of SELinux in Kolla is a work in progress. The short answer
is you must disable it until selinux policies are written for the
Docker containers.
To understand why Kolla needs to set certain selinux policies for
services that you wouldn't expect to need them (rabbitmq, mariadb, glance
and so on) we must take a step back and talk about Docker.
Docker has not had the concept of persistent containerized data until
recently. This means when a container is run the data it creates is
destroyed when the container goes away, which is obviously no good
in the case of upgrades.
It was suggested data containers could solve this issue by only holding
data if they were never recreated, leading to a scary state where you
could lose access to your data if the wrong command was executed. The
real answer to this problem came in Docker 1.9 with the introduction of
named volumes. You could now address volumes directly by name, removing
the need for so-called **data containers** altogether.
Another solution to the persistent data issue is to use a host bind
mount which involves making, for sake of example, host directory
``var/lib/mysql`` available inside the container at ``var/lib/mysql``.
This absolutely solves the problem of persistent data, but it introduces
another security issue, permissions. With this host bind mount solution
the data in ``var/lib/mysql`` will be owned by the mysql user in the
container. Unfortunately, that mysql user in the container could have
any UID/GID and that's who will own the data outside the container,
introducing a potential security risk. Additionally, this method
dirties the host and requires host permissions to the directories
to bind mount.
The solution Kolla chose is named volumes.
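Named volumes can be listed and inspected directly on a deployed node; the
volume name below is illustrative:

.. code-block:: console

   docker volume ls
   docker volume inspect mariadb

.. end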
Why does this matter in the case of selinux? Kolla does not run the
process it is launching as root in most cases. So glance-api is run
as the glance user, and mariadb is run as the mysql user, and so on.
When mounting a named volume in the location that the persistent data
will be stored it will be owned by the root user and group. The mysql
user has no permissions to write to this folder now. What Kolla does
is allow a select few commands to be run with sudo as the mysql user.
This allows the mysql user to chown a specific, explicit directory
and store its data in a named volume without the security risk and
other downsides of host bind mounts. The downside to this is selinux
blocks those sudo commands and it will do so until we make explicit
policies to allow those operations.
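Until such policies exist, SELinux is typically put into permissive mode on
the target hosts. One way to do this on CentOS, as a sketch to adapt to your
environment:

.. code-block:: console

   setenforce 0
   sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config

.. end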

View File

@@ -5,19 +5,21 @@ Troubleshooting Guide
=====================
Failures
========
~~~~~~~~
If Kolla fails, often it is caused by a CTRL-C during the deployment
process or a problem in the ``globals.yml`` configuration.
To correct the problem where Operators have a misconfigured environment,
the Kolla community has added a precheck feature which ensures the
deployment targets are in a state where Kolla may deploy to them. To
run the prechecks:
::
.. code-block:: console
kolla-ansible prechecks
.. end
If a failure during deployment occurs it nearly always occurs during evaluation
of the software. Once the Operator learns the few configuration options
@@ -30,9 +32,11 @@ In this scenario, Kolla's behavior is undefined.
The fastest way to recover from a deployment failure is to
remove the failed deployment:
::
.. code-block:: console
kolla-ansible destroy -i <<inventory-file>>
.. end
Any time the tags of a release change, it is possible that the container
implementation from older versions won't match the Ansible playbooks in a new
@@ -40,37 +44,46 @@ version. If running multinode from a registry, each node's Docker image cache
must be refreshed with the latest images before a new deployment can occur. To
refresh the docker cache from the local Docker registry:
::
.. code-block:: console
kolla-ansible pull
.. end
Debugging Kolla
===============
~~~~~~~~~~~~~~~
The status of containers after deployment can be determined on the deployment
targets by executing:
::
.. code-block:: console
docker ps -a
.. end
If any of the containers exited, this indicates a bug in the container. Please
seek help by filing a `launchpad bug <https://bugs.launchpad.net/kolla-ansible/+filebug>`__
or contacting the developers via IRC.
The logs can be examined by executing:
::
.. code-block:: console
docker exec -it fluentd bash
.. end
The logs from all services in all containers may be read from
``/var/log/kolla/SERVICE_NAME``.
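For instance, to follow the nova-api log on a node (the exact path is an
assumption based on the layout above):

.. code-block:: console

   tail -f /var/log/kolla/nova/nova-api.log

.. end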
If the stdout logs are needed, please run:
::
.. code-block:: console
docker logs <container-name>
.. end
Note that most of the containers don't log to stdout so the above command will
provide no information.
@@ -79,19 +92,13 @@ To learn more about Docker command line operation please refer to `Docker
documentation <https://docs.docker.com/reference/>`__.
When ``enable_central_logging`` is enabled, to view the logs in a web browser
using Kibana, go to
``http://<kolla_internal_vip_address>:<kibana_server_port>`` or
``http://<kolla_external_vip_address>:<kibana_server_port>``. Authenticate
using ``<kibana_user>`` and ``<kibana_password>``.
The values ``<kolla_internal_vip_address>``, ``<kolla_external_vip_address>``,
``<kibana_server_port>`` and ``<kibana_user>`` can be found in
``<kolla_install_path>/kolla/ansible/group_vars/all.yml`` or if the default
values are overridden, in ``/etc/kolla/globals.yml``. The value of
``<kibana_password>`` can be found in ``/etc/kolla/passwords.yml``.
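For example, the Kibana password can be retrieved on the deployment host
with (key name assumed):

.. code-block:: console

   grep ^kibana_password /etc/kolla/passwords.yml

.. end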