[Docs] Backport Master structure

This is a backport combining the documentation changes applied to master
according to the Queens blueprint "docs-improvements":

* [Docs] Flatten out monitoring
(cherry picked from commit ebdd5759b1)
* [Docs] Move upgrade guides into ops
(cherry picked from commit 56194bcb5a)
* [Docs] Merge advanced configuration into reference
(cherry picked from commit ba7e064ef9)
* [Docs] Uniform landing text
(cherry picked from commit 134ec81016)
* [Docs] Move AIO to first scenario
(cherry picked from commit dc8d6256ce)
* [Docs] Include test scenario as a new user story
(cherry picked from commit 3d76d5e2e2)
* [Docs] Fix references
(cherry picked from commit 1d47028911)
* [Docs] Move more examples to user guide
(cherry picked from commit 73c45a8108)
* [Docs] Move Ceph example to user guides
(cherry picked from commit d27e329a5a)
* [Docs] Move network architecture into reference
(cherry picked from commit 99ca16e85e)
* [Docs] Centralize Inventory documentation
(cherry picked from commit eb89fa513a)
* [Docs] Move limited connectivity to user guide
(cherry picked from commit b6eb92beca)
* [Docs] Migrate security into user guide
(cherry picked from commit f1a7525570)
* [Docs] Guide users more
(cherry picked from commit 99f4f17751)
* [Docs] Add explicit warnings on common mistake
(cherry picked from commit 41bd98385b)

Change-Id: I4b39f2a9f33eff7d0433a98a085cf4fd05cef75e
This commit is contained in:
Jean-Philippe Evrard 2018-02-17 14:09:26 +00:00
parent 2f53b2e1c2
commit 3eca1b5b77
63 changed files with 776 additions and 717 deletions

View File

@ -49,7 +49,7 @@ https://git.openstack.org/cgit/openstack/openstack-ansible-<ROLENAME>.
.. _official OpenStack project: https://governance.openstack.org/reference/projects/index.html
.. _Home Page: https://governance.openstack.org/reference/projects/openstackansible.html
.. _Deployment Guide: https://docs.openstack.org/project-deploy-guide/openstack-ansible/latest
.. _Quick Start: https://docs.openstack.org/openstack-ansible/latest/contributor/quickstart-aio.html
.. _Quick Start: https://docs.openstack.org/openstack-ansible/latest/user/aio/quickstart.html
.. _Developer Documentation: https://docs.openstack.org/openstack-ansible/latest/contributor/index.html
.. _Source: https://git.openstack.org/cgit/openstack/openstack-ansible
.. _OpenStack Mailing Lists: http://lists.openstack.org/

View File

@ -1,50 +0,0 @@
========
Affinity
========
When OpenStack-Ansible generates its dynamic inventory, the affinity
setting determines how many containers of a similar type are deployed on a
single physical host.
Using ``shared-infra_hosts`` as an example, consider this
``openstack_user_config.yml`` configuration:
.. code-block:: yaml
shared-infra_hosts:
infra1:
ip: 172.29.236.101
infra2:
ip: 172.29.236.102
infra3:
ip: 172.29.236.103
Three hosts are assigned to the ``shared-infra_hosts`` group, and
OpenStack-Ansible ensures that each host runs a single database container,
a single Memcached container, and a single RabbitMQ container. Each host has
an affinity of 1 by default, which means that each host runs one of each
container type.
If you are deploying a stand-alone Object Storage (swift) environment,
you can skip the deployment of RabbitMQ. If you use this configuration,
your ``openstack_user_config.yml`` file would look as follows:
.. code-block:: yaml
shared-infra_hosts:
infra1:
affinity:
rabbit_mq_container: 0
ip: 172.29.236.101
infra2:
affinity:
rabbit_mq_container: 0
ip: 172.29.236.102
infra3:
affinity:
rabbit_mq_container: 0
ip: 172.29.236.103
This configuration deploys a Memcached container and a database container
on each host, but no RabbitMQ containers.
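Conversely, an affinity greater than 1 deploys multiple containers of that
type on the same host. As a sketch, reusing the ``rabbit_mq_container`` key
shown above, the following would run two RabbitMQ containers on ``infra1``:

.. code-block:: yaml

   shared-infra_hosts:
     infra1:
       affinity:
         # Run two RabbitMQ containers on this host (the default affinity is 1)
         rabbit_mq_container: 2
       ip: 172.29.236.101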

View File

@ -1,12 +0,0 @@
==================================
Appendix I: Advanced configuration
==================================
.. TODO: include intro on what advanced configuration is, whether it's required
or optional, and when someone should do it
.. toctree::
:maxdepth: 2
app-advanced-config-override
app-advanced-config-affinity

View File

@ -1,270 +0,0 @@
===========================================
Overriding OpenStack configuration defaults
===========================================
OpenStack has many configuration options available in ``.conf`` files
(in a standard ``INI`` file format),
policy files (in a standard ``JSON`` format) and ``YAML`` files.
.. note::
``YAML`` files are only in the ceilometer project at this time.
OpenStack-Ansible enables you to reference any options in the
`OpenStack Configuration Reference`_ through the use of a simple set of
configuration entries in the ``/etc/openstack_deploy/user_variables.yml``.
This section describes how to use the configuration entries in the
``/etc/openstack_deploy/user_variables.yml`` file to override default
configuration settings. For more information, see the
:dev_docs:`Setting overrides in configuration files
<extending.html#setting-overrides-in-configuration-files>` section in the
developer documentation.
.. _OpenStack Configuration Reference: http://docs.openstack.org/draft/config-reference/
Overriding .conf files
~~~~~~~~~~~~~~~~~~~~~~
Most often, overrides are implemented for the ``<service>.conf`` files
(for example, ``nova.conf``). These files use a standard INI file format.
For example, you might want to add the following parameters to the
``nova.conf`` file:
.. code-block:: ini
[DEFAULT]
remove_unused_original_minimum_age_seconds = 43200
[libvirt]
cpu_mode = host-model
disk_cachemodes = file=directsync,block=none
[database]
idle_timeout = 300
max_pool_size = 10
To do this, you use the following configuration entry in the
``/etc/openstack_deploy/user_variables.yml`` file:
.. code-block:: yaml
nova_nova_conf_overrides:
DEFAULT:
remove_unused_original_minimum_age_seconds: 43200
libvirt:
cpu_mode: host-model
disk_cachemodes: file=directsync,block=none
database:
idle_timeout: 300
max_pool_size: 10
.. note::
The general format for the variable names used for overrides is
``<service>_<filename>_<file extension>_overrides``. For example, the variable
name used in these examples to add parameters to the ``nova.conf`` file is
``nova_nova_conf_overrides``.
You can also apply overrides on a per-host basis with the following
configuration in the ``/etc/openstack_deploy/openstack_user_config.yml``
file:
.. code-block:: yaml
compute_hosts:
900089-compute001:
ip: 192.0.2.10
host_vars:
nova_nova_conf_overrides:
DEFAULT:
remove_unused_original_minimum_age_seconds: 43200
libvirt:
cpu_mode: host-model
disk_cachemodes: file=directsync,block=none
database:
idle_timeout: 300
max_pool_size: 10
Use this method for any files with the ``INI`` format in OpenStack projects
deployed in OpenStack-Ansible.
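The same pattern applies to the other services. For example, using the
``cinder_cinder_conf_overrides`` variable listed later in this appendix, a
sketch of an override for the ``cinder.conf`` file (the option value here is
illustrative only) could look like:

.. code-block:: yaml

   # Illustrative value only; any valid cinder.conf option can be set.
   cinder_cinder_conf_overrides:
     DEFAULT:
       osapi_volume_workers: 4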
Overriding .json files
~~~~~~~~~~~~~~~~~~~~~~
To implement access controls that are different from the ones in a standard
OpenStack environment, you can adjust the default policies applied by services.
Policy files are in a ``JSON`` format.
For example, you might want to add the following policy in the ``policy.json``
file for the Identity service (keystone):
.. code-block:: json
{
"identity:foo": "rule:admin_required",
"identity:bar": "rule:admin_required"
}
To do this, you use the following configuration entry in the
``/etc/openstack_deploy/user_variables.yml`` file:
.. code-block:: yaml
keystone_policy_overrides:
identity:foo: "rule:admin_required"
identity:bar: "rule:admin_required"
.. note::
The general format for the variable names used for overrides is
``<service>_policy_overrides``. For example, the variable name used in this
example to add a policy to the Identity service (keystone) ``policy.json`` file
is ``keystone_policy_overrides``.
Use this method for any files with the ``JSON`` format in OpenStack projects
deployed in OpenStack-Ansible.
To assist you in finding the appropriate variable name to use for
overrides, the general format for the variable name is
``<service>_policy_overrides``.
Overriding .yml files
~~~~~~~~~~~~~~~~~~~~~
You can override ``.yml`` file values by supplying replacement YAML content.
.. note::
All default YAML file content is completely overwritten by the overrides,
so the entire YAML source (both the existing content and your changes)
must be provided.
For example, you might want to define a meter exclusion for all hardware
items in the default content of the ``pipeline.yml`` file for the
Telemetry service (ceilometer):
.. code-block:: yaml
sources:
- name: meter_source
interval: 600
meters:
- "!hardware.*"
sinks:
- meter_sink
- name: foo_source
value: foo
To do this, you use the following configuration entry in the
``/etc/openstack_deploy/user_variables.yml`` file:
.. code-block:: yaml
ceilometer_pipeline_yaml_overrides:
sources:
- name: meter_source
interval: 600
meters:
- "!hardware.*"
sinks:
- meter_sink
- name: source_foo
value: foo
.. note::
The general format for the variable names used for overrides is
``<service>_<filename>_<file extension>_overrides``. For example, the variable
name used in this example to define a meter exclusion in the ``pipeline.yml`` file
for the Telemetry service (ceilometer) is ``ceilometer_pipeline_yaml_overrides``.
Currently available overrides
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The following override variables are available.
Galera:
* galera_client_my_cnf_overrides
* galera_my_cnf_overrides
* galera_cluster_cnf_overrides
* galera_debian_cnf_overrides
Telemetry service (ceilometer):
* ceilometer_policy_overrides
* ceilometer_ceilometer_conf_overrides
* ceilometer_event_definitions_yaml_overrides
* ceilometer_event_pipeline_yaml_overrides
* ceilometer_pipeline_yaml_overrides
Block Storage (cinder):
* cinder_policy_overrides
* cinder_rootwrap_conf_overrides
* cinder_api_paste_ini_overrides
* cinder_cinder_conf_overrides
Image service (glance):
* glance_glance_api_paste_ini_overrides
* glance_glance_api_conf_overrides
* glance_glance_cache_conf_overrides
* glance_glance_manage_conf_overrides
* glance_glance_registry_paste_ini_overrides
* glance_glance_registry_conf_overrides
* glance_glance_scrubber_conf_overrides
* glance_glance_scheme_json_overrides
* glance_policy_overrides
Orchestration service (heat):
* heat_heat_conf_overrides
* heat_api_paste_ini_overrides
* heat_default_yaml_overrides
* heat_aws_rds_dbinstance_yaml_overrides
* heat_policy_overrides
Identity service (keystone):
* keystone_keystone_conf_overrides
* keystone_keystone_default_conf_overrides
* keystone_keystone_paste_ini_overrides
* keystone_policy_overrides
Networking service (neutron):
* neutron_neutron_conf_overrides
* neutron_ml2_conf_ini_overrides
* neutron_dhcp_agent_ini_overrides
* neutron_api_paste_ini_overrides
* neutron_rootwrap_conf_overrides
* neutron_policy_overrides
* neutron_dnsmasq_neutron_conf_overrides
* neutron_l3_agent_ini_overrides
* neutron_metadata_agent_ini_overrides
* neutron_metering_agent_ini_overrides
Compute service (nova):
* nova_nova_conf_overrides
* nova_rootwrap_conf_overrides
* nova_api_paste_ini_overrides
* nova_policy_overrides
Object Storage service (swift):
* swift_swift_conf_overrides
* swift_swift_dispersion_conf_overrides
* swift_proxy_server_conf_overrides
* swift_account_server_conf_overrides
* swift_account_server_replicator_conf_overrides
* swift_container_server_conf_overrides
* swift_container_server_replicator_conf_overrides
* swift_object_server_conf_overrides
* swift_object_server_replicator_conf_overrides
Tempest:
* tempest_tempest_conf_overrides
pip:
* pip_global_conf_overrides
.. note::
Possible additional overrides can be found in the "Tunable Section"
of each role's ``main.yml`` file, such as
``/etc/ansible/roles/role_name/defaults/main.yml``.

View File

@ -1,13 +0,0 @@
====================================
Appendix J: Ceph-Ansible integration
====================================
OpenStack-Ansible allows `Ceph storage <https://ceph.com>`_ cluster integration
using the roles maintained by the `Ceph-Ansible`_ project. Deployers can
enable the ``ceph-install`` playbook by adding hosts to the
``ceph-mon_hosts`` and ``ceph-osd_hosts`` groups in
``openstack_user_config.yml``, and then configuring `Ceph-Ansible specific vars
<https://github.com/ceph/ceph-ansible/blob/master/group_vars/all.yml.sample>`_
in the OpenStack-Ansible ``user_variables.yml`` file.
.. _Ceph-Ansible: https://github.com/ceph/ceph-ansible/
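As a minimal sketch (the host names and IP addresses here are hypothetical,
and a real Ceph cluster needs more than one OSD host), the
``openstack_user_config.yml`` entries enabling the playbook could look like:

.. code-block:: yaml

   # Hypothetical hosts and addresses; adjust to your environment.
   ceph-mon_hosts:
     infra1:
       ip: 172.29.236.101
   ceph-osd_hosts:
     osd1:
       ip: 172.29.236.121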

View File

@ -1,6 +1,6 @@
================================
Appendix K: Additional resources
================================
====================
Additional resources
====================
Ansible resources:

View File

@ -5,15 +5,4 @@ Appendices
.. toctree::
:maxdepth: 2
app-config-test.rst
app-config-prod.rst
app-config-pod.rst
app-config-prod-ceph.rst
app-custom-layouts.rst
app-security.rst
app-networking.rst
app-limited-connectivity.rst
app-advanced-config-sslcertificates
app-advanced-config-options.rst
app-ceph.rst
app-resources.rst

View File

@ -335,7 +335,7 @@ if watermark == "":
dev_branch_link_name = ""
deploy_guide_prefix = "http://docs.openstack.org/project-deploy-guide/openstack-ansible/{}/%s".format(deploy_branch_link_name)
dev_docs_prefix = "http://docs.openstack.org/openstack-ansible/{}%s".format(dev_branch_link_name)
dev_docs_prefix = "http://docs.openstack.org/openstack-ansible/{}/%s".format(deploy_branch_link_name)
role_docs_prefix = "http://docs.openstack.org/openstack-ansible-%s/{}".format(dev_branch_link_name)
extlinks = {'deploy_guide': (deploy_guide_prefix, ''),

View File

@ -39,8 +39,9 @@ host.
.. note::
The file is heavily commented with details about the various options.
See :ref:`openstack-user-config-reference` for more details.
This file is heavily commented with details about the various options.
See our :dev_docs:`User Guide <user/index.html>` and
:dev_docs:`Reference Guide <reference/index.html>` for more details.
The configuration in the ``openstack_user_config.yml`` file defines which hosts
run the containers and services deployed by OpenStack-Ansible. For
@ -55,16 +56,12 @@ individually in the example file as they are contained in the os-infra hosts.
You can specify image-hosts or dashboard-hosts if you want to scale out in a
specific manner.
For examples, please see :ref:`test-environment-config`,
:ref:`production-environment-config`, and :ref:`pod-environment-config`
For examples, please see our :dev_docs:`User Guides <user/index.html>`
For details about how the inventory is generated from the environment
configuration, see
`developer-inventory <https://docs.openstack.org/openstack-ansible/latest/reference/index.html>`_.
For details about how variable precedence works, and how to override
group vars, see
`developer-inventory-and-vars <https://docs.openstack.org/openstack-ansible/latest/contributor/inventory-and-vars.html>`_.
For details about how the inventory is generated from the environment
configuration, and about variable precedence, see our
:dev_docs:`Reference Guide <reference/index.html>` under the inventory
section.
Installing additional services
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -81,6 +78,14 @@ OpenStack-Ansible has many options that you can use for the advanced
configuration of services. Each role's documentation provides information
about the available options.
.. important::
This step is essential to tailoring OpenStack-Ansible to your needs
and is generally overlooked by new deployers. Have a look at each
role's documentation, the user guides, and the reference if you want a
tailor-made cloud.
Infrastructure service roles
----------------------------

View File

@ -154,7 +154,9 @@ Install the source and dependencies for the deployment host.
.. note::
If you are installing with limited connectivity, please review
:ref:`limited-connectivity-appendix` before proceeding.
:dev_docs:`Installing with limited connectivity
<user/limited-connectivity/index.html>`
before proceeding.
#. Clone the latest stable release of the OpenStack-Ansible Git repository in
the ``/opt/openstack-ansible`` directory:

View File

@ -9,7 +9,7 @@ intended for deployers.
.. note::
If you want to do a quick proof of concept of OpenStack, read the
`All-In-One quickstart Guide <https://docs.openstack.org/openstack-ansible/latest/contributor/quickstart-aio.html>`_
`All-In-One quickstart Guide <https://docs.openstack.org/openstack-ansible/latest/user/aio/quickstart.html>`_
instead of this document. This document is a walkthrough of a deployment
using OpenStack-Ansible, with all its configurability.

View File

@ -5,14 +5,14 @@ Next steps
Now that you have verified that your OpenStack cloud
is working, here is what you can do next:
Contribute to OpenStack-Ansible
===============================
If you want to contribute to OpenStack-Ansible, please
have a look at our `Contributors guide <https://docs.openstack.org/openstack-ansible/latest/contributor/index.html>`_.
Operate OpenStack-Ansible
=========================
Have a look at our `Operations guide <https://docs.openstack.org/openstack-ansible/latest/admin/index.html>`_.
Review our `Operations guide <https://docs.openstack.org/openstack-ansible/latest/admin/index.html>`_
to learn about verifying your environment in more detail, and creating your first networks, images, and instances.
Contribute to OpenStack-Ansible
===============================
Review our `Contributors guide <https://docs.openstack.org/openstack-ansible/latest/contributor/index.html>`_
to learn about contributing to OpenStack-Ansible.

View File

@ -9,7 +9,9 @@ hosts requires manual configuration because it varies from one use case to
another. This section describes the network configuration that must be
implemented on all target hosts.
For more information about how networking works, see :ref:`network-appendix`.
For more information about how networking works, see the
:dev_docs:`OpenStack-Ansible Reference Architecture, section Container
Networking <reference/architecture/index.html>`.
Host network bridges
~~~~~~~~~~~~~~~~~~~~

View File

@ -23,7 +23,9 @@ The following table shows bridges that are to be configured on hosts.
+-------------+-----------------------+-------------------------------------+
For a detailed reference of how the host and container networking is
implemented, refer to :ref:`network-appendix`.
implemented, refer to
:dev_docs:`OpenStack-Ansible Reference Architecture, section Container
Networking <reference/architecture/index.html>`.
For use case examples, refer to :ref:`test-environment-config` and
:ref:`production-environment-config`.
For use case examples, refer to
:dev_docs:`User Guides <user/index.html>`.

View File

@ -29,17 +29,11 @@ configuration and testing.
# lxc-attach -n infra1_utility_container-161a4084
#. Source the ``admin`` tenant credentials:
#. List your OpenStack users:
.. code-block:: console
# source /root/openrc
#. Run an OpenStack command that uses one or more APIs. For example:
.. code-block:: console
# openstack user list
# openstack user list --os-cloud=default
+----------------------------------+--------------------+
| ID | Name |
+----------------------------------+--------------------+
@ -60,7 +54,7 @@ configuration and testing.
+----------------------------------+--------------------+
Verifying the Dashboard (horizon)
---------------------------------
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#. With a web browser, access the Dashboard by using the external load
balancer IP address defined by the ``external_lb_vip_address`` option

View File

@ -1,3 +1,5 @@
.. _backup-restore:
==============================
Back up and restore your cloud
==============================
@ -26,4 +28,4 @@ Database backups and recovery
MySQL data is available on the infrastructure nodes.
You can recover databases, and rebuild the galera cluster.
For more information, see
:ref:`galera-cluster-maintenance`.
:ref:`galera-cluster-recovery`.

View File

@ -5,15 +5,21 @@ Operations Guide
This guide provides information about operating your OpenStack-Ansible
deployment.
For information how to deploy your OpenStack-Ansible cloud, refer to the
`Deployment Guide <https://docs.openstack.org/project-deploy-guide/openstack-ansible/latest/>`_.
for step-by-step instructions on how to deploy the OpenStack packages and
For information on how to deploy your OpenStack-Ansible cloud, refer to the
:deploy_guide:`Deployment Guide <index.html>` for step-by-step
instructions on how to deploy the OpenStack packages and
dependencies on your cloud using OpenStack-Ansible.
This guide is recommended for users of a successfully deployed
OpenStack-Ansible cloud. This explains from basic operations such as
adding images, booting instances, and attaching volumes, to the
more complex operations like upgrading.
For user guides, see the :dev_docs:`User Guide <user/index.html>`.
For information on how to contribute, extend or develop OpenStack-Ansible,
see the :dev_docs:`Contributors Guide <contributor/index.html>`.
For in-depth technical information, see the
:dev_docs:`OpenStack-Ansible Reference <reference/index.html>`.
This guide ranges from the first operations to verify your deployment to
the major upgrade procedures.
.. toctree::
:maxdepth: 1
@ -22,7 +28,7 @@ more complex operations like upgrading.
openstack-operations.rst
maintenance-tasks.rst
scale-environment.rst
monitoring-systems.rst
monitor-environment/monitoring-systems.rst
backup-restore.rst
troubleshooting.rst
upgrades/minor-updates.rst

View File

@ -142,6 +142,8 @@ one of the nodes.
wsrep_cluster_state_uuid 338b06b0-2948-11e4-9d06-bef42f6c52f1
wsrep_cluster_status Primary
.. _galera-cluster-recovery:
Galera cluster recovery
~~~~~~~~~~~~~~~~~~~~~~~

View File

@ -1,11 +0,0 @@
===========================
Monitoring your environment
===========================
This is a draft monitoring environment page for the proposed OpenStack-Ansible
operations guide.
.. toctree::
:maxdepth: 2
monitor-environment/monitoring-systems.rst

View File

@ -1,6 +1,6 @@
=======================================================
Integrate OpenStack-Ansible into your monitoring system
=======================================================
===========================
Monitoring your environment
===========================
This is a draft monitoring system page for the proposed OpenStack-Ansible
operations guide.

View File

@ -14,7 +14,7 @@ needed in an environment, it is possible to create additional nodes.
.. warning::
Make sure you back up your current OpenStack environment
before adding any new nodes. See :ref:`backing-up` for more
before adding any new nodes. See :ref:`backup-restore` for more
information.
#. Add the node to the ``infra_hosts`` stanza of the

View File

@ -1,6 +1,5 @@
.. _upgrading-manually:
==================
Upgrading manually
==================

View File

@ -1,6 +1,5 @@
.. _upgrading-by-using-a-script:
===========================
Upgrading by using a script
===========================
@ -10,8 +9,8 @@ the code for migrating from |previous_release_formal_name| to
.. warning::
The upgrade script is still under active development. Do *not* run it
on a production environment at this time.
The upgrade script is always under active development.
Test it on a development environment first.
Running the upgrade script
~~~~~~~~~~~~~~~~~~~~~~~~~~

View File

@ -1,17 +1,18 @@
========
Overview
========
==============
Major upgrades
==============
An OpenStack-Ansible environment can be upgraded to a minor or a major version.
This guide provides information about the upgrade process from
|previous_release_formal_name| to |current_release_formal_name|
for OpenStack-Ansible.
.. note::
You can only upgrade between sequential releases.
Upgrades between minor versions of OpenStack-Ansible require
updating the repository clone to the latest minor release tag, and then
running playbooks against the target hosts. For more information, see
:ref:`upgrading-to-a-minor-version`.
Introduction
============
For upgrades between major versions, the OpenStack-Ansible repository provides
playbooks and scripts to upgrade an environment. The ``run-upgrade.sh``
@ -24,7 +25,11 @@ major upgrade process performs the following actions:
- Places flag files that are created by the migration scripts in order to
achieve idempotency. These files are placed in the |upgrade_backup_dir|
directory.
- Upgrades the RabbitMQ server. See :ref:`setup-infra-playbook` for details.
- Upgrades the infrastructure servers.
See :ref:`setup-infra-playbook` for details.
For more information about the major upgrade process, see
:ref:`upgrading-by-using-a-script` and :ref:`upgrading-manually`.
.. include:: major-upgrades-with-script.rst
.. include:: major-upgrades-manual-upgrade.rst

View File

@ -1,19 +1,19 @@
.. _upgrading-to-a-minor-version:
=====================
Minor version upgrade
=====================
=================================
Executing a minor version upgrade
=================================
Upgrades between minor versions of OpenStack-Ansible require updating the
repository to the latest minor release tag, and then running playbooks
against the target hosts. This section provides instructions for those tasks.
Upgrades between minor versions of OpenStack-Ansible require
updating the repository clone to the latest minor release tag, updating
the Ansible roles, and then running playbooks against the target hosts.
This section provides instructions for those tasks.
Prerequisites
~~~~~~~~~~~~~
To avoid issues and simplify troubleshooting during the upgrade, disable the
security hardening role by setting the ``apply_security_hardening`` variable
to ``False`` in the :file:`user_variables.yml` file.
to ``False`` in the :file:`user_variables.yml` file, and back up your
OpenStack-Ansible installation.
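As a sketch, the relevant ``user_variables.yml`` entry is a single variable:

.. code-block:: yaml

   # Disable the security hardening role for the duration of the upgrade.
   apply_security_hardening: False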
Execute a minor version upgrade
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

View File

@ -29,7 +29,7 @@ import sys
# Create dynamic table file.
CONF_PATH = os.path.dirname(os.path.realpath(__file__))
SCENARIO_TABLE = 'contributor/scenario-table-gen.html'
SCENARIO_TABLE = 'user/aio/scenario-table-gen.html'
TABLE_FILE = os.path.join(CONF_PATH, SCENARIO_TABLE)
stg = imp.load_source(
'scenario_table_gen',
@ -338,7 +338,7 @@ upgrade_backup_dir = "``/etc/openstack_deploy."+previous_release_capital_name+"`
# Used to reference the deploy guide
deploy_guide_prefix = "http://docs.openstack.org/project-deploy-guide/openstack-ansible/{}/%s".format(deploy_branch_link_name)
dev_docs_prefix = "http://docs.openstack.org/openstack-ansible/{}%s".format(dev_branch_link_name)
dev_docs_prefix = "http://docs.openstack.org/openstack-ansible/{}/%s".format(deploy_branch_link_name)
rst_epilog = """
.. |previous_release_branch_name| replace:: %s

View File

@ -5,15 +5,25 @@ Developer Documentation
In this section, you will find documentation relevant to developing
OpenStack-Ansible.
For information on how to install and deploy OpenStack-Ansible, see the
`Deployment Guide <https://docs.openstack.org/project-deploy-guide/openstack-ansible/latest/>`_.
For information on how to deploy your OpenStack-Ansible cloud, refer to the
:deploy_guide:`Deployment Guide <index.html>` for step-by-step
instructions on how to deploy the OpenStack packages and
dependencies on your cloud using OpenStack-Ansible.
For user guides, see the :dev_docs:`User Guide <user/index.html>`.
For information on how to manage and operate OpenStack-Ansible, see
the :dev_docs:`Operations Guide <admin/index.html>`.
For in-depth technical information, see the
:dev_docs:`OpenStack-Ansible Reference <reference/index.html>`.
Contents:
.. toctree::
:maxdepth: 2
quickstart-aio
inventory-and-vars
scripts
contribute

File diff suppressed because one or more lines are too long

View File

@ -51,13 +51,12 @@ arguments to ``ansible-playbook`` as a convenience.
bootstrap-aio.sh
----------------
The ``bootstrap-aio.sh`` script prepares a host for an `All-In-One`_ (AIO)
The ``bootstrap-aio.sh`` script prepares a host for an
:ref:`All-In-One <quickstart-aio>` (AIO)
deployment for the purposes of development and gating. The script creates the
necessary partitions, directories, and configurations. The script can be
configured using environment variables - more details are provided on the
`All-In-One`_ page.
.. _All-In-One: quickstart-aio.html
:ref:`All-In-One <quickstart-aio>` page.
Development and Testing
^^^^^^^^^^^^^^^^^^^^^^^

View File

@ -1,8 +1,8 @@
.. _network-appendix:
.. _container-networking:
================================
Appendix G: Container networking
================================
====================
Container networking
====================
OpenStack-Ansible deploys Linux containers (LXC) and uses Linux
bridging between the container and the host interfaces to ensure that
@ -53,7 +53,7 @@ namespaces.
The following image demonstrates how the container network interfaces are
connected to the host's bridges and physical network interfaces:
.. image:: figures/networkcomponents.png
.. image:: ../figures/networkcomponents.png
Network diagrams
~~~~~~~~~~~~~~~~
@ -64,7 +64,7 @@ Hosts with services running in containers
The following diagram shows how all of the interfaces and bridges interconnect
to provide network connectivity to the OpenStack deployment:
.. image:: figures/networkarch-container-external.png
.. image:: ../figures/networkarch-container-external.png
The interface ``lxcbr0`` provides connectivity for the containers to the
outside world, thanks to dnsmasq (dhcp/dns) + NAT.
@ -84,7 +84,7 @@ OpenStack-Ansible deploys the Compute service on the physical host rather than
in a container. The following diagram shows how to use bridges for
network connectivity:
.. image:: figures/networkarch-bare-external.png
.. image:: ../figures/networkarch-bare-external.png
Neutron traffic
---------------
@ -96,12 +96,12 @@ networking-agents container. The diagram shows how DHCP agents provide
information (IP addresses and DNS servers) to the instances, and how routing
works on the image.
.. image:: figures/networking-neutronagents.png
.. image:: ../figures/networking-neutronagents.png
The following diagram shows how virtual machines connect to the ``br-vlan`` and
``br-vxlan`` bridges and send traffic to the network outside the host:
.. image:: figures/networking-compute.png
.. image:: ../figures/networking-compute.png
.. _openstack-user-config-reference:
@ -112,7 +112,7 @@ The ``openstack_user_config.yml.example`` file is heavily commented with the
details of how to do more advanced container networking configuration. The
contents of the file are shown here for reference.
.. literalinclude:: ../../etc/openstack_deploy/openstack_user_config.yml.example
.. literalinclude:: ../../../../etc/openstack_deploy/openstack_user_config.yml.example
:language: yaml
:start-after: under the License.

View File

@ -0,0 +1,15 @@
============
Architecture
============
Many operational requirements have been taken into consideration for
the design of the OpenStack-Ansible project.
In this chapter, you can find details about `why` OpenStack-Ansible
was architected in this way.
.. toctree::
:maxdepth: 1
security.rst
container-networking.rst

View File

@ -1,15 +1,13 @@
====================
Appendix F: Security
====================
.. _security-design:
Security
========
Security is one of the top priorities within OpenStack-Ansible (OSA), and many
security enhancements for OpenStack clouds are available in deployments by
default. This appendix provides a detailed overview of the most important
default. This section provides a detailed overview of the most important
security enhancements.
For more information about configuring security, see
:deploy_guide:`Appendix H <app-advanced-config-options.html>`.
.. note::
Every deployer has different security requirements.
@ -33,7 +31,8 @@ certificates, keys, and CA certificates.
To learn more about how to customize the deployment of encrypted
communications, see
:deploy_guide:`Securing services with SSL certificates <app-advanced-config-sslcertificates.html>`.
:deploy_guide:`Securing services with SSL
certificates <app-advanced-config-sslcertificates.html>`.
Host security hardening
~~~~~~~~~~~~~~~~~~~~~~~
@ -55,27 +54,6 @@ to all deployments. The role has been carefully designed to perform as follows:
* Balance security with OpenStack performance and functionality
* Run as quickly as possible
The role is applicable to physical hosts within an OpenStack-Ansible deployment
that are operating as any type of node, infrastructure or compute. By
default, the role is enabled. You can disable it by changing the value of
the ``apply_security_hardening`` variable in the ``user_variables.yml`` file
to ``false``:
.. code-block:: yaml
apply_security_hardening: false
You can apply security hardening configurations to an existing environment or
audit an environment by using a playbook supplied with OpenStack-Ansible:
.. code-block:: bash
# Apply security hardening configurations
openstack-ansible security-hardening.yml
# Perform a quick audit by using Ansible's check mode
openstack-ansible --check security-hardening.yml
For more information about the security configurations, see the
`security hardening role`_ documentation.

View File

@ -1,5 +1,7 @@
Using overrides
===============
.. _user-overrides:
Overriding default configuration
================================
user_*.yml files
~~~~~~~~~~~~~~~~
@ -18,7 +20,7 @@ variables in files named following the ``user_*.yml`` pattern so they will be
sourced alongside those used exclusively by OpenStack-Ansible.
Ordering and precedence
~~~~~~~~~~~~~~~~~~~~~~~
^^^^^^^^^^^^^^^^^^^^^^^
``user_*.yml`` files contain YAML variables which are applied as extra-vars
when executing ``openstack-ansible`` to run playbooks. They will be sourced
@ -26,8 +28,8 @@ in alphanumeric order by ``openstack-ansible``. If duplicate variables occur
in the ``user_*.yml`` files, the variable in the last file read will take
precedence.
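For instance (file names illustrative), if two ``user_*.yml`` files set the
same variable, the value from the alphanumerically later file is the one
applied:

.. code-block:: yaml

   # /etc/openstack_deploy/user_variables.yml
   debug: false

   # /etc/openstack_deploy/user_zzz_variables.yml (sourced later, so it wins)
   debug: true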
Adding extra python packages into the environment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Adding extra python packages
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The system will allow you to install and build any package that is a python
installable. The repository infrastructure will look for and create any
@ -64,8 +66,8 @@ Once the variables are set call the play ``repo-build.yml`` to build all of the
wheels within the repository infrastructure. When ready run the target plays to
deploy your overridden source code.
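As a sketch only (the variable name below is hypothetical; consult the
tunables documented earlier in this section and the repo server role
defaults for the real ones), such a wheel-build override placed in a
``user_*.yml`` file could look like:

.. code-block:: yaml

   # Hypothetical variable name, for illustration only
   user_extra_python_packages:
     - pymysql
     - python-memcached

After setting it, run ``openstack-ansible repo-build.yml`` as described
above.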
Setting overrides in configuration files
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Setting overrides in configuration files with config_template
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
All of the services that use YAML, JSON, or INI for configuration can receive
overrides through the use of a Ansible action plugin named ``config_template``.
@ -75,16 +77,14 @@ preset template option. All OpenStack-Ansible roles allow for this
functionality where applicable. Files available to receive overrides can be
seen in the ``defaults/main.yml`` file as standard empty dictionaries (hashes).
Practical guidance for using this feature is available in the
:deploy_guide:`Deployment Guide <app-advanced-config-override.html>`.
This module has been `rejected for inclusion`_ into Ansible Core.
.. _rejected for inclusion: https://github.com/ansible/ansible/pull/12555
This module was not accepted into Ansible Core (see `PR1`_ and `PR2`_), and
will never be.
.. _PR1: https://github.com/ansible/ansible/pull/12555
.. _PR2: https://github.com/ansible/ansible/pull/35453
config_template documentation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
These are the options available as found within the virtual module
documentation section.
@ -135,6 +135,13 @@ documentation section.
Example task using the config_template module
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In this task the ``test.ini.j2`` file is a template which will be rendered and
written to disk at ``/tmp/test.ini``. The **config_overrides** entry is a
dictionary (hash) which allows a deployer to set arbitrary data as overrides to
be written into the configuration file at run time. The **config_type** entry
specifies the type of configuration file the module will be interacting with;
available options are "yaml", "json", and "ini".
.. code-block:: yaml
- name: Run config template ini
@ -145,8 +152,7 @@ Example task using the config_template module
config_type: ini
Example overrides dictionary (hash)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Here is an example override dictionary (hash):
.. code-block:: yaml
@ -155,8 +161,7 @@ Example overrides dictionary (hash)
new_item: 12345
Original template file ``test.ini.j2``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
And here is the template file:
.. code-block:: ini
@ -164,9 +169,8 @@ Original template file ``test.ini.j2``
value1 = abc
value2 = 123
Rendered on disk file ``/tmp/test.ini``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The rendered file on disk, namely ``/tmp/test.ini``, looks like
this:
.. code-block:: ini
@ -176,14 +180,6 @@ Rendered on disk file ``/tmp/test.ini``
new_item = 12345
In this task the ``test.ini.j2`` file is a template which will be rendered and
written to disk at ``/tmp/test.ini``. The **config_overrides** entry is a
dictionary (hash) which allows a deployer to set arbitrary data as overrides to
be written into the configuration file at run time. The **config_type** entry
specifies the type of configuration file the module will be interacting with;
available options are "yaml", "json", and "ini".
Discovering available overrides
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@ -199,3 +195,260 @@ The list of overrides available may be found by executing:
find . -name "main.yml" -exec grep '_.*_overrides:' {} \; \
| grep -v "^#" \
| sort -u
The following override variables are currently available:
Galera:
* galera_client_my_cnf_overrides
* galera_my_cnf_overrides
* galera_cluster_cnf_overrides
* galera_debian_cnf_overrides
Telemetry service (ceilometer):
* ceilometer_policy_overrides
* ceilometer_ceilometer_conf_overrides
* ceilometer_event_definitions_yaml_overrides
* ceilometer_event_pipeline_yaml_overrides
* ceilometer_pipeline_yaml_overrides
Block Storage (cinder):
* cinder_policy_overrides
* cinder_rootwrap_conf_overrides
* cinder_api_paste_ini_overrides
* cinder_cinder_conf_overrides
Image service (glance):
* glance_glance_api_paste_ini_overrides
* glance_glance_api_conf_overrides
* glance_glance_cache_conf_overrides
* glance_glance_manage_conf_overrides
* glance_glance_registry_paste_ini_overrides
* glance_glance_registry_conf_overrides
* glance_glance_scrubber_conf_overrides
* glance_glance_scheme_json_overrides
* glance_policy_overrides
Orchestration service (heat):
* heat_heat_conf_overrides
* heat_api_paste_ini_overrides
* heat_default_yaml_overrides
* heat_aws_rds_dbinstance_yaml_overrides
* heat_policy_overrides
Identity service (keystone):
* keystone_keystone_conf_overrides
* keystone_keystone_default_conf_overrides
* keystone_keystone_paste_ini_overrides
* keystone_policy_overrides
Networking service (neutron):
* neutron_neutron_conf_overrides
* neutron_ml2_conf_ini_overrides
* neutron_dhcp_agent_ini_overrides
* neutron_api_paste_ini_overrides
* neutron_rootwrap_conf_overrides
* neutron_policy_overrides
* neutron_dnsmasq_neutron_conf_overrides
* neutron_l3_agent_ini_overrides
* neutron_metadata_agent_ini_overrides
* neutron_metering_agent_ini_overrides
Compute service (nova):
* nova_nova_conf_overrides
* nova_rootwrap_conf_overrides
* nova_api_paste_ini_overrides
* nova_policy_overrides
Object Storage service (swift):
* swift_swift_conf_overrides
* swift_swift_dispersion_conf_overrides
* swift_proxy_server_conf_overrides
* swift_account_server_conf_overrides
* swift_account_server_replicator_conf_overrides
* swift_container_server_conf_overrides
* swift_container_server_replicator_conf_overrides
* swift_object_server_conf_overrides
* swift_object_server_replicator_conf_overrides
Tempest:
* tempest_tempest_conf_overrides
pip:
* pip_global_conf_overrides
.. note::
Possible additional overrides can be found in the "Tunable Section"
of each role's ``main.yml`` file, such as
``/etc/ansible/roles/role_name/defaults/main.yml``.
Overriding OpenStack configuration defaults
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
OpenStack has many configuration options available in ``.conf`` files
(in a standard ``INI`` file format),
policy files (in a standard ``JSON`` format) and ``YAML`` files, and
can therefore use the ``config_template`` module described above.
OpenStack-Ansible enables you to reference any options in the
`OpenStack Configuration Reference`_ through the use of a simple set of
configuration entries in the ``/etc/openstack_deploy/user_variables.yml``.
.. _OpenStack Configuration Reference: http://docs.openstack.org/draft/config-reference/
Overriding .conf files
^^^^^^^^^^^^^^^^^^^^^^
Most often, overrides are implemented for the ``<service>.conf`` files
(for example, ``nova.conf``). These files use a standard INI file format.
For example, you might want to add the following parameters to the
``nova.conf`` file:
.. code-block:: ini
[DEFAULT]
remove_unused_original_minimum_age_seconds = 43200
[libvirt]
cpu_mode = host-model
disk_cachemodes = file=directsync,block=none
[database]
idle_timeout = 300
max_pool_size = 10
To do this, you use the following configuration entry in the
``/etc/openstack_deploy/user_variables.yml`` file:
.. code-block:: yaml
nova_nova_conf_overrides:
DEFAULT:
remove_unused_original_minimum_age_seconds: 43200
libvirt:
cpu_mode: host-model
disk_cachemodes: file=directsync,block=none
database:
idle_timeout: 300
max_pool_size: 10
.. note::
The general format for the variable names used for overrides is
``<service>_<filename>_<file extension>_overrides``. For example, the variable
name used in these examples to add parameters to the ``nova.conf`` file is
``nova_nova_conf_overrides``.
You can also apply overrides on a per-host basis with the following
configuration in the ``/etc/openstack_deploy/openstack_user_config.yml``
file:
.. code-block:: yaml
compute_hosts:
900089-compute001:
ip: 192.0.2.10
host_vars:
nova_nova_conf_overrides:
DEFAULT:
remove_unused_original_minimum_age_seconds: 43200
libvirt:
cpu_mode: host-model
disk_cachemodes: file=directsync,block=none
database:
idle_timeout: 300
max_pool_size: 10
Use this method for any files with the ``INI`` format in OpenStack projects
deployed in OpenStack-Ansible.
Overriding .json files
^^^^^^^^^^^^^^^^^^^^^^
To implement access controls that are different from the ones in a standard
OpenStack environment, you can adjust the default policies applied by services.
Policy files are in a ``JSON`` format.
For example, you might want to add the following policy in the ``policy.json``
file for the Identity service (keystone):
.. code-block:: json
{
"identity:foo": "rule:admin_required",
"identity:bar": "rule:admin_required"
}
To do this, you use the following configuration entry in the
``/etc/openstack_deploy/user_variables.yml`` file:
.. code-block:: yaml
keystone_policy_overrides:
identity:foo: "rule:admin_required"
identity:bar: "rule:admin_required"
.. note::
The general format for the variable names used for overrides is
``<service>_policy_overrides``. For example, the variable name used in this
example to add a policy to the Identity service (keystone) ``policy.json`` file
is ``keystone_policy_overrides``.
Use this method for any files with the ``JSON`` format in OpenStack projects
deployed in OpenStack-Ansible.
To assist you in finding the appropriate variable name to use for
overrides, the general format for the variable name is
``<service>_policy_overrides``.
Overriding .yml files
^^^^^^^^^^^^^^^^^^^^^
You can override ``.yml`` file values by supplying replacement YAML content.
.. note::
All default YAML file content is completely overwritten by the overrides,
so the entire YAML source (both the existing content and your changes)
must be provided.
For example, you might want to define a meter exclusion for all hardware
items in the default content of the ``pipeline.yml`` file for the
Telemetry service (ceilometer):
.. code-block:: yaml
sources:
- name: meter_source
interval: 600
meters:
- "!hardware.*"
sinks:
- meter_sink
- name: foo_source
value: foo
To do this, you use the following configuration entry in the
``/etc/openstack_deploy/user_variables.yml`` file:
.. code-block:: yaml
ceilometer_pipeline_yaml_overrides:
sources:
- name: meter_source
interval: 600
meters:
- "!hardware.*"
sinks:
- meter_sink
- name: source_foo
value: foo
.. note::
The general format for the variable names used for overrides is
``<service>_<filename>_<file extension>_overrides``. For example, the variable
name used in this example to define a meter exclusion in the ``pipeline.yml`` file
for the Telemetry service (ceilometer) is ``ceilometer_pipeline_yaml_overrides``.


View File

@ -5,11 +5,26 @@ OpenStack-Ansible Reference
This chapter contains all the extra reference information
to deploy, configure, or upgrade an OpenStack-Ansible cloud.
For information on how to deploy your OpenStack-Ansible cloud, refer to the
:deploy_guide:`Deployment Guide <index.html>` for step-by-step
instructions on how to deploy the OpenStack packages and
dependencies on your cloud using OpenStack-Ansible.
For user guides, see the :dev_docs:`User Guide <user/index.html>`.
For information on how to manage and operate OpenStack-Ansible, see
the :dev_docs:`Operations Guide <admin/index.html>`.
For information on how to contribute, extend or develop OpenStack-Ansible,
see the :dev_docs:`Contributors Guide <contributor/index.html>`.
.. toctree::
:maxdepth: 1
conventions.rst
inventory/inventory.rst
configuration/advanced-config.rst
architecture/index.rst
commands/reference.rst
upgrades/reference.rst

View File

@ -1,9 +1,8 @@
.. _configuring-inventory:
Configuring the inventory
=========================
conf.d
~~~~~~
Common OpenStack services and their configuration are defined by
OpenStack-Ansible in the
``/etc/openstack_deploy/openstack_user_config.yml`` settings file.
@ -11,16 +10,24 @@ OpenStack-Ansible in the
Additional services should be defined with a YAML file in
``/etc/openstack_deploy/conf.d``, in order to manage file size.
env.d
~~~~~
The ``/etc/openstack_deploy/env.d`` directory sources all YAML files into the
deployed environment, allowing a deployer to define additional group mappings.
This directory is used to extend the environment skeleton, or modify the
defaults defined in the ``inventory/env.d`` directory.
To understand how the dynamic inventory works, see
:ref:`inventory-in-depth`.
.. warning::
Never edit or delete the files
``/etc/openstack_deploy/openstack_inventory.json`` or
``/etc/openstack_deploy/openstack_hostnames_ips.yml``. This can
lead to file corruption and problems with the inventory: hosts
and containers could disappear and new ones would appear,
breaking your existing deployment.
Configuration constraints
~~~~~~~~~~~~~~~~~~~~~~~~~
@ -44,6 +51,103 @@ which the container resides is added to the ``lxc_hosts`` inventory group.
Using this name for a group in the configuration will result in a runtime
error.
Customizing existing components
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Deploying directly on hosts
---------------------------
To deploy a component directly on the host instead of within a container, set
the ``is_metal`` property to ``true`` for the container group in the
``container_skel`` section in the appropriate file.
The use of ``container_vars`` and mapping from container groups to host groups
is the same for a service deployed directly onto the host.
.. note::
The ``cinder-volume`` component is deployed directly on the host by
default. See the ``env.d/cinder.yml`` file for this example.
Omit a service or component from the deployment
-----------------------------------------------
To omit a component from a deployment, you can use one of several options:
- Remove the ``physical_skel`` link between the container group and
the host group by deleting the related file located in the ``env.d/``
directory.
- Do not run the playbook that installs the component.
Unless you specify the component to run directly on a host by using the
``is_metal`` property, a container is created for this component.
- Adjust the :ref:`affinity`
to 0 for the host group. Similar to the second option listed here, unless
you specify the component to run directly on a host by using the ``is_metal``
property, a container is created for this component.
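A minimal sketch of the affinity option, using an illustrative host name
and address in ``openstack_user_config.yml``:

.. code-block:: yaml

   shared-infra_hosts:
     infra1:
       affinity:
         rabbit_mq_container: 0   # no RabbitMQ container on this host
       ip: 172.29.236.101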
Deploy existing components on dedicated hosts
---------------------------------------------
To deploy a ``shared-infra`` component to dedicated hosts, modify the
files that specify the host groups and container groups for the component.
For example, to run Galera directly on dedicated hosts, you would perform the
following steps:
#. Modify the ``container_skel`` section of the ``env.d/galera.yml`` file.
For example:
.. code-block:: yaml
container_skel:
galera_container:
belongs_to:
- db_containers
contains:
- galera
properties:
is_metal: true
.. note::
To deploy within containers on these dedicated hosts, omit the
``is_metal: true`` property.
#. Assign the ``db_containers`` container group (from the preceding step) to a
host group by providing a ``physical_skel`` section for the host group
in a new or existing file, such as ``env.d/galera.yml``.
For example:
.. code-block:: yaml
physical_skel:
db_containers:
belongs_to:
- all_containers
db_hosts:
belongs_to:
- hosts
#. Define the host group (``db_hosts``) in a ``conf.d/`` file (such as
``galera.yml``). For example:
.. code-block:: yaml
db_hosts:
db-host1:
ip: 172.39.123.11
db-host2:
ip: 172.39.123.12
db-host3:
ip: 172.39.123.13
.. note::
Each of the custom group names in this example (``db_containers``
and ``db_hosts``) are arbitrary. Choose your own group names,
but ensure the references are consistent among all relevant files.
Checking inventory configuration for errors
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

View File

@ -78,14 +78,17 @@ physical host and not in a container. For an example of ``is_metal: true``
being used refer to ``inventory/env.d/cinder.yml`` in the
``container_skel`` section.
For more details, see :ref:`configuring-inventory`.
Outputs
^^^^^^^
~~~~~~~
Once executed, the script will output an ``openstack_inventory.json`` file into
the directory specified with the ``--config`` argument. This is used as the
source of truth for repeated runs.
.. note::
.. warning::
The ``openstack_inventory.json`` file is the source of truth for the
environment. Deleting this in a production environment means that the UUID
portion of container names will be regenerated, which then results in new

View File

@ -14,5 +14,6 @@ for OpenStack-Ansible.
generate-inventory
configure-inventory
understanding-inventory
manage-inventory
advanced-topics

View File

@ -1,6 +1,7 @@
================================================
Appendix E: Customizing host and service layouts
================================================
.. _inventory-in-depth:
Understanding the inventory
===========================
The default layout of containers and services in OpenStack-Ansible (OSA) is
determined by the ``/etc/openstack_deploy/openstack_user_config.yml`` file and
@ -21,8 +22,8 @@ To customize the layout of the components for your deployment, modify the
host groups and container groups appropriately before running the installation
playbooks.
Understanding host groups
~~~~~~~~~~~~~~~~~~~~~~~~~
Understanding host groups (conf.d structure)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
As part of the initial configuration, each target host appears either in the
``/etc/openstack_deploy/openstack_user_config.yml`` file or in files within
@ -50,8 +51,8 @@ variables to any component containers on the specific host.
particularly for new services, by using a new file in the
``conf.d/`` directory.
Understanding container groups
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Understanding container groups (env.d structure)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Additional group mappings are located within files in the
``/etc/openstack_deploy/env.d/`` directory. These groups are treated as
@ -61,11 +62,11 @@ groups, that define where each service deploys. By reviewing files within the
in the default layout.
For example, the ``shared-infra.yml`` file defines a container group,
``shared- infra_containers``, as a subset of the all_containers inventory
group. The ``shared- infra_containers`` container group is mapped to the
``shared-infra_hosts`` host group. All of the service components in the
``shared-infra_containers`` container group are deployed to each target host
in the ``shared-infra_hosts host`` group.
``shared-infra_containers``, as a subset of the ``all_containers``
inventory group. The ``shared-infra_containers`` container group is
mapped to the ``shared-infra_hosts`` host group. All of the service
components in the ``shared-infra_containers`` container group are
deployed to each target host in the ``shared-infra_hosts`` host group.
Within a ``physical_skel`` section, the OpenStack-Ansible dynamic inventory
expects to find a pair of keys. The first key maps to items in the
@ -93,98 +94,53 @@ group. Other services might have more complex deployment needs. They define and
consume inventory container groups differently. Mapping components to several
groups in this way allows flexible targeting of roles and tasks.
Customizing existing components
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. _affinity:
Deploying directly on hosts
---------------------------
Affinity
~~~~~~~~
To deploy a component directly on the host instead of within a container, set
the ``is_metal`` property to ``true`` for the container group in the
``container_skel`` section in the appropriate file.
When OpenStack-Ansible generates its dynamic inventory, the affinity
setting determines how many containers of a similar type are deployed on a
single physical host.
The use of ``container_vars`` and mapping from container groups to host groups
is the same for a service deployed directly onto the host.
Using ``shared-infra_hosts`` as an example, consider this
``openstack_user_config.yml`` configuration:
.. note::
.. code-block:: yaml
The ``cinder-volume`` component is deployed directly on the host by
default. See the ``env.d/cinder.yml`` file for this example.
shared-infra_hosts:
infra1:
ip: 172.29.236.101
infra2:
ip: 172.29.236.102
infra3:
ip: 172.29.236.103
Omit a service or component from the deployment
-----------------------------------------------
Because three hosts are assigned to the ``shared-infra_hosts`` group,
OpenStack-Ansible ensures that each host runs a single database container,
a single Memcached container, and a single RabbitMQ container. Each host has
an affinity of 1 by default, which means that each host runs one of each
container type.
To omit a component from a deployment, you can use one of several options:
If you are deploying a stand-alone Object Storage (swift) environment,
you can skip the deployment of RabbitMQ. If you use this configuration,
your ``openstack_user_config.yml`` file would look as follows:
- Remove the ``physical_skel`` link between the container group and
the host group by deleting the related file located in the ``env.d/``
directory.
- Do not run the playbook that installs the component.
Unless you specify the component to run directly on a host by using the
``is_metal`` property, a container is created for this component.
- Adjust the :deploy_guide:`affinity <app-advanced-config-affinity.html>`
to 0 for the host group. Similar to the second option listed here, Unless
you specify the component to run directly on a host by using the``is_metal``
property, a container is created for this component.
.. code-block:: yaml
Deploy existing components on dedicated hosts
---------------------------------------------
shared-infra_hosts:
infra1:
affinity:
rabbit_mq_container: 0
ip: 172.29.236.101
infra2:
affinity:
rabbit_mq_container: 0
ip: 172.29.236.102
infra3:
affinity:
rabbit_mq_container: 0
ip: 172.29.236.103
To deploy a ``shared-infra`` component to dedicated hosts, modify the
files that specify the host groups and container groups for the component.
For example, to run Galera directly on dedicated hosts, you would perform the
following steps:
#. Modify the ``container_skel`` section of the ``env.d/galera.yml`` file.
For example:
.. code-block:: yaml
container_skel:
galera_container:
belongs_to:
- db_containers
contains:
- galera
properties:
is_metal: true
.. note::
To deploy within containers on these dedicated hosts, omit the
``is_metal: true`` property.
#. Assign the ``db_containers`` container group (from the preceding step) to a
host group by providing a ``physical_skel`` section for the host group
in a new or existing file, such as ``env.d/galera.yml``.
For example:
.. code-block:: yaml
physical_skel:
db_containers:
belongs_to:
- all_containers
db_hosts:
belongs_to:
- hosts
#. Define the host group (``db_hosts``) in a ``conf.d/`` file (such as
``galera.yml``). For example:
.. code-block:: yaml
db_hosts:
db-host1:
ip: 172.39.123.11
db-host2:
ip: 172.39.123.12
db-host3:
ip: 172.39.123.13
.. note::
Each of the custom group names in this example (``db_containers``
and ``db_hosts``) are arbitrary. Choose your own group names,
but ensure the references are consistent among all relevant files.
This configuration deploys a Memcached container and a database container
on each host, but no RabbitMQ containers.

View File

@ -1,6 +1,8 @@
===========
Quick Start
===========
.. _quickstart-aio:
===============
Quickstart: AIO
===============
All-in-one (AIO) builds are a great way to perform an OpenStack-Ansible build
for:
@ -158,7 +160,7 @@ Notes:
The next step is to bootstrap Ansible and the Ansible roles for the
development environment. Deployers can customize roles by adding variables to
override the defaults in each role (see :ref:`adding-galaxy-roles`). Run the
override the defaults in each role (see :ref:`user-overrides`). Run the
following to bootstrap Ansible:
.. code-block:: shell-session
@ -234,6 +236,7 @@ Keystone service, execute:
Rebooting an AIO
----------------
As the AIO includes all three cluster members of MariaDB/Galera, the cluster
has to be re-initialized after the host is rebooted.
@ -251,6 +254,7 @@ section in the operations guide.
Rebuilding an AIO
-----------------
Sometimes it may be useful to destroy all the containers and rebuild the AIO.
While it is preferred that the AIO is entirely destroyed and rebuilt, this
isn't always practical. As such, the following may be executed instead:

View File

@ -0,0 +1 @@
<table border="1"><thead valign="bottom"><tr><th style="padding-left:5px;padding-right:5px;" class="head"></th><th style="padding-left:5px;padding-right:5px;" class="head">aio_basekit</th><th style="padding-left:5px;padding-right:5px;" class="head">aio_lxc</th><th style="padding-left:5px;padding-right:5px;" class="head">aio_metal</th><th style="padding-left:5px;padding-right:5px;" class="head">ceph</th><th style="padding-left:5px;padding-right:5px;" class="head">octavia</th><th style="padding-left:5px;padding-right:5px;" class="head">tacker</th><th style="padding-left:5px;padding-right:5px;" class="head">translations</th></tr></thead><tbody valign="top"><tr><td align="left">heat</td><td>&#160;</td><td align="center">X</td><td align="center">X</td><td>&#160;</td><td>&#160;</td><td align="center">X</td><td align="center">X</td></tr><tr><td align="left">tacker</td><td>&#160;</td><td>&#160;</td><td>&#160;</td><td>&#160;</td><td>&#160;</td><td align="center">X</td><td>&#160;</td></tr><tr><td align="left">octavia</td><td>&#160;</td><td>&#160;</td><td>&#160;</td><td>&#160;</td><td align="center">X</td><td>&#160;</td><td align="center">X</td></tr><tr><td align="left">glance</td><td align="center">X</td><td align="center">X</td><td align="center">X</td><td align="center">X</td><td align="center">X</td><td>&#160;</td><td align="center">X</td></tr><tr><td align="left">neutron</td><td align="center">X</td><td align="center">X</td><td align="center">X</td><td align="center">X</td><td align="center">X</td><td>&#160;</td><td align="center">X</td></tr><tr><td align="left">trove</td><td>&#160;</td><td>&#160;</td><td>&#160;</td><td>&#160;</td><td>&#160;</td><td>&#160;</td><td align="center">X</td></tr><tr><td align="left">magnum</td><td>&#160;</td><td>&#160;</td><td>&#160;</td><td>&#160;</td><td>&#160;</td><td>&#160;</td><td align="center">X</td></tr><tr><td align="left">keystone</td><td align="center">X</td><td align="center">X</td><td align="center">X</td><td 
align="center">X</td><td align="center">X</td><td align="center">X</td><td align="center">X</td></tr><tr><td align="left">designate</td><td>&#160;</td><td align="center">X</td><td>&#160;</td><td>&#160;</td><td>&#160;</td><td>&#160;</td><td align="center">X</td></tr><tr><td align="left">ceph</td><td>&#160;</td><td>&#160;</td><td>&#160;</td><td align="center">X</td><td>&#160;</td><td>&#160;</td><td>&#160;</td></tr><tr><td align="left">nova</td><td align="center">X</td><td align="center">X</td><td align="center">X</td><td align="center">X</td><td align="center">X</td><td>&#160;</td><td align="center">X</td></tr><tr><td align="left">swift</td><td>&#160;</td><td align="center">X</td><td align="center">X</td><td>&#160;</td><td>&#160;</td><td>&#160;</td><td align="center">X</td></tr><tr><td align="left">haproxy</td><td align="center">X</td><td align="center">X</td><td>&#160;</td><td align="center">X</td><td align="center">X</td><td align="center">X</td><td align="center">X</td></tr><tr><td align="left">cinder</td><td align="center">X</td><td align="center">X</td><td align="center">X</td><td align="center">X</td><td>&#160;</td><td>&#160;</td><td align="center">X</td></tr><tr><td align="left">horizon</td><td>&#160;</td><td align="center">X</td><td>&#160;</td><td>&#160;</td><td>&#160;</td><td>&#160;</td><td align="center">X</td></tr><tr><td align="left">sahara</td><td>&#160;</td><td>&#160;</td><td>&#160;</td><td>&#160;</td><td>&#160;</td><td>&#160;</td><td align="center">X</td></tr></tbody></table>

View File

@ -1,13 +1,10 @@
.. _production-ceph-environment-config:
=============================================================
Appendix D: Example Ceph production environment configuration
=============================================================
=======================
Ceph production example
=======================
Introduction
~~~~~~~~~~~~
This appendix describes an example production environment for a working
This section describes an example production environment for a working
OpenStack-Ansible (OSA) deployment with high availability services and using
the Ceph backend for images, volumes, and instances.
@ -25,9 +22,30 @@ This example environment has the following characteristics:
* Internet access via the router address 172.29.236.1 on the
Management Network
.. image:: figures/arch-layout-production-ceph.png
.. image:: ../figures/arch-layout-production-ceph.png
:width: 100%
Integration with Ceph
~~~~~~~~~~~~~~~~~~~~~
OpenStack-Ansible allows `Ceph storage <https://ceph.com>`_ cluster
integration in two ways:
* connecting to your own Ceph cluster by pointing to its information
in ``user_variables.yml``
* deploying a Ceph cluster by using the roles maintained by the
`Ceph-Ansible`_ project. Deployers can enable the ``ceph-install``
playbook by adding hosts to the ``ceph-mon_hosts`` and ``ceph-osd_hosts``
groups in ``openstack_user_config.yml``, and then configuring
`Ceph-Ansible specific vars
<https://github.com/ceph/ceph-ansible/blob/master/group_vars/all.yml.sample>`_
in the OpenStack-Ansible ``user_variables.yml`` file.
.. _Ceph-Ansible: https://github.com/ceph/ceph-ansible/
This example will focus on the deployment of both OpenStack-Ansible
and its Ceph cluster.
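
The second approach can be sketched in ``openstack_user_config.yml`` as
follows. This is a minimal illustration only; the host names and IP
addresses are hypothetical and must be adapted to your Management
Network.

.. code-block:: yaml

   # Illustrative hosts only; use the addresses of your own
   # Management Network.
   ceph-mon_hosts:
     infra1:
       ip: 172.29.236.11
     infra2:
       ip: 172.29.236.12
     infra3:
       ip: 172.29.236.13

   ceph-osd_hosts:
     osd1:
       ip: 172.29.236.14
     osd2:
       ip: 172.29.236.15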
Network configuration
~~~~~~~~~~~~~~~~~~~~~
@@ -89,7 +107,7 @@ following is the ``/etc/network/interfaces`` file for ``infra1``.
configuration files are replaced with the appropriate name. The same
applies to additional network interfaces.
.. literalinclude:: ../../etc/network/interfaces.d/openstack_interface.cfg.prod.example
.. literalinclude:: ../../../../etc/network/interfaces.d/openstack_interface.cfg.prod.example
Deployment configuration
~~~~~~~~~~~~~~~~~~~~~~~~
@@ -102,7 +120,7 @@ environment layout.
The following configuration describes the layout for this environment.
.. literalinclude:: ../../etc/openstack_deploy/openstack_user_config.yml.prod-ceph.example
.. literalinclude:: ../../../../etc/openstack_deploy/openstack_user_config.yml.prod-ceph.example
Environment customizations
--------------------------
@@ -113,10 +131,11 @@ the services will run in a container (the default), or on the host (on
metal).
For a ceph environment, you can run the ``cinder-volume`` in a container.
To do this you will need to create a ``/etc/openstack_deploy/env.d/cinder.yml`` file
with the following content:
To do this you will need to create a
``/etc/openstack_deploy/env.d/cinder.yml`` file with the following
content:
.. literalinclude:: ../../etc/openstack_deploy/env.d/cinder-volume.yml.container.example
.. literalinclude:: ../../../../etc/openstack_deploy/env.d/cinder-volume.yml.container.example
User variables
--------------
@@ -127,7 +146,7 @@ overrides for the default variables.
For this example environment, we configure a HA load balancer.
We implement the load balancer (HAProxy) with an HA layer (keepalived)
on the infrastructure hosts.
Your ``/etc/openstack_deploy/user_variables.yml`` must have the following content
to configure haproxy, keepalived and ceph:
Your ``/etc/openstack_deploy/user_variables.yml`` must have the
following content to configure haproxy, keepalived and ceph:
.. literalinclude:: ../../etc/openstack_deploy/user_variables.yml.prod-ceph.example
.. literalinclude:: ../../../../etc/openstack_deploy/user_variables.yml.prod-ceph.example

View File

@@ -1,16 +1,31 @@
=============
Upgrade Guide
=============
==========
User Guide
==========
This guide provides information about the upgrade process from
|previous_release_formal_name| to |current_release_formal_name|
for OpenStack-Ansible.
In this section, you will find user stories and examples relevant to
deploying OpenStack-Ansible.
For step-by-step instructions on deploying the OpenStack packages and
dependencies on your cloud using OpenStack-Ansible, refer to the
:deploy_guide:`Deployment Guide <index.html>`.
For information on how to manage and operate OpenStack-Ansible,
see the :dev_docs:`Operations Guide <admin/index.html>`.
For information on how to contribute, extend or develop OpenStack-Ansible,
see the :dev_docs:`Contributors Guide <contributor/index.html>`.
For in-depth technical information, see the
:dev_docs:`OpenStack-Ansible Reference <reference/index.html>`.
.. toctree::
:maxdepth: 2
:maxdepth: 1
overview
minor-upgrade
script-upgrade
manual-upgrade
reference
aio/quickstart.rst
test/example.rst
prod/example.rst
limited-connectivity/index.rst
l3pods/example.rst
ceph/full-deploy.rst
security/index.rst

View File

@@ -1,13 +1,10 @@
.. _pod-environment-config:
============================================================
Appendix C: Example layer 3 routed environment configuration
============================================================
==========================
Routed environment example
==========================
Introduction
~~~~~~~~~~~~
This appendix describes an example production environment for a working
This section describes an example production environment for a working
OpenStack-Ansible (OSA) deployment with high availability services where
provider networks and connectivity between physical machines are routed
(layer 3).
@@ -27,7 +24,7 @@ This example environment has the following characteristics:
Tunnel, and Storage Networks of each pod. The gateway address is the first
usable address within each network's subnet.
.. image:: figures/arch-layout-production.png
.. image:: ../figures/arch-layout-production.png
:width: 100%
Network configuration
@@ -105,7 +102,7 @@ following is the ``/etc/network/interfaces`` file for ``infra1``.
configuration files are replaced with the appropriate name. The same
applies to additional network interfaces.
.. literalinclude:: ../../etc/network/interfaces.d/openstack_interface.cfg.pod.example
.. literalinclude:: ../../../../etc/network/interfaces.d/openstack_interface.cfg.pod.example
Deployment configuration
~~~~~~~~~~~~~~~~~~~~~~~~
@@ -130,7 +127,7 @@ pods.
The following configuration describes the layout for this environment.
.. literalinclude:: ../../etc/openstack_deploy/openstack_user_config.yml.pod.example
.. literalinclude:: ../../../../etc/openstack_deploy/openstack_user_config.yml.pod.example
Environment customizations
--------------------------
@@ -144,7 +141,7 @@ For this environment, the ``cinder-volume`` runs in a container on the
infrastructure hosts. To achieve this, implement
``/etc/openstack_deploy/env.d/cinder.yml`` with the following content:
.. literalinclude:: ../../etc/openstack_deploy/env.d/cinder-volume.yml.container.example
.. literalinclude:: ../../../../etc/openstack_deploy/env.d/cinder-volume.yml.container.example
User variables
--------------
@@ -156,4 +153,4 @@ For this environment, implement the load balancer on the infrastructure
hosts. Ensure that keepalived is also configured with HAProxy in
``/etc/openstack_deploy/user_variables.yml`` with the following content.
.. literalinclude:: ../../etc/openstack_deploy/user_variables.yml.prod.example
.. literalinclude:: ../../../../etc/openstack_deploy/user_variables.yml.prod.example

View File

@@ -1,8 +1,6 @@
.. _limited-connectivity-appendix:
================================================
Appendix H: Installing with limited connectivity
================================================
====================================
Installing with limited connectivity
====================================
Many playbooks and roles in OpenStack-Ansible retrieve dependencies from the
public Internet by default. Many deployers block direct outbound connectivity

View File

@@ -1,13 +1,10 @@
.. _production-environment-config:
========================================================
Appendix B: Example production environment configuration
========================================================
======================
Production environment
======================
Introduction
~~~~~~~~~~~~
This appendix describes an example production environment for a working
This is an example production environment for a working
OpenStack-Ansible (OSA) deployment with high availability services.
This example environment has the following characteristics:
@@ -24,8 +21,9 @@ This example environment has the following characteristics:
* Internet access via the router address 172.29.236.1 on the
Management Network
.. image:: figures/arch-layout-production.png
.. image:: ../figures/arch-layout-production.png
:width: 100%
:alt: Production environment host layout
Network configuration
~~~~~~~~~~~~~~~~~~~~~
@@ -84,7 +82,7 @@ following is the ``/etc/network/interfaces`` file for ``infra1``.
configuration files are replaced with the appropriate name. The same
applies to additional network interfaces.
.. literalinclude:: ../../etc/network/interfaces.d/openstack_interface.cfg.prod.example
.. literalinclude:: ../../../../etc/network/interfaces.d/openstack_interface.cfg.prod.example
Deployment configuration
~~~~~~~~~~~~~~~~~~~~~~~~
@@ -97,7 +95,7 @@ environment layout.
The following configuration describes the layout for this environment.
.. literalinclude:: ../../etc/openstack_deploy/openstack_user_config.yml.prod.example
.. literalinclude:: ../../../../etc/openstack_deploy/openstack_user_config.yml.prod.example
Environment customizations
--------------------------
@@ -111,7 +109,7 @@ For this environment, the ``cinder-volume`` runs in a container on the
infrastructure hosts. To achieve this, implement
``/etc/openstack_deploy/env.d/cinder.yml`` with the following content:
.. literalinclude:: ../../etc/openstack_deploy/env.d/cinder-volume.yml.container.example
.. literalinclude:: ../../../../etc/openstack_deploy/env.d/cinder-volume.yml.container.example
User variables
--------------
@@ -123,4 +121,4 @@ For this environment, implement the load balancer on the infrastructure
hosts. Ensure that keepalived is also configured with HAProxy in
``/etc/openstack_deploy/user_variables.yml`` with the following content.
.. literalinclude:: ../../etc/openstack_deploy/user_variables.yml.prod.example
.. literalinclude:: ../../../../etc/openstack_deploy/user_variables.yml.prod.example

View File

@@ -0,0 +1,29 @@
Apply ansible-hardening
=======================
The ``ansible-hardening`` role is applicable to physical hosts within
an OpenStack-Ansible deployment that are operating as any type of node,
whether infrastructure or compute. By
default, the role is enabled. You can disable it by changing the value of
the ``apply_security_hardening`` variable in the ``user_variables.yml`` file
to ``false``:
.. code-block:: yaml
apply_security_hardening: false
You can apply security hardening configurations to an existing environment or
audit an environment by using a playbook supplied with OpenStack-Ansible:
.. code-block:: bash
# Apply security hardening configurations
openstack-ansible security-hardening.yml
# Perform a quick audit by using Ansible's check mode
openstack-ansible --check security-hardening.yml
For more information about the security configurations, see the
`security hardening role`_ documentation.
.. _security hardening role: http://docs.openstack.org/developer/ansible-hardening/

View File

@@ -0,0 +1,12 @@
=================
Security settings
=================
This chapter contains information on configuring specific security
settings for your OpenStack-Ansible cloud.
To understand the security design, see
:ref:`security-design`.
.. include:: ssl-certificates.rst
.. include:: hardening.rst

View File

@@ -1,4 +1,3 @@
=======================================
Securing services with SSL certificates
=======================================
@@ -12,24 +11,17 @@ communication between services:
All public endpoints reside behind haproxy, so the only certificates
most environments need to manage are those for haproxy.
When deploying with OpenStack-Ansible, you can either use self-signed certificates
that are generated during the deployment process or provide SSL certificates,
keys, and CA certificates from your own trusted certificate authority. Highly
secured environments use trusted, user-provided certificates for as
many services as possible.
When deploying with OpenStack-Ansible, you can either use self-signed
certificates that are generated during the deployment process or provide
SSL certificates, keys, and CA certificates from your own trusted
certificate authority. Highly secured environments use trusted,
user-provided certificates for as many services as possible.
.. note::
Perform all SSL certificate configuration in
``/etc/openstack_deploy/user_variables.yml`` file and not in the playbooks
or roles themselves. The variables to set which provide the path on the deployment
node to the certificates for HAProxy configuration are:
.. code-block:: yaml
haproxy_user_ssl_cert: /etc/openstack_deploy/ssl/example.com.crt
haproxy_user_ssl_key: /etc/openstack_deploy/ssl/example.com.key
haproxy_user_ssl_ca_cert: /etc/openstack_deploy/ssl/ExampleCA.crt
``/etc/openstack_deploy/user_variables.yml`` file. Do not edit the playbooks
or roles themselves.
Self-signed certificates
~~~~~~~~~~~~~~~~~~~~~~~~
@@ -113,7 +105,22 @@ OpenStack-Ansible:
the ``/etc/openstack_deploy/user_variables.yml`` file.
#. Run the playbook for that service.
For example, to deploy user-provided certificates for RabbitMQ,
HAProxy example
---------------
The variables that provide the paths on the deployment node to the
certificates for the HAProxy configuration are:
.. code-block:: yaml
haproxy_user_ssl_cert: /etc/openstack_deploy/ssl/example.com.crt
haproxy_user_ssl_key: /etc/openstack_deploy/ssl/example.com.key
haproxy_user_ssl_ca_cert: /etc/openstack_deploy/ssl/ExampleCA.crt
RabbitMQ example
----------------
To deploy user-provided certificates for RabbitMQ,
copy the certificates to the deployment host, edit
the ``/etc/openstack_deploy/user_variables.yml`` file and set the following
three variables:
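
For illustration, assuming the variable names follow the same
``<service>_user_ssl_*`` pattern shown for HAProxy, the three variables
would take a form such as:

.. code-block:: yaml

   # Illustrative paths only; the variable names assume the
   # <service>_user_ssl_* naming pattern used for HAProxy.
   rabbitmq_user_ssl_cert: /etc/openstack_deploy/ssl/example.com.crt
   rabbitmq_user_ssl_key: /etc/openstack_deploy/ssl/example.com.key
   rabbitmq_user_ssl_ca_cert: /etc/openstack_deploy/ssl/ExampleCA.crt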

View File

@@ -1,13 +1,8 @@
.. _test-environment-config:
========================
Test environment example
========================
==================================================
Appendix A: Example test environment configuration
==================================================
Introduction
~~~~~~~~~~~~
This appendix describes an example test environment for a working
This is an example test environment for a working
OpenStack-Ansible (OSA) deployment with a small number of servers.
This example environment has the following characteristics:
@@ -20,7 +15,7 @@ This example environment has the following characteristics:
* Internet access via the router address 172.29.236.1 on the
Management Network
.. image:: figures/arch-layout-test.png
.. image:: ../figures/arch-layout-test.png
:width: 100%
:alt: Test environment host layout
@@ -71,7 +66,7 @@ following is the ``/etc/network/interfaces`` file for ``infra1``.
configuration files are replaced with the appropriate name. The same
applies to additional network interfaces.
.. literalinclude:: ../../etc/network/interfaces.d/openstack_interface.cfg.test.example
.. literalinclude:: ../../../../etc/network/interfaces.d/openstack_interface.cfg.test.example
Deployment configuration
~~~~~~~~~~~~~~~~~~~~~~~~
@@ -84,7 +79,7 @@ environment layout.
The following configuration describes the layout for this environment.
.. literalinclude:: ../../etc/openstack_deploy/openstack_user_config.yml.test.example
.. literalinclude:: ../../../../etc/openstack_deploy/openstack_user_config.yml.test.example
Environment customizations
--------------------------
@@ -103,10 +98,10 @@ User variables
The ``/etc/openstack_deploy/user_variables.yml`` file defines the global
overrides for the default variables.
For this environment, you are using the same IP address for the internal
and external endpoints. You will need to ensure that the internal and public
For this environment, if you want to use the same IP address for the internal
and external endpoints, you will need to ensure that the internal and public
OpenStack endpoints are served with the same protocol. This is done with
the following content:
.. literalinclude:: ../../etc/openstack_deploy/user_variables.yml.test.example
.. literalinclude:: ../../../../etc/openstack_deploy/user_variables.yml.test.example