Factor out Install Guide for Debian with debconf

To simplify the build tool chain,
factor out the Install Guide for Debian with debconf.
Meanwhile, keep the Install Guide for Debian without debconf
in the doc/install-guide directory. Also, reuse the contents
of doc/install-guide as much as possible for consistency.

In follow-up patches, clean up the doc/install-guide sources
to simplify the contents and build tool chain.

Change-Id: I8df6b3b382137d08d60f85bc41bcd98ac1f4eb47
This commit is contained in:
KATO Tomoyuki 2016-05-14 09:40:46 +09:00
parent 4a6d44204f
commit 31b31410f9
113 changed files with 2737 additions and 44 deletions

.gitignore vendored

@ -10,6 +10,7 @@ target/
/doc/install-guide/build-obs/
/doc/install-guide/build-ubuntu/
/doc/install-guide/build-debian/
/doc/install-guide-debconf/build-debian/
.doctrees
build/
/build-*.log.gz


@ -0,0 +1,30 @@
[metadata]
name = openstackinstallguide
summary = OpenStack Installation Guides
author = OpenStack
author-email = openstack-docs@lists.openstack.org
home-page = http://docs.openstack.org/
classifier =
Environment :: OpenStack
Intended Audience :: Information Technology
Intended Audience :: System Administrators
License :: OSI Approved :: Apache Software License
Operating System :: POSIX :: Linux
Topic :: Documentation
[global]
setup-hooks =
pbr.hooks.setup_hook
[files]
[build_sphinx]
all_files = 1
build-dir = build
source-dir = source
[wheel]
universal = 1
[pbr]
warnerrors = True


@ -0,0 +1,30 @@
#!/usr/bin/env python
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
import setuptools
# In python < 2.7.4, a lazy loading of package `pbr` will break
# setuptools if some other modules registered functions in `atexit`.
# solution from: http://bugs.python.org/issue15881#msg170215
try:
import multiprocessing # noqa
except ImportError:
pass
setuptools.setup(
setup_requires=['pbr'],
pbr=True)


@ -0,0 +1 @@
../../install-guide/source/ceilometer-aodh.rst


@ -0,0 +1 @@
../../install-guide/source/ceilometer-cinder.rst


@ -0,0 +1 @@
../../install-guide/source/ceilometer-glance.rst


@ -0,0 +1,210 @@
Install and configure
~~~~~~~~~~~~~~~~~~~~~
This section describes how to install and configure the Telemetry
service, code-named ceilometer, on the controller node. The Telemetry
service collects measurements from most OpenStack services and
optionally triggers alarms.
Prerequisites
-------------
Before you install and configure the Telemetry service, you must
create a database, service credentials, and API endpoints. However,
unlike other services, the Telemetry service uses a NoSQL database.
See :ref:`environment-nosql-database` to install and configure
MongoDB before proceeding further.
#. Create the ``ceilometer`` database:
.. code-block:: console
# mongo --host controller --eval '
db = db.getSiblingDB("ceilometer");
db.createUser({user: "ceilometer",
pwd: "CEILOMETER_DBPASS",
roles: [ "readWrite", "dbAdmin" ]})'
MongoDB shell version: 2.4.x
connecting to: controller:27017/test
{
"user" : "ceilometer",
"pwd" : "72f25aeee7ad4be52437d7cd3fc60f6f",
"roles" : [
"readWrite",
"dbAdmin"
],
"_id" : ObjectId("5489c22270d7fad1ba631dc3")
}
Replace ``CEILOMETER_DBPASS`` with a suitable password.
.. note::
If the command fails saying you are not authorized to insert a user,
you may need to temporarily comment out the ``auth`` option in
the ``/etc/mongodb.conf`` file, restart the MongoDB service using
``systemctl restart mongodb``, and try calling the command again.
#. Source the ``admin`` credentials to gain access to admin-only
CLI commands:
.. code-block:: console
$ . admin-openrc
#. To create the service credentials, complete these steps:
* Create the ``ceilometer`` user:
.. code-block:: console
$ openstack user create --domain default --password-prompt ceilometer
User Password:
Repeat User Password:
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | e0353a670a9e496da891347c589539e9 |
| enabled | True |
| id | c859c96f57bd4989a8ea1a0b1d8ff7cd |
| name | ceilometer |
+-----------+----------------------------------+
* Add the ``admin`` role to the ``ceilometer`` user.
.. code-block:: console
$ openstack role add --project service --user ceilometer admin
.. note::
This command provides no output.
* Create the ``ceilometer`` service entity:
.. code-block:: console
$ openstack service create --name ceilometer \
--description "Telemetry" metering
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Telemetry |
| enabled | True |
| id | 5fb7fd1bb2954fddb378d4031c28c0e4 |
| name | ceilometer |
| type | metering |
+-------------+----------------------------------+
#. Create the Telemetry service API endpoints:
.. code-block:: console
$ openstack endpoint create --region RegionOne \
metering public http://controller:8777
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | b808b67b848d443e9eaaa5e5d796970c |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 5fb7fd1bb2954fddb378d4031c28c0e4 |
| service_name | ceilometer |
| service_type | metering |
| url | http://controller:8777 |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne \
metering internal http://controller:8777
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | c7009b1c2ee54b71b771fa3d0ae4f948 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 5fb7fd1bb2954fddb378d4031c28c0e4 |
| service_name | ceilometer |
| service_type | metering |
| url | http://controller:8777 |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne \
metering admin http://controller:8777
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | b2c00566d0604551b5fe1540c699db3d |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 5fb7fd1bb2954fddb378d4031c28c0e4 |
| service_name | ceilometer |
| service_type | metering |
| url | http://controller:8777 |
+--------------+----------------------------------+
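The three endpoint-create calls above differ only in the interface
argument. As a hypothetical sketch (the helper name is an illustration,
not part of any package), they can be generated like this:

.. code-block:: python

   # Hypothetical helper: build the three "openstack endpoint create"
   # commands (public, internal, admin) for one service.
   def endpoint_commands(service_type, url, region="RegionOne"):
       return [
           "openstack endpoint create --region {} {} {} {}".format(
               region, service_type, interface, url)
           for interface in ("public", "internal", "admin")
       ]

   for command in endpoint_commands("metering", "http://controller:8777"):
       print(command)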
Install and configure components
--------------------------------
#. Install the packages:
.. code-block:: console
# apt-get install ceilometer-api ceilometer-collector \
  ceilometer-agent-central ceilometer-agent-notification \
  python-ceilometerclient
Respond to prompts for
:doc:`Identity service credentials <debconf/debconf-keystone-authtoken>`,
:doc:`service endpoint registration <debconf/debconf-api-endpoints>`,
and :doc:`message broker credentials <debconf/debconf-rabbitmq>`.
#. Edit the ``/etc/ceilometer/ceilometer.conf`` file and complete
the following actions:
* In the ``[database]`` section, configure database access:
.. code-block:: ini
[database]
...
connection = mongodb://ceilometer:CEILOMETER_DBPASS@controller:27017/ceilometer
Replace ``CEILOMETER_DBPASS`` with the password you chose for the
Telemetry service database. You must escape special characters such
as ':', '/', '+', and '@' in the connection string in accordance
with `RFC2396 <https://www.ietf.org/rfc/rfc2396.txt>`_.
* In the ``[service_credentials]`` section, configure service credentials:
.. code-block:: ini
[service_credentials]
...
os_auth_url = http://controller:5000/v2.0
os_username = ceilometer
os_tenant_name = service
os_password = CEILOMETER_PASS
interface = internalURL
region_name = RegionOne
Replace ``CEILOMETER_PASS`` with the password you chose for
the ``ceilometer`` user in the Identity service.
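The RFC 2396 escaping mentioned above can be done with Python's
standard library; a minimal sketch (the password value is a made-up
example):

.. code-block:: python

   from urllib.parse import quote

   # Percent-encode reserved characters (':', '/', '+', '@') in the
   # password before embedding it in the connection string.
   password = quote("CEILO:METER@PASS", safe="")
   connection = ("mongodb://ceilometer:%s@controller:27017/ceilometer"
                 % password)
   print(connection)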
Finalize installation
---------------------
#. Restart the Telemetry services:
.. code-block:: console
# service ceilometer-agent-central restart
# service ceilometer-agent-notification restart
# service ceilometer-api restart
# service ceilometer-collector restart


@ -0,0 +1 @@
../../install-guide/source/ceilometer-next-steps.rst


@ -0,0 +1 @@
../../install-guide/source/ceilometer-nova.rst


@ -0,0 +1 @@
../../install-guide/source/ceilometer-swift.rst


@ -0,0 +1 @@
../../install-guide/source/ceilometer-verify.rst


@ -0,0 +1 @@
../../install-guide/source/ceilometer.rst


@ -0,0 +1,63 @@
.. _cinder-controller:
Install and configure controller node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This section describes how to install and configure the Block
Storage service, code-named cinder, on the controller node. This
service requires at least one additional storage node that provides
volumes to instances.
Install and configure components
--------------------------------
#. Install the packages:
.. code-block:: console
# apt-get install cinder-api cinder-scheduler
Respond to prompts for
:doc:`database management <debconf/debconf-dbconfig-common>`,
:doc:`Identity service credentials <debconf/debconf-keystone-authtoken>`,
:doc:`service endpoint registration <debconf/debconf-api-endpoints>`,
and :doc:`message broker credentials <debconf/debconf-rabbitmq>`.
#. Edit the ``/etc/cinder/cinder.conf`` file and complete the
following actions:
* In the ``[DEFAULT]`` section, configure the ``my_ip`` option to
use the management interface IP address of the controller node:
.. code-block:: ini
[DEFAULT]
...
my_ip = 10.0.0.11
Configure Compute to use Block Storage
--------------------------------------
* Edit the ``/etc/nova/nova.conf`` file and add the following
to it:
.. code-block:: ini
[cinder]
os_region_name = RegionOne
Finalize installation
---------------------
#. Restart the Compute API service:
.. code-block:: console
# service nova-api restart
#. Restart the Block Storage services:
.. code-block:: console
# service cinder-scheduler restart
# service cinder-api restart


@ -0,0 +1 @@
../../install-guide/source/cinder-next-steps.rst


@ -0,0 +1 @@
../../install-guide/source/cinder-storage-install.rst


@ -0,0 +1 @@
../../install-guide/source/cinder-verify.rst


@ -0,0 +1 @@
../../install-guide/source/cinder.rst


@ -0,0 +1 @@
../../common


@ -0,0 +1,308 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import os
# import sys
import openstackdocstheme
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
# sys.path.insert(0, os.path.abspath('.'))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
# TODO(ajaeger): enable PDF building, for example add 'rst2pdf.pdfbuilder'
# extensions =
# Add any paths that contain templates here, relative to this directory.
# templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
# source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Installation Guide'
bug_tag = u'install-guide'
copyright = u'2015-2016, OpenStack contributors'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '0.1'
# The full version, including alpha/beta/rc tags.
release = '0.1'
# A few variables have to be set for the log-a-bug feature.
# giturl: The location of conf.py on Git. Must be set manually.
# gitsha: The SHA checksum of the bug description. Automatically extracted from git log.
# bug_tag: Tag for categorizing the bug. Must be set manually.
# These variables are passed to the logabug code via html_context.
giturl = u'http://git.openstack.org/cgit/openstack/openstack-manuals/tree/doc/install-guide-debconf/source'
git_cmd = "/usr/bin/git log | head -n1 | cut -f2 -d' '"
gitsha = os.popen(git_cmd).read().strip('\n')
html_context = {"gitsha": gitsha, "bug_tag": bug_tag,
"giturl": giturl}
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
# language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
# Else, today_fmt is used as the format for a strftime call.
# today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['common/cli*', 'common/nova*',
'common/get_started_with_openstack.rst',
'common/get_started_openstack_services.rst',
'common/get_started_feedback.rst',
'common/get_started_logical_architecture.rst',
'common/get_started_dashboard.rst',
'common/get_started_storage_concepts.rst',
'common/get_started_data_processing.rst',
'common/dashboard_customizing.rst',
'shared/note_configuration_vary_by_distribution.rst']
# The reST default role (used for this markup: `text`) to use for all
# documents.
# default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
# add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
# add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
# show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
# modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
# keep_warnings = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'openstackdocs'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
# html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
html_theme_path = [openstackdocstheme.get_html_theme_path()]
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
# html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
# html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
# html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
# html_static_path = []
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
# html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
# So that we can enable "log-a-bug" links from each output HTML page, this
# variable must be set to a format that includes year, month, day, hours and
# minutes.
html_last_updated_fmt = '%Y-%m-%d %H:%M'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}
# If false, no module index is generated.
# html_domain_indices = True
# If false, no index is generated.
html_use_index = False
# If true, the index is split into individual pages for each letter.
# html_split_index = False
# If true, links to the reST sources are added to the pages.
html_show_sourcelink = False
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
# html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
# html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'install-guide'
# If true, publish source files
html_copy_source = False
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
# 'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
('index', 'InstallGuide.tex', u'Install Guide',
u'OpenStack contributors', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False
# If true, show page references after internal links.
# latex_show_pagerefs = False
# If true, show URL addresses after external links.
# latex_show_urls = False
# Documents to append as an appendix to all manuals.
# latex_appendices = []
# If false, no module index is generated.
# latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'installguide', u'Install Guide',
[u'OpenStack contributors'], 1)
]
# If true, show URL addresses after external links.
# man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'InstallGuide', u'Install Guide',
u'OpenStack contributors', 'InstallGuide',
'This guide shows OpenStack end users how to install '
'an OpenStack cloud.', 'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
# texinfo_appendices = []
# If false, no module index is generated.
# texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
# texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
# texinfo_no_detailmenu = False
# -- Options for Internationalization output ------------------------------
locale_dirs = ['locale/']
# -- Options for PDF output --------------------------------------------------
pdf_documents = [
('index', u'InstallGuide', u'Install Guide',
u'OpenStack contributors')
]


@ -0,0 +1,101 @@
:orphan:
======================
Register API endpoints
======================
All Debian packages for API services, except the ``heat-api`` package,
register the service in the Identity service catalog. This feature is
helpful because API endpoints are difficult to remember.
.. note::
The ``heat-common`` package, not the ``heat-api`` package, configures
the Orchestration service.
When you install a package for an API service, you are prompted to
register that service. However, after you install or upgrade the package
for an API service, Debian immediately removes your response to this
prompt from the debconf database. Consequently, you are prompted to
re-register the service with the Identity service. If you already
registered the API service, respond ``no`` when you upgrade.
.. image:: ../figures/debconf-screenshots/api-endpoint_1_register_endpoint.png
|
This screen registers packages in the Identity service catalog:
.. image:: ../figures/debconf-screenshots/api-endpoint_2_keystone_server_ip.png
|
You are prompted for the Identity service ``admin_token`` value. The
Identity service uses this value to register the API service. When you
set up the ``keystone`` package, this value is configured automatically.
.. image:: ../figures/debconf-screenshots/api-endpoint_3_keystone_authtoken.png
|
This screen configures the IP addresses for the service. The
configuration script automatically detects the IP address of the
interface connected to the default route (using ``/sbin/route`` and
``/sbin/ip``).
Unless your network has an unusual setup, press **ENTER**.
.. image:: ../figures/debconf-screenshots/api-endpoint_4_service_endpoint_ip_address.png
|
This screen configures the region name for the service. For example,
``us-east-coast`` or ``europe-paris``.
.. image:: ../figures/debconf-screenshots/api-endpoint_5_region_name.png
|
The Debian package post-installation scripts then run the following
commands for you:
.. code-block:: console
# openstack --os-token ${AUTH_TOKEN} \
--os-url=http://${KEYSTONE_ENDPOINT_IP}:35357/v3/ \
--os-domain-name default \
--os-identity-api-version=3 \
service create \
--name=${SERVICE_NAME} \
--description="${SERVICE_DESC}" \
${SERVICE_TYPE}
# openstack --os-token ${AUTH_TOKEN} \
--os-url=http://${KEYSTONE_ENDPOINT_IP}:35357/v3/ \
--os-domain-name default \
--os-identity-api-version=3 \
endpoint create \
--region "${REGION_NAME}" \
${SERVICE_NAME} public http://${PKG_ENDPOINT_IP}:${SERVICE_PORT}${SERVICE_URL}
# openstack --os-token ${AUTH_TOKEN} \
--os-url=http://${KEYSTONE_ENDPOINT_IP}:35357/v3/ \
--os-domain-name default \
--os-identity-api-version=3 \
endpoint create \
--region "${REGION_NAME}" \
${SERVICE_NAME} internal http://${PKG_ENDPOINT_IP}:${SERVICE_PORT}${SERVICE_URL}
# openstack --os-token ${AUTH_TOKEN} \
--os-url=http://${KEYSTONE_ENDPOINT_IP}:35357/v3/ \
--os-domain-name default \
--os-identity-api-version=3 \
endpoint create \
--region "${REGION_NAME}" \
${SERVICE_NAME} admin http://${PKG_ENDPOINT_IP}:${SERVICE_PORT}${SERVICE_URL}
The values of ``AUTH_TOKEN``, ``KEYSTONE_ENDPOINT_IP``,
``PKG_ENDPOINT_IP``, and ``REGION_NAME`` depend on the answers you
provide to the debconf prompts. The values of ``SERVICE_NAME``,
``SERVICE_TYPE``, ``SERVICE_DESC``, and ``SERVICE_URL``, however, are
pre-wired in each package, so you do not have to remember them.
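For illustration only, the endpoint URL in those commands is composed
from the prompted and pre-wired values; a sketch (the helper name and
all concrete values are hypothetical):

.. code-block:: python

   # Sketch: how an endpoint URL is assembled from the debconf answers
   # (IP address) and the values pre-wired in the package (port, URL path).
   def endpoint_url(pkg_endpoint_ip, service_port, service_url=""):
       return "http://{}:{}{}".format(pkg_endpoint_ip, service_port,
                                      service_url)

   print(endpoint_url("10.0.0.11", 8777))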


@ -0,0 +1,120 @@
:orphan:
================
debconf concepts
================
This chapter explains how to use the Debian ``debconf`` and
``dbconfig-common`` packages to configure OpenStack services. These
packages enable users to perform configuration tasks. When users
install OpenStack packages, ``debconf`` prompts the user for responses,
which seed the contents of configuration files associated with that package.
After package installation, users can update the configuration of a
package by using the :command:`dpkg-reconfigure` program.
If you are familiar with these packages and pre-seeding, you can proceed
to :doc:`../keystone`.
The Debian packages
-------------------
The rules described here are from the `Debian Policy
Manual <http://www.debian.org/doc/debian-policy/>`__. If any rule
described in this chapter is not respected, you have found a serious bug
that must be fixed.
When you install or upgrade a Debian package, all configuration file
values are preserved. Using the ``debconf`` database as a registry is
considered a bug in Debian. If you edit something in any OpenStack
configuration file, the ``debconf`` package reads that value when it
prepares to prompt the user. For example, to change the login name for
the RabbitMQ messaging queue for a service, you can edit its value in
the corresponding configuration file.
To opt out of using the ``debconf`` package, run the
:command:`dpkg-reconfigure` command and select non-interactive mode:
.. code-block:: console
# dpkg-reconfigure -plow debconf
Then, ``debconf`` does not prompt you.
Another way to disable the ``debconf`` package is to prefix the
:command:`apt` command with ``DEBIAN_FRONTEND=noninteractive``,
as follows:
.. code-block:: console
# DEBIAN_FRONTEND=noninteractive apt-get install nova-api
If you configure a package with ``debconf`` incorrectly, you can
re-configure it, as follows:
.. code-block:: console
# dpkg-reconfigure PACKAGE-NAME
This calls the post-installation script for the ``PACKAGE-NAME`` package
after the user responds to all prompts. If you cannot install a Debian
package in a non-interactive way, you have found a release-critical bug
in Debian. Report it to the Debian bug tracking system.
Generally, the ``-common`` packages install the configuration files. For
example, the ``glance-common`` package installs the ``glance-api.conf``
and ``glance-registry.conf`` files. So, for the Image service, you must
re-configure the ``glance-common`` package. The same applies for
``cinder-common``, ``nova-common``, and ``heat-common`` packages.
In ``debconf``, the higher the priority for a screen, the greater the
chance that the user sees that screen. If a ``debconf`` screen has
``medium`` priority and you configure the Debian system to show only
``critical`` prompts, which is the default in Debian, the user does not
see that ``debconf`` screen. Instead, the default for the related package
is used. In the Debian OpenStack packages, a number of ``debconf`` screens
are set with ``medium`` priority. Consequently, if you want to respond to
all ``debconf`` screens from the Debian OpenStack packages, you must run
the following command and select the ``medium`` priority before you install
any packages:
.. code-block:: console
# dpkg-reconfigure debconf
.. note::
The packages do not require pre-depends. If ``dbconfig-common`` is
already installed on the system, the user sees all prompts. However,
you cannot define the order in which the ``debconf`` screens appear.
The user must make sense of it even if the prompts appear in an
illogical order.
|
Pre-seed debconf prompts
------------------------
You can pre-seed all ``debconf`` prompts. To pre-seed means to store
responses in the ``debconf`` database so that ``debconf`` does not prompt
the user for responses. Pre-seeding enables a hands-free installation for
users. The package maintainer creates scripts that automatically
configure the services.
The following example shows how to pre-seed an automated MySQL Server
installation:
.. code-block:: bash
MYSQL_PASSWORD=MYSQL_PASSWORD
echo "mysql-server-5.5 mysql-server/root_password password ${MYSQL_PASSWORD}
mysql-server-5.5 mysql-server/root_password seen true
mysql-server-5.5 mysql-server/root_password_again password ${MYSQL_PASSWORD}
mysql-server-5.5 mysql-server/root_password_again seen true
" | debconf-set-selections
DEBIAN_FRONTEND=noninteractive apt-get install -y --force-yes mysql-server
The ``seen true`` option tells ``debconf`` that a specified screen was
already seen by the user so do not show it again. This option is useful
for upgrades.
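Each pre-seed answer in the example above follows a fixed
``package question type value`` layout, with a companion ``seen true``
line. Generating such input can be sketched as follows (the helper name
is hypothetical; the package and question names are copied from the
MySQL example):

.. code-block:: python

   # Hypothetical helper: emit the two pre-seed lines for one password
   # question, mirroring the MySQL example above.
   def preseed_lines(package, question, value):
       return [
           "{} {} password {}".format(package, question, value),
           "{} {} seen true".format(package, question),
       ]

   lines = preseed_lines("mysql-server-5.5",
                         "mysql-server/root_password", "secret")
   print("\n".join(lines))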


@ -0,0 +1,167 @@
:orphan:
===========================================
Configure the database with dbconfig-common
===========================================
Many of the OpenStack services need to be configured to access a
database. They are configured through a DSN (Data Source Name)
directive, as follows:
.. code-block:: ini
[database]
connection = mysql+pymysql://keystone:0dec658e3f14a7d@localhost/keystonedb
This ``connection`` directive will be handled by the ``dbconfig-common``
package, which provides a standard Debian interface. It enables you to
configure Debian database parameters. It includes localized prompts for
many languages and it supports the following database backends: SQLite,
MySQL, and PostgreSQL.
By default, the ``dbconfig-common`` package configures the OpenStack
services to use SQLite. So if you use debconf in non-interactive mode
and without pre-seeding, the OpenStack services that you install will
use SQLite.
By default, ``dbconfig-common`` does not provide access to database servers
over a network. If you want the ``dbconfig-common`` package to prompt for
remote database servers that are accessed over a network and not through
a UNIX socket file, reconfigure it, as follows:
.. code-block:: console
# apt-get install dbconfig-common && dpkg-reconfigure dbconfig-common
These screens appear when you re-configure the ``dbconfig-common`` package:
.. image:: ../figures/debconf-screenshots/dbconfig-common_keep_admin_pass.png
|
.. image:: ../figures/debconf-screenshots/dbconfig-common_used_for_remote_db.png
|
Unlike other debconf prompts, you cannot pre-seed the responses for the
``dbconfig-common`` prompts by using ``debconf-set-selections``. Instead,
you must create a file in :file:`/etc/dbconfig-common`. For example, you
might create a keystone configuration file for ``dbconfig-common`` that is
located in :file:`/etc/dbconfig-common/keystone.conf`, as follows:
.. code-block:: ini
dbc_install='true'
dbc_upgrade='true'
dbc_remove=''
dbc_dbtype='mysql'
dbc_dbuser='keystone'
dbc_dbpass='PASSWORD'
dbc_dbserver=''
dbc_dbport=''
dbc_dbname='keystonedb'
dbc_dbadmin='root'
dbc_basepath=''
dbc_ssl=''
dbc_authmethod_admin=''
dbc_authmethod_user=''
After you create this file, run this command:
.. code-block:: console
# apt-get install keystone
The Identity service is installed with MySQL as the database back end,
``keystonedb`` as the database name, and the localhost socket file. The
corresponding DSN (Data Source Name) will then be:
.. code-block:: ini
[database]
connection = mysql+pymysql://keystone:PASSWORD@localhost/keystonedb
The ``dbconfig-common`` package configures MySQL for these access
rights and creates the database for you. Since OpenStack 2014.1.1, all
OpenStack packages in Debian run the following MySQL query after
database creation (if you decide to use MySQL as a back end):
.. code-block:: sql
ALTER DATABASE keystone CHARACTER SET utf8 COLLATE utf8_unicode_ci
So, if you use Debian, you do not need to worry about database creation,
access rights, or character sets. All of that is handled for you by the
packages.
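Because the answer file is read at package-configuration time, this mechanism also lends itself to scripted, fully non-interactive installs. The sketch below pre-creates an answer file like the keystone example above; the ``DBC_DIR`` override and the generated password are illustrative only (the real location is :file:`/etc/dbconfig-common`), not something the packaging requires:

```shell
# Sketch: pre-create a dbconfig-common answer file for keystone, then
# install non-interactively. DBC_DIR is a hypothetical override used
# here for illustration; on a real node use /etc/dbconfig-common.
set -e
DBC_DIR=${DBC_DIR:-/tmp/dbconfig-common}
DB_PASS=$(openssl rand -hex 16)             # example random password
mkdir -p "$DBC_DIR"
cat > "$DBC_DIR/keystone.conf" <<EOF
dbc_install='true'
dbc_upgrade='true'
dbc_dbtype='mysql'
dbc_dbuser='keystone'
dbc_dbpass='$DB_PASS'
dbc_dbname='keystonedb'
EOF
chmod 600 "$DBC_DIR/keystone.conf"          # the file holds a secret
# Then, on a real system:
#   DEBIAN_FRONTEND=noninteractive apt-get install keystone
```

Keeping the file mode at 600 matters because the database password is stored in clear text.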
As an example, here are screenshots from the ``cinder-common`` package:
.. image:: ../figures/debconf-screenshots/dbconfig-common_1_configure-with-dbconfig-yes-no.png
|
.. image:: ../figures/debconf-screenshots/dbconfig-common_2_db-types.png
|
.. image:: ../figures/debconf-screenshots/dbconfig-common_3_connection_method.png
|
.. image:: ../figures/debconf-screenshots/dbconfig-common_4_mysql_root_password.png
|
.. image:: ../figures/debconf-screenshots/dbconfig-common_5_mysql_app_password.png
|
.. image:: ../figures/debconf-screenshots/dbconfig-common_6_mysql_app_password_confirm.png
|
By default in Debian, you can access the MySQL server only from
localhost, either through the socket file or via 127.0.0.1. To access it
over the network, you must edit the :file:`/etc/mysql/my.cnf` file and the
``mysql.user`` table. To do so, Debian provides a helper script in the
``openstack-deploy`` package. To use it, install the package:
.. code-block:: console
# apt-get install openstack-deploy
and run the helper script:
.. code-block:: console
# /usr/share/openstack-deploy/mysql-remote-root
Alternatively, if you do not want to install this package, run this
script to enable remote root access:
.. code-block:: bash
#!/bin/sh
set -e
SQL="mysql --defaults-file=/etc/mysql/debian.cnf -Dmysql -e"
ROOT_PASS=`${SQL} "SELECT Password FROM user WHERE User='root' LIMIT 1;" \
| tail -n 1`
${SQL} "REPLACE INTO user SET host='%', user='root',\
password='${ROOT_PASS}', Select_priv='Y', Insert_priv='Y',\
Update_priv='Y', Delete_priv='Y', Create_priv='Y', Drop_priv='Y',\
Reload_priv='Y', Shutdown_priv='Y', Process_priv='Y', File_priv='Y',\
Grant_priv='Y', References_priv='Y', Index_priv='Y', Alter_priv='Y',\
Super_priv='Y', Show_db_priv='Y', Create_tmp_table_priv='Y',\
Lock_tables_priv='Y', Execute_priv='Y', Repl_slave_priv='Y',\
Repl_client_priv='Y', Create_view_priv='Y', Show_view_priv='Y',\
Create_routine_priv='Y', Alter_routine_priv='Y', Create_user_priv='Y',\
Event_priv='Y', Trigger_priv='Y' "
${SQL} "FLUSH PRIVILEGES"
sed -i 's|^bind-address[ \t]*=.*|bind-address = 0.0.0.0|' /etc/mysql/my.cnf
/etc/init.d/mysql restart
You must enable remote access before you install OpenStack services on
multiple nodes.

View File

@ -0,0 +1,56 @@
:orphan:
======================================
Services and the [keystone_authtoken]
======================================
Because most OpenStack services must access the Identity service, you
must configure the IP address of the ``keystone`` server so that each
service can reach it. You must also configure the ``admin_tenant_name``,
``admin_user``, and ``admin_password`` options for each service.
Generally, this section looks like the following:
.. code-block:: ini
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = %SERVICE_TENANT_NAME%
admin_user = %SERVICE_USER%
admin_password = %SERVICE_PASSWORD%
The debconf system helps users configure the ``auth_uri``,
``identity_uri``, ``admin_tenant_name``, ``admin_user``, and
``admin_password`` options.
The following screens show an example Image service configuration:
.. image:: ../figures/debconf-screenshots/service_keystone_authtoken_server_hostname.png
|
.. image:: ../figures/debconf-screenshots/service_keystone_authtoken_admin_tenant_name.png
|
.. image:: ../figures/debconf-screenshots/service_keystone_authtoken_tenant_admin_user.png
|
.. image:: ../figures/debconf-screenshots/service_keystone_authtoken_admin_password.png
This information is stored in the configuration file for each service.
For example:
.. code-block:: ini
/etc/ceilometer/ceilometer.conf
/etc/nova/api-paste.ini
/etc/glance/glance-api-paste.ini
/etc/glance/glance-registry.ini
/etc/cinder/cinder.conf
/etc/neutron/neutron.conf
The Debian OpenStack packages offer automation for this, so OpenStack
users do not have to manually edit the configuration files.
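Unlike the ``dbconfig-common`` answers, these ``[keystone_authtoken]`` questions are ordinary debconf prompts, so they can in principle be pre-seeded with ``debconf-set-selections``. The sketch below builds a pre-seed file for the Image service; the template names are illustrative guesses, so check ``debconf-get-selections`` on a configured node for the real ones:

```shell
# Sketch: write a debconf pre-seed file for glance's Identity service
# prompts. The template names below are hypothetical; verify them with
# `debconf-get-selections | grep glance` on an installed system.
cat > /tmp/glance.preseed <<'EOF'
glance-common glance/auth-host string controller
glance-common glance/admin-tenant-name string service
glance-common glance/admin-user string glance
glance-common glance/admin-password password GLANCE_PASS
EOF
# Then, on a real system:
#   debconf-set-selections /tmp/glance.preseed
#   DEBIAN_FRONTEND=noninteractive apt-get install glance
wc -l < /tmp/glance.preseed   # 4 lines written
```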

View File

@ -0,0 +1,36 @@
:orphan:
===============================
RabbitMQ credentials parameters
===============================
For every package that must connect to a messaging server, the Debian
package enables you to configure the IP address for that server and the
user name and password that are used to connect. The following example
shows configuration with the ``ceilometer-common`` package:
.. image:: ../figures/debconf-screenshots/rabbitmq-host.png
|
.. image:: ../figures/debconf-screenshots/rabbitmq-user.png
|
.. image:: ../figures/debconf-screenshots/rabbitmq-password.png
|
These debconf screens appear in: ``ceilometer-common``, ``cinder-common``,
``glance-common``, ``heat-common``, ``neutron-common``, and ``nova-common``.
These prompts configure the following directives (example from ``nova.conf``):
.. code-block:: ini
[DEFAULT]
rabbit_host=localhost
rabbit_userid=guest
rabbit_password=guest
The other directives concerning RabbitMQ will stay untouched.
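To check what the package wrote, you can grep the relevant configuration file. The sketch below uses a stand-in copy of the ``[DEFAULT]`` section so it is self-contained; on a real node you would point it at :file:`/etc/nova/nova.conf` instead:

```shell
# Sketch: extract the RabbitMQ directives from a nova.conf-style file.
# A stand-in fragment is created here so the example is self-contained;
# the host/user values are illustrative.
cat > /tmp/nova.conf.sample <<'EOF'
[DEFAULT]
rabbit_host=controller
rabbit_userid=openstack
rabbit_password=RABBIT_PASS
EOF
grep '^rabbit_' /tmp/nova.conf.sample
```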

View File

@ -0,0 +1,14 @@
:orphan:
================================
Configure OpenStack with debconf
================================
.. toctree::
:maxdepth: 2
debconf-concepts.rst
debconf-dbconfig-common.rst
debconf-rabbitmq.rst
debconf-keystone-authtoken.rst
debconf-api-endpoints.rst

View File

@ -0,0 +1 @@
../../install-guide/source/environment-memcached.rst

View File

@ -0,0 +1 @@
../../install-guide/source/environment-messaging.rst

View File

@ -0,0 +1 @@
../../install-guide/source/environment-networking-compute.rst

View File

@ -0,0 +1 @@
../../install-guide/source/environment-networking-controller.rst

View File

@ -0,0 +1 @@
../../install-guide/source/environment-networking-storage-cinder.rst

View File

@ -0,0 +1 @@
../../install-guide/source/environment-networking-storage-swift.rst

View File

@ -0,0 +1 @@
../../install-guide/source/environment-networking-verify.rst

View File

@ -0,0 +1 @@
../../install-guide/source/environment-networking.rst

View File

@ -0,0 +1 @@
../../install-guide/source/environment-nosql-database.rst

View File

@ -0,0 +1 @@
../../install-guide/source/environment-ntp-controller.rst

View File

@ -0,0 +1 @@
../../install-guide/source/environment-ntp-other.rst

View File

@ -0,0 +1 @@
../../install-guide/source/environment-ntp-verify.rst

View File

@ -0,0 +1 @@
../../install-guide/source/environment-ntp.rst

View File

@ -0,0 +1 @@
../../install-guide/source/environment-packages.rst

View File

@ -0,0 +1 @@
../../install-guide/source/environment-security.rst

View File

@ -0,0 +1 @@
../../install-guide/source/environment-sql-database.rst

View File

@ -0,0 +1 @@
../../install-guide/source/environment.rst

View File

@ -0,0 +1 @@
../../install-guide/source/figures

View File

@ -0,0 +1,27 @@
Install and configure
~~~~~~~~~~~~~~~~~~~~~
This section describes how to install and configure the Image service,
code-named glance, on the controller node. For simplicity, this
configuration stores images on the local file system.
Install and configure components
--------------------------------
#. Install the packages:
.. code-block:: console
# apt-get install glance python-glanceclient
#. Respond to prompts for
:doc:`database management <debconf/debconf-dbconfig-common>`,
:doc:`Identity service credentials <debconf/debconf-keystone-authtoken>`,
:doc:`service endpoint registration <debconf/debconf-api-endpoints>`,
and :doc:`message broker credentials <debconf/debconf-rabbitmq>`.
#. Select the ``keystone`` pipeline to configure the Image service
to use the Identity service:
.. image:: figures/debconf-screenshots/glance-common_pipeline_flavor.png
:width: 100%

View File

@ -0,0 +1 @@
../../install-guide/source/glance-verify.rst

View File

@ -0,0 +1 @@
../../install-guide/source/glance.rst

View File

@ -0,0 +1,44 @@
.. _heat-install:
Install and configure
~~~~~~~~~~~~~~~~~~~~~
This section describes how to install and configure the
Orchestration service, code-named heat, on the controller node.
Install and configure components
--------------------------------
#. Run the following commands to install the packages:
.. code-block:: console
# apt-get install heat-api heat-api-cfn heat-engine python-heatclient
#. Respond to prompts for
:doc:`database management <debconf/debconf-dbconfig-common>`,
:doc:`Identity service credentials <debconf/debconf-keystone-authtoken>`,
:doc:`service endpoint registration <debconf/debconf-api-endpoints>`,
and :doc:`message broker credentials <debconf/debconf-rabbitmq>`.
#. Edit the ``/etc/heat/heat.conf`` file and complete the following
actions:
* In the ``[ec2authtoken]`` section, configure Identity service access:
.. code-block:: ini
[ec2authtoken]
...
auth_uri = http://controller:5000/v2.0
Finalize installation
---------------------
#. Restart the Orchestration services:
.. code-block:: console
# service heat-api restart
# service heat-api-cfn restart
# service heat-engine restart

View File

@ -0,0 +1 @@
../../install-guide/source/heat-next-steps.rst

View File

@ -0,0 +1 @@
../../install-guide/source/heat-verify.rst

View File

@ -0,0 +1 @@
../../install-guide/source/heat.rst

View File

@ -0,0 +1 @@
../../install-guide/source/horizon-install.rst

View File

@ -0,0 +1 @@
../../install-guide/source/horizon-next-steps.rst

View File

@ -0,0 +1 @@
../../install-guide/source/horizon-verify.rst

View File

@ -0,0 +1 @@
../../install-guide/source/horizon.rst

View File

@ -0,0 +1,75 @@
.. title:: OpenStack Installation Guide
=======================================
OpenStack Installation Guide for Debian
=======================================
Abstract
~~~~~~~~
The OpenStack system consists of several key services that are
installed separately. These services work together depending on your
cloud needs and include the Compute, Identity, Networking, Image,
Block Storage, Object Storage, Telemetry, Orchestration, and Database
services. You can install any of these projects separately and
configure them stand-alone or as connected entities.
This guide walks through an installation by using packages
available through Debian 8 (code name: Jessie).
Explanations of configuration options and sample configuration files
are included.
This guide documents the OpenStack Newton release.
.. warning::
This guide is a work in progress and is updated frequently.
Pre-release packages have been used for testing, and some instructions
may not work with final versions. Please help us make this guide better
by reporting any errors you encounter.
Contents
~~~~~~~~
.. toctree::
:maxdepth: 2
common/conventions.rst
overview.rst
environment.rst
debconf/debconf.rst
keystone.rst
glance.rst
nova.rst
neutron.rst
horizon.rst
cinder.rst
manila.rst
swift.rst
heat.rst
ceilometer.rst
trove.rst
launch-instance.rst
Appendix
~~~~~~~~
.. toctree::
:maxdepth: 1
common/app_support.rst
Glossary
~~~~~~~~
.. toctree::
:maxdepth: 1
common/glossary.rst
Search in this guide
~~~~~~~~~~~~~~~~~~~~
* :ref:`search`

View File

@ -0,0 +1,160 @@
.. _keystone-install:
Install and configure
~~~~~~~~~~~~~~~~~~~~~
This section describes how to install and configure the OpenStack
Identity service, code-named keystone, on the controller node. For
performance, this configuration deploys Fernet tokens and the Apache
HTTP server to handle requests.
Install and configure the components
------------------------------------
#. Run the following command to install the packages:
.. code-block:: console
# apt-get install keystone
#. Respond to prompts for :doc:`debconf/debconf-dbconfig-common`,
which will fill in the database access directive below.
.. code-block:: ini
[database]
...
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
If you decide not to use ``dbconfig-common``, then you must create the
database and manage its access rights yourself, and run the following
command by hand:
.. code-block:: console
# keystone-manage db_sync
#. Generate a random value to use as the administration token during
initial configuration:
.. code-block:: console
$ openssl rand -hex 10
#. Configure the initial administration token:
.. image:: figures/debconf-screenshots/keystone_1_admin_token.png
:scale: 50
Use the random value that you generated in a previous step. If you
install using non-interactive mode or you do not specify this token, the
configuration tool generates a random value.
Later on, the package configures the following directive with the value
you entered:
.. code-block:: ini
[DEFAULT]
...
admin_token = ADMIN_TOKEN
#. Create the ``admin`` project and user:
During the final stage of the package installation, it is possible to
automatically create an ``admin`` and a ``service`` project, and an ``admin``
user, which other OpenStack services can later use to contact the
Identity service. This is equivalent to running the following commands:
.. code-block:: console
# openstack --os-token ${AUTH_TOKEN} \
--os-url=http://127.0.0.1:35357/v3/ \
--os-domain-name default \
--os-identity-api-version=3 \
project create --or-show \
admin --domain default \
--description "Default Debian admin project"
# openstack --os-token ${AUTH_TOKEN} \
--os-url=http://127.0.0.1:35357/v3/ \
--os-domain-name default \
--os-identity-api-version=3 \
project create --or-show \
service --domain default \
--description "Default Debian service project"
# openstack --os-token ${AUTH_TOKEN} \
--os-url=http://127.0.0.1:35357/v3/ \
--os-domain-name default \
--os-identity-api-version=3 \
user create --or-show \
--password ADMIN_PASS \
--project admin \
--email root@localhost \
--enable \
admin \
--domain default \
--description "Default Debian admin user"
# openstack --os-token ${AUTH_TOKEN} \
--os-url=http://127.0.0.1:35357/v3/ \
--os-domain-name default \
--os-identity-api-version=3 \
role create --or-show admin
# openstack --os-token ${AUTH_TOKEN} \
--os-url=http://127.0.0.1:35357/v3/ \
--os-domain-name default \
--os-identity-api-version=3 \
role add --project admin --user admin admin
.. image:: figures/debconf-screenshots/keystone_2_register_admin_tenant_yes_no.png
:scale: 50
.. image:: figures/debconf-screenshots/keystone_3_admin_user_name.png
:scale: 50
.. image:: figures/debconf-screenshots/keystone_4_admin_user_email.png
:scale: 50
.. image:: figures/debconf-screenshots/keystone_5_admin_user_pass.png
:scale: 50
.. image:: figures/debconf-screenshots/keystone_6_admin_user_pass_confirm.png
:scale: 50
In Debian, the keystone package offers automatic registration of the
Identity service in the service catalog. This is equivalent to running
the following commands:
.. code-block:: console
# openstack --os-token ${AUTH_TOKEN} \
--os-url=http://127.0.0.1:35357/v3/ \
--os-domain-name default \
--os-identity-api-version=3 \
service create \
--name keystone \
--description "OpenStack Identity" \
identity
# openstack --os-token ${AUTH_TOKEN} \
--os-url=http://127.0.0.1:35357/v3/ \
--os-domain-name default \
--os-identity-api-version=3 \
endpoint create keystone public http://controller:5000/v2.0
# openstack --os-token ${AUTH_TOKEN} \
--os-url=http://127.0.0.1:35357/v3/ \
--os-domain-name default \
--os-identity-api-version=3 \
endpoint create keystone internal http://controller:5000/v2.0
# openstack --os-token ${AUTH_TOKEN} \
--os-url=http://127.0.0.1:35357/v3/ \
--os-domain-name default \
--os-identity-api-version=3 \
endpoint create keystone admin http://controller:35357/v2.0
.. image:: figures/debconf-screenshots/keystone_7_register_endpoint.png

View File

@ -0,0 +1 @@
../../install-guide/source/keystone-openrc.rst

View File

@ -0,0 +1 @@
../../install-guide/source/keystone-services.rst

View File

@ -0,0 +1 @@
../../install-guide/source/keystone-users.rst

View File

@ -0,0 +1 @@
../../install-guide/source/keystone-verify.rst

View File

@ -0,0 +1 @@
../../install-guide/source/keystone.rst

View File

@ -0,0 +1 @@
../../install-guide/source/launch-instance-cinder.rst

View File

@ -0,0 +1 @@
../../install-guide/source/launch-instance-heat.rst

View File

@ -0,0 +1 @@
../../install-guide/source/launch-instance-manila-dhss-false-option1.rst

View File

@ -0,0 +1 @@
../../install-guide/source/launch-instance-manila-dhss-true-option2.rst

View File

@ -0,0 +1 @@
../../install-guide/source/launch-instance-manila.rst

View File

@ -0,0 +1 @@
../../install-guide/source/launch-instance-networks-provider.rst

View File

@ -0,0 +1 @@
../../install-guide/source/launch-instance-networks-selfservice.rst

View File

@ -0,0 +1 @@
../../install-guide/source/launch-instance-provider.rst

View File

@ -0,0 +1 @@
../../install-guide/source/launch-instance-selfservice.rst

View File

@ -0,0 +1 @@
../../install-guide/source/launch-instance.rst

View File

@ -0,0 +1,243 @@
.. _manila-controller:
Install and configure controller node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This section describes how to install and configure the Shared File Systems
service, code-named manila, on the controller node. This service requires at
least one additional share node that manages file storage drivers.
Prerequisites
-------------
Before you install and configure the Shared File Systems service, you
must create a database, service credentials, and API endpoints.
#. To create the database, complete these steps:
* Use the database access client to connect to the database server as the
``root`` user:
.. code-block:: console
$ mysql -u root -p
* Create the ``manila`` database:
.. code-block:: console
CREATE DATABASE manila;
* Grant proper access to the ``manila`` database:
.. code-block:: console
GRANT ALL PRIVILEGES ON manila.* TO 'manila'@'localhost'
IDENTIFIED BY 'MANILA_DBPASS';
GRANT ALL PRIVILEGES ON manila.* TO 'manila'@'%'
IDENTIFIED BY 'MANILA_DBPASS';
Replace ``MANILA_DBPASS`` with a suitable password.
* Exit the database access client.
#. Source the ``admin`` credentials to gain access to admin-only CLI commands:
.. code-block:: console
$ . admin-openrc
#. To create the service credentials, complete these steps:
* Create a ``manila`` user:
.. code-block:: console
$ openstack user create --domain default --password-prompt manila
User Password:
Repeat User Password:
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | e0353a670a9e496da891347c589539e9 |
| enabled | True |
| id | 83a3990fc2144100ba0e2e23886d8acc |
| name | manila |
+-----------+----------------------------------+
* Add the ``admin`` role to the ``manila`` user:
.. code-block:: console
$ openstack role add --project service --user manila admin
.. note::
This command provides no output.
* Create the ``manila`` and ``manilav2`` service entities:
.. code-block:: console
$ openstack service create --name manila \
--description "OpenStack Shared File Systems" share
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Shared File Systems |
| enabled | True |
| id | 82378b5a16b340aa9cc790cdd46a03ba |
| name | manila |
| type | share |
+-------------+----------------------------------+
.. code-block:: console
$ openstack service create --name manilav2 \
--description "OpenStack Shared File Systems" sharev2
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Shared File Systems |
| enabled | True |
| id | 30d92a97a81a4e5d8fd97a32bafd7b88 |
| name | manilav2 |
| type | sharev2 |
+-------------+----------------------------------+
.. note::
The Shared File Systems service requires two service entities.
#. Create the Shared File Systems service API endpoints:
.. code-block:: console
$ openstack endpoint create --region RegionOne \
share public http://controller:8786/v1/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | 0bd2bbf8d28b433aaea56a254c69f69d |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 82378b5a16b340aa9cc790cdd46a03ba |
| service_name | manila |
| service_type | share |
| url | http://controller:8786/v1/%(tenant_id)s |
+--------------+-----------------------------------------+
$ openstack endpoint create --region RegionOne \
share internal http://controller:8786/v1/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | a2859b5732cc48b5b083dd36dafb6fd9 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 82378b5a16b340aa9cc790cdd46a03ba |
| service_name | manila |
| service_type | share |
| url | http://controller:8786/v1/%(tenant_id)s |
+--------------+-----------------------------------------+
$ openstack endpoint create --region RegionOne \
share admin http://controller:8786/v1/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | f7f46df93a374cc49c0121bef41da03c |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 82378b5a16b340aa9cc790cdd46a03ba |
| service_name | manila |
| service_type | share |
| url | http://controller:8786/v1/%(tenant_id)s |
+--------------+-----------------------------------------+
.. code-block:: console
$ openstack endpoint create --region RegionOne \
sharev2 public http://controller:8786/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | d63cc0d358da4ea680178657291eddc1 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 30d92a97a81a4e5d8fd97a32bafd7b88 |
| service_name | manilav2 |
| service_type | sharev2 |
| url | http://controller:8786/v2/%(tenant_id)s |
+--------------+-----------------------------------------+
$ openstack endpoint create --region RegionOne \
sharev2 internal http://controller:8786/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | afc86e5f50804008add349dba605da54 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 30d92a97a81a4e5d8fd97a32bafd7b88 |
| service_name | manilav2 |
| service_type | sharev2 |
| url | http://controller:8786/v2/%(tenant_id)s |
+--------------+-----------------------------------------+
$ openstack endpoint create --region RegionOne \
sharev2 admin http://controller:8786/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | e814a0cec40546e98cf0c25a82498483 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 30d92a97a81a4e5d8fd97a32bafd7b88 |
| service_name | manilav2 |
| service_type | sharev2 |
| url | http://controller:8786/v2/%(tenant_id)s |
+--------------+-----------------------------------------+
.. note::
The Shared File Systems service requires endpoints for each service
entity.
Install and configure components
--------------------------------
#. Install the packages:
.. code-block:: console
# apt-get install manila-api manila-scheduler \
python-manilaclient
#. Respond to prompts for
:doc:`database management <debconf/debconf-dbconfig-common>`,
:doc:`Identity service credentials <debconf/debconf-keystone-authtoken>`,
:doc:`service endpoint registration <debconf/debconf-api-endpoints>`,
and :doc:`message broker credentials <debconf/debconf-rabbitmq>`.
Finalize installation
---------------------
* Restart the Share File Systems services:
.. code-block:: console
# service manila-scheduler restart
# service manila-api restart

View File

@ -0,0 +1 @@
../../install-guide/source/manila-next-steps.rst

View File

@ -0,0 +1 @@
../../install-guide/source/manila-share-install-dhss-false-option1.rst

View File

@ -0,0 +1 @@
../../install-guide/source/manila-share-install-dhss-true-option2.rst

View File

@ -0,0 +1 @@
../../install-guide/source/manila-share-install.rst

View File

@ -0,0 +1 @@
../../install-guide/source/manila-verify.rst

View File

@ -0,0 +1 @@
../../install-guide/source/manila.rst

View File

@ -0,0 +1 @@
../../install-guide/source/neutron-compute-install-option1.rst

View File

@ -0,0 +1 @@
../../install-guide/source/neutron-compute-install-option2.rst

View File

@ -0,0 +1 @@
../../install-guide/source/neutron-compute-install.rst

View File

@ -0,0 +1 @@
../../install-guide/source/neutron-concepts.rst

View File

@ -0,0 +1,192 @@
Networking Option 1: Provider networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Install and configure the Networking components on the *controller* node.
Install the components
----------------------
.. code-block:: console
# apt-get install neutron-server neutron-linuxbridge-agent \
neutron-dhcp-agent neutron-metadata-agent python-neutronclient
Respond to prompts for
:doc:`database management <debconf/debconf-dbconfig-common>`,
:doc:`Identity service credentials <debconf/debconf-keystone-authtoken>`,
:doc:`service endpoint registration <debconf/debconf-api-endpoints>`,
and :doc:`message broker credentials <debconf/debconf-rabbitmq>`.
Select the ML2 plug-in:
.. image:: figures/debconf-screenshots/neutron_1_plugin_selection.png
.. note::
Selecting the ML2 plug-in also populates the ``core_plugin`` option
in the ``/etc/neutron/neutron.conf`` file with the appropriate values
(in this case, it is set to the value ``ml2``).
Configure the server component
------------------------------
#. Edit the ``/etc/neutron/neutron.conf`` file and complete the following
actions:
* In the ``[DEFAULT]`` section, disable additional plug-ins:
.. code-block:: ini
[DEFAULT]
...
service_plugins =
* In the ``[DEFAULT]`` and ``[nova]`` sections, configure Networking to
notify Compute of network topology changes:
.. code-block:: ini
[DEFAULT]
...
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
[nova]
...
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS
Replace ``NOVA_PASS`` with the password you chose for the ``nova``
user in the Identity service.
Configure the Modular Layer 2 (ML2) plug-in
-------------------------------------------
The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging
and switching) virtual networking infrastructure for instances.
* Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file and complete the
following actions:
* In the ``[ml2]`` section, enable flat and VLAN networks:
.. code-block:: ini
[ml2]
...
type_drivers = flat,vlan
* In the ``[ml2]`` section, disable self-service networks:
.. code-block:: ini
[ml2]
...
tenant_network_types =
* In the ``[ml2]`` section, enable the Linux bridge mechanism:
.. code-block:: ini
[ml2]
...
mechanism_drivers = linuxbridge
.. warning::
After you configure the ML2 plug-in, removing values in the
``type_drivers`` option can lead to database inconsistency.
* In the ``[ml2]`` section, enable the port security extension driver:
.. code-block:: ini
[ml2]
...
extension_drivers = port_security
* In the ``[ml2_type_flat]`` section, configure the provider virtual
network as a flat network:
.. code-block:: ini
[ml2_type_flat]
...
flat_networks = provider
* In the ``[securitygroup]`` section, enable :term:`ipset` to increase
efficiency of security group rules:
.. code-block:: ini
[securitygroup]
...
enable_ipset = True
Configure the Linux bridge agent
--------------------------------
The Linux bridge agent builds layer-2 (bridging and switching) virtual
networking infrastructure for instances and handles security groups.
* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and
complete the following actions:
* In the ``[linux_bridge]`` section, map the provider virtual network to the
provider physical network interface:
.. code-block:: ini
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying
provider physical network interface. See :ref:`environment-networking`
for more information.
* In the ``[vxlan]`` section, disable VXLAN overlay networks:
.. code-block:: ini
[vxlan]
enable_vxlan = False
* In the ``[securitygroup]`` section, enable security groups and
configure the Linux bridge :term:`iptables` firewall driver:
.. code-block:: ini
[securitygroup]
...
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Configure the DHCP agent
------------------------
The :term:`DHCP agent` provides DHCP services for virtual networks.
* Edit the ``/etc/neutron/dhcp_agent.ini`` file and complete the following
actions:
* In the ``[DEFAULT]`` section, configure the Linux bridge interface driver,
Dnsmasq DHCP driver, and enable isolated metadata so instances on provider
networks can access metadata over the network:
.. code-block:: ini
[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
Return to
:ref:`Networking controller node configuration
<neutron-controller-metadata-agent>`.

View File

@ -0,0 +1,205 @@
Networking Option 2: Self-service networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Install and configure the Networking components on the *controller* node.
Install and configure the Networking components
-----------------------------------------------
#. Install the components:
.. code-block:: console
# apt-get install neutron-server neutron-linuxbridge-agent \
neutron-dhcp-agent neutron-metadata-agent
Because self-service networks require routing, also install the
``neutron-l3-agent`` package.
#. Respond to prompts for
:doc:`database management <debconf/debconf-dbconfig-common>`,
:doc:`Identity service credentials <debconf/debconf-keystone-authtoken>`,
:doc:`service endpoint registration <debconf/debconf-api-endpoints>`,
and :doc:`message broker credentials <debconf/debconf-rabbitmq>`.
#. Select the ML2 plug-in:
.. image:: figures/debconf-screenshots/neutron_1_plugin_selection.png
.. note::
Selecting the ML2 plug-in also populates the ``service_plugins`` and
``allow_overlapping_ips`` options in the
``/etc/neutron/neutron.conf`` file with the appropriate values.
Configure the Modular Layer 2 (ML2) plug-in
-------------------------------------------
The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging
and switching) virtual networking infrastructure for instances.
* Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file and complete the
following actions:
* In the ``[ml2]`` section, enable flat, VLAN, and VXLAN networks:
.. code-block:: ini
[ml2]
...
type_drivers = flat,vlan,vxlan
* In the ``[ml2]`` section, enable VXLAN self-service networks:
.. code-block:: ini
[ml2]
...
tenant_network_types = vxlan
* In the ``[ml2]`` section, enable the Linux bridge and layer-2 population
mechanisms:
.. code-block:: ini
[ml2]
...
mechanism_drivers = linuxbridge,l2population
.. warning::
After you configure the ML2 plug-in, removing values in the
``type_drivers`` option can lead to database inconsistency.
.. note::
The Linux bridge agent only supports VXLAN overlay networks.
* In the ``[ml2]`` section, enable the port security extension driver:
.. code-block:: ini
[ml2]
...
extension_drivers = port_security
* In the ``[ml2_type_flat]`` section, configure the provider virtual
network as a flat network:
.. code-block:: ini
[ml2_type_flat]
...
flat_networks = provider
* In the ``[ml2_type_vxlan]`` section, configure the VXLAN network identifier
range for self-service networks:
.. code-block:: ini
[ml2_type_vxlan]
...
vni_ranges = 1:1000
* In the ``[securitygroup]`` section, enable :term:`ipset` to increase
efficiency of security group rules:
.. code-block:: ini
[securitygroup]
...
enable_ipset = True
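As a quick sanity check (an illustration, not part of the official procedure), the options set above can be verified with Python's ``configparser``. The snippet below parses a copy of the expected ``ml2_conf.ini`` contents rather than the live file:

```python
import configparser

# Expected contents of /etc/neutron/plugins/ml2/ml2_conf.ini per the
# steps above; on a real node, use cfg.read() on the file instead.
EXPECTED = """
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = True
"""

cfg = configparser.ConfigParser()
cfg.read_string(EXPECTED)

# VXLAN must be an enabled type driver for self-service networks to work.
type_drivers = [d.strip() for d in cfg.get("ml2", "type_drivers").split(",")]
assert "vxlan" in type_drivers
assert cfg.get("ml2", "mechanism_drivers") == "linuxbridge,l2population"
print(type_drivers)  # ['flat', 'vlan', 'vxlan']
```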
Configure the Linux bridge agent
--------------------------------
The Linux bridge agent builds layer-2 (bridging and switching) virtual
networking infrastructure for instances and handles security groups.
* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and
complete the following actions:
* In the ``[linux_bridge]`` section, map the provider virtual network to the
provider physical network interface:
.. code-block:: ini
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying
provider physical network interface. See :ref:`environment-networking`
for more information.
* In the ``[vxlan]`` section, enable VXLAN overlay networks, configure the
IP address of the physical network interface that handles overlay
networks, and enable layer-2 population:
.. code-block:: ini
[vxlan]
enable_vxlan = True
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = True
Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the
underlying physical network interface that handles overlay networks. The
example architecture uses the management interface to tunnel traffic to
the other nodes. Therefore, replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with
the management IP address of the controller node. See
:ref:`environment-networking` for more information.
* In the ``[securitygroup]`` section, enable security groups and
configure the Linux bridge :term:`iptables` firewall driver:
.. code-block:: ini
[securitygroup]
...
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Configure the layer-3 agent
---------------------------
The :term:`Layer-3 (L3) agent` provides routing and NAT services for
self-service virtual networks.
* Edit the ``/etc/neutron/l3_agent.ini`` file and complete the following
actions:
* In the ``[DEFAULT]`` section, configure the Linux bridge interface driver
and external network bridge:
.. code-block:: ini
[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
external_network_bridge =
.. note::
The ``external_network_bridge`` option intentionally lacks a value
to enable multiple external networks on a single agent.
Configure the DHCP agent
------------------------
The :term:`DHCP agent` provides DHCP services for virtual networks.
* Edit the ``/etc/neutron/dhcp_agent.ini`` file and complete the following
actions:
* In the ``[DEFAULT]`` section, configure the Linux bridge interface driver,
Dnsmasq DHCP driver, and enable isolated metadata so instances on provider
networks can access metadata over the network:
.. code-block:: ini
[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
Return to
:ref:`Networking controller node configuration
<neutron-controller-metadata-agent>`.
Install and configure controller node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Configure networking options
----------------------------
You can deploy the Networking service using one of two architectures
represented by options 1 and 2.
Option 1 deploys the simplest possible architecture, which only supports
attaching instances to provider (external) networks. It provides no
self-service (private) networks, routers, or floating IP addresses. Only the
``admin`` or another privileged user can manage provider networks.
Option 2 augments option 1 with layer-3 services that support attaching
instances to self-service networks. The ``demo`` or other unprivileged
user can manage self-service networks including routers that provide
connectivity between self-service and provider networks. Additionally,
floating IP addresses provide connectivity to instances using self-service
networks from external networks such as the Internet.
Self-service networks typically use overlay networks. Overlay network
protocols such as VXLAN include additional headers that increase overhead
and decrease space available for the payload or user data. Without knowledge
of the virtual network infrastructure, instances attempt to send packets
using the default Ethernet :term:`maximum transmission unit (MTU)` of 1500
bytes. The Networking service automatically provides the correct MTU value
to instances via DHCP. However, some cloud images do not use DHCP or ignore
the DHCP MTU option and require configuration using metadata or a script.
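The overhead mentioned above can be made concrete. For VXLAN over IPv4, the standard header sizes (these figures come from the protocol definitions, not from this guide) reduce a 1500-byte physical MTU as follows:

```python
# Back-of-the-envelope MTU math for VXLAN over IPv4.
outer_ip = 20        # outer IPv4 header
outer_udp = 8        # outer UDP header
vxlan = 8            # VXLAN header
inner_ethernet = 14  # encapsulated Ethernet frame header
overhead = outer_ip + outer_udp + vxlan + inner_ethernet

physical_mtu = 1500
instance_mtu = physical_mtu - overhead
print(instance_mtu)  # 1450, the MTU advertised to instances via DHCP
```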
.. note::
Option 2 also supports attaching instances to provider networks.
Choose one of the following networking options to configure services
specific to it. Afterwards, return here and proceed to
:ref:`neutron-controller-metadata-agent`.
.. toctree::
:maxdepth: 1
neutron-controller-install-option1.rst
neutron-controller-install-option2.rst
.. _neutron-controller-metadata-agent:
Configure the metadata agent
----------------------------
The :term:`metadata agent <Metadata agent>` provides configuration information
such as credentials to instances.
* Edit the ``/etc/neutron/metadata_agent.ini`` file and complete the following
actions:
* In the ``[DEFAULT]`` section, configure the metadata host and shared
secret:
.. code-block:: ini
[DEFAULT]
...
nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_SECRET
Replace ``METADATA_SECRET`` with a suitable secret for the metadata proxy.
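One way to generate a suitable value for ``METADATA_SECRET`` (an illustration; any sufficiently random string works) is Python's ``secrets`` module:

```python
import secrets

# 10 random bytes rendered as 20 hexadecimal characters; use the same
# value here and in the [neutron] section of /etc/nova/nova.conf.
metadata_secret = secrets.token_hex(10)
print(metadata_secret)
```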
Configure Compute to use Networking
-----------------------------------
* Edit the ``/etc/nova/nova.conf`` file and perform the following actions:
* In the ``[neutron]`` section, configure access parameters, enable the
metadata proxy, and configure the secret:
.. code-block:: ini
[neutron]
...
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = True
metadata_proxy_shared_secret = METADATA_SECRET
Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
user in the Identity service.
Replace ``METADATA_SECRET`` with the secret you chose for the metadata
proxy.

../../install-guide/source/neutron-next-steps.rst
../../install-guide/source/neutron-verify-option1.rst
../../install-guide/source/neutron-verify-option2.rst
../../install-guide/source/neutron-verify.rst
../../install-guide/source/neutron.rst
Install and configure a compute node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This section describes how to install and configure the Compute
service on a compute node. The service supports several
:term:`hypervisors <hypervisor>` to deploy :term:`instances <instance>`
or :term:`VMs <virtual machine (VM)>`. For simplicity, this configuration
uses the :term:`QEMU <Quick EMUlator (QEMU)>` hypervisor with the
:term:`KVM <kernel-based VM (KVM)>` extension
on compute nodes that support hardware acceleration for virtual machines.
On legacy hardware, this configuration uses the generic QEMU hypervisor.
You can follow these instructions with minor modifications to horizontally
scale your environment with additional compute nodes.
.. note::
This section assumes that you are following the instructions in
this guide step-by-step to configure the first compute node. If you
want to configure additional compute nodes, prepare them in a similar
fashion to the first compute node in the :ref:`example architectures
<overview-example-architectures>` section. Each additional compute node
requires a unique IP address.
Install and configure components
--------------------------------
.. include:: shared/note_configuration_vary_by_distribution.rst
#. Install the packages:
.. code-block:: console
# apt-get install nova-compute
Respond to prompts for
:doc:`database management <debconf/debconf-dbconfig-common>`,
:doc:`Identity service credentials <debconf/debconf-keystone-authtoken>`,
and :doc:`message broker credentials <debconf/debconf-rabbitmq>`. Make
sure that you do not activate database management through debconf,
as a compute node must not access the central database directly.
#. Edit the ``/etc/nova/nova.conf`` file and
complete the following actions:
* In the ``[DEFAULT]`` section, check that the ``my_ip`` option
is correctly set (this value is handled by the config and postinst
scripts of the ``nova-common`` package using debconf):
.. code-block:: ini
[DEFAULT]
...
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
Replace ``MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address
of the management network interface on your compute node,
typically 10.0.0.31 for the first node in the
:ref:`example architecture <overview-example-architectures>`.
* In the ``[DEFAULT]`` section, enable support for the Networking service:
.. code-block:: ini
[DEFAULT]
...
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
.. note::
By default, Compute uses an internal firewall service. Since
Networking includes a firewall service, you must disable the Compute
firewall service by using the
``nova.virt.firewall.NoopFirewallDriver`` firewall driver.
* In the ``[vnc]`` section, enable and configure remote console access:
.. code-block:: ini
[vnc]
...
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
The server component listens on all IP addresses and the proxy
component only listens on the management interface IP address of
the compute node. The base URL indicates the location where you
can use a web browser to access remote consoles of instances
on this compute node.
.. note::
If the web browser to access remote consoles resides on
a host that cannot resolve the ``controller`` hostname,
you must replace ``controller`` with the management
interface IP address of the controller node.
* In the ``[glance]`` section, configure the location of the
Image service API:
.. code-block:: ini
[glance]
...
api_servers = http://controller:9292
#. Ensure the kernel module ``nbd`` is loaded.
.. code-block:: console
# modprobe nbd
#. Ensure the module loads on every boot by adding ``nbd``
to the ``/etc/modules-load.d/nbd.conf`` file.
Finalize installation
---------------------
#. Determine whether your compute node supports hardware acceleration
for virtual machines:
.. code-block:: console
$ egrep -c '(vmx|svm)' /proc/cpuinfo
If this command returns a value of ``one or greater``, your compute
node supports hardware acceleration which typically requires no
additional configuration.
If this command returns a value of ``zero``, your compute node does
not support hardware acceleration and you must configure ``libvirt``
to use QEMU instead of KVM.
* Replace the ``nova-compute-kvm`` package with ``nova-compute-qemu``
which automatically changes the ``/etc/nova/nova-compute.conf``
file and installs the necessary dependencies:
.. code-block:: console
# apt-get install nova-compute-qemu
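For reference, the ``egrep -c`` test above simply counts CPU ``flags`` lines containing ``vmx`` (Intel VT-x) or ``svm`` (AMD-V). A minimal sketch of the same logic in Python, run here against a hypothetical ``/proc/cpuinfo`` excerpt:

```python
import re

# Hypothetical excerpt of /proc/cpuinfo on a two-core VT-x capable host;
# on a real node, read the file itself: open("/proc/cpuinfo").read()
cpuinfo = """\
processor : 0
flags     : fpu vme de pse msr vmx sse2
processor : 1
flags     : fpu vme de pse msr vmx sse2
"""

# Count lines matching vmx or svm, as `egrep -c '(vmx|svm)'` does.
matching_lines = [line for line in cpuinfo.splitlines()
                  if re.search(r"vmx|svm", line)]
print(len(matching_lines))  # 2, i.e. one or greater: acceleration available
```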
#. Restart the Compute service:
.. code-block:: console
# service nova-compute restart
Install and configure controller node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This section describes how to install and configure the
Compute service, code-named nova, on the controller node.
Install and configure components
--------------------------------
.. include:: shared/note_configuration_vary_by_distribution.rst
#. Install the packages:
.. code-block:: console
# apt-get install nova-api nova-conductor nova-consoleauth \
nova-consoleproxy nova-scheduler python-novaclient
Respond to prompts for
:doc:`database management <debconf/debconf-dbconfig-common>`,
:doc:`Identity service credentials <debconf/debconf-keystone-authtoken>`,
:doc:`service endpoint registration <debconf/debconf-api-endpoints>`,
and :doc:`message broker credentials <debconf/debconf-rabbitmq>`.
.. note::
``nova-api-metadata`` is included in the ``nova-api`` package,
and can be selected through debconf.
.. note::
A single ``nova-consoleproxy`` package provides the
``nova-novncproxy``, ``nova-spicehtml5proxy``, and
``nova-xvpvncproxy`` daemons. Select which one runs through the
debconf interface, or manually edit the
``/etc/default/nova-consoleproxy`` file and then stop and start the
console daemons.
#. Edit the ``/etc/nova/nova.conf`` file and
complete the following actions:
* In the ``[DEFAULT]`` section, enable only the compute and metadata
APIs:
.. code-block:: ini
[DEFAULT]
...
enabled_apis = osapi_compute,metadata
* The ``.config`` and ``.postinst`` maintainer scripts of the
  ``nova-common`` package automatically detect the IP address to set
  in the ``my_ip`` option of the ``[DEFAULT]`` section. Debconf
  normally still prompts for this value, and you can verify that it
  is correct in ``nova.conf`` after ``nova-common`` is installed:
.. code-block:: ini
[DEFAULT]
...
my_ip = 10.0.0.11
* In the ``[DEFAULT]`` section, enable support for the Networking service:
.. code-block:: ini
[DEFAULT]
...
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
.. note::
By default, Compute uses an internal firewall driver. Since the
Networking service includes a firewall driver, you must disable the
Compute firewall driver by using the
``nova.virt.firewall.NoopFirewallDriver`` firewall driver.
* In the ``[vnc]`` section, configure the VNC proxy to use the management
interface IP address of the controller node:
.. code-block:: ini
[vnc]
...
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
* In the ``[glance]`` section, configure the location of the
Image service API:
.. code-block:: ini
[glance]
...
api_servers = http://controller:9292
Finalize installation
---------------------
* Restart the Compute services:
.. code-block:: console
# service nova-api restart
# service nova-consoleauth restart
# service nova-scheduler restart
# service nova-conductor restart
# service nova-novncproxy restart
../../install-guide/source/nova-verify.rst
../../install-guide/source/nova.rst
../../install-guide/source/overview.rst
../../install-guide/source/shared
../../install-guide/source/swift-controller-include.txt
../../install-guide/source/swift-controller-install.rst
../../install-guide/source/swift-finalize-installation.rst
../../install-guide/source/swift-initial-rings.rst