Update deployment documentation

Recently we updated our test jobs so that all of them
use the `deploy-env` Ansible role, which uses
Kubeadm to deploy the test Kubernetes cluster.

The role works for both multi-node and single-node
environments. Although the deployment of Kubernetes itself
is out of the scope of OpenStack-Helm, we recommend using this
role to deploy test and development Kubernetes clusters.
So at the moment there is no need to provide
different sets of tools for single-node and multi-node test environments.
The difference is now only a matter of the Ansible inventory file.

Also the deployment procedure of OpenStack on top of Kubernetes
using Helm is the same for multi-node and single-node clusters
because it only relies on the Kubernetes API.

We will be improving the `deploy-env` role even further and
cleaning up the deployment scripts and the documentation
to provide a clear experience for OpenStack-Helm users.

Change-Id: I70236c4a2b870b52d2b01f65b1ef9b9518646964
Vladimir Kozhukalov 2023-09-25 21:34:52 -05:00
parent 5500b2ae0b
commit 1a885ddd1f
31 changed files with 611 additions and 2617 deletions


@ -1,26 +0,0 @@
The source repository for this project can be found at:
https://opendev.org/openstack/openstack-helm
Pull requests submitted through GitHub are not monitored.
To start contributing to OpenStack, follow the steps in the contribution guide
to set up and use Gerrit:
https://docs.openstack.org/contributors/code-and-documentation/quick-start.html
Bugs should be filed on StoryBoard:
https://storyboard.openstack.org/#!/project/openstack/openstack-helm
For more specific information about contributing to this repository, see the
openstack-helm contributor guide:
https://docs.openstack.org/openstack-helm/latest/contributor/contributing.html
Chart tarballs are published and can be found in the respective subfolder under
https://tarballs.opendev.org/openstack/
Versioning and release notes for each chart update are now required in order to
better support the evolving nature of the OpenStack platform.


@ -9,55 +9,99 @@ The goal of OpenStack-Helm is to provide a collection of Helm charts that
simply, resiliently, and flexibly deploy OpenStack and related services
on Kubernetes.
Versions supported
------------------
The table below shows the combinations of the OpenStack/platform/Kubernetes versions
that are tested and proven to work.
.. list-table::
:widths: 30 30 30 30
:header-rows: 1
* - OpenStack version
- Host OS
- Image OS
- Kubernetes version
* - Victoria
- Ubuntu Focal
- Ubuntu Focal
- >=1.24,<=1.26
* - Wallaby
- Ubuntu Focal
- Ubuntu Focal
- >=1.24,<=1.26
* - Xena
- Ubuntu Focal
- Ubuntu Focal
- >=1.24,<=1.26
* - Yoga
- Ubuntu Focal
- Ubuntu Focal
- >=1.24,<=1.26
* - Zed
- Ubuntu Focal
- Ubuntu Focal
- >=1.24,<=1.26
* - Zed
- Ubuntu Jammy
- Ubuntu Jammy
- >=1.24,<=1.26
* - 2023.1 (Antelope)
- Ubuntu Focal
- Ubuntu Focal
- >=1.24,<=1.26
* - 2023.1 (Antelope)
- Ubuntu Jammy
- Ubuntu Jammy
- >=1.24,<=1.26
* - 2023.2 (Bobcat)
- Ubuntu Jammy
- Ubuntu Jammy
- >=1.24,<=1.26
Communication
-------------
* Join us on `IRC <irc://chat.oftc.net/openstack-helm>`_:
#openstack-helm on oftc
* Community `IRC Meetings
<http://eavesdrop.openstack.org/#OpenStack-Helm_Team_Meeting>`_:
[Every Tuesday @ 1500 UTC], #openstack-helm in IRC (OFTC)
* Meeting Agenda Items: `Agenda
<https://etherpad.openstack.org/p/openstack-helm-meeting-agenda>`_
``#openstack-helm`` on oftc
* Join us on `Slack <https://kubernetes.slack.com/messages/C3WERB7DE/>`_
- #openstack-helm
(this is the preferred way of communication): ``#openstack-helm``
* Join us on `Openstack-discuss <https://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss>`_
mailing list (use subject prefix ``[openstack-helm]``)
The list of OpenStack-Helm core team members is available here:
`openstack-helm-core <https://review.opendev.org/#/admin/groups/1749,members>`_.
Storyboard
----------
Bugs and enhancements are tracked via OpenStack-Helm's
Have you found an issue and want to make sure we are aware of it? You can report it on our
`Storyboard <https://storyboard.openstack.org/#!/project_group/64>`_.
Installation and Development
----------------------------
Bugs should be filed as stories in Storyboard, not GitHub.
Please review our
`documentation <https://docs.openstack.org/openstack-helm/latest/>`_.
For quick installation, evaluation, and convenience, we have a minikube
based all-in-one solution that runs in a Docker container. The setup
can be found
`here <https://docs.openstack.org/openstack-helm/latest/install/developer/index.html>`_.
Please be as specific as possible when describing an issue. Usually, having
more context in the bug description means less effort for a developer to
reproduce the bug and understand how to fix it.
Also, before filing a bug on the OpenStack-Helm `Storyboard <https://storyboard.openstack.org/#!/project_group/64>`_,
please try to identify whether the issue is indeed related to the deployment
process and not to the software being deployed.
Other links
-----------
Our documentation is available `here <https://docs.openstack.org/openstack-helm/latest/>`_.
This project is under active development. We encourage anyone interested in
OpenStack-Helm to review our
`Installation <https://docs.openstack.org/openstack-helm/latest/install/index.html>`_
documentation. Feel free to ask questions or check out our current
`Storyboard backlog <https://storyboard.openstack.org/#!/project_group/64>`_.
OpenStack-Helm to review the `code changes <https://review.opendev.org/q/(project:openstack/openstack-helm+OR+project:openstack/openstack-helm-infra+OR+project:openstack/openstack-helm-images+OR+project:openstack/loci)+AND+-is:abandoned>`_
To evaluate a multinode installation, follow the
`Bare Metal <https://docs.openstack.org/openstack-helm/latest/install/multinode.html>`_
install guide.
Our repositories:
Repository
----------
* OpenStack charts `openstack-helm <https://opendev.org/openstack/openstack-helm.git>`_
* Infra charts `openstack-helm-infra <https://opendev.org/openstack/openstack-helm-infra.git>`_
* Building images `openstack-helm-images <https://opendev.org/openstack/openstack-helm-images.git>`_
* Framework for building OpenStack images `loci <https://opendev.org/openstack/loci.git>`_
Developers wishing to work on the OpenStack-Helm project should always base
their work on the latest code, available from the OpenStack-Helm git repository.
`OpenStack-Helm git repository <https://opendev.org/openstack/openstack-helm/>`_
Contributing
------------
We welcome contributions. Check out `this <CONTRIBUTING.rst>`_ document if
you would like to get involved.
We welcome contributions in any form: code review, code changes, usage feedback, updating documentation.


@ -42,7 +42,7 @@ master_doc = 'index'
# General information about the project.
project = 'openstack-helm'
copyright = '2016-2022, OpenStack Foundation'
copyright = '2016-2023, OpenStack Foundation'
# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True


@ -1,108 +0,0 @@
============================
So You Want to Contribute...
============================
For general information on contributing to OpenStack, please check out the
`contributor guide <https://docs.openstack.org/contributors/>`_ to get started.
It covers all the basics that are common to all OpenStack projects: the accounts
you need, the basics of interacting with our Gerrit review system, how we
communicate as a community, etc.
Additional information can be found in the
`OpenDev Developer's Guide
<https://docs.opendev.org/opendev/infra-manual/latest/developers.html>`_.
Below we cover the more project-specific information you need to get started
with OpenStack-Helm.
Communication
~~~~~~~~~~~~~
.. This would be a good place to put the channel you chat in as a project; when/
where your meeting is, the tags you prepend to your ML threads, etc.
* Join us on `IRC <irc://chat.oftc.net/openstack-helm>`_:
#openstack-helm on oftc
* Join us on `Slack <https://kubernetes.slack.com/messages/C3WERB7DE/>`_
(this is the preferred way of communication): #openstack-helm
Contacting the Core Team
~~~~~~~~~~~~~~~~~~~~~~~~
.. This section should list the core team, their irc nicks, emails, timezones
etc. If all this info is maintained elsewhere (i.e. a wiki), you can link to
that instead of enumerating everyone here.
The project's core team can be contacted via IRC or Slack. The list of current core reviewers
can be found here: `openstack-helm-core <https://review.opendev.org/#/admin/groups/1749,members>`_.
New Feature Planning
~~~~~~~~~~~~~~~~~~~~
.. This section is for talking about the process to get a new feature in. Some
projects use blueprints, some want specs, some want both! Some projects
stick to a strict schedule when selecting what new features will be reviewed
for a release.
New features are planned and implemented through the process described in
`Project Specifications <../specs/index.html>`_ section of this document.
Task Tracking
~~~~~~~~~~~~~
.. This section is about where you track tasks- launchpad? storyboard? is there
more than one launchpad project? what's the name of the project group in
storyboard?
We track our tasks on our StoryBoard_.
If you're looking for some smaller, easier work item to pick up and get started
on, search for the 'low-hanging-fruit' tag.
.. NOTE: If your tag is not 'low-hanging-fruit' please change the text above.
Tasks for other OpenStack-Helm components can be found on the `group Storyboard`_.
Reporting a Bug
~~~~~~~~~~~~~~~
.. Pretty self explanatory section, link directly to where people should report
bugs for your project.
Have you found an issue and want to make sure we are aware of it? You can report it on our
Storyboard_.
If an issue is on one of other OpenStack-Helm components, report it to the
appropriate `group Storyboard`_.
Bugs should be filed as stories in Storyboard, not GitHub.
Please be as specific as possible when describing an issue. Usually, having
more context in the bug description means less effort for a developer to
reproduce the bug and understand how to fix it.
Also, before filing a bug on the OpenStack-Helm Storyboard_, please try to identify
whether the issue is indeed related to the deployment process and not to the software
being deployed.
Getting Your Patch Merged
~~~~~~~~~~~~~~~~~~~~~~~~~
.. This section should have info about what it takes to get something merged. Do
you require one or two +2's before +W? Do some of your repos require unit
test changes with all patches? etc.
We require two Code-Review +2's from reviewers before getting your patch merged
with a Workflow +1. Trivial patches (e.g. typos) can be merged with one
Code-Review +2.
Changes affecting code base often require CI tests and documentation to be added
in the same patch set.
Pull requests submitted through GitHub will be ignored.
Project Team Lead Duties
~~~~~~~~~~~~~~~~~~~~~~~~
.. this section is where you can put PTL specific duties not already listed in
the common PTL guide (linked below), or if you already have them written
up elsewhere you can link to that doc here.
All common PTL duties are enumerated in the `PTL guide
<https://docs.openstack.org/project-team-guide/ptl.html>`_.
.. _Storyboard: https://storyboard.openstack.org/#!/project/openstack/openstack-helm
.. _group Storyboard: https://storyboard.openstack.org/#!/project_group/64


@ -1,145 +0,0 @@
====================
OpenStack-Helm Gates
====================
To facilitate ease of testing and debugging, information regarding gates and
their functionality can be found here.
OpenStack-Helm's single node and multinode gates leverage the kubeadm-aio
environment created and maintained for use as a development environment. All
information regarding the kubeadm-aio environment can be found here_.
.. _here: https://docs.openstack.org/openstack-helm/latest/install/developer/index.html
Gate Checks
-----------
OpenStack-Helm currently checks the following scenarios:
- Testing any documentation changes and impacts.
- Running Make on each chart, which lints and packages the charts. This gate
does not stand up a Kubernetes cluster.
- Provisioning a single node cluster and deploying the OpenStack services. This
check is provided for: Ubuntu-1604, CentOS-7, and Fedora-25.
- Provisioning a multi-node Ubuntu-1604 cluster and deploying the OpenStack
services. This check is provided for both a two node cluster and a three
node cluster.
Gate Functions
--------------
To provide reusable components for gate functionality, functions have been
provided in the gates/funcs directory. These functions include:
- Functions for common host preparation operations, found in common.sh
- Functions for Helm specific operations, found in helm.sh. These functions
include: installing Helm, serving a Helm repository locally, linting and
building all Helm charts, running Helm tests on a release, installing the
helm template plugin, and running the helm template plugin against a chart.
- Functions for Kubernetes specific operations, found in kube.sh. These
functions include: waiting for pods in a specific namespace to register as
ready, waiting for all nodes to register as ready, installing the requirements
for the kubeadm-aio container used in the gates, building the kubeadm-aio
container, launching the kubeadm-aio container, and replacing the
kube-controller-manager with a specific image necessary for ceph functionality.
- Functions for network specific operations, found in network.sh. These
functions include: creating a backup of the host's resolv.conf file before
deploying the kubeadm environments, restoring the original resolv.conf
settings, creating a backup of the host's /etc/hosts file before adding the
hosts interface and address, and restoring the original /etc/hosts file.
- Functions for OpenStack specific operations, found in openstack.sh. These
functions include: waiting for a successful ping, and waiting for a booted
virtual machine's status to return as ACTIVE.
Any additional functions required for testing new charts or improving the gate
workflow should be placed in the appropriate location.
Gate Output
-----------
To provide meaningful output from the gates, all information pertaining to the
components of the cluster and workflow are output to the logs directory inside
each gate. The contents of the log directory are as follows:
- The dry-runs directory contains the rendered output of Helm dry-run installs
on each of the OpenStack service charts. This gives visibility into the
manifests created by the templates with the supplied values. When the dry-run
gate fails, the reason should be apparent in the dry-runs output. The logs
found here are helpful in identifying issues resulting from using helm-toolkit
functions incorrectly or other rendering issues with gotpl.
- The K8s directory contains the logs and output of the Kubernetes objects. It
includes: pods, nodes, secrets, services, namespaces, configmaps, deployments,
daemonsets, and statefulsets. Descriptions for the state of all resources
during execution are found here, and this information can prove valuable when
debugging issues raised during a check. When a single node or multi-node
check fails, this is the first place to look. The logs found here are helpful
when the templates render correctly, but the services are not functioning
correctly, whether due to service configuration issues or issues with the
pods themselves.
- The nodes directory contains information about the node the gate tests are
running on in openstack-infra. This includes: the network interfaces, the
contents of iptables, the host's resolv.conf, and the kernel IP routing table.
These logs can be helpful when trying to identify issues with host networking
or other issues at the node level.
Adding Services
---------------
As charts for additional services are added to OpenStack-Helm, they should be
included in the gates. Adding new services to the gates allows a chart
developer and the review team to identify any potential issues associated with
a new service. All services are currently launched in the gate via
a series of launch scripts of the format ``NNN-service-name.sh`` where ``NNN``
dictates the order these scripts are launched. The script should contain
an installation command like:
::
helm install --namespace=openstack ${WORK_DIR}/mistral --name=mistral
Some services in the gate require specific overrides to the default values in
the chart's values.yaml file. If a service requires multiple overrides to
function in the gate, the service should include a separate values.yaml file
placed in the tools/overrides/mvp directory. The <service>.yaml MVP files
provide a configuration file to use for overriding default configuration values
in the chart's values.yaml as an alternative to overriding individual values
during installation. A chart that requires an MVP overrides file
is installed using the following format:
::
helm install --namespace=openstack ${WORK_DIR}/cinder --name=cinder \
--values=${WORK_DIR}/tools/overrides/mvp/cinder.yaml
Adding Tests
------------
As new charts are developed and the services are added to the gate, an
associated Helm test should be introduced to the gates. The appropriate place
for executing these tests is in the respective service's launch script, and
must be placed after the entry for installing the service and any associated
overrides. Any tests that use the Rally testing framework should leverage the
helm_test_deployment function in the aforementioned funcs/helm.sh file. For
example, a Helm test for Mistral might look like:
::
helm_test_deployment mistral 600
This results in the gate running the following:
::
helm test --timeout 600 mistral
mkdir -p logs/rally
kubectl logs -n openstack mistral-rally-test > logs/rally/mistral
kubectl delete -n openstack pod mistral-rally-test
Any tests that do not use the Rally testing framework would need to be handled
in the appropriate manner in the launch script. This would ideally result in new
functions that could be reused, or expansion of the gate scripts to include
scenarios beyond basic service launches.


@ -4,16 +4,14 @@ Welcome to OpenStack-Helm's documentation!
Contents:
.. toctree::
:maxdepth: 2
:maxdepth: 2
contributor/contributing
devref/index
gates
install/index
readme
specs/index
testing/index
troubleshooting/index
readme
install/index
devref/index
testing/index
troubleshooting/index
specs/index
Indices and Tables
==================


@ -0,0 +1,34 @@
Before deployment
=================
Before proceeding with the steps outlined in the following
sections and executing the actions detailed therein, it is
imperative that you clone the essential Git repositories
containing all the required Helm charts, deployment scripts,
and Ansible roles. This preliminary step will ensure that
you have access to the necessary assets for a seamless
deployment process.
.. code-block:: bash
mkdir ~/osh
cd ~/osh
git clone https://opendev.org/openstack/openstack-helm.git
git clone https://opendev.org/openstack/openstack-helm-infra.git
All further steps assume these two repositories are cloned into the
`~/osh` directory.
Also, before deploying the OpenStack cluster you have to specify the
OpenStack version and the operating system version that you would like to use
for the deployment. To do this, export the following environment variables
.. code-block:: bash
export OPENSTACK_RELEASE=2023.2
export CONTAINER_DISTRO_NAME=ubuntu
export CONTAINER_DISTRO_VERSION=jammy
.. note::
The list of supported versions can be found :doc:`here </readme>`.


@ -1,70 +0,0 @@
==============================
Common Deployment Requirements
==============================
Passwordless Sudo
=================
Throughout this guide the assumption is that the user is
``ubuntu``. Because this user has to execute root-level commands
remotely on other nodes, it is advised to add the following lines
to ``/etc/sudoers`` for each node:
.. code-block:: shell
root ALL=(ALL) NOPASSWD: ALL
ubuntu ALL=(ALL) NOPASSWD: ALL
Latest Version Installs
=======================
On the host or master node, install the latest versions of Git, CA Certs & Make if necessary
.. literalinclude:: ../../../tools/deployment/developer/common/000-install-packages.sh
:language: shell
:lines: 1,17-
Proxy Configuration
===================
.. note:: This guide assumes that users wishing to deploy behind a proxy have already
defined the conventional proxy environment variables ``http_proxy``,
``https_proxy``, and ``no_proxy``.
In order to deploy OpenStack-Helm behind corporate proxy servers, add the
following entries to ``openstack-helm-infra/tools/gate/devel/local-vars.yaml``.
.. code-block:: yaml
proxy:
  http: http://username:password@host:port
  https: https://username:password@host:port
  noproxy: 127.0.0.1,localhost,172.17.0.1,.svc.cluster.local
.. note:: The ``.svc.cluster.local`` address is required to allow the OpenStack
client to communicate without being routed through proxy servers. The IP
address ``172.17.0.1`` is the advertised IP address for the Kubernetes API
server. Replace the addresses if your configuration does not match the
one defined above.
Add the address of the Kubernetes API, ``172.17.0.1``, and
``.svc.cluster.local`` to your ``no_proxy`` and ``NO_PROXY`` environment
variables.
.. code-block:: bash
export no_proxy=${no_proxy},172.17.0.1,.svc.cluster.local
export NO_PROXY=${NO_PROXY},172.17.0.1,.svc.cluster.local
By default, this installation will use Google DNS Server IPs (8.8.8.8, 8.8.4.4)
and will update resolv.conf as a result. If those IPs are blocked by the proxy,
this will overwrite the original DNS entries and result in the inability to
connect to anything on the network behind the proxy. These DNS nameserver entries
can be changed by updating the ``external_dns_nameservers`` entry in this file:
.. code-block:: bash
openstack-helm-infra/tools/images/kubeadm-aio/assets/opt/playbooks/vars.yaml
It is recommended to add your own existing DNS nameserver entries to avoid
losing connection.
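For illustration, here is a sketch of such an override in that vars.yaml file, assuming the
entry takes a list of addresses; the nameserver IPs below are placeholders for your own DNS
servers:

.. code-block:: yaml

   external_dns_nameservers:
     - 10.0.0.2    # placeholder: your existing corporate DNS server
     - 8.8.8.8     # optional fallback, only useful if reachable through the proxy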


@ -0,0 +1,52 @@
Deploy Ceph
===========
Ceph is a highly scalable and fault-tolerant distributed storage
system designed to store vast amounts of data across a cluster of
commodity hardware. It offers object storage, block storage, and
file storage capabilities, making it a versatile solution for
various storage needs. Ceph's architecture is based on a distributed
object store, where data is divided into objects, each with its
unique identifier, and distributed across multiple storage nodes.
It uses a CRUSH algorithm to ensure data resilience and efficient
data placement, even as the cluster scales. Ceph is widely used
in cloud computing environments and provides a cost-effective and
flexible storage solution for organizations managing large volumes of data.
Kubernetes introduced the CSI standard to allow storage providers
like Ceph to implement their drivers as plugins. Kubernetes can
use the CSI driver for Ceph to provision and manage volumes
directly. By means of CSI stateful applications deployed on top
of Kubernetes can use Ceph to store their data.
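As an illustration of this CSI workflow (not part of the OpenStack-Helm deployment scripts),
a stateful application could request a Ceph-backed volume with a PersistentVolumeClaim like
the sketch below; the StorageClass name ``general`` is an assumption and depends on how your
Ceph CSI provisioner is configured:

.. code-block:: yaml

   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: app-data                 # hypothetical claim name
   spec:
     accessModes:
       - ReadWriteOnce
     resources:
       requests:
         storage: 10Gi
     storageClassName: general      # assumed RBD-backed StorageClass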
At the same time, Ceph provides the RBD API, which applications
can utilize to create and mount block devices distributed across
the Ceph cluster. The OpenStack Cinder service utilizes this Ceph
capability to offer persistent block devices to virtual machines
managed by the OpenStack Nova.
The recommended way to deploy Ceph on top of Kubernetes is by means
of the `Rook`_ operator. Rook provides Helm charts to deploy the operator
itself, which extends the Kubernetes API by adding CRDs that enable
managing Ceph clusters via Kubernetes custom objects. For details please
refer to the `Rook`_ documentation.
To deploy the Rook Ceph operator and a Ceph cluster you can use the script
`ceph.sh`_. Then, to generate the client secrets to interface with the Ceph
RBD API, use the script `ceph-ns-activate.sh`_
.. code-block:: bash
cd ~/osh/openstack-helm-infra
./tools/deployment/openstack-support-rook/020-ceph.sh
./tools/deployment/openstack-support-rook/025-ceph-ns-activate.sh
.. note::
Please keep in mind that these are the deployment scripts that we
use for testing. For example, we place Ceph OSD data objects on loop devices,
which are slow and not recommended for use in production.
.. _Rook: https://rook.io/
.. _ceph.sh: https://opendev.org/openstack/openstack-helm-infra/src/branch/master/tools/deployment/openstack-support-rook/020-ceph.sh
.. _ceph-ns-activate.sh: https://opendev.org/openstack/openstack-helm-infra/src/branch/master/tools/deployment/openstack-support-rook/025-ceph-ns-activate.sh


@ -0,0 +1,52 @@
Deploy ingress controller
=========================
Deploying an ingress controller when deploying OpenStack on Kubernetes
is essential to ensure proper external access and SSL termination
for your OpenStack services.
In the OpenStack-Helm project, we utilize multiple ingress controllers
to optimize traffic routing. Specifically, we deploy three independent
instances of the Nginx ingress controller for distinct purposes:
External Traffic Routing
~~~~~~~~~~~~~~~~~~~~~~~~
* ``Namespace``: kube-system
* ``Functionality``: This instance monitors ingress objects across all
namespaces, primarily focusing on routing external traffic into the
OpenStack environment.
Internal Traffic Routing within OpenStack
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* ``Namespace``: openstack
* ``Functionality``: Designed to handle traffic exclusively within the
OpenStack namespace, this instance plays a crucial role in SSL
termination for enhanced security among OpenStack services.
Traffic Routing to Ceph Rados Gateway Service
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* ``Namespace``: ceph
* ``Functionality``: Dedicated to routing traffic specifically to the
Ceph Rados Gateway service, ensuring efficient communication with
Ceph storage resources.
By deploying these three distinct ingress controller instances in their
respective namespaces, we optimize traffic management and security within
the OpenStack-Helm environment.
To deploy these three ingress controller instances use the script `ingress.sh`_
.. code-block:: bash
cd ~/osh/openstack-helm
./tools/deployment/component/common/ingress.sh
.. note::
This script uses Helm charts from the `openstack-helm-infra`_ repository. We assume
this repo is cloned to the `~/osh` directory. See this :doc:`section </install/before_deployment>`.
.. _ingress.sh: https://opendev.org/openstack/openstack-helm/src/branch/master/tools/deployment/component/common/ingress.sh
.. _openstack-helm-infra: https://opendev.org/openstack/openstack-helm-infra.git


@ -0,0 +1,143 @@
Deploy Kubernetes
=================
OpenStack-Helm provides charts that can be deployed on any Kubernetes cluster if it meets
the supported version requirements. However, deploying the Kubernetes cluster itself is beyond
the scope of OpenStack-Helm.
You can use any Kubernetes deployment tool for this purpose. In this guide, we detail how to set up
a Kubernetes cluster using Kubeadm and Ansible. While not production-ready, this cluster is ideal
as a starting point for lab or proof-of-concept environments.
All OpenStack projects test their code through an infrastructure managed by the CI
tool, Zuul, which executes Ansible playbooks on one or more test nodes. Therefore, we employ Ansible
roles/playbooks to install required packages, deploy Kubernetes, and then execute tests on it.
To establish a test environment, the Ansible role deploy-env_ is employed. This role establishes
a basic single/multi-node Kubernetes cluster, ensuring the functionality of commonly used
deployment configurations. The role is compatible with Ubuntu Focal and Ubuntu Jammy distributions.
Install Ansible
---------------
.. code-block:: bash
pip install ansible
Prepare Ansible roles
---------------------
Here is the Ansible `playbook`_ that is used to deploy Kubernetes. The roles used in this playbook
are defined in different repositories. So, in addition to the OpenStack-Helm repositories
that we assume have already been cloned into the `~/osh` directory, you have to clone
one more repository
.. code-block:: bash
cd ~/osh
git clone https://opendev.org/zuul/zuul-jobs.git
Now let's set the environment variable ``ANSIBLE_ROLES_PATH`` which specifies
where Ansible will look up roles
.. code-block:: bash
export ANSIBLE_ROLES_PATH=~/osh/openstack-helm-infra/roles:~/osh/zuul-jobs/roles
To avoid setting it every time you start a new terminal instance, you can define this
in the Ansible configuration file (see the sketch below). Please refer to the Ansible documentation for details.
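A minimal sketch that persists the same roles path in an Ansible configuration file is shown
below; the config file location is an assumption (Ansible also reads ``./ansible.cfg`` and
``/etc/ansible/ansible.cfg``), and the paths match the repositories cloned above.

.. code-block:: bash

   # equivalent to exporting ANSIBLE_ROLES_PATH in every shell session
   cat > ~/.ansible.cfg <<EOF
   [defaults]
   roles_path = ~/osh/openstack-helm-infra/roles:~/osh/zuul-jobs/roles
   EOF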
Prepare Ansible inventory
-------------------------
We assume you have three nodes, usually VMs. Those nodes must be available via
SSH using public key authentication, and an SSH user (let's say `ubuntu`)
must have passwordless sudo on the nodes.
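A quick way to check this prerequisite from the deployment host is sketched below; the node
address and key path are the same assumptions used in the inventory example that follows.

.. code-block:: bash

   # should print OK without prompting for a password
   ssh -i ~/.ssh/id_rsa ubuntu@10.10.10.11 "sudo -n true" && echo OK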
Create the Ansible inventory file using the following command
.. code-block:: bash
cat > ~/osh/inventory.yaml <<EOF
all:
  vars:
    kubectl:
      user: ubuntu
      group: ubuntu
    calico_version: "v3.25"
    crictl_version: "v1.26.1"
    helm_version: "v3.6.3"
    kube_version: "1.26.3-00"
    yq_version: "v4.6.0"
  children:
    primary:
      hosts:
        primary:
          ansible_port: 22
          ansible_host: 10.10.10.10
          ansible_user: ubuntu
          ansible_ssh_private_key_file: ~/.ssh/id_rsa
          ansible_ssh_extra_args: -o StrictHostKeyChecking=no
    nodes:
      hosts:
        node-1:
          ansible_port: 22
          ansible_host: 10.10.10.11
          ansible_user: ubuntu
          ansible_ssh_private_key_file: ~/.ssh/id_rsa
          ansible_ssh_extra_args: -o StrictHostKeyChecking=no
        node-2:
          ansible_port: 22
          ansible_host: 10.10.10.12
          ansible_user: ubuntu
          ansible_ssh_private_key_file: ~/.ssh/id_rsa
          ansible_ssh_extra_args: -o StrictHostKeyChecking=no
EOF
If you have just one node, then it must be the `primary` node in the file above.
.. note::
If you would like to set up a Kubernetes cluster on the local host,
configure the Ansible inventory to designate the `primary` node as the local host, as in the sketch below.
For further guidance, please refer to the Ansible documentation.
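For example, a minimal sketch of the `primary` group for such a local-host setup could look
like this (it relies on Ansible's built-in local connection plugin):

.. code-block:: yaml

   primary:
     hosts:
       primary:
         # run tasks directly on the local host instead of connecting over SSH
         ansible_connection: local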
Deploy Kubernetes
-----------------
.. code-block:: bash
cd ~/osh
ansible-playbook -i inventory.yaml ~/osh/openstack-helm/tools/gate/playbooks/deploy-env.yaml
The playbook only changes the state of the nodes listed in the Ansible inventory.
It installs necessary packages, deploys and configures Containerd and Kubernetes. For
details please refer to the role `deploy-env`_ and other roles (`ensure-python`_, `ensure-pip`_, `clear-firewall`_)
used in the playbook.
.. note::
The role `deploy-env`_ by default will use the Google DNS servers (8.8.8.8 and 8.8.4.4)
and update `/etc/resolv.conf` on the nodes. These DNS nameserver entries can be changed by
updating the file ``~/osh/openstack-helm-infra/roles/deploy-env/files/resolv.conf``.
It also configures the internal Kubernetes DNS server (CoreDNS) to work as a recursive DNS server
and adds its IP address (10.96.0.10 by default) to the `/etc/resolv.conf` file.
Programs running on those nodes will be able to resolve names in the
default Kubernetes domain `.svc.cluster.local`. E.g. if you run the OpenStack command line
client on one of those nodes, it will be able to access the OpenStack API services via
these names (see the quick check below).
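A generic way to verify this name resolution (not part of the deployment scripts) is to resolve
a service name that exists in every Kubernetes cluster:

.. code-block:: bash

   # run on one of the cluster nodes; the kubernetes.default service always exists,
   # so it is a safe name for checking that cluster DNS resolution works
   getent hosts kubernetes.default.svc.cluster.local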
.. note::
The role `deploy-env`_ installs and configures Kubectl and Helm on the `primary` node.
You can log in to it via SSH, clone the `openstack-helm`_ and `openstack-helm-infra`_ repositories,
and then run the OpenStack-Helm deployment scripts, which employ Kubectl and Helm to deploy
OpenStack.
.. _deploy-env: https://opendev.org/openstack/openstack-helm-infra/src/branch/master/roles/deploy-env
.. _ensure-python: https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/ensure-python
.. _ensure-pip: https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/ensure-pip
.. _clear-firewall: https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/clear-firewall
.. _openstack-helm: https://opendev.org/openstack/openstack-helm.git
.. _openstack-helm-infra: https://opendev.org/openstack/openstack-helm-infra.git
.. _playbook: https://opendev.org/openstack/openstack-helm/src/branch/master/tools/gate/playbooks/deploy-env.yaml


@ -0,0 +1,116 @@
Deploy OpenStack
================
Now we are ready for the deployment of OpenStack components.
Some of them are mandatory while others are optional.
Keystone
--------
OpenStack Keystone is the identity and authentication service
for the OpenStack cloud computing platform. It serves as the
central point of authentication and authorization, managing user
identities, roles, and access to OpenStack resources. Keystone
ensures secure and controlled access to various OpenStack services,
making it an integral component for user management and security
in OpenStack deployments.
This is a ``mandatory`` component of any OpenStack cluster.
To deploy the Keystone service run the script `keystone.sh`_
.. code-block:: bash
cd ~/osh/openstack-helm
./tools/deployment/component/keystone/keystone.sh
Heat
----
OpenStack Heat is an orchestration service that provides templates
and automation for deploying and managing cloud resources. It enables
users to define infrastructure as code, making it easier to create
and manage complex environments in OpenStack through templates and
automation scripts.
Here is the script `heat.sh`_ for the deployment of the Heat service.
.. code-block:: bash
cd ~/osh/openstack-helm
./tools/deployment/component/heat/heat.sh
Glance
------
OpenStack Glance is the image service component of OpenStack.
It manages and catalogs virtual machine images, such as operating
system images and snapshots, making them available for use in
OpenStack compute instances.
This is a ``mandatory`` component.
The Glance deployment script is here `glance.sh`_.
.. code-block:: bash
cd ~/osh/openstack-helm
./tools/deployment/component/glance/glance.sh
Placement, Nova, Neutron
------------------------
OpenStack Placement is a service that helps manage and allocate
resources in an OpenStack cloud environment. It helps Nova (compute)
find and allocate the right resources (CPU, memory, etc.)
for virtual machine instances.
OpenStack Nova is the compute service responsible for managing
and orchestrating virtual machines in an OpenStack cloud.
It provisions and schedules instances, handles their lifecycle,
and interacts with underlying hypervisors.
OpenStack Neutron is the networking service that provides network
connectivity and enables users to create and manage network resources
for their virtual machines and other services.
These three services are ``mandatory`` and together constitute
the so-called ``compute kit``.
To set up the compute service, the first step involves deploying the
hypervisor backend using the `libvirt.sh`_ script. By default, the
networking service is deployed with OpenvSwitch as the networking
backend, and the deployment script for OpenvSwitch can be found
here: `openvswitch.sh`_. Finally, the deployment script for
Placement, Nova and Neutron is here: `compute-kit.sh`_.
.. code-block:: bash
cd ~/osh/openstack-helm
./tools/deployment/component/compute-kit/openvswitch.sh
./tools/deployment/component/compute-kit/libvirt.sh
./tools/deployment/component/compute-kit/compute-kit.sh
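After these scripts finish, a generic sanity check (not part of the deployment scripts
themselves) is to confirm that the pods in the ``openstack`` namespace are running:

.. code-block:: bash

   kubectl get pods --namespace=openstack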
Cinder
------
OpenStack Cinder is the block storage service component of the
OpenStack cloud computing platform. It manages and provides persistent
block storage to virtual machines, enabling users to attach and detach
persistent storage volumes to their VMs as needed.
To deploy the OpenStack Cinder service use the script `cinder.sh`_
.. code-block:: bash
cd ~/osh/openstack-helm
./tools/deployment/component/cinder/cinder.sh
.. _keystone.sh: https://opendev.org/openstack/openstack-helm/src/branch/master/tools/deployment/component/keystone/keystone.sh
.. _heat.sh: https://opendev.org/openstack/openstack-helm/src/branch/master/tools/deployment/component/heat/heat.sh
.. _glance.sh: https://opendev.org/openstack/openstack-helm/src/branch/master/tools/deployment/component/glance/glance.sh
.. _libvirt.sh: https://opendev.org/openstack/openstack-helm/src/branch/master/tools/deployment/component/compute-kit/libvirt.sh
.. _openvswitch.sh: https://opendev.org/openstack/openstack-helm/src/branch/master/tools/deployment/component/compute-kit/openvswitch.sh
.. _compute-kit.sh: https://opendev.org/openstack/openstack-helm/src/branch/master/tools/deployment/component/compute-kit/compute-kit.sh
.. _cinder.sh: https://opendev.org/openstack/openstack-helm/src/branch/master/tools/deployment/component/cinder/cinder.sh


@ -0,0 +1,54 @@
Deploy OpenStack backend
========================
OpenStack is a cloud computing platform that consists of a variety of
services, and many of these services rely on backend services like RabbitMQ,
MariaDB, and Memcached for their proper functioning. These backend services
play crucial roles in OpenStack's architecture.
RabbitMQ
~~~~~~~~
RabbitMQ is a message broker that is often used in OpenStack to handle
messaging between different components and services. It helps in managing
communication and coordination between various parts of the OpenStack
infrastructure. Services like Nova (compute), Neutron (networking), and
Cinder (block storage) use RabbitMQ to exchange messages and ensure
proper orchestration.
MariaDB
~~~~~~~
Database services like MariaDB are used as the backend database for several
OpenStack services. These databases store critical information such as user
credentials, service configurations, and data related to instances, networks,
and volumes. Services like Keystone (identity), Nova, Glance (image), and
Cinder rely on MariaDB for data storage.
Memcached
~~~~~~~~~
Memcached is a distributed memory object caching system that is often used
in OpenStack to improve performance and reduce database load. OpenStack
services cache frequently accessed data in Memcached, which helps in faster
data retrieval and reduces the load on the database backend. Services like
Keystone and Nova can benefit from Memcached for caching.
Deployment
----------
The following scripts `rabbitmq.sh`_, `mariadb.sh`_, `memcached.sh`_ can be used to
deploy the backend services.
.. code-block:: bash
cd ~/osh/openstack-helm
./tools/deployment/component/common/rabbitmq.sh
./tools/deployment/component/common/mariadb.sh
./tools/deployment/component/common/memcached.sh
.. note::
These scripts use Helm charts from the `openstack-helm-infra`_ repository. We assume
this repo is cloned to the `~/osh` directory. See this :doc:`section </install/before_deployment>`.
.. _rabbitmq.sh: https://opendev.org/openstack/openstack-helm/src/branch/master/tools/deployment/component/common/rabbitmq.sh
.. _mariadb.sh: https://opendev.org/openstack/openstack-helm/src/branch/master/tools/deployment/component/common/mariadb.sh
.. _memcached.sh: https://opendev.org/openstack/openstack-helm/src/branch/master/tools/deployment/component/common/memcached.sh
.. _openstack-helm-infra: https://opendev.org/openstack/openstack-helm-infra.git


@ -1,92 +0,0 @@
=======================
Cleaning the Deployment
=======================
Removing Helm Charts
====================
To delete an installed helm chart, use the following command:
.. code-block:: shell
helm delete ${RELEASE_NAME} --purge
This will delete all Kubernetes resources generated when the chart was
instantiated. However, for OpenStack charts, by default, this will not delete
the database and database users that were created when the chart was installed.
All OpenStack projects can be configured such that upon deletion, their database
will also be removed. To delete the database when the chart is deleted, the
database drop job must be enabled before installing the chart. There are two
ways to enable the job: set the ``job_db_drop`` value to true in the chart's
``values.yaml`` file, or override the value using the helm install command as
follows:
.. code-block:: shell
helm install ${RELEASE_NAME} --set manifests.job_db_drop=true
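The equivalent override in a values file would be a sketch like the following; the nesting
mirrors the ``manifests.job_db_drop`` key used above:

.. code-block:: yaml

   manifests:
     job_db_drop: true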
Environment tear-down
=====================
To tear down the development environment, charts should be removed first from
the 'openstack' namespace and then the 'ceph' namespace using the commands from
the `Removing Helm Charts` section. Additionally, charts should be removed from
the 'nfs' and 'libvirt' namespaces if deploying with NFS backing or bare metal
development support. You can run the following commands to loop through and
delete the charts, then stop the kubelet systemd unit and remove all the
containers before removing the directories used on the host by pods.
.. code-block:: shell
for NS in openstack ceph nfs libvirt; do
helm ls --namespace $NS --short | xargs -r -L1 -P2 helm delete --purge
done
sudo systemctl stop kubelet
sudo systemctl disable kubelet
sudo docker ps -aq | xargs -r -L1 -P16 sudo docker rm -f
sudo rm -rf /var/lib/openstack-helm/*
# NOTE(portdirect): These directories are used by nova and libvirt
sudo rm -rf /var/lib/nova/*
sudo rm -rf /var/lib/libvirt/*
sudo rm -rf /etc/libvirt/qemu/*
#NOTE(chinasubbareddy) cleanup LVM volume groups in case of disk backed ceph osd deployments
for VG in `vgs|grep -v VG|grep -i ceph|awk '{print $1}'`; do
echo $VG
vgremove -y $VG
done
# lets delete loopback devices setup for ceph, if the device names are different in your case,
# please update them here as environmental variables as shown below.
: "${CEPH_OSD_DATA_DEVICE:=/dev/loop0}"
: "${CEPH_OSD_DB_WAL_DEVICE:=/dev/loop1}"
if [ ! -z "$CEPH_OSD_DATA_DEVICE" ]; then
ceph_osd_disk_name=`basename "$CEPH_OSD_DATA_DEVICE"`
if losetup -a|grep $ceph_osd_disk_name; then
losetup -d "$CEPH_OSD_DATA_DEVICE"
fi
fi
if [ ! -z "$CEPH_OSD_DB_WAL_DEVICE" ]; then
ceph_db_wal_disk_name=`basename "$CEPH_OSD_DB_WAL_DEVICE"`
if losetup -a|grep $ceph_db_wal_disk_name; then
losetup -d "$CEPH_OSD_DB_WAL_DEVICE"
fi
fi
echo "let's disable the service"
sudo systemctl disable loops-setup
echo "let's remove the service to setup loopback devices"
if [ -f "/etc/systemd/system/loops-setup.service" ]; then
rm /etc/systemd/system/loops-setup.service
fi
# NOTE(portdirect): Clean up mounts left behind by kubernetes pods
sudo findmnt --raw | awk '/^\/var\/lib\/kubelet\/pods/ { print $1 }' | xargs -r -L1 -P16 sudo umount -f -l
These commands will restore the environment back to a clean Kubernetes
deployment that can either be manually removed or overwritten by
restarting the deployment process. It is recommended to restart the host before
doing so to ensure any residual state, e.g. network interfaces, is removed.


@ -1,185 +0,0 @@
===============
Deploy OVS-DPDK
===============
Requirements
============
A correct DPDK configuration depends heavily on the specific hardware resources
and its configuration. Before deploying Openvswitch with DPDK, check the amount
and type of available hugepages on the host OS.
.. code-block:: shell
cat /proc/meminfo | grep Huge
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
HugePages_Total: 8
HugePages_Free: 6
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 1048576 kB
In this example, 8 hugepages of 1G size have been allocated. 2 of those are
being used and 6 are still available.
More information on how to allocate and configure hugepages on the host OS can
be found in the `Openvswitch documentation
<http://docs.openvswitch.org/en/latest/intro/install/dpdk/>`_.
In order to allow OVS inside a pod to make use of hugepages, the corresponding
type and amount of hugepages must be specified in the resource section of the
OVS chart's values.yaml:
.. code-block:: yaml
resources:
  enabled: true
  ovs:
    db:
      requests:
        memory: "128Mi"
        cpu: "100m"
      limits:
        memory: "1024Mi"
        cpu: "2000m"
    vswitchd:
      requests:
        memory: "128Mi"
        cpu: "100m"
      limits:
        memory: "1024Mi"
        cpu: "2000m"
        # set resources to enabled and specify one of the following when using dpdk
        hugepages-1Gi: "1Gi"
        # hugepages-2Mi: "512Mi"
Additionally, the default configuration of the neutron chart must be adapted according
to the underlying hardware. The corresponding configuration parameter is labeled with
"CHANGE-ME" in the script "values_overrides/dpdk.yaml". Specifically, the "ovs_dpdk"
configuration section should list all NICs which should be bound to DPDK with
their corresponding PCI-IDs. Moreover, the name of each NIC needs to be unique,
e.g., dpdk0, dpdk1, etc.
.. code-block:: yaml
network:
  interface:
    tunnel: br-phy
conf:
  ovs_dpdk:
    enabled: true
    driver: uio_pci_generic
    nics:
      - name: dpdk0
        # CHANGE-ME: modify pci_id according to hardware
        pci_id: '0000:05:00.0'
        bridge: br-phy
        migrate_ip: true
    bridges:
      - name: br-phy
    bonds: []
In the example above, bonding isn't used and hence an empty list is passed in the "bonds"
section.
Deployment
==========
Once the above requirements are met, start deploying OpenStack-Helm using the deployment
scripts under the dpdk directory in increasing order
.. code-block:: shell
./tools/deployment/developer/dpdk/<script-name>
One can also specify the name of the OpenStack release and the container OS distribution as
overrides before running the deployment scripts, for instance:
.. code-block:: shell
export OPENSTACK_RELEASE=wallaby
export CONTAINER_DISTRO_NAME=ubuntu
export CONTAINER_DISTRO_VERSION=focal
Troubleshooting
===============
OVS startup failure
-------------------
If OVS fails to start up because no hugepages are available, check the
configuration of the OVS daemonset. Older versions of helm-toolkit were not
able to render hugepage configuration into the Kubernetes manifest and just
removed the hugepage attributes. If no hugepage configuration is defined for
the OVS daemonset, consider using a newer version of helm-toolkit.
.. code-block:: shell
kubectl get daemonset openvswitch-vswitchd -n openstack -o yaml
[...]
resources:
  limits:
    cpu: "2"
    hugepages-1Gi: 1Gi
    memory: 1Gi
  requests:
    cpu: 100m
    memory: 128Mi
[...]
Adding a DPDK port to Openvswitch fails
---------------------------------------
When adding a DPDK port (a NIC bound to DPDK) to OVS fails, one source of error
is related to an incorrect configuration with regards to the NUMA topology of
the underlying hardware. Every NIC is connected to one specific NUMA socket. In
order to use a NIC as DPDK port in OVS, the OVS configurations regarding
hugepage(s) and PMD thread(s) need to match the NUMA topology.
The NUMA socket a given NIC is connected to can be found in the ovs-vswitchd log:
.. code-block::
kubectl logs -n openstack openvswitch-vswitchd-6h928
[...]
2019-07-02T13:42:06Z|00016|dpdk|INFO|EAL: PCI device 0000:00:04.0 on NUMA socket 1
2019-07-02T13:42:06Z|00018|dpdk|INFO|EAL: probe driver: 1af4:1000 net_virtio
[...]
In this example, the NIC with PCI-ID 0000:00:04.0 is connected to NUMA socket
1. As a result, this NIC can only be used by OVS if
1. hugepages have been allocated on NUMA socket 1 by OVS, and
2. PMD threads have been assigned to NUMA socket 1.
To allocate hugepages to NUMA sockets in OVS, ensure that the
``socket_memory`` attribute in values.yaml specifies a value for the
corresponding NUMA socket. In the following example, OVS will use one 1G
hugepage for NUMA socket 0 and socket 1.
.. code-block::
socket_memory: 1024,1024
To allocate PMD threads to NUMA sockets in OVS, ensure that the ``pmd_cpu_mask``
attribute in values.yaml includes CPU sockets on the corresponding NUMA socket.
In the example below, the mask of 0xf covers the first 4 CPU cores which are
distributed across NUMA sockets 0 and 1.
.. code-block::
pmd_cpu_mask: 0xf
The mapping of CPU cores to NUMA sockets can be determined by means of ``lscpu``, for instance:
.. code-block:: shell
lscpu | grep NUMA
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15
More information can be found in the `Openvswitch documentation
<http://docs.openvswitch.org/en/latest/intro/install/dpdk/>`_.


@ -1,222 +0,0 @@
====================
Deployment With Ceph
====================
.. note::
For other deployment options, select appropriate ``Deployment with ...``
option from `Index <../developer/index.html>`__ page.
Deploy Ceph
^^^^^^^^^^^
We are going to install Ceph OSDs backed by loopback devices, as this will
help us avoid attaching extra disks. In case you have enough disks
on the node, feel free to skip creating loopback devices by exporting
CREATE_LOOPBACK_DEVICES_FOR_CEPH=false and exporting the block device names
as environment variables (CEPH_OSD_DATA_DEVICE and CEPH_OSD_DB_WAL_DEVICE).
We are also going to separate Ceph metadata and data onto different devices
to replicate the ideal scenario of fast disks for metadata and slow disks to store data.
You can change this as per your design by referring to the documentation in
../openstack-helm-infra/ceph-osd/values.yaml
This script will create two loopback devices for Ceph: one disk for OSD data
and another disk for block DB and block WAL. If the default devices (loop0 and loop1) are busy in
your case, feel free to change them by exporting the environment variables (CEPH_OSD_DATA_DEVICE
and CEPH_OSD_DB_WAL_DEVICE).
.. note::
If you are rerunning the script below, make sure to skip the loopback device creation
by setting CREATE_LOOPBACK_DEVICES_FOR_CEPH to false.
.. literalinclude:: ../../../../tools/deployment/developer/ceph/040-ceph.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/ceph/040-ceph.sh
Activate the OpenStack namespace to be able to use Ceph
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/ceph/045-ceph-ns-activate.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/ceph/045-ceph-ns-activate.sh
Deploy MariaDB
^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/ceph/050-mariadb.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/ceph/050-mariadb.sh
Deploy RabbitMQ
^^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/ceph/060-rabbitmq.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/ceph/060-rabbitmq.sh
Deploy Memcached
^^^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/ceph/070-memcached.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/ceph/070-memcached.sh
Deploy Keystone
^^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/ceph/080-keystone.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/ceph/080-keystone.sh
Deploy Heat
^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/ceph/090-heat.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/ceph/090-heat.sh
Deploy Horizon
^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/ceph/100-horizon.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/ceph/100-horizon.sh
Deploy Rados Gateway for object store
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/ceph/110-ceph-radosgateway.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/ceph/110-ceph-radosgateway.sh
Deploy Glance
^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/ceph/120-glance.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/ceph/120-glance.sh
Deploy Cinder
^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/ceph/130-cinder.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/ceph/130-cinder.sh
Deploy OpenvSwitch
^^^^^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/ceph/140-openvswitch.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/ceph/140-openvswitch.sh
Deploy Libvirt
^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/ceph/150-libvirt.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/ceph/150-libvirt.sh
Deploy Compute Kit (Nova and Neutron)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/ceph/160-compute-kit.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/ceph/160-compute-kit.sh
Setup the gateway to the public network
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/ceph/170-setup-gateway.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/ceph/170-setup-gateway.sh


@ -1,163 +0,0 @@
===================
Deployment With NFS
===================
.. note::
For other deployment options, select appropriate ``Deployment with ...``
option from `Index <../developer/index.html>`__ page.
Deploy NFS Provisioner
^^^^^^^^^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/nfs/040-nfs-provisioner.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/nfs/040-nfs-provisioner.sh
Deploy MariaDB
^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/nfs/050-mariadb.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/nfs/050-mariadb.sh
Deploy RabbitMQ
^^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/nfs/060-rabbitmq.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/nfs/060-rabbitmq.sh
Deploy Memcached
^^^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/nfs/070-memcached.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/nfs/070-memcached.sh
Deploy Keystone
^^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/nfs/080-keystone.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/nfs/080-keystone.sh
Deploy Heat
^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/nfs/090-heat.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/nfs/090-heat.sh
Deploy Horizon
^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/nfs/100-horizon.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/nfs/100-horizon.sh
Deploy Glance
^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/nfs/120-glance.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/nfs/120-glance.sh
Deploy OpenvSwitch
^^^^^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/nfs/140-openvswitch.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/nfs/140-openvswitch.sh
Deploy Libvirt
^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/nfs/150-libvirt.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/nfs/150-libvirt.sh
Deploy Compute Kit (Nova and Neutron)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/nfs/160-compute-kit.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/nfs/160-compute-kit.sh
Setup the gateway to the public network
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/developer/nfs/170-setup-gateway.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/nfs/170-setup-gateway.sh


@ -1,147 +0,0 @@
===============================
Deployment with Tungsten Fabric
===============================
Intro
^^^^^
Tungsten Fabric is a multicloud, multistack networking solution that you can
use with OpenStack as a network plugin. This document describes how to deploy
a single-node OpenStack based on Tungsten Fabric using OpenStack-Helm for development purposes.
Prepare host
^^^^^^^^^^^^
First, set the OpenStack and Linux versions and install the required packages:
.. code-block:: shell
export OPENSTACK_RELEASE=train
export CONTAINER_DISTRO_NAME=ubuntu
export CONTAINER_DISTRO_VERSION=bionic
sudo apt update -y
sudo apt install -y resolvconf
cd ~/openstack-helm
Install OpenStack packages
^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: shell
./tools/deployment/common/install-packages.sh
Install k8s Minikube
^^^^^^^^^^^^^^^^^^^^
.. code-block:: shell
./tools/deployment/common/deploy-k8s.sh
Set up DNS to use the cluster DNS
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: shell
dns_cluster_ip=`kubectl get svc kube-dns -n kube-system --no-headers -o custom-columns=":spec.clusterIP"`
echo "nameserver ${dns_cluster_ip}" | sudo tee -a /etc/resolvconf/resolv.conf.d/head > /dev/null
sudo dpkg-reconfigure --force resolvconf
sudo systemctl restart resolvconf
Set up the environment to apply values_overrides
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: shell
export FEATURE_GATES=tf
Setup OpenStack client
^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: shell
./tools/deployment/common/setup-client.sh
Setup Ingress
^^^^^^^^^^^^^
.. code-block:: shell
./tools/deployment/component/common/ingress.sh
Setup MariaDB
^^^^^^^^^^^^^
.. code-block:: shell
./tools/deployment/component/common/mariadb.sh
Setup Memcached
^^^^^^^^^^^^^^^
.. code-block:: shell
./tools/deployment/component/common/memcached.sh
Setup RabbitMQ
^^^^^^^^^^^^^^
.. code-block:: shell
./tools/deployment/component/common/rabbitmq.sh
Setup NFS
^^^^^^^^^
.. code-block:: shell
./tools/deployment/component/nfs-provisioner/nfs-provisioner.sh
Setup Keystone
^^^^^^^^^^^^^^
.. code-block:: shell
./tools/deployment/component/keystone/keystone.sh
Setup Heat
^^^^^^^^^^
.. code-block:: shell
./tools/deployment/component/heat/heat.sh
Setup Glance
^^^^^^^^^^^^
.. code-block:: shell
./tools/deployment/component/glance/glance.sh
Prepare the host and OpenStack-Helm for Tungsten Fabric
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: shell
./tools/deployment/component/compute-kit/tungsten-fabric.sh prepare
Setup libvirt
^^^^^^^^^^^^^
.. code-block:: shell
./tools/deployment/component/compute-kit/libvirt.sh
Setup Neutron and Nova
^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: shell
./tools/deployment/component/compute-kit/compute-kit.sh
Setup Tungsten Fabric
^^^^^^^^^^^^^^^^^^^^^
.. code-block:: shell
./tools/deployment/component/compute-kit/tungsten-fabric.sh deploy


@ -1,90 +0,0 @@
==================
Exercise the Cloud
==================
Once OpenStack-Helm has been deployed, the cloud can be exercised either with
the OpenStack client, or the same heat templates that are used in the validation
gates.
.. literalinclude:: ../../../../tools/deployment/developer/common/900-use-it.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/common/900-use-it.sh
To run further commands from the CLI manually, execute the following to
set up authentication credentials::
export OS_CLOUD=openstack_helm
Note that this command will only enable you to authenticate successfully using the
``python-openstackclient`` CLI. To use legacy clients like the
``python-novaclient`` from the CLI, reference the auth values in
``/etc/openstack/clouds.yaml`` and run::
export OS_USERNAME='admin'
export OS_PASSWORD='password'
export OS_PROJECT_NAME='admin'
export OS_PROJECT_DOMAIN_NAME='default'
export OS_USER_DOMAIN_NAME='default'
export OS_AUTH_URL='http://keystone.openstack.svc.cluster.local/v3'
The example above uses the default values used by ``openstack-helm-infra``.
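For example, a quick way to confirm that the credentials work is to make a couple of
read-only calls (a minimal check; it assumes the OpenStack client was installed earlier in
this guide):

.. code-block:: shell

   export OS_CLOUD=openstack_helm

   # Both calls only read from Keystone and should return populated tables
   openstack service list
   openstack endpoint list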
--------------------------------
Subsequent Runs & Post Clean-up
--------------------------------
Execution of the **900-use-it.sh** script results in the creation of 4 heat stacks and a unique
keypair enabling access to a newly created VM. Subsequent runs of the **900-use-it.sh** script
require deletion of the stacks, the keypair, and the key files generated during the initial script
execution.
The following steps serve as a guide to clean-up the client environment by deleting stacks and
respective artifacts created during the **900-use-it.sh** script:
1. List the stacks created during script execution which will need to be deleted::
sudo openstack --os-cloud openstack_helm stack list
# Sample results returned for *Stack Name* include:
# - heat-vm-volume-attach
# - heat-basic-vm-deployment
# - heat-subnet-pool-deployment
# - heat-public-net-deployment
2. Delete the stacks returned from the *openstack helm stack list* command above::
sudo openstack --os-cloud openstack_helm stack delete heat-vm-volume-attach
sudo openstack --os-cloud openstack_helm stack delete heat-basic-vm-deployment
sudo openstack --os-cloud openstack_helm stack delete heat-subnet-pool-deployment
sudo openstack --os-cloud openstack_helm stack delete heat-public-net-deployment
3. List the keypair(s) generated during the script execution::
sudo openstack --os-cloud openstack_helm keypair list
# Sample Results returned for “Name” include:
# - heat-vm-key
4. Delete the keypair(s) returned from the list command above::
sudo openstack --os-cloud openstack_helm keypair delete heat-vm-key
5. Manually remove the key files created by the script in the ~/.ssh directory::
cd ~/.ssh
rm osh_key
rm known_hosts
6. As a final validation step, re-run the **openstack helm stack list** and
**openstack helm keypair list** commands and confirm the returned results are empty::
sudo openstack --os-cloud openstack_helm stack list
sudo openstack --os-cloud openstack_helm keypair list
Alternatively, these steps can be performed by running the script directly::
./tools/deployment/developer/common/910-clean-it.sh


@ -1,16 +0,0 @@
Deployment
==========
Contents:
.. toctree::
:maxdepth: 2
requirements-and-host-config
kubernetes-and-common-setup
deploy-with-nfs
deploy-with-tungsten-fabric
deploy-with-ceph
deploy-ovs-dpdk.rst
exercise-the-cloud
cleaning-deployment


@ -1,135 +0,0 @@
===========================
Kubernetes and Common Setup
===========================
Install Basic Utilities
^^^^^^^^^^^^^^^^^^^^^^^
To get started with OSH, we will need ``git``, ``curl`` and ``make``.
.. code-block:: shell
sudo apt install git curl make
Clone the OpenStack-Helm Repos
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Once the host has been configured, the repos containing the OpenStack-Helm charts
should be cloned:
.. code-block:: shell
#!/bin/bash
set -xe
git clone https://opendev.org/openstack/openstack-helm-infra.git
git clone https://opendev.org/openstack/openstack-helm.git
OSH Proxy & DNS Configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. note::
If you are not deploying OSH behind a proxy, skip this step and
continue with "Deploy Kubernetes & Helm".
In order to deploy OSH behind a proxy, add the following entries to
``openstack-helm-infra/tools/gate/devel/local-vars.yaml``:
.. code-block:: shell
proxy:
http: http://PROXY_URL:PORT
https: https://PROXY_URL:PORT
noproxy: 127.0.0.1,localhost,172.17.0.1,.svc.cluster.local
.. note::
Depending on your specific proxy, https_proxy may be the same as http_proxy.
Refer to your specific proxy documentation.
By default OSH will use Google DNS Server IPs (8.8.8.8, 8.8.4.4) and will
update resolv.conf as a result. If those IPs are blocked by your proxy, running
the OSH scripts will result in the inability to connect to anything on the
network. These DNS nameserver entries can be changed by updating the
external_dns_nameservers entry in the file
``openstack-helm-infra/tools/images/kubeadm-aio/assets/opt/playbooks/vars.yaml``.
.. code-block:: shell
external_dns_nameservers:
- YOUR_PROXY_DNS_IP
- ALT_PROXY_DNS_IP
These values can be retrieved by running:
.. code-block:: shell
systemd-resolve --status
Deploy Kubernetes & Helm
^^^^^^^^^^^^^^^^^^^^^^^^
You may now deploy Kubernetes and Helm onto your machine. First, move into the
``openstack-helm`` directory and then run the following:
.. literalinclude:: ../../../../tools/deployment/developer/common/010-deploy-k8s.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/common/010-deploy-k8s.sh
This command will deploy a single node minikube cluster. This will use the
parameters in ``${OSH_INFRA_PATH}/playbooks/vars.yaml`` to control the
deployment, which can be overridden by adding entries to
``${OSH_INFRA_PATH}/tools/gate/devel/local-vars.yaml``.
Helm Chart Installation
=======================
Using the Helm packages previously pushed to the local Helm repository, run the
following commands to instruct Tiller to create an instance of the given chart.
During installation, the helm client will print useful information about
resources created, the state of the Helm releases, and whether any additional
configuration steps are necessary.
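The general form of these commands (a sketch only, assuming the charts have been built and
pushed to the ``local`` Helm repository; the release name, chart name, and overrides file are
placeholders) looks like this:

.. code-block:: shell

   # Helm v2 (Tiller-based) style used throughout this guide
   helm install --name=<release> local/<chart> --namespace=openstack \
     --values=./path/to/overrides.yaml

   # Inspect the resources and notes printed for the release afterwards
   helm status <release>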
Install OpenStack-Helm
----------------------
.. note:: The following commands all assume that they are run from the
``openstack-helm`` directory and the repos have been cloned as above.
Setup Clients on the host and assemble the charts
=================================================
The OpenStack clients and Kubernetes RBAC rules, along with assembly of the
charts can be performed by running the following commands:
.. literalinclude:: ../../../../tools/deployment/developer/common/020-setup-client.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/developer/common/020-setup-client.sh
Deploy the ingress controller
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. literalinclude:: ../../../../tools/deployment/component/common/ingress.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/component/common/ingress.sh
To continue to deploy OpenStack on Kubernetes via OSH, see
:doc:`Deploy NFS<./deploy-with-nfs>` or :doc:`Deploy Ceph<./deploy-with-ceph>`.


@ -1,100 +0,0 @@
===================================
Requirements and Host Configuration
===================================
Overview
========
Below are some instructions and suggestions to help you get started with a
Kubeadm All-in-One environment on Ubuntu 18.04.
Other supported versions of Linux can also be used, with the appropriate changes
to package installation.
Requirements
============
System Requirements
-------------------
The recommended minimum system requirements for a full deployment are:
- 16GB of RAM
- 8 Cores
- 48GB HDD
For a deployment without cinder and horizon the system requirements are:
- 8GB of RAM
- 4 Cores
- 48GB HDD
This guide covers the minimum number of requirements to get started.
All commands below should be run as a normal user, not as root.
Appropriate versions of Docker, Kubernetes, and Helm will be installed
by the playbooks used below, so there's no need to install them ahead of time.
.. warning:: By default the Calico CNI will use ``192.168.0.0/16`` and
Kubernetes services will use ``10.96.0.0/16`` as the CIDR for services. Check
that these CIDRs are not in use on the development node before proceeding, or
adjust as required.
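A quick way to check for conflicts (a minimal sketch; adjust the CIDRs if you have changed
the defaults) is to review the routes and addresses already configured on the node:

.. code-block:: shell

   # Look for anything overlapping 192.168.0.0/16 or 10.96.0.0/16
   ip route show
   ip addr show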
Host Configuration
------------------
OpenStack-Helm uses the host networking namespace for many pods, including
Ceph, Neutron, and Nova components. For this to function as expected, pods need
to be able to resolve DNS requests correctly. Ubuntu Desktop and some other
distributions make use of ``mdns4_minimal``, which does not operate as Kubernetes
expects with its default TLD of ``.local``. To operate as expected, either
change the ``hosts`` line in ``/etc/nsswitch.conf``, or confirm that it
matches:
.. code-block:: ini
hosts: files dns
Host Proxy & DNS Configuration
------------------------------
.. note::
If you are not deploying OSH behind a proxy, skip this step.
Set your local environment variables to use the proxy information. This
involves adding or setting the following values in ``/etc/environment``:
.. code-block:: shell
export http_proxy="YOUR_PROXY_ADDRESS:PORT"
export https_proxy="YOUR_PROXY_ADDRESS:PORT"
export ftp_proxy="YOUR_PROXY_ADDRESS:PORT"
export no_proxy="localhost,127.0.0.1,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,172.17.0.1,.svc.cluster.local,$YOUR_ACTUAL_IP"
export HTTP_PROXY="YOUR_PROXY_ADDRESS:PORT"
export HTTPS_PROXY="YOUR_PROXY_ADDRESS:PORT"
export FTP_PROXY="YOUR_PROXY_ADDRESS:PORT"
export NO_PROXY="localhost,127.0.0.1,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,172.17.0.1,.svc.cluster.local,$YOUR_ACTUAL_IP"
.. note::
Depending on your specific proxy, https_proxy may be the same as http_proxy.
Refer to your specific proxy documentation.
Your changes to `/etc/environment` will not be applied until you source them:
.. code-block:: shell
source /etc/environment
OSH runs updates for local apt packages, so we will need to set the proxy for
apt as well by adding these lines to `/etc/apt/apt.conf`:
.. code-block:: shell
Acquire::http::proxy "YOUR_PROXY_ADDRESS:PORT";
Acquire::https::proxy "YOUR_PROXY_ADDRESS:PORT";
Acquire::ftp::proxy "YOUR_PROXY_ADDRESS:PORT";
.. note::
Depending on your specific proxy, https_proxy may be the same as http_proxy.
Refer to your specific proxy documentation.


@ -1,244 +0,0 @@
============================
External DNS to FQDN/Ingress
============================
Overview
========
To access your OpenStack deployment on Kubernetes, you can use the Ingress Controller
or NodePorts to provide a pathway in. A background on Ingress, OpenStack-Helm fully qualified
domain name (FQDN) overrides, installation, examples, and troubleshooting is discussed here.
Ingress
=======
OpenStack-Helm utilizes the `Kubernetes Ingress Controller
<https://kubernetes.io/docs/concepts/services-networking/ingress/>`__.
An Ingress is a collection of rules that allow inbound connections to reach the cluster services.
::
internet
|
[ Ingress ]
--|-----|--
[ Services ]
It can be configured to give services externally-reachable URLs, load balance traffic,
terminate SSL, offer name based virtual hosting, and more.
Essentially, the use of Ingress for OpenStack-Helm is an Nginx proxy service. Ingress (Nginx) is
accessible via your cluster public IP - e.g. the IP associated with
``kubectl get pods -o wide --all-namespaces | grep ingress-api``.
Ingress/Nginx listens for server name requests such as "keystone" or "keystone.openstack"
and routes those requests to the proper internal K8s Services.
These public listeners in Ingress must match the external DNS that you will set up to access
your OpenStack deployment. Note that each rule also has a Service that allows the Ingress
Controller to reach the endpoints from within the cluster.
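To see which public server names the Ingress controller is currently serving, you can inspect
the Ingress resources directly (standard kubectl commands; ``keystone`` is just an example
resource name):

.. code-block:: shell

   # List all Ingress rules in the openstack namespace
   kubectl -n openstack get ingress

   # Show the host rules and backend services for a single Ingress
   kubectl -n openstack describe ingress keystone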
External DNS and FQDN
=====================
Prepare ahead of time your FQDN and DNS layouts. There are a handful of OpenStack endpoints
you will want to expose for API and Dashboard access.
Update your lab/environment DNS server with the appropriate host values, creating A Records
for the edge node IPs and the various FQDNs. Alternatively, you can test these settings locally by
editing your ``/etc/hosts``. Below is an example with a dummy domain ``os.foo.org`` and
a dummy Ingress IP ``1.2.3.4``.
::
A Records
1.2.3.4 horizon.os.foo.org
1.2.3.4 neutron.os.foo.org
1.2.3.4 keystone.os.foo.org
1.2.3.4 nova.os.foo.org
1.2.3.4 metadata.os.foo.org
1.2.3.4 glance.os.foo.org
The default FQDNs for OpenStack-Helm are
::
horizon.openstack.svc.cluster.local
neutron.openstack.svc.cluster.local
keystone.openstack.svc.cluster.local
nova.openstack.svc.cluster.local
metadata.openstack.svc.cluster.local
glance.openstack.svc.cluster.local
We want to change the **public** configurations to match the DNS layouts above. Each Chart's
``values.yaml`` contains an ``endpoints`` configuration with ``host_fqdn_override`` entries for each API
that the Chart either produces or depends on. `Read more about how Endpoints are developed
<https://docs.openstack.org/openstack-helm/latest/devref/endpoints.html>`__.
Note that while the Glance Registry listens on an Ingress HTTP endpoint, you will not need to expose
the registry for external services.
Installation
============
Implementing the FQDN overrides **must** be done at install time. If you run these as helm upgrades,
Ingress will notice the updates, but none of the endpoint build-out jobs will run again
unless they are cleaned up manually or with a tool like Armada.
Two similar options exist to set the FQDN overrides for External DNS mapping.
**First**, edit the ``values.yaml`` for Neutron, Glance, Horizon, Keystone, and Nova.
Using Horizon as an example, find the ``endpoints`` config.
For ``identity`` and ``dashboard``, at ``host_fqdn_override.public`` replace ``null`` with
``keystone.os.foo.org`` and ``horizon.os.foo.org`` respectively.
.. code-block:: yaml
endpoints:
cluster_domain_suffix: cluster.local
identity:
name: keystone
hosts:
default: keystone-api
public: keystone
host_fqdn_override:
default: null
public: keystone.os.foo.org
.
.
dashboard:
name: horizon
hosts:
default: horizon-int
public: horizon
host_fqdn_override:
default: null
public: horizon.os.foo.org
After making the configuration changes, run a ``make`` and then install as you would from
the AIO or MultiNode instructions.
The **second** option is to pass ``--set`` flags when calling ``helm install``.
Add the following flags to the install steps - also exporting a shell environment variable
to avoid repeating the domain.
.. code-block:: shell
export FQDN=os.foo.org
helm install --name=horizon ./horizon --namespace=openstack \
--set network.node_port.enabled=true \
--set endpoints.dashboard.host_fqdn_override.public=horizon.$FQDN \
--set endpoints.identity.host_fqdn_override.public=keystone.$FQDN
Note that if you need to make a DNS change, you will have to uninstall (``helm delete <chart>``)
and install again.
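For example, with Helm v2 as used in this guide, a DNS change for Horizon would look roughly
like the following; the ``--purge`` flag is needed so the release name can be reused, and the
FQDN value is illustrative:

.. code-block:: shell

   export FQDN=os.foo.org

   # Remove the existing release entirely, then install again with the new override
   helm delete --purge horizon
   helm install --name=horizon ./horizon --namespace=openstack \
     --set network.node_port.enabled=true \
     --set endpoints.dashboard.host_fqdn_override.public=horizon.$FQDN \
     --set endpoints.identity.host_fqdn_override.public=keystone.$FQDN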
Once installed, access the APIs or the Dashboard at `http://horizon.os.foo.org`
Examples
========
Code examples are below.
If doing an `AIO install
<https://docs.openstack.org/openstack-helm/latest/install/developer/index.html>`__,
include all the ``--set`` flags:
.. code-block:: shell
export FQDN=os.foo.org
helm install --name=keystone local/keystone --namespace=openstack \
--set endpoints.identity.host_fqdn_override.public=keystone.$FQDN
helm install --name=glance local/glance --namespace=openstack \
--set storage=pvc \
--set endpoints.image.host_fqdn_override.public=glance.$FQDN \
--set endpoints.identity.host_fqdn_override.public=keystone.$FQDN
helm install --name=nova local/nova --namespace=openstack \
--values=./tools/overrides/mvp/nova.yaml \
--set conf.nova.libvirt.virt_type=qemu \
--set conf.nova.libvirt.cpu_mode=none \
--set endpoints.compute.host_fqdn_override.public=nova.$FQDN \
--set endpoints.compute_metadata.host_fqdn_override.public=metadata.$FQDN \
--set endpoints.image.host_fqdn_override.public=glance.$FQDN \
--set endpoints.network.host_fqdn_override.public=neutron.$FQDN \
--set endpoints.identity.host_fqdn_override.public=keystone.$FQDN
helm install --name=neutron local/neutron \
--namespace=openstack --values=./tools/overrides/mvp/neutron-ovs.yaml \
--set endpoints.network.host_fqdn_override.public=neutron.$FQDN \
--set endpoints.compute.host_fqdn_override.public=nova.$FQDN \
--set endpoints.identity.host_fqdn_override.public=keystone.$FQDN
helm install --name=horizon local/horizon --namespace=openstack \
--set=network.node_port.enabled=true \
--set endpoints.dashboard.host_fqdn_override.public=horizon.$FQDN \
--set endpoints.identity.host_fqdn_override.public=keystone.$FQDN
Troubleshooting
===============
**Review the Ingress configuration.**
Get the Nginx configuration from the Ingress Pod:
.. code-block:: shell
kubectl exec -it ingress-api-2210976527-92cq0 -n openstack -- cat /etc/nginx/nginx.conf
Look for a *server* configuration with a *server_name* matching your desired FQDN
::
server {
server_name nova.os.foo.org;
listen [::]:80;
set $proxy_upstream_name "-";
location / {
set $proxy_upstream_name "openstack-nova-api-n-api";
.
.
}
**Check Chart Status**
Get the ``helm status`` of your chart.
.. code-block:: shell
helm status keystone
Verify the *v1beta1/Ingress* resource has a Host with your FQDN value
::
LAST DEPLOYED: Thu Sep 28 20:00:49 2017
NAMESPACE: openstack
STATUS: DEPLOYED
RESOURCES:
==> v1beta1/Ingress
NAME HOSTS ADDRESS PORTS AGE
keystone keystone,keystone.os.foo.org 1.2.3.4 80 35m


@ -4,11 +4,14 @@ Installation
Contents:
.. toctree::
:maxdepth: 2
:maxdepth: 2
before_deployment
deploy_kubernetes
prepare_kubernetes
deploy_ceph
setup_openstack_client
deploy_ingress_controller
deploy_openstack_backend
deploy_openstack
common-requirements
developer/index
multinode
kubernetes-gate
ext-dns-fqdn
plugins/index


@ -1,143 +0,0 @@
=====================
Gate-Based Kubernetes
=====================
Overview
========
You can use any Kubernetes deployment tool to bring up a working Kubernetes
cluster for use with OpenStack-Helm. This guide describes how to simply stand
up a multinode Kubernetes cluster via the OpenStack-Helm gate scripts,
which use KubeADM and Ansible. Although this cluster won't be
production-grade, it will serve as a quick starting point in a lab or
proof-of-concept environment.
OpenStack-Helm-Infra KubeADM deployment
=======================================
On the worker nodes:
.. code-block:: shell
#!/bin/bash
set -xe
sudo apt-get update
sudo apt-get install --no-install-recommends -y git
SSH-Key preparation
-------------------
Create an ssh-key on the master node, and add the public key to each node that
you intend to join the cluster.
.. note::
1. To generate the key you can use ``ssh-keygen -t rsa``
2. To copy the ssh key to each node, this can be accomplished with
the ``ssh-copy-id`` command, for example: *ssh-copy-id
ubuntu@192.168.122.178*
3. Copy the key: ``sudo cp ~/.ssh/id_rsa /etc/openstack-helm/deploy-key.pem``
4. Set correct ownership: ``sudo chown ubuntu
/etc/openstack-helm/deploy-key.pem``
Test this by ssh'ing to a node and then executing a command with
'sudo'. Neither operation should require a password.
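The steps in the note above can be condensed into the following sketch (it assumes an
``ubuntu`` user and uses the example worker node IP from the note):

.. code-block:: shell

   #!/bin/bash
   set -xe

   # Generate a key pair on the master node (accept the default path)
   ssh-keygen -t rsa

   # Copy the public key to every node that will join the cluster
   ssh-copy-id ubuntu@192.168.122.178

   # Make the private key available to the deployment playbooks
   sudo mkdir -p /etc/openstack-helm
   sudo cp ~/.ssh/id_rsa /etc/openstack-helm/deploy-key.pem
   sudo chown ubuntu /etc/openstack-helm/deploy-key.pem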
Clone the OpenStack-Helm Repos
------------------------------
Once the host has been configured the repos containing the OpenStack-Helm charts
should be cloned onto each node in the cluster:
.. code-block:: shell
#!/bin/bash
set -xe
sudo chown -R ubuntu: /opt
git clone https://opendev.org/openstack/openstack-helm-infra.git /opt/openstack-helm-infra
git clone https://opendev.org/openstack/openstack-helm.git /opt/openstack-helm
Create an inventory file
------------------------
On the master node create an inventory file for the cluster:
.. note::
node_one, node_two and node_three below are all worker nodes,
children of the master node that the commands below are executed on.
.. code-block:: shell
#!/bin/bash
set -xe
cat > /opt/openstack-helm-infra/tools/gate/devel/multinode-inventory.yaml <<EOF
all:
children:
primary:
hosts:
node_one:
ansible_port: 22
ansible_host: $node_one_ip
ansible_user: ubuntu
ansible_ssh_private_key_file: /etc/openstack-helm/deploy-key.pem
ansible_ssh_extra_args: -o StrictHostKeyChecking=no
nodes:
hosts:
node_two:
ansible_port: 22
ansible_host: $node_two_ip
ansible_user: ubuntu
ansible_ssh_private_key_file: /etc/openstack-helm/deploy-key.pem
ansible_ssh_extra_args: -o StrictHostKeyChecking=no
node_three:
ansible_port: 22
ansible_host: $node_three_ip
ansible_user: ubuntu
ansible_ssh_private_key_file: /etc/openstack-helm/deploy-key.pem
ansible_ssh_extra_args: -o StrictHostKeyChecking=no
EOF
Create an environment file
--------------------------
On the master node create an environment file for the cluster:
.. code-block:: shell
#!/bin/bash
set -xe
function net_default_iface {
sudo ip -4 route list 0/0 | awk '{ print $5; exit }'
}
cat > /opt/openstack-helm-infra/tools/gate/devel/multinode-vars.yaml <<EOF
kubernetes_network_default_device: $(net_default_iface)
EOF
Additional configuration variables can be found `here
<https://github.com/openstack/openstack-helm-infra/blob/master/roles/deploy-kubeadm-aio-common/defaults/main.yml>`_.
In particular, ``kubernetes_cluster_pod_subnet`` can be used to override the
pod subnet set up by Calico (the default container SDN), if you have a
preexisting network that conflicts with the default pod subnet of 192.168.0.0/16.
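For instance, to override the pod subnet you could append the variable to the same
``multinode-vars.yaml`` file created above (the subnet value here is only an example):

.. code-block:: shell

   # Example: use 10.244.0.0/16 for pods instead of the default 192.168.0.0/16
   echo "kubernetes_cluster_pod_subnet: 10.244.0.0/16" >> /opt/openstack-helm-infra/tools/gate/devel/multinode-vars.yaml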
.. note::
This installation, by default will use Google DNS servers, 8.8.8.8 or 8.8.4.4
and updates resolv.conf. These DNS nameserver entries can be changed by
updating file ``/opt/openstack-helm-infra/tools/images/kubeadm-aio/assets/opt/playbooks/vars.yaml``
under section ``external_dns_nameservers``. This change must be done on each
node in your cluster.
Run the playbooks
-----------------
On the master node run the playbooks:
.. code-block:: shell
#!/bin/bash
set -xe
cd /opt/openstack-helm-infra
make dev-deploy setup-host multinode
make dev-deploy k8s multinode


@ -1,331 +0,0 @@
=========
Multinode
=========
Overview
========
In order to drive towards a production-ready OpenStack solution, our
goal is to provide containerized, yet stable `persistent
volumes <https://kubernetes.io/docs/concepts/storage/persistent-volumes/>`_
that Kubernetes can use to schedule applications that require state,
such as MariaDB (Galera). Although we assume that the project should
provide a "batteries included" approach towards persistent storage, we
want to allow operators to define their own solution as well. Examples
of this work will be documented in another section; however, evidence of
this is found throughout the project. If you find any issues or gaps,
please create a `story <https://storyboard.openstack.org/#!/project/886>`_
to track what can be done to improve our documentation.
.. note::
Please see the supported application versions outlined in the
`source variable file <https://github.com/openstack/openstack-helm-infra/blob/master/roles/build-images/defaults/main.yml>`_.
Other versions and considerations (such as other CNI SDN providers),
config map data, and value overrides will be included in other
documentation as we explore these options further.
The installation procedures below will take an administrator from a new
``kubeadm`` installation to OpenStack-Helm deployment.
.. note:: Many of the default container images that are referenced across
OpenStack-Helm charts are not intended for production use; for example,
while LOCI and Kolla can be used to produce production-grade images, their
public reference images are not prod-grade. In addition, some of the default
images use ``latest`` or ``master`` tags, which are moving targets and can
lead to unpredictable behavior. For production-like deployments, we
recommend building custom images, or at minimum caching a set of known
images, and incorporating them into OpenStack-Helm via values overrides.
.. warning:: Until the Ubuntu kernel shipped with 16.04 supports CephFS
subvolume mounts by default the `HWE Kernel
<../troubleshooting/ubuntu-hwe-kernel.html>`__ is required to use CephFS.
Kubernetes Preparation
======================
You can use any Kubernetes deployment tool to bring up a working Kubernetes
cluster for use with OpenStack-Helm. For production deployments,
please choose (and tune appropriately) a highly-resilient Kubernetes
distribution, e.g.:
- `Airship <https://airshipit.org/>`_, a declarative open cloud
infrastructure platform
- `KubeADM <https://kubernetes.io/docs/setup/independent/high-availability/>`_,
the foundation of a number of Kubernetes installation solutions
For a lab or proof-of-concept environment, the OpenStack-Helm gate scripts
can be used to quickly deploy a multinode Kubernetes cluster using KubeADM
and Ansible. Please refer to the deployment guide
`here <./kubernetes-gate.html>`__.
Managing and configuring a Kubernetes cluster is beyond the scope
of OpenStack-Helm and this guide.
Deploy OpenStack-Helm
=====================
.. note::
The following commands all assume that they are run from the
``/opt/openstack-helm`` directory.
Setup Clients on the host and assemble the charts
-------------------------------------------------
The OpenStack clients and Kubernetes RBAC rules, along with assembly of the
charts can be performed by running the following commands:
.. literalinclude:: ../../../tools/deployment/multinode/010-setup-client.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/multinode/010-setup-client.sh
Deploy the ingress controller
-----------------------------
.. code-block:: shell
export OSH_DEPLOY_MULTINODE=True
.. literalinclude:: ../../../tools/deployment/component/common/ingress.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
OSH_DEPLOY_MULTINODE=True ./tools/deployment/component/common/ingress.sh
Create loopback devices for CEPH
--------------------------------
Create two loopback devices for Ceph: one disk for OSD data and another disk for
the block DB and block WAL.
If the loop0 and loop1 devices are busy in your case, feel free to change them
by using the --ceph-osd-data and --ceph-osd-dbwal options.
.. code-block:: shell
ansible all -i /opt/openstack-helm-infra/tools/gate/devel/multinode-inventory.yaml -m shell -s -a "/opt/openstack-helm/tools/deployment/common/setup-ceph-loopback-device.sh --ceph-osd-data /dev/loop0 --ceph-osd-dbwal /dev/loop1"
Deploy Ceph
-----------
The script below configures Ceph to use the loopback devices created in the previous step as the backend for Ceph OSDs.
To configure a custom block device-based backend, please refer
to the ``ceph-osd`` `values.yaml <https://github.com/openstack/openstack-helm/blob/master/ceph-osd/values.yaml>`_.
Additional information on Kubernetes Ceph-based integration can be found in
the documentation for the
`CephFS <https://github.com/kubernetes-incubator/external-storage/blob/master/ceph/cephfs/README.md>`_
and `RBD <https://github.com/kubernetes-incubator/external-storage/blob/master/ceph/rbd/README.md>`_
storage provisioners, as well as for the alternative
`NFS <https://github.com/kubernetes-incubator/external-storage/blob/master/nfs/README.md>`_ provisioner.
.. warning:: The upstream Ceph image repository does not currently pin tags to
specific Ceph point releases. This can lead to unpredictable results
in long-lived deployments. In production scenarios, we strongly recommend
overriding the Ceph images to use either custom built images or controlled,
cached images.
.. note::
The `./tools/deployment/multinode/kube-node-subnet.sh` script requires docker
to run.
.. literalinclude:: ../../../tools/deployment/multinode/030-ceph.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/multinode/030-ceph.sh
Activate the openstack namespace to be able to use Ceph
-------------------------------------------------------
.. literalinclude:: ../../../tools/deployment/multinode/040-ceph-ns-activate.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/multinode/040-ceph-ns-activate.sh
Deploy MariaDB
--------------
.. literalinclude:: ../../../tools/deployment/multinode/050-mariadb.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/multinode/050-mariadb.sh
Deploy RabbitMQ
---------------
.. literalinclude:: ../../../tools/deployment/multinode/060-rabbitmq.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/multinode/060-rabbitmq.sh
Deploy Memcached
----------------
.. literalinclude:: ../../../tools/deployment/multinode/070-memcached.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/multinode/070-memcached.sh
Deploy Keystone
---------------
.. literalinclude:: ../../../tools/deployment/multinode/080-keystone.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/multinode/080-keystone.sh
Deploy Rados Gateway for object store
-------------------------------------
.. literalinclude:: ../../../tools/deployment/multinode/090-ceph-radosgateway.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/multinode/090-ceph-radosgateway.sh
Deploy Glance
-------------
.. literalinclude:: ../../../tools/deployment/multinode/100-glance.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/multinode/100-glance.sh
Deploy Cinder
-------------
.. literalinclude:: ../../../tools/deployment/multinode/110-cinder.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/multinode/110-cinder.sh
Deploy OpenvSwitch
------------------
.. literalinclude:: ../../../tools/deployment/multinode/120-openvswitch.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/multinode/120-openvswitch.sh
Deploy Libvirt
--------------
.. literalinclude:: ../../../tools/deployment/multinode/130-libvirt.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/multinode/130-libvirt.sh
Deploy Compute Kit (Nova and Neutron)
-------------------------------------
.. literalinclude:: ../../../tools/deployment/multinode/140-compute-kit.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/multinode/140-compute-kit.sh
Deploy Heat
-----------
.. literalinclude:: ../../../tools/deployment/multinode/150-heat.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/multinode/150-heat.sh
Deploy Barbican
---------------
.. literalinclude:: ../../../tools/deployment/multinode/160-barbican.sh
:language: shell
:lines: 1,17-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/multinode/160-barbican.sh
Configure OpenStack
-------------------
Configuring OpenStack for a particular production use-case is beyond the scope
of this guide. Please refer to the
OpenStack `Configuration <https://docs.openstack.org/latest/configuration/>`_
documentation for your selected version of OpenStack to determine
what additional values overrides should be
provided to the OpenStack-Helm charts to ensure appropriate networking,
security, etc. is in place.


@ -1,339 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
..
==========================================================
Deploy tap-as-a-service (TaaS) Neutron / Dashboard plugin
==========================================================
This guide explains how to deploy the tap-as-a-service (TaaS) Neutron plugin and the
TaaS Dashboard plugin in the Neutron and Horizon charts respectively.
The TaaS plugin provides a mechanism to mirror certain traffic (for example, traffic tagged
with specific VLANs) from a source VM to a traffic analyzer VM. When a packet
is forwarded, the original source and target IP/port information is not altered,
so the system administrator is able to run, for example, tcpdump on the target VM
to trace these packets.
For more details, refer to TaaS specification: Tap-as-a-service_.
.. _Tap-as-a-service: https://github.com/openstack/tap-as-a-service/blob/master/specs/mitaka/tap-as-a-service.rst
TaaS Architecture
==================
As with any other Neutron plugin, the TaaS Neutron plugin consists of the
following modules:
.. figure:: figures/taas-architecture.png
:alt: Neutron TaaS Architecture
**TaaS Plugin**: This is the front-end of TaaS, which runs on the controller node
(Neutron server). It serves the TaaS APIs and stores/retrieves TaaS configuration
state to/from the Neutron TaaS DB.
**TaaS Agent, TaaS OVS Driver and TaaS SR-IOV Driver**: These form the back-end
of TaaS, which runs as an ML2 agent extension on compute nodes. It handles the RPC
calls made by the TaaS Plugin and configures the mechanism driver, i.e. OpenVSwitch
or the SR-IOV NIC switch.
**TaaS Dashboard Plugin**: A Horizon plugin which adds GUI panels for TaaS
resources in the Horizon Dashboard.
Prepare LOCI images
======================
Before deploying TaaS and/or the TaaS Dashboard, they need to be added to the Neutron
and/or Horizon LOCI images.
This is a two-step process:
#. Prepare a requirements LOCI image with Neutron TaaS and TaaS Dashboard code
installed.
#. Prepare the Neutron or Horizon LOCI image using this requirements image as
   the :code:`docker build --build-arg WHEELS` argument.
Requirements LOCI image
-------------------------
* Create a patchset for ``openstack/requirements`` repo
Add TaaS and TaaS dashboard dependencies in :code:`upper-constraints.txt`
file in :code:`openstack/requirements` repo, i.e.
https://opendev.org/openstack/requirements
.. path upper-constraints
.. code-block:: none
git+https://opendev.org/openstack/tap-as-a-service@master#egg=tap-as-a-service
git+https://opendev.org/openstack/tap-as-a-service-dashboard@master#egg=tap-as-a-service-dashboard
.. end
For example, if the Gerrit refspec for this commit is "refs/changes/xx/xxxxxx/x",
export the :code:`REQUIREMENTS_REF_SPEC` variable as follows:
.. path REQUIREMENTS_REF_SPEC
.. code-block:: bash
export REQUIREMENTS_REF_SPEC="refs/changes/xx/xxxxxx/x"
.. end
* Build the requirements LOCI image using above commit
Use it as ``docker build --build-arg PROJECT_REF=${REQUIREMENTS_REF_SPEC}``
command argument to build the requirements LOCI image.
Neutron and Horizon LOCI images
---------------------------------
* Create a patchset for ``openstack/neutron`` repo
Add TaaS dependency in ``requirements.txt`` file in ``openstack/neutron``
repo, i.e. https://opendev.org/openstack/neutron
.. path patchset-neutron
.. code-block:: none
tap-as-a-service
.. end
For example, if the Gerrit refspec for this commit is "refs/changes/xx/xxxxxx/x",
export the :code:`NEUTRON_REF_SPEC` variable as follows:
.. path patchset-neutron-export
.. code-block:: bash
export NEUTRON_REF_SPEC="refs/changes/xx/xxxxxx/x"
.. end
* Create a patchset for ``openstack/horizon`` repo
Add TaaS Dashboard dependency in ``requirements.txt`` file in
``openstack/horizon`` repo, i.e. https://opendev.org/openstack/horizon
.. path patchset-horizon
.. code-block:: none
tap-as-a-service-dashboard
.. end
For example, if the Gerrit refspec for this commit is "refs/changes/xx/xxxxxx/x",
export the :code:`HORIZON_REF_SPEC` variable as follows:
.. path patchset-horizon-export
.. code-block:: bash
export HORIZON_REF_SPEC="refs/changes/xx/xxxxxx/x"
.. end
* Putting it all together
Apart from the variables above with Gerrit refspec values, additionally
export the following environment variables with values as applicable:
.. path other-env-export
.. code-block:: bash
export OPENSTACK_VERSION="stable/ocata"
export PRIVATE_REPO="docker.io/username"
.. end
Use the above Gerrit commits to prepare the LOCI images using the following script:
.. path main-script
.. code-block:: bash
#!/bin/bash
set -ex
# export following variables with applicable values before invoking the script
#----------
: ${OPENSTACK_VERSION:="stable/ocata"}
: ${REQUIREMENTS_REF_SPEC:=""}
: ${NEUTRON_REF_SPEC:=""}
: ${HORIZON_REF_SPEC:=""}
: ${PRIVATE_REPO:="docker.io/username"} # Replace with your own dockerhub repo
#----------
IMAGE_TAG="${OPENSTACK_VERSION#*/}"
REGEX_GERRIT_REF_SPEC="^refs"
[[ ${REQUIREMENTS_REF_SPEC} =~ ${REGEX_GERRIT_REF_SPEC} ]] ||
(echo "Please set a proper value for REQUIREMENTS_REF_SPEC env variable" && exit)
[[ ${NEUTRON_REF_SPEC} =~ ${REGEX_GERRIT_REF_SPEC} ]] ||
(echo "Please set a proper value for NEUTRON_REF_SPEC env variable" && exit)
[[ ${HORIZON_REF_SPEC} =~ ${REGEX_GERRIT_REF_SPEC} ]] ||
(echo "Please set a proper value for HORIZON_REF_SPEC env variable" && exit)
# Login to private-repo : provide login password when asked
sudo docker login
sudo docker run -d \
--name docker-in-docker \
--privileged=true \
--net=host \
-v /var/lib/docker \
-v ${HOME}/.docker/config.json:/root/.docker/config.json:ro\
docker.io/docker:17.07.0-dind \
dockerd \
--pidfile=/var/run/docker.pid \
--host=unix:///var/run/docker.sock \
--storage-driver=overlay2
sudo docker exec docker-in-docker apk update
sudo docker exec docker-in-docker apk add git
# Prepare Requirements image
sudo docker exec docker-in-docker docker build --force-rm --pull --no-cache \
https://opendev.org/openstack/loci.git \
--network host \
--build-arg FROM=gcr.io/google_containers/ubuntu-slim:0.14 \
--build-arg PROJECT=requirements \
--build-arg PROJECT_REF=${REQUIREMENTS_REF_SPEC} \
--tag ${PRIVATE_REPO}/requirements:${IMAGE_TAG}
sudo docker exec docker-in-docker docker push ${PRIVATE_REPO}/requirements:${IMAGE_TAG}
# Prepare Neutron image
sudo docker exec docker-in-docker docker build --force-rm --pull --no-cache \
https://opendev.org/openstack/loci.git \
--build-arg PROJECT=neutron \
--build-arg PROJECT_REF=${NEUTRON_REF_SPEC} \
--build-arg FROM=gcr.io/google_containers/ubuntu-slim:0.14 \
--build-arg PROFILES="fluent neutron linuxbridge openvswitch" \
--build-arg PIP_PACKAGES="pycrypto" \
--build-arg WHEELS=${PRIVATE_REPO}/requirements:${IMAGE_TAG} \
--tag ${PRIVATE_REPO}/neutron:${IMAGE_TAG}
sudo docker exec docker-in-docker docker push ${PRIVATE_REPO}/neutron:${IMAGE_TAG}
# Prepare Neutron sriov image
sudo docker exec docker-in-docker docker build --force-rm --pull --no-cache \
https://opendev.org/openstack/loci.git \
--build-arg PROJECT=neutron \
--build-arg PROJECT_REF=${NEUTRON_REF_SPEC} \
--build-arg FROM=docker.io/ubuntu:18.04 \
--build-arg PROFILES="fluent neutron linuxbridge openvswitch" \
--build-arg PIP_PACKAGES="pycrypto" \
--build-arg DIST_PACKAGES="ethtool lshw" \
--build-arg WHEELS=${PRIVATE_REPO}/requirements:${IMAGE_TAG} \
--tag ${PRIVATE_REPO}/neutron:${IMAGE_TAG}-sriov-1804
sudo docker exec docker-in-docker docker push ${PRIVATE_REPO}/neutron:${IMAGE_TAG}-sriov-1804
# Prepare Horizon image
sudo docker exec docker-in-docker docker build --force-rm --pull --no-cache \
https://opendev.org/openstack/loci.git \
--build-arg PROJECT=horizon \
--build-arg PROJECT_REF=${HORIZON_REF_SPEC} \
--build-arg FROM=gcr.io/google_containers/ubuntu-slim:0.14 \
--build-arg PROFILES="fluent horizon apache" \
--build-arg PIP_PACKAGES="pycrypto" \
--build-arg WHEELS=${PRIVATE_REPO}/requirements:${IMAGE_TAG} \
--tag ${PRIVATE_REPO}/horizon:${IMAGE_TAG}
sudo docker exec docker-in-docker docker push ${PRIVATE_REPO}/horizon:${IMAGE_TAG}
.. end
Deploy TaaS Plugin
==================
Override images in Neutron chart
---------------------------------
Override the :code:`images` section parameters for the Neutron chart with the
custom LOCI image tags prepared as explained in the sections above.
.. code-block:: yaml
images:
tags:
neutron_db_sync: ${PRIVATE_REPO}/neutron:ocata
neutron_server: ${PRIVATE_REPO}/neutron:ocata
neutron_dhcp: ${PRIVATE_REPO}/neutron:ocata
neutron_metadata: ${PRIVATE_REPO}/neutron:ocata
neutron_l3: ${PRIVATE_REPO}/neutron:ocata
neutron_openvswitch_agent: ${PRIVATE_REPO}/neutron:ocata
neutron_linuxbridge_agent: ${PRIVATE_REPO}/neutron:ocata
neutron_sriov_agent: ${PRIVATE_REPO}/neutron:ocata-sriov-1804
neutron_sriov_agent_init: ${PRIVATE_REPO}/neutron:ocata-sriov-1804
Configure TaaS in Neutron chart
--------------------------------
While deploying neutron-server and the L2 agents, TaaS should be enabled in the
``conf: neutron`` section to add TaaS as a service plugin; in the ``conf: plugins``
section to add TaaS as an L2 agent extension; and in the ``conf: taas_plugin`` section
to configure the ``service_provider`` endpoint used by the Neutron TaaS plugin:
.. code-block:: yaml
conf:
neutron:
DEFAULT:
service_plugins: taas
plugins:
ml2_conf:
agent:
extensions: taas
taas:
taas:
enabled: True
taas_plugin:
service_providers:
service_provider: TAAS:TAAS:neutron_taas.services.taas.service_drivers.taas_rpc.TaasRpcDriver:default
Deploy TaaS Dashboard Plugin
============================
The TaaS Dashboard plugin can be deployed simply by using custom LOCI images with the
TaaS Dashboard code installed (as explained in the sections above), i.e. override
the :code:`images` section parameters for the Horizon chart:
.. code-block:: yaml
images:
tags:
horizon_db_sync: ${PRIVATE_REPO}/horizon:ocata
horizon: ${PRIVATE_REPO}/horizon:ocata
Set log level for TaaS
======================
The default log level for Neutron TaaS is :code:`INFO`. To change it, override
the following parameter:
.. code-block:: yaml
conf:
logging:
logger_neutron_taas:
level: INFO
References
==========
#. Neutron TaaS support in Openstack-Helm commits:
- https://review.openstack.org/#/c/597200/
- https://review.openstack.org/#/c/607392/
#. Add TaaS panel to Horizon Dashboard:
- https://review.openstack.org/#/c/621606/

Binary file not shown.



@ -1,9 +0,0 @@
Plugins
========
Contents:
.. toctree::
:maxdepth: 2
deploy-tap-as-a-service-neutron-plugin


@ -0,0 +1,28 @@
Prepare Kubernetes
==================
In this section we assume you have a working Kubernetes cluster and
Kubectl and Helm properly configured to interact with the cluster.
Before deploying OpenStack components using OpenStack-Helm, you have to set
labels on the Kubernetes worker nodes; these labels are used as node selectors.
The necessary namespaces must also be created.
You can use the `prepare-k8s.sh`_ script as an example of how to prepare
the Kubernetes cluster for OpenStack deployment. The script is assumed to be run
from the openstack-helm repository:
.. code-block:: bash
cd ~/osh/openstack-helm
./tools/deployment/common/prepare-k8s.sh
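Under the hood the script assigns node labels and creates the namespaces used by the charts.
A rough equivalent is shown below (the label names follow common OpenStack-Helm conventions;
check the script itself for the authoritative list):

.. code-block:: bash

   # Create the namespaces used by the OpenStack-Helm charts
   kubectl create namespace openstack
   kubectl create namespace ceph

   # Label the nodes that the charts use as node selectors
   kubectl label nodes --all openstack-control-plane=enabled
   kubectl label nodes --all openstack-compute-node=enabled
   kubectl label nodes --all openvswitch=enabled
   kubectl label nodes --all linuxbridge=enabled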
.. note::
Note that the above script sets labels on all Kubernetes nodes, including the
control plane nodes, which are usually not intended to run workload pods
(OpenStack in our case). So you have to either untaint the control plane nodes or modify the
`prepare-k8s.sh`_ script so that it sets labels only on the worker nodes.
.. _prepare-k8s.sh: https://opendev.org/openstack/openstack-helm/src/branch/master/tools/deployment/common/prepare-k8s.sh


@ -0,0 +1,35 @@
Setup OpenStack client
======================
The OpenStack client software is a crucial tool for interacting
with OpenStack services. In certain OpenStack-Helm deployment
scripts, the OpenStack client software is utilized to conduct
essential checks during deployment. Therefore, installing the
OpenStack client on the developer's machine is a vital step.
The script `setup-client.sh`_ can be used to set up the OpenStack
client:
.. code-block:: bash
cd ~/osh/openstack-helm
./tools/deployment/common/setup-client.sh
Keep in mind that the above script configures the
OpenStack client to use internal Kubernetes FQDNs like
`keystone.openstack.svc.cluster.local`. In order to resolve these
internal names, you have to configure the Kubernetes authoritative DNS server
(CoreDNS) to work as a recursive resolver and then add its IP (`10.96.0.10` by default)
to `/etc/resolv.conf`. This only works when you access
OpenStack services from one of the Kubernetes nodes, because IPs from the
Kubernetes service network are routed only between Kubernetes nodes.
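A minimal sketch of that resolver change on a Kubernetes node (assuming the default CoreDNS
service IP `10.96.0.10`; adjust it if your cluster uses a different service CIDR):

.. code-block:: bash

   # Add the cluster DNS server as a resolver on this node
   echo "nameserver 10.96.0.10" | sudo tee -a /etc/resolv.conf

   # Verify that internal service names now resolve
   nslookup keystone.openstack.svc.cluster.local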
If you wish to access OpenStack services from outside the Kubernetes cluster,
you need to expose the OpenStack Ingress controller using an IP address accessible
from outside the Kubernetes cluster, typically achieved through solutions like
`MetalLB`_ or similar tools. In this scenario, you should also ensure that you
have set up proper FQDN resolution to map to the external IP address and
create the necessary Ingress objects for the associated FQDN.
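For example, if the Ingress controller is exposed on an external IP such as `1.2.3.4`
(via MetalLB or a similar solution), the name resolution part can be emulated on a client
machine with `/etc/hosts` entries; the IP and the list of names below are illustrative only:

.. code-block:: bash

   # Map the public OpenStack FQDNs to the external Ingress IP
   for svc in keystone horizon nova metadata neutron glance; do
     echo "1.2.3.4 ${svc}.openstack.svc.cluster.local" | sudo tee -a /etc/hosts
   done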
.. _setup-client.sh: https://opendev.org/openstack/openstack-helm/src/branch/master/tools/deployment/common/setup-client.sh
.. _MetalLB: https://metallb.universe.tf