Merge "Upgrade the rst convention of the Reference Guide [4]"

commit bd283f9347
Zuul 2018-03-21 08:49:42 +00:00, committed by Gerrit Code Review
5 changed files with 492 additions and 357 deletions


@@ -5,7 +5,7 @@ Skydive in Kolla
================
Overview
~~~~~~~~
Skydive is an open source real-time network topology and protocols analyzer.
It aims to provide a comprehensive way of understanding what is happening in
the network infrastructure.
@@ -16,12 +16,14 @@ All the information is stored in an Elasticsearch database.
Configuration on Kolla deployment
---------------------------------
Enable Skydive in ``/etc/kolla/globals.yml`` file:

.. code-block:: yaml

   enable_skydive: "yes"
   enable_elasticsearch: "yes"

.. end
Verify operation
----------------
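The body of this section is elsewhere in the file; as a minimal first check, the analyzer's HTTP endpoint can be probed. This is a sketch under assumptions not stated above: the Skydive analyzer listening on its default port (8082) at the internal VIP address.

```console
# Assumption: Skydive analyzer on its default port 8082 at the internal VIP.
# Replace <kolla_internal_vip_address> with your deployment's VIP.
curl -sI http://<kolla_internal_vip_address>:8082 | head -n 1
```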


@@ -5,12 +5,13 @@ Swift in Kolla
==============
Overview
~~~~~~~~
Kolla can deploy a full working Swift setup in either an **all-in-one** or
**multinode** setup.
Disks with a partition table (recommended)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Swift requires block devices to be available for storage. To prepare a disk
for use as a Swift storage device, a special partition name and filesystem
@@ -19,32 +20,39 @@ label need to be added.
The following should be done on each storage node; the example is shown
for three disks:
.. warning::

   ALL DATA ON DISK will be LOST!

.. code-block:: console

   index=0
   for d in sdc sdd sde; do
       parted /dev/${d} -s -- mklabel gpt mkpart KOLLA_SWIFT_DATA 1 -1
       sudo mkfs.xfs -f -L d${index} /dev/${d}1
       (( index++ ))
   done

.. end
For evaluation, loopback devices can be used in lieu of real disks:
.. code-block:: console

   index=0
   for d in sdc sdd sde; do
       free_device=$(losetup -f)
       fallocate -l 1G /tmp/$d
       losetup $free_device /tmp/$d
       parted $free_device -s -- mklabel gpt mkpart KOLLA_SWIFT_DATA 1 -1
       sudo mkfs.xfs -f -L d${index} ${free_device}p1
       (( index++ ))
   done

.. end
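Since the loops above destroy data, it can help to preview exactly what they will run first. A small sketch (not from the original guide) that only prints the commands; note it increments the counter with ``index=$((index + 1))``, which, unlike ``(( index++ ))`` starting from 0, does not return a failure status under ``set -e``:

```shell
# Print the partitioning/formatting commands for review instead of running them.
index=0
for d in sdc sdd sde; do
    echo "parted /dev/${d} -s -- mklabel gpt mkpart KOLLA_SWIFT_DATA 1 -1"
    echo "mkfs.xfs -f -L d${index} /dev/${d}1"
    index=$((index + 1))   # safe under 'set -e', unlike (( index++ )) at 0
done
```

After reviewing the output, run the real loop from the guide; the labels ``d0``, ``d1``, ``d2`` should then be visible with ``blkid``.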
Disks without a partition table
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Kolla also supports unpartitioned disk (filesystem on ``/dev/sdc`` instead of
``/dev/sdc1``) detection purely based on filesystem label. This is generally
@@ -54,150 +62,193 @@ deployment already using disk like this.
Given hard disks with labels swd1, swd2, swd3, use the following settings in
``ansible/roles/swift/defaults/main.yml``.
.. code-block:: yaml

   swift_devices_match_mode: "prefix"
   swift_devices_name: "swd"

.. end
Rings
~~~~~
Before running Swift we need to generate **rings**, which are binary compressed
files that at a high level let the various Swift services know where data is in
the cluster. We hope to automate this process in a future release.
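For orientation, the three numbers handed to ``swift-ring-builder ... create`` in the commands below are the partition power, the replica count, and the minimum number of hours between moves of a given partition. A small sketch (shell arithmetic only, no Swift required) of what ``create 10 3 1`` implies:

```shell
# swift-ring-builder <file>.builder create <part_power> <replicas> <min_part_hours>
part_power=10          # the ring is split into 2^part_power partitions
replicas=3             # each partition is stored on 3 devices
min_part_hours=1       # a partition will not be moved twice within 1 hour
partitions=$((1 << part_power))
echo "partitions=${partitions} replicas=${replicas} min_part_hours=${min_part_hours}"
# prints: partitions=1024 replicas=3 min_part_hours=1
```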
The following example commands should be run from the ``operator`` node to
generate rings for a demo setup. The commands work with the **disks with
partition table** example listed above. Please modify accordingly if your
setup is different.
Prepare for ring generation
---------------------------

To prepare for Swift ring generation, run the following commands to
initialize the environment variables and create the
``/etc/kolla/config/swift`` directory:

.. code-block:: console

   STORAGE_NODES=(192.168.0.2 192.168.0.3 192.168.0.4)
   KOLLA_SWIFT_BASE_IMAGE="kolla/oraclelinux-source-swift-base:4.0.0"
   mkdir -p /etc/kolla/config/swift

.. end

Generate Object Ring
--------------------

To generate the Swift object ring, run the following commands:

.. code-block:: console

   docker run \
     --rm \
     -v /etc/kolla/config/swift/:/etc/kolla/config/swift/ \
     $KOLLA_SWIFT_BASE_IMAGE \
     swift-ring-builder \
     /etc/kolla/config/swift/object.builder create 10 3 1

   for node in ${STORAGE_NODES[@]}; do
       for i in {0..2}; do
           docker run \
             --rm \
             -v /etc/kolla/config/swift/:/etc/kolla/config/swift/ \
             $KOLLA_SWIFT_BASE_IMAGE \
             swift-ring-builder \
             /etc/kolla/config/swift/object.builder add r1z1-${node}:6000/d${i} 1;
       done
   done

.. end

Generate Account Ring
---------------------

To generate the Swift account ring, run the following commands:

.. code-block:: console

   docker run \
     --rm \
     -v /etc/kolla/config/swift/:/etc/kolla/config/swift/ \
     $KOLLA_SWIFT_BASE_IMAGE \
     swift-ring-builder \
     /etc/kolla/config/swift/account.builder create 10 3 1

   for node in ${STORAGE_NODES[@]}; do
       for i in {0..2}; do
           docker run \
             --rm \
             -v /etc/kolla/config/swift/:/etc/kolla/config/swift/ \
             $KOLLA_SWIFT_BASE_IMAGE \
             swift-ring-builder \
             /etc/kolla/config/swift/account.builder add r1z1-${node}:6001/d${i} 1;
       done
   done

.. end

Generate Container Ring
-----------------------

To generate the Swift container ring, run the following commands, then
rebalance all three rings:

.. code-block:: console

   docker run \
     --rm \
     -v /etc/kolla/config/swift/:/etc/kolla/config/swift/ \
     $KOLLA_SWIFT_BASE_IMAGE \
     swift-ring-builder \
     /etc/kolla/config/swift/container.builder create 10 3 1

   for node in ${STORAGE_NODES[@]}; do
       for i in {0..2}; do
           docker run \
             --rm \
             -v /etc/kolla/config/swift/:/etc/kolla/config/swift/ \
             $KOLLA_SWIFT_BASE_IMAGE \
             swift-ring-builder \
             /etc/kolla/config/swift/container.builder add r1z1-${node}:6002/d${i} 1;
       done
   done

   for ring in object account container; do
       docker run \
         --rm \
         -v /etc/kolla/config/swift/:/etc/kolla/config/swift/ \
         $KOLLA_SWIFT_BASE_IMAGE \
         swift-ring-builder \
         /etc/kolla/config/swift/${ring}.builder rebalance;
   done

.. end
For more information, see
https://docs.openstack.org/project-install-guide/object-storage/ocata/initial-rings.html
Deploying
~~~~~~~~~
Enable Swift in ``/etc/kolla/globals.yml``:
.. code-block:: yaml

   enable_swift: "yes"

.. end
Once the rings are in place, deploying Swift is the same as any other Kolla
Ansible service:
.. code-block:: console

   # kolla-ansible deploy -i <path/to/inventory-file>
.. end
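Once the deploy finishes, a quick check (not part of the original guide) is to list the Swift containers on a storage or proxy host and confirm they stay up:

```console
# docker ps --filter name=swift --format "{{.Names}}: {{.Status}}"
```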
Verification
~~~~~~~~~~~~
A very basic smoke test:

.. code-block:: console

   $ openstack container create mycontainer

   +---------------------------------------+--------------+------------------------------------+
   | account                               | container    | x-trans-id                         |
   +---------------------------------------+--------------+------------------------------------+
   | AUTH_7b938156dba44de7891f311c751f91d8 | mycontainer  | txb7f05fa81f244117ac1b7-005a0e7803 |
   +---------------------------------------+--------------+------------------------------------+

   $ openstack object create mycontainer README.rst

   +---------------+--------------+----------------------------------+
   | object        | container    | etag                             |
   +---------------+--------------+----------------------------------+
   | README.rst    | mycontainer  | 2634ecee0b9a52ba403a503cc7d8e988 |
   +---------------+--------------+----------------------------------+

   $ openstack container show mycontainer

   +--------------+---------------------------------------+
   | Field        | Value                                 |
   +--------------+---------------------------------------+
   | account      | AUTH_7b938156dba44de7891f311c751f91d8 |
   | bytes_used   | 6684                                  |
   | container    | mycontainer                           |
   | object_count | 1                                     |
   +--------------+---------------------------------------+

   $ openstack object store account show

   +------------+---------------------------------------+
   | Field      | Value                                 |
   +------------+---------------------------------------+
   | Account    | AUTH_7b938156dba44de7891f311c751f91d8 |
   | Bytes      | 6684                                  |
   | Containers | 1                                     |
   | Objects    | 1                                     |
   +------------+---------------------------------------+

.. end
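To close the loop on the smoke test, the uploaded object can be fetched back; ``openstack object save`` writes it to a local file, and since Swift's etag is the MD5 of the object's content, the checksum of the downloaded file should match the etag shown above. The local path here is illustrative:

```console
$ openstack object save mycontainer README.rst --file /tmp/README.rst
$ md5sum /tmp/README.rst
```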


@@ -1,14 +1,16 @@
===============
Tacker in Kolla
===============
"Tacker is an OpenStack service for NFV Orchestration with a general purpose
VNF Manager to deploy and operate Virtual Network Functions (VNFs) and
Network Services on an NFV Platform. It is based on ETSI MANO Architectural
Framework."
For more details about Tacker, see `OpenStack Tacker Documentation
<https://docs.openstack.org/tacker/latest/>`__.
Overview
~~~~~~~~
As of the Pike release, tacker requires the following services
to be enabled to operate correctly.
@@ -26,7 +28,7 @@ Optionally tacker supports the following services and features.
* Opendaylight
Compatibility
~~~~~~~~~~~~~
Tacker is supported by the following distros and install_types.
@@ -39,7 +41,7 @@ Tacker is supported by the following distros and install_types.
* Only source images.
Preparation and Deployment
~~~~~~~~~~~~~~~~~~~~~~~~~~
By default tacker and required services are disabled in
the ``group_vars/all.yml`` file.
@@ -48,53 +50,63 @@ In order to enable them, you need to edit the file
.. note::
   Heat is enabled by default, ensure it is not disabled.

.. code-block:: yaml

   enable_tacker: "yes"
   enable_barbican: "yes"
   enable_mistral: "yes"
   enable_redis: "yes"

.. end
.. warning::
   Barbican is required in multinode deployments to share VIM fernet_keys.
   If it is not enabled, only one tacker-server host will have the keys,
   and any request made to a different tacker-server will fail with an
   error similar to ``No such file or directory /etc/tacker/vim/fernet_keys``.
Deploy tacker and related services:

.. code-block:: console

   $ kolla-ansible deploy

.. end

Verification
~~~~~~~~~~~~
Generate the credentials file:

.. code-block:: console

   $ kolla-ansible post-deploy

.. end
Source the credentials file:

.. code-block:: console

   $ . /etc/kolla/admin-openrc.sh

.. end
Create base neutron networks and glance images:

.. code-block:: console

   $ sh tools/init-runonce

.. end
.. note::
   The ``init-runonce`` script is located in the ``$PYTHON_PATH/kolla-ansible``
   folder when kolla-ansible is installed from pip.
In the kolla-ansible git repository a `tacker demo <https://github.com/openstack/kolla-ansible/tree/master/contrib/demos/tacker>`_
is present in ``kolla-ansible/contrib/demos/tacker/`` that will
@@ -104,18 +116,22 @@ Install python-tackerclient.
.. note::
   The barbican, heat and mistral python clients are in tacker's
   requirements and will be installed as dependencies.
.. code-block:: console

   $ pip install python-tackerclient

.. end
Execute the ``deploy-tacker-demo`` script to initialize the VNF creation:

.. code-block:: console

   $ sh deploy-tacker-demo

.. end
The tacker demo script will create a sample VNF Descriptor (VNFD) file,
then register a default VIM, create a tacker VNFD and finally
@@ -127,42 +143,51 @@ running in nova and with its corresponding heat stack CREATION_COMPLETE.
Verify that the tacker VNF status is ACTIVE:

.. code-block:: console

   $ tacker vnf-list

   +--------------------------------------+------------------+-----------------------+--------+--------------------------------------+--------------------------------------+
   | id                                   | name             | mgmt_url              | status | vim_id                               | vnfd_id                              |
   +--------------------------------------+------------------+-----------------------+--------+--------------------------------------+--------------------------------------+
   | c52fcf99-101d-427b-8a2d-c9ef54af8b1d | kolla-sample-vnf | {"VDU1": "10.0.0.10"} | ACTIVE | eb3aa497-192c-4557-a9d7-1dff6874a8e6 | 27e8ea98-f1ff-4a40-a45c-e829e53b3c41 |
   +--------------------------------------+------------------+-----------------------+--------+--------------------------------------+--------------------------------------+

.. end
Verify that the nova instance status is ACTIVE:

.. code-block:: console

   $ openstack server list

   +--------------------------------------+-------------------------------------------------------+--------+--------------------+--------+-------------------------------------------------------------------------------------------------------------------------+
   | ID                                   | Name                                                  | Status | Networks           | Image  | Flavor                                                                                                                    |
   +--------------------------------------+-------------------------------------------------------+--------+--------------------+--------+-------------------------------------------------------------------------------------------------------------------------+
   | d2d59eeb-8526-4826-8f1b-c50b571395e2 | ta-cf99-101d-427b-8a2d-c9ef54af8b1d-VDU1-fchiv6saay7p | ACTIVE | demo-net=10.0.0.10 | cirros | tacker.vnfm.infra_drivers.openstack.openstack_OpenStack-c52fcf99-101d-427b-8a2d-c9ef54af8b1d-VDU1_flavor-yl4bzskwxdkn     |
   +--------------------------------------+-------------------------------------------------------+--------+--------------------+--------+-------------------------------------------------------------------------------------------------------------------------+

.. end
Verify that the Heat stack status is CREATE_COMPLETE:

.. code-block:: console

   $ openstack stack list

   +--------------------------------------+----------------------------------------------------------------------------------------------+----------------------------------+-----------------+----------------------+--------------+
   | ID                                   | Stack Name                                                                                   | Project                          | Stack Status    | Creation Time        | Updated Time |
   +--------------------------------------+----------------------------------------------------------------------------------------------+----------------------------------+-----------------+----------------------+--------------+
   | 289a6686-70f6-4db7-aa10-ed169fe547a6 | tacker.vnfm.infra_drivers.openstack.openstack_OpenStack-c52fcf99-101d-427b-8a2d-c9ef54af8b1d | 1243948e59054aab83dbf2803e109b3f | CREATE_COMPLETE | 2017-08-23T09:49:50Z | None         |
   +--------------------------------------+----------------------------------------------------------------------------------------------+----------------------------------+-----------------+----------------------+--------------+

.. end
After the correct functionality of tacker is verified, the tacker demo
can be cleaned up by executing the ``cleanup-tacker`` script:

.. code-block:: console

   $ sh cleanup-tacker

.. end


@@ -1,11 +1,12 @@
.. _vmware-guide:
===============
VMware in Kolla
===============
Overview
~~~~~~~~
Kolla can deploy the Nova and Neutron Service(s) for VMware vSphere.
Depending on the network architecture (NsxV or DVS) you choose, Kolla deploys
the following OpenStack services for VMware vSphere:
@@ -42,11 +43,11 @@ bridge and works through VLAN.
.. note::
   VMware NSX-DVS plugin does not support tenant networks, so all VMs should
   attach to Provider VLAN/Flat networks.
VMware NSX-V
~~~~~~~~~~~~
Preparation
-----------
@@ -57,51 +58,57 @@ For more information, please see `VMware NSX-V documentation <https://docs.vmwar
.. note::
   In addition, it is important to modify the firewall rule of vSphere to make
   sure that VNC is accessible from outside the VMware environment.

   On every VMware host, edit ``/etc/vmware/firewall/vnc.xml`` as below:

.. code-block:: none
   <!-- FirewallRule for VNC Console -->
   <ConfigRoot>
     <service>
       <id>VNC</id>
       <rule id = '0000'>
         <direction>inbound</direction>
         <protocol>tcp</protocol>
         <porttype>dst</porttype>
         <port>
           <begin>5900</begin>
           <end>5999</end>
         </port>
       </rule>
       <rule id = '0001'>
         <direction>outbound</direction>
         <protocol>tcp</protocol>
         <porttype>dst</porttype>
         <port>
           <begin>0</begin>
           <end>65535</end>
         </port>
       </rule>
       <enabled>true</enabled>
       <required>false</required>
     </service>
   </ConfigRoot>

.. end
Then refresh the firewall configuration:

.. code-block:: console

   # esxcli network firewall refresh

.. end
Verify that the firewall config is applied:
.. code-block:: console

   # esxcli network firewall ruleset list

.. end
Deployment
----------
@@ -109,97 +116,111 @@ Deployment
Enable VMware nova-compute plugin and NSX-V neutron-server plugin in
``/etc/kolla/globals.yml``:
.. code-block:: yaml

   nova_compute_virt_type: "vmware"
   neutron_plugin_agent: "vmware_nsxv"

.. end
.. note::
   VMware NSX-V also supports Neutron FWaaS, LBaaS and VPNaaS services. You
   can enable them by setting these options in ``globals.yml``:

   * enable_neutron_vpnaas: "yes"
   * enable_neutron_lbaas: "yes"
   * enable_neutron_fwaas: "yes"
If you want to set VMware datastore as cinder backend, enable it in
``/etc/kolla/globals.yml``:
.. code-block:: yaml

   enable_cinder: "yes"
   cinder_backend_vmwarevc_vmdk: "yes"
   vmware_datastore_name: "TestDatastore"

.. end
If you want to set VMware datastore as glance backend, enable it in
``/etc/kolla/globals.yml``:
.. code-block:: yaml

   glance_backend_vmware: "yes"
   vmware_vcenter_name: "TestDatacenter"
   vmware_datastore_name: "TestDatastore"

.. end
VMware options are required in ``/etc/kolla/globals.yml``; these options
should be configured correctly according to your NSX-V environment.
Options for ``nova-compute`` and ``ceilometer``:
.. code-block:: yaml

   vmware_vcenter_host_ip: "127.0.0.1"
   vmware_vcenter_host_username: "admin"
   vmware_vcenter_cluster_name: "cluster-1"
   vmware_vcenter_insecure: "True"
   vmware_vcenter_datastore_regex: ".*"

.. end
.. note::
   The VMware vCenter password has to be set in ``/etc/kolla/passwords.yml``.

   .. code-block:: yaml

      vmware_vcenter_host_password: "admin"

   .. end
Options for Neutron NSX-V support:
.. code-block:: yaml

   vmware_nsxv_user: "nsx_manager_user"
   vmware_nsxv_manager_uri: "https://127.0.0.1"
   vmware_nsxv_cluster_moid: "TestCluster"
   vmware_nsxv_datacenter_moid: "TestDataCenter"
   vmware_nsxv_resource_pool_id: "TestRSGroup"
   vmware_nsxv_datastore_id: "TestDataStore"
   vmware_nsxv_external_network: "TestDVSPort-Ext"
   vmware_nsxv_vdn_scope_id: "TestVDNScope"
   vmware_nsxv_dvs_id: "TestDVS"
   vmware_nsxv_backup_edge_pool: "service:compact:1:2"
   vmware_nsxv_spoofguard_enabled: "false"
   vmware_nsxv_metadata_initializer: "false"
   vmware_nsxv_edge_ha: "false"

.. end
.. note::
   If you want to set secure connections to VMware, set
   ``vmware_vcenter_insecure`` to false. Secure connections to vCenter
   require a CA file; copy the vCenter CA file to
   ``/etc/kolla/config/vmware_ca``.
.. note::
   The VMware NSX-V password has to be set in ``/etc/kolla/passwords.yml``.

   .. code-block:: yaml

      vmware_nsxv_password: "nsx_manager_password"

   .. end

Then you should start the :command:`kolla-ansible` deployment normally, as in
a KVM/QEMU deployment.
VMware NSX-DVS
~~~~~~~~~~~~~~
Preparation
-----------
@@ -216,28 +237,34 @@ Deployment
Enable the VMware nova-compute plugin and the NSX-DVS neutron-server plugin
in ``/etc/kolla/globals.yml``:
.. code-block:: yaml

   nova_compute_virt_type: "vmware"
   neutron_plugin_agent: "vmware_dvs"

.. end
If you want to set VMware datastore as Cinder backend, enable it in
``/etc/kolla/globals.yml``:
.. code-block:: yaml

   enable_cinder: "yes"
   cinder_backend_vmwarevc_vmdk: "yes"
   vmware_datastore_name: "TestDatastore"

.. end
If you want to set VMware datastore as Glance backend, enable it in
``/etc/kolla/globals.yml``:
.. code-block:: yaml

   glance_backend_vmware: "yes"
   vmware_vcenter_name: "TestDatacenter"
   vmware_datastore_name: "TestDatastore"

.. end
VMware options are required in ``/etc/kolla/globals.yml``; these options
should be configured correctly according to the vSphere environment you
installed
@@ -246,23 +273,27 @@ the following options.
Options for Neutron NSX-DVS support:
.. code-block:: yaml

   vmware_dvs_host_ip: "192.168.1.1"
   vmware_dvs_host_port: "443"
   vmware_dvs_host_username: "admin"
   vmware_dvs_dvs_name: "VDS-1"
   vmware_dvs_dhcp_override_mac: ""

.. end
.. note::
   The VMware NSX-DVS password has to be set in ``/etc/kolla/passwords.yml``.

   .. code-block:: yaml

      vmware_dvs_host_password: "password"

   .. end

Then you should start the :command:`kolla-ansible` deployment normally, as in
a KVM/QEMU deployment.
For more information on OpenStack vSphere, see
`VMware vSphere


@@ -1,9 +1,12 @@
============
Zun in Kolla
============
"Zun is an OpenStack Container service. It aims to provide an
OpenStack API for provisioning and managing containerized
workload on OpenStack."
For more details about Zun, see `OpenStack Zun Documentation
<https://docs.openstack.org/zun/latest/>`__.
Preparation and Deployment
--------------------------
@@ -14,101 +17,124 @@ configure kuryr refer to :doc:`kuryr-guide`.
To allow Zun Compute to connect to the Docker daemon, add the following in the
``docker.service`` file on each zun-compute node.
.. code-block:: none

   ExecStart= -H tcp://<DOCKER_SERVICE_IP>:2375 -H unix:///var/run/docker.sock --cluster-store=etcd://<DOCKER_SERVICE_IP>:2379 --cluster-advertise=<DOCKER_SERVICE_IP>:2375

.. end
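Rather than editing the unit file shipped by the docker package in place, the same override is commonly delivered as a systemd drop-in, which survives package upgrades. A sketch under assumptions (the drop-in file name and the ``/usr/bin/dockerd`` path are illustrative, not from the original guide):

```ini
# /etc/systemd/system/docker.service.d/kolla.conf (illustrative path)
[Service]
# An empty ExecStart= first clears the packaged command line.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://<DOCKER_SERVICE_IP>:2375 -H unix:///var/run/docker.sock --cluster-store=etcd://<DOCKER_SERVICE_IP>:2379 --cluster-advertise=<DOCKER_SERVICE_IP>:2375
```

After writing the drop-in, run ``systemctl daemon-reload`` and restart the docker service.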
.. note::
   ``DOCKER_SERVICE_IP`` is the zun-compute host IP address. ``2375`` is the
   port that allows the Docker daemon to be accessed remotely.
By default zun is disabled in the ``group_vars/all.yml`` file.
In order to enable it, you need to edit the ``globals.yml`` file and set the
following variables:
.. code-block:: yaml

   enable_zun: "yes"
   enable_kuryr: "yes"
   enable_etcd: "yes"

.. end
Deploy the OpenStack cloud and zun:

.. code-block:: console

   $ kolla-ansible deploy

.. end

Verification
------------

#. Generate the credentials file:

   .. code-block:: console

      $ kolla-ansible post-deploy

   .. end

#. Source credentials file:

   .. code-block:: console

      $ . /etc/kolla/admin-openrc.sh

   .. end

#. Download and create a glance container image:

   .. code-block:: console

      $ docker pull cirros
      $ docker save cirros | openstack image create cirros --public \
          --container-format docker --disk-format raw

   .. end

#. Create zun container:

   .. code-block:: console

      $ zun create --name test --net network=demo-net cirros ping -c4 8.8.8.8

   .. end

   .. note::

      Kuryr does not support networks with DHCP enabled, disable DHCP in the
      subnet used for zun containers.

      .. code-block:: console

         $ openstack subnet set --no-dhcp <subnet>

      .. end

#. Verify container is created:

   .. code-block:: console

      $ zun list

      +--------------------------------------+------+---------------+---------+------------+------------+-------+
      | uuid                                 | name | image         | status  | task_state | addresses  | ports |
      +--------------------------------------+------+---------------+---------+------------+------------+-------+
      | 3719a73e-5f86-47e1-bc5f-f4074fc749f2 | test | cirros        | Created | None       | 172.17.0.3 | []    |
      +--------------------------------------+------+---------------+---------+------------+------------+-------+

   .. end

#. Start container:

   .. code-block:: console

      $ zun start test
      Request to start container test has been accepted.

   .. end

#. Verify container:

   .. code-block:: console

      $ zun logs test
      PING 8.8.8.8 (8.8.8.8): 56 data bytes
      64 bytes from 8.8.8.8: seq=0 ttl=45 time=96.396 ms
      64 bytes from 8.8.8.8: seq=1 ttl=45 time=96.504 ms
      64 bytes from 8.8.8.8: seq=2 ttl=45 time=96.721 ms
      64 bytes from 8.8.8.8: seq=3 ttl=45 time=95.884 ms

      --- 8.8.8.8 ping statistics ---
      4 packets transmitted, 4 packets received, 0% packet loss
      round-trip min/avg/max = 95.884/96.376/96.721 ms

   .. end
For more information about how zun works, see
`zun, OpenStack Container service <https://docs.openstack.org/zun/latest/>`__.