Merge "Add support and documentation for OVN"

Zuul 2018-02-15 11:44:49 +00:00 committed by Gerrit Code Review
commit 519391fac0
7 changed files with 496 additions and 1 deletion

View File

@@ -531,7 +531,7 @@ spec:
path: /proc
- name: openvswitch
hostPath:
path: /var/run/openvswitch
path: ${OVS_HOST_PATH}
EOF
}

View File

@@ -0,0 +1,255 @@
[[local|localrc]]
enable_plugin kuryr-kubernetes \
https://git.openstack.org/openstack/kuryr-kubernetes
# If you do not want stacking to clone new versions of the enabled services,
# for example when you have made local modifications and need to ./unstack.sh
# and ./stack.sh again, uncomment the following
# RECLONE="no"
# Log settings for better readability
LOGFILE=devstack.log
LOG_COLOR=False
# Credentials
ADMIN_PASSWORD=pass
DATABASE_PASSWORD=pass
RABBIT_PASSWORD=pass
SERVICE_PASSWORD=pass
SERVICE_TOKEN=pass
# Enable Keystone v3
IDENTITY_API_VERSION=3
# In the interest of speed and keeping the deployment lightweight, we will be
# explicit about which services we enable
ENABLED_SERVICES=""
# OVN components
enable_plugin networking-ovn https://git.openstack.org/openstack/networking-ovn
enable_service ovn-northd
enable_service ovn-controller
enable_service networking-ovn-metadata-agent
# Neutron services
enable_service neutron
enable_service q-svc
# OVS HOST PATH
# =============
# OVS_HOST_PATH=/var/run/openvswitch
OVS_HOST_PATH=/usr/local/var/run/openvswitch
# OCTAVIA
KURYR_K8S_LBAAS_USE_OCTAVIA=True
# Uncomment it to use L2 communication between loadbalancer and member pods
# KURYR_K8S_OCTAVIA_MEMBER_MODE=L2
if [[ "$KURYR_K8S_LBAAS_USE_OCTAVIA" == "True" ]]; then
# Octavia LBaaSv2
LIBS_FROM_GIT+=python-octaviaclient
enable_plugin octavia https://git.openstack.org/openstack/octavia
enable_service octavia
enable_service o-api
enable_service o-cw
enable_service o-hm
enable_service o-hk
## Octavia Deps
### Image
### Barbican
enable_plugin barbican https://git.openstack.org/openstack/barbican
### Nova
enable_service n-api
enable_service n-api-meta
enable_service n-cpu
enable_service n-cond
enable_service n-sch
enable_service placement-api
enable_service placement-client
### Glance
enable_service g-api
enable_service g-reg
### Neutron-lbaas
#### In case Octavia is older than Pike, neutron-lbaas is needed
enable_plugin neutron-lbaas \
git://git.openstack.org/openstack/neutron-lbaas
enable_service q-lbaasv2
else
# LBaaSv2 service and Haproxy agent
enable_plugin neutron-lbaas \
git://git.openstack.org/openstack/neutron-lbaas
enable_service q-lbaasv2
fi
# Keystone
enable_service key
# dependencies
enable_service mysql
enable_service rabbit
# By default use all the services from the kuryr-kubernetes plugin
# Docker
# ======
# If you already have docker configured, running and with its socket writable
# by the stack user, you can omit the following line.
enable_plugin devstack-plugin-container https://git.openstack.org/openstack/devstack-plugin-container
# Etcd
# ====
# The default is for devstack to run etcd for you.
enable_service etcd3
# You can also run the deprecated etcd containerized and select its image and
# version by commenting the etcd3 service enablement and uncommenting
#
# enable_service legacy_etcd
#
# You can also modify the following defaults.
# KURYR_ETCD_IMAGE="quay.io/coreos/etcd"
# KURYR_ETCD_VERSION="v3.0.8"
#
# You can select the listening and advertising client and peering Etcd
# addresses by uncommenting and changing the following defaults:
# KURYR_ETCD_ADVERTISE_CLIENT_URL=http://my_host_ip:2379
# KURYR_ETCD_ADVERTISE_PEER_URL=http://my_host_ip:2380
# KURYR_ETCD_LISTEN_CLIENT_URL=http://0.0.0.0:2379
# KURYR_ETCD_LISTEN_PEER_URL=http://0.0.0.0:2380
#
# If you already have an etcd cluster configured and running, you can just
# comment out the lines enabling legacy_etcd and etcd3
# then uncomment and set the following line:
# KURYR_ETCD_CLIENT_URL="http://etcd_ip:etcd_client_port"
# Kubernetes
# ==========
#
# Kubernetes is run from the hyperkube docker image.
# If you already have a Kubernetes deployment, you can use it instead and omit
# enabling the Kubernetes services (except Kubelet, which must be run by
# devstack so that it uses our development CNI driver).
#
# The default is, again, for devstack to run the Kubernetes services:
enable_service kubernetes-api
enable_service kubernetes-controller-manager
enable_service kubernetes-scheduler
# We use hyperkube to run the services. You can select the hyperkube image
# and/or version by uncommenting and setting the following ENV vars to values
# different from these defaults:
# KURYR_HYPERKUBE_IMAGE="gcr.io/google_containers/hyperkube-amd64"
# KURYR_HYPERKUBE_VERSION="v1.6.2"
#
# If port 8080 is already bound to another service, you will need to have the
# kubernetes API server bind to another port. In order to do that, uncomment
# and set a different port number in:
# KURYR_K8S_API_PORT="8080"
#
# If you want to test with a different range for the Cluster IPs, uncomment and
# set the following ENV var to a different CIDR
# KURYR_K8S_CLUSTER_IP_RANGE="10.0.0.0/24"
#
# If, however, you are reusing an existing deployment, you should uncomment and
# set an ENV var so that the Kubelet that devstack runs can find the API server:
# KURYR_K8S_API_URL="http (or https, if K8S is SSL/TLS enabled)://k8s_api_ip:k8s_api_port"
#
# If the kubernetes API server is 'https' enabled, set the path of the SSL cert files
# KURYR_K8S_API_CERT="/etc/kubernetes/certs/kubecfg.crt"
# KURYR_K8S_API_KEY="/etc/kubernetes/certs/kubecfg.key"
# KURYR_K8S_API_CACERT="/etc/kubernetes/certs/ca.crt"
# Kubelet
# =======
#
# Kubelet should almost invariably be run by devstack
enable_service kubelet
# You can specify a different location for the hyperkube binary that will be
# extracted from the hyperkube container into the Host filesystem:
# KURYR_HYPERKUBE_BINARY=/usr/local/bin/hyperkube
#
# NOTE: KURYR_HYPERKUBE_IMAGE and KURYR_HYPERKUBE_VERSION also affect which
# binary is selected for the Kubelet.
# Kuryr watcher
# =============
#
# Just like the Kubelet, you'll want to have the watcher enabled. It is the
# part of the codebase that connects to the Kubernetes API server to read the
# resource events and convert them to Neutron actions
enable_service kuryr-kubernetes
# Kuryr Daemon
# ============
#
# Kuryr can run its CNI plugin in a daemonized way - i.e. kubelet will run the
# kuryr CNI driver and the driver will pass requests to the Kuryr daemon
# running on the node, instead of processing them on its own. This limits the
# number of Kubernetes API requests (as only the Kuryr daemon will watch for
# new pod events) and should increase scalability in environments that often
# create and delete pods. To enable kuryr-daemon, uncomment the next line.
enable_service kuryr-daemon
# Containerized Kuryr
# ===================
#
# Kuryr can be installed on Kubernetes as a pair of a Deployment
# (kuryr-controller) and a DaemonSet (kuryr-cni). If you want DevStack to
# deploy the Kuryr services as pods on Kubernetes, uncomment the next line.
KURYR_K8S_CONTAINERIZED_DEPLOYMENT=True
# Kuryr POD VIF Driver
# ====================
#
# Set up the VIF Driver to be used. The default one is neutron-vif, but if
# a nested deployment is desired, the corresponding driver needs to be set,
# e.g.: nested-vlan or nested-macvlan
# KURYR_POD_VIF_DRIVER=neutron-vif
# Kuryr Ports Pools
# =================
#
# To speed up container boot time, the kuryr ports pool driver can be enabled
# by uncommenting the next line, so that neutron port resources are precreated
# and ready to be used by the pods when needed
# KURYR_USE_PORTS_POOLS=True
#
# By default the pool driver is noop, i.e., there is no pool. If you want to
# use the pool optimizations, you need to set it to 'neutron' for the
# baremetal case, or to 'nested' for the nested case
# KURYR_VIF_POOL_DRIVER=noop
#
# There are extra configuration options for the pools: the minimum number of
# ports that should be ready to use in each pool, the maximum (0 to leave it
# unset), and the batch size for the repopulation actions, i.e., the number of
# neutron ports to create in bulk operations. Finally, the update frequency
# between actions over the pool can be set too
# KURYR_VIF_POOL_MIN=5
# KURYR_VIF_POOL_MAX=0
# KURYR_VIF_POOL_BATCH=10
# KURYR_VIF_POOL_UPDATE_FREQ=20
# Kuryr VIF Pool Manager
# ======================
#
# Uncomment the next line to enable the pool manager. Note that it requires
# the nested-vlan pod VIF driver, as well as the ports pool being enabled and
# configured with the nested driver
# KURYR_VIF_POOL_MANAGER=True
# Increase Octavia amphorae timeout so that the first LB amphora has time to
# build and boot
if [[ "$KURYR_K8S_LBAAS_USE_OCTAVIA" == "True" ]]; then
IMAGE_URLS+=",http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img"
else
NEUTRON_LBAAS_SERVICE_PROVIDERV2="LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default"
fi
[[post-config|$OCTAVIA_CONF]]
[controller_worker]
amp_active_retries=9999

View File

@@ -0,0 +1,47 @@
[[local|localrc]]
# If you do not want stacking to clone new versions of the enabled services,
# for example when you have made local modifications and need to ./unstack.sh
# and ./stack.sh again, uncomment the following
# RECLONE="no"
# Log settings for better readability
LOGFILE=devstack.log
LOG_COLOR=False
MULTI_HOST=1
# Credentials
ADMIN_PASSWORD=pass
DATABASE_PASSWORD=pass
RABBIT_PASSWORD=pass
SERVICE_PASSWORD=pass
SERVICE_TOKEN=pass
# Enable Keystone v3
IDENTITY_API_VERSION=3
enable_plugin networking-ovn https://git.openstack.org/openstack/networking-ovn
enable_service ovn-northd
enable_service ovn-controller
enable_service networking-ovn-metadata-agent
# Use Neutron instead of nova-network
disable_service n-net
enable_service q-svc
# Disable Neutron agents not used with OVN.
disable_service q-agt
disable_service q-l3
disable_service q-dhcp
disable_service q-meta
# Enable services that depend on the neutron plugin.
enable_plugin neutron https://git.openstack.org/openstack/neutron
enable_service q-trunk
# LBaaSv2 service and Haproxy agent
enable_plugin neutron-lbaas \
git://git.openstack.org/openstack/neutron-lbaas
enable_service q-lbaasv2
NEUTRON_LBAAS_SERVICE_PROVIDERV2="LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default"

View File

@@ -33,6 +33,10 @@ enable_service q-dhcp
enable_service q-l3
enable_service q-svc
# OVS HOST PATH
# =============
# OVS_HOST_PATH=/var/run/openvswitch
# OCTAVIA
KURYR_K8S_LBAAS_USE_OCTAVIA=True
# Uncomment it to use L2 communication between loadbalancer and member pods

View File

@@ -77,3 +77,6 @@ KURYR_VIF_POOL_MANAGER=${KURYR_VIF_POOL_MANAGER:-False}
# Health Server
KURYR_HEALTH_SERVER_PORT=${KURYR_HEALTH_SERVER_PORT:-8082}
# OVS HOST PATH
OVS_HOST_PATH=${OVS_HOST_PATH:-/var/run/openvswitch}

View File

@@ -34,6 +34,7 @@ ML2 drivers.
nested-vlan
nested-macvlan
odl_support
ovn_support
dragonflow_support
containerized
ports-pool

View File

@@ -0,0 +1,185 @@
================================
Kuryr Kubernetes OVN Integration
================================

OVN provides virtual networking for Open vSwitch and is a component of the
Open vSwitch project. OpenStack can use OVN as its network management provider
through the Modular Layer 2 (ML2) north-bound plug-in.

Integrating OVN allows Kuryr to bridge container and VM networking (both
baremetal and nested) in an OVN-based OpenStack deployment.

Testing with DevStack
=====================

The following sections describe how to test OpenStack with OVN using DevStack.
We will start with the baremetal case on a single host, and then cover a
nested environment where containers are created inside VMs.

Single Node Test Environment
----------------------------

1. Create a test system.

It's best to use a throwaway dev system for running DevStack. Your best bet is
to use either CentOS 7 or the latest Ubuntu LTS (16.04, Xenial).

2. Create the ``stack`` user.

::

    $ git clone https://git.openstack.org/openstack-dev/devstack.git
    $ sudo ./devstack/tools/create-stack-user.sh

3. Switch to the ``stack`` user and clone DevStack and kuryr-kubernetes.

::

    $ sudo su - stack
    $ git clone https://git.openstack.org/openstack-dev/devstack.git
    $ git clone https://git.openstack.org/openstack/kuryr-kubernetes.git

4. Configure DevStack to use OVN.

kuryr-kubernetes comes with a sample DevStack configuration file for OVN that
you can start with. You may want to set some values for the various PASSWORD
variables in that file, or change the LBaaS service provider to use. Feel free
to edit it if you'd like, but it should work as-is.

::

    $ cd devstack
    $ cp ../kuryr-kubernetes/devstack/local.conf.ovn.sample local.conf

Note that because OVN compiles OVS from source and keeps its runtime files
under /usr/local/var/run/openvswitch, local.conf must state that this path
differs from the default one (i.e., /var/run/openvswitch).
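
For reference, this is the relevant line already present in the OVN sample
local.conf shown above:

::

    OVS_HOST_PATH=/usr/local/var/run/openvswitch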

Optionally, the ports pool functionality can be enabled by following:
:doc:`./ports-pool`

5. Run DevStack.

This is going to take a while. It installs a bunch of packages, clones a bunch
of git repos, and installs everything from these git repos.

::

    $ ./stack.sh

Once DevStack completes successfully, you should see output that looks
something like this::

    This is your host IP address: 192.168.5.10
    This is your host IPv6 address: ::1
    Keystone is serving at http://192.168.5.10/identity/
    The default users are: admin and demo
    The password: pass

6. Extra configurations.

DevStack does not wire up the public network by default, so we must do some
extra steps for floating IP usage as well as external connectivity:

::

    $ sudo ip link set br-ex up
    $ sudo ip route add 172.24.4.0/24 dev br-ex
    $ sudo ip addr add 172.24.4.1/24 dev br-ex

Then you can create forwarding and NAT rules that will cause "external"
traffic from your instances to get rewritten to your network controller's
IP address and sent out on the network:

::

    $ sudo iptables -A FORWARD -d 172.24.4.0/24 -j ACCEPT
    $ sudo iptables -A FORWARD -s 172.24.4.0/24 -j ACCEPT
    $ sudo iptables -t nat -I POSTROUTING 1 -s 172.24.4.1/24 -j MASQUERADE

Inspect default Configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In order to check the default configuration, in terms of the networks, subnets,
security groups and load balancers created upon a successful DevStack stacking,
you can check the :doc:`../default_configuration`
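
For a quick look you can also list these resources directly with the OpenStack
CLI; a minimal sketch (the load balancer command assumes the Octavia client
installed by the sample local.conf):

::

    $ openstack network list
    $ openstack subnet list
    $ openstack security group list
    $ openstack loadbalancer list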

Testing Network Connectivity
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Once the environment is ready, we can test that network connectivity works
among pods. To do that check out :doc:`../testing_connectivity`
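
As a rough sketch of what that involves (the image, ports and service details
below are illustrative; the linked document is authoritative), you can create a
demo deployment, scale it and curl the resulting service::

    $ kubectl run demo --image=celebdor/kuryr-demo
    $ kubectl scale deploy/demo --replicas=2
    $ kubectl expose deploy/demo --port=80 --target-port=8080
    $ kubectl get svc demo          # note the CLUSTER-IP
    $ curl http://<cluster-ip>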

Nested Containers Test Environment (VLAN)
-----------------------------------------

Another deployment option is nested-vlan, where containers are created inside
OpenStack VMs by using the Trunk ports support. Thus, we first need to deploy
an undercloud DevStack environment with the components needed to create VMs
(e.g., Glance, Nova, Neutron, Keystone, ...), as well as the needed OVN
configuration, such as enabling the trunk support that the VM will need. Then
the overcloud deployment, including the kuryr components, is installed inside
the VM.

Undercloud deployment
~~~~~~~~~~~~~~~~~~~~~

The steps to deploy the undercloud environment are the same as described above
for the `Single Node Test Environment`, the difference being the sample
local.conf to use (step 4), in this case::

    $ cd devstack
    $ cp ../kuryr-kubernetes/devstack/local.conf.pod-in-vm.undercloud.ovn.sample local.conf

The main differences with the default OVN local.conf sample are that:

- There is no need to enable the kuryr-kubernetes plugin, as it will be
  installed inside the VM (overcloud).
- There is no need to enable the Kuryr-related services, as they will also be
  installed inside the VM: kuryr-kubernetes, kubelet, kubernetes-api,
  kubernetes-controller-manager and kubernetes-scheduler.
- Nova and Glance components need to be enabled to be able to create the VM
  where we will install the overcloud.
- The OVN Trunk service plugin needs to be enabled to ensure Trunk ports
  support (see the snippet below).

Once the undercloud deployment has finished, the next step is to create the
overcloud VM by using a parent port of a Trunk so that containers can be
created inside it with their own networks. To do that, follow the steps
detailed at :doc:`../trunk_ports`
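
As a rough sketch of what that document covers (names below are placeholders,
and the remaining server create options are elided), the VM is booted on the
parent port of a trunk::

    $ openstack port create --network <vm-network> overcloud-parent-port
    $ openstack network trunk create --parent-port overcloud-parent-port overcloud-trunk
    $ openstack server create --nic port-id=overcloud-parent-port ... overcloud-vm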

Overcloud deployment
~~~~~~~~~~~~~~~~~~~~

Once the VM is up and running, we can start with the overcloud configuration.
The steps to perform are the same as without OVN integration, i.e., the same
steps as for ML2/OVS:

1. Log into the VM::

    $ ssh -i id_rsa_demo centos@FLOATING_IP

2. Deploy devstack following steps 3 and 4 detailed at :doc:`./nested-vlan`

Testing Nested Network Connectivity
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Similarly to the baremetal testing, we can create a demo deployment in the
overcloud VM, scale it to any number of pods and expose the service to check
if the deployment was successful. To do that check out
:doc:`../testing_nested_connectivity`