Make containerized deployment the default

Currently, by default, the kuryr-kubernetes services (controller and
CNI daemon) are supposed to run as systemd services. In reality, most
real-world deployments run them containerized. With this patch the
KURYR_K8S_CONTAINERIZED_DEPLOYMENT variable defaults to True, so
kuryr-kubernetes deployments are containerized without any extra
setting.
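For illustration, an operator who still wants the previous systemd-based
deployment would now have to opt out explicitly in local.conf (a minimal
sketch; only the variable name comes from this patch, the rest of the
local.conf is assumed):

```shell
[[local|localrc]]
# Containerized deployment is now the default; opt back into
# systemd-managed kuryr-kubernetes services explicitly:
KURYR_K8S_CONTAINERIZED_DEPLOYMENT=False
```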

Secondly, we agreed[1] that all the gate names should also reflect
that change.

Finally, the non-working and outdated local.conf samples for
OpenDaylight are removed.

The behaviour of the sample local.confs is unchanged: using them to
spin up DevStack will still run the systemd services.

[1] https://etherpad.opendev.org/p/apr2021-ptg-kuryr

Change-Id: I2c13893c80e9e5b3b2ac0cb64dd9bd9a40d99e63
This commit is contained in:
Roman Dobosz 2021-05-20 14:59:21 +02:00
parent c034b0060e
commit e005247b89
11 changed files with 101 additions and 404 deletions


@ -13,7 +13,7 @@
# limitations under the License.
- job:
name: kuryr-kubernetes-tempest-multinode-containerized
name: kuryr-kubernetes-tempest-multinode
parent: kuryr-kubernetes-tempest
description: |
Kuryr-Kubernetes tempest multinode job
@ -40,11 +40,9 @@
c-bak: false
devstack_localrc:
KURYR_FORCE_IMAGE_BUILD: true
KURYR_K8S_CONTAINERIZED_DEPLOYMENT: true
USE_PYTHON3: true
vars:
devstack_localrc:
KURYR_K8S_CONTAINERIZED_DEPLOYMENT: true
KURYR_K8S_API_URL: "http://${SERVICE_HOST}:${KURYR_K8S_API_PORT}"
KURYR_K8S_MULTI_WORKER_TESTS: True
devstack_services:
@ -54,7 +52,7 @@
- job:
name: kuryr-kubernetes-tempest-multinode-ha
parent: kuryr-kubernetes-tempest-multinode-containerized
parent: kuryr-kubernetes-tempest-multinode
description: |
Kuryr-Kubernetes tempest multinode job running containerized in HA
timeout: 7800


@ -13,7 +13,7 @@
# limitations under the License.
- job:
name: kuryr-kubernetes-tempest
name: kuryr-kubernetes-tempest-octavia-base
parent: kuryr-kubernetes-tempest-base
description: |
Kuryr-Kubernetes tempest job using octavia
@ -47,24 +47,31 @@
o-hk: true
o-hm: true
- job:
name: kuryr-kubernetes-tempest-systemd
parent: kuryr-kubernetes-tempest-octavia-base
description: |
Kuryr-Kubernetes tempest job using octavia and running kuryr as systemd
services
vars:
devstack_localrc:
KURYR_K8S_CONTAINERIZED_DEPLOYMENT: false
- job:
name: kuryr-kubernetes-tempest-centos-7
parent: kuryr-kubernetes-tempest
parent: kuryr-kubernetes-tempest-systemd
nodeset: openstack-centos-7-single-node
voting: false
- job:
name: kuryr-kubernetes-tempest-containerized
parent: kuryr-kubernetes-tempest
name: kuryr-kubernetes-tempest
parent: kuryr-kubernetes-tempest-octavia-base
description: |
Kuryr-Kubernetes tempest job running kuryr containerized
vars:
devstack_localrc:
KURYR_K8S_CONTAINERIZED_DEPLOYMENT: true
- job:
name: kuryr-kubernetes-tempest-containerized-ipv6
parent: kuryr-kubernetes-tempest-containerized
name: kuryr-kubernetes-tempest-ipv6
parent: kuryr-kubernetes-tempest
description: |
Kuryr-Kubernetes tempest job running kuryr containerized with IPv6 pod
and service networks
@ -74,8 +81,8 @@
voting: false
- job:
name: kuryr-kubernetes-tempest-containerized-dual-stack
parent: kuryr-kubernetes-tempest-containerized
name: kuryr-kubernetes-tempest-dual-stack
parent: kuryr-kubernetes-tempest
description: |
Kuryr-Kubernetes tempest job running kuryr containerized with dual stack
pod and service networks
@ -85,8 +92,8 @@
voting: false
- job:
name: kuryr-kubernetes-tempest-containerized-lower-constraints
parent: kuryr-kubernetes-tempest-containerized
name: kuryr-kubernetes-tempest-lower-constraints
parent: kuryr-kubernetes-tempest
description: |
Kuryr-Kubernetes tempest job running kuryr containerized with
requirements from lower-constraints.txt
@ -96,8 +103,8 @@
voting: false
- job:
name: kuryr-kubernetes-tempest-containerized-l2
parent: kuryr-kubernetes-tempest-containerized
name: kuryr-kubernetes-tempest-l2
parent: kuryr-kubernetes-tempest
description: |
Kuryr-Kubernetes tempest job using octavia in l2 mode, kuryr containerized
vars:
@ -105,10 +112,10 @@
KURYR_K8S_OCTAVIA_MEMBER_MODE: L2
- job:
name: kuryr-kubernetes-tempest-containerized-pools-namespace
name: kuryr-kubernetes-tempest-pools-namespace
description: |
Tempest with containers, port pools and namespace subnet driver
parent: kuryr-kubernetes-tempest-containerized
parent: kuryr-kubernetes-tempest
vars:
devstack_localrc:
KURYR_SUBNET_DRIVER: namespace
@ -120,10 +127,10 @@
KURYR_CONFIGMAP_MODIFIABLE: true
- job:
name: kuryr-kubernetes-tempest-containerized-network-policy
name: kuryr-kubernetes-tempest-network-policy
description: |
Tempest with Octavia, containers and network policy driver
parent: kuryr-kubernetes-tempest-containerized
parent: kuryr-kubernetes-tempest
vars:
devstack_localrc:
KURYR_ENABLED_HANDLERS: vif,endpoints,service,namespace,pod_label,policy,kuryrnetworkpolicy,kuryrnetwork,kuryrport,kuryrloadbalancer
@ -131,8 +138,8 @@
KURYR_SUBNET_DRIVER: namespace
- job:
name: kuryr-kubernetes-tempest-containerized-crio
parent: kuryr-kubernetes-tempest-containerized
name: kuryr-kubernetes-tempest-crio
parent: kuryr-kubernetes-tempest
vars:
devstack_localrc:
CONTAINER_ENGINE: crio


@ -16,30 +16,30 @@
name: kuryr-kubernetes-tempest-jobs
check:
jobs:
- kuryr-kubernetes-tempest-systemd
- kuryr-kubernetes-tempest
- kuryr-kubernetes-tempest-containerized
- kuryr-kubernetes-tempest-containerized-lower-constraints
- kuryr-kubernetes-tempest-containerized-ovn
- kuryr-kubernetes-tempest-containerized-network-policy
- kuryr-kubernetes-tempest-multinode-containerized
- kuryr-kubernetes-tempest-containerized-ipv6
- kuryr-kubernetes-tempest-containerized-ovn-ipv6
- kuryr-kubernetes-tempest-containerized-ovn-provider-ovn
- kuryr-kubernetes-e2e-np-containerized-ovn-provider-ovn
- kuryr-kubernetes-tempest-lower-constraints
- kuryr-kubernetes-tempest-ovn
- kuryr-kubernetes-tempest-network-policy
- kuryr-kubernetes-tempest-multinode
- kuryr-kubernetes-tempest-ipv6
- kuryr-kubernetes-tempest-ovn-ipv6
- kuryr-kubernetes-tempest-ovn-provider-ovn
- kuryr-kubernetes-e2e-np-ovn-provider-ovn
gate:
jobs:
- kuryr-kubernetes-tempest-systemd
- kuryr-kubernetes-tempest
- kuryr-kubernetes-tempest-containerized
- kuryr-kubernetes-tempest-containerized-ovn
- kuryr-kubernetes-tempest-containerized-network-policy
- kuryr-kubernetes-tempest-ovn
- kuryr-kubernetes-tempest-network-policy
experimental:
jobs:
- kuryr-kubernetes-tempest-containerized-l2
- kuryr-kubernetes-tempest-containerized-pools-namespace
- kuryr-kubernetes-tempest-ovn
- kuryr-kubernetes-tempest-l2
- kuryr-kubernetes-tempest-pools-namespace
- kuryr-kubernetes-tempest-ovn-systemd
- kuryr-kubernetes-tempest-multinode-ha
- kuryr-kubernetes-tempest-containerized-crio
- kuryr-kubernetes-tempest-containerized-dual-stack
- kuryr-kubernetes-tempest-crio
- kuryr-kubernetes-tempest-dual-stack
- project-template:
name: kuryr-kubernetes-lower-constraints-bionic-jobs


@ -59,17 +59,17 @@
'{{ devstack_log_dir }}/ovsdb-server-sb.log': 'logs'
- job:
name: kuryr-kubernetes-tempest-containerized-ovn
name: kuryr-kubernetes-tempest-ovn-systemd
parent: kuryr-kubernetes-tempest-ovn
description: |
Kuryr-Kubernetes tempest job using OVN and Containerized
vars:
devstack_localrc:
KURYR_K8S_CONTAINERIZED_DEPLOYMENT: true
KURYR_K8S_CONTAINERIZED_DEPLOYMENT: false
- job:
name: kuryr-kubernetes-tempest-containerized-ovn-ipv6
parent: kuryr-kubernetes-tempest-containerized-ovn
name: kuryr-kubernetes-tempest-ovn-ipv6
parent: kuryr-kubernetes-tempest-ovn
description: |
Kuryr-Kubernetes tempest job running kuryr containerized with OVN and
IPv6 pod and service networks
@ -78,8 +78,8 @@
KURYR_IPV6: true
- job:
name: kuryr-kubernetes-tempest-containerized-ovn-provider-ovn
parent: kuryr-kubernetes-tempest-containerized-ovn
name: kuryr-kubernetes-tempest-ovn-provider-ovn
parent: kuryr-kubernetes-tempest-ovn
description: |
Kuryr-Kubernetes tempest job using OVN, CNI daemon, Containerized, Octavia provider OVN, and Network Policy drivers
required-projects:
@ -109,7 +109,7 @@
KURYR_ENABLED_HANDLERS: vif,endpoints,service,namespace,pod_label,policy,kuryrnetworkpolicy,kuryrnetwork,kuryrport,kuryrloadbalancer
- job:
name: kuryr-kubernetes-e2e-np-containerized-ovn-provider-ovn
name: kuryr-kubernetes-e2e-np-ovn-provider-ovn
parent: kuryr-kubernetes-k8s-base
description: |
Kuryr-Kubernetes job with OVN and Octavia provider OVN running k8s network policy e2e tests
@ -152,7 +152,6 @@
KURYR_ENABLED_HANDLERS: vif,endpoints,service,namespace,pod_label,policy,kuryrnetworkpolicy,kuryrnetwork,kuryrport,kuryrloadbalancer
KURYR_SG_DRIVER: policy
KURYR_SUBNET_DRIVER: namespace
KURYR_K8S_CONTAINERIZED_DEPLOYMENT: true
devstack_services:
octavia: true
o-api: true


@ -1,186 +0,0 @@
[[local|localrc]]
enable_plugin kuryr-kubernetes \
https://opendev.org/openstack/kuryr-kubernetes
# If you do not want stacking to clone new versions of the enabled services,
# like for example when you did local modifications and need to ./unstack.sh
# and ./stack.sh again, uncomment the following
# RECLONE="no"
# Log settings for better readability
LOGFILE=devstack.log
LOG_COLOR=False
# Credentials
ADMIN_PASSWORD=pass
DATABASE_PASSWORD=pass
RABBIT_PASSWORD=pass
SERVICE_PASSWORD=pass
SERVICE_TOKEN=pass
# Enable Keystone v3
IDENTITY_API_VERSION=3
# In pro of speed and being lightweight, we will be explicit in regards to
# which services we enable
ENABLED_SERVICES=""
# Neutron services
enable_plugin neutron https://opendev.org/openstack/neutron
enable_service q-dhcp
enable_service q-api
enable_service q-meta
enable_service q-svc
# LBaaSv2 service and Haproxy agent
enable_plugin neutron-lbaas \
https://opendev.org/openstack/neutron-lbaas
enable_service q-lbaasv2
# Currently there is problem with the ODL LBaaS driver integration, so we
# default to the default neutron one
#NEUTRON_LBAAS_SERVICE_PROVIDERV2="LOADBALANCERV2:opendaylight:networking_odl.lbaas.driver_v2.OpenDaylightLbaasDriverV2:default"
NEUTRON_LBAAS_SERVICE_PROVIDERV2="LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default"
ODL_MODE=allinone
ODL_RELEASE=carbon-snapshot-0.6
Q_USE_PUBLIC_VETH=False
PUBLIC_BRIDGE=br-ex
PUBLIC_PHYSICAL_NETWORK=public
ODL_PROVIDER_MAPPINGS=public:br-ex
ODL_L3=True
ODL_NETVIRT_KARAF_FEATURE=odl-neutron-service,odl-restconf-all,odl-aaa-authn,odl-dlux-core,odl-mdsal-apidocs,odl-netvirt-openstack,odl-neutron-logger,odl-neutron-hostconfig-ovs
ODL_PORT_BINDING_CONTROLLER=pseudo-agentdb-binding
ODL_TIMEOUT=60
ODL_V2DRIVER=True
ODL_NETVIRT_DEBUG_LOGS=True
EBTABLES_RACE_FIX=True
enable_plugin networking-odl http://opendev.org/openstack/networking-odl
# Keystone
enable_service key
# dependencies
enable_service mysql
enable_service rabbit
# By default use all the services from the kuryr-kubernetes plugin
# Docker
# ======
# If you already have docker configured, running and with its socket writable
# by the stack user, you can omit the following line.
enable_plugin devstack-plugin-container https://opendev.org/openstack/devstack-plugin-container
# Etcd
# ====
# If you already have etcd configured and running, you can just comment out
enable_service etcd3
# then uncomment and set the following line:
# KURYR_ETCD_CLIENT_URL="http://etcd_ip:etcd_client_port"
# Kubernetes
# ==========
#
# Kubernetes is run from the hyperkube docker image
# If you already have a Kubernetes deployment, you can use it instead and omit
# enabling the Kubernetes service (except Kubelet, which must be run by
# devstack so that it uses our development CNI driver.
#
# The default is, again, for devstack to run the Kubernetes services:
enable_service kubernetes-api
enable_service kubernetes-controller-manager
enable_service kubernetes-scheduler
# We use hyperkube to run the services. You can select the hyperkube image and/
# or version by uncommenting and setting the following ENV vars different
# to the following defaults:
# KURYR_HYPERKUBE_IMAGE="gcr.io/google_containers/hyperkube-amd64"
# KURYR_HYPERKUBE_VERSION="v1.3.7"
#
# If you have the 8080 port already bound to another service, you will need to
# have kubernetes API server bind to another port. In order to do that,
# uncomment and set a different port number in:
# KURYR_K8S_API_PORT="8080"
#
# If you want to test with a different range for the Cluster IPs uncomment and
# set the following ENV var to a different CIDR
# KURYR_K8S_CLUSTER_IP_RANGE="10.0.0.0/24"
#
# If, however, you are reusing an existing deployment, you should uncomment and
# set an ENV var so that the Kubelet devstack runs can find the API server:
# KURYR_K8S_API_URL="http://k8s_api_ip:k8s_api_port"
#
# Kubelet
# =======
#
# Kubelet should almost invariably be run by devstack
enable_service kubelet
# You can specify a different location for the hyperkube binary that will be
# extracted from the hyperkube container into the Host filesystem:
# KURYR_HYPERKUBE_BINARY=/usr/local/bin/hyperkube
#
# NOTE: KURYR_HYPERKUBE_IMAGE, KURYR_HYPERKUBE_VERSION also affect which
# the selected binary for the Kubelet.
# Kuryr watcher
# =============
#
# Just like the Kubelet, you'll want to have the watcher enabled. It is the
# part of the codebase that connects to the Kubernetes API server to read the
# resource events and convert them to Neutron actions
enable_service kuryr-kubernetes
# Kuryr Daemon
# ============
#
# Kuryr runs CNI plugin in daemonized way - i.e. kubelet will run kuryr CNI
# driver and the driver will pass requests to Kuryr daemon running on the node,
# instead of processing them on its own. This limits the number of Kubernetes
# API requests (as only Kuryr Daemon will watch for new pod events) and should
# increase scalability in environments that often delete and create pods.
# Since Rocky release this is a default deployment configuration.
enable_service kuryr-daemon
# Kuryr POD VIF Driver
# ====================
#
# Set up the VIF Driver to be used. The default one is the neutron-vif, but if
# a nested deployment is desired, the corresponding driver need to be set,
# e.g.: nested-vlan or nested-macvlan
# KURYR_POD_VIF_DRIVER=neutron-vif
# Kuryr Enabled Handlers
# ======================
#
# By default, some Kuryr Handlers are set for DevStack installation. This can be
# further tweaked in order to enable additional ones such as Network Policy. If
# you want to add additional handlers those can be set here:
# KURYR_ENABLED_HANDLERS = vif,endpoints,service,kuryrloadbalancer,kuryrport
# Kuryr Ports Pools
# =================
#
# To speed up containers boot time the kuryr ports pool driver can be enabled
# by uncommenting the next line, so that neutron port resources are precreated
# and ready to be used by the pods when needed
# KURYR_USE_PORTS_POOLS=True
#
# By default the pool driver is noop, i.e., there is no pool. If pool
# optimizations want to be used you need to set it to 'neutron' for the
# baremetal case, or to 'nested' for the nested case
# KURYR_VIF_POOL_DRIVER=noop
#
# There are extra configuration options for the pools that can be set to decide
# on the minimum number of ports that should be ready to use at each pool, the
# maximum (0 to unset), and the batch size for the repopulation actions, i.e.,
# the number of neutron ports to create in bulk operations. Finally, the update
# frequency between actions over the pool can be set too
# KURYR_VIF_POOL_MIN=2
# KURYR_VIF_POOL_MAX=0
# KURYR_VIF_POOL_BATCH=5
# KURYR_VIF_POOL_UPDATE_FREQ=30


@ -43,14 +43,13 @@ enable_service q-svc
# VAR RUN PATH
# =============
# VAR_RUN_PATH=/usr/local/var/run
VAR_RUN_PATH=/var/run
# VAR_RUN_PATH=/var/run
# OCTAVIA
# =======
# Uncomment it to use L2 communication between loadbalancer and member pods
# KURYR_K8S_OCTAVIA_MEMBER_MODE=L2
# Kuryr K8S-Endpoint driver Octavia provider
# ==========================================
# Kuryr uses LBaaS to provide the Kubernetes services
@ -69,7 +68,6 @@ VAR_RUN_PATH=/var/run
# KURYR_TIMEOUT_CLIENT_DATA=50000
# KURYR_TIMEOUT_MEMBER_DATA=50000
# Octavia LBaaSv2
LIBS_FROM_GIT+=python-octaviaclient
enable_plugin octavia https://opendev.org/openstack/octavia
@ -79,7 +77,7 @@ enable_service o-cw
enable_service o-hm
enable_service o-hk
enable_service o-da
## Octavia Deps
### Nova
enable_service n-api
enable_service n-api-meta
@ -88,6 +86,7 @@ enable_service n-cond
enable_service n-sch
enable_service placement-api
enable_service placement-client
### Glance
enable_service g-api
enable_service g-reg
@ -122,52 +121,34 @@ enable_service etcd3
# Kubernetes
# ==========
#
# Kubernetes is run from the hyperkube docker image
# Kubernetes is installed by kubeadm (which is installed from proper
# repository).
# If you already have a Kubernetes deployment, you can use it instead and omit
# enabling the Kubernetes service (except Kubelet, which must be run by
# devstack so that it uses our development CNI driver.
# enabling the Kubernetes service.
# TODO(gryf): review the part with existing cluster for kubelet
# configuration instead of running it via devstack - it needs to be
# configured to use our CNI.
#
# The default is, again, for devstack to run the Kubernetes services:
enable_service kubernetes-api
enable_service kubernetes-controller-manager
enable_service kubernetes-scheduler
enable_service kubernetes-master
# We use hyperkube to run the services. You can select the hyperkube image and/
# or version by uncommenting and setting the following ENV vars different
# to the following defaults:
# KURYR_HYPERKUBE_IMAGE="gcr.io/google_containers/hyperkube-amd64"
# KURYR_HYPERKUBE_VERSION="v1.6.2"
#
# If you have the 8080 port already bound to another service, you will need to
# If you have the 6443 port already bound to another service, you will need to
# have kubernetes API server bind to another port. In order to do that,
# uncomment and set a different port number in:
# KURYR_K8S_API_PORT="8080"
#
# If you want to test with a different range for the Cluster IPs uncomment and
# set the following ENV var to a different CIDR
# KURYR_K8S_CLUSTER_IP_RANGE="10.0.0.0/24"
# KURYR_K8S_API_PORT="6443"
#
# If, however, you are reusing an existing deployment, you should uncomment and
# set an ENV var so that the Kubelet devstack runs can find the API server:
#
# TODO(gryf): revisit this scenario. Do we even support this in devstack?
#
# KURYR_K8S_API_URL="http (or https, if K8S is SSL/TLS enabled)://k8s_api_ip:k8s_api_port"
#
# If kubernetes API server is 'https' enabled, set path of the ssl cert files
# KURYR_K8S_API_CERT="/etc/kubernetes/certs/kubecfg.crt"
# KURYR_K8S_API_KEY="/etc/kubernetes/certs/kubecfg.key"
# KURYR_K8S_API_CACERT="/etc/kubernetes/certs/ca.crt"
# Kubelet
# =======
#
# Kubelet should almost invariably be run by devstack
enable_service kubelet
# You can specify a different location for the hyperkube binary that will be
# extracted from the hyperkube container into the Host filesystem:
# KURYR_HYPERKUBE_BINARY=/usr/local/bin/hyperkube
#
# NOTE: KURYR_HYPERKUBE_IMAGE, KURYR_HYPERKUBE_VERSION also affect which
# the selected binary for the Kubelet.
enable_service kubernetes-master
# Kuryr watcher
# =============
@ -177,7 +158,6 @@ enable_service kubelet
# resource events and convert them to Neutron actions
enable_service kuryr-kubernetes
# Kuryr Daemon
# ============
#
@ -189,14 +169,14 @@ enable_service kuryr-kubernetes
# Since Rocky release this is a default deployment configuration.
enable_service kuryr-daemon
# Containerized Kuryr
# ===================
#
# Kuryr can be installed on Kubernetes as a pair of Deployment
# (kuryr-controller) and DaemonSet (kuryr-cni). If you want DevStack to deploy
# Kuryr services as pods on Kubernetes uncomment next line.
KURYR_K8S_CONTAINERIZED_DEPLOYMENT=True
# (kuryr-controller) and DaemonSet (kuryr-cni) or as systemd services. If you
# want DevStack to deploy Kuryr services as pods on Kubernetes, comment (or
# remove) next line.
KURYR_K8S_CONTAINERIZED_DEPLOYMENT=False
# Kuryr POD VIF Driver
# ====================
@ -213,6 +193,7 @@ KURYR_K8S_CONTAINERIZED_DEPLOYMENT=True
# further tweaked in order to enable additional ones such as Network Policy. If
# you want to add additional handlers those can be set here:
# KURYR_ENABLED_HANDLERS = vif,endpoints,service,kuryrloadbalancer,kuryrport
# Kuryr Ports Pools
# =================
#


@ -33,10 +33,7 @@ KURYR_NEUTRON_DEFAULT_ROUTER=router1
enable_plugin devstack-plugin-container https://opendev.org/openstack/devstack-plugin-container
enable_service etcd3
enable_service kubernetes-api
enable_service kubernetes-controller-manager
enable_service kubernetes-scheduler
enable_service kubelet
enable_service kubernetes-master
enable_service kuryr-kubernetes
enable_service kuryr-daemon


@ -1,83 +0,0 @@
[[local|localrc]]
# If you do not want stacking to clone new versions of the enabled services,
# like for example when you did local modifications and need to ./unstack.sh
# and ./stack.sh again, uncomment the following
# RECLONE="no"
# Log settings for better readability
LOGFILE=devstack.log
LOG_COLOR=False
# Credentials
ADMIN_PASSWORD=pass
DATABASE_PASSWORD=pass
RABBIT_PASSWORD=pass
SERVICE_PASSWORD=pass
SERVICE_TOKEN=pass
# Enable Keystone v3
IDENTITY_API_VERSION=3
# In pro of speed and being lightweight, we will be explicit in regards to
# which services we enable
ENABLED_SERVICES=""
# Neutron services
enable_plugin neutron https://opendev.org/openstack/neutron
enable_service q-dhcp
enable_service q-svc
enable_service q-meta
### Nova
enable_service n-api
enable_service n-api-meta
enable_service n-cpu
enable_service n-cond
enable_service n-sch
enable_service placement-api
enable_service placement-client
### Glance
enable_service g-api
enable_service g-reg
### Neutron-lbaas
# LBaaSv2 service and Haproxy agent
enable_plugin neutron-lbaas \
https://opendev.org/openstack/neutron-lbaas
enable_service q-lbaasv2
# Currently there is problem with the ODL LBaaS driver integration, so we
# default to the default neutron one
#NEUTRON_LBAAS_SERVICE_PROVIDERV2="LOADBALANCERV2:opendaylight:networking_odl.lbaas.driver_v2.OpenDaylightLbaasDriverV2:default"
NEUTRON_LBAAS_SERVICE_PROVIDERV2="LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default"
# Keystone
enable_service key
# dependencies
enable_service mysql
enable_service rabbit
ODL_MODE=allinone
ODL_RELEASE=carbon-snapshot-0.6
Q_USE_PUBLIC_VETH=False
PUBLIC_BRIDGE=br-ex
PUBLIC_PHYSICAL_NETWORK=public
ODL_PROVIDER_MAPPINGS=public:br-ex
ODL_L3=True
ODL_NETVIRT_KARAF_FEATURE=odl-neutron-service,odl-restconf-all,odl-aaa-authn,odl-dlux-core,odl-mdsal-apidocs,odl-netvirt-openstack,odl-neutron-logger,odl-neutron-hostconfig-ovs
ODL_PORT_BINDING_CONTROLLER=pseudo-agentdb-binding
ODL_TIMEOUT=60
ODL_V2DRIVER=True
ODL_NETVIRT_DEBUG_LOGS=True
Q_SERVICE_PLUGIN_CLASSES=trunk
EBTABLES_RACE_FIX=True
enable_plugin networking-odl http://opendev.org/openstack/networking-odl


@ -58,6 +58,7 @@ enable_service o-api
enable_service o-cw
enable_service o-hm
enable_service o-hk
## Octavia Deps
# In order to skip building the Octavia Amphora image you can fetch a
# precreated qcow image from here [1] and set up octavia to use it by
@ -66,6 +67,7 @@ enable_service o-hk
# OCTAVIA_AMP_IMAGE_FILE=/tmp/test-only-amphora-x64-haproxy-ubuntu-xenial.qcow2
# OCTAVIA_AMP_IMAGE_SIZE=3
# OCTAVIA_AMP_IMAGE_NAME=test-only-amphora-x64-haproxy-ubuntu-xenial
### Nova
enable_service n-api
enable_service n-api-meta
@ -74,11 +76,11 @@ enable_service n-cond
enable_service n-sch
enable_service placement-api
enable_service placement-client
### Glance
enable_service g-api
enable_service g-reg
# Keystone
enable_service key
@ -106,52 +108,34 @@ enable_service etcd3
# Kubernetes
# ==========
#
# Kubernetes is run from the hyperkube docker image
# Kubernetes is installed by kubeadm (which is installed from proper
# repository).
# If you already have a Kubernetes deployment, you can use it instead and omit
# enabling the Kubernetes service (except Kubelet, which must be run by
# devstack so that it uses our development CNI driver.
# enabling the Kubernetes service.
# TODO(gryf): review the part with existing cluster for kubelet
# configuration instead of running it via devstack - it needs to be
# configured to use our CNI.
#
# The default is, again, for devstack to run the Kubernetes services:
enable_service kubernetes-api
enable_service kubernetes-controller-manager
enable_service kubernetes-scheduler
enable_service kubernetes-master
# We use hyperkube to run the services. You can select the hyperkube image and/
# or version by uncommenting and setting the following ENV vars different
# to the following defaults:
# KURYR_HYPERKUBE_IMAGE="gcr.io/google_containers/hyperkube-amd64"
# KURYR_HYPERKUBE_VERSION="v1.6.2"
#
# If you have the 8080 port already bound to another service, you will need to
# If you have the 6443 port already bound to another service, you will need to
# have kubernetes API server bind to another port. In order to do that,
# uncomment and set a different port number in:
# KURYR_K8S_API_PORT="8080"
#
# If you want to test with a different range for the Cluster IPs uncomment and
# set the following ENV var to a different CIDR
# KURYR_K8S_CLUSTER_IP_RANGE="10.0.0.0/24"
# KURYR_K8S_API_PORT="6443"
#
# If, however, you are reusing an existing deployment, you should uncomment and
# set an ENV var so that the Kubelet devstack runs can find the API server:
#
# TODO(gryf): revisit this scenario. Do we even support this in devstack?
#
# KURYR_K8S_API_URL="http (or https, if K8S is SSL/TLS enabled)://k8s_api_ip:k8s_api_port"
#
# If kubernetes API server is 'https' enabled, set path of the ssl cert files
# KURYR_K8S_API_CERT="/etc/kubernetes/certs/kubecfg.crt"
# KURYR_K8S_API_KEY="/etc/kubernetes/certs/kubecfg.key"
# KURYR_K8S_API_CACERT="/etc/kubernetes/certs/ca.crt"
# Kubelet
# =======
#
# Kubelet should almost invariably be run by devstack
enable_service kubelet
# You can specify a different location for the hyperkube binary that will be
# extracted from the hyperkube container into the Host filesystem:
# KURYR_HYPERKUBE_BINARY=/usr/local/bin/hyperkube
#
# NOTE: KURYR_HYPERKUBE_IMAGE, KURYR_HYPERKUBE_VERSION also affect which
# the selected binary for the Kubelet.
enable_service kubernetes-master
# Kuryr watcher
# =============
@ -161,11 +145,10 @@ enable_service kubelet
# resource events and convert them to Neutron actions
enable_service kuryr-kubernetes
# Kuryr Daemon
# ============
#
# Kuryr runs CNI plugin in daemonized way - i.e. kubelet will run kuryr CNI
# Kuryr can run CNI plugin in daemonized way - i.e. kubelet will run kuryr CNI
# driver and the driver will pass requests to Kuryr daemon running on the node,
# instead of processing them on its own. This limits the number of Kubernetes
# API requests (as only Kuryr Daemon will watch for new pod events) and should
@ -177,9 +160,10 @@ enable_service kuryr-daemon
# ===================
#
# Kuryr can be installed on Kubernetes as a pair of Deployment
# (kuryr-controller) and DaemonSet (kuryr-cni). If you want DevStack to deploy
# Kuryr services as pods on Kubernetes uncomment next line.
# KURYR_K8S_CONTAINERIZED_DEPLOYMENT=True
# (kuryr-controller) and DaemonSet (kuryr-cni) or as systemd services. If you
# want DevStack to deploy Kuryr services as pods on Kubernetes, comment (or
# remove) next line.
KURYR_K8S_CONTAINERIZED_DEPLOYMENT=False
# Kuryr POD VIF Driver
# ====================


@ -63,8 +63,8 @@ enable_plugin devstack-plugin-container https://opendev.org/openstack/devstack-p
# Kubelet
# =======
#
# Kubelet should almost invariably be run by devstack
enable_service kubelet
# Kubelet will be run via kubeadm
enable_service kubernetes-worker
# You can specify a different location for the hyperkube binary that will be
# extracted from the hyperkube container into the Host filesystem:


@ -52,7 +52,7 @@ KURYR_TIMEOUT_MEMBER_DATA=${KURYR_TIMEOUT_MEMBER_DATA:-0}
KURYR_CONFIGURE_BAREMETAL_KUBELET_IFACE=${KURYR_CONFIGURE_BAREMETAL_KUBELET_IFACE:-True}
# Kubernetes containerized deployment
KURYR_K8S_CONTAINERIZED_DEPLOYMENT=${KURYR_K8S_CONTAINERIZED_DEPLOYMENT:-False}
KURYR_K8S_CONTAINERIZED_DEPLOYMENT=${KURYR_K8S_CONTAINERIZED_DEPLOYMENT:-True}
# Kuryr Endpoint LBaaS OCTAVIA provider
KURYR_EP_DRIVER_OCTAVIA_PROVIDER=${KURYR_EP_DRIVER_OCTAVIA_PROVIDER:-default}
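The `${VAR:-default}` lines above use standard bash default-value
expansion, so an operator-supplied value always wins over the new
default. A minimal sketch of that behaviour (variable name taken from
the diff, runnable outside devstack):

```shell
#!/usr/bin/env bash
# ${VAR:-default} expands to $VAR when it is set and non-empty,
# otherwise to the literal default.
unset KURYR_K8S_CONTAINERIZED_DEPLOYMENT
echo "${KURYR_K8S_CONTAINERIZED_DEPLOYMENT:-True}"    # prints: True

KURYR_K8S_CONTAINERIZED_DEPLOYMENT=False
echo "${KURYR_K8S_CONTAINERIZED_DEPLOYMENT:-True}"    # prints: False
```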