Remove dragonflow

Dragonflow was removed from governance in 2018 and is now being retired.
This cleans up references to dragonflow jobs and configuration.

http://lists.openstack.org/pipermail/openstack-discuss/2020-June/015600.html

Change-Id: Ie990da4e68e82d998768fa0c047cca4cccd59915
Signed-off-by: Sean McGinnis <sean.mcginnis@gmail.com>
Sean McGinnis 2020-06-23 10:26:50 -05:00
parent 258c708b85
commit cded615f86
5 changed files with 0 additions and 507 deletions


@@ -99,32 +99,3 @@
        KURYR_SUBNET_DRIVER: namespace
        KURYR_SG_DRIVER: policy
        KURYR_ENABLED_HANDLERS: vif,lb,lbaasspec,namespace,pod_label,policy,kuryrnetpolicy,kuryrnetwork
- job:
    name: kuryr-kubernetes-tempest-dragonflow
    parent: kuryr-kubernetes-tempest
    description: |
      Kuryr-Kubernetes tempest job using Dragonflow
    required-projects:
      - openstack/dragonflow
    vars:
      devstack_localrc:
        Q_ENABLE_DRAGONFLOW_LOCAL_CONTROLLER: true
        DF_RUNNING_IN_GATE: true
        TUNNEL_TYPE: vxlan
        DF_L2_RESPONDER: true
        OVS_INSTALL_FROM_GIT: false
        OVS_BRANCH: master
      devstack_services:
        q-agt: false
        q-dhcp: false
        q-l3: false
        q-trunk: true
        df-redis: true
        df-redis-server: true
        df-controller: true
        df-ext-services: true
        df-l3-agent: true
      devstack_plugins:
        dragonflow: https://github.com/openstack/dragonflow
    voting: false


@@ -1,210 +0,0 @@
[[local|localrc]]
enable_plugin kuryr-kubernetes \
https://opendev.org/openstack/kuryr-kubernetes
enable_plugin dragonflow https://opendev.org/openstack/dragonflow
# If you do not want stacking to clone new versions of the enabled services,
# like for example when you did local modifications and need to ./unstack.sh
# and ./stack.sh again, uncomment the following
# RECLONE="no"
# Log settings for better readability
LOGFILE=devstack.log
LOG_COLOR=False
# Credentials
ADMIN_PASSWORD=pass
DATABASE_PASSWORD=pass
RABBIT_PASSWORD=pass
SERVICE_PASSWORD=pass
SERVICE_TOKEN=pass
# Enable Keystone v3
IDENTITY_API_VERSION=3
# In the interest of speed and keeping things lightweight, we will be explicit
# about which services we enable
ENABLED_SERVICES=""
# DF services
enable_service df-redis
enable_service df-redis-server
enable_service df-controller
# Neutron services
enable_service neutron
enable_service q-svc
# Keystone
enable_service key
# Dependencies
enable_service mysql
enable_service rabbit
# enable DF local controller
Q_ENABLE_DRAGONFLOW_LOCAL_CONTROLLER=True
# DF settings
DF_RUNNING_IN_GATE=True
TUNNEL_TYPE=vxlan
DF_SELECTIVE_TOPO_DIST=False
# OCTAVIA
# Uncomment it to use L2 communication between loadbalancer and member pods
# KURYR_K8S_OCTAVIA_MEMBER_MODE=L2
# Octavia LBaaSv2
LIBS_FROM_GIT+=python-octaviaclient
enable_plugin octavia https://opendev.org/openstack/octavia
enable_service octavia
enable_service o-api
enable_service o-cw
enable_service o-hm
enable_service o-hk
## Octavia Deps
### Image
### Barbican
enable_plugin barbican https://opendev.org/openstack/barbican
### Nova
enable_service n-api
enable_service n-api-meta
enable_service n-cpu
enable_service n-cond
enable_service n-sch
enable_service placement-api
enable_service placement-client
### Glance
enable_service g-api
enable_service g-reg
# By default use all the services from the kuryr-kubernetes plugin
# Docker
# ======
# If you already have docker configured, running and with its socket writable
# by the stack user, you can omit the following line.
enable_plugin devstack-plugin-container https://opendev.org/openstack/devstack-plugin-container
# Etcd
# ====
# The default is for devstack to run etcd for you.
enable_service etcd3
# If you already have an etcd cluster configured and running, you can just
# comment out the lines enabling legacy_etcd and etcd3
# then uncomment and set the following line:
# KURYR_ETCD_CLIENT_URL="http://etcd_ip:etcd_client_port"
# Kubernetes
# ==========
#
# Kubernetes is run from the hyperkube docker image
# If you already have a Kubernetes deployment, you can use it instead and omit
# enabling the Kubernetes service (except Kubelet, which must be run by
# devstack so that it uses our development CNI driver).
#
# The default is, again, for devstack to run the Kubernetes services:
enable_service kubernetes-api
enable_service kubernetes-controller-manager
enable_service kubernetes-scheduler
# We use hyperkube to run the services. You can select the hyperkube image
# and/or version by uncommenting the following ENV vars and setting them to
# values different from these defaults:
# KURYR_HYPERKUBE_IMAGE="gcr.io/google_containers/hyperkube-amd64"
# KURYR_HYPERKUBE_VERSION="v1.6.2"
#
# If you have the 8080 port already bound to another service, you will need to
# have kubernetes API server bind to another port. In order to do that,
# uncomment and set a different port number in:
# KURYR_K8S_API_PORT="8080"
#
# If you want to test with a different range for the Cluster IPs uncomment and
# set the following ENV var to a different CIDR
# KURYR_K8S_CLUSTER_IP_RANGE="10.0.0.0/24"
#
# If, however, you are reusing an existing deployment, you should uncomment and
# set an ENV var so that the Kubelet run by devstack can find the API server:
# KURYR_K8S_API_URL="http://k8s_api_ip:k8s_api_port"
#
# Kubelet
# =======
#
# Kubelet should almost invariably be run by devstack
enable_service kubelet
# You can specify a different location for the hyperkube binary that will be
# extracted from the hyperkube container into the Host filesystem:
# KURYR_HYPERKUBE_BINARY=/usr/local/bin/hyperkube
#
# NOTE: KURYR_HYPERKUBE_IMAGE and KURYR_HYPERKUBE_VERSION also affect which
# binary is selected for the Kubelet.
# Kuryr watcher
# =============
#
# Just like the Kubelet, you'll want to have the watcher enabled. It is the
# part of the codebase that connects to the Kubernetes API server to read the
# resource events and convert them to Neutron actions.
enable_service kuryr-kubernetes
# Kuryr Daemon
# ============
#
# Kuryr runs CNI plugin in daemonized way - i.e. kubelet will run kuryr CNI
# driver and the driver will pass requests to Kuryr daemon running on the node,
# instead of processing them on its own. This limits the number of Kubernetes
# API requests (as only Kuryr Daemon will watch for new pod events) and should
# increase scalability in environments that often delete and create pods.
# Since Rocky release this is a default deployment configuration.
enable_service kuryr-daemon
# Kuryr POD VIF Driver
# ====================
#
# Set up the VIF Driver to be used. The default one is neutron-vif, but if a
# nested deployment is desired, the corresponding driver needs to be set,
# e.g. nested-vlan or nested-macvlan
# KURYR_POD_VIF_DRIVER=neutron-vif
# Kuryr Enabled Handlers
# ======================
#
# By default, a basic set of Kuryr handlers is enabled for the DevStack
# installation. This can be tweaked further to enable additional ones, such as
# Network Policy. If you want to add handlers, they can be set here:
# KURYR_ENABLED_HANDLERS = vif,lb,lbaasspec
# Kuryr Ports Pools
# =================
#
# To speed up container boot time, the kuryr ports pool driver can be enabled
# by uncommenting the next line, so that Neutron port resources are precreated
# and ready to be used by the pods when needed
# KURYR_USE_PORTS_POOLS=True
#
# By default the pool driver is noop, i.e., there is no pool. To use the pool
# optimizations, set it to 'neutron' for the baremetal case, or to 'nested' for
# the nested case
# KURYR_VIF_POOL_DRIVER=noop
#
# Extra configuration options for the pools control the minimum number of
# ports that should be ready to use in each pool, the maximum (0 to unset), the
# batch size for the repopulation actions (i.e., the number of Neutron ports to
# create in bulk operations), and the update frequency between actions over the
# pool
# KURYR_VIF_POOL_MIN=2
# KURYR_VIF_POOL_MAX=0
# KURYR_VIF_POOL_BATCH=5
# KURYR_VIF_POOL_UPDATE_FREQ=30
# Increase Octavia amphorae timeout so that the first LB amphora has time to
# build and boot
IMAGE_URLS+=",http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img"
[[post-config|$OCTAVIA_CONF]]
[controller_worker]
amp_active_retries=9999


@@ -1,77 +0,0 @@
[[local|localrc]]
Q_ENABLE_DRAGONFLOW_LOCAL_CONTROLLER=True
DATABASE_PASSWORD=pass
RABBIT_PASSWORD=pass
SERVICE_PASSWORD=pass
SERVICE_TOKEN=pass
ADMIN_PASSWORD=pass
MULTI_HOST=1
# Dragonflow plugin and services
enable_plugin dragonflow https://opendev.org/openstack/dragonflow
enable_service df-controller
enable_service df-redis
enable_service df-redis-server
enable_service df-metadata
enable_service q-trunk
# Neutron services
disable_service n-net
enable_service q-svc
enable_service q-qos
disable_service q-l3
disable_service df-l3-agent
# We have to disable the neutron L2 agent. DF does not use the L2 agent.
disable_service q-agt
# We have to disable the neutron dhcp agent. DF does not use the dhcp agent.
disable_service q-dhcp
# Octavia LBaaSv2
LIBS_FROM_GIT+=python-octaviaclient
enable_plugin octavia https://opendev.org/openstack/octavia
enable_service octavia
enable_service o-api
enable_service o-cw
enable_service o-hm
enable_service o-hk
## Octavia Deps
# In order to skip building the Octavia Amphora image you can fetch a
# precreated qcow image from here [1] and set up octavia to use it by
# uncommenting the following lines.
# [1] https://tarballs.openstack.org/octavia/test-images/test-only-amphora-x64-haproxy-ubuntu-xenial.qcow2
# OCTAVIA_AMP_IMAGE_FILE=/tmp/test-only-amphora-x64-haproxy-ubuntu-xenial.qcow2
# OCTAVIA_AMP_IMAGE_SIZE=3
# OCTAVIA_AMP_IMAGE_NAME=test-only-amphora-x64-haproxy-ubuntu-xenial
### Image
### Barbican
enable_plugin barbican https://opendev.org/openstack/barbican
### Nova
enable_service n-api
enable_service n-api-meta
enable_service n-cpu
enable_service n-cond
enable_service n-sch
enable_service placement-api
enable_service placement-client
### Glance
enable_service g-api
enable_service g-reg
IMAGE_URLS+=",http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img"
# Enable heat services if you want to deploy overcloud using Heat stack
enable_plugin heat https://opendev.org/openstack/heat
enable_service h-eng h-api h-api-cfn h-api-cw
disable_service tempest
DF_REDIS_PUBSUB=True
Q_USE_PROVIDERNET_FOR_PUBLIC=True
Q_FLOATING_ALLOCATION_POOL=start=172.24.4.10,end=172.24.4.200
PUBLIC_NETWORK_NAME=public
PUBLIC_NETWORK_GATEWAY=172.24.4.1


@@ -1,190 +0,0 @@
=======================================
Kuryr Kubernetes Dragonflow Integration
=======================================
Dragonflow is a distributed, modular and extendable SDN controller that makes
it possible to connect cloud network instances (VMs, containers and bare metal
servers) at scale.
Dragonflow adopts a distributed approach to mitigate the scaling issues of
large deployments. With Dragonflow, the load is distributed to the compute
nodes running the local controller. Dragonflow manages the network services for
the OpenStack compute nodes by distributing network topology and policies to
the compute nodes, where they are translated into OpenFlow rules and programmed
into the Open vSwitch pipeline. Network services are implemented as
applications in the local controller. OpenStack can use Dragonflow as its
network provider through the Modular Layer 2 (ML2) plugin.
Integrating with Dragonflow allows Kuryr to bridge container and VM networking
in an OpenStack deployment. Kuryr acts as the container networking interface
for Dragonflow.
Testing with DevStack
---------------------
The following points describe how to test OpenStack with Dragonflow using
DevStack. We start by describing how to test the baremetal case on a single
host, and then cover a nested environment where containers are created inside VMs.
Single Node Test Environment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#. Create a test system.
It's best to use a throwaway dev system for running DevStack. Use either
Fedora 25 or the latest Ubuntu LTS (16.04, Xenial).
#. Create the ``stack`` user.
.. code-block:: console
$ git clone https://opendev.org/openstack-dev/devstack.git
$ sudo ./devstack/tools/create-stack-user.sh
#. Switch to the ``stack`` user and clone DevStack and kuryr-kubernetes.
.. code-block:: console
$ sudo su - stack
$ git clone https://opendev.org/openstack-dev/devstack.git
$ git clone https://opendev.org/openstack/kuryr-kubernetes.git
#. Configure DevStack to use Dragonflow.
kuryr-kubernetes comes with a sample DevStack configuration file for
Dragonflow you can start with. You may change some values for the various
variables in that file, like password settings or what LBaaS service
provider to use. Feel free to edit it if you'd like, but it should work
as-is.
.. code-block:: console
$ cd devstack
$ cp ../kuryr-kubernetes/devstack/local.conf.df.sample local.conf
Optionally, the ports pool functionality can be enabled by following:
`How to enable ports pool with devstack`_.
#. Run DevStack.
Expect it to take a while. It installs required packages, clones a bunch of
git repos, and installs everything from these git repos.
.. code-block:: console
$ ./stack.sh
Once DevStack completes successfully, you should see output that looks
something like this:
.. code-block:: console
This is your host IP address: 192.168.5.10
This is your host IPv6 address: ::1
Keystone is serving at http://192.168.5.10/identity/
The default users are: admin and demo
The password: pass
#. Extra configurations.
Create a NAT rule that rewrites "external" traffic from your instances to
your network controller's IP address and sends it out on the network:
.. code-block:: console
$ sudo iptables -t nat -I POSTROUTING 1 -s 172.24.4.1/24 -j MASQUERADE
Inspect default Configuration
+++++++++++++++++++++++++++++
To inspect the default configuration, in terms of the networks, subnets,
security groups and load balancers created upon a successful devstack stacking,
see `Inspect default Configuration`_.
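The linked guide is the full reference; as a rough sketch, and assuming the
admin credentials provided by DevStack's ``openrc``, the kind of commands it
walks through looks like this:

.. code-block:: console

   $ source ~/devstack/openrc admin admin    # load admin credentials
   $ openstack network list                  # networks created for pods and services
   $ openstack subnet list
   $ openstack security group list
   $ openstack loadbalancer list             # requires the Octavia client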
Testing Network Connectivity
++++++++++++++++++++++++++++
Once the environment is ready, we can test that network connectivity works
among pods. To do that, check out `Testing Network Connectivity`_.
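That document has the complete walkthrough; a minimal sketch of the idea,
using an illustrative demo image name and ports, is:

.. code-block:: console

   $ kubectl create deployment demo --image=quay.io/kuryr/demo   # image name is illustrative
   $ kubectl scale deployment demo --replicas=2
   $ kubectl expose deployment demo --port=80 --target-port=8080
   $ kubectl get svc demo                                         # note the cluster IP
   $ curl http://<demo-service-cluster-ip>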
Nested Containers Test Environment (VLAN)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Another deployment option is nested-vlan, where containers are created inside
OpenStack VMs by using the Trunk ports support. Thus, we first need to deploy
an undercloud devstack environment with the components needed to create VMs
(e.g., Glance, Nova, Neutron, Keystone, ...), as well as the needed Dragonflow
configuration, such as enabling the trunk support that the VM will need. Then
the overcloud deployment, with the kuryr components, is installed inside the VM.
Undercloud deployment
+++++++++++++++++++++
The steps to deploy the undercloud environment are the same as described above
for the `Single Node Test Environment`, except for the sample local.conf to use
(step 4), in this case:
.. code-block:: console
$ cd devstack
$ cp ../kuryr-kubernetes/devstack/local.conf.pod-in-vm.undercloud.df.sample local.conf
The main differences from the default Dragonflow local.conf sample are:
- There is no need to enable the kuryr-kubernetes plugin as this will be
installed inside the VM (overcloud).
- There is no need to enable the kuryr related services as they will also be
installed inside the VM: kuryr-kubernetes, kubernetes-api,
kubernetes-controller-manager, kubernetes-scheduler and kubelet.
- Nova and Glance components need to be enabled to be able to create the VM
where we will install the overcloud.
- The Dragonflow Trunk service plugin needs to be enabled to ensure Trunk port
support.
Once the undercloud deployment has finished, the next step is to create the
overcloud VM on the parent port of a Trunk so that containers can be created
inside it with their own networks. To do that, follow the steps detailed at
`Boot VM with a Trunk Port`_.
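Those steps are the authoritative reference; roughly, and with illustrative
resource names (network, image, flavor and key), they amount to creating a
trunk on a parent port and booting the VM on that port:

.. code-block:: console

   $ openstack port create --network private parent-port
   $ openstack network trunk create --parent-port parent-port vm-trunk
   $ openstack server create --image centos7 --flavor m1.large \
       --nic port-id=parent-port --key-name demo-key overcloud-vm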
Overcloud deployment
++++++++++++++++++++
Once the VM is up and running, we can start with the overcloud configuration.
The steps to perform are the same as without Dragonflow integration, i.e., the
same steps as for ML2/OVS:
#. Log into the VM:
.. code-block:: console
$ ssh -i id_rsa_demo centos@FLOATING_IP
#. Deploy devstack following steps 3 and 4 detailed at
`How to try out nested-pods locally (VLAN + trunk)`_.
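Inside the VM this mirrors the undercloud flow. A rough sketch, assuming the
overcloud sample local.conf shipped with kuryr-kubernetes (the linked guide
names the exact file to use), is:

.. code-block:: console

   $ git clone https://opendev.org/openstack-dev/devstack.git
   $ git clone https://opendev.org/openstack/kuryr-kubernetes.git
   # sample file name assumed; use the one referenced in the linked guide
   $ cp kuryr-kubernetes/devstack/local.conf.pod-in-vm.overcloud.sample devstack/local.conf
   $ cd devstack && ./stack.sh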
Testing Nested Network Connectivity
+++++++++++++++++++++++++++++++++++
As with the baremetal testing, we can create a demo deployment in the overcloud
VM, scale it to any number of pods and expose the service to check that the
deployment was successful. To do that, check out
`Testing Nested Network Connectivity`_.
.. _How to enable ports pool with devstack: https://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/ports-pool.html
.. _Inspect default Configuration: https://docs.openstack.org/kuryr-kubernetes/latest/installation/default_configuration.html
.. _Testing Network Connectivity: https://docs.openstack.org/kuryr-kubernetes/latest/installation/testing_connectivity.html
.. _Boot VM with a Trunk Port: https://docs.openstack.org/kuryr-kubernetes/latest/installation/trunk_ports.html
.. _How to try out nested-pods locally (VLAN + trunk): https://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/nested-vlan.html
.. _Testing Nested Network Connectivity: https://docs.openstack.org/kuryr-kubernetes/latest/installation/testing_nested_connectivity.html


@@ -38,6 +38,5 @@ ML2 drivers.
nested-dpdk
odl_support
ovn_support
dragonflow_support
containerized
ports-pool