Use /var/run instead of /var/run/openvswitch

Mounting /var/run/openvswitch was necessary for ovs-vsctl to work,
since it needs access to openvswitch/db.sock.

But supporting vhostuser ports requires moving the socket out of
/var/run/openvswitch into another directory, and that directory has
to be on the same mount point both on the host and inside Docker's
overlay fs.
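
A minimal sketch of why the shared mount point matters (the socket name
vhu-abc123 and the /var/run/vhostuser directory are hypothetical here, not
part of this patch): rename(2) only succeeds within a single filesystem, so
the move is only safe when source and target sit on the same mount, visible
at the same path on the host and in the container:

    mkdir -p /var/run/vhostuser                     # same tmpfs as /var/run/openvswitch
    mv /var/run/openvswitch/vhu-abc123 /var/run/vhostuser/
                                                    # pure rename(2); the listening
                                                    # unix socket stays usable
    mv /var/run/openvswitch/vhu-abc123 /tmp/        # different mount point: mv falls
                                                    # back to copy+unlink, nothing
                                                    # listens on the copy, so connect()
                                                    # to it is refused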

Change-Id: I572bdcb365e34cea481f42b94d9be0748e0d5c93
Signed-off-by: Alexey Perevalov <a.perevalov@samsung.com>
Alexey Perevalov 2018-09-13 04:25:11 -04:00
parent 79ec91f681
commit a25da709b9
6 changed files with 20 additions and 11 deletions


@@ -26,7 +26,7 @@
         neutron: https://git.openstack.org/openstack/neutron
     vars:
       devstack_localrc:
-        OVS_HOST_PATH: /usr/local/var/run/openvswitch
+        VAR_RUN_PATH: /usr/local/var/run
         Q_USE_PROVIDERNET_FOR_PUBLIC: true
         PHYSICAL_NETWORK: public
         OVN_L3_CREATE_PUBLIC_NETWORK: true


@@ -663,10 +663,10 @@ spec:
           - name: proc
             mountPath: /host_proc
 EOF
-    if [[ -n "$OVS_HOST_PATH" ]]; then
+    if [[ -n "$VAR_RUN_PATH" ]]; then
         cat >> "${output_dir}/cni_ds.yml" << EOF
           - name: openvswitch
-            mountPath: /var/run/openvswitch
+            mountPath: /var/run
 EOF
     fi
     if [ "$cni_daemon" == "True" ]; then
@@ -700,11 +700,11 @@ EOF
         hostPath:
           path: /proc
 EOF
-    if [[ -n "$OVS_HOST_PATH" ]]; then
+    if [[ -n "$VAR_RUN_PATH" ]]; then
         cat >> "${output_dir}/cni_ds.yml" << EOF
       - name: openvswitch
         hostPath:
-          path: ${OVS_HOST_PATH}
+          path: ${VAR_RUN_PATH}
 EOF
     fi
 }
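
For reference, when VAR_RUN_PATH is set (the devstack samples below use
/usr/local/var/run), the two conditionals above append an openvswitch
volumeMount and a matching hostPath volume to cni_ds.yml. A quick way to
inspect the result (the expected output shape is an assumption, not taken
from this patch):

    grep -B1 -A2 'name: openvswitch' "${output_dir}/cni_ds.yml"
    # expected shape:
    #   - name: openvswitch
    #     mountPath: /var/run            # container side: the whole /var/run tree
    #   - name: openvswitch
    #     hostPath:
    #       path: /usr/local/var/run     # host side of the same tree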


@@ -36,10 +36,10 @@ enable_service networking-ovn-metadata-agent
 enable_service neutron
 enable_service q-svc
 
-# OVS HOST PATH
+# VAR RUN PATH
 # =============
-# OVS_HOST_PATH=/var/run/openvswitch
-OVS_HOST_PATH=/usr/local/var/run/openvswitch
+# VAR_RUN_PATH=/var/run
+VAR_RUN_PATH=/usr/local/var/run
 
 # OCTAVIA
 KURYR_K8S_LBAAS_USE_OCTAVIA=True


@@ -33,9 +33,9 @@ enable_service q-dhcp
 enable_service q-l3
 enable_service q-svc
 
-# OVS HOST PATH
+# VAR RUN PATH
 # =============
-# OVS_HOST_PATH=/var/run/openvswitch
+# VAR_RUN_PATH=/var/run
 
 # OCTAVIA
 KURYR_K8S_LBAAS_USE_OCTAVIA=True


@@ -91,7 +91,7 @@ KURYR_VIF_POOL_MANAGER=${KURYR_VIF_POOL_MANAGER:-False}
 KURYR_HEALTH_SERVER_PORT=${KURYR_HEALTH_SERVER_PORT:-8082}
 
 # OVS HOST PATH
-OVS_HOST_PATH=${OVS_HOST_PATH:-/var/run/openvswitch}
+VAR_RUN_PATH=${VAR_RUN_PATH:-/var/run}
 
 # Health Server
 KURYR_CNI_HEALTH_SERVER_PORT=${KURYR_CNI_HEALTH_SERVER_PORT:-8090}
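
The ${VAR_RUN_PATH:-/var/run} expansion above keeps any value the operator
already set and only falls back to /var/run; a small sketch of both cases:

    unset VAR_RUN_PATH
    VAR_RUN_PATH=${VAR_RUN_PATH:-/var/run}
    echo "$VAR_RUN_PATH"               # -> /var/run
    VAR_RUN_PATH=/usr/local/var/run    # e.g. set via local.conf, as in the
                                       # samples above, before this file is sourced
    VAR_RUN_PATH=${VAR_RUN_PATH:-/var/run}
    echo "$VAR_RUN_PATH"               # -> /usr/local/var/run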


@@ -108,6 +108,15 @@ This should generate 5 files in your ``<output_dir>``:
 * controller_deployment.yml
 * cni_ds.yml
 
+.. note::
+   The kuryr-cni DaemonSet mounts /var/run because it needs several of its
+   subdirectories, such as openvswitch and the auxiliary directory for the
+   vhostuser configuration and socket files. When neutron-openvswitch-agent
+   runs with datapath_type = netdev, kuryr-kubernetes moves the vhostuser
+   socket to that auxiliary directory, which must be on the same mount point,
+   otherwise connecting to the socket is refused. If Open vSwitch keeps its
+   vhostuser sockets outside /var/run/openvswitch, adjust the openvswitch
+   mount point in cni_ds.yaml and the [vhostuser] section in config_map.yml.
 
 Deploying Kuryr resources on Kubernetes
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
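
A hypothetical illustration of the adjustment that note describes (the
[vhostuser] option names below are assumptions for this sketch, not taken
from the patch): if Open vSwitch keeps its vhostuser sockets under
/usr/local/var/run/openvswitch, the kuryr.conf fragment embedded in
config_map.yml would point at matching directories:

    [vhostuser]
    # auxiliary directory kuryr moves vhostuser sockets into; must share a
    # mount point with ovs_vhu_path
    mount_point = /usr/local/var/run/vhostuser
    # where Open vSwitch creates the vhostuser sockets
    ovs_vhu_path = /usr/local/var/run/openvswitch

and the openvswitch hostPath in cni_ds.yaml would be changed to the same
parent directory, so both paths stay on one mount.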