Implement multi-vif driver

This patch implements the multi-vif part of the VIF-Handler
and VIF Drivers design.

This patch creates a new driver type, MultiVIFDriver. It will
be the base class of real drivers such as sriov,
additional_subnet and npwg_multiple_interfaces. Each derived
driver should implement the parsing of the additional
interface definitions in K8S pods, and call a VIF driver to
either create or acquire the Neutron port and its VIF object.

A list of enabled drivers can be returned by its class method,
so that the VIFHandler can invoke each driver in turn to
collect the whole list of interfaces for one pod.

Partially Implements: blueprint multi-vif-pods
Change-Id: I8b5175a4637b18a0b574e27674a217865afb22b7
Signed-off-by: Peng Liu <pliu@redhat.com>
Author: Peng Liu 2018-06-15 14:11:34 +08:00
parent 68873beaf0
commit aaeb4f4687
10 changed files with 196 additions and 22 deletions

(binary image updated: 34 KiB before, 35 KiB after)


@ -25,21 +25,27 @@ Kuryr-Kubernetes Controller.
VIF-Handler
-----------
VIF-handler is intended to handle VIFs. The main aim of VIF-handler is to get
the pod object, send it to Multi-vif driver and get vif objects from it. After
that VIF-handler is able to activate, release or update vifs. Also VIF-Handler
is always authorized to get main vif for pod from generic driver.
the pod object, send it to (1) the VIF driver for the default network and (2)
the enabled Multi-VIF drivers for the additional networks, and get VIF objects
from both. After that the VIF-handler is able to activate, release or update VIFs.
VIF-handler should stay clean whereas parsing of specific pod information
should be moved to Multi-vif driver.
should be done by Multi-VIF drivers.
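The flow described here can be sketched as follows. This is a simplified, standalone sketch: `collect_vifs` and the stub driver objects are hypothetical names, while the `eth0`/`ethN` interface naming mirrors the constants introduced by this patch:

```python
# Sketch of the VIF-Handler flow: one default VIF plus any additional VIFs.
# The interface-name constants mirror DEFAULT_IFNAME / ADDITIONAL_IFNAME_PREFIX
# from this patch; collect_vifs itself is a hypothetical helper.
DEFAULT_IFNAME = 'eth0'
ADDITIONAL_IFNAME_PREFIX = 'eth'


def collect_vifs(pod, project_id, subnets, security_groups,
                 default_driver, multi_vif_drivers):
    """Gather the default VIF plus any additional VIFs for one pod."""
    # The default network always comes from the regular VIF driver.
    vifs = {DEFAULT_IFNAME: default_driver.request_vif(
        pod, project_id, subnets, security_groups)}
    # Each enabled Multi-VIF driver may contribute zero or more extra VIFs.
    additional = []
    for driver in multi_vif_drivers:
        additional.extend(
            driver.request_additional_vifs(pod, project_id, security_groups))
    # Additional interfaces are named eth1, eth2, ... after the default eth0.
    for i, vif in enumerate(additional, start=1):
        vifs[ADDITIONAL_IFNAME_PREFIX + str(i)] = vif
    return vifs
```

With no Multi-VIF drivers enabled (or only the noop driver), the result is simply `{'eth0': <main VIF>}`.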
Multi-vif driver
Multi-VIF driver
~~~~~~~~~~~~~~~~
The main driver that is authorized to call other drivers. The main aim of
this driver is to get list of enabled drivers, parse pod annotations, pass
pod object to enabled drivers and get vif objects from them to pass these
objects to VIF-handler finally. The list of parsed annotations by Multi-vif
driver includes sriov requests, additional subnets requests and specific ports.
If the pod object doesn't have annotation which is required by some of the
drivers then this driver is not called or driver can just return.
A new type of driver which calls other VIF drivers to attach
additional interfaces to Pods. The main aim of this kind of driver is to get
the additional interfaces from the Pod definition, then invoke real VIF drivers
like neutron-vif, nested-macvlan or sriov to retrieve the VIF objects
accordingly.
All Multi-VIF drivers should be derived from the class *MultiVIFDriver*, and
all should implement the *request_additional_vifs* method, which returns a list
of VIF objects. Those VIF objects are created by each of the VIF drivers
invoked by the Multi-VIF driver. Each Multi-VIF driver should support a syntax
for defining additional interfaces in the Pod. If the pod object doesn't define
additional interfaces, the Multi-VIF driver can simply return an empty list.
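A minimal derived driver might look like the sketch below. The `MultiVIFDriver` base class here is a simplified stand-in for the real one in `kuryr_kubernetes.controller.drivers.base`, and the annotation key, the `AnnotationMultiVIFDriver` name, and the delegate's `request_vif` call are all hypothetical:

```python
import abc


class MultiVIFDriver(abc.ABC):
    """Simplified stand-in for kuryr's MultiVIFDriver base class."""

    @abc.abstractmethod
    def request_additional_vifs(self, pod, project_id, security_groups):
        raise NotImplementedError()


class AnnotationMultiVIFDriver(MultiVIFDriver):
    """Hypothetical driver: reads a pod annotation listing extra networks."""

    # Hypothetical annotation key; each real driver defines its own syntax.
    ANNOTATION = 'openstack.org/kuryr-additional-nets'

    def __init__(self, vif_driver):
        # Real VIF driver (e.g. neutron-vif) to delegate port creation to.
        self._vif_driver = vif_driver

    def request_additional_vifs(self, pod, project_id, security_groups):
        annotations = pod['metadata'].get('annotations', {})
        nets = [n for n in annotations.get(self.ANNOTATION, '').split(',')
                if n]
        # No additional interfaces defined: return an empty list.
        return [self._vif_driver.request_vif(pod, project_id, net,
                                             security_groups)
                for net in nets]
```

The parsing format is intentionally left to each driver; only the `request_additional_vifs` contract (return a list of VIF objects) is shared.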
Diagram describing VifHandler - Drivers flow is given below:
.. image:: ../../images/vif_handler_drivers_design.png
@ -49,19 +55,28 @@ Diagram describing VifHandler - Drivers flow is giver below:
Config Options
~~~~~~~~~~~~~~
Add new config option "enabled_vif_drivers" (list) to config file that shows
what drivers should be used in Multi-vif driver to collect vif objects. This
means that Multi-vif driver will pass pod object only to specified drivers
(generic driver is always used by default and it's not necessary to specify
it) and get vifs from them.
Add a new config option "multi_vif_drivers" (list) to the config file that
specifies which Multi-VIF drivers should be used to request the additional VIF
objects. One or more multi_vif_drivers may be enabled, which means that
multi_vif_drivers can work either separately or together. By default, a noop
driver, which does nothing, is used if this field is not explicitly
specified.
Option in config file might look like this:
.. code-block:: ini
[kubernetes]
enabled_vif_drivers = sriov, additional_subnets
multi_vif_drivers = sriov, additional_subnets
Or like this:
.. code-block:: ini
[kubernetes]
multi_vif_drivers = npwg_multiple_interfaces
Additional Subnets Driver
~~~~~~~~~~~~~~~~~~~~~~~~~


@ -176,6 +176,10 @@ k8s_opts = [
cfg.StrOpt('network_policy_driver',
help=_("Driver for network policies"),
default='default'),
cfg.ListOpt('multi_vif_drivers',
help=_("The drivers that provide additional VIFs for "
"Kubernetes Pods."),
default='noop'),
]
neutron_defaults = [


@ -56,3 +56,5 @@ VIF_POOL_LIST = '/listPools'
VIF_POOL_SHOW = '/showPool'
DEFAULT_IFNAME = 'eth0'
ADDITIONAL_IFNAME_PREFIX = 'eth'


@ -23,6 +23,7 @@ from kuryr_kubernetes import config
_DRIVER_NAMESPACE_BASE = 'kuryr_kubernetes.controller.drivers'
_DRIVER_MANAGERS = {}
_MULTI_VIF_DRIVERS = []
class DriverBase(object):
@ -347,6 +348,42 @@ class PodVIFDriver(DriverBase):
@six.add_metaclass(abc.ABCMeta)
class MultiVIFDriver(DriverBase):
"""Manages additional ports of Kubernetes Pods."""
ALIAS = 'multi_vif'
@abc.abstractmethod
def request_additional_vifs(
self, pod, project_id, security_groups):
"""Links Neutron ports to pod and returns them as a list of VIF objects.
Implementing drivers must be able to parse the additional interface
definition from pod. The format of the definition is up to the
implementation of each driver. Then implementing drivers must invoke
the VIF drivers to either create new Neutron ports on each request or
reuse available ports when possible.
:param pod: dict containing Kubernetes Pod object
:param project_id: OpenStack project ID
:param security_groups: list containing security groups' IDs as
returned by
`PodSecurityGroupsDriver.get_security_groups`
:return: VIF object list
"""
raise NotImplementedError()
@classmethod
def get_enabled_drivers(cls):
if not _MULTI_VIF_DRIVERS:
drivers = config.CONF.kubernetes['multi_vif_drivers']
for driver in drivers:
_MULTI_VIF_DRIVERS.append(cls.get_instance(driver))
return _MULTI_VIF_DRIVERS
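In isolation, the lazy, memoized loading in *get_enabled_drivers* can be sketched like this. It is a standalone sketch: the `CONF` dict and `make_driver` below are hypothetical stand-ins for oslo.config and the stevedore-backed `get_instance`:

```python
# Standalone sketch of lazy, memoized driver loading. The module-level list
# is filled on the first call; every later call reuses the cached instances.
_MULTI_VIF_DRIVERS = []

# Stand-in for oslo.config's CONF object (hypothetical literal values).
CONF = {'kubernetes': {'multi_vif_drivers': ['noop']}}


def make_driver(name):
    """Stand-in for cls.get_instance(driver), which loads a stevedore plugin."""
    return {'name': name}


def get_enabled_drivers():
    if not _MULTI_VIF_DRIVERS:
        for driver in CONF['kubernetes']['multi_vif_drivers']:
            _MULTI_VIF_DRIVERS.append(make_driver(driver))
    return _MULTI_VIF_DRIVERS
```

Because the cache is module-level, all handlers in the process share one set of driver instances, and the config is read only once.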
class LBaaSDriver(DriverBase):
"""Base class for Openstack loadbalancer services."""


@ -0,0 +1,22 @@
# Copyright (c) 2018 RedHat, Inc.
# All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from kuryr_kubernetes.controller.drivers import base
class NoopMultiVIFDriver(base.MultiVIFDriver):
def request_additional_vifs(
self, pod, project_id, security_groups):
return []


@ -52,6 +52,7 @@ class VIFHandler(k8s_base.ResourceEventHandler):
self._drv_vif_pool = drivers.VIFPoolDriver.get_instance(
driver_alias='multi_pool')
self._drv_vif_pool.set_vif_driver()
self._drv_multi_vif = drivers.MultiVIFDriver.get_enabled_drivers()
def on_present(self, pod):
if self._is_host_network(pod) or not self._is_pending_node(pod):
@ -69,13 +70,21 @@ class VIFHandler(k8s_base.ResourceEventHandler):
security_groups = self._drv_sg.get_security_groups(pod, project_id)
subnets = self._drv_subnets.get_subnets(pod, project_id)
# NOTE(danil): There is currently no way to actually request
# multiple VIFs. However we're packing the main_vif 'eth0' in a
# dict here to facilitate future work in this area
# Request the default interface of pod
main_vif = self._drv_vif_pool.request_vif(
pod, project_id, subnets, security_groups)
vifs[constants.DEFAULT_IFNAME] = main_vif
# Request the additional interfaces from multiple drivers
additional_vifs = []
for driver in self._drv_multi_vif:
additional_vifs.extend(
driver.request_additional_vifs(
pod, project_id, security_groups))
if additional_vifs:
for i, vif in enumerate(additional_vifs, start=1):
vifs[constants.ADDITIONAL_IFNAME_PREFIX + str(i)] = vif
try:
self._set_vifs(pod, vifs)
except k_exc.K8sClientException as ex:


@ -81,3 +81,34 @@ class TestDriverBase(test_base.TestCase):
self.assertRaises(TypeError, _TestDriver.get_instance)
m_cfg.assert_not_called()
m_stv_mgr.assert_not_called()
class TestMultiVIFDriver(test_base.TestCase):
@mock.patch.object(d_base, '_MULTI_VIF_DRIVERS', [])
@mock.patch('kuryr_kubernetes.config.CONF')
def test_get_enabled_drivers(self, m_cfg):
cfg_name = 'multi_vif_drivers'
drv_name = 'driver_impl'
m_cfg.kubernetes.__getitem__.return_value = [drv_name]
m_drv = mock.MagicMock()
d_base.MultiVIFDriver.get_instance = mock.MagicMock(return_value=m_drv)
self.assertIn(m_drv, d_base.MultiVIFDriver.get_enabled_drivers())
m_cfg.kubernetes.__getitem__.assert_called_once_with(cfg_name)
@mock.patch.object(d_base, '_MULTI_VIF_DRIVERS', [])
@mock.patch('kuryr_kubernetes.config.CONF')
def test_get_enabled_drivers_multiple(self, m_cfg):
cfg_name = 'multi_vif_drivers'
drv1_name = 'driver_impl_1'
drv2_name = 'driver_impl_2'
m_cfg.kubernetes.__getitem__.return_value = [drv1_name, drv2_name]
m_drv1 = mock.MagicMock()
m_drv2 = mock.MagicMock()
d_base.MultiVIFDriver.get_instance = mock.MagicMock()
d_base.MultiVIFDriver.get_instance.side_effect = [m_drv1, m_drv2]
self.assertIn(m_drv1, d_base.MultiVIFDriver.get_enabled_drivers())
self.assertIn(m_drv2, d_base.MultiVIFDriver.get_enabled_drivers())
m_cfg.kubernetes.__getitem__.assert_called_once_with(cfg_name)


@ -34,6 +34,8 @@ class TestVIFHandler(test_base.TestCase):
self._vif.active = True
self._vif_serialized = mock.sentinel.vif_serialized
self._vifs = {k_const.DEFAULT_IFNAME: self._vif}
self._multi_vif_drv = mock.MagicMock(spec=drivers.MultiVIFDriver)
self._additioan_vifs = []
self._pod_version = mock.sentinel.pod_version
self._pod_link = mock.sentinel.pod_link
@ -52,6 +54,7 @@ class TestVIFHandler(test_base.TestCase):
self._handler._drv_vif = mock.Mock(spec=drivers.PodVIFDriver)
self._handler._drv_vif_pool = mock.MagicMock(
spec=drivers.VIFPoolDriver)
self._handler._drv_multi_vif = [self._multi_vif_drv]
self._get_project = self._handler._drv_project.get_project
self._get_subnets = self._handler._drv_subnets.get_subnets
@ -64,8 +67,11 @@ class TestVIFHandler(test_base.TestCase):
self._set_vifs = self._handler._set_vifs
self._is_host_network = self._handler._is_host_network
self._is_pending_node = self._handler._is_pending_node
self._request_additional_vifs = \
self._multi_vif_drv.request_additional_vifs
self._request_vif.return_value = self._vif
self._request_additional_vifs.return_value = self._additioan_vifs
self._get_vifs.return_value = self._vifs
self._is_host_network.return_value = False
self._is_pending_node.return_value = True
@ -78,6 +84,7 @@ class TestVIFHandler(test_base.TestCase):
self._handler._drv_for_vif = h_vif.VIFHandler._drv_for_vif.__get__(
self._handler)
@mock.patch.object(drivers.MultiVIFDriver, 'get_enabled_drivers')
@mock.patch.object(drivers.VIFPoolDriver, 'set_vif_driver')
@mock.patch.object(drivers.VIFPoolDriver, 'get_instance')
@mock.patch.object(drivers.PodVIFDriver, 'get_instance')
@ -86,17 +93,19 @@ class TestVIFHandler(test_base.TestCase):
@mock.patch.object(drivers.PodProjectDriver, 'get_instance')
def test_init(self, m_get_project_driver, m_get_subnets_driver,
m_get_sg_driver, m_get_vif_driver, m_get_vif_pool_driver,
m_set_vifs_driver):
m_set_vifs_driver, m_get_multi_vif_drivers):
project_driver = mock.sentinel.project_driver
subnets_driver = mock.sentinel.subnets_driver
sg_driver = mock.sentinel.sg_driver
vif_driver = mock.sentinel.vif_driver
vif_pool_driver = mock.Mock(spec=drivers.VIFPoolDriver)
multi_vif_drivers = [mock.MagicMock(spec=drivers.MultiVIFDriver)]
m_get_project_driver.return_value = project_driver
m_get_subnets_driver.return_value = subnets_driver
m_get_sg_driver.return_value = sg_driver
m_get_vif_driver.return_value = vif_driver
m_get_vif_pool_driver.return_value = vif_pool_driver
m_get_multi_vif_drivers.return_value = multi_vif_drivers
handler = h_vif.VIFHandler()
@ -104,6 +113,7 @@ class TestVIFHandler(test_base.TestCase):
self.assertEqual(subnets_driver, handler._drv_subnets)
self.assertEqual(sg_driver, handler._drv_sg)
self.assertEqual(vif_pool_driver, handler._drv_vif_pool)
self.assertEqual(multi_vif_drivers, handler._drv_multi_vif)
def test_is_host_network(self):
self._pod['spec']['hostNetwork'] = True
@ -137,6 +147,7 @@ class TestVIFHandler(test_base.TestCase):
self._get_vifs.assert_called_once_with(self._pod)
self._request_vif.assert_not_called()
self._request_additional_vifs.assert_not_called()
self._activate_vif.assert_not_called()
self._set_vifs.assert_not_called()
@ -147,6 +158,7 @@ class TestVIFHandler(test_base.TestCase):
self._get_vifs.assert_not_called()
self._request_vif.assert_not_called()
self._request_additional_vifs.assert_not_called()
self._activate_vif.assert_not_called()
self._set_vifs.assert_not_called()
@ -157,6 +169,7 @@ class TestVIFHandler(test_base.TestCase):
self._get_vifs.assert_not_called()
self._request_vif.assert_not_called()
self._request_additional_vifs.assert_not_called()
self._activate_vif.assert_not_called()
self._set_vifs.assert_not_called()
@ -169,6 +182,7 @@ class TestVIFHandler(test_base.TestCase):
self._activate_vif.assert_called_once_with(self._pod, self._vif)
self._set_vifs.assert_called_once_with(self._pod, self._vifs)
self._request_vif.assert_not_called()
self._request_additional_vifs.assert_not_called()
def test_on_present_create(self):
self._get_vifs.return_value = {}
@ -178,9 +192,28 @@ class TestVIFHandler(test_base.TestCase):
self._get_vifs.assert_called_once_with(self._pod)
self._request_vif.assert_called_once_with(
self._pod, self._project_id, self._subnets, self._security_groups)
self._request_additional_vifs.assert_called_once_with(
self._pod, self._project_id, self._security_groups)
self._set_vifs.assert_called_once_with(self._pod, self._vifs)
self._activate_vif.assert_not_called()
def test_on_present_create_with_additional_vifs(self):
self._get_vifs.return_value = {}
self._request_additional_vifs.return_value = [mock.Mock()]
vifs = self._vifs.copy()
vifs.update({k_const.ADDITIONAL_IFNAME_PREFIX+'1':
self._request_additional_vifs.return_value[0]})
h_vif.VIFHandler.on_present(self._handler, self._pod)
self._get_vifs.assert_called_once_with(self._pod)
self._request_vif.assert_called_once_with(
self._pod, self._project_id, self._subnets, self._security_groups)
self._request_additional_vifs.assert_called_once_with(
self._pod, self._project_id, self._security_groups)
self._set_vifs.assert_called_once_with(self._pod, vifs)
self._activate_vif.assert_not_called()
def test_on_present_rollback(self):
self._get_vifs.return_value = {}
self._set_vifs.side_effect = k_exc.K8sClientException
@ -190,6 +223,8 @@ class TestVIFHandler(test_base.TestCase):
self._get_vifs.assert_called_once_with(self._pod)
self._request_vif.assert_called_once_with(
self._pod, self._project_id, self._subnets, self._security_groups)
self._request_additional_vifs.assert_called_once_with(
self._pod, self._project_id, self._security_groups)
self._set_vifs.assert_called_once_with(self._pod, self._vifs)
self._release_vif.assert_called_once_with(self._pod, self._vif,
self._project_id,
@ -204,6 +239,22 @@ class TestVIFHandler(test_base.TestCase):
self._project_id,
self._security_groups)
def test_on_deleted_with_additional_vifs(self):
self._additioan_vifs = [mock.Mock()]
self._get_vifs.return_value = self._vifs.copy()
self._get_vifs.return_value.update(
{k_const.ADDITIONAL_IFNAME_PREFIX+'1': self._additioan_vifs[0]})
h_vif.VIFHandler.on_deleted(self._handler, self._pod)
self._get_vifs.assert_called_once_with(self._pod)
self._release_vif.assert_any_call(self._pod, self._vif,
self._project_id,
self._security_groups)
self._release_vif.assert_any_call(self._pod, self._additioan_vifs[0],
self._project_id,
self._security_groups)
def test_on_deleted_host_network(self):
self._is_host_network.return_value = True


@ -95,6 +95,9 @@ kuryr_kubernetes.controller.handlers =
policy = kuryr_kubernetes.controller.handlers.policy:NetworkPolicyHandler
test_handler = kuryr_kubernetes.tests.unit.controller.handlers.test_fake_handler:TestHandler
kuryr_kubernetes.controller.drivers.multi_vif =
noop = kuryr_kubernetes.controller.drivers.multi_vif:NoopMultiVIFDriver
[files]
packages =
kuryr_kubernetes