libvirt: Delegate OVS plug to os-vif

os-vif 1.15.0 added the ability to create an OVS port during plugging
by specifying the 'create_port' attribute in the 'port_profile' field.
By delegating port creation to os-vif, we can rely on its 'isolate_vif'
config option [1], which temporarily sets the VLAN tag to 4095 (0xfff);
this VLAN is reserved for implementation use [2] and is used by neutron
as a dead VLAN [3]. By doing this, we ensure VIFs are plugged securely,
preventing guests from accessing other tenants' networks before the
neutron OVS agent can wire up the port.
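
To illustrate what the delegated plug looks like from os-vif's side,
here is a minimal sketch (not nova code; the object model and keyword
arguments mirror the os-vif usage in the tests below, while the UUIDs,
MAC and bridge names are placeholders):

    import os_vif
    from os_vif.objects import instance_info as osv_instance_info
    from os_vif.objects import network as osv_network
    from os_vif.objects import vif as osv_vif

    os_vif.initialize()  # load the os-vif plugins (ovs, linux_bridge, ...)

    # 'create_port' is the new attribute: when True, the ovs plugin adds
    # the port to the bridge itself (tagged with the dead VLAN when
    # 'isolate_vif' is enabled) instead of leaving that to libvirt.
    profile = osv_vif.VIFPortProfileOpenVSwitch(
        interface_id='dc065497-3c8d-4f44-8fb4-e1d33c16a536',
        create_port=True,
    )
    vif = osv_vif.VIFOpenVSwitch(
        id='dc065497-3c8d-4f44-8fb4-e1d33c16a536',
        address='22:52:25:62:e2:aa',
        vif_name='nicdc065497-3c',
        bridge_name='br-int',
        plugin='ovs',
        port_profile=profile,
        network=osv_network.Network(bridge='br-int'),
    )
    instance_info = osv_instance_info.InstanceInfo(
        uuid='8e619ffb-6f0e-4da9-9ea9-ecf2b475ceb2',
        name='instance-00000001',
    )
    os_vif.plug(vif, instance_info)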

This change requires a little dance as part of the live migration flow.
Since we can't be certain the destination host has a version of os-vif
that supports this feature, we need to use a sentinel to indicate when
it does. Typically we would do so with a field in
'LibvirtLiveMigrateData', such as the 'src_supports_numa_live_migration'
and 'dst_supports_numa_live_migration' fields used to indicate support
for NUMA-aware live migration. However, doing so would prevent us from
backporting this important fix, since o.vo changes are not backportable.
Instead, we (somewhat evilly) rely on the free-form nature of the
'VIFMigrateData.profile_json' string field, which stores JSON blobs and
is included in 'LibvirtLiveMigrateData' via the 'vifs' attribute, to
transport this sentinel. This is a hack but is necessary to work around
the lack of a free-form "capabilities"-style dict that would allow us to
make backportable fixes to live migration features.
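
As a toy illustration of the sentinel (the real implementation is the
'supports_os_vif_delegation' property added to 'VIFMigrateData' below;
this stand-in class only mimics the JSON-blob field):

    import json

    OS_VIF_DELEGATION = 'os_vif_delegation'

    class FakeVIFMigrateData:
        # only 'profile_json' is real o.vo state; the properties show
        # how the sentinel rides inside it
        def __init__(self, profile_json='{}'):
            self.profile_json = profile_json

        @property
        def profile(self):
            return json.loads(self.profile_json)

        @profile.setter
        def profile(self, profile_dict):
            self.profile_json = json.dumps(profile_dict)

        @property
        def supports_os_vif_delegation(self):
            return self.profile.get(OS_VIF_DELEGATION, False)

        @supports_os_vif_delegation.setter
        def supports_os_vif_delegation(self, supported):
            # the getter builds a fresh dict each time, so rebuild and
            # reassign rather than mutating the returned copy
            self.profile = dict(
                self.profile, **{OS_VIF_DELEGATION: supported})

    mig_vif = FakeVIFMigrateData()
    mig_vif.supports_os_vif_delegation = True    # set on the destination
    assert mig_vif.supports_os_vif_delegation    # read back on the source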

Note that this change has the knock-on effect of modifying the XML
generated for OVS ports: when hybrid plug is false, the interface will
now be of type 'ethernet' rather than 'bridge' as before. This explains
the larger-than-expected test damage but should not affect users.
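
For reference, the shape of the change for a non-hybrid-plug OVS port
(taken from the updated VIF tests below; MAC and names are test data):

    Before, libvirt wired the port up itself:

        <interface type="bridge">
          <mac address="22:52:25:62:e2:aa"/>
          <source bridge="br0"/>
          <target dev="nicdc065497-3c"/>
          <virtualport type="openvswitch">
            <parameters interfaceid="07bd6cea-fb37-4594-b769-90fc51854ee9"/>
          </virtualport>
        </interface>

    Now os-vif pre-creates the OVS port and libvirt only attaches the tap:

        <interface type="ethernet">
          <mac address="22:52:25:62:e2:aa"/>
          <target dev="nicdc065497-3c"/>
        </interface>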

[1] https://opendev.org/openstack/os-vif/src/tag/2.4.0/vif_plug_ovs/ovs.py#L90-L93
[2] https://en.wikipedia.org/wiki/IEEE_802.1Q#Frame_format
[3] https://answers.launchpad.net/neutron/+question/231806

Change-Id: I11fb5d3ada7f27b39c183157ea73c8b72b4e672e
Depends-On: Id12486b3127ab4ac8ad9ef2b3641da1b79a25a50
Closes-Bug: #1734320
Closes-Bug: #1815989
Author: Stephen Finucane (2021-04-30 12:51:35 +01:00)
Parent: dab4ec1a53
Commit: a62dd42c0d
15 changed files with 251 additions and 115 deletions


@ -63,7 +63,7 @@ os-client-config==1.29.0
os-resource-classes==0.4.0
os-service-types==1.7.0
os-traits==2.5.0
os-vif==1.14.0
os-vif==1.15.2
os-win==5.4.0
os-xenapi==0.3.4
osc-lib==1.10.0


@ -7933,11 +7933,12 @@ class ComputeManager(manager.Manager):
'but source is either too old, or is set to an '
'older upgrade level.', instance=instance)
if self.network_api.supports_port_binding_extension(ctxt):
# Create migrate_data vifs
migrate_data.vifs = \
migrate_data_obj.\
VIFMigrateData.create_skeleton_migrate_vifs(
instance.get_network_info())
# Create migrate_data vifs if not provided by driver.
if 'vifs' not in migrate_data:
migrate_data.vifs = (
migrate_data_obj.
VIFMigrateData.create_skeleton_migrate_vifs(
instance.get_network_info()))
# Claim PCI devices for VIFs on destination (if needed)
port_id_to_pci = self._claim_pci_for_instance_vifs(
ctxt, instance)


@ -384,7 +384,8 @@ class VIF(Model):
details=None, devname=None, ovs_interfaceid=None,
qbh_params=None, qbg_params=None, active=False,
vnic_type=VNIC_TYPE_NORMAL, profile=None,
preserve_on_delete=False, **kwargs):
preserve_on_delete=False, delegate_create=False,
**kwargs):
super(VIF, self).__init__()
self['id'] = id
@ -401,6 +402,7 @@ class VIF(Model):
self['vnic_type'] = vnic_type
self['profile'] = profile
self['preserve_on_delete'] = preserve_on_delete
self['delegate_create'] = delegate_create
self._set_meta(kwargs)
@ -408,7 +410,7 @@ class VIF(Model):
keys = ['id', 'address', 'network', 'vnic_type',
'type', 'profile', 'details', 'devname',
'ovs_interfaceid', 'qbh_params', 'qbg_params',
'active', 'preserve_on_delete']
'active', 'preserve_on_delete', 'delegate_create']
return all(self[k] == other[k] for k in keys)
def __ne__(self, other):


@ -347,6 +347,7 @@ def _nova_to_osvif_vif_ovs(vif):
vif_name=vif_name,
bridge_name=_get_hybrid_bridge_name(vif))
else:
profile.create_port = vif.get('delegate_create', False)
obj = _get_vif_instance(
vif,
objects.vif.VIFOpenVSwitch,


@ -22,8 +22,8 @@ from nova import exception
from nova.objects import base as obj_base
from nova.objects import fields
LOG = log.getLogger(__name__)
OS_VIF_DELEGATION = 'os_vif_delegation'
@obj_base.NovaObjectRegistry.register
@ -60,6 +60,8 @@ class VIFMigrateData(obj_base.NovaObject):
@property
def vif_details(self):
if 'vif_details_json' not in self:
return {}
return jsonutils.loads(self.vif_details_json)
@vif_details.setter
@ -68,12 +70,24 @@ class VIFMigrateData(obj_base.NovaObject):
@property
def profile(self):
if 'profile_json' not in self:
return {}
return jsonutils.loads(self.profile_json)
@profile.setter
def profile(self, profile_dict):
self.profile_json = jsonutils.dumps(profile_dict)
@property
def supports_os_vif_delegation(self):
return self.profile.get(OS_VIF_DELEGATION, False)
# TODO(stephenfin): add a proper delegation field instead of storing this
# info in the profile catch-all blob
@supports_os_vif_delegation.setter
def supports_os_vif_delegation(self, supported):
# the 'profile' getter rebuilds the dict from 'profile_json' on every
# access, so build an updated copy and reassign it rather than mutating
# the returned value (which would be silently discarded)
self.profile = dict(self.profile, **{OS_VIF_DELEGATION: supported})
def get_dest_vif(self):
"""Get a destination VIF representation of this object.
@ -91,6 +105,7 @@ class VIFMigrateData(obj_base.NovaObject):
vif['vnic_type'] = self.vnic_type
vif['profile'] = self.profile
vif['details'] = self.vif_details
vif['delegate_create'] = self.supports_os_vif_delegation
return vif
@classmethod


@ -461,16 +461,15 @@ class SRIOVServersTest(_PCIServersTestBase):
self.start_compute(pci_info=pci_info)
# create the SR-IOV port
self.neutron.create_port({'port': self.neutron.network_4_port_1})
port = self.neutron.create_port(
{'port': self.neutron.network_4_port_1})
# create a server using the VF and multiple networks
extra_spec = {'pci_passthrough:alias': f'{self.VFS_ALIAS_NAME}:1'}
flavor_id = self._create_flavor(extra_spec=extra_spec)
flavor_id = self._create_flavor()
server = self._create_server(
flavor_id=flavor_id,
networks=[
{'uuid': base.LibvirtNeutronFixture.network_1['id']},
{'port': base.LibvirtNeutronFixture.network_4_port_1['id']},
{'port': port['port']['id']},
],
)
@ -484,13 +483,16 @@ class SRIOVServersTest(_PCIServersTestBase):
base.LibvirtNeutronFixture.network_1_port_2['mac_address'],
diagnostics['nic_details'][0]['mac_address'],
)
self.assertIsNotNone(diagnostics['nic_details'][0]['tx_packets'])
for key in ('rx_packets', 'tx_packets'):
self.assertIn(key, diagnostics['nic_details'][0])
self.assertEqual(
base.LibvirtNeutronFixture.network_4_port_1['mac_address'],
diagnostics['nic_details'][1]['mac_address'],
)
self.assertIsNone(diagnostics['nic_details'][1]['tx_packets'])
for key in ('rx_packets', 'tx_packets'):
self.assertIn(key, diagnostics['nic_details'][1])
def test_create_server_after_change_in_nonsriov_pf_to_sriov_pf(self):
# Starts a compute with PF not configured with SRIOV capabilities


@ -120,7 +120,7 @@ def fake_get_instance_nw_info(test, num_networks=1):
qbg_params=None,
active=False,
vnic_type='normal',
profile=None,
profile={},
preserve_on_delete=False,
meta={'rxtx_cap': 30},
)


@ -458,7 +458,8 @@ class OSVIFUtilTestCase(test.NoDBTestCase):
subnets=[]),
details={
model.VIF_DETAILS_PORT_FILTER: True,
}
},
delegate_create=False,
)
actual = os_vif_util.nova_to_osvif_vif(vif)
@ -471,7 +472,8 @@ class OSVIFUtilTestCase(test.NoDBTestCase):
plugin="ovs",
port_profile=osv_objects.vif.VIFPortProfileOpenVSwitch(
interface_id="dc065497-3c8d-4f44-8fb4-e1d33c16a536",
datapath_type=None),
datapath_type=None,
create_port=False),
preserve_on_delete=False,
vif_name="nicdc065497-3c",
network=osv_objects.network.Network(
@ -588,6 +590,7 @@ class OSVIFUtilTestCase(test.NoDBTestCase):
model.VIF_DETAILS_OVS_DATAPATH_TYPE:
model.VIF_DETAILS_OVS_DATAPATH_SYSTEM
},
delegate_create=True,
)
actual = os_vif_util.nova_to_osvif_vif(vif)
@ -600,7 +603,8 @@ class OSVIFUtilTestCase(test.NoDBTestCase):
plugin="ovs",
port_profile=osv_objects.vif.VIFPortProfileOpenVSwitch(
interface_id="dc065497-3c8d-4f44-8fb4-e1d33c16a536",
datapath_type=model.VIF_DETAILS_OVS_DATAPATH_SYSTEM),
datapath_type=model.VIF_DETAILS_OVS_DATAPATH_SYSTEM,
create_port=True),
preserve_on_delete=False,
vif_name="nicdc065497-3c",
network=osv_objects.network.Network(


@ -1264,6 +1264,16 @@ class Domain(object):
nic_info['_attached'] = True
self._def['devices']['nics'] += [nic_info]
result = True
else:
# FIXME(sean-k-mooney): We don't currently handle attaching
# or detaching hostdevs but we have tests that assume we do so
# this is an error not an exception. This affects PCI passthrough,
# vGPUs and PF neutron ports.
LOG.error(
"Trying to attach an unsupported device type. "
"The fakelibvirt implementation is incomplete "
"and should be extended to support %s: %s",
xml, self._def['devices'])
return result
@ -1278,17 +1288,45 @@ class Domain(object):
self.attachDevice(xml)
def detachDevice(self, xml):
# TODO(gibi): this should handle nics similarly to attachDevice()
disk_info = _parse_disk_info(etree.fromstring(xml))
attached_disk_info = None
for attached_disk in self._def['devices']['disks']:
if attached_disk['target_dev'] == disk_info.get('target_dev'):
attached_disk_info = attached_disk
break
# detachDevice is a common function used for all devices types
# so we need to handle each separately
if xml.startswith("<disk"):
disk_info = _parse_disk_info(etree.fromstring(xml))
attached_disk_info = None
for attached_disk in self._def['devices']['disks']:
if attached_disk['target_dev'] == disk_info.get('target_dev'):
attached_disk_info = attached_disk
break
if attached_disk_info:
self._def['devices']['disks'].remove(attached_disk_info)
return attached_disk_info is not None
if attached_disk_info:
self._def['devices']['disks'].remove(attached_disk_info)
return attached_disk_info is not None
if xml.startswith("<interface"):
nic_info = _parse_nic_info(etree.fromstring(xml))
attached_nic_info = None
for attached_nic in self._def['devices']['nics']:
if attached_nic['mac'] == nic_info['mac']:
attached_nic_info = attached_nic
break
if attached_nic_info:
self._def['devices']['nics'].remove(attached_nic_info)
return attached_nic_info is not None
# FIXME(sean-k-mooney): We don't currently handle attaching or
# detaching hostdevs but we have tests that assume we do so this is
# an error not an exception. This affects PCI passthrough, vGPUs and
# PF neutron ports
LOG.error(
"Trying to detach an unsupported device type. "
"The fakelibvirt implementation is incomplete "
"and should be extended to support %s: %s",
xml, self._def['devices'])
return False
def detachDeviceFlags(self, xml, flags):
self.detachDevice(xml)
@ -1310,34 +1348,46 @@ class Domain(object):
<target dev='%(target_dev)s' bus='%(target_bus)s'/>
<address type='drive' controller='0' bus='0' unit='0'/>
</disk>''' % dict(source_attr=source_attr, **disk)
nics = ''
for nic in self._def['devices']['nics']:
for func, nic in enumerate(self._def['devices']['nics']):
if func > 7:
# this should never be raised but is just present to highlight
# the limitations of the current fake when writing new tests.
# if you see this raised when adding a new test you will need
# to extend this fake to use both functions and slots.
# the pci function is limited to 3 bits or 0-7.
raise RuntimeError(
'Test attempts to add more than 8 PCI devices. This is '
'not supported by the fake libvirt implementation.')
nic['func'] = func
# this branch covers most interface types with a source
# such as linux bridge interfaces.
if 'source' in nic:
if nic['type'] == 'hostdev':
nics += '''<interface type='%(type)s'>
<mac address='%(mac)s'/>
<source>
<address type='pci' domain='0x0000' bus='0x81' slot='0x00' function='0x01'/>
</source>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>''' % nic # noqa: E501
elif nic['type'] == 'vdpa':
# TODO(stephenfin): In real life, this would actually have
# an '<address>' element, but that requires information
# about the host that we're not passing through yet
nics += '''<interface type='%(type)s'>
<mac address='%(mac)s'/>
<source dev='%(source)s'/>
<model type='virtio'/>
</interface>'''
else:
nics += '''<interface type='%(type)s'>
<mac address='%(mac)s'/>
<source %(type)s='%(source)s'/>
<target dev='tap274487d1-60'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>''' % nic # noqa: E501
nics += '''<interface type='%(type)s'>
<mac address='%(mac)s'/>
<source %(type)s='%(source)s'/>
<target dev='tap274487d1-6%(func)s'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03'
function='0x%(func)s'/>
</interface>''' % nic
elif nic['type'] in ('ethernet',):
# this branch covers kernel ovs interfaces
nics += '''<interface type='%(type)s'>
<mac address='%(mac)s'/>
<target dev='tap274487d1-6%(func)s'/>
</interface>''' % nic
else:
# This branch covers the macvtap vnic-type.
# This is incomplete as the source dev should be unique
# and map to the VF netdev name but due to the mocking in
# the fixture we hard code it.
nics += '''<interface type='%(type)s'>
<mac address='%(mac)s'/>
<source dev='fake_pf_interface_name' mode='passthrough'>
<address type='pci' domain='0x0000' bus='0x81' slot='0x00'
function='0x%(func)s'/>
</source>
</interface>''' % nic
hostdevs = ''
for hostdev in self._def['devices']['hostdevs']:


@ -711,6 +711,7 @@ def _create_test_instance():
'trusted_certs': None,
'resources': None,
'migration_context': None,
'info_cache': None,
}
@ -754,6 +755,9 @@ class LibvirtConnTestCase(test.NoDBTestCase,
self.addCleanup(_p.stop)
self.test_instance = _create_test_instance()
network_info = objects.InstanceInfoCache(
network_info=_fake_network_info(self))
self.test_instance['info_cache'] = network_info
self.test_image_meta = {
"disk_format": "raw",
}
@ -10494,11 +10498,15 @@ class LibvirtConnTestCase(test.NoDBTestCase,
drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
self.assertEqual(drvr._uri(), testuri)
@mock.patch(
'nova.network.neutron.API.supports_port_binding_extension',
new=mock.Mock(return_value=False))
@mock.patch.object(libvirt_driver.LibvirtDriver,
'_create_shared_storage_test_file')
@mock.patch.object(fakelibvirt.Connection, 'compareCPU')
def test_check_can_live_migrate_dest_all_pass_with_block_migration(
self, mock_cpu, mock_test_file):
self, mock_cpu, mock_test_file,
):
instance_ref = objects.Instance(**self.test_instance)
instance_ref.vcpu_model = test_vcpu_model.fake_vcpumodel
drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
@ -10529,11 +10537,15 @@ class LibvirtConnTestCase(test.NoDBTestCase,
'serial_listen_addr': None},
return_value.obj_to_primitive()['nova_object.data'])
@mock.patch(
'nova.network.neutron.API.supports_port_binding_extension',
new=mock.Mock(return_value=False))
@mock.patch.object(libvirt_driver.LibvirtDriver,
'_create_shared_storage_test_file')
@mock.patch.object(fakelibvirt.Connection, 'compareCPU')
def test_check_can_live_migrate_dest_all_pass_with_over_commit(
self, mock_cpu, mock_test_file):
self, mock_cpu, mock_test_file,
):
instance_ref = objects.Instance(**self.test_instance)
instance_ref.vcpu_model = test_vcpu_model.fake_vcpumodel
drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
@ -10565,11 +10577,15 @@ class LibvirtConnTestCase(test.NoDBTestCase,
'serial_listen_addr': None},
return_value.obj_to_primitive()['nova_object.data'])
@mock.patch(
'nova.network.neutron.API.supports_port_binding_extension',
new=mock.Mock(return_value=False))
@mock.patch.object(libvirt_driver.LibvirtDriver,
'_create_shared_storage_test_file')
@mock.patch.object(fakelibvirt.Connection, 'compareCPU')
def test_check_can_live_migrate_dest_all_pass_no_block_migration(
self, mock_cpu, mock_test_file):
self, mock_cpu, mock_test_file,
):
instance_ref = objects.Instance(**self.test_instance)
instance_ref.vcpu_model = test_vcpu_model.fake_vcpumodel
drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
@ -10598,12 +10614,16 @@ class LibvirtConnTestCase(test.NoDBTestCase,
'serial_listen_addr': None},
return_value.obj_to_primitive()['nova_object.data'])
@mock.patch(
'nova.network.neutron.API.supports_port_binding_extension',
new=mock.Mock(return_value=False))
@mock.patch.object(libvirt_driver.LibvirtDriver,
'_create_shared_storage_test_file',
return_value='fake')
@mock.patch.object(fakelibvirt.Connection, 'compareCPU')
def test_check_can_live_migrate_dest_fills_listen_addrs(
self, mock_cpu, mock_test_file):
self, mock_cpu, mock_test_file,
):
# Tests that check_can_live_migrate_destination returns the listen
# addresses required by check_can_live_migrate_source.
self.flags(server_listen='192.0.2.12', group='vnc')
@ -10626,13 +10646,17 @@ class LibvirtConnTestCase(test.NoDBTestCase,
self.assertEqual('203.0.113.56',
str(result.serial_listen_addr))
@mock.patch(
'nova.network.neutron.API.supports_port_binding_extension',
new=mock.Mock(return_value=False))
@mock.patch.object(libvirt_driver.LibvirtDriver,
'_create_shared_storage_test_file',
return_value='fake')
@mock.patch.object(fakelibvirt.Connection, 'compareCPU',
return_value=1)
def test_check_can_live_migrate_dest_ensure_serial_adds_not_set(
self, mock_cpu, mock_test_file):
self, mock_cpu, mock_test_file,
):
self.flags(proxyclient_address='127.0.0.1', group='serial_console')
self.flags(enabled=False, group='serial_console')
instance_ref = objects.Instance(**self.test_instance)
@ -10643,12 +10667,16 @@ class LibvirtConnTestCase(test.NoDBTestCase,
self.context, instance_ref, compute_info, compute_info)
self.assertIsNone(result.serial_listen_addr)
@mock.patch(
'nova.network.neutron.API.supports_port_binding_extension',
new=mock.Mock(return_value=False))
@mock.patch.object(libvirt_driver.LibvirtDriver,
'_create_shared_storage_test_file',
return_value='fake')
@mock.patch.object(libvirt_driver.LibvirtDriver, '_compare_cpu')
def test_check_can_live_migrate_guest_cpu_none_model(
self, mock_cpu, mock_test_file):
self, mock_cpu, mock_test_file,
):
# Tests that when instance.vcpu_model.model is None, the host cpu
# model is used for live migration.
instance_ref = objects.Instance(**self.test_instance)
@ -10672,12 +10700,16 @@ class LibvirtConnTestCase(test.NoDBTestCase,
'serial_listen_addr': None},
result.obj_to_primitive()['nova_object.data'])
@mock.patch(
'nova.network.neutron.API.supports_port_binding_extension',
new=mock.Mock(return_value=False))
@mock.patch.object(libvirt_driver.LibvirtDriver,
'_create_shared_storage_test_file',
return_value='fake')
@mock.patch.object(libvirt_driver.LibvirtDriver, '_compare_cpu')
def test_check_can_live_migrate_dest_numa_lm(
self, mock_cpu, mock_test_file):
self, mock_cpu, mock_test_file,
):
instance_ref = objects.Instance(**self.test_instance)
instance_ref.numa_topology = objects.InstanceNUMATopology(
cells=[objects.InstanceNUMACell()])
@ -10687,12 +10719,16 @@ class LibvirtConnTestCase(test.NoDBTestCase,
self.context, instance_ref, compute_info, compute_info)
self.assertTrue(result.dst_supports_numa_live_migration)
@mock.patch(
'nova.network.neutron.API.supports_port_binding_extension',
new=mock.Mock(return_value=False))
@mock.patch.object(libvirt_driver.LibvirtDriver,
'_create_shared_storage_test_file',
return_value='fake')
@mock.patch.object(libvirt_driver.LibvirtDriver, '_compare_cpu')
def test_check_can_live_migrate_dest_numa_lm_no_instance_numa(
self, mock_cpu, mock_test_file):
self, mock_cpu, mock_test_file,
):
instance_ref = objects.Instance(**self.test_instance)
drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
compute_info = {'cpu_info': 'asdf', 'disk_available_least': 1}
@ -10700,11 +10736,15 @@ class LibvirtConnTestCase(test.NoDBTestCase,
self.context, instance_ref, compute_info, compute_info)
self.assertNotIn('dst_supports_numa_live_migration', result)
@mock.patch(
'nova.network.neutron.API.supports_port_binding_extension',
new=mock.Mock(return_value=False))
@mock.patch.object(libvirt_driver.LibvirtDriver,
'_create_shared_storage_test_file')
@mock.patch.object(fakelibvirt.Connection, 'compareCPU')
def test_check_can_live_migrate_dest_no_instance_cpu_info(
self, mock_cpu, mock_test_file):
self, mock_cpu, mock_test_file,
):
instance_ref = objects.Instance(**self.test_instance)
drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
compute_info = {'cpu_info': jsonutils.dumps({
@ -10737,12 +10777,15 @@ class LibvirtConnTestCase(test.NoDBTestCase,
'serial_listen_addr': None},
return_value.obj_to_primitive()['nova_object.data'])
@mock.patch(
'nova.network.neutron.API.supports_port_binding_extension',
new=mock.Mock(return_value=False))
@mock.patch.object(libvirt_driver.LibvirtDriver,
'_create_shared_storage_test_file')
@mock.patch.object(fakelibvirt.Connection, 'compareCPU')
def test_check_can_live_migrate_dest_file_backed(
self, mock_cpu, mock_test_file):
self, mock_cpu, mock_test_file,
):
self.flags(file_backed_memory=1024, group='libvirt')
instance_ref = objects.Instance(**self.test_instance)


@ -149,7 +149,8 @@ class LibvirtVifTestCase(test.NoDBTestCase):
type=network_model.VIF_TYPE_OVS,
details={'port_filter': True},
devname='tap-xxx-yyy-zzz',
ovs_interfaceid=uuids.ovs)
ovs_interfaceid=uuids.ovs,
delegate_create=True)
vif_ovs_legacy = network_model.VIF(id=uuids.vif,
address='ca:fe:de:ad:be:ef',
@ -406,7 +407,8 @@ class LibvirtVifTestCase(test.NoDBTestCase):
self.os_vif_ovs_prof = osv_objects.vif.VIFPortProfileOpenVSwitch(
interface_id="07bd6cea-fb37-4594-b769-90fc51854ee9",
profile_id="fishfood")
profile_id="fishfood",
create_port=True)
self.os_vif_repr_prof = osv_objects.vif.VIFPortProfileOVSRepresentor(
interface_id="07bd6cea-fb37-4594-b769-90fc51854ee9",
@ -1010,35 +1012,12 @@ class LibvirtVifTestCase(test.NoDBTestCase):
self.vif_iovisor['network']['id'],
self.instance.project_id)])
def _check_ovs_virtualport_driver(self, d, vif, want_iface_id):
xml = self._get_instance_xml(d, vif)
node = self._get_node(xml)
self._assertTypeAndMacEquals(node, "bridge", "source", "bridge",
vif, "br0")
vp = node.find("virtualport")
self.assertEqual(vp.get("type"), "openvswitch")
iface_id_found = False
for p_elem in vp.findall("parameters"):
iface_id = p_elem.get("interfaceid", None)
if iface_id:
self.assertEqual(iface_id, want_iface_id)
iface_id_found = True
self.assertTrue(iface_id_found)
def test_generic_ovs_virtualport_driver(self):
d = vif.LibvirtGenericVIFDriver()
want_iface_id = self.vif_ovs['ovs_interfaceid']
self._check_ovs_virtualport_driver(d,
self.vif_ovs,
want_iface_id)
def test_direct_plug_with_port_filter_cap_no_nova_firewall(self):
d = vif.LibvirtGenericVIFDriver()
br_want = self.vif_midonet['devname']
xml = self._get_instance_xml(d, self.vif_ovs_filter_cap)
node = self._get_node(xml)
self._assertTypeAndMacEquals(node, "bridge", "target", "dev",
self._assertTypeAndMacEquals(node, "ethernet", "target", "dev",
self.vif_ovs_filter_cap, br_want)
def test_ib_hostdev_driver(self):
@ -1498,8 +1477,8 @@ class LibvirtVifTestCase(test.NoDBTestCase):
d = vif.LibvirtGenericVIFDriver()
xml = self._get_instance_xml(d, vif_model)
node = self._get_node(xml)
self._assertXmlEqual(expected_xml, node)
node_xml = etree.tostring(node).decode()
self._assertXmlEqual(expected_xml, node_xml)
def test_config_os_vif_bridge(self):
os_vif_type = self.os_vif_bridge
@ -1559,20 +1538,15 @@ class LibvirtVifTestCase(test.NoDBTestCase):
vif_type = self.vif_agilio_ovs
expected_xml = """
<interface type="bridge">
<interface type="ethernet">
<mac address="22:52:25:62:e2:aa"/>
<model type="virtio"/>
<source bridge="br0"/>
<mtu size="9000"/>
<target dev="nicdc065497-3c"/>
<virtualport type="openvswitch">
<parameters
interfaceid="07bd6cea-fb37-4594-b769-90fc51854ee9"/>
</virtualport>
<bandwidth>
<inbound average="100" peak="200" burst="300"/>
<outbound average="10" peak="20" burst="30"/>
</bandwidth>
<bandwidth>
<inbound average="100" peak="200" burst="300"/>
<outbound average="10" peak="20" burst="30"/>
</bandwidth>
</interface>"""
self._test_config_os_vif(os_vif_type, vif_type, expected_xml)
@ -1612,15 +1586,11 @@ class LibvirtVifTestCase(test.NoDBTestCase):
vif_type = self.vif_ovs
expected_xml = """
<interface type="bridge">
<interface type="ethernet">
<mac address="22:52:25:62:e2:aa"/>
<model type="virtio"/>
<source bridge="br0"/>
<mtu size="9000"/>
<target dev="nicdc065497-3c"/>
<virtualport type="openvswitch">
<parameters interfaceid="07bd6cea-fb37-4594-b769-90fc51854ee9"/>
</virtualport>
<bandwidth>
<inbound average="100" peak="200" burst="300"/>
<outbound average="10" peak="20" burst="30"/>


@ -91,9 +91,11 @@ from nova import exception
from nova.i18n import _
from nova.image import glance
from nova.network import model as network_model
from nova.network import neutron
from nova import objects
from nova.objects import diagnostics as diagnostics_obj
from nova.objects import fields
from nova.objects import migrate_data as migrate_data_obj
from nova.pci import manager as pci_manager
from nova.pci import utils as pci_utils
import nova.privsep.libvirt
@ -456,6 +458,7 @@ class LibvirtDriver(driver.ComputeDriver):
self._volume_api = cinder.API()
self._image_api = glance.API()
self._network_api = neutron.API()
# The default choice for the sysinfo_serial config option is "unique"
# which does not have a special function since the value is just the
@ -9065,6 +9068,18 @@ class LibvirtDriver(driver.ComputeDriver):
if instance.numa_topology:
data.dst_supports_numa_live_migration = True
# NOTE(sean-k-mooney): The migrate_data vifs field is used to signal
# that we are using the multiple port binding workflow so we can only
# populate it if we are using multiple port bindings.
# TODO(stephenfin): Remove once we can do this unconditionally in X or
# later
if self._network_api.supports_port_binding_extension(context):
data.vifs = (
migrate_data_obj.VIFMigrateData.create_skeleton_migrate_vifs(
instance.get_network_info()))
for vif in data.vifs:
vif.supports_os_vif_delegation = True
return data
def post_claim_migrate_data(self, context, instance, migrate_data, claim):


@ -450,10 +450,15 @@ class LibvirtGenericVIFDriver(object):
conf.target_dev = vif.vif_name
def _set_config_VIFOpenVSwitch(self, instance, vif, conf):
conf.net_type = "bridge"
conf.source_dev = vif.bridge_name
conf.target_dev = vif.vif_name
self._set_config_VIFPortProfile(instance, vif, conf)
# if delegating creation to os-vif, create an ethernet-type VIF and let
# os-vif do the actual wiring up
if 'create_port' in vif.port_profile and vif.port_profile.create_port:
self._set_config_VIFGeneric(instance, vif, conf)
else:
conf.net_type = "bridge"
conf.source_dev = vif.bridge_name
conf.target_dev = vif.vif_name
self._set_config_VIFPortProfile(instance, vif, conf)
def _set_config_VIFVHostUser(self, instance, vif, conf):
# TODO(sahid): We should never configure a driver backend for


@ -0,0 +1,28 @@
---
security:
- |
In this release OVS port creation has been delegated to os-vif when the
``noop`` or ``openvswitch`` security group firewall drivers are
enabled in Neutron. Those options, and others that disable the
``hybrid_plug`` mechanism, will now use os-vif instead of libvirt to plug
VIFs into the bridge. By delegating port plugging to os-vif we can use the
``isolate_vif`` config option to ensure VIFs are plugged securely, preventing
guests from accessing other tenants' networks before the neutron OVS agent
can wire up the port. See `bug #1734320`_ for details.
Note that OVN, ODL and other SDN solutions also use
``hybrid_plug=false``, but they are not known to be affected by the security
issue caused by the previous behavior. As such, the ``isolate_vif``
os-vif config option is only used when deploying with ml2/ovs.
fixes:
- |
In this release we delegate port plugging to os-vif for all OVS interface
types. This allows os-vif to create the OVS port before libvirt creates
a tap device during a live migration, thereby preventing the loss of
the MAC learning frames generated by QEMU. This resolves a long-standing
race condition between Libvirt creating the OVS port, Neutron wiring up
the OVS port and QEMU generating RARP packets to populate the vswitch
MAC learning table. As a result, this reduces the interval during a live
migration where packets can be lost. See `bug #1815989`_ for details.
.. _`bug #1734320`: https://bugs.launchpad.net/neutron/+bug/1734320
.. _`bug #1815989`: https://bugs.launchpad.net/neutron/+bug/1815989
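
As a deployment-side illustration (not part of this patch): with ml2/ovs
the isolation behaviour is opt-in through os-vif's OVS plugin options,
which nova-compute reads from its own configuration file, e.g.:

    [os_vif_ovs]
    # tag newly created ports with the dead VLAN (4095) until the
    # neutron OVS agent rewires them; defaults to False
    isolate_vif = True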


@ -54,7 +54,7 @@ oslo.versionedobjects>=1.35.0 # Apache-2.0
os-brick>=4.3.1 # Apache-2.0
os-resource-classes>=0.4.0 # Apache-2.0
os-traits>=2.5.0 # Apache-2.0
os-vif>=1.14.0 # Apache-2.0
os-vif>=1.15.2 # Apache-2.0
os-win>=5.4.0 # Apache-2.0
castellan>=0.16.0 # Apache-2.0
microversion-parse>=0.2.1 # Apache-2.0