hardware: Differentiate between shared and dedicated CPUs

Start making use of the new fields added to the 'NUMACell' object and
store things differently. There are a lot of mechanical changes required
to fix these. The bulk of these fall into one of three categories:

- Migrating 'NUMACell.cpuset' values to 'NUMACell.pcpuset' where
  necessary (hint: if we were testing something to do with an instance
  NUMA topology with pinning, we need to migrate the field)
- Configuring the 'cpu_shared_set' and 'cpu_dedicated_set' config
  options (from the '[compute]' group) and changing calls to
  'nova.virt.hardware.get_vcpu_pin_set' to '_get_vcpu_available' and/or
  '_get_pcpu_available'. This is necessary because the
  '_get_guest_numa_config' function has changed to call the latter
  instead of the former.
- Removing checks for 'NUMACell.cpu_usage' for pinned tests, since this
  no longer counts usage of pinned CPUs (that's handled by the
  'pinned_cpus' field on the same object)

The only serious deviation from this is the
'test_get_guest_config_numa_host_instance_cpu_pinning_realtime' test,
which has to have configuration added to ensure the guest looks like it
has CPU pinning configured. This test was pretty much broken before.

It's useful to understand the lifecycle of the 'NUMATopology' object in
order to understand the upgrade impacts of this change, insofar as it
relates to the libvirt virt driver. We generate a 'NUMATopology' object
on the compute node in the 'LibvirtDriver._get_host_numa_topology'
method and report that as part of the 'ComputeNode' object provided to
the resource tracker from the 'LibvirtDriver.get_available_resource'
method. The 'NUMATopology' object generated by the driver *does not*
include any usage information, meaning the fields 'cpu_usage',
'pinned_cpus' and 'memory_usage' are set to empty values. Instead,
these are calculated by calling the
'hardware.numa_usage_from_instance_numa' function. This happens in two
places: in the resource tracker as part of the
'ResourceTracker._update_usage' method, which is called by the
'ResourceTracker.update_usage' periodic task as well as by other
internal operations each time an instance is created, moved or
destroyed, and in the 'HostState._update_from_compute_node' method,
which is called by the 'HostState.update' method on the scheduler at
startup.

As a result of the above, there isn't a significant upgrade impact for
this and it remains possible to run older Stein-based compute nodes
alongside newer Train-based compute nodes. There are two things that
make this possible. Firstly, at no point in this entire process does a
'NUMATopology' object make its way back to the compute node once it
leaves, by way of a 'ComputeNode' object. That's true for objects in
general and means we don't need to worry about the compute node seeing
these new object fields and not being able to understand them.
Secondly, we have checks in that crucial
'hardware.numa_usage_from_instance_numa' function, which will check if
'pcpuset' is not set (meaning it's a pre-Train compute node) and set
this locally. Because the virt driver doesn't set the usage-related
fields, this is the only field we need to care about.

Part of blueprint cpu-resources

Change-Id: I492803eaacc34c69af073689f9159449557919db
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
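The pre-Train compatibility handling described above can be sketched as follows. This is a simplified stand-in, not the actual Nova code: the helper name `ensure_pcpuset` and the dict-based cell are illustrative assumptions standing in for `hardware.numa_usage_from_instance_numa` and the `NUMACell` object.

```python
# Simplified sketch of the backward-compat check: cells reported by
# pre-Train (Stein) compute nodes have no 'pcpuset', so we default it
# locally before calculating usage, treating every CPU in 'cpuset' as
# potentially dedicated so pinning accounting still works.
def ensure_pcpuset(cell):
    """Default 'pcpuset' for cells reported by pre-Train computes."""
    if cell.get('pcpuset') is None:
        cell['pcpuset'] = set(cell['cpuset'])
    return cell

# A cell from a Stein-era compute node: no 'pcpuset' reported.
old_cell = {'cpuset': {0, 1, 2, 3}, 'pcpuset': None}
ensure_pcpuset(old_cell)

# A Train-era cell already has the field set and is left untouched.
new_cell = {'cpuset': set(), 'pcpuset': {4, 5}}
ensure_pcpuset(new_cell)
```

Because the virt driver never sets the usage fields, this one defaulting step is all that is needed for mixed Stein/Train deployments.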
parent 1ecb141da1
commit 8da70ef95e
@@ -75,17 +75,19 @@ class NUMACell(base.NovaObject):
         return not (self == other)

     @property
-    def free_cpus(self):
-        return self.cpuset - self.pinned_cpus or set()
+    def free_pcpus(self):
+        """Return available dedicated CPUs."""
+        return self.pcpuset - self.pinned_cpus or set()

     @property
     def free_siblings(self):
-        return [sibling_set & self.free_cpus
-                for sibling_set in self.siblings]
+        """Return available dedicated CPUs in their sibling set form."""
+        return [sibling_set & self.free_pcpus for sibling_set in self.siblings]

     @property
-    def avail_cpus(self):
-        return len(self.free_cpus)
+    def avail_pcpus(self):
+        """Return number of available dedicated CPUs."""
+        return len(self.free_pcpus)

     @property
     def avail_memory(self):

@@ -97,23 +99,27 @@ class NUMACell(base.NovaObject):
         return any(len(sibling_set) > 1 for sibling_set in self.siblings)

     def pin_cpus(self, cpus):
-        if cpus - self.cpuset:
+        if cpus - self.pcpuset:
             raise exception.CPUPinningUnknown(requested=list(cpus),
-                                              available=list(self.cpuset))
+                                              available=list(self.pcpuset))

         if self.pinned_cpus & cpus:
+            available = list(self.pcpuset - self.pinned_cpus)
             raise exception.CPUPinningInvalid(requested=list(cpus),
-                                              available=list(self.cpuset -
-                                                             self.pinned_cpus))
+                                              available=available)

         self.pinned_cpus |= cpus

     def unpin_cpus(self, cpus):
-        if cpus - self.cpuset:
+        if cpus - self.pcpuset:
             raise exception.CPUUnpinningUnknown(requested=list(cpus),
-                                                available=list(self.cpuset))
+                                                available=list(self.pcpuset))

         if (self.pinned_cpus & cpus) != cpus:
             raise exception.CPUUnpinningInvalid(requested=list(cpus),
                                                 available=list(
                                                     self.pinned_cpus))

         self.pinned_cpus -= cpus

     def pin_cpus_with_siblings(self, cpus):
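The new pinning semantics, where validation runs against 'pcpuset' (dedicated CPUs) rather than 'cpuset' (shared CPUs), can be illustrated with a minimal stand-in. This is a sketch, not the Nova object model: the `Cell` class and the use of `ValueError` in place of Nova's pinning exceptions are simplifying assumptions.

```python
# Minimal stand-in for NUMACell pinning after this change: only CPUs in
# 'pcpuset' may be pinned, and double-pinning or unpinning a CPU that
# was never pinned is rejected. ValueError stands in for Nova's
# CPUPinning*/CPUUnpinning* exceptions.
class Cell:
    def __init__(self, pcpuset):
        self.pcpuset = set(pcpuset)
        self.pinned_cpus = set()

    @property
    def free_pcpus(self):
        """Return available dedicated CPUs."""
        return self.pcpuset - self.pinned_cpus

    def pin_cpus(self, cpus):
        if cpus - self.pcpuset:
            raise ValueError('unknown CPUs: %s' % sorted(cpus - self.pcpuset))
        if self.pinned_cpus & cpus:
            raise ValueError(
                'already pinned: %s' % sorted(self.pinned_cpus & cpus))
        self.pinned_cpus |= cpus

    def unpin_cpus(self, cpus):
        if cpus - self.pcpuset:
            raise ValueError('unknown CPUs: %s' % sorted(cpus - self.pcpuset))
        if (self.pinned_cpus & cpus) != cpus:
            raise ValueError(
                'not pinned: %s' % sorted(cpus - self.pinned_cpus))
        self.pinned_cpus -= cpus

# Pin two of three dedicated CPUs; shared CPUs never enter the picture.
cell = Cell(pcpuset={2, 3, 4})
cell.pin_cpus({2, 3})
```

Note that usage of dedicated CPUs is tracked entirely through `pinned_cpus`, which is why the tests in this change drop their `cpu_usage` checks for pinned instances.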
@@ -5701,8 +5701,8 @@ class ComputeTestCase(BaseTestCase,
         # are used
         cell1 = objects.NUMACell(
             id=0,
-            cpuset=set([1, 2]),
-            pcpuset=set(),
+            cpuset=set(),
+            pcpuset=set([1, 2]),
             memory=512,
             pagesize=2048,
             cpu_usage=2,

@@ -5715,8 +5715,8 @@ class ComputeTestCase(BaseTestCase,
         # are free (on current host)
         cell2 = objects.NUMACell(
             id=1,
-            cpuset=set([3, 4]),
-            pcpuset=set(),
+            cpuset=set(),
+            pcpuset=set([3, 4]),
             pinned_cpus=set(),
             memory=512,
             pagesize=2048,

@@ -5765,7 +5765,6 @@ class ComputeTestCase(BaseTestCase,
         # after confirming resize all cpus on currect host must be free
         self.assertEqual(2, len(updated_topology.cells))
         for cell in updated_topology.cells:
-            self.assertEqual(0, cell.cpu_usage)
             self.assertEqual(set(), cell.pinned_cpus)

     def _test_resize_with_pci(self, method, expected_pci_addr):
@@ -45,8 +45,8 @@ class _TestNUMACell(object):
     def test_free_cpus(self):
         cell_a = objects.NUMACell(
             id=0,
-            cpuset=set([1, 2]),
-            pcpuset=set(),
+            cpuset=set(),
+            pcpuset=set([1, 2]),
             memory=512,
             cpu_usage=2,
             memory_usage=256,

@@ -55,8 +55,8 @@ class _TestNUMACell(object):
             mempages=[])
         cell_b = objects.NUMACell(
             id=1,
-            cpuset=set([3, 4]),
-            pcpuset=set(),
+            cpuset=set(),
+            pcpuset=set([3, 4]),
             memory=512,
             cpu_usage=1,
             memory_usage=128,

@@ -64,14 +64,14 @@ class _TestNUMACell(object):
             siblings=[set([3]), set([4])],
             mempages=[])

-        self.assertEqual(set([2]), cell_a.free_cpus)
-        self.assertEqual(set([3, 4]), cell_b.free_cpus)
+        self.assertEqual(set([2]), cell_a.free_pcpus)
+        self.assertEqual(set([3, 4]), cell_b.free_pcpus)

     def test_pinning_logic(self):
         numacell = objects.NUMACell(
             id=0,
-            cpuset=set([1, 2, 3, 4]),
-            pcpuset=set(),
+            cpuset=set(),
+            pcpuset=set([1, 2, 3, 4]),
             memory=512,
             cpu_usage=2,
             memory_usage=256,

@@ -79,7 +79,7 @@ class _TestNUMACell(object):
             siblings=[set([1]), set([2]), set([3]), set([4])],
             mempages=[])
         numacell.pin_cpus(set([2, 3]))
-        self.assertEqual(set([4]), numacell.free_cpus)
+        self.assertEqual(set([4]), numacell.free_pcpus)

         expect_msg = exception.CPUPinningUnknown.msg_fmt % {
             'requested': r'\[1, 55\]', 'available': r'\[1, 2, 3, 4\]'}

@@ -99,13 +99,13 @@ class _TestNUMACell(object):
         self.assertRaises(exception.CPUUnpinningInvalid,
                           numacell.unpin_cpus, set([1, 4]))
         numacell.unpin_cpus(set([1, 2, 3]))
-        self.assertEqual(set([1, 2, 3, 4]), numacell.free_cpus)
+        self.assertEqual(set([1, 2, 3, 4]), numacell.free_pcpus)

     def test_pinning_with_siblings(self):
         numacell = objects.NUMACell(
             id=0,
-            cpuset=set([1, 2, 3, 4]),
-            pcpuset=set(),
+            cpuset=set(),
+            pcpuset=set([1, 2, 3, 4]),
             memory=512,
             cpu_usage=2,
             memory_usage=256,

@@ -114,9 +114,9 @@ class _TestNUMACell(object):
             mempages=[])

         numacell.pin_cpus_with_siblings(set([1, 2]))
-        self.assertEqual(set(), numacell.free_cpus)
+        self.assertEqual(set(), numacell.free_pcpus)
         numacell.unpin_cpus_with_siblings(set([1]))
-        self.assertEqual(set([1, 3]), numacell.free_cpus)
+        self.assertEqual(set([1, 3]), numacell.free_pcpus)
         self.assertRaises(exception.CPUUnpinningInvalid,
                           numacell.unpin_cpus_with_siblings,
                           set([3]))

@@ -126,15 +126,15 @@ class _TestNUMACell(object):
         self.assertRaises(exception.CPUUnpinningInvalid,
                           numacell.unpin_cpus_with_siblings,
                           set([3, 4]))
-        self.assertEqual(set([1, 3]), numacell.free_cpus)
+        self.assertEqual(set([1, 3]), numacell.free_pcpus)
         numacell.unpin_cpus_with_siblings(set([4]))
-        self.assertEqual(set([1, 2, 3, 4]), numacell.free_cpus)
+        self.assertEqual(set([1, 2, 3, 4]), numacell.free_pcpus)

     def test_pinning_with_siblings_no_host_siblings(self):
         numacell = objects.NUMACell(
             id=0,
-            cpuset=set([1, 2, 3, 4]),
-            pcpuset=set(),
+            cpuset=set(),
+            pcpuset=set([1, 2, 3, 4]),
             memory=512,
             cpu_usage=0,
             memory_usage=256,
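The sibling-aware behaviour exercised by the tests above can be sketched as a small function: pinning any CPU of a hyperthread sibling set claims the whole set, so free dedicated CPUs shrink by entire sibling groups. This is an illustrative simplification, not the `pin_cpus_with_siblings` implementation itself.

```python
# Sketch of sibling-aware pinning: expand the requested CPUs to cover
# their full thread-sibling sets before pinning, so a guest never
# shares a physical core with another pinned guest.
def pin_with_siblings(pcpuset, siblings, pinned, cpus):
    """Return the new pinned set after pinning 'cpus' plus siblings."""
    expanded = set(cpus)
    for sibling_set in siblings:
        if sibling_set & cpus:
            expanded |= sibling_set
    if expanded - pcpuset:
        raise ValueError('unknown CPUs requested')
    return pinned | expanded

# Two hyperthread pairs: (1, 2) and (3, 4). Pinning CPU 1 also claims
# its sibling, CPU 2.
siblings = [{1, 2}, {3, 4}]
pinned = pin_with_siblings({1, 2, 3, 4}, siblings, set(), {1})
free = {1, 2, 3, 4} - pinned
```

This is why, in the tests above, pinning two CPUs on a host with two sibling pairs leaves no free dedicated CPUs at all.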
@@ -28,8 +28,8 @@ from nova.scheduler import host_manager
 NUMA_TOPOLOGY = objects.NUMATopology(cells=[
     objects.NUMACell(
         id=0,
-        cpuset=set([1, 2]),
-        pcpuset=set(),
+        cpuset=set([0, 1]),
+        pcpuset=set([2, 3]),
         memory=512,
         cpu_usage=0,
         memory_usage=0,

@@ -37,11 +37,11 @@ NUMA_TOPOLOGY = objects.NUMATopology(cells=[
         mempages=[
             objects.NUMAPagesTopology(size_kb=16, total=387184, used=0),
             objects.NUMAPagesTopology(size_kb=2048, total=512, used=0)],
-        siblings=[set([0]), set([1])]),
+        siblings=[set([0]), set([1]), set([2]), set([3])]),
     objects.NUMACell(
         id=1,
-        cpuset=set([3, 4]),
-        pcpuset=set(),
+        cpuset=set([4, 5]),
+        pcpuset=set([6, 7]),
         memory=512,
         cpu_usage=0,
         memory_usage=0,

@@ -49,14 +49,14 @@ NUMA_TOPOLOGY = objects.NUMATopology(cells=[
         mempages=[
             objects.NUMAPagesTopology(size_kb=4, total=1548736, used=0),
             objects.NUMAPagesTopology(size_kb=2048, total=512, used=0)],
-        siblings=[set([2]), set([3])])])
+        siblings=[set([4]), set([5]), set([6]), set([7])])])

 NUMA_TOPOLOGIES_W_HT = [
     objects.NUMATopology(cells=[
         objects.NUMACell(
             id=0,
-            cpuset=set([1, 2, 5, 6]),
-            pcpuset=set(),
+            cpuset=set(),
+            pcpuset=set([1, 2, 5, 6]),
             memory=512,
             cpu_usage=0,
             memory_usage=0,

@@ -65,8 +65,8 @@ NUMA_TOPOLOGIES_W_HT = [
             siblings=[set([1, 5]), set([2, 6])]),
         objects.NUMACell(
             id=1,
-            cpuset=set([3, 4, 7, 8]),
-            pcpuset=set(),
+            cpuset=set(),
+            pcpuset=set([3, 4, 7, 8]),
             memory=512,
             cpu_usage=0,
             memory_usage=0,

@@ -87,8 +87,8 @@ NUMA_TOPOLOGIES_W_HT = [
             siblings=[]),
         objects.NUMACell(
             id=1,
-            cpuset=set([1, 2, 5, 6]),
-            pcpuset=set(),
+            cpuset=set(),
+            pcpuset=set([1, 2, 5, 6]),
             memory=512,
             cpu_usage=0,
             memory_usage=0,

@@ -97,8 +97,8 @@ NUMA_TOPOLOGIES_W_HT = [
             siblings=[set([1, 5]), set([2, 6])]),
         objects.NUMACell(
             id=2,
-            cpuset=set([3, 4, 7, 8]),
-            pcpuset=set(),
+            cpuset=set(),
+            pcpuset=set([3, 4, 7, 8]),
             memory=512,
             cpu_usage=0,
             memory_usage=0,
@ -2757,6 +2757,8 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||
@mock.patch.object(
|
||||
host.Host, "is_cpu_control_policy_capable", return_value=True)
|
||||
def test_get_guest_config_numa_host_instance_fits(self, is_able):
|
||||
self.flags(cpu_shared_set=None, cpu_dedicated_set=None,
|
||||
group='compute')
|
||||
instance_ref = objects.Instance(**self.test_instance)
|
||||
image_meta = objects.ImageMeta.from_dict(self.test_image_meta)
|
||||
flavor = objects.Flavor(memory_mb=1, vcpus=2, root_gb=496,
|
||||
|
@ -2779,7 +2781,10 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||
mock.patch.object(host.Host, 'has_min_version',
|
||||
return_value=True),
|
||||
mock.patch.object(host.Host, "get_capabilities",
|
||||
return_value=caps)):
|
||||
return_value=caps),
|
||||
mock.patch.object(host.Host, 'get_online_cpus',
|
||||
return_value=set([0, 1])),
|
||||
):
|
||||
cfg = drvr._get_guest_config(instance_ref, [],
|
||||
image_meta, disk_info)
|
||||
self.assertIsNone(cfg.cpuset)
|
||||
|
@ -2812,18 +2817,14 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||
with test.nested(
|
||||
mock.patch.object(host.Host, "get_capabilities",
|
||||
return_value=caps),
|
||||
mock.patch.object(
|
||||
hardware, 'get_vcpu_pin_set', return_value=set([3])),
|
||||
mock.patch.object(random, 'choice'),
|
||||
mock.patch.object(drvr, '_has_numa_support',
|
||||
return_value=False)
|
||||
) as (get_host_cap_mock,
|
||||
get_vcpu_pin_set_mock, choice_mock,
|
||||
_has_numa_support_mock):
|
||||
) as (_, choice_mock, _):
|
||||
cfg = drvr._get_guest_config(instance_ref, [],
|
||||
image_meta, disk_info)
|
||||
self.assertFalse(choice_mock.called)
|
||||
self.assertEqual(set([3]), cfg.cpuset)
|
||||
self.assertIsNone(cfg.cpuset)
|
||||
self.assertEqual(0, len(cfg.cputune.vcpupin))
|
||||
self.assertIsNone(cfg.cpu.numa)
|
||||
|
||||
|
@ -3197,6 +3198,9 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||
host.Host, "is_cpu_control_policy_capable", return_value=True)
|
||||
def test_get_guest_config_numa_host_instance_pci_no_numa_info(
|
||||
self, is_able):
|
||||
self.flags(cpu_shared_set='3', cpu_dedicated_set=None,
|
||||
group='compute')
|
||||
|
||||
instance_ref = objects.Instance(**self.test_instance)
|
||||
image_meta = objects.ImageMeta.from_dict(self.test_image_meta)
|
||||
flavor = objects.Flavor(memory_mb=1, vcpus=2, root_gb=496,
|
||||
|
@ -3228,12 +3232,10 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||
with test.nested(
|
||||
mock.patch.object(host.Host, 'has_min_version',
|
||||
return_value=True),
|
||||
mock.patch.object(
|
||||
host.Host, "get_capabilities", return_value=caps),
|
||||
mock.patch.object(
|
||||
hardware, 'get_vcpu_pin_set', return_value=set([3])),
|
||||
mock.patch.object(host.Host, "get_capabilities",
|
||||
return_value=caps),
|
||||
mock.patch.object(host.Host, 'get_online_cpus',
|
||||
return_value=set(range(8))),
|
||||
return_value=set([3])),
|
||||
mock.patch.object(pci_manager, "get_instance_pci_devs",
|
||||
return_value=[pci_device])):
|
||||
cfg = conn._get_guest_config(instance_ref, [],
|
||||
|
@ -3247,6 +3249,8 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||
@mock.patch.object(
|
||||
host.Host, "is_cpu_control_policy_capable", return_value=True)
|
||||
def test_get_guest_config_numa_host_instance_2pci_no_fit(self, is_able):
|
||||
self.flags(cpu_shared_set='3', cpu_dedicated_set=None,
|
||||
group='compute')
|
||||
instance_ref = objects.Instance(**self.test_instance)
|
||||
image_meta = objects.ImageMeta.from_dict(self.test_image_meta)
|
||||
flavor = objects.Flavor(memory_mb=4096, vcpus=4, root_gb=496,
|
||||
|
@ -3279,16 +3283,14 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||
with test.nested(
|
||||
mock.patch.object(
|
||||
host.Host, "get_capabilities", return_value=caps),
|
||||
mock.patch.object(
|
||||
hardware, 'get_vcpu_pin_set', return_value=set([3])),
|
||||
mock.patch.object(host.Host, 'get_online_cpus',
|
||||
return_value=set([3])),
|
||||
mock.patch.object(random, 'choice'),
|
||||
mock.patch.object(pci_manager, "get_instance_pci_devs",
|
||||
return_value=[pci_device, pci_device2]),
|
||||
mock.patch.object(conn, '_has_numa_support',
|
||||
return_value=False)
|
||||
) as (get_host_cap_mock,
|
||||
get_vcpu_pin_set_mock, choice_mock, pci_mock,
|
||||
_has_numa_support_mock):
|
||||
) as (_, _, choice_mock, pci_mock, _):
|
||||
cfg = conn._get_guest_config(instance_ref, [],
|
||||
image_meta, disk_info)
|
||||
self.assertFalse(choice_mock.called)
|
||||
|
@ -3368,6 +3370,9 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||
host.Host, "is_cpu_control_policy_capable", return_value=True)
|
||||
def test_get_guest_config_numa_host_instance_fit_w_cpu_pinset(
|
||||
self, is_able):
|
||||
self.flags(cpu_shared_set='2-3', cpu_dedicated_set=None,
|
||||
group='compute')
|
||||
|
||||
instance_ref = objects.Instance(**self.test_instance)
|
||||
image_meta = objects.ImageMeta.from_dict(self.test_image_meta)
|
||||
flavor = objects.Flavor(memory_mb=1024, vcpus=2, root_gb=496,
|
||||
|
@ -3389,14 +3394,11 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||
with test.nested(
|
||||
mock.patch.object(host.Host, 'has_min_version',
|
||||
return_value=True),
|
||||
mock.patch.object(host.Host, "get_capabilities",
|
||||
mock.patch.object(host.Host, 'get_capabilities',
|
||||
return_value=caps),
|
||||
mock.patch.object(
|
||||
hardware, 'get_vcpu_pin_set', return_value=set([2, 3])),
|
||||
mock.patch.object(host.Host, 'get_online_cpus',
|
||||
return_value=set(range(8)))
|
||||
) as (has_min_version_mock, get_host_cap_mock,
|
||||
get_vcpu_pin_set_mock, get_online_cpus_mock):
|
||||
return_value=set([2, 3])),
|
||||
):
|
||||
cfg = drvr._get_guest_config(instance_ref, [],
|
||||
image_meta, disk_info)
|
||||
# NOTE(ndipanov): we make sure that pin_set was taken into account
|
||||
|
@ -3456,6 +3458,9 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||
@mock.patch.object(
|
||||
host.Host, "is_cpu_control_policy_capable", return_value=True)
|
||||
def test_get_guest_config_numa_host_instance_topo(self, is_able):
|
||||
self.flags(cpu_shared_set='0-5', cpu_dedicated_set=None,
|
||||
group='compute')
|
||||
|
||||
instance_topology = objects.InstanceNUMATopology(
|
||||
cells=[objects.InstanceNUMACell(
|
||||
id=1, cpuset=set([0, 1]), memory=1024, pagesize=None),
|
||||
|
@ -3489,9 +3494,6 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||
return_value=True),
|
||||
mock.patch.object(host.Host, "get_capabilities",
|
||||
return_value=caps),
|
||||
mock.patch.object(
|
||||
hardware, 'get_vcpu_pin_set',
|
||||
return_value=set([2, 3, 4, 5])),
|
||||
mock.patch.object(host.Host, 'get_online_cpus',
|
||||
return_value=set(range(8))),
|
||||
):
|
||||
|
@ -3686,6 +3688,9 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||
self.assertEqual("strict", memnode.mode)
|
||||
|
||||
def test_get_guest_config_numa_host_mempages_shared(self):
|
||||
self.flags(cpu_shared_set='2-5', cpu_dedicated_set=None,
|
||||
group='compute')
|
||||
|
||||
instance_topology = objects.InstanceNUMATopology(
|
||||
cells=[
|
||||
objects.InstanceNUMACell(
|
||||
|
@ -3724,11 +3729,8 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||
return_value=True),
|
||||
mock.patch.object(host.Host, "get_capabilities",
|
||||
return_value=caps),
|
||||
mock.patch.object(
|
||||
hardware, 'get_vcpu_pin_set',
|
||||
return_value=set([2, 3, 4, 5])),
|
||||
mock.patch.object(host.Host, 'get_online_cpus',
|
||||
return_value=set(range(8))),
|
||||
return_value=set([2, 3, 4, 5])),
|
||||
):
|
||||
cfg = drvr._get_guest_config(instance_ref, [],
|
||||
image_meta, disk_info)
|
||||
|
@ -3759,13 +3761,20 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||
self.assertEqual(set([2, 3, 4, 5]), cfg.cputune.emulatorpin.cpuset)
|
||||
|
||||
def test_get_guest_config_numa_host_instance_cpu_pinning_realtime(self):
|
||||
self.flags(cpu_shared_set=None, cpu_dedicated_set='4-7',
|
||||
group='compute')
|
||||
|
||||
instance_topology = objects.InstanceNUMATopology(
|
||||
cells=[
|
||||
objects.InstanceNUMACell(
|
||||
id=2, cpuset=set([0, 1]),
|
||||
cpu_policy=fields.CPUAllocationPolicy.DEDICATED,
|
||||
cpu_pinning={0: 4, 1: 5},
|
||||
memory=1024, pagesize=2048),
|
||||
objects.InstanceNUMACell(
|
||||
id=3, cpuset=set([2, 3]),
|
||||
cpu_policy=fields.CPUAllocationPolicy.DEDICATED,
|
||||
cpu_pinning={2: 6, 3: 7},
|
||||
memory=1024, pagesize=2048)])
|
||||
instance_ref = objects.Instance(**self.test_instance)
|
||||
instance_ref.numa_topology = instance_topology
|
||||
|
@ -3773,6 +3782,7 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||
flavor = objects.Flavor(memory_mb=2048, vcpus=4, root_gb=496,
|
||||
ephemeral_gb=8128, swap=33550336, name='fake',
|
||||
extra_specs={
|
||||
"hw:numa_nodes": "2",
|
||||
"hw:cpu_realtime": "yes",
|
||||
"hw:cpu_policy": "dedicated",
|
||||
"hw:cpu_realtime_mask": "^0-1"
|
||||
|
@ -3801,9 +3811,6 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||
return_value=True),
|
||||
mock.patch.object(host.Host, "get_capabilities",
|
||||
return_value=caps),
|
||||
mock.patch.object(
|
||||
hardware, 'get_vcpu_pin_set',
|
||||
return_value=set([4, 5, 6, 7])),
|
||||
mock.patch.object(host.Host, 'get_online_cpus',
|
||||
return_value=set(range(8))),
|
||||
):
|
||||
|
@ -3835,12 +3842,10 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||
self.assertEqual(1, len(cfg.cputune.vcpusched))
|
||||
self.assertEqual("fifo", cfg.cputune.vcpusched[0].scheduler)
|
||||
|
||||
# Ensure vCPUs 0-1 are pinned on host CPUs 4-5 and 2-3 are
|
||||
# set on host CPUs 6-7 according the realtime mask ^0-1
|
||||
self.assertEqual(set([4, 5]), cfg.cputune.vcpupin[0].cpuset)
|
||||
self.assertEqual(set([4, 5]), cfg.cputune.vcpupin[1].cpuset)
|
||||
self.assertEqual(set([6, 7]), cfg.cputune.vcpupin[2].cpuset)
|
||||
self.assertEqual(set([6, 7]), cfg.cputune.vcpupin[3].cpuset)
|
||||
self.assertEqual(set([4]), cfg.cputune.vcpupin[0].cpuset)
|
||||
self.assertEqual(set([5]), cfg.cputune.vcpupin[1].cpuset)
|
||||
self.assertEqual(set([6]), cfg.cputune.vcpupin[2].cpuset)
|
||||
self.assertEqual(set([7]), cfg.cputune.vcpupin[3].cpuset)
|
||||
|
||||
# We ensure that emulator threads are pinned on host CPUs
|
||||
# 4-5 which are "normal" vCPUs
|
||||
|
@ -3851,6 +3856,9 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||
self.assertEqual(set([2, 3]), cfg.cputune.vcpusched[0].vcpus)
|
||||
|
||||
def test_get_guest_config_numa_host_instance_isolated_emulthreads(self):
|
||||
self.flags(cpu_shared_set=None, cpu_dedicated_set='4-8',
|
||||
group='compute')
|
||||
|
||||
instance_topology = objects.InstanceNUMATopology(
|
||||
emulator_threads_policy=(
|
||||
fields.CPUEmulatorThreadsPolicy.ISOLATE),
|
||||
|
@ -3889,9 +3897,6 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||
return_value=True),
|
||||
mock.patch.object(host.Host, "get_capabilities",
|
||||
return_value=caps),
|
||||
mock.patch.object(
|
||||
hardware, 'get_vcpu_pin_set',
|
||||
return_value=set([4, 5, 6, 7, 8])),
|
||||
mock.patch.object(host.Host, 'get_online_cpus',
|
||||
return_value=set(range(10))),
|
||||
):
|
||||
|
@ -3906,7 +3911,9 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||
|
||||
def test_get_guest_config_numa_host_instance_shared_emulthreads_err(
|
||||
self):
|
||||
self.flags(cpu_shared_set="48-50", group="compute")
|
||||
self.flags(cpu_shared_set='48-50', cpu_dedicated_set='4-8',
|
||||
group='compute')
|
||||
|
||||
instance_topology = objects.InstanceNUMATopology(
|
||||
emulator_threads_policy=(
|
||||
fields.CPUEmulatorThreadsPolicy.SHARE),
|
||||
|
@ -3945,9 +3952,6 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||
return_value=True),
|
||||
mock.patch.object(host.Host, "get_capabilities",
|
||||
return_value=caps),
|
||||
mock.patch.object(
|
||||
hardware, 'get_vcpu_pin_set',
|
||||
return_value=set([4, 5, 6, 7, 8])),
|
||||
mock.patch.object(host.Host, 'get_online_cpus',
|
||||
return_value=set(range(10))),
|
||||
):
|
||||
|
@ -3957,7 +3961,8 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||
|
||||
def test_get_guest_config_numa_host_instance_shared_emulator_threads(
|
||||
self):
|
||||
self.flags(cpu_shared_set="48-50", group="compute")
|
||||
self.flags(cpu_shared_set='0,1', cpu_dedicated_set='2-7',
|
||||
group='compute')
|
||||
instance_topology = objects.InstanceNUMATopology(
|
||||
emulator_threads_policy=(
|
||||
fields.CPUEmulatorThreadsPolicy.SHARE),
|
||||
|
@ -3966,13 +3971,13 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||
id=0, cpuset=set([0, 1]),
|
||||
memory=1024, pagesize=2048,
|
||||
cpu_policy=fields.CPUAllocationPolicy.DEDICATED,
|
||||
cpu_pinning={0: 4, 1: 5},
|
||||
cpu_pinning={0: 2, 1: 3},
|
||||
cpuset_reserved=set([6])),
|
||||
objects.InstanceNUMACell(
|
||||
id=1, cpuset=set([2, 3]),
|
||||
memory=1024, pagesize=2048,
|
||||
cpu_policy=fields.CPUAllocationPolicy.DEDICATED,
|
||||
cpu_pinning={2: 7, 3: 8})])
|
||||
cpu_pinning={2: 4, 3: 5})])
|
||||
|
||||
instance_ref = objects.Instance(**self.test_instance)
|
||||
instance_ref.numa_topology = instance_topology
|
||||
|
@ -3996,23 +4001,18 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||
return_value=True),
|
||||
mock.patch.object(host.Host, "get_capabilities",
|
||||
return_value=caps),
|
||||
mock.patch.object(
|
||||
hardware, 'get_vcpu_pin_set',
|
||||
return_value=set([4, 5, 6, 7, 8])),
|
||||
mock.patch.object(host.Host, 'get_online_cpus',
|
||||
return_value=set(list(range(10)) +
|
||||
[48, 50])),
|
||||
return_value=set(range(8))),
|
||||
):
|
||||
cfg = drvr._get_guest_config(instance_ref, [],
|
||||
image_meta, disk_info)
|
||||
|
||||
# cpu_shared_set is configured with [48, 49, 50] but only
|
||||
# [48, 50] are online.
|
||||
self.assertEqual(set([48, 50]), cfg.cputune.emulatorpin.cpuset)
|
||||
self.assertEqual(set([4]), cfg.cputune.vcpupin[0].cpuset)
|
||||
self.assertEqual(set([5]), cfg.cputune.vcpupin[1].cpuset)
|
||||
self.assertEqual(set([7]), cfg.cputune.vcpupin[2].cpuset)
|
||||
self.assertEqual(set([8]), cfg.cputune.vcpupin[3].cpuset)
|
||||
# emulator threads should be mapped to cores from 'cpu_shared_set'
|
||||
self.assertEqual(set([0, 1]), cfg.cputune.emulatorpin.cpuset)
|
||||
self.assertEqual(set([2]), cfg.cputune.vcpupin[0].cpuset)
|
||||
self.assertEqual(set([3]), cfg.cputune.vcpupin[1].cpuset)
|
||||
self.assertEqual(set([4]), cfg.cputune.vcpupin[2].cpuset)
|
||||
self.assertEqual(set([5]), cfg.cputune.vcpupin[3].cpuset)
|
||||
|
||||
def test_get_cpu_numa_config_from_instance(self):
|
||||
topology = objects.InstanceNUMATopology(cells=[
|
||||
|
@ -4073,7 +4073,7 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||
with test.nested(
|
||||
mock.patch.object(drvr, "_get_host_numa_topology",
|
||||
return_value=host_topology),
|
||||
mock.patch.object(hardware, 'get_vcpu_pin_set',
|
||||
mock.patch.object(hardware, "get_cpu_shared_set",
|
||||
return_value=[1, 2, 3, 4, 5, 6])):
|
||||
guest_numa_config = drvr._get_guest_numa_config(
|
||||
instance_topology, flavor={}, image_meta={})
|
||||
|
@ -7207,11 +7207,15 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||
'EPYC',
|
||||
'EPYC-IBPB']
|
||||
|
||||
def fake_getCPUMap():
|
||||
return (2, [True, True], 2)
|
||||
|
||||
# Make sure the host arch is mocked as x86_64
|
||||
self.create_fake_libvirt_mock(getCapabilities=fake_getCapabilities,
|
||||
baselineCPU=fake_baselineCPU,
|
||||
getCPUModelNames=fake_getCPUModelNames,
|
||||
getVersion=lambda: 1005001)
|
||||
getVersion=lambda: 1005001,
|
||||
getCPUMap=fake_getCPUMap)
|
||||
|
||||
drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
|
||||
instance_ref = objects.Instance(**self.test_instance)
|
||||
|
@ -13805,6 +13809,9 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||
'EPYC',
|
||||
'EPYC-IBPB']
|
||||
|
||||
def fake_getCPUMap():
|
||||
return (2, [True, True], 2)
|
||||
|
||||
# _fake_network_info must be called before create_fake_libvirt_mock(),
|
||||
# as _fake_network_info calls importutils.import_class() and
|
||||
# create_fake_libvirt_mock() mocks importutils.import_class().
|
||||
|
@ -13813,7 +13820,8 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||
getCapabilities=fake_getCapabilities,
|
||||
getCPUModelNames=fake_getCPUModelNames,
|
||||
getVersion=lambda: 1005001,
|
||||
baselineCPU=fake_baselineCPU)
|
||||
baselineCPU=fake_baselineCPU,
|
||||
getCPUMap=fake_getCPUMap)
|
||||
|
||||
instance = objects.Instance(**self.test_instance)
|
||||
instance.image_ref = uuids.image_ref
|
||||
|
@ -16604,9 +16612,62 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||
mock.call('0000:04:11.7', False)
|
||||
])
|
||||
|
||||
# TODO(stephenfin): This only has one caller. Flatten it and remove the
|
||||
# 'mempages=False' branches or add the missing test
|
||||
def _test_get_host_numa_topology(self, mempages):
|
||||
@mock.patch.object(host.Host, 'has_min_version',
|
||||
new=mock.Mock(return_value=True))
|
||||
def _test_get_host_numa_topology(self):
|
||||
nodes = 4
|
||||
sockets = 1
|
||||
cores = 1
|
||||
threads = 2
|
||||
total_cores = nodes * sockets * cores * threads
|
||||
|
||||
caps = vconfig.LibvirtConfigCaps()
|
||||
caps.host = vconfig.LibvirtConfigCapsHost()
|
||||
caps.host.cpu = vconfig.LibvirtConfigCPU()
|
||||
caps.host.cpu.arch = fields.Architecture.X86_64
|
||||
caps.host.topology = fakelibvirt.NUMATopology(
|
||||
cpu_nodes=nodes, cpu_sockets=sockets, cpu_cores=cores,
|
||||
cpu_threads=threads)
|
||||
for i, cell in enumerate(caps.host.topology.cells):
|
||||
cell.mempages = fakelibvirt.create_mempages(
|
||||
[(4, 1024 * i), (2048, i)])
|
||||
|
||||
drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
|
||||
|
||||
with test.nested(
|
||||
mock.patch.object(host.Host, 'get_capabilities',
|
||||
return_value=caps),
|
||||
mock.patch.object(host.Host, 'get_online_cpus',
|
||||
return_value=set(range(total_cores))),
|
||||
):
|
||||
got_topo = drvr._get_host_numa_topology()
|
||||
|
||||
# there should be varying amounts of mempages for each cell
|
||||
self.assertEqual(4, got_topo.cells[0].mempages[0].size_kb)
|
||||
self.assertEqual(0, got_topo.cells[0].mempages[0].total)
|
||||
self.assertEqual(2048, got_topo.cells[0].mempages[1].size_kb)
|
||||
self.assertEqual(0, got_topo.cells[0].mempages[1].total)
|
||||
self.assertEqual(4, got_topo.cells[1].mempages[0].size_kb)
|
||||
self.assertEqual(1024, got_topo.cells[1].mempages[0].total)
|
||||
self.assertEqual(2048, got_topo.cells[1].mempages[1].size_kb)
|
||||
self.assertEqual(1, got_topo.cells[1].mempages[1].total)
|
||||
|
||||
# none of the topologies should have pinned CPUs yet
|
||||
self.assertEqual(set([]), got_topo.cells[0].pinned_cpus)
|
||||
self.assertEqual(set([]), got_topo.cells[1].pinned_cpus)
|
||||
self.assertEqual(set([]), got_topo.cells[2].pinned_cpus)
|
||||
self.assertEqual(set([]), got_topo.cells[3].pinned_cpus)
|
||||
|
||||
# return to caller for further checks
|
||||
return got_topo
|
||||
|
||||
def test_get_host_numa_topology(self):
|
||||
"""Check that the host NUMA topology is generated correctly for a
|
||||
fairly complex configuration.
|
||||
"""
|
||||
self.flags(cpu_shared_set='0-1', cpu_dedicated_set='2-6',
|
||||
group='compute')
|
||||
self.flags(vcpu_pin_set=None)
|
||||
self.flags(physnets=['foo', 'bar', 'baz'], group='neutron')
|
||||
# we need to call the below again to ensure the updated 'physnets'
|
||||
# value is read and the new groups created
|
||||
|
@ -16616,68 +16677,122 @@ class LibvirtConnTestCase(test.NoDBTestCase,
|
|||
self.flags(numa_nodes=[3], group='neutron_physnet_bar')
self.flags(numa_nodes=[1, 2, 3], group='neutron_physnet_baz')

caps = vconfig.LibvirtConfigCaps()
caps.host = vconfig.LibvirtConfigCapsHost()
caps.host.cpu = vconfig.LibvirtConfigCPU()
caps.host.cpu.arch = fields.Architecture.X86_64
caps.host.topology = fakelibvirt.NUMATopology()
if mempages:
for i, cell in enumerate(caps.host.topology.cells):
cell.mempages = fakelibvirt.create_mempages(
[(4, 1024 * i), (2048, i)])
got_topo = self._test_get_host_numa_topology()

drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
# only cores 0 and 1 are configured as shared using the
# 'cpu_shared_set' config option
self.assertEqual(set([0, 1]), got_topo.cells[0].cpuset)
self.assertEqual(set(), got_topo.cells[0].pcpuset)
self.assertEqual(set(), got_topo.cells[1].cpuset)
self.assertEqual(set([2, 3]), got_topo.cells[1].pcpuset)
self.assertEqual(set(), got_topo.cells[2].cpuset)
self.assertEqual(set([4, 5]), got_topo.cells[2].pcpuset)
self.assertEqual(set(), got_topo.cells[3].cpuset)
self.assertEqual(set([6]), got_topo.cells[3].pcpuset)

with test.nested(
mock.patch.object(host.Host, "get_capabilities",
return_value=caps),
mock.patch.object(
hardware, 'get_vcpu_pin_set',
return_value=set([0, 1, 3, 4, 5])),
mock.patch.object(host.Host, 'get_online_cpus',
return_value=set([0, 1, 2, 3, 6])),
):
got_topo = drvr._get_host_numa_topology()
# all cells except the last one should have siblings
self.assertEqual([set([0, 1])], got_topo.cells[0].siblings)
self.assertEqual([set([2, 3])], got_topo.cells[1].siblings)
self.assertEqual([set([4, 5])], got_topo.cells[2].siblings)
self.assertEqual([set([6])], got_topo.cells[3].siblings)

if mempages:
# cell 0
self.assertEqual(4, got_topo.cells[0].mempages[0].size_kb)
self.assertEqual(0, got_topo.cells[0].mempages[0].total)
self.assertEqual(2048, got_topo.cells[0].mempages[1].size_kb)
self.assertEqual(0, got_topo.cells[0].mempages[1].total)
# cell 1
self.assertEqual(4, got_topo.cells[1].mempages[0].size_kb)
self.assertEqual(1024, got_topo.cells[1].mempages[0].total)
self.assertEqual(2048, got_topo.cells[1].mempages[1].size_kb)
self.assertEqual(1, got_topo.cells[1].mempages[1].total)
else:
self.assertEqual([], got_topo.cells[0].mempages)
self.assertEqual([], got_topo.cells[1].mempages)
self.assertEqual(set(),
got_topo.cells[0].network_metadata.physnets)
self.assertEqual(set(['foo', 'baz']),
got_topo.cells[1].network_metadata.physnets)
self.assertEqual(set(['baz']),
got_topo.cells[2].network_metadata.physnets)
self.assertEqual(set(['bar', 'baz']),
got_topo.cells[3].network_metadata.physnets)

self.assertEqual(set([]), got_topo.cells[0].pinned_cpus)
self.assertEqual(set([]), got_topo.cells[1].pinned_cpus)
self.assertEqual(set([]), got_topo.cells[2].pinned_cpus)
self.assertEqual(set([]), got_topo.cells[3].pinned_cpus)
self.assertEqual([set([0, 1])], got_topo.cells[0].siblings)
self.assertEqual([set([3])], got_topo.cells[1].siblings)
self.assertTrue(got_topo.cells[0].network_metadata.tunneled)
self.assertFalse(got_topo.cells[1].network_metadata.tunneled)
self.assertTrue(got_topo.cells[2].network_metadata.tunneled)
self.assertFalse(got_topo.cells[3].network_metadata.tunneled)

self.assertEqual(set(),
got_topo.cells[0].network_metadata.physnets)
self.assertEqual(set(['foo', 'baz']),
got_topo.cells[1].network_metadata.physnets)
self.assertEqual(set(['baz']),
got_topo.cells[2].network_metadata.physnets)
self.assertEqual(set(['bar', 'baz']),
got_topo.cells[3].network_metadata.physnets)
def test_get_host_numa_topology__vcpu_pin_set_fallback(self):
"""Check that the host NUMA topology will fall back to using
'vcpu_pin_set' if 'cpu_dedicated_set' is not defined.
"""
self.flags(cpu_shared_set='0-1', cpu_dedicated_set=None,
group='compute')
self.flags(vcpu_pin_set='2-6')

self.assertTrue(got_topo.cells[0].network_metadata.tunneled)
self.assertFalse(got_topo.cells[1].network_metadata.tunneled)
self.assertTrue(got_topo.cells[2].network_metadata.tunneled)
self.assertFalse(got_topo.cells[3].network_metadata.tunneled)
got_topo = self._test_get_host_numa_topology()

@mock.patch.object(host.Host, 'has_min_version', return_value=True)
def test_get_host_numa_topology(self, mock_version):
self._test_get_host_numa_topology(mempages=True)
# cores 0 and 1 are configured as shared using the 'cpu_shared_set'
# config option but because 'vcpu_pin_set' is configured this
# configuration is ignored. All the cores listed in 'vcpu_pin_set' are
# dual reported for upgrade reasons
self.assertEqual(set(), got_topo.cells[0].cpuset)
self.assertEqual(set(), got_topo.cells[0].pcpuset)
self.assertEqual(set([2, 3]), got_topo.cells[1].cpuset)
self.assertEqual(set([2, 3]), got_topo.cells[1].pcpuset)
self.assertEqual(set([4, 5]), got_topo.cells[2].cpuset)
self.assertEqual(set([4, 5]), got_topo.cells[2].pcpuset)
self.assertEqual(set([6]), got_topo.cells[3].cpuset)
self.assertEqual(set([6]), got_topo.cells[3].pcpuset)

# all cells except the first and last one should have siblings
self.assertEqual([], got_topo.cells[0].siblings)
self.assertEqual([set([2, 3])], got_topo.cells[1].siblings)
self.assertEqual([set([4, 5])], got_topo.cells[2].siblings)
self.assertEqual([set([6])], got_topo.cells[3].siblings)

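The test comments above describe three regimes: an explicit shared/dedicated split, the legacy `vcpu_pin_set` fallback with dual reporting, and dual reporting of everything when nothing is configured. The decision logic those assertions exercise can be sketched roughly as follows; this is a simplified illustration, and `split_cell_cpus` is a hypothetical helper, not the driver's actual function:

```python
def split_cell_cpus(cell_cpus, shared_set, dedicated_set, pin_set):
    """Return (cpuset, pcpuset) for one NUMA cell's online CPUs."""
    if dedicated_set is not None:
        # New-style config: explicit split between shared and dedicated CPUs
        return cell_cpus & shared_set, cell_cpus & dedicated_set
    if pin_set is not None:
        # Legacy 'vcpu_pin_set': 'cpu_shared_set' is ignored and the pinnable
        # CPUs are dual-reported as both shared and dedicated for upgrade
        # reasons
        return cell_cpus & pin_set, cell_cpus & pin_set
    if shared_set is not None:
        # Only 'cpu_shared_set': everything in it is shared, nothing dedicated
        return cell_cpus & shared_set, set()
    # No CPU configuration at all: dual-report every online CPU
    return set(cell_cpus), set(cell_cpus)


# cell with cores {0, 1}, cpu_shared_set='0-1', cpu_dedicated_set='2-6'
print(split_cell_cpus({0, 1}, {0, 1}, set(range(2, 7)), None))
```

Run against the configurations used in the surrounding tests, this reproduces the expected per-cell `cpuset`/`pcpuset` pairs.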
def test_get_host_numa_topology__no_cpu_configuration(self):
"""Check that the host NUMA topology is generated correctly when
none of 'cpu_shared_set', 'cpu_dedicated_set' or 'vcpu_pin_set'
is defined.
"""
self.flags(cpu_shared_set=None, cpu_dedicated_set=None,
group='compute')
self.flags(vcpu_pin_set=None)

got_topo = self._test_get_host_numa_topology()

# there's no CPU configuration so every core is dual-reported for
# upgrade reasons
self.assertEqual(set([0, 1]), got_topo.cells[0].cpuset)
self.assertEqual(set([0, 1]), got_topo.cells[0].pcpuset)
self.assertEqual(set([2, 3]), got_topo.cells[1].cpuset)
self.assertEqual(set([2, 3]), got_topo.cells[1].pcpuset)
self.assertEqual(set([4, 5]), got_topo.cells[2].cpuset)
self.assertEqual(set([4, 5]), got_topo.cells[2].pcpuset)
self.assertEqual(set([6, 7]), got_topo.cells[3].cpuset)
self.assertEqual(set([6, 7]), got_topo.cells[3].pcpuset)

# all cells should have siblings
self.assertEqual([set([0, 1])], got_topo.cells[0].siblings)
self.assertEqual([set([2, 3])], got_topo.cells[1].siblings)
self.assertEqual([set([4, 5])], got_topo.cells[2].siblings)
self.assertEqual([set([6, 7])], got_topo.cells[3].siblings)

def test_get_host_numa_topology__only_shared_cpus(self):
"""Check that the host NUMA topology does not use 'cpu_shared_set' if
'cpu_dedicated_set' is not defined.
"""
self.flags(cpu_shared_set='0-6', cpu_dedicated_set=None,
group='compute')
self.flags(vcpu_pin_set=None)

got_topo = self._test_get_host_numa_topology()

# only cores 0 and 1 are configured as shared using the
# 'cpu_shared_set' config option, but the rest are dual reported
# for upgrade reasons
self.assertEqual(set([0, 1]), got_topo.cells[0].cpuset)
self.assertEqual(set(), got_topo.cells[0].pcpuset)
self.assertEqual(set([2, 3]), got_topo.cells[1].cpuset)
self.assertEqual(set([]), got_topo.cells[1].pcpuset)
self.assertEqual(set([4, 5]), got_topo.cells[2].cpuset)
self.assertEqual(set([]), got_topo.cells[2].pcpuset)
self.assertEqual(set([6]), got_topo.cells[3].cpuset)
self.assertEqual(set([]), got_topo.cells[3].pcpuset)

# all cells except the last one should have siblings
self.assertEqual([set([0, 1])], got_topo.cells[0].siblings)
self.assertEqual([set([2, 3])], got_topo.cells[1].siblings)
self.assertEqual([set([4, 5])], got_topo.cells[2].siblings)
self.assertEqual([set([6])], got_topo.cells[3].siblings)

def test_get_host_numa_topology_empty(self):
caps = vconfig.LibvirtConfigCaps()

@@ -16713,6 +16828,8 @@ class LibvirtConnTestCase(test.NoDBTestCase,
@mock.patch.object(host.Host, 'has_min_version', return_value=True)
def test_get_host_numa_topology_missing_network_metadata(self,
mock_version):
self.flags(cpu_shared_set='0-5', cpu_dedicated_set=None,
group='compute')
self.flags(physnets=['bar'], group='neutron')
# we need to call the below again to ensure the updated 'physnets'
# value is read and the new groups created

@@ -16731,10 +16848,8 @@ class LibvirtConnTestCase(test.NoDBTestCase,
with test.nested(
mock.patch.object(host.Host, "get_capabilities",
return_value=caps),
mock.patch.object(hardware, 'get_vcpu_pin_set',
return_value=set([0, 1, 3, 4, 5])),
mock.patch.object(host.Host, 'get_online_cpus',
return_value=set([0, 1, 2, 3, 6])),
return_value=set(range(6)))
):
self.assertRaisesRegex(
exception.InvalidNetworkNUMAAffinity,

@@ -16746,6 +16861,8 @@ class LibvirtConnTestCase(test.NoDBTestCase,
@mock.patch.object(host.Host, 'has_min_version', return_value=True)
def _test_get_host_numa_topology_invalid_network_affinity(self,
group_name, mock_version):
self.flags(cpu_shared_set='0-5', cpu_dedicated_set=None,
group='compute')
self.flags(physnets=['foo', 'bar'], group='neutron')
# we need to call the below again to ensure the updated 'physnets'
# value is read and the new groups created

@@ -16769,10 +16886,8 @@ class LibvirtConnTestCase(test.NoDBTestCase,
with test.nested(
mock.patch.object(host.Host, "get_capabilities",
return_value=caps),
mock.patch.object(hardware, 'get_vcpu_pin_set',
return_value=set([0, 1, 3, 4, 5])),
mock.patch.object(host.Host, 'get_online_cpus',
return_value=set([0, 1, 2, 3, 6])),
return_value=set(range(6)))
):
self.assertRaisesRegex(
exception.InvalidNetworkNUMAAffinity,


@@ -2400,8 +2400,8 @@ class CPUPinningCellTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
def test_get_pinning_inst_too_large_cpu(self):
host_pin = objects.NUMACell(
id=0,
cpuset=set([0, 1, 2]),
pcpuset=set(),
cpuset=set(),
pcpuset=set([0, 1, 2]),
memory=2048,
memory_usage=0,
pinned_cpus=set(),

@@ -2417,8 +2417,8 @@ class CPUPinningCellTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
def test_get_pinning_inst_too_large_mem(self):
host_pin = objects.NUMACell(
id=0,
cpuset=set([0, 1, 2]),
pcpuset=set(),
cpuset=set(),
pcpuset=set([0, 1, 2]),
memory=2048,
memory_usage=1024,
pinned_cpus=set(),

@@ -2433,8 +2433,8 @@ class CPUPinningCellTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
def test_get_pinning_inst_not_avail(self):
host_pin = objects.NUMACell(
id=0,
cpuset=set([0, 1, 2, 3]),
pcpuset=set(),
cpuset=set(),
pcpuset=set([0, 1, 2, 3]),
memory=2048,
memory_usage=0,
pinned_cpus=set([0]),

@@ -2450,8 +2450,8 @@ class CPUPinningCellTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
def test_get_pinning_no_sibling_fits_empty(self):
host_pin = objects.NUMACell(
id=0,
cpuset=set([0, 1, 2]),
pcpuset=set(),
cpuset=set(),
pcpuset=set([0, 1, 2]),
memory=2048,
memory_usage=0,
pinned_cpus=set(),

@@ -2470,8 +2470,8 @@ class CPUPinningCellTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
def test_get_pinning_no_sibling_fits_w_usage(self):
host_pin = objects.NUMACell(
id=0,
cpuset=set([0, 1, 2, 3]),
pcpuset=set(),
cpuset=set(),
pcpuset=set([0, 1, 2, 3]),
memory=2048,
memory_usage=0,
pinned_cpus=set([1]),

@@ -2488,8 +2488,8 @@ class CPUPinningCellTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
def test_get_pinning_instance_siblings_fits(self):
host_pin = objects.NUMACell(
id=0,
cpuset=set([0, 1, 2, 3]),
pcpuset=set(),
cpuset=set(),
pcpuset=set([0, 1, 2, 3]),
memory=2048,
memory_usage=0,
pinned_cpus=set(),

@@ -2509,8 +2509,8 @@ class CPUPinningCellTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
def test_get_pinning_instance_siblings_host_siblings_fits_empty(self):
host_pin = objects.NUMACell(
id=0,
cpuset=set([0, 1, 2, 3]),
pcpuset=set(),
cpuset=set(),
pcpuset=set([0, 1, 2, 3]),
memory=2048,
memory_usage=0,
siblings=[set([0, 1]), set([2, 3])],

@@ -2530,8 +2530,8 @@ class CPUPinningCellTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
def test_get_pinning_instance_siblings_host_siblings_fits_empty_2(self):
host_pin = objects.NUMACell(
id=0,
cpuset=set([0, 1, 2, 3, 4, 5, 6, 7]),
pcpuset=set(),
cpuset=set(),
pcpuset=set([0, 1, 2, 3, 4, 5, 6, 7]),
memory=4096,
memory_usage=0,
pinned_cpus=set(),

@@ -2551,8 +2551,8 @@ class CPUPinningCellTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
def test_get_pinning_instance_siblings_host_siblings_fits_w_usage(self):
host_pin = objects.NUMACell(
id=0,
cpuset=set([0, 1, 2, 3, 4, 5, 6, 7]),
pcpuset=set(),
cpuset=set(),
pcpuset=set([0, 1, 2, 3, 4, 5, 6, 7]),
memory=4096,
memory_usage=0,
pinned_cpus=set([1, 2, 5, 6]),

@@ -2571,8 +2571,8 @@ class CPUPinningCellTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
def test_get_pinning_host_siblings_fit_single_core(self):
host_pin = objects.NUMACell(
id=0,
cpuset=set([0, 1, 2, 3, 4, 5, 6, 7]),
pcpuset=set(),
cpuset=set(),
pcpuset=set([0, 1, 2, 3, 4, 5, 6, 7]),
memory=4096,
memory_usage=0,
pinned_cpus=set(),

@@ -2591,8 +2591,8 @@ class CPUPinningCellTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
def test_get_pinning_host_siblings_fit(self):
host_pin = objects.NUMACell(
id=0,
cpuset=set([0, 1, 2, 3]),
pcpuset=set(),
cpuset=set(),
pcpuset=set([0, 1, 2, 3]),
memory=4096,
memory_usage=0,
pinned_cpus=set(),

@@ -2610,8 +2610,8 @@ class CPUPinningCellTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
def test_get_pinning_require_policy_no_siblings(self):
host_pin = objects.NUMACell(
id=0,
cpuset=set(range(0, 8)),
pcpuset=set(),
cpuset=set(),
pcpuset=set(range(0, 8)),
memory=4096,
memory_usage=0,
pinned_cpus=set(),

@@ -2628,8 +2628,8 @@ class CPUPinningCellTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
def test_get_pinning_require_policy_too_few_siblings(self):
host_pin = objects.NUMACell(
id=0,
cpuset=set([0, 1, 2, 3, 4, 5, 6, 7]),
pcpuset=set(),
cpuset=set(),
pcpuset=set([0, 1, 2, 3, 4, 5, 6, 7]),
memory=4096,
memory_usage=0,
pinned_cpus=set([0, 1, 2]),

@@ -2646,8 +2646,8 @@ class CPUPinningCellTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
def test_get_pinning_require_policy_fits(self):
host_pin = objects.NUMACell(
id=0,
cpuset=set([0, 1, 2, 3]),
pcpuset=set(),
cpuset=set(),
pcpuset=set([0, 1, 2, 3]),
memory=4096,
memory_usage=0,
pinned_cpus=set(),

@@ -2666,8 +2666,8 @@ class CPUPinningCellTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
def test_get_pinning_require_policy_fits_w_usage(self):
host_pin = objects.NUMACell(
id=0,
cpuset=set([0, 1, 2, 3, 4, 5, 6, 7]),
pcpuset=set(),
cpuset=set(),
pcpuset=set([0, 1, 2, 3, 4, 5, 6, 7]),
memory=4096,
memory_usage=0,
pinned_cpus=set([0, 1]),

@@ -2686,8 +2686,8 @@ class CPUPinningCellTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
def test_get_pinning_host_siblings_instance_odd_fit(self):
host_pin = objects.NUMACell(
id=0,
cpuset=set([0, 1, 2, 3, 4, 5, 6, 7]),
pcpuset=set(),
cpuset=set(),
pcpuset=set([0, 1, 2, 3, 4, 5, 6, 7]),
memory=4096,
memory_usage=0,
pinned_cpus=set(),

@@ -2703,8 +2703,8 @@ class CPUPinningCellTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
def test_get_pinning_host_siblings_instance_fit_optimize_threads(self):
host_pin = objects.NUMACell(
id=0,
cpuset=set([0, 1, 2, 3, 4, 5, 6, 7]),
pcpuset=set(),
cpuset=set(),
pcpuset=set([0, 1, 2, 3, 4, 5, 6, 7]),
memory=4096,
memory_usage=0,
pinned_cpus=set(),

@@ -2720,8 +2720,8 @@ class CPUPinningCellTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
def test_get_pinning_host_siblings_instance_odd_fit_w_usage(self):
host_pin = objects.NUMACell(
id=0,
cpuset=set([0, 1, 2, 3, 4, 5, 6, 7]),
pcpuset=set(),
cpuset=set(),
pcpuset=set([0, 1, 2, 3, 4, 5, 6, 7]),
memory=4096,
memory_usage=0,
pinned_cpus=set([0, 2, 5]),

@@ -2737,8 +2737,8 @@ class CPUPinningCellTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
def test_get_pinning_host_siblings_instance_mixed_siblings(self):
host_pin = objects.NUMACell(
id=0,
cpuset=set([0, 1, 2, 3, 4, 5, 6, 7]),
pcpuset=set(),
cpuset=set(),
pcpuset=set([0, 1, 2, 3, 4, 5, 6, 7]),
memory=4096,
memory_usage=0,
pinned_cpus=set([0, 1, 2, 5]),

@@ -2754,8 +2754,8 @@ class CPUPinningCellTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
def test_get_pinning_host_siblings_instance_odd_fit_orphan_only(self):
host_pin = objects.NUMACell(
id=0,
cpuset=set([0, 1, 2, 3, 4, 5, 6, 7]),
pcpuset=set(),
cpuset=set(),
pcpuset=set([0, 1, 2, 3, 4, 5, 6, 7]),
memory=4096,
memory_usage=0,
pinned_cpus=set([0, 2, 5, 6]),

@@ -2771,9 +2771,9 @@ class CPUPinningCellTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
def test_get_pinning_host_siblings_large_instance_odd_fit(self):
host_pin = objects.NUMACell(
id=0,
cpuset=set([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14,
15]),
pcpuset=set(),
cpuset=set(),
pcpuset=set([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14,
15]),
memory=4096,
memory_usage=0,
pinned_cpus=set(),

@@ -2791,8 +2791,8 @@ class CPUPinningCellTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
def test_get_pinning_isolate_policy_too_few_fully_free_cores(self):
host_pin = objects.NUMACell(
id=0,
cpuset=set([0, 1, 2, 3]),
pcpuset=set(),
cpuset=set(),
pcpuset=set([0, 1, 2, 3]),
memory=4096,
memory_usage=0,
pinned_cpus=set([1]),

@@ -2809,8 +2809,8 @@ class CPUPinningCellTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
def test_get_pinning_isolate_policy_no_fully_free_cores(self):
host_pin = objects.NUMACell(
id=0,
cpuset=set([0, 1, 2, 3]),
pcpuset=set(),
cpuset=set(),
pcpuset=set([0, 1, 2, 3]),
memory=4096,
memory_usage=0,
pinned_cpus=set([1, 2]),

@@ -2827,8 +2827,8 @@ class CPUPinningCellTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
def test_get_pinning_isolate_policy_fits(self):
host_pin = objects.NUMACell(
id=0,
cpuset=set([0, 1, 2, 3]),
pcpuset=set(),
cpuset=set(),
pcpuset=set([0, 1, 2, 3]),
memory=4096,
memory_usage=0,
pinned_cpus=set(),

@@ -2847,8 +2847,8 @@ class CPUPinningCellTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
def test_get_pinning_isolate_policy_fits_ht_host(self):
host_pin = objects.NUMACell(
id=0,
cpuset=set([0, 1, 2, 3]),
pcpuset=set(),
cpuset=set(),
pcpuset=set([0, 1, 2, 3]),
memory=4096,
memory_usage=0,
pinned_cpus=set(),

@@ -2867,8 +2867,8 @@ class CPUPinningCellTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
def test_get_pinning_isolate_policy_fits_w_usage(self):
host_pin = objects.NUMACell(
id=0,
cpuset=set([0, 1, 2, 3, 4, 5, 6, 7]),
pcpuset=set(),
cpuset=set(),
pcpuset=set([0, 1, 2, 3, 4, 5, 6, 7]),
memory=4096,
memory_usage=0,
pinned_cpus=set([0, 1]),

@@ -2890,8 +2890,8 @@ class CPUPinningTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
host_topo = objects.NUMATopology(cells=[
objects.NUMACell(
id=0,
cpuset=set([0, 1]),
pcpuset=set(),
cpuset=set(),
pcpuset=set([0, 1]),
memory=2048,
memory_usage=0,
pinned_cpus=set(),

@@ -2899,8 +2899,8 @@ class CPUPinningTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
siblings=[set([0]), set([1])]),
objects.NUMACell(
id=1,
cpuset=set([2, 3]),
pcpuset=set(),
cpuset=set(),
pcpuset=set([2, 3]),
memory=2048,
memory_usage=0,
pinned_cpus=set(),

@@ -2917,12 +2917,47 @@ class CPUPinningTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
for cell in inst_topo.cells:
self.assertInstanceCellPinned(cell, cell_ids=(0, 1))

def test_host_numa_fit_instance_to_host_single_cell_w_usage(self):
# TODO(stephenfin): Remove in U
def test_host_numa_fit_instance_to_host_legacy_object(self):
"""Check that we're able to fit an instance NUMA topology to a legacy
host NUMA topology that doesn't have the 'pcpuset' field present.
"""
host_topo = objects.NUMATopology(cells=[
objects.NUMACell(
id=0,
cpuset=set([0, 1]),
pcpuset=set(),
# we are explicitly not setting pcpuset here
memory=2048,
memory_usage=0,
pinned_cpus=set(),
mempages=[],
siblings=[set([0]), set([1])]),
objects.NUMACell(
id=1,
cpuset=set([2, 3]),
# we are explicitly not setting pcpuset here
memory=2048,
memory_usage=0,
pinned_cpus=set(),
mempages=[],
siblings=[set([2]), set([3])])
])
inst_topo = objects.InstanceNUMATopology(cells=[
objects.InstanceNUMACell(
cpuset=set([0, 1]), memory=2048,
cpu_policy=fields.CPUAllocationPolicy.DEDICATED)])

inst_topo = hw.numa_fit_instance_to_host(host_topo, inst_topo)

for cell in inst_topo.cells:
self.assertInstanceCellPinned(cell, cell_ids=(0, 1))

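The legacy-object test above relies on the compatibility check the commit message describes: `hardware.numa_usage_from_instance_numa` detects a host cell that arrived from a pre-Train compute node without `pcpuset` set, and backfills it locally. A hedged sketch of that shim, with the object reduced to a plain dict for illustration (the real code operates on versioned objects):

```python
def backfill_pcpuset(cell):
    """Mimic the 'pcpuset' default applied to a legacy NUMACell.

    On a pre-Train compute node every CPU reported in 'cpuset' was also
    usable for pinning, so treat 'cpuset' as the dedicated set too.
    """
    if 'pcpuset' not in cell:
        cell['pcpuset'] = set(cell['cpuset'])
    return cell


# no 'pcpuset' key, as in the second cell of the test above
legacy = {'cpuset': {2, 3}}
print(backfill_pcpuset(legacy)['pcpuset'])  # {2, 3}
```

Because the driver-generated `NUMATopology` never carries usage data back to the compute node, this is the only field the function needs to special-case for mixed Stein/Train deployments.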
def test_host_numa_fit_instance_to_host_single_cell_w_usage(self):
|
||||
host_topo = objects.NUMATopology(cells=[
|
||||
objects.NUMACell(
|
||||
id=0,
|
||||
cpuset=set(),
|
||||
pcpuset=set([0, 1]),
|
||||
memory=2048,
|
||||
memory_usage=0,
|
||||
pinned_cpus=set([0]),
|
||||
|
@ -2930,8 +2965,8 @@ class CPUPinningTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
|
|||
siblings=[set([0]), set([1])]),
|
||||
objects.NUMACell(
|
||||
id=1,
|
||||
cpuset=set([2, 3]),
|
||||
pcpuset=set(),
|
||||
cpuset=set(),
|
||||
pcpuset=set([2, 3]),
|
||||
memory=2048,
|
||||
memory_usage=0,
|
||||
pinned_cpus=set(),
|
||||
|
@ -2952,8 +2987,8 @@ class CPUPinningTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
|
|||
host_topo = objects.NUMATopology(cells=[
|
||||
objects.NUMACell(
|
||||
id=0,
|
||||
cpuset=set([0, 1]),
|
||||
pcpuset=set(),
|
||||
cpuset=set(),
|
||||
pcpuset=set([0, 1]),
|
||||
memory=2048,
|
||||
memory_usage=0,
|
||||
pinned_cpus=set([0]),
|
||||
|
@ -2961,8 +2996,8 @@ class CPUPinningTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
|
|||
siblings=[set([0]), set([1])]),
|
||||
objects.NUMACell(
|
||||
id=1,
|
||||
cpuset=set([2, 3]),
|
||||
pcpuset=set(),
|
||||
cpuset=set(),
|
||||
pcpuset=set([2, 3]),
|
||||
memory=2048,
|
||||
memory_usage=0,
|
||||
pinned_cpus=set([2]),
|
||||
|
@ -2981,8 +3016,8 @@ class CPUPinningTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
|
|||
host_topo = objects.NUMATopology(cells=[
|
||||
objects.NUMACell(
|
||||
id=0,
|
||||
cpuset=set([0, 1, 2, 3]),
|
||||
pcpuset=set(),
|
||||
cpuset=set(),
|
||||
pcpuset=set([0, 1, 2, 3]),
|
||||
memory=2048,
|
||||
memory_usage=0,
|
||||
pinned_cpus=set(),
|
||||
|
@ -2990,8 +3025,8 @@ class CPUPinningTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
|
|||
siblings=[set([0]), set([1]), set([2]), set([3])]),
|
||||
objects.NUMACell(
|
||||
id=1,
|
||||
cpuset=set([4, 5, 6, 7]),
|
||||
pcpuset=set(),
|
||||
cpuset=set(),
|
||||
pcpuset=set([4, 5, 6, 7]),
|
||||
memory=2048,
|
||||
memory_usage=0,
|
||||
pinned_cpus=set(),
|
||||
|
@ -3014,8 +3049,8 @@ class CPUPinningTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
|
|||
host_topo = objects.NUMATopology(cells=[
|
||||
objects.NUMACell(
|
||||
id=0,
|
||||
cpuset=set([0, 1, 2, 3]),
|
||||
pcpuset=set(),
|
||||
cpuset=set(),
|
||||
pcpuset=set([0, 1, 2, 3]),
|
||||
memory=2048,
|
||||
memory_usage=0,
|
||||
pinned_cpus=set([0]),
|
||||
|
@ -3023,8 +3058,8 @@ class CPUPinningTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
|
|||
siblings=[set([0]), set([1]), set([2]), set([3])]),
|
||||
objects.NUMACell(
|
||||
id=1,
|
||||
cpuset=set([4, 5, 6, 7]),
|
||||
pcpuset=set(),
|
||||
cpuset=set(),
|
||||
pcpuset=set([4, 5, 6, 7]),
|
||||
memory=2048,
|
||||
memory_usage=0,
|
||||
pinned_cpus=set([4, 5, 6]),
|
||||
|
@ -3032,8 +3067,8 @@ class CPUPinningTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
|
|||
siblings=[set([4]), set([5]), set([6]), set([7])]),
|
||||
objects.NUMACell(
|
||||
id=2,
|
||||
cpuset=set([8, 9, 10, 11]),
|
||||
pcpuset=set(),
|
||||
cpuset=set(),
|
||||
pcpuset=set([8, 9, 10, 11]),
|
||||
memory=2048,
|
||||
memory_usage=0,
|
||||
pinned_cpus=set([10, 11]),
|
||||
|
@ -3057,8 +3092,8 @@ class CPUPinningTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
|
|||
host_topo = objects.NUMATopology(cells=[
|
||||
objects.NUMACell(
|
||||
id=0,
|
||||
cpuset=set([0, 1, 2, 3]),
|
||||
pcpuset=set(),
|
||||
cpuset=set(),
|
||||
pcpuset=set([0, 1, 2, 3]),
|
||||
memory=4096,
|
||||
memory_usage=0,
|
||||
pinned_cpus=set([0]),
|
||||
|
@ -3066,8 +3101,8 @@ class CPUPinningTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
|
|||
siblings=[set([0]), set([1]), set([2]), set([3])]),
|
||||
objects.NUMACell(
|
||||
id=1,
|
||||
cpuset=set([4, 5, 6, 7]),
|
||||
pcpuset=set(),
|
||||
cpuset=set(),
|
||||
pcpuset=set([4, 5, 6, 7]),
|
||||
memory=4096,
|
||||
memory_usage=0,
|
||||
pinned_cpus=set([4, 5, 6]),
|
||||
|
@ -3088,8 +3123,8 @@ class CPUPinningTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
|
|||
host_topo = objects.NUMATopology(cells=[
|
||||
objects.NUMACell(
|
||||
id=0,
|
||||
cpuset=set([0, 1, 2, 3]),
|
||||
pcpuset=set(),
|
||||
cpuset=set(),
|
||||
pcpuset=set([0, 1, 2, 3]),
|
||||
memory=4096,
|
||||
memory_usage=0,
|
||||
pinned_cpus=set(),
|
||||
|
@ -3097,8 +3132,8 @@ class CPUPinningTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
|
|||
siblings=[set([0]), set([1]), set([2]), set([3])]),
|
||||
objects.NUMACell(
|
||||
id=1,
|
||||
cpuset=set([4, 5, 6, 7]),
|
||||
pcpuset=set(),
|
||||
cpuset=set(),
|
||||
pcpuset=set([4, 5, 6, 7]),
|
||||
memory=4096,
|
||||
memory_usage=0,
|
||||
pinned_cpus=set(),
|
||||
|
@ -3122,8 +3157,8 @@ class CPUPinningTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
|
|||
host_pin = objects.NUMATopology(cells=[
|
||||
objects.NUMACell(
|
||||
id=0,
|
||||
cpuset=set([0, 1, 2, 3]),
|
||||
pcpuset=set(),
|
||||
cpuset=set(),
|
||||
pcpuset=set([0, 1, 2, 3]),
|
||||
memory=4096,
|
||||
cpu_usage=0,
|
||||
memory_usage=0,
|
||||
|
@ -3151,8 +3186,8 @@ class CPUPinningTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
|
|||
host_pin = objects.NUMATopology(cells=[
|
||||
objects.NUMACell(
|
||||
id=0,
|
||||
cpuset=set([0, 1, 2, 3]),
|
||||
pcpuset=set(),
|
||||
cpuset=set(),
|
||||
pcpuset=set([0, 1, 2, 3]),
|
||||
memory=4096,
|
||||
cpu_usage=0,
|
||||
memory_usage=0,
|
||||
|
@ -3179,8 +3214,8 @@ class CPUPinningTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
|
|||
host_pin = objects.NUMATopology(cells=[
|
||||
objects.NUMACell(
|
||||
id=0,
|
||||
cpuset=set([0, 1, 2, 3]),
|
||||
pcpuset=set(),
|
||||
cpuset=set(),
|
||||
pcpuset=set([0, 1, 2, 3]),
|
||||
memory=4096,
|
||||
cpu_usage=0,
|
||||
memory_usage=0,
|
||||
|
@ -3207,8 +3242,8 @@ class CPUPinningTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
|
|||
host_pin = objects.NUMATopology(cells=[
|
||||
objects.NUMACell(
|
||||
id=0,
|
||||
cpuset=set([0, 1, 2, 3]),
|
||||
pcpuset=set(),
|
||||
cpuset=set(),
|
||||
pcpuset=set([0, 1, 2, 3]),
|
||||
memory=4096,
|
||||
cpu_usage=0,
|
||||
memory_usage=0,
|
||||
|
@ -3225,16 +3260,16 @@ class CPUPinningTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
|
|||
)])
|
||||
|
||||
new_cell = hw.numa_usage_from_instance_numa(host_pin, inst_pin_1)
|
||||
self.assertEqual(host_pin.cells[0].cpuset,
|
||||
self.assertEqual(host_pin.cells[0].pcpuset,
|
||||
new_cell.cells[0].pinned_cpus)
|
||||
self.assertEqual(new_cell.cells[0].cpu_usage, 4)
|
||||
self.assertEqual(0, new_cell.cells[0].cpu_usage)
|
||||
|
||||
def test_host_usage_from_instances_isolate_free(self):
|
||||
host_pin = objects.NUMATopology(cells=[
|
||||
objects.NUMACell(
|
||||
id=0,
|
||||
cpuset=set([0, 1, 2, 3]),
|
||||
pcpuset=set(),
|
||||
cpuset=set(),
|
||||
pcpuset=set([0, 1, 2, 3]),
|
||||
memory=4096,
|
||||
cpu_usage=0,
|
||||
memory_usage=0,
|
||||
|
@ -3253,14 +3288,14 @@ class CPUPinningTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
|
|||
new_cell = hw.numa_usage_from_instance_numa(host_pin, inst_pin_1,
|
||||
free=True)
|
||||
self.assertEqual(set([]), new_cell.cells[0].pinned_cpus)
|
||||
self.assertEqual(new_cell.cells[0].cpu_usage, 0)
|
||||
self.assertEqual(0, new_cell.cells[0].cpu_usage)
|
||||
|
||||
def test_host_usage_from_instances_isolated_without_siblings(self):
|
||||
host_pin = objects.NUMATopology(cells=[
|
||||
objects.NUMACell(
|
||||
id=0,
|
||||
cpuset=set([0, 1, 2, 3]),
|
||||
pcpuset=set(),
|
||||
cpuset=set(),
|
||||
pcpuset=set([0, 1, 2, 3]),
|
||||
memory=4096,
|
||||
cpu_usage=0,
|
||||
memory_usage=0,
|
||||
|
@@ -3279,16 +3314,16 @@ class CPUPinningTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
         new_cell = hw.numa_usage_from_instance_numa(host_pin, inst_pin)
         self.assertEqual(inst_pin.cells[0].cpuset,
                          new_cell.cells[0].pinned_cpus)
-        self.assertEqual(new_cell.cells[0].cpu_usage, 3)
+        self.assertEqual(0, new_cell.cells[0].cpu_usage)
 
     def test_host_usage_from_instances_isolated_without_siblings_free(self):
         host_pin = objects.NUMATopology(cells=[
             objects.NUMACell(
                 id=0,
-                cpuset=set([0, 1, 2, 3]),
-                pcpuset=set(),
+                cpuset=set(),
+                pcpuset=set([0, 1, 2, 3]),
                 memory=4096,
-                cpu_usage=4,
+                cpu_usage=0,
                 memory_usage=0,
                 pinned_cpus=set([0, 1, 2, 3]),
                 siblings=[set([0]), set([1]), set([2]), set([3])],
@@ -3305,15 +3340,15 @@ class CPUPinningTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase):
         new_cell = hw.numa_usage_from_instance_numa(host_pin, inst_pin,
                                                     free=True)
         self.assertEqual(set([3]), new_cell.cells[0].pinned_cpus)
-        self.assertEqual(new_cell.cells[0].cpu_usage, 1)
+        self.assertEqual(0, new_cell.cells[0].cpu_usage)
 
 
 class CPUSReservedCellTestCase(test.NoDBTestCase):
     def _test_reserved(self, reserved):
         host_cell = objects.NUMACell(
             id=0,
-            cpuset=set([0, 1, 2]),
-            pcpuset=set(),
+            cpuset=set(),
+            pcpuset=set([0, 1, 2]),
             memory=2048,
             memory_usage=0,
             pinned_cpus=set(),
@@ -3379,8 +3414,8 @@ class EmulatorThreadsTestCase(test.NoDBTestCase):
         return objects.NUMATopology(cells=[
             objects.NUMACell(
                 id=0,
-                cpuset=set([0, 1]),
-                pcpuset=set(),
+                cpuset=set(),
+                pcpuset=set([0, 1]),
                 memory=2048,
                 cpu_usage=0,
                 memory_usage=0,
@@ -3390,8 +3425,8 @@ class EmulatorThreadsTestCase(test.NoDBTestCase):
                     size_kb=4, total=524288, used=0)]),
             objects.NUMACell(
                 id=1,
-                cpuset=set([2, 3]),
-                pcpuset=set(),
+                cpuset=set(),
+                pcpuset=set([2, 3]),
                 memory=2048,
                 cpu_usage=0,
                 memory_usage=0,
@@ -3527,9 +3562,7 @@ class EmulatorThreadsTestCase(test.NoDBTestCase):
 
         host_topo = hw.numa_usage_from_instance_numa(host_topo, inst_topo)
 
-        self.assertEqual(2, host_topo.cells[0].cpu_usage)
         self.assertEqual(set([0, 1]), host_topo.cells[0].pinned_cpus)
-        self.assertEqual(0, host_topo.cells[1].cpu_usage)
         self.assertEqual(set([]), host_topo.cells[1].pinned_cpus)
 
     def test_isolate_full_usage(self):
@@ -3556,15 +3589,14 @@ class EmulatorThreadsTestCase(test.NoDBTestCase):
         host_topo = hw.numa_usage_from_instance_numa(host_topo, inst_topo1)
         host_topo = hw.numa_usage_from_instance_numa(host_topo, inst_topo2)
 
-        self.assertEqual(2, host_topo.cells[0].cpu_usage)
         self.assertEqual(set([0, 1]), host_topo.cells[0].pinned_cpus)
 
     def test_isolate_w_isolate_thread_alloc(self):
         host_topo = objects.NUMATopology(cells=[
             objects.NUMACell(
                 id=0,
-                cpuset=set([0, 1, 2, 3, 4, 5]),
-                pcpuset=set(),
+                cpuset=set(),
+                pcpuset=set([0, 1, 2, 3, 4, 5]),
                 memory=2048,
                 cpu_usage=0,
                 memory_usage=0,
@@ -3590,8 +3622,8 @@ class EmulatorThreadsTestCase(test.NoDBTestCase):
         host_topo = objects.NUMATopology(cells=[
             objects.NUMACell(
                 id=0,
-                cpuset=set([0, 1, 2, 3, 4, 5]),
-                pcpuset=set(),
+                cpuset=set(),
+                pcpuset=set([0, 1, 2, 3, 4, 5]),
                 memory=2048,
                 cpu_usage=0,
                 memory_usage=0,
@@ -3625,8 +3657,8 @@ class EmulatorThreadsTestCase(test.NoDBTestCase):
         host_topo = objects.NUMATopology(cells=[
             objects.NUMACell(
                 id=0,
-                cpuset=set([1, 2, 3]),
-                pcpuset=set(),
+                cpuset=set(),
+                pcpuset=set([1, 2, 3]),
                 memory=2048,
                 cpu_usage=0,
                 memory_usage=0,
@@ -3650,8 +3682,8 @@ class EmulatorThreadsTestCase(test.NoDBTestCase):
         host_topo = objects.NUMATopology(cells=[
             objects.NUMACell(
                 id=0,
-                cpuset=set([1, 2, 3, 4, 5]),
-                pcpuset=set(),
+                cpuset=set(),
+                pcpuset=set([1, 2, 3, 4, 5]),
                 memory=2048,
                 cpu_usage=0,
                 memory_usage=0,

--- a/nova/virt/hardware.py
+++ b/nova/virt/hardware.py
@@ -999,14 +999,14 @@ def _numa_fit_instance_cell_with_pinning(host_cell, instance_cell,
         or None if instance cannot be pinned to the given host
     """
     required_cpus = len(instance_cell.cpuset) + num_cpu_reserved
-    if host_cell.avail_cpus < required_cpus:
+    if host_cell.avail_pcpus < required_cpus:
         LOG.debug('Not enough available CPUs to schedule instance. '
                   'Oversubscription is not possible with pinned instances. '
                   'Required: %(required)d (%(vcpus)d + %(num_cpu_reserved)d), '
                   'actual: %(actual)d',
                   {'required': required_cpus,
                    'vcpus': len(instance_cell.cpuset),
-                   'actual': host_cell.avail_cpus,
+                   'actual': host_cell.avail_pcpus,
                    'num_cpu_reserved': num_cpu_reserved})
         return
 
@@ -1102,14 +1102,40 @@ def _numa_fit_instance_cell(host_cell, instance_cell, limit_cell=None,
                    'actual': host_cell.memory})
         return
 
-    if len(instance_cell.cpuset) + cpuset_reserved > len(host_cell.cpuset):
-        LOG.debug('Not enough host cell CPUs to fit instance cell. Required: '
-                  '%(required)d + %(cpuset_reserved)d as overhead, '
-                  'actual: %(actual)d',
-                  {'required': len(instance_cell.cpuset),
-                   'actual': len(host_cell.cpuset),
-                   'cpuset_reserved': cpuset_reserved})
-        return
+    # The 'pcpuset' field is only set by newer compute nodes, so if it's
+    # not present then we've received this object from a pre-Train compute
+    # node and need to query against the 'cpuset' field instead until the
+    # compute node has been upgraded and starts reporting things properly.
+    # TODO(stephenfin): Remove in U
+    if 'pcpuset' not in host_cell:
+        host_cell.pcpuset = host_cell.cpuset
+
+    # NOTE(stephenfin): As with memory, do not allow an instance to overcommit
+    # against itself on any NUMA cell
+    if instance_cell.cpu_pinning_requested:
+        # TODO(stephenfin): Is 'cpuset_reserved' present if consuming emulator
+        # threads from shared CPU pools? If so, we don't want to add this here
+        required_cpus = len(instance_cell.cpuset) + cpuset_reserved
+        if required_cpus > len(host_cell.pcpuset):
+            LOG.debug('Not enough host cell CPUs to fit instance cell; '
+                      'required: %(required)d + %(cpuset_reserved)d as '
+                      'overhead, actual: %(actual)d', {
+                          'required': len(instance_cell.cpuset),
+                          'actual': len(host_cell.pcpuset),
+                          'cpuset_reserved': cpuset_reserved
+                      })
+            return
+    else:
+        required_cpus = len(instance_cell.cpuset) + cpuset_reserved
+        if required_cpus > len(host_cell.cpuset):
+            LOG.debug('Not enough host cell CPUs to fit instance cell; '
+                      'required: %(required)d + %(cpuset_reserved)d as '
+                      'overhead, actual: %(actual)d', {
+                          'required': len(instance_cell.cpuset),
+                          'actual': len(host_cell.cpuset),
+                          'cpuset_reserved': cpuset_reserved
+                      })
+            return
 
     if instance_cell.cpu_pinning_requested:
         LOG.debug('Pinning has been requested')
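The capacity check in the hunk above splits on whether the instance requested pinning, with a compat shim for cells that predate the 'pcpuset' field. The sketch below is a simplified stand-in (the `HostCell` class and `fits` helper are hypothetical, not nova's real `NUMACell` or `_numa_fit_instance_cell`):

```python
# Hypothetical stand-ins for the fit check above; not nova's real API.
class HostCell:
    def __init__(self, cpuset, pcpuset=None):
        self.cpuset = cpuset    # CPUs usable by unpinned (shared) instances
        self.pcpuset = pcpuset  # CPUs usable by pinned (dedicated) instances


def fits(host_cell, instance_cpus, cpuset_reserved=0, pinning=False):
    # Pre-Train compute nodes don't report 'pcpuset'; fall back to 'cpuset'
    if host_cell.pcpuset is None:
        host_cell.pcpuset = host_cell.cpuset

    required = len(instance_cpus) + cpuset_reserved
    # Pinned instances are checked against the dedicated pool, unpinned
    # instances against the shared pool
    pool = host_cell.pcpuset if pinning else host_cell.cpuset
    return required <= len(pool)


cell = HostCell(cpuset={0, 1}, pcpuset={2, 3, 4, 5})
print(fits(cell, {0, 1, 2}, pinning=True))   # 3 CPUs vs 4 dedicated: fits
print(fits(cell, {0, 1, 2}, pinning=False))  # 3 CPUs vs 2 shared: doesn't
```

Note that, as in the real function, a cell constructed without a dedicated set (a "pre-Train" cell) answers pinned queries from its shared set.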
@@ -2026,14 +2052,28 @@ def numa_usage_from_instance_numa(host_topology, instance_topology,
 
     cells = []
     sign = -1 if free else 1
 
     for host_cell in host_topology.cells:
         memory_usage = host_cell.memory_usage
-        cpu_usage = host_cell.cpu_usage
+        shared_cpus_usage = host_cell.cpu_usage
+
+        # The 'pcpuset' field is only set by newer compute nodes, so if it's
+        # not present then we've received this object from a pre-Train compute
+        # node and need to dual-report all CPUS listed therein as both
+        # dedicated and shared until the compute node has been upgraded and
+        # starts reporting things properly.
+        # TODO(stephenfin): Remove in U
+        if 'pcpuset' not in host_cell:
+            shared_cpus = host_cell.cpuset
+            dedicated_cpus = host_cell.cpuset
+        else:
+            shared_cpus = host_cell.cpuset
+            dedicated_cpus = host_cell.pcpuset
 
         new_cell = objects.NUMACell(
             id=host_cell.id,
-            cpuset=host_cell.cpuset,
-            pcpuset=set(),  # TODO(stephenfin): Start setting this
+            cpuset=shared_cpus,
+            pcpuset=dedicated_cpus,
             memory=host_cell.memory,
             cpu_usage=0,
             memory_usage=0,
@@ -2048,46 +2088,36 @@ def numa_usage_from_instance_numa(host_topology, instance_topology,
             if instance_cell.id != host_cell.id:
                 continue
 
-            memory_usage = memory_usage + sign * instance_cell.memory
-            cpu_usage_diff = len(instance_cell.cpuset)
-            if (instance_cell.cpu_thread_policy ==
-                    fields.CPUThreadAllocationPolicy.ISOLATE and
-                    host_cell.siblings):
-                cpu_usage_diff *= max(map(len, host_cell.siblings))
-            cpu_usage += sign * cpu_usage_diff
-
-            if cellid == 0 and instance_topology.emulator_threads_isolated:
-                # The emulator threads policy when defined with 'isolate' makes
-                # the instance to consume an additional pCPU as overhead. That
-                # pCPU is mapped on the host NUMA node related to the guest
-                # NUMA node 0.
-                cpu_usage += sign * len(instance_cell.cpuset_reserved)
-
             # Compute mempages usage
             new_cell.mempages = _numa_pagesize_usage_from_cell(
                 new_cell, instance_cell, sign)
 
-            if instance_topology.cpu_pinning_requested:
-                pinned_cpus = set(instance_cell.cpu_pinning.values())
-                if instance_cell.cpuset_reserved:
-                    pinned_cpus |= instance_cell.cpuset_reserved
+            memory_usage = memory_usage + sign * instance_cell.memory
+
+            if not instance_cell.cpu_pinning_requested:
+                shared_cpus_usage += sign * len(instance_cell.cpuset)
+                continue
 
-                if free:
-                    if (instance_cell.cpu_thread_policy ==
-                            fields.CPUThreadAllocationPolicy.ISOLATE):
-                        new_cell.unpin_cpus_with_siblings(pinned_cpus)
-                    else:
-                        new_cell.unpin_cpus(pinned_cpus)
-                else:
-                    if (instance_cell.cpu_thread_policy ==
-                            fields.CPUThreadAllocationPolicy.ISOLATE):
-                        new_cell.pin_cpus_with_siblings(pinned_cpus)
-                    else:
-                        new_cell.pin_cpus(pinned_cpus)
+            pinned_cpus = set(instance_cell.cpu_pinning.values())
+            if instance_cell.cpuset_reserved:
+                pinned_cpus |= instance_cell.cpuset_reserved
+
+            if free:
+                if (instance_cell.cpu_thread_policy ==
+                        fields.CPUThreadAllocationPolicy.ISOLATE):
+                    new_cell.unpin_cpus_with_siblings(pinned_cpus)
+                else:
+                    new_cell.unpin_cpus(pinned_cpus)
+            else:
+                if (instance_cell.cpu_thread_policy ==
+                        fields.CPUThreadAllocationPolicy.ISOLATE):
+                    new_cell.pin_cpus_with_siblings(pinned_cpus)
+                else:
+                    new_cell.pin_cpus(pinned_cpus)
 
-        new_cell.cpu_usage = max(0, cpu_usage)
+        # NOTE(stephenfin): We don't need to set 'pinned_cpus' here since that
+        # was done in the above '(un)pin_cpus(_with_siblings)' functions
         new_cell.memory_usage = max(0, memory_usage)
+        new_cell.cpu_usage = max(0, shared_cpus_usage)
         cells.append(new_cell)
 
     return objects.NUMATopology(cells=cells)

--- a/nova/virt/libvirt/driver.py
+++ b/nova/virt/libvirt/driver.py
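The accounting split introduced above can be sketched outside nova. In this simplified model (the `apply_usage` helper and the dict-based cell are hypothetical stand-ins, not nova's real objects), unpinned instances only bump a shared-CPU usage counter, while pinned instances claim or release specific host CPUs in a set, mirroring how `cpu_usage` and `pinned_cpus` are now kept separate:

```python
# Hypothetical sketch of the shared-vs-dedicated usage accounting above;
# a cell is modelled as a plain dict rather than a NUMACell object.
def apply_usage(cell, instance_cells, free=False):
    sign = -1 if free else 1
    for inst in instance_cells:
        if not inst['pinned']:
            # Shared (floating) instances only affect the usage counter
            cell['cpu_usage'] += sign * len(inst['cpus'])
            continue
        # Pinned instances claim (or release) specific dedicated CPUs
        if free:
            cell['pinned_cpus'] -= inst['cpus']
        else:
            cell['pinned_cpus'] |= inst['cpus']
    cell['cpu_usage'] = max(0, cell['cpu_usage'])
    return cell


cell = {'cpu_usage': 0, 'pinned_cpus': set()}
apply_usage(cell, [{'pinned': False, 'cpus': {0, 1}},
                   {'pinned': True, 'cpus': {2, 3}}])
# 'cpu_usage' now reflects only the unpinned instance; the pinned
# instance shows up solely in 'pinned_cpus'
print(cell)
```

This is why the tests above stop asserting `cpu_usage` for pinned guests: pinning no longer contributes to that counter.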
@@ -4813,7 +4813,12 @@ class LibvirtDriver(driver.ComputeDriver):
             # mess up though, raise an exception
             raise exception.NUMATopologyUnsupported()
 
-        allowed_cpus = hardware.get_vcpu_pin_set()
+        # We only pin an instance to some host cores if the user has provided
+        # configuration to suggest we should.
+        shared_cpus = None
+        if CONF.vcpu_pin_set or CONF.compute.cpu_shared_set:
+            shared_cpus = self._get_vcpu_available()
 
         topology = self._get_host_numa_topology()
 
         # We have instance NUMA so translate it to the config class
@@ -4827,12 +4832,12 @@ class LibvirtDriver(driver.ComputeDriver):
             # TODO(ndipanov): Attempt to spread the instance
             # across NUMA nodes and expose the topology to the
             # instance as an optimisation
-            return GuestNumaConfig(allowed_cpus, None, None, None)
+            return GuestNumaConfig(shared_cpus, None, None, None)
 
         if not topology:
             # No NUMA topology defined for host - This will only happen with
             # some libvirt versions and certain platforms.
-            return GuestNumaConfig(allowed_cpus, None,
+            return GuestNumaConfig(shared_cpus, None,
                                    guest_cpu_numa_config, None)
 
         # Now get configuration from the numa_topology
@@ -6990,12 +6995,22 @@ class LibvirtDriver(driver.ComputeDriver):
             return
 
         cells = []
-        allowed_cpus = hardware.get_vcpu_pin_set()
-        online_cpus = self._host.get_online_cpus()
-        if allowed_cpus:
-            allowed_cpus &= online_cpus
-        else:
-            allowed_cpus = online_cpus
+
+        available_shared_cpus = self._get_vcpu_available()
+        available_dedicated_cpus = self._get_pcpu_available()
+
+        # NOTE(stephenfin): In an ideal world, if the operator had not
+        # configured this host to report PCPUs using the '[compute]
+        # cpu_dedicated_set' option, then we should not be able to use pinned
+        # instances on this host. However, that would force operators to update
+        # their configuration as part of the Stein -> Train upgrade or be
+        # unable to schedule instances on the host. As a result, we need to
+        # revert to legacy behavior and use 'vcpu_pin_set' for both VCPUs and
+        # PCPUs.
+        # TODO(stephenfin): Remove this in U
+        if not available_dedicated_cpus and not (
+                CONF.compute.cpu_shared_set and not CONF.vcpu_pin_set):
+            available_dedicated_cpus = available_shared_cpus
 
         def _get_reserved_memory_for_cell(self, cell_id, page_size):
             cell = self._reserved_hugepages.get(cell_id, {})
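The legacy fallback above reduces to a small boolean decision. The helper below is a hypothetical sketch of that decision, not nova's API; the config options are modelled as plain truthy/falsy arguments:

```python
# Hypothetical sketch of the Stein -> Train fallback above: when
# '[compute] cpu_dedicated_set' is unset, pinned instances keep using the
# shared pool, unless the operator has opted in to new-style configuration
# ('cpu_shared_set' set while 'vcpu_pin_set' is unset).
def effective_cpu_pools(shared, dedicated, cpu_shared_set, vcpu_pin_set):
    if not dedicated and not (cpu_shared_set and not vcpu_pin_set):
        dedicated = shared
    return shared, dedicated


# Legacy host: only 'vcpu_pin_set' configured, so one pool serves both roles
print(effective_cpu_pools({0, 1, 2, 3}, set(), False, True))

# New-style host: 'cpu_shared_set' alone means no dedicated CPUs at all
print(effective_cpu_pools({0, 1}, set(), True, False))
```

The second case is the one with real upgrade impact: once an operator moves to `cpu_shared_set` without `cpu_dedicated_set`, the host deliberately stops reporting dedicated CPUs.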
@@ -7042,14 +7057,19 @@ class LibvirtDriver(driver.ComputeDriver):
         tunnel_affinities = _get_tunnel_numa_affinity()
 
         for cell in topology.cells:
-            cpuset = set(cpu.id for cpu in cell.cpus)
+            cpus = set(cpu.id for cpu in cell.cpus)
+
+            cpuset = cpus & available_shared_cpus
+            pcpuset = cpus & available_dedicated_cpus
+
             siblings = sorted(map(set,
                                   set(tuple(cpu.siblings)
                                       if cpu.siblings else ()
                                       for cpu in cell.cpus)
                                   ))
-            cpuset &= allowed_cpus
-            siblings = [sib & allowed_cpus for sib in siblings]
+
+            cpus &= available_shared_cpus | available_dedicated_cpus
+            siblings = [sib & cpus for sib in siblings]
             # Filter out empty sibling sets that may be left
             siblings = [sib for sib in siblings if len(sib) > 0]
 
@@ -7066,15 +7086,19 @@ class LibvirtDriver(driver.ComputeDriver):
                 physnets=physnet_affinities[cell.id],
                 tunneled=tunnel_affinities[cell.id])
 
+            # NOTE(stephenfin): Note that we don't actually return any usage
+            # information here. This is because this is handled by the resource
+            # tracker via the 'update_available_resource' periodic task, which
+            # loops through all instances and calculates usage accordingly
             cell = objects.NUMACell(
                 id=cell.id,
                 cpuset=cpuset,
-                pcpuset=set(),  # TODO(stephenfin): Start setting this
+                pcpuset=pcpuset,
                 memory=cell.memory / units.Ki,
                 cpu_usage=0,
+                pinned_cpus=set(),
                 memory_usage=0,
                 siblings=siblings,
-                pinned_cpus=set([]),
                 mempages=mempages,
                 network_metadata=network_metadata)
             cells.append(cell)
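The per-cell partitioning done in the two driver hunks above is plain set arithmetic: each cell's CPU IDs are split into a shared pool (`cpuset`) and a dedicated pool (`pcpuset`), and sibling sets are trimmed to the CPUs usable in either role. A standalone sketch with illustrative values (the sets below are assumptions, not taken from the patch):

```python
# Standalone sketch of the cpuset/pcpuset partitioning above.
# Example host cell: 8 CPUs, hyperthread siblings pairing N with N+4.
cell_cpus = {0, 1, 2, 3, 4, 5, 6, 7}
available_shared = {0, 1, 2, 3}    # e.g. from '[compute] cpu_shared_set'
available_dedicated = {4, 5}       # e.g. from '[compute] cpu_dedicated_set'

# Split the cell's CPUs into the two pools by intersection
cpuset = cell_cpus & available_shared
pcpuset = cell_cpus & available_dedicated

# Siblings are trimmed to CPUs usable in either role, then emptied
# sets are dropped
usable = cell_cpus & (available_shared | available_dedicated)
siblings = [{0, 4}, {1, 5}, {2, 6}, {3, 7}]
siblings = [sib & usable for sib in siblings]
siblings = [sib for sib in siblings if sib]

print(cpuset, pcpuset, siblings)
```

CPUs 6 and 7 are in neither configured pool, so they vanish from the sibling sets, which is exactly the trimming the `siblings = [sib & cpus ...]` line performs in the driver.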