tests: Add reproducer for bug #1889633

With the introduction of the cpu-resources work [1], (libvirt) hosts can
now report 'PCPU' inventory separate from 'VCPU' inventory, which is
consumed by instances with pinned CPUs ('hw:cpu_policy=dedicated'). As
part of that effort, we had to drop support for the ability to boot
instances with 'hw:cpu_thread_policy=isolate' (i.e. I don't want
hyperthreads) on hosts with hyperthreading. This had been previously
implemented by marking thread siblings of the host cores used by such an
instance as reserved and unusable by other instances, but such a design
wasn't possible in a world where we had to track resource consumption in
placement before landing in the host. Instead, the 'isolate' policy now
simply means "give me a host without hyperthreads". This is enforced by
hosts with hyperthreads reporting the 'HW_CPU_HYPERTHREADING' trait, and
instances with the 'isolate' policy requesting
'HW_CPU_HYPERTHREADING=forbidden'.
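
As a rough sketch (illustrative only, not nova's actual scheduler code;
the helper name and dict layout here are invented), the translation
described above looks something like this:

    # Illustrative only: how the extra specs above are expected to map to
    # a placement resource/trait request under the new model.
    def placement_request_for(extra_specs, vcpus):
        request = {'resources': {}, 'forbidden_traits': set()}
        if extra_specs.get('hw:cpu_policy') == 'dedicated':
            # pinned instances now consume PCPU, not VCPU, inventory
            request['resources']['PCPU'] = vcpus
            if extra_specs.get('hw:cpu_thread_policy') == 'isolate':
                # 'isolate' now simply means "no hyperthreads on the host"
                request['forbidden_traits'].add('HW_CPU_HYPERTHREADING')
        else:
            request['resources']['VCPU'] = vcpus
        return request

    # placement_request_for(
    #     {'hw:cpu_policy': 'dedicated',
    #      'hw:cpu_thread_policy': 'isolate'}, 2)
    # => {'resources': {'PCPU': 2},
    #     'forbidden_traits': {'HW_CPU_HYPERTHREADING'}}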

Or at least, that's how it should work. We also have a fallback query
for placement to find hosts with 'VCPU' inventory and that doesn't care
about the 'HW_CPU_HYPERTHREADING' trait. This was envisioned to ensure
hosts with old-style configuration ('[DEFAULT] vcpu_pin_set') could
continue to be scheduled to. We figured this second fallback query could
accidentally pick up hosts with new-style configuration, but reasoned
that this was acceptable: we also track the available and used cores
from those listed in the '[compute] cpu_dedicated_set' as part of the
host 'NUMATopology' objects (specifically, via the 'pcpuset' and
'cpu_pinning' fields of the 'NUMACell' child objects). These are
validated by both the 'NUMATopologyFilter' and the virt driver itself,
which means hosts with new-style configuration caught up in this second
query would be
rejected by this filter or by a late failure on the host. (Hint: there's
much more detail on this in the spec).
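
Schematically (these are not real placement API payloads, just an
illustration of the difference), the two candidate queries issued for
such a request differ only in the resource class and the trait
constraint:

    # Schematic view of the two queries described above; note that the
    # fallback carries no hyperthreading constraint at all.
    new_style_query = {
        'resources': {'PCPU': 2},
        'forbidden_traits': {'HW_CPU_HYPERTHREADING'},  # from 'isolate'
    }
    fallback_query = {
        'resources': {'VCPU': 2},  # old-style hosts report only VCPU
        'forbidden_traits': set(),
    }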

Unfortunately we didn't think about hyperthreading. If a host gets
picked up by the second query, it may well have enough PCPU inventory
yet have been rejected by the first query simply because it has
hyperthreads. In this case, because it has enough free cores available
for pinning, neither the filter nor the virt driver will reject the
request. The instance therefore falls back to the old code paths and
consumes $flavor.vcpu host cores, plus
the thread siblings for each of these cores. Despite this, it will be
marked as consuming $flavor.vcpu VCPU (not PCPU) inventory in placement.
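
Concretely, taking the reproducer below as a worked example (one NUMA
node, two cores with two threads each, and a two-vCPU flavor):

    # Worked example matching the reproducer test added below.
    flavor_vcpus = 2
    threads_per_core = 2

    # host CPUs actually pinned: the two cores plus their thread siblings
    pinned_host_cpus = flavor_vcpus * threads_per_core  # 4: {0, 1, 2, 3}

    # ...yet placement records only VCPU (not PCPU) usage for the instance
    placement_usage = {'VCPU': flavor_vcpus, 'PCPU': 0}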

This patch proves this to be the case, allowing us to resolve the issue
later.

[1] https://specs.openstack.org/openstack/nova-specs/specs/train/approved/cpu-resources.html

Change-Id: I87cd4d14192b1a40cbdca6e3af0f818f2cab613e
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
Related-Bug: #1889633

@@ -257,6 +257,46 @@ class NUMAServersTest(NUMAServersTestBase):
        self._run_build_test(flavor_id, expected_usage=expected_usage)

    def test_create_server_with_isolate_thread_policy_old_configuration(self):
        """Create a server with the legacy 'hw:cpu_thread_policy=isolate'
        extra spec and configuration.

        This should pass and result in an instance consuming $flavor.vcpu
        host cores plus the thread sibling(s) of each of these cores. We will
        also be consuming VCPUs since we're on legacy configuration here,
        though that would in theory be fixed during a later reshape.
        """
        self.flags(
            cpu_dedicated_set=None, cpu_shared_set=None, group='compute')
        self.flags(vcpu_pin_set='0-3')

        # host has hyperthreads, which means we're going to end up consuming
        # $flavor.vcpu host cores plus the thread sibling(s) for each core
        host_info = fakelibvirt.HostInfo(
            cpu_nodes=1, cpu_sockets=1, cpu_cores=2, cpu_threads=2,
            kB_mem=(1024 * 1024 * 16),  # 16 GiB
        )
        fake_connection = self._get_connection(host_info=host_info)
        self.mock_conn.return_value = fake_connection

        extra_spec = {
            'hw:cpu_policy': 'dedicated',
            'hw:cpu_thread_policy': 'isolate',
        }
        flavor_id = self._create_flavor(vcpu=2, extra_spec=extra_spec)
        expected_usage = {'DISK_GB': 20, 'MEMORY_MB': 2048, 'VCPU': 2}

        self._run_build_test(flavor_id, expected_usage=expected_usage)

        # verify that we have consumed two cores plus the thread sibling of
        # each core, totalling four cores since the HostInfo indicates each
        # core should have two threads
        ctxt = nova_context.get_admin_context()
        host_numa = objects.NUMATopology.obj_from_db_obj(
            objects.ComputeNode.get_by_nodename(ctxt, 'compute1').numa_topology
        )
        self.assertEqual({0, 1, 2, 3}, host_numa.cells[0].pinned_cpus)

    def test_create_server_with_legacy_pinning_policy_fails(self):
        """Create a pinned instance on a host with no PCPUs.
@@ -319,6 +359,37 @@ class NUMAServersTest(NUMAServersTestBase):
            self.api.post_server, post)
        self.assertEqual(403, ex.response.status_code)

    def test_create_server_with_isolate_thread_policy_fails(self):
        """Create a server with the legacy 'hw:cpu_thread_policy=isolate'
        extra spec.

        This should fail on a host with hyperthreading.
        """
        self.flags(
            cpu_dedicated_set='0-3', cpu_shared_set='4-7', group='compute')
        self.flags(vcpu_pin_set=None)

        # host has hyperthreads, which means it should be rejected
        host_info = fakelibvirt.HostInfo(
            cpu_nodes=2, cpu_sockets=1, cpu_cores=2, cpu_threads=2,
            kB_mem=(1024 * 1024 * 16),  # 16 GiB
        )
        fake_connection = self._get_connection(host_info=host_info)
        self.mock_conn.return_value = fake_connection

        extra_spec = {
            'hw:cpu_policy': 'dedicated',
            'hw:cpu_thread_policy': 'isolate',
        }
        flavor_id = self._create_flavor(vcpu=2, extra_spec=extra_spec)

        # FIXME(stephenfin): This should go to error status since there
        # should not be a host available
        expected_usage = {
            'DISK_GB': 20, 'MEMORY_MB': 2048, 'PCPU': 0, 'VCPU': 2,
        }
        self._run_build_test(flavor_id, expected_usage=expected_usage)

    def test_create_server_with_pcpu(self):
        """Create a server using an explicit 'resources:PCPU' request.