Disable NUMATopologyFilter on rebuild

This change leverages the new NUMA constraint checking added in
I0322d872bdff68936033a6f5a54e8296a6fb3434 to allow the
NUMATopologyFilter to be skipped on rebuild.

As the new rebuild behavior enforces that no changes to the NUMA
constraints are allowed on rebuild, we no longer need to execute the
NUMATopologyFilter. Previously the NUMATopologyFilter would process
the rebuild request as if it were a request to spawn a new instance,
as the numa_fit_instance_to_host function is not rebuild aware.
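
To make the blind spot concrete, here is a self-contained toy model
(illustrative only; the real nova.virt.hardware.numa_fit_instance_to_host
operates on NUMA topology objects, not plain integers) showing why a
fitting function that only sees requested and free resources cannot
distinguish a rebuild from a new boot:

    # Toy stand-in for the fitting step (not the real nova helper).
    def fit_instance_to_host(host_free_pcpus, requested_pcpus):
        # Succeeds only if enough *free* capacity exists; it has no
        # way to know the requester already owns part of the usage.
        return requested_pcpus if requested_pcpus <= host_free_pcpus else None

    # A rebuild of a 4-pCPU pinned instance on a fully used host looks
    # identical to booting a second 4-pCPU instance, so it is rejected:
    print(fit_instance_to_host(host_free_pcpus=0, requested_pcpus=4))  # None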

As such, prior to this change a rebuild would only succeed if the
host had enough additional capacity for a second instance on the
same host meeting the requirements of the new image and existing
flavor. This behavior was incorrect on two counts, as a rebuild uses
a noop claim. First, since the resource usage cannot change, it was
incorrect to require additional capacity to rebuild an instance.
Second, it was incorrect not to assert that the resource usage
remained the same.

I0322d872bdff68936033a6f5a54e8296a6fb3434 addressed guarding the
rebuild against altering the resource usage, and this change allows
the in-place rebuild.
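
A hedged sketch of the shape of that API-side guard (function names
and key prefixes here are invented for illustration; the real check
compares the NUMA constraints derived from the flavor plus the old
and new image properties in nova's API layer):

    # Illustrative sketch only; not the verbatim nova API code.
    NUMA_KEYS = ('hw:numa', 'hw_numa', 'hw:cpu_policy', 'hw_cpu_policy')

    def numa_constraints(flavor_extra_specs, image_props):
        # NUMA constraints come from flavor extra specs merged with
        # image properties (e.g. hw_numa_nodes, hw_cpu_policy).
        merged = dict(flavor_extra_specs)
        merged.update(image_props)
        return {k: v for k, v in merged.items() if k.startswith(NUMA_KEYS)}

    def reject_rebuild_if_numa_changes(extra_specs, old_props, new_props):
        if (numa_constraints(extra_specs, old_props)
                != numa_constraints(extra_specs, new_props)):
            raise ValueError("rebuild would alter the NUMA constraints")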

This change uncovered a latent bug that will be addressed in a
follow-up change and updates the functional tests to note the
incorrect behavior.

Change-Id: I48bccc4b9adcac3c7a3e42769c11fdeb8f6fd132
Closes-Bug: #1804502
Implements: blueprint inplace-rebuild-of-numa-instances
(cherry picked from commit 3f9411071d)
Commit: 94c0362918 (parent: 745de99063)
Author: Sean Mooney 2019-10-21 16:17:17 +00:00
Committed-by: Stephen Finucane
3 changed files with 37 additions and 9 deletions

@@ -23,7 +23,11 @@ LOG = logging.getLogger(__name__)
 class NUMATopologyFilter(filters.BaseHostFilter):
     """Filter on requested NUMA topology."""

-    RUN_ON_REBUILD = True
+    # NOTE(sean-k-mooney): In change I0322d872bdff68936033a6f5a54e8296a6fb3434
+    # we validate that the NUMA topology does not change in the api. If the
+    # requested image would alter the NUMA constraints we reject the rebuild
+    # request and therefore do not need to run this filter on rebuild.
+    RUN_ON_REBUILD = False

     def _satisfies_cpu_policy(self, host_state, extra_specs, image_props):
         """Check that the host_state provided satisfies any available

@@ -16,18 +16,20 @@
 import mock
 import six
+from testtools import skip

 from oslo_config import cfg
 from oslo_log import log as logging

 from nova.conf import neutron as neutron_conf
 from nova import context as nova_context
 from nova import objects
 from nova.tests import fixtures as nova_fixtures
 from nova.tests.functional.api import client
 from nova.tests.functional.libvirt import base
 from nova.tests.unit import fake_notifier
 from nova.tests.unit.virt.libvirt import fakelibvirt

 CONF = cfg.CONF
 LOG = logging.getLogger(__name__)
@@ -894,7 +896,17 @@ class NUMAServersWithNetworksTest(NUMAServersTestBase):
         self.assertTrue(self.mock_filter.called)
         self.assertEqual('ACTIVE', status)

-    def test_rebuild_server_with_network_affinity(self):
+    # FIXME(sean-k-mooney): The logic of this test is incorrect.
+    # The test was written to assert that we failed to rebuild
+    # because the NUMA constraints were violated due to the attachment
+    # of an interface from a second host NUMA node to an instance with
+    # a NUMA topology of 1 that is affined to a different NUMA node.
+    # Nova should reject the interface attachment if the NUMA constraints
+    # would be violated and it should fail at that point not when the
+    # instance is rebuilt. This is a latent bug which will be addressed
+    # in a separate patch.
+    @skip("bug 1855332")
+    def test_attach_interface_with_network_affinity_violation(self):
         extra_spec = {'hw:numa_nodes': '1'}
         flavor_id = self._create_flavor(extra_spec=extra_spec)
         networks = [
@@ -929,10 +941,15 @@ class NUMAServersWithNetworksTest(NUMAServersTestBase):
                 'net_id': base.LibvirtNeutronFixture.network_2['id'],
             }
         }
+        # FIXME(sean-k-mooney): This should raise an exception as this
+        # interface attachment would violate the NUMA constraints.
         self.api.attach_interface(server['id'], post)

         post = {'rebuild': {
             'imageRef': 'a2459075-d96c-40d5-893e-577ff92e721c',
         }}

+        # NOTE(sean-k-mooney): the rest of the test is incorrect but
+        # is left to show the currently broken behavior.
         # Now this should fail because we've violated the NUMA requirements
         # with the latest attachment
         ex = self.assertRaises(client.OpenStackApiException,
@@ -1089,12 +1106,8 @@ class NUMAServersRebuildTests(NUMAServersTestBase):
         server = self._create_active_server(
             server_args={"flavorRef": flavor_id})

-        # TODO(sean-k-mooney): this should pass but i currently expect it to
-        # fail because the NUMA topology filter does not support in place
-        # rebuild and we have used all the resources on the compute node.
-        self.assertRaises(
-            client.OpenStackApiException, self._rebuild_server,
-            server, self.image_ref_1)
+        # This should succeed as the numa constraints do not change.
+        self._rebuild_server(server, self.image_ref_1)

     def test_rebuild_server_with_different_numa_topology_fails(self):
         """Create a NUMA instance and ensure inplace rebuild fails.

@@ -14,3 +14,14 @@ fixes:
     and rejects the rebuild.

     .. _`bug #1763766`: https://bugs.launchpad.net/nova/+bug/1763766
+features:
+  - |
+    With the changes introduced to address `bug #1763766`_, Nova now guards
+    against NUMA constraint changes on rebuild. As a result the
+    ``NUMATopologyFilter`` is no longer required to run on rebuild since
+    we already know the topology will not change and therefore the existing
+    resource claim is still valid. As such it is now possible to do an
+    in-place rebuild of an instance with a NUMA topology even if the image
+    changes, provided the new image does not alter the topology.
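
As a usage illustration, the rebuild action body exercised in the
test above can be sent directly to the compute API. This is a hedged
sketch using the requests library; the endpoint, token, and IDs are
placeholders, not values from this change:

    import requests

    compute_url = "http://controller:8774/v2.1"   # placeholder endpoint
    token = "gAAAAA..."                           # a valid keystone token
    server_id = "SERVER_UUID"
    new_image_id = "IMAGE_UUID"  # must not alter the NUMA constraints

    resp = requests.post(
        f"{compute_url}/servers/{server_id}/action",
        headers={"X-Auth-Token": token},
        json={"rebuild": {"imageRef": new_image_id}},
    )
    # With this change the rebuild proceeds in place (HTTP 202) even on
    # a fully used host, as long as the NUMA topology stays the same.
    resp.raise_for_status()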