Use instance project/user when creating RequestSpec during resize reschedule
When rescheduling from a failed cold migrate / resize, the compute service
does not pass the request spec back to conductor so we create one based on
the in-scope variables. This introduces a problem for some scheduler
filters like the AggregateMultiTenancyIsolation filter since it will create
the RequestSpec using the project and user information from the current
context, which for a cold migrate is the admin and might not be the owner
of the instance (which could be in some other project). So the
AggregateMultiTenancyIsolation filter might reject the request or select a
host that fits an aggregate for the admin but not the end user.

This fixes the problem by using the instance project/user information when
constructing the RequestSpec, which will take priority over the context in
RequestSpec.from_components().

Long-term we need the compute service to pass the request spec back to the
conductor during a reschedule, but we do this first since we can backport it.

NOTE(mriedem): RequestSpec.user_id was added in Rocky in commit 6e49019fae
so we have to remove its usage in this backport.

Conflicts:
    nova/tests/unit/conductor/test_conductor.py

NOTE(mriedem): The conflict is due to not having change
Ibc44e3b2261b314bb92062a88ca9ee6b81298dc3 in Pike.

Change-Id: Iaaf7f68d6874fd5d6e737e7d2bc589ea4a048fee
Closes-Bug: #1774205
(cherry picked from commit 8c21660819)
(cherry picked from commit 1162902280)
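The precedence described above (an explicitly passed project ID winning over
the context's project ID) can be illustrated with a simplified sketch. This
is a hypothetical stand-in, not the real nova RequestSpec.from_components()
implementation; FakeContext and the dict return value are illustration-only.

```python
class FakeContext:
    """Hypothetical stand-in for a nova RequestContext."""
    def __init__(self, project_id):
        self.project_id = project_id


def from_components(context, project_id=None):
    """Simplified sketch: an explicitly passed project_id (the instance
    owner's) takes priority over the caller's context project_id (which,
    for an admin-initiated cold migrate, is the admin's project)."""
    if project_id is None:
        project_id = context.project_id
    return {'project_id': project_id}


# An admin reschedules an instance owned by another project: passing the
# instance's project_id keeps the spec scoped to the real owner.
admin_ctx = FakeContext(project_id='admin-project')
spec = from_components(admin_ctx, project_id='instance-owner-project')
```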
parent 02fcd57260
commit ce7ad87809
@@ -298,7 +298,8 @@ class ComputeTaskManager(base.Base):
             request_spec = objects.RequestSpec.from_components(
                 context, instance.uuid, image,
                 flavor, instance.numa_topology, instance.pci_requests,
-                filter_properties, None, instance.availability_zone)
+                filter_properties, None, instance.availability_zone,
+                project_id=instance.project_id)
         else:
             # NOTE(sbauza): Resizes means new flavor, so we need to update the
             # original RequestSpec object for make sure the scheduler verifies
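To see why the instance's project_id matters here, the tenant-isolation check
that the commit message attributes to AggregateMultiTenancyIsolation can be
sketched as follows. This is a simplified illustration of the filter's
semantics (hosts in an aggregate with `filter_tenant_id` metadata only accept
the listed tenants), not the actual nova filter code; the function name and
metadata shape are assumptions for the example.

```python
def host_passes(aggregate_metadata_list, request_project_id):
    """Simplified sketch of AggregateMultiTenancyIsolation semantics:
    if any aggregate the host belongs to sets 'filter_tenant_id', only
    the tenants listed there may schedule onto the host."""
    tenant_ids = set()
    for metadata in aggregate_metadata_list:
        value = metadata.get('filter_tenant_id')
        if value:
            # Metadata may hold a comma-separated list of tenant IDs.
            tenant_ids.update(t.strip() for t in value.split(','))
    if not tenant_ids:
        return True  # host is not isolated to any tenant
    return request_project_id in tenant_ids
```

With the pre-fix behavior, a reschedule built the spec from the admin
context, so a host isolated to the instance owner's tenant would wrongly
fail this check for the admin's project; passing `instance.project_id`
makes the check run against the real owner.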
@@ -2288,6 +2288,7 @@ class ConductorTaskTestCase(_BaseTaskTestCase, test_compute.BaseTestCase):
             instance_type_id=flavor['id'],
             system_metadata={},
             uuid=uuids.instance,
+            project_id=fakes.FAKE_PROJECT_ID,
             user_id=fakes.FAKE_USER_ID,
             flavor=flavor,
             numa_topology=None,
@@ -2311,6 +2312,10 @@ class ConductorTaskTestCase(_BaseTaskTestCase, test_compute.BaseTestCase):
         set_vm_mock.assert_called_once_with(self.context, inst_obj.uuid,
                                             'migrate_server', updates,
                                             exception, legacy_request_spec)
+        spec_fc_mock.assert_called_once_with(
+            self.context, inst_obj.uuid, image, flavor, inst_obj.numa_topology,
+            inst_obj.pci_requests, {}, None, inst_obj.availability_zone,
+            project_id=inst_obj.project_id)

     @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid')
     @mock.patch.object(scheduler_utils, 'setup_instance_group')