Restore RT.old_resources if ComputeNode.save() fails

When starting nova-compute for the first time with a new node,
the ResourceTracker will create a new ComputeNode record in
_init_compute_node, but without all of the fields set on the
ComputeNode - "free_disk_gb", for example.

Later, _update_usage_from_instances will set some fields, like
free_disk_gb, on the ComputeNode record (it does this even if there
are no instances on the node - why is unclear).
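
For illustration, a condensed (and simplified - not the verbatim nova
source) view of that startup sequence:

    def _init_compute_node(self, context, resources):
        # Creates the DB record; fields like free_disk_gb are not set
        # here, so the persisted row is missing them.
        cn = objects.ComputeNode(context)
        cn.host = self.host
        self._copy_resources(cn, resources)
        self.compute_nodes[resources['hypervisor_hostname']] = cn
        cn.create()

    def _update_usage_from_instances(self, context, instances, nodename):
        cn = self.compute_nodes[nodename]
        # Sets free_disk_gb on the in-memory object only; nothing is
        # persisted until _update() calls ComputeNode.save().
        cn.free_disk_gb = cn.local_gb - cn.local_gb_used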

This will make the eventual call from _update() to _resource_change()
update the value in the old_resources dict and return True, and then
_update() will try to persist those ComputeNode changes to the database.
If that save fails, for example due to a DBConnectionError, the value
in old_resources will reflect the current version of the node in memory
but not what is actually in the database.
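
Roughly, _resource_change looks like this (paraphrased, not the
verbatim nova source); note that it refreshes the old_resources cache
*before* the caller attempts ComputeNode.save(), which is why a failed
save leaves the cache ahead of the database:

    import copy

    def _resource_change(self, compute_node):
        # Compare against the cached copy of this node; if anything
        # differs, update the cache and tell the caller to persist.
        nodename = compute_node.hypervisor_hostname
        old_compute = self.old_resources[nodename]
        if not obj_base.obj_equal_prims(
                compute_node, old_compute, ['updated_at']):
            self.old_resources[nodename] = copy.deepcopy(compute_node)
            return True
        return False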

Note that this failure does not result in the compute service failing
to start because ComputeManager._update_available_resource_for_node
traps the Exception and just logs it.

A subsequent trip through the RT._update() method - because of the
update_available_resource periodic task - will call _resource_change,
but because old_resources matches the current state of the node, it
returns False and the RT does not attempt to persist the changes to
the DB. _update() will then go on to call _update_to_placement,
which will create the resource provider in placement along with its
inventory, making it potentially a candidate for scheduling.

This can be a problem later in the scheduler because the
HostState._update_from_compute_node method may skip setting fields
on the HostState object if free_disk_gb is not set in the
ComputeNode record - which can then break filters and weighers
later in the scheduling process (see bug 1834691 and bug 1834694).
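
The scheduler-side guard in question looks roughly like the following
(paraphrased from HostState._update_from_compute_node; the exact log
message and surrounding checks are omitted):

    def _update_from_compute_node(self, compute):
        # If the compute record was just created and free_disk_gb has
        # not been saved yet, bail out without setting any fields on
        # the HostState, leaving stale/empty values for filters and
        # weighers downstream.
        if 'free_disk_gb' not in compute or compute.free_disk_gb is None:
            return
        # ... otherwise copy resource fields from the ComputeNode ...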

The fix proposed here is simple: if the ComputeNode.save() in
RT._update() fails, restore the previous value in old_resources so
that the next run through _resource_change compares against the state
from before the failed save and retries the update.

An alternative to this would be killing the compute service on startup
if there is a DB error, but that could have unintended side effects,
especially if the DB error is transient and the save could succeed on
the next try.

Obviously the scheduler code needs to be more robust also, but those
improvements are left for separate changes related to the other bugs
mentioned above.

Also, ComputeNode.update_from_virt_driver could be updated to set
free_disk_gb if possible to work around the tight coupling in the
HostState._update_from_compute_node code, but that is also the sort of
whack-a-mole change best made separately.
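
A rough sketch of that idea (hypothetical - not part of this change)
could derive the field inside update_from_virt_driver:

    def update_from_virt_driver(self, resources):
        # ... existing copying of fields from the virt driver's
        # resources dict ...
        # Hypothetical addition: derive free_disk_gb up front so the
        # field is never missing when the scheduler reads the record.
        if 'local_gb' in resources and 'local_gb_used' in resources:
            self.free_disk_gb = (resources['local_gb'] -
                                 resources['local_gb_used'])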

Change-Id: Id3c847be32d8a1037722d08bf52e4b88dc5adc97
Closes-Bug: #1834712
Matt Riedemann 2019-06-28 18:50:33 -04:00 committed by Chris Dent
parent b7c98befda
commit 11cb42f396
2 changed files with 40 additions and 1 deletion

nova/compute/resource_tracker.py

@@ -26,6 +26,7 @@ import os_resource_classes as orc
 import os_traits
 from oslo_log import log as logging
 from oslo_serialization import jsonutils
+from oslo_utils import excutils
 import retrying
 
 from nova.compute import claims
@@ -1052,12 +1053,23 @@ class ResourceTracker(object):
     def _update(self, context, compute_node, startup=False):
         """Update partial stats locally and populate them to Scheduler."""
+        # _resource_change will update self.old_resources if it detects changes
+        # but we want to restore those if compute_node.save() fails.
+        nodename = compute_node.hypervisor_hostname
+        old_compute = self.old_resources[nodename]
         if self._resource_change(compute_node):
             # If the compute_node's resource changed, update to DB.
             # NOTE(jianghuaw): Once we completely move to use get_inventory()
             # for all resource provider's inv data. We can remove this check.
             # At the moment we still need this check and save compute_node.
-            compute_node.save()
+            try:
+                compute_node.save()
+            except Exception:
+                # Restore the previous state in self.old_resources so that on
+                # the next trip through here _resource_change does not have
+                # stale data to compare.
+                with excutils.save_and_reraise_exception(logger=LOG):
+                    self.old_resources[nodename] = old_compute
 
         self._update_to_placement(context, compute_node, startup)
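
For readers unfamiliar with the oslo.utils helper used above:
excutils.save_and_reraise_exception is a context manager that logs and
re-raises the in-flight exception when the with-block exits, so the
cache restore happens without swallowing the error. A minimal
standalone sketch of the pattern (names here are illustrative, not
from nova):

    from oslo_log import log as logging
    from oslo_utils import excutils

    LOG = logging.getLogger(__name__)

    def save_with_rollback(record, cache, key, old_value):
        # Persist a record, but roll a cache entry back to its
        # previous value if the save raises, then re-raise.
        try:
            record.save()
        except Exception:
            with excutils.save_and_reraise_exception(logger=LOG):
                cache[key] = old_value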

nova/tests/unit/compute/test_resource_tracker.py

@@ -1655,6 +1655,33 @@ class TestUpdateComputeNode(BaseTestCase):
         self.assertIn('Unable to find services table record for nova-compute',
                       mock_log_error.call_args[0][0])
 
+    def test_update_compute_node_save_fails_restores_old_resources(self):
+        """Tests the scenario that compute_node.save() fails and the
+        old_resources value for the node is restored to its value from
+        before the call to _resource_change updated it.
+        """
+        self._setup_rt()
+        orig_compute = _COMPUTE_NODE_FIXTURES[0].obj_clone()
+        # Pretend the ComputeNode was just created in the DB but not yet saved
+        # with the free_disk_gb field.
+        delattr(orig_compute, 'free_disk_gb')
+        nodename = orig_compute.hypervisor_hostname
+        self.rt.old_resources[nodename] = orig_compute
+        # Now have an updated compute node with free_disk_gb set which should
+        # make _resource_change modify old_resources and return True.
+        updated_compute = _COMPUTE_NODE_FIXTURES[0].obj_clone()
+        ctxt = context.get_admin_context()
+        # Mock ComputeNode.save() to trigger some failure (realistically this
+        # could be a DBConnectionError).
+        with mock.patch.object(updated_compute, 'save',
+                               side_effect=test.TestingException('db error')):
+            self.assertRaises(test.TestingException,
+                              self.rt._update,
+                              ctxt, updated_compute, startup=True)
+        # Make sure that the old_resources entry for the node has not changed
+        # from the original.
+        self.assertTrue(self.rt._resource_change(updated_compute))
+
     def test_copy_resources_no_update_allocation_ratios(self):
         """Tests that a ComputeNode object's allocation ratio fields are
         not set if the configured allocation ratio values are default None.