nova/nova/virt/ironic
John Garbutt e95277fa3e Re-use existing ComputeNode on ironic rebalance
When one of several ironic based nova-compute services dies, a node
rebalance occurs to ensure there is still an active nova-compute
service dealing with requests for the instances running on the nodes
it managed.

Today, when this occurs, we create a new ComputeNode entry. This
change alters that logic to detect an ironic node rebalance; in that
case we re-use the existing ComputeNode entry, simply updating its
host field to match the new host the node has been rebalanced onto.
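
Conceptually, the new behaviour looks like the following minimal
sketch (hypothetical names, with an in-memory table standing in for
the database; not the actual nova code):

    import uuid

    # Maps an ironic node uuid to its ComputeNode record.
    compute_nodes = {}

    def get_or_create_compute_node(host, ironic_node_uuid):
        cn = compute_nodes.get(ironic_node_uuid)
        if cn is None:
            # First time this ironic node is seen: create a new record.
            cn = {'uuid': str(uuid.uuid4()),
                  'host': host,
                  'hypervisor_hostname': ironic_node_uuid}
            compute_nodes[ironic_node_uuid] = cn
        elif cn['host'] != host:
            # Rebalance: the node now belongs to a different nova-compute
            # service. Re-use the record (and its uuid); only host changes.
            cn['host'] = host
        return cn

    # ComputeNode.uuid stays stable across a rebalance:
    first = get_or_create_compute_node('compute-a', 'ironic-node-1')
    moved = get_or_create_compute_node('compute-b', 'ironic-node-1')
    assert first['uuid'] == moved['uuid']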

Previously we hit problems with placement when we got a new
ComputeNode.uuid for the same ironic_node.uuid. Re-using the existing
entry keeps the ComputeNode.uuid the same when the ComputeNode is
rebalanced.

Without keeping the same ComputeNode.uuid, placement errors out with
a 409 because we attempt to create a ResourceProvider that has the
same name as an existing ResourceProvider. Had that worked, we would
have hit the race that occurs after we create the ResourceProvider
but before we add back the allocations for existing instances.
Keeping the ComputeNode.uuid the same means we simply look up the
existing ResourceProvider in placement, avoiding all this pain and
tears.
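
To make the 409 concrete, here is a toy model of the placement
behaviour (hypothetical code, not the real placement API): resource
providers are keyed by uuid, but names must also be unique, so a
create with a fresh uuid but an old name is rejected.

    class Conflict(Exception):
        pass  # stands in for placement's HTTP 409 response

    providers_by_uuid = {}
    names_in_use = set()

    def ensure_resource_provider(rp_uuid, name):
        if rp_uuid in providers_by_uuid:
            # Same ComputeNode.uuid: look up the existing provider; its
            # allocations are untouched and nothing needs re-creating.
            return providers_by_uuid[rp_uuid]
        if name in names_in_use:
            # New ComputeNode.uuid for the same ironic node: the create
            # collides with the old provider's name -> 409 Conflict.
            raise Conflict('409: provider name %r already exists' % name)
        rp = {'uuid': rp_uuid, 'name': name}
        providers_by_uuid[rp_uuid] = rp
        names_in_use.add(name)
        return rp

    ensure_resource_provider('uuid-1', 'ironic-node-1')    # created
    ensure_resource_provider('uuid-1', 'ironic-node-1')    # found, no-op
    # ensure_resource_provider('uuid-2', 'ironic-node-1')  # raises Conflict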

Closes-Bug: #1714248
Co-Authored-By: Dmitry Tantsur <dtantsur@redhat.com>
Change-Id: I4253cffca3dbf558c875eed7e77711a31e9e3406
(cherry picked from commit e3c5e22d1f)
2017-12-12 10:10:53 -05:00
__init__.py Import Ironic Driver & supporting files - part 1 2014-09-05 19:00:12 -04:00
client_wrapper.py Ironic: Support boot from Cinder volume 2017-07-25 03:57:51 +00:00
driver.py Re-use existing ComputeNode on ironic rebalance 2017-12-12 10:10:53 -05:00
ironic_states.py Ironic: Call unprovision for nodes in DEPLOYING state 2015-09-09 10:38:22 +01:00
patcher.py Ironic: Support boot from Cinder volume 2017-07-25 03:57:51 +00:00