f8c2df78f2
When we fail to schedule an instance, e.g. there are no hosts
available, conductor creates the instance in the cell0 database
and deletes the build request. At this point quota usage has
already been incremented in the main 'nova' database.
When the instance is later deleted, the build request is already
gone, so _delete_while_booting returns False and we look up the
instance in cell0 and delete it from there, but that flow was not
decrementing quota usage the way _delete_while_booting does.
This change adds the same quota usage decrement handling that
_delete_while_booting performs.
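The fixed flow can be sketched as the following simplified,
self-contained Python. All names here (FakeQuota, delete_instance,
the dict-based "databases") are illustrative stand-ins, not the
actual nova objects or quota reservation API:

```python
class FakeQuota:
    """Stand-in for quota usage tracked in the main 'nova' database."""

    def __init__(self):
        self.instances_in_use = 0

    def reserve(self, delta):
        # A reservation records an intended usage change without
        # applying it yet.
        return {"instances": delta}

    def commit(self, reservation):
        # Committing the reservation applies the usage change.
        self.instances_in_use += reservation["instances"]


def delete_instance(quota, build_requests, cell0_instances, uuid):
    """Delete an instance that may only exist as a build request
    or as a cell0 record."""
    # _delete_while_booting equivalent: the build request still
    # exists, and this path already decremented quota usage.
    if uuid in build_requests:
        del build_requests[uuid]
        quota.commit(quota.reserve(-1))
        return True
    # Build request already gone: the instance was buried in cell0.
    # Before this change, this path deleted the instance but never
    # decremented quota usage; the commit below is the added step.
    if uuid in cell0_instances:
        del cell0_instances[uuid]
        quota.commit(quota.reserve(-1))
        return True
    return False
```

With one instance counted in usage and no build request left, the
cell0 path now brings usage back to zero instead of leaking it.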
NOTE(mriedem): This change also pulls in some things from
I7de87dce216835729283bca69f0eff59a679b624 which is not being
backported to Ocata since in Pike it solves a slightly different
part of this quota usage issue. In Pike the cell mapping
db_connection is actually stored on the context object when we get
the instance from nova.compute.api.API.get(), so the fix in Pike
differs slightly from the one in Ocata. However, what we need to
pull from that Pike change is:
1. We need to target the cell that the instance lives in to get the
flavor information when creating the quota reservation.
2. We need to change the functional regression test to assert that
the bug is fixed.
The code and tests here are therefore a blend of both Pike changes,
without requiring a full backport of the second part of the Pike
fix.
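Point 1 above (targeting the instance's cell to read the flavor for
the quota reservation) can be sketched as follows. This is a hedged
illustration only: Context, target_cell and the dict-based cell
"databases" are simplified stand-ins for nova's real context
targeting machinery, not its actual API:

```python
from contextlib import contextmanager


class Context:
    """Minimal stand-in for a request context carrying a db connection."""

    def __init__(self):
        self.db_connection = "api_db"


@contextmanager
def target_cell(ctxt, cell_mapping):
    # Temporarily point the context at the cell's database connection,
    # restoring the original connection afterwards.
    original = ctxt.db_connection
    ctxt.db_connection = cell_mapping["db_connection"]
    try:
        yield ctxt
    finally:
        ctxt.db_connection = original


def get_flavor_for_reservation(ctxt, cell_mapping, instance_uuid, cell_dbs):
    # The flavor lives in the cell database that holds the instance
    # (cell0 for failed-to-schedule instances), so the lookup must be
    # targeted at that cell; an untargeted lookup would query the
    # wrong database and not find the instance.
    with target_cell(ctxt, cell_mapping):
        return cell_dbs[ctxt.db_connection][instance_uuid]["flavor"]
```

The targeted lookup finds the flavor in cell0, and the context is
left pointing back at its original connection afterwards.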
Change-Id: I4cb0169ce0de537804ab9129bc671d75ce5f7953
Partial-Bug: #1670627
(cherry picked from commit