52679374a4
_get_host_states returns a generator which closes over seen_nodes, which is local, and self.host_state_map, which is global. It also modifies self.host_state_map, and will remove entries whose compute nodes are no longer present.

If a compute node is deleted while a filter is still evaluating the generator returned by _get_host_states, the entry in self.host_state_map will be deleted if _get_host_states is called again. This will cause a KeyError when the first generator comes to evaluate the entry for the deleted compute node.

We fix this by modifying the returned generator expression to check that a host_state_map entry still exists before returning it. An existing unit test is modified to exhibit the bug.

Conflicts:
	nova/scheduler/filter_scheduler.py
	nova/scheduler/host_manager.py

NOTE(mriedem): The conflict in filter_scheduler.py is due to
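The race described above can be sketched in isolation. This is a hypothetical reduction, not Nova's actual code: the names (HostManager, host_state_map, _get_host_states, seen_nodes) are modelled on the commit message, and the fix shown is the membership check the commit describes, applied inside the returned generator expression.

```python
class HostManager:
    def __init__(self):
        # Shared mutable state; every generator we return closes over it.
        self.host_state_map = {}

    def _get_host_states(self, compute_nodes):
        seen_nodes = set()
        for node in compute_nodes:
            self.host_state_map.setdefault(node, "state-for-%s" % node)
            seen_nodes.add(node)
        # Prune entries whose compute nodes are gone. A generator returned
        # by an earlier call may still reference the pruned keys.
        for dead in set(self.host_state_map) - seen_nodes:
            del self.host_state_map[dead]
        # Buggy version: (self.host_state_map[node] for node in seen_nodes)
        # raises KeyError once a later call has pruned an entry. The fix is
        # to check the entry still exists before returning it:
        return (self.host_state_map[node] for node in seen_nodes
                if node in self.host_state_map)


mgr = HostManager()
gen = mgr._get_host_states(["node1", "node2"])  # lazy; nothing evaluated yet
mgr._get_host_states(["node1"])                 # node2 deleted meanwhile
states = list(gen)                              # node2 skipped, no KeyError
```

Because the generator is lazy, the second call runs to completion (and prunes node2) before the first caller ever evaluates its generator; without the `if node in self.host_state_map` guard, `list(gen)` would raise KeyError for node2.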