Restart scheduler in TestNovaManagePlacementHealAllocations
The TestNovaManagePlacementHealAllocations tests rely on the CachingScheduler specifically because it does not use placement and therefore does not create allocations during scheduling, which gives us instances that need to heal allocations.

However, we have a race in the test setup where the scheduler is started before the compute services. On startup, the CachingScheduler runs a periodic task to fetch the hosts from the DB to initialize its cache, and then uses that cache during scheduling until the periodic task runs again. So what we can hit is: the scheduler starts and loads an empty cache, then we start the computes and try to create an instance, but because of the empty cache we fail with a NoValidHost error.

This change restarts and resets the CachingScheduler cache *after* we have started the computes and asserted they are available in the API.

Change-Id: I32f607a436e9851a96877123ae3d1fe51f444f73
Closes-Bug: #1781648
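The race described above can be sketched with a toy stand-in. This is a minimal illustration, not Nova code: `FakeCachingScheduler` and its methods are hypothetical names chosen to mirror the stop/reset/start pattern the fix uses.

```python
# Hypothetical stand-in (not Nova's actual classes): a scheduler that
# snapshots host state once at startup, like the CachingScheduler's
# initial periodic-task run.

class FakeCachingScheduler:
    def __init__(self, host_db):
        self.host_db = host_db          # shared "database" of compute hosts
        self.all_host_states = None     # the cache

    def start(self):
        # First periodic-task run: cache whatever hosts exist *right now*.
        if self.all_host_states is None:
            self.all_host_states = list(self.host_db)

    def stop(self):
        pass

    def select_host(self):
        if not self.all_host_states:
            # Empty cache: scheduling fails, like Nova's NoValidHost.
            raise RuntimeError('NoValidHost')
        return self.all_host_states[0]


host_db = []                            # no computes registered yet
sched = FakeCachingScheduler(host_db)
sched.start()                           # races ahead of the computes: caches []

host_db.append('compute1')              # computes come up after the scheduler

# Without a restart, the stale empty cache still fails scheduling.
try:
    sched.select_host()
    raised = False
except RuntimeError:
    raised = True
assert raised

# The fix: stop, reset the cache, and start again once computes are up.
sched.stop()
sched.all_host_states = None
sched.start()
assert sched.select_host() == 'compute1'
```

The same ordering is what the diff below enforces in the test's setUp: only after the computes are started (and visible in the API) is the scheduler restarted with a cleared cache.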
parent 6522ea3ecf
commit 4fe4fbe7a4
@@ -387,6 +387,14 @@ class TestNovaManagePlacementHealAllocations(
         self.flavor = self.api.get_flavors()[0]
         self.output = StringIO()
         self.useFixture(fixtures.MonkeyPatch('sys.stdout', self.output))
+        # On startup, the CachingScheduler runs a periodic task to pull the
+        # initial set of compute nodes out of the database which it then puts
+        # into a cache (hence the name of the driver). This can race with
+        # actually starting the compute services so we need to restart the
+        # scheduler to refresh the cache.
+        self.scheduler_service.stop()
+        self.scheduler_service.manager.driver.all_host_states = None
+        self.scheduler_service.start()

     def _boot_and_assert_no_allocations(self, flavor, hostname):
         """Creates a server on the given host and asserts neither have usage