Eliminate unnecessary sleeps during no-op update

For an update that involves e.g. stepping through the whole graph and
verifying that nothing needs to be updated, we spend a lot of time
sleeping unnecessarily. Every task exits without yielding (i.e. it is
already complete after the call to TaskRunner.start()), yet the
DependencyTaskGroup yields after each set of ready tasks, so the
minimum sleep time in seconds is the maximum path length in the graph
minus one.
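
As a rough illustration (a minimal sketch with hypothetical names, not
Heat's actual scheduler), the old behaviour amounts to the following:
nodes that finish during one pass cannot unblock their dependents until
the next pass, and each pass boundary costs a sleep in the caller.

    def sleeps_needed(graph):
        """graph maps each node to the set of nodes it must wait for."""
        done = set()
        passes = 0
        while len(done) < len(graph):
            # Snapshot the ready set up front, as the old _ready()
            # effectively did: nodes completing in this pass do not
            # unblock their dependents until the next pass.
            ready = [n for n, deps in graph.items()
                     if n not in done and deps <= done]
            done.update(ready)  # every task completes without yielding
            passes += 1
        return passes - 1  # a sleep happens between successive passes

    # A no-op chain of length 3 still incurs 2 sleeps:
    assert sleeps_needed({'A': set(), 'B': {'A'}, 'C': {'B'}}) == 2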

This change fixes that by removing a node from the graph as soon as its
task is started and found to be already complete. Since the _ready()
call returns an iterator, any later tasks that were blocked only on
this one can then start within the same pass. To ensure that any tasks
blocked only on this one _do_ appear later in the iteration, iterate
over the graph in topologically sorted order.
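
Under the same toy model (again a sketch, not Heat's actual code),
removing a finished node from the graph immediately and visiting nodes
in topologically sorted order lets the whole chain collapse in one pass:

    def sleeps_needed_fixed(graph, topo_order):
        done = set()
        passes = 0
        while len(done) < len(graph):
            for n in topo_order:  # blockers come before dependents
                if n not in done and graph[n] <= done:
                    done.add(n)  # node leaves the graph immediately
            passes += 1
        return passes - 1

    # The same no-op chain now finishes with zero sleeps:
    assert sleeps_needed_fixed({'A': set(), 'B': {'A'}, 'C': {'B'}},
                               ['A', 'B', 'C']) == 0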

The potential downside would be any case where actions complete quickly
(i.e. without yielding) but we still need to throttle them. An obvious
example is a resource type with no check_create_complete() function -
creating a lot of these in a row could result in quota failures on the
target API. However, the Resource.action_handler_task() task always
yields at least once even if there is no check, so this patch should
not change its behaviour.
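
The pattern relied on here looks roughly like this (a hypothetical
sketch of the shape of action_handler_task(), not the actual Resource
code): the unconditional yield guarantees at least one scheduler
round-trip even when no completion check exists.

    def action_handler_task(handle, check=None):
        data = handle()
        yield  # unconditional: the action is throttled at least once
        if check is not None:
            while not check(data):
                yield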

Change-Id: I734561814d2784e710d0b9ec3ef7834f44f579b2
Closes-Bug: #1523303
Zane Bitter 2015-12-07 17:57:57 -05:00
parent 2467d83377
commit d66d57f187
2 changed files with 7 additions and 4 deletions

@@ -351,7 +351,8 @@ class DependencyTaskGroup(object):
         of the error will be cancelled). Once all chains are complete, any
         errors will be rolled up into an ExceptionGroup exception.
         """
-        self._runners = dict((o, TaskRunner(task, o)) for o in dependencies)
+        self._keys = list(dependencies)
+        self._runners = dict((o, TaskRunner(task, o)) for o in self._keys)
         self._graph = dependencies.graph(reverse=reverse)
         self.error_wait_time = error_wait_time
         self.aggregate_exceptions = aggregate_exceptions
@@ -374,6 +375,8 @@ class DependencyTaskGroup(object):
             try:
                 for k, r in self._ready():
                     r.start()
+                    if not r:
+                        del self._graph[k]
 
                 yield
@@ -417,8 +420,8 @@ class DependencyTaskGroup(object):
         Ready subtasks are subtasks whose dependencies have all been satisfied,
         but which have not yet been started.
         """
-        for k, n in six.iteritems(self._graph):
-            if not n:
+        for k in self._keys:
+            if not self._graph.get(k, True):
                 runner = self._runners[k]
                 if runner and not runner.started():
                     yield k, runner
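
Two details of the new _ready() are worth noting. Iterating over the
self._keys snapshot (rather than the live dict) is what makes the
del self._graph[k] in __call__ safe while _ready() is still being
consumed, and the True default in self._graph.get(k, True) makes
already-removed nodes compare as "not ready" so they are never yielded
again. An illustrative snippet of the failure the snapshot avoids:

    graph = {'A': set(), 'B': {'A'}}
    try:
        for k in graph:   # iterating the live dict...
            del graph[k]  # ...while deleting from it
    except RuntimeError as exc:
        print(exc)  # dictionary changed size during iteration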

@@ -94,7 +94,7 @@ class DependencyTaskGroupTest(common.HeatTestCase):
         self.steps = 0
         self.m.StubOutWithMock(scheduler.TaskRunner, '_sleep')
         with self._dep_test(('second', 'first')):
-            scheduler.TaskRunner._sleep(None).AndReturn(None)
+            pass
 
     def test_single_node(self):
         with self._dep_test(('only', None)) as dummy:
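
With the fix, the two-node chain completes within a single pass, so no
call to TaskRunner._sleep() is expected any more; the mox expectation
for the sleep is therefore dropped from the test.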