This flag works only when the -w option is used; it causes the report
command to end prematurely with exit code 1 and display the status of all
tasks when it encounters a new error in an orchestration task.
Change-Id: I7998bb1e1e8da1c76a69aa066af6460eb2fcee1c
It was possible to fetch stale data from the database when high isolation
levels are used; to avoid such issues we simply restart the transaction
after each report interval.
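Schematically, the pattern looks roughly like this (a minimal sketch;
report_loop, session_factory and report are illustrative names, not
Solar's actual API):

import time

def report_loop(session_factory, report, interval=1.0):
    while True:
        session = session_factory()   # a fresh transaction sees a fresh snapshot
        try:
            report(session)           # reads now observe recently committed data
            session.commit()
        finally:
            session.close()           # end the transaction before the next interval
        time.sleep(interval)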
Change-Id: I64a9843aa64adf4c710a9f593bdeaa2f5b3c5fce
All solar cli actions will be atomic; for example, if a user is going
to create resources from a composer file, either all of them or none will
be added to the database.
If for some reason this behaviour is undesirable for a particular command,
the developer can override it by using the default click command:
@group.command(cls=click.Command)
For those who are using solar as a library, a decorator and context
managers are available in the following module:
from solar.dblayer.utils import atomic

@atomic
def setup():
    pass  # all writes inside commit together, or not at all
Change-Id: I8491d90f17c25edc85f18bc7bd7e16c32c3f4561
Replaced EGroup with BaseGroup, which has a default error_wrapper method.
Added BaseGroup to all click groups using empty proxy classes.
Added inheritance from SolarError to several custom exception classes.
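Roughly, the pattern is the following (a hedged sketch: only click's
Group API is real here; the SolarError stand-in and the wrapper body are
assumptions):

import sys
import click

class SolarError(Exception):   # stand-in for solar's base exception class
    pass

class BaseGroup(click.Group):
    def error_wrapper(self, exc):
        click.echo(str(exc), err=True)
        sys.exit(1)

    def invoke(self, ctx):
        try:
            return super(BaseGroup, self).invoke(ctx)
        except SolarError as exc:   # any SolarError subclass ends up here
            self.error_wrapper(exc)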
Change-Id: I4afa2f23ef4486c3a1565c04419f4b8dff21705a
The procedure in staged_log that was adding log items based on connections
was reworked to avoid creating LogItem instances before filtering out
children that weren't changed (those with an empty diff or connections_diff).
That problem was leading to unpredictable behaviour during updates,
removal and the related discard/revert scenarios.
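The rework amounts to the filter-before-create pattern; a hedged sketch
(the function names and the LogItem constructor signature are illustrative,
not Solar's actual code):

def staged_items(children, compute_diffs, log_item_cls):
    for child in children:
        diff, connections_diff = compute_diffs(child)
        if not diff and not connections_diff:
            continue   # unchanged child: no LogItem is instantiated at all
        yield log_item_cls(child, diff, connections_diff)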
Change-Id: I65adb8262fdbe10299c02c54db9d19fb255bceea
If a resource was already created (a commit operation was triggered),
we will implicitly stage the 'update' action.
This was the default solar behaviour, which was changed in one of the
patches related to the mentioned blueprint.
related to blueprint refactor-process-of-staging-changes
Change-Id: I861dfca4f6a68cb8b1d9914d6f6a082ed9e865cf
By using child weights for scheduling we can unlock
concurrency and decrease the total time of execution.
As an example, consider the following scenario:
Tasks A and B can't run concurrently because of the node-limit.
Tasks A and C have a logical dependency, and thus are not concurrent.
Tasks B and C will be executed on different nodes, and don't
have any logical dependency.
Since A and B don't have parents, we may schedule either of these tasks
and execution will still be logically correct; but if we choose B,
the total time of execution will be B + A + C, whereas
if we select A, the total time of execution may be reduced
to A + max(B, C).
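A hedged sketch of the idea, not Solar's implementation: the weight of a
task approximates the longest chain of work below it, and among the ready
tasks we pick the heaviest:

import networkx as nx

def weight(dag, task, cache=None):
    # 1 unit for the task itself plus the heaviest chain of descendants
    cache = {} if cache is None else cache
    if task not in cache:
        child_weights = [weight(dag, c, cache) for c in dag.successors(task)]
        cache[task] = 1 + (max(child_weights) if child_weights else 0)
    return cache[task]

def pick_next(dag, ready):
    # prefer the ready task that unlocks the longest remaining chain
    return max(ready, key=lambda t: weight(dag, t))

dag = nx.DiGraph([('A', 'C')])   # the A -> C dependency from the example
dag.add_node('B')
assert pick_next(dag, ['A', 'B']) == 'A'   # weight(A) == 2 > weight(B) == 1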
Change-Id: I52a6c20e8c3d729ed20da822f45cbad90e51f2df
Closes-Bug: 1554105
The current patch addresses several problems:
1. A lot of forced updates on every tick of the scheduler lead
to increased cpu consumption of solar-worker.
2. In order to represent the solar dbmodel Task using the networkx
interface, a lot of Task properties are duplicated and copied each time
a graph object is created.
Solving the 2nd problem allows us to move the update logic into the
scheduler, which guarantees that on each scheduler tick we update no more
than the reported task plus the children of that task.
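Schematically (a sketch under assumptions: save, is_ready and the 'READY'
status are illustrative, not Solar's API):

def on_report(dag, task, status, save, is_ready):
    save(task, status)                   # one write for the reported task
    for child in dag.successors(task):   # plus at most its children
        if is_ready(dag, child):         # e.g. all of the child's parents are done
            save(child, 'READY')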
Closes-Bug: 1560059
Change-Id: I3ee368ff03b7e24e783e4a367d51e9a84b28a4d9
The current implementation of DBModelProxy doesn't allow the original
hash function of the Model class to be used.
To avoid this problem we will store references to
Model instances in a WeakValueDictionary instead of a WeakSet.
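A generic illustration of the difference (not Solar's code): a WeakSet
hashes the stored instances themselves, so a proxy that shadows __hash__
breaks it, while a WeakValueDictionary only ever hashes the external key:

import weakref

class Model(object):
    def __init__(self, key):
        self.key = key

_instances = weakref.WeakValueDictionary()

def track(instance):
    # hash() is computed for instance.key (a string), never for the
    # instance itself, so a proxied __hash__ is irrelevant here
    _instances[instance.key] = instance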
Change-Id: If92af140c9aaad3a46b24872dae16969b1090df8
Closes-Bug: 1560369
cls.bucket.get expects the key to be a string,
but instead it may receive a dict where one of the
keys is *key*.
This problem shows up on the riak backend.
Change-Id: I824889447ea229ac7005df31156afe79dc78f42b
The current change addresses several problems:
- It was impossible to re-stage already committed resources,
for example to re-run openstack actions without making any artificial
updates to solar inputs.
- There was no way to execute actions that are not related
to state in solar (run/update/remove); an example would be
a restart of services.
- The following changes also address an isolation problem in the staging
procedure. By design solar is isolated using tags semantics,
but the previous implementation of *process* was building a graph
unconditionally for all staged resources. It was reworked and now
we can support partial processing of resources, based on tags.
Implicit staging will be done when a resource is updated/created/removed.
Additionally, actions can be staged using the solar ch stage command;
to support this, the following flags were added:
--action, -a - action that should be staged
--tag, -t - tags to select a group of resources
--name, -n - resource that will be staged
Only one of name or tag will be used; if the user provides both, name
takes higher priority (see the example below).
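For example, staging the custom *restart* action for a single resource
(the resource name nova_api is illustrative):

solar ch stage -a restart -n nova_api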
Reverts and discard work as previously for
creation/update/removal of resources, but an Exception will be raised if
a revert is attempted for a custom action, such as *restart*.
Processing staged items can be achieved with:
solar ch process -t tag1 -t tag2
History and staged log items will be stored in different buckets.
A custom siblings resolver for LogItem will ensure that only one action
per resource is staged.
implements blueprint refactor-process-of-staging-changes
Change-Id: I9e634803a38d80213b87518cd2c8fdc022237aa0
Supported parameters:
- pool_size - how big the pool is
- pool_overflow - how many overflow connections should be allowed
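A generic sketch of how such pool_size/pool_overflow semantics typically
behave (illustrative only, not Solar's implementation):

import threading

class Pool(object):
    def __init__(self, make_conn, pool_size, pool_overflow):
        self._make = make_conn
        self._idle = []
        self._size = pool_size
        # caps the total open connections at pool_size + pool_overflow
        self._sema = threading.BoundedSemaphore(pool_size + pool_overflow)

    def acquire(self):
        self._sema.acquire()
        return self._idle.pop() if self._idle else self._make()

    def release(self, conn):
        if len(self._idle) < self._size:
            self._idle.append(conn)   # keep up to pool_size idle connections
        else:
            conn.close()              # overflow connections are not kept
        self._sema.release()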
Change-Id: Iba92eb94754ef7314bc3d4bf0e413e7d61e027f8
This stops pollution of subsequent orchestrations with time values
from previous runs.
Change-Id: I2b0e495f84768aee3545f33e388c9bfb20d76fa4
Closes-bug: 1554058
Events of the dependency type by design can't insert tasks into
the graph during build, but if a reaction is processed
after a dependency and they have the same child, it is possible
that the dependency won't be present in the resulting graph,
which will lead to incorrect ordering.
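A simplified illustration of the failure mode (not Solar's actual code;
add_dependency/add_reaction are stand-ins for the event processing):

import networkx as nx

dag = nx.DiGraph()
dag.add_nodes_from(['A', 'B'])

def add_dependency(parent, child):
    # dependency events may not insert nodes, so the edge is
    # silently skipped if the child is not in the graph yet
    if parent in dag and child in dag:
        dag.add_edge(parent, child)

def add_reaction(parent, child):
    dag.add_edge(parent, child)   # reactions do insert missing nodes

add_dependency('A', 'C')           # processed first: C is absent, edge is lost
add_reaction('B', 'C')             # inserts C afterwards
assert not dag.has_edge('A', 'C')  # C may now run before A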
Closes-Bug #1553187
Change-Id: I97cae1be538df5dd8ccd8b8bfcfc5bb3541b6e98
When a repository is created from source, we create it in a temporary
directory, which is moved to the proper repo directory upon completion, or
removed if an error occurred.
The .tmp dir in _REPOS_LOCATION is created when needed; it is also
excluded from the repo listing since it is not a repository.
The .tmp dir is created in the same directory as the repositories to
ensure that we can safely os.rename the temp dir to its destination.
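A hedged sketch of this atomic-create pattern (create_repo and populate
are illustrative names, not the module's actual API):

import os
import shutil
import tempfile

def create_repo(repos_location, name, populate):
    tmp_root = os.path.join(repos_location, '.tmp')
    if not os.path.isdir(tmp_root):
        os.makedirs(tmp_root)
    # same filesystem as the destination, so os.rename cannot cross devices
    tmp_dir = tempfile.mkdtemp(dir=tmp_root)
    try:
        populate(tmp_dir)   # may raise; nothing is published yet
        os.rename(tmp_dir, os.path.join(repos_location, name))
    except Exception:
        shutil.rmtree(tmp_dir, ignore_errors=True)
        raise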
Change-Id: Ie57f0273ce2eca96966323fba916f700fad7e5ca
Implements: blueprint repository-module-atomic-like-create
An additional message is printed for the case when there are no tasks in
the graph to report.
Change-Id: I0074e8e8b0d5a4e25cdb90187790820c4f1c73e0
Closes-bug: 1547537
When a reaction was inserted into the changes graph, we missed
all possible successors of that reaction.
Closes-Bug #1552275
Change-Id: Iba21e20d1d31086bf76d64b906b52bc85fcd7693
* Added test for soft_stop
* Replaced simple fixture with simple_plan fixture in tests
Change-Id: I2375c586f2e733f1ff3de3455b19a39d3baff7be
Closes-bug: 1549312
- For rsync, use a simple mkdir -p {} && rsync
- For old fabric, use the run transport before sync to create the paths
Change-Id: I268346c06666f29b13e83b4634f84c564c0c1a31
Closes-bug: #1552152