Retire Packaging Deb project repos

This commit is part of a series to retire the Packaging Deb
project. Step 2 is to remove all content from the project
repos, replacing it with a README that explains where ongoing
work can be found and how to recover the repo if it is needed
at some future point (as described in
https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project).

Change-Id: Id1c9c1dd5ee8186263661b861d0ee0f8828deff8
Tony Breeds 2017-09-12 15:41:14 -06:00
parent ca307415a1
commit c116ca4f2e
518 changed files with 14 additions and 76294 deletions

@@ -1,9 +0,0 @@
[run]
branch = True
source = mistral
omit =
.tox/*
mistral/tests/*
[report]
ignore_errors = True

.gitignore
@@ -1,58 +0,0 @@
*.py[cod]
*.sqlite
# C extensions
*.so
# Packages
*.egg*
dist
build
.venv
eggs
parts
bin
var
sdist
develop-eggs
.installed.cfg
lib
lib64
# Installer logs
pip-log.txt
# Unit test / coverage reports
.coverage
.tox
nosetests.xml
cover/*
.testrepository/
subunit.log
.mistral.conf
AUTHORS
ChangeLog
# Translations
*.mo
# Mr Developer
.mr.developer.cfg
.project
.pydevproject
.idea
.DS_Store
etc/*.conf
etc/mistral.conf.sample
#Linux swap files range from .saa to .swp
*.s[a-w][a-p]
# Files created by releasenotes build
releasenotes/build
# Files created by doc build
doc/source/api
# Files created by API build
api-ref/build/

@@ -1,4 +0,0 @@
[gerrit]
host=review.openstack.org
port=29418
project=openstack/mistral.git

@@ -1,10 +0,0 @@
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_LOG_CAPTURE=${OS_LOG_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
${PYTHON:-python} -m subunit.run discover -t ./ ./mistral/tests/unit $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list
test_run_concurrency=echo ${TEST_RUN_CONCURRENCY:-0}

@@ -1,66 +0,0 @@
=======================
Contributing to Mistral
=======================
If you're interested in contributing to the Mistral project,
the following will help get you started.
Contributor License Agreement
=============================
In order to contribute to the Mistral project, you need to have
signed OpenStack's contributor's agreement:
* https://docs.openstack.org/infra/manual/developers.html
* https://wiki.openstack.org/CLA
Project Hosting Details
=======================
* Bug trackers
* General mistral tracker: https://launchpad.net/mistral
* Python client tracker: https://launchpad.net/python-mistralclient
* Mailing list (prefix subjects with ``[Mistral]`` for faster responses)
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
* Documentation
* https://docs.openstack.org/mistral/latest/
* IRC channel
* #openstack-mistral at FreeNode
* https://wiki.openstack.org/wiki/Mistral/Meetings_Meetings
* Code Hosting
* https://github.com/openstack/mistral
* https://github.com/openstack/python-mistralclient
* https://github.com/openstack/mistral-dashboard
* https://github.com/openstack/mistral-lib
* https://github.com/openstack/mistral-specs
* Code Review
* https://review.openstack.org/#/q/mistral
* https://review.openstack.org/#/q/python-mistralclient
* https://review.openstack.org/#/q/mistral-dashboard
* https://review.openstack.org/#/q/mistral-lib
* https://review.openstack.org/#/q/mistral-extra
* https://review.openstack.org/#/q/mistral-specs
* https://docs.openstack.org/infra/manual/developers.html#development-workflow
* Mistral Design Specifications
* https://specs.openstack.org/openstack/mistral-specs/

@@ -1,18 +0,0 @@
Style Commandments
==================
Read the OpenStack Style Commandments https://docs.openstack.org/hacking/latest/
Mistral Specific Commandments
-----------------------------
- [M001] Use LOG.warning(). LOG.warn() is deprecated.
- [M318] Replace assertEqual(A, None) or assertEqual(None, A) with the more
  suitable assert, such as assertIsNone(A)
- [M319] Enforce use of assertTrue/assertFalse
- [M320] Enforce use of assertIs/assertIsNot
- [M327] Do not use xrange(). xrange() is not compatible with Python 3. Use
range() or six.moves.range() instead.
- [M328] Python 3: do not use dict.iteritems.
- [M329] Python 3: do not use dict.iterkeys.
- [M330] Python 3: do not use dict.itervalues.

LICENSE
@@ -1,175 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

README
@@ -0,0 +1,14 @@
This project is no longer maintained.
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".
For ongoing work on maintaining OpenStack packages in the Debian
distribution, please see the Debian OpenStack packaging team at
https://wiki.debian.org/OpenStack/.
For any further questions, please email
openstack-dev@lists.openstack.org or join #openstack-dev on
Freenode.

@@ -1,268 +0,0 @@
========================
Team and repository tags
========================
.. image:: https://governance.openstack.org/badges/mistral.svg
:target: https://governance.openstack.org/reference/tags/index.html
Mistral
=======
Workflow Service for the OpenStack cloud. This project aims to provide a
mechanism to define tasks and workflows without writing code, and to manage
and execute them in a cloud environment.
Installation
~~~~~~~~~~~~
The following are the steps to install Mistral on Debian-based systems.
To install Mistral, you have to install the following prerequisites::
$ apt-get install python-dev python-setuptools libffi-dev \
libxslt1-dev libxml2-dev libyaml-dev libssl-dev
**Mistral can be used without authentication at all or it can work with
OpenStack.**
In the case of OpenStack, it works **only with Keystone v3**, so make sure **Keystone
v3** is installed.
Install Mistral
---------------
First of all, clone the repo and go to the repo directory::
$ git clone https://git.openstack.org/openstack/mistral.git
$ cd mistral
**Devstack installation**
Information about how to install Mistral with devstack can be found
`here <https://docs.openstack.org/mistral/latest/contributor/devstack.html>`_.
Configuring Mistral
~~~~~~~~~~~~~~~~~~~
Mistral needs to be configured in order to work correctly both with and without
an OpenStack environment.
#. Install and configure a database which can be *MySQL* or *PostgreSQL*
(**SQLite can't be used in production.**). Here are the steps to connect
Mistral to a *MySQL* database.
* Make sure you have installed ``mysql-server`` package on your Mistral
machine.
* Install *MySQL driver* for python::
$ pip install mysql-python
or, if you work in virtualenv, run::
$ tox -evenv -- pip install mysql-python
NOTE: If you're using Python 3 then you need to install ``mysqlclient``
instead of ``mysql-python``.
* Create the database and grant privileges::
$ mysql -u root -p
mysql> CREATE DATABASE mistral;
mysql> USE mistral
mysql> GRANT ALL PRIVILEGES ON mistral.* TO 'mistral'@'localhost' IDENTIFIED BY 'MISTRAL_DBPASS';
mysql> GRANT ALL PRIVILEGES ON mistral.* TO 'mistral'@'%' IDENTIFIED BY 'MISTRAL_DBPASS';
#. Generate ``mistral.conf`` file::
$ oslo-config-generator \
--config-file tools/config/config-generator.mistral.conf \
--output-file etc/mistral.conf.sample
#. Copy service configuration files::
$ sudo mkdir /etc/mistral
$ sudo chown `whoami` /etc/mistral
$ cp etc/event_definitions.yml.sample /etc/mistral/event_definitions.yml
$ cp etc/logging.conf.sample /etc/mistral/logging.conf
$ cp etc/policy.json /etc/mistral/policy.json
$ cp etc/wf_trace_logging.conf.sample /etc/mistral/wf_trace_logging.conf
$ cp etc/mistral.conf.sample /etc/mistral/mistral.conf
#. Edit file ``/etc/mistral/mistral.conf`` according to your setup. Pay attention
to the following sections and options::
[oslo_messaging_rabbit]
rabbit_host = <RABBIT_HOST>
rabbit_userid = <RABBIT_USERID>
rabbit_password = <RABBIT_PASSWORD>
[database]
# Use the following line if *PostgreSQL* is used
# connection = postgresql://<DB_USER>:<DB_PASSWORD>@localhost:5432/mistral
connection = mysql://<DB_USER>:<DB_PASSWORD>@localhost:3306/mistral
#. If you are not using OpenStack, add the following entry to the
``/etc/mistral/mistral.conf`` file and **skip the following steps**::
[pecan]
auth_enable = False
#. Provide valid keystone auth properties::
[keystone_authtoken]
auth_uri = http://keystone-host:port/v3
identity_uri = http://keystone-host:port
auth_version = v3
admin_user = <user>
admin_password = <password>
admin_tenant_name = <tenant>
#. Register Mistral service and Mistral endpoints on Keystone::
$ MISTRAL_URL="http://[host]:[port]/v2"
$ openstack service create --name mistral workflowv2
$ openstack endpoint create mistral public $MISTRAL_URL
$ openstack endpoint create mistral internal $MISTRAL_URL
$ openstack endpoint create mistral admin $MISTRAL_URL
#. Update the ``mistral/actions/openstack/mapping.json`` file which contains
all available OpenStack actions, according to the specific client versions
of OpenStack projects in your deployment. Please find more detailed
information in the ``tools/get_action_list.py`` script.
Before the First Run
--------------------
After local installation you will find the commands ``mistral-server`` and
``mistral-db-manage`` available in your environment. The ``mistral-db-manage``
command can be used for migrating database schema versions. If Mistral is not
installed system-wide, the same functionality is available via the script
``mistral/db/sqlalchemy/migration/cli.py``, which can be executed directly with
Python (see the example at the end of this section).
To update the database schema to the latest revision, type::
$ mistral-db-manage --config-file <path_to_config> upgrade head
To populate the database with standard actions and workflows, type::
$ mistral-db-manage --config-file <path_to_config> populate
For more detailed information about the ``mistral-db-manage`` script, please
check the file ``mistral/db/sqlalchemy/migration/alembic_migrations/README.md``.
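If Mistral is not installed (for example, you are working from a source
checkout), the same upgrade can be run through the migration script directly;
a sketch, assuming it accepts the same arguments as ``mistral-db-manage``::
$ python mistral/db/sqlalchemy/migration/cli.py --config-file <path_to_config> upgrade head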
Running Mistral API server
--------------------------
To run Mistral API server::
$ tox -evenv -- python mistral/cmd/launch.py \
--server api --config-file <path_to_config>
Running Mistral Engines
-----------------------
To run Mistral Engine::
$ tox -evenv -- python mistral/cmd/launch.py \
--server engine --config-file <path_to_config>
Running Mistral Task Executors
------------------------------
To run Mistral Task Executor instance::
$ tox -evenv -- python mistral/cmd/launch.py \
--server executor --config-file <path_to_config>
Note that at least one Engine instance and one Executor instance should be
running in order for workflow tasks to be processed by Mistral.
If you want to run certain tasks on a specific executor, the *task affinity*
feature can be used to send those tasks directly to it. You can edit
the following property in your mistral configuration file for this purpose::
[executor]
host = my_favorite_executor
After changing this option, you will need to start (restart) the executor. Use
the ``target`` property of a task to specify the executor::
... Workflow YAML ...
task1:
...
target: my_favorite_executor
... Workflow YAML ...
Running Multiple Mistral Servers Under the Same Process
-------------------------------------------------------
To run more than one server (API, Engine, or Task Executor) on the same
process::
$ tox -evenv -- python mistral/cmd/launch.py \
--server api,engine --config-file <path_to_config>
The value for the ``--server`` option can be a comma-delimited list. The valid
options are ``all`` (which is the default if not specified) or any combination
of ``api``, ``engine``, and ``executor``.
It's important to note that the ``fake`` transport for the ``rpc_backend``
defined in the configuration file should only be used if ``all`` Mistral
servers are launched in the same process. Otherwise, messages do not get
delivered because the ``fake`` transport uses an in-process queue.
Project Goals 2017
------------------
#. **Complete Mistral documentation**.
Mistral documentation should be more usable. It requires focused work to
make it well structured, eliminate gaps in API/Mistral Workflow Language
specifications, add more examples and tutorials.
*Definition of done*:
All capabilities are covered, all documentation topics are written using
the same style and structure principles. The obvious sub-goal of this goal
is to establish these principles.
#. **Complete Mistral Custom Actions API**.
There has been an initiative in the Mistral team since April 2016 to
refactor the Mistral actions subsystem in order to make the process of
developing Mistral actions easier and clearer. In 2017 we need to complete
this effort and make sure that all APIs are stable and well documented.
*Definition of done*:
All API interfaces are stable, existing actions are rewritten using this new
API, OpenStack actions are also rewritten based on the new API and moved to
mistral-extra repo. Everything is well documented and the doc has enough
examples.
#. **Finish Mistral multi-node mode**.
Mistral needs to be proven to work reliably in multi-node mode. In order
to achieve it we need to make a number of engine, executor and RPC
changes and configure a CI gate to run stress tests on multi-node Mistral.
*Definition of done*:
CI gate supports MySQL, all critically important functionality (join,
with-items, parallel workflows, sequential workflows) is covered by tests.
#. **Reduce workflow execution time**.
*Definition of done*: Average workflow execution time reduced by 30%.
Project Resources
-----------------
* `Mistral Official Documentation <https://docs.openstack.org/mistral/latest/>`_
* Project status, bugs, and blueprints are tracked on
`Launchpad <https://launchpad.net/mistral/>`_
* Additional resources are linked from the project
`Wiki <https://wiki.openstack.org/wiki/Mistral/>`_ page
* Apache License Version 2.0 http://www.apache.org/licenses/LICENSE-2.0

@@ -1,131 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import subprocess
import sys
on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath('../../'))
sys.path.insert(0, os.path.abspath('../'))
sys.path.insert(0, os.path.abspath('./'))
# -- General configuration ----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'sphinx.ext.autodoc',
'sphinxcontrib.autohttp.flask',
'sphinxcontrib.pecanwsme.rest',
'wsmeext.sphinxext',
]
if not on_rtd:
extensions.append('oslosphinx')
wsme_protocols = ['restjson']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# autodoc generation is a bit aggressive and a nuisance when doing heavy
# text edit cycles.
# execute "export SPHINX_DEBUG=1" in your terminal to disable
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Workflow Service API Reference'
copyright = u'2017, Mistral Contributors'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
from mistral.version import version_info
release = version_info.release_string()
version = version_info.version_string()
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
show_authors = False
# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# -- Options for HTML output --------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
# html_static_path = ['_static']
if on_rtd:
html_theme_path = ['.']
html_theme = 'sphinx_rtd_theme'
# Output file base name for HTML help builder.
htmlhelp_basename = '%sdoc' % project
# A list of ignored prefixes for module index sorting.
modindex_common_prefix = ['mistral.']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
# html_last_updated_fmt = '%b %d, %Y'
git_cmd = ["git", "log", "--pretty=format:'%ad, commit %h'", "--date=local",
"-n1"]
html_last_updated_fmt = subprocess.check_output(
git_cmd).decode('utf-8')
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
html_title = 'Mistral API Reference'
# Custom sidebar templates, maps document names to template names.
html_sidebars = {
'index': [
'sidebarlinks.html', 'localtoc.html', 'searchbox.html',
'sourcelink.html'
],
'**': [
'localtoc.html', 'relations.html',
'searchbox.html', 'sourcelink.html'
]
}
# -- Options for manual page output -------------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'mistral', u'Mistral',
[u'OpenStack Foundation'], 1)
]
# If true, show URL addresses after external links.
man_show_urls = True

@@ -1,8 +0,0 @@
===============================
OpenStack Workflow Service APIs
===============================
.. toctree::
:maxdepth: 1
v2/index

@@ -1,21 +0,0 @@
============================
Enabling Mistral in Devstack
============================
1. Download DevStack::
git clone https://github.com/openstack-dev/devstack.git
cd devstack
2. Add this repo as an external repository in ``local.conf`` file::
> cat local.conf
[[local|localrc]]
enable_plugin mistral https://github.com/openstack/mistral
To use stable branches, make sure devstack is on that branch, and specify
the branch name to enable_plugin, for example::
enable_plugin mistral https://github.com/openstack/mistral stable/newton
3. Run ``stack.sh`` (the steps above are combined into a single copy-paste example below).
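For convenience, here is the same sequence as one shell snippet; a sketch,
assuming the default master branch and the plugin URL shown above::
git clone https://github.com/openstack-dev/devstack.git
cd devstack
cat > local.conf <<'EOF'
[[local|localrc]]
enable_plugin mistral https://github.com/openstack/mistral
EOF
./stack.sh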

@@ -1,28 +0,0 @@
Listen %PUBLICPORT%
<VirtualHost *:%PUBLICPORT%>
WSGIDaemonProcess mistral-api processes=%API_WORKERS% threads=1 user=%USER% display-name=%{GROUP} %VIRTUALENV%
WSGIProcessGroup mistral-api
WSGIScriptAlias / %MISTRAL_BIN_DIR%/mistral-wsgi-api
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
AllowEncodedSlashes On
<IfVersion >= 2.4>
ErrorLogFormat "%{cu}t %M"
</IfVersion>
ErrorLog /var/log/%APACHE_NAME%/mistral_api.log
CustomLog /var/log/%APACHE_NAME%/mistral_api_access.log combined
%SSLENGINE%
%SSLCERTFILE%
%SSLKEYFILE%
<Directory %MISTRAL_BIN_DIR%>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
</VirtualHost>

@@ -1,277 +0,0 @@
# ``stack.sh`` calls the entry points in this order:
#
# install_mistral
# install_python_mistralclient
# configure_mistral
# start_mistral
# stop_mistral
# cleanup_mistral
# Save trace setting
XTRACE=$(set +o | grep xtrace)
set -o xtrace
# Defaults
# --------
# Support entry points installation of console scripts
if [[ -d $MISTRAL_DIR/bin ]]; then
MISTRAL_BIN_DIR=$MISTRAL_DIR/bin
else
MISTRAL_BIN_DIR=$(get_python_exec_prefix)
fi
# Toggle for deploying Mistral API under HTTPD + mod_wsgi
MISTRAL_USE_MOD_WSGI=${MISTRAL_USE_MOD_WSGI:-True}
MISTRAL_FILES_DIR=$MISTRAL_DIR/devstack/files
# create_mistral_accounts - Set up common required mistral accounts
#
# Tenant User Roles
# ------------------------------
# service mistral admin
function create_mistral_accounts {
if ! is_service_enabled key; then
return
fi
create_service_user "mistral" "admin"
get_or_create_service "mistral" "workflowv2" "Workflow Service v2"
get_or_create_endpoint "workflowv2" \
"$REGION_NAME" \
"$MISTRAL_SERVICE_PROTOCOL://$MISTRAL_SERVICE_HOST:$MISTRAL_SERVICE_PORT/v2" \
"$MISTRAL_SERVICE_PROTOCOL://$MISTRAL_SERVICE_HOST:$MISTRAL_SERVICE_PORT/v2" \
"$MISTRAL_SERVICE_PROTOCOL://$MISTRAL_SERVICE_HOST:$MISTRAL_SERVICE_PORT/v2"
}
function mkdir_chown_stack {
if [[ ! -d "$1" ]]; then
sudo mkdir -p "$1"
fi
sudo chown $STACK_USER "$1"
}
# Entry points
# ------------
# configure_mistral - Set config files, create data dirs, etc
function configure_mistral {
mkdir_chown_stack "$MISTRAL_CONF_DIR"
# Generate Mistral configuration file and configure common parameters.
oslo-config-generator --config-file $MISTRAL_DIR/tools/config/config-generator.mistral.conf --output-file $MISTRAL_CONF_FILE
iniset $MISTRAL_CONF_FILE DEFAULT debug $MISTRAL_DEBUG
MISTRAL_POLICY_FILE=$MISTRAL_CONF_DIR/policy.json
cp $MISTRAL_DIR/etc/policy.json $MISTRAL_POLICY_FILE
# Run all Mistral processes as a single process
iniset $MISTRAL_CONF_FILE DEFAULT server all
# Mistral Configuration
#-------------------------
# Setup keystone_authtoken section
iniset $MISTRAL_CONF_FILE keystone_authtoken auth_host $KEYSTONE_AUTH_HOST
iniset $MISTRAL_CONF_FILE keystone_authtoken auth_port $KEYSTONE_AUTH_PORT
iniset $MISTRAL_CONF_FILE keystone_authtoken auth_protocol $KEYSTONE_AUTH_PROTOCOL
iniset $MISTRAL_CONF_FILE keystone_authtoken admin_tenant_name $SERVICE_TENANT_NAME
iniset $MISTRAL_CONF_FILE keystone_authtoken admin_user $MISTRAL_ADMIN_USER
iniset $MISTRAL_CONF_FILE keystone_authtoken admin_password $SERVICE_PASSWORD
iniset $MISTRAL_CONF_FILE keystone_authtoken auth_uri $KEYSTONE_AUTH_URI_V3
iniset $MISTRAL_CONF_FILE keystone_authtoken identity_uri $KEYSTONE_AUTH_URI
# Setup RabbitMQ credentials
iniset_rpc_backend mistral $MISTRAL_CONF_FILE
# Configure the database.
iniset $MISTRAL_CONF_FILE database connection `database_connection_url mistral`
iniset $MISTRAL_CONF_FILE database max_overflow -1
iniset $MISTRAL_CONF_FILE database max_pool_size 1000
# Configure action execution deletion policy
iniset $MISTRAL_CONF_FILE api allow_action_execution_deletion True
# Path of policy.json file.
iniset $MISTRAL_CONF oslo_policy policy_file $MISTRAL_POLICY_FILE
if [ "$LOG_COLOR" == "True" ] && [ "$SYSLOG" == "False" ]; then
setup_colorized_logging $MISTRAL_CONF_FILE DEFAULT tenant user
fi
if [ "$MISTRAL_RPC_IMPLEMENTATION" ]; then
iniset $MISTRAL_CONF_FILE DEFAULT rpc_implementation $MISTRAL_RPC_IMPLEMENTATION
fi
if [ "$MISTRAL_USE_MOD_WSGI" == "True" ]; then
_config_mistral_apache_wsgi
fi
}
# init_mistral - Initialize the database
function init_mistral {
# (re)create Mistral database
recreate_database mistral utf8
python $MISTRAL_DIR/tools/sync_db.py --config-file $MISTRAL_CONF_FILE
}
# install_mistral - Collect source and prepare
function install_mistral {
setup_develop $MISTRAL_DIR
# installing python-nose.
real_install_package python-nose
if is_service_enabled horizon; then
_install_mistraldashboard
fi
if [ "$MISTRAL_USE_MOD_WSGI" == "True" ]; then
install_apache_wsgi
fi
}
function _install_mistraldashboard {
git_clone $MISTRAL_DASHBOARD_REPO $MISTRAL_DASHBOARD_DIR $MISTRAL_DASHBOARD_BRANCH
setup_develop $MISTRAL_DASHBOARD_DIR
ln -fs $MISTRAL_DASHBOARD_DIR/mistraldashboard/enabled/_50_mistral.py $HORIZON_DIR/openstack_dashboard/local/enabled/_50_mistral.py
}
function install_mistral_pythonclient {
if use_library_from_git "python-mistralclient"; then
git_clone $MISTRAL_PYTHONCLIENT_REPO $MISTRAL_PYTHONCLIENT_DIR $MISTRAL_PYTHONCLIENT_BRANCH
local tags=`git --git-dir=$MISTRAL_PYTHONCLIENT_DIR/.git tag -l | grep 2015`
if [ ! "$tags" = "" ]; then
git --git-dir=$MISTRAL_PYTHONCLIENT_DIR/.git tag -d $tags
fi
setup_develop $MISTRAL_PYTHONCLIENT_DIR
fi
}
# start_mistral - Start running processes, including screen
function start_mistral {
# If the site is not enabled then we are in a grenade scenario
local enabled_site_file
enabled_site_file=$(apache_site_config_for mistral-api)
if is_service_enabled mistral-api && is_service_enabled mistral-engine && is_service_enabled mistral-executor && is_service_enabled mistral-event-engine ; then
echo_summary "Installing all mistral services in separate processes"
if [ -f ${enabled_site_file} ] && [ "$MISTRAL_USE_MOD_WSGI" == "True" ]; then
enable_apache_site mistral-api
restart_apache_server
tail_log mistral-api /var/log/$APACHE_NAME/mistral_api.log
else
run_process mistral-api "$MISTRAL_BIN_DIR/mistral-server --server api --config-file $MISTRAL_CONF_DIR/mistral.conf"
fi
run_process mistral-engine "$MISTRAL_BIN_DIR/mistral-server --server engine --config-file $MISTRAL_CONF_DIR/mistral.conf"
run_process mistral-executor "$MISTRAL_BIN_DIR/mistral-server --server executor --config-file $MISTRAL_CONF_DIR/mistral.conf"
run_process mistral-event-engine "$MISTRAL_BIN_DIR/mistral-server --server event-engine --config-file $MISTRAL_CONF_DIR/mistral.conf"
else
echo_summary "Installing all mistral services in one process"
run_process mistral "$MISTRAL_BIN_DIR/mistral-server --server all --config-file $MISTRAL_CONF_DIR/mistral.conf"
fi
}
# stop_mistral - Stop running processes
function stop_mistral {
# Kill the Mistral screen windows
local serv
for serv in mistral mistral-engine mistral-executor mistral-event-engine; do
stop_process $serv
done
if [ "$MISTRAL_USE_MOD_WSGI" == "True" ]; then
disable_apache_site mistral-api
restart_apache_server
else
stop_process mistral-api
fi
}
function cleanup_mistral {
if is_service_enabled horizon; then
_mistral_cleanup_mistraldashboard
fi
if [ "$MISTRAL_USE_MOD_WSGI" == "True" ]; then
_mistral_cleanup_apache_wsgi
fi
sudo rm -rf $MISTRAL_CONF_DIR
}
function _mistral_cleanup_mistraldashboard {
rm -f $HORIZON_DIR/openstack_dashboard/local/enabled/_50_mistral.py
}
function _mistral_cleanup_apache_wsgi {
sudo rm -f $(apache_site_config_for mistral-api)
}
# _config_mistral_apache_wsgi() - Set WSGI config files for Mistral
function _config_mistral_apache_wsgi {
local mistral_apache_conf
mistral_apache_conf=$(apache_site_config_for mistral-api)
local mistral_ssl=""
local mistral_certfile=""
local mistral_keyfile=""
local mistral_api_port=$MISTRAL_SERVICE_PORT
local venv_path=""
sudo cp $MISTRAL_FILES_DIR/apache-mistral-api.template $mistral_apache_conf
sudo sed -e "
s|%PUBLICPORT%|$mistral_api_port|g;
s|%APACHE_NAME%|$APACHE_NAME|g;
s|%MISTRAL_BIN_DIR%|$MISTRAL_BIN_DIR|g;
s|%API_WORKERS%|$API_WORKERS|g;
s|%SSLENGINE%|$mistral_ssl|g;
s|%SSLCERTFILE%|$mistral_certfile|g;
s|%SSLKEYFILE%|$mistral_keyfile|g;
s|%USER%|$STACK_USER|g;
s|%VIRTUALENV%|$venv_path|g
" -i $mistral_apache_conf
}
if is_service_enabled mistral; then
if [[ "$1" == "stack" && "$2" == "install" ]]; then
echo_summary "Installing mistral"
install_mistral
install_mistral_pythonclient
elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then
echo_summary "Configuring mistral"
configure_mistral
create_mistral_accounts
elif [[ "$1" == "stack" && "$2" == "extra" ]]; then
echo_summary "Initializing mistral"
init_mistral
start_mistral
fi
if [[ "$1" == "unstack" ]]; then
echo_summary "Shutting down mistral"
stop_mistral
fi
if [[ "$1" == "clean" ]]; then
echo_summary "Cleaning mistral"
cleanup_mistral
fi
fi
# Restore xtrace
$XTRACE
# Local variables:
# mode: shell-script
# End:

@@ -1,38 +0,0 @@
# Devstack settings
# We have to add Mistral to the enabled services for run_process to work.
# "mistral" should always be enabled.
# To run the services in separate processes and screens, write:
# enable_service mistral mistral-api mistral-engine mistral-executor
# To run all services in one screen as a single process, write:
# enable_service mistral
# Any other combination of services, such as 'mistral mistral-api' or
# 'mistral mistral-api mistral-engine', is an incorrect way to run the
# services; in that case all services will run in one screen by default.
enable_service mistral mistral-api mistral-engine mistral-executor mistral-event-engine
# Set up default repos
MISTRAL_REPO=${MISTRAL_REPO:-${GIT_BASE}/openstack/mistral.git}
MISTRAL_BRANCH=${MISTRAL_BRANCH:-master}
MISTRAL_DASHBOARD_REPO=${MISTRAL_DASHBOARD_REPO:-${GIT_BASE}/openstack/mistral-dashboard.git}
MISTRAL_DASHBOARD_BRANCH=${MISTRAL_DASHBOARD_BRANCH:-$MISTRAL_BRANCH}
MISTRAL_PYTHONCLIENT_REPO=${MISTRAL_PYTHONCLIENT_REPO:-${GIT_BASE}/openstack/python-mistralclient.git}
MISTRAL_PYTHONCLIENT_BRANCH=${MISTRAL_PYTHONCLIENT_BRANCH:-master}
MISTRAL_PYTHONCLIENT_DIR=$DEST/python-mistralclient
# Set up default directories
MISTRAL_DIR=$DEST/mistral
MISTRAL_DASHBOARD_DIR=$DEST/mistral-dashboard
MISTRAL_CONF_DIR=${MISTRAL_CONF_DIR:-/etc/mistral}
MISTRAL_CONF_FILE=${MISTRAL_CONF_DIR}/mistral.conf
MISTRAL_DEBUG=${MISTRAL_DEBUG:-True}
MISTRAL_SERVICE_HOST=${MISTRAL_SERVICE_HOST:-$SERVICE_HOST}
MISTRAL_SERVICE_PORT=${MISTRAL_SERVICE_PORT:-8989}
MISTRAL_SERVICE_PROTOCOL=${MISTRAL_SERVICE_PROTOCOL:-$SERVICE_PROTOCOL}
MISTRAL_ADMIN_USER=${MISTRAL_ADMIN_USER:-mistral}

@@ -1,11 +0,0 @@
<h3>Useful Links</h3>
<ul>
<li><a href="https://launchpad.net/mistral">Mistral @ Launchpad</a></li>
<li><a href="https://wiki.openstack.org/wiki/mistral">Mistral @ OpenStack Wiki</a></li>
</ul>
{% if READTHEDOCS %}
<script type='text/javascript'>
$('div.body').css('margin', 0)
</script>
{% endif %}

@@ -1,4 +0,0 @@
{% extends "basic/layout.html" %}
{% set css_files = css_files + ['_static/tweaks.css'] %}
{% block relbar1 %}{% endblock relbar1 %}

@@ -1,4 +0,0 @@
[theme]
inherit = nature
stylesheet = nature.css
pygments_style = tango

File diff suppressed because it is too large.

@@ -1,75 +0,0 @@
Mistral Upgrade Guide
=====================
Database upgrade
----------------
The migrations in ``alembic_migrations/versions`` contain the changes needed to
migrate between Mistral database revisions. A migration occurs by executing a
script that details the changes needed to upgrade the database. The migration
scripts are ordered so that multiple scripts can run sequentially. The scripts are
executed by Mistral's migration wrapper which uses the Alembic library to manage
the migration. Mistral supports migration from Kilo or later.
You can upgrade to the latest database version via:
::
$ mistral-db-manage --config-file /path/to/mistral.conf upgrade head
You can populate the database with standard actions and workflows:
::
$ mistral-db-manage --config-file /path/to/mistral.conf populate
To check the current database version:
::
$ mistral-db-manage --config-file /path/to/mistral.conf current
To create a script to run the migration offline:
::
$ mistral-db-manage --config-file /path/to/mistral.conf upgrade head --sql
To run the offline migration between specific migration versions:
::
$ mistral-db-manage --config-file /path/to/mistral.conf upgrade <start version>:<end version> --sql
Upgrade the database incrementally:
::
$ mistral-db-manage --config-file /path/to/mistral.conf upgrade --delta <# of revs>
Or, upgrade the database to one newer revision:
::
$ mistral-db-manage --config-file /path/to/mistral.conf upgrade +1
Create new revision:
::
$ mistral-db-manage --config-file /path/to/mistral.conf revision -m "description of revision" --autogenerate
Create a blank file:
::
$ mistral-db-manage --config-file /path/to/mistral.conf revision -m "description of revision"
The following ``stamp`` command does not perform any migrations; it only sets the
database revision. The revision may be any existing revision. Use this command carefully.
::
$ mistral-db-manage --config-file /path/to/mistral.conf stamp <revision>
To check whether the migration timeline branches, you can run this command:
::
$ mistral-db-manage --config-file /path/to/mistral.conf check_migration
If the migration path does branch, you can find the branch point via:
::
$ mistral-db-manage --config-file /path/to/mistral.conf history

@@ -1,7 +0,0 @@
REST API Specification
======================
.. toctree::
:maxdepth: 2
v2

@@ -1,236 +0,0 @@
V2 API
======
This API describes the ways of interacting with the Mistral service via the HTTP
protocol using the Representational State Transfer (REST) concept.
Basics
-------
Media types
^^^^^^^^^^^
Currently this API relies on JSON to represent states of REST resources.
Error states
^^^^^^^^^^^^
The common HTTP Response Status Codes (https://github.com/for-GET/know-your-http-well/blob/master/status-codes.md) are used.
Application root [/]
^^^^^^^^^^^^^^^^^^^^
Application Root provides links to all possible API methods for Mistral. URLs
for other resources described below are relative to Application Root.
API v2 root [/v2/]
^^^^^^^^^^^^^^^^^^
All API v2 URLs are relative to the API v2 root.
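For example, both roots can be inspected with plain HTTP requests; a sketch,
with an illustrative host, the default port 8989, and a valid auth token
assumed when authentication is enabled::
$ curl -H "X-Auth-Token: $TOKEN" http://mistral-host:8989/
$ curl -H "X-Auth-Token: $TOKEN" http://mistral-host:8989/v2/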
Workbooks
---------
.. autotype:: mistral.api.controllers.v2.resources.Workbook
:members:
`name` is immutable. `tags` is a list of values associated with a workbook that
a user can use to group workbooks by some criteria (deployment workbooks,
Big Data processing workbooks, etc.). Note that the name and tags are inferred
from the workbook definition when the Mistral service receives a POST request,
so they can't be changed any other way.
.. autotype:: mistral.api.controllers.v2.resources.Workbooks
:members:
.. rest-controller:: mistral.api.controllers.v2.workbook:WorkbooksController
:webprefix: /v2/workbooks
Workflows
---------
.. autotype:: mistral.api.controllers.v2.resources.Workflow
:members:
`name` is immutable. `tags` is a list of values associated with a workflow that
a user can use to group workflows by some criteria. Note that the name and tags
are inferred from the workflow definition when the Mistral service receives a
POST request, so they can't be changed any other way.
.. autotype:: mistral.api.controllers.v2.resources.Workflows
:members:
.. rest-controller:: mistral.api.controllers.v2.workflow:WorkflowsController
:webprefix: /v2/workflows
Actions
-------
.. autotype:: mistral.api.controllers.v2.resources.Action
:members:
.. autotype:: mistral.api.controllers.v2.resources.Actions
:members:
.. rest-controller:: mistral.api.controllers.v2.action:ActionsController
:webprefix: /v2/actions
Executions
----------
.. autotype:: mistral.api.controllers.v2.resources.Execution
:members:
.. autotype:: mistral.api.controllers.v2.resources.Executions
:members:
.. rest-controller:: mistral.api.controllers.v2.execution:ExecutionsController
:webprefix: /v2/executions
Tasks
-----
When a workflow starts, Mistral creates an execution, which in turn consists of
a set of tasks. So a Task is an instance of a task described in a Workflow that
belongs to a particular execution.
.. autotype:: mistral.api.controllers.v2.resources.Task
:members:
.. autotype:: mistral.api.controllers.v2.resources.Tasks
:members:
.. rest-controller:: mistral.api.controllers.v2.task:TasksController
:webprefix: /v2/tasks
.. rest-controller:: mistral.api.controllers.v2.task:ExecutionTasksController
:webprefix: /v2/executions
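For example, the tasks belonging to one execution can be listed through the
second controller; a sketch, with an illustrative host, port and execution id::
$ curl -H "X-Auth-Token: $TOKEN" http://mistral-host:8989/v2/executions/<execution_id>/tasks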
Action Executions
-----------------
When a Task starts, Mistral creates a set of Action Executions. So an Action
Execution is an instance of an action call described in a Workflow Task that
belongs to a particular execution.
.. autotype:: mistral.api.controllers.v2.resources.ActionExecution
:members:
.. autotype:: mistral.api.controllers.v2.resources.ActionExecutions
:members:
.. rest-controller:: mistral.api.controllers.v2.action_execution:ActionExecutionsController
:webprefix: /v2/action_executions
.. rest-controller:: mistral.api.controllers.v2.action_execution:TasksActionExecutionController
:webprefix: /v2/tasks
Cron Triggers
-------------
A cron trigger is an object that allows Mistral workflows to be run according to
a time pattern (Unix crontab format). Once a trigger is created, it will run the
specified workflow according to its properties: pattern, first_execution_time and
remaining_executions.
.. autotype:: mistral.api.controllers.v2.resources.CronTrigger
:members:
.. autotype:: mistral.api.controllers.v2.resources.CronTriggers
:members:
.. rest-controller:: mistral.api.controllers.v2.cron_trigger:CronTriggersController
:webprefix: /v2/cron_triggers
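As an illustration, a cron trigger could be created with a plain POST request;
a sketch only, with an illustrative host, port and field values taken from the
CronTrigger resource above::
$ curl -X POST -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
    -d '{"name": "hourly_run", "workflow_name": "my_workflow", "pattern": "0 * * * *"}' \
    http://mistral-host:8989/v2/cron_triggers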
Environments
------------
An Environment contains a set of variables which can be used in a specific
workflow. Using an Environment it is possible to create and map action default
values - just provide the '__actions' key in 'variables'. All these variables can
be accessed using the Workflow Language with the ``<% $.__env %>`` expression.
Example of usage:
.. code-block:: yaml
workflow:
tasks:
task1:
action: std.echo output=<% $.__env.my_echo_output %>
Example of creating action defaults
::
...ENV...
"variables": {
"__actions": {
"std.echo": {
"output": "my_output"
}
}
},
...ENV...
Note: using the CLI, an Environment can be created from a JSON or YAML file (see the example below).
.. autotype:: mistral.api.controllers.v2.resources.Environment
:members:
.. autotype:: mistral.api.controllers.v2.resources.Environments
:members:
.. rest-controller:: mistral.api.controllers.v2.environment:EnvironmentController
:webprefix: /v2/environments
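For example, an environment holding the action defaults shown above could be
created from a YAML file; a sketch, with an illustrative file name and values::
$ cat > env.yaml <<'EOF'
name: my_env
variables:
  __actions:
    std.echo:
      output: my_output
EOF
$ mistral environment-create env.yaml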
Services
--------
Through the service management API, a system administrator or operator can
retrieve information about the Mistral services running in the system, including
the service group and service identifier. The internal implementation of this
feature makes use of the tooz library, which needs a coordination backend (the
most commonly used at present is ZooKeeper) installed; please refer to the
official tooz documentation for more detailed instructions.
There are currently three service groups according to the Mistral architecture,
namely api_group, engine_group and executor_group. The service identifier
contains the name of the host the service is running on and the process
identifier of the service on that host.
.. autotype:: mistral.api.controllers.v2.resources.Service
:members:
.. autotype:: mistral.api.controllers.v2.resources.Services
:members:
.. rest-controller:: mistral.api.controllers.v2.service:ServicesController
:webprefix: /v2/services
Validation
----------
Validation endpoints allow checking the correctness of workbook, workflow and
ad-hoc action Workflow Language definitions without having to upload them into
Mistral.
**POST /v2/workbooks/validation**
Validate workbook content (Workflow Language grammar and semantics).
**POST /v2/workflows/validation**
Validate workflow content (Workflow Language grammar and semantics).
**POST /v2/actions/validation**
Validate ad-hoc action content (Workflow Language grammar and semantics).
These endpoints expect workbook, workflow or ad-hoc action text (Workflow Language) correspondingly
in a request body.
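For example, a workflow definition could be validated before uploading it; a
sketch, with an illustrative host, port and file name::
$ curl -X POST -H "X-Auth-Token: $TOKEN" --data-binary @my_workflow.yaml \
    http://mistral-host:8989/v2/workflows/validation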

@@ -1,65 +0,0 @@
Mistral Architecture
====================
Mistral is the OpenStack workflow service. The main aim of the project is to
provide the capability to define, execute and manage tasks and workflows without
writing code.
Basic concepts
~~~~~~~~~~~~~~
A few basic concepts that one has to understand before going through the Mistral
architecture are given below; a short end-to-end example follows the list:
* Workflow - consists of tasks (at least one) describing what exact steps should
be made during workflow execution.
* Task - an activity executed within the workflow definition.
* Action - work done when an exact task is triggered.
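For illustration, here is how the three concepts fit together in practice; a
sketch, with an illustrative workflow name and actions::
$ cat > my_workflow.yaml <<'EOF'
---
version: '2.0'
my_workflow:
  type: direct
  tasks:
    task1:
      action: std.echo output="Hello"
      on-success:
        - task2
    task2:
      action: std.echo output="World"
EOF
$ mistral workflow-create my_workflow.yaml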
Mistral components
~~~~~~~~~~~~~~~~~~
Mistral is composed of the following major components:
* API Server
* Engine
* Task Executors
* Scheduler
* Persistence
The following diagram illustrates the architecture of mistral:
.. image:: img/mistral_architecture.png
API server
----------
The API server exposes REST API to operate and monitor the workflow executions.
Engine
------
The Engine picks up the workflows from the workflow queue. It handles the control
and dataflow of workflow executions. It also computes which tasks are ready and
places them in a task queue. It passes the data from task to task, deals with
condition transitions, etc.
Task Executors
--------------
The Task Executor executes task Actions. It picks up tasks from the queue,
runs actions, and sends results back to the engine.
Scheduler
---------
The scheduler stores and executes delayed calls. It is an important Mistral
component since it interacts with the engine and executors. It also triggers
workflows on events (e.g., a periodic cron event).
Persistence
-----------
The persistence stores workflow definitions, current execution states, and
past execution results.

@@ -1,87 +0,0 @@
Mistral Client Commands Guide
=============================
The Mistral CLI can be used with ``mistral`` command or via `OpenStackClient
<https://docs.openstack.org/python-openstackclient/latest/>`_.
Mistral Client
--------------
The best way to learn about all the commands and arguments that are expected
is to use the ``mistral help`` command.
.. code-block:: bash
$ mistral help
usage: mistral [--version] [-v] [--log-file LOG_FILE] [-q] [-h] [--debug]
[--os-mistral-url MISTRAL_URL]
[--os-mistral-version MISTRAL_VERSION]
[--os-mistral-service-type SERVICE_TYPE]
...
It can also be used with the name of a sub-command.
.. code-block:: bash
$ mistral help execution-create
usage: mistral execution-create [-h] [-f {json,shell,table,value,yaml}]
[-c COLUMN] [--max-width <integer>]
[--print-empty] [--noindent] [--prefix PREFIX]
[-d DESCRIPTION]
workflow_identifier [workflow_input] [params]
Create new execution.
positional arguments:
workflow_identifier Workflow ID or name. Workflow name will be deprecated
sinceMitaka.
...
OpenStack Client
----------------
The OpenStack client works in a similar way: the command ``openstack help`` shows
all the available commands, and ``openstack help <sub-command>`` shows the
detailed usage.
The full list of Mistral commands registered with the OpenStack client can be
shown with ``openstack command list``. By default it will list all commands
grouped together, but we can specify only the Mistral command group.
.. code-block:: bash
$ openstack command list --group openstack.workflow_engine.v2
+------------------------------+-----------------------------------+
| Command Group | Commands |
+------------------------------+-----------------------------------+
| openstack.workflow_engine.v2 | action definition create |
| | action definition definition show |
| | action definition delete |
| | action definition list |
| | action definition show |
| | action definition update |
| | action execution delete |
...
Then detailed help output can be requested for an individual command.
.. code-block:: bash
$ openstack help workflow execution create
usage: openstack workflow execution create [-h]
[-f {json,shell,table,value,yaml}]
[-c COLUMN] [--max-width <integer>]
[--print-empty] [--noindent]
[--prefix PREFIX] [-d DESCRIPTION]
workflow_identifier
[workflow_input] [params]
Create new execution.
positional arguments:
workflow_identifier Workflow ID or name. Workflow name will be deprecated
sinceMitaka.
workflow_input Workflow input
params Workflow additional parameters
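Putting it together, a new execution could be started and then inspected; a
sketch, with an illustrative workflow name, input and execution id.
.. code-block:: bash
$ openstack workflow execution create my_workflow '{"param1": "value1"}'
$ openstack workflow execution show <execution_id>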

@@ -1,130 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath('../../'))
sys.path.insert(0, os.path.abspath('../'))
sys.path.insert(0, os.path.abspath('./'))
# -- General configuration ----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'sphinx.ext.autodoc',
'sphinxcontrib.pecanwsme.rest',
'sphinxcontrib.httpdomain',
'wsmeext.sphinxext',
'openstackdocstheme',
]
wsme_protocols = ['restjson']
suppress_warnings = ['app.add_directive']
# Add any paths that contain templates here, relative to this directory.
# templates_path = ['_templates']
# autodoc generation is a bit aggressive and a nuisance when doing heavy
# text edit cycles.
# execute "export SPHINX_DEBUG=1" in your terminal to disable
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Mistral'
copyright = u'2014, Mistral Contributors'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
from mistral.version import version_info
release = version_info.release_string()
version = version_info.version_string()
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
show_authors = False
# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# -- Options for HTML output --------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
# html_static_path = ['_static']
html_theme = 'openstackdocs'
# Output file base name for HTML help builder.
htmlhelp_basename = '%sdoc' % project
# A list of ignored prefixes for module index sorting.
modindex_common_prefix = ['mistral.']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# Must set this variable to include year, month, day, hours, and minutes.
html_last_updated_fmt = '%Y-%m-%d %H:%M'
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
html_title = 'Mistral'
# Custom sidebar templates, maps document names to template names.
html_sidebars = {
'index': [
'sidebarlinks.html', 'localtoc.html', 'searchbox.html',
'sourcelink.html'
],
'**': [
'localtoc.html', 'relations.html',
'searchbox.html', 'sourcelink.html'
]
}
# -- Options for manual page output -------------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'mistral', u'Mistral',
[u'OpenStack Foundation'], 1)
]
# If true, show URL addresses after external links.
man_show_urls = True
# -- Options for openstackdocstheme -------------------------------------------
repository_name = 'openstack/mistral'
bug_project = 'mistral'
bug_tag = ''

@@ -1,129 +0,0 @@
Mistral Configuration Guide
===========================
Mistral needs to be configured in order to work correctly either with a real
OpenStack environment or without one.
**NOTE:** Most of the following operations should be performed in the mistral
directory.
#. Generate *mistral.conf* (if it does not already exist)::
$ oslo-config-generator --config-file tools/config/config-generator.mistral.conf --output-file /etc/mistral/mistral.conf
#. Edit file **/etc/mistral/mistral.conf**.
#. **If you are not using OpenStack, skip this item.** Provide valid keystone
auth properties::
[keystone_authtoken]
auth_uri = http://<Keystone-host>:5000/v3
identity_uri = http://<Keystone-host>:35357
auth_version = v3
admin_user = <user>
admin_password = <password>
admin_tenant_name = <tenant>
#. Mistral can also be configured to authenticate with a Keycloak server via the OpenID Connect protocol.
In order to enable Keycloak authentication, the following section should be in the config file::
auth_type = keycloak-oidc
[keycloak_oidc]
auth_url = https://<Keycloak-server-host>:<Keycloak-server-port>/auth
The 'auth_type' property is set to 'keystone' by default.
If SSL/TLS verification needs to be disabled, then 'insecure = True' should also be added
under the [keycloak_oidc] group.
#. If you want to configure SSL for the Mistral API server, provide the following options
in the config file::
[api]
enable_ssl_api = True
[ssl]
ca_file = <path-to-ca file>
cert_file = <path-to-certificate file>
key_file = <path-to-key file>
#. **If you don't use OpenStack or you want to disable authentication for the
Mistral service**, provide ``auth_enable = False`` in the config file::
[pecan]
auth_enable = False
#. **If you are not using OpenStack, skip this item**. Register Mistral service
and Mistral endpoints on Keystone::
$ MISTRAL_URL="http://[host]:[port]/v2"
$ openstack service create workflowv2 --name mistral --description 'OpenStack Workflow service'
$ openstack endpoint create workflowv2 public $MISTRAL_URL
$ openstack endpoint create workflowv2 internal $MISTRAL_URL
$ openstack endpoint create workflowv2 admin $MISTRAL_URL
#. Configure transport properties in the [DEFAULT] section::
[DEFAULT]
transport_url = rabbit://<user_id>:<password>@<host>:5672/
#. Configure the database. **SQLite can't be used in production**. Use *MySQL* or
*PostgreSQL* instead. Here are the steps to connect a *MySQL* database to Mistral:
Make sure the **mysql-server** package is installed on your database machine
(it can be your Mistral machine as well).
Install MySQL driver for python::
$ pip install mysql-python
Create the database and grant privileges::
$ mysql -u root -p
CREATE DATABASE mistral;
USE mistral;
GRANT ALL ON mistral.* TO 'root'@'<database-host>' IDENTIFIED BY '<password>';
Configure connection in Mistral config::
[database]
connection = mysql://<user>:<password>@<database-host>:3306/mistral
**NOTE**: If PostgreSQL is used, configure the connection item as below::
connection = postgresql://<user>:<password>@<database-host>:5432/mistral
#. **If you are not using OpenStack, skip this item.**
Update the mistral/actions/openstack/mapping.json file, which contains all allowed
OpenStack actions, according to the specific client versions of the OpenStack
projects in your deployment. More detailed information can be found in the
tools/get_action_list.py script.
#. Configure the task affinity feature if needed. It is used to target either a single
task executor or any executor from a named group of task executors::
[executor]
host = my_favorite_executor
Then this executor can be referenced in the Workflow Language:
.. code-block:: yaml
...Workflow YAML...
my_task:
  ...
  target: my_favorite_executor
...Workflow YAML...
#. Configure role-based access policies for Mistral endpoints (policy.json)::
[oslo_policy]
policy_file = <path-of-policy.json file>
The default policy.json file is in ``mistral/etc/``. For more details see the `policy.json file <https://docs.openstack.org/ocata/config-reference/policy-json-file.html>`_.
#. After that, try to run the Mistral engine and check that it runs without errors::
$ mistral-server --config-file <path-to-config> --server engine

View File

@ -1,139 +0,0 @@
=====================================
How to work with asynchronous actions
=====================================
*******
Concept
*******
.. image:: /img/Mistral_actions.png
During a workflow execution Mistral eventually runs actions. An action is a particular
function (or a piece of work) that a workflow task is associated with.
Actions can be synchronous or asynchronous.
Synchronous actions are actions that are completed without a 3rd party, i.e. by
Mistral itself. When the Mistral engine schedules a synchronous action, it sends
its definition and parameters to a Mistral executor; the executor runs it and, upon
completion, sends the result of the action back to the Mistral engine.
In the case of asynchronous actions, the executor doesn't send a result back to Mistral.
In fact, the concept of an asynchronous action assumes that the result won't be known
at the time the executor is running it. It rather assumes that the action just
delegates the actual work to a 3rd party, which can be either a human or a computer
system (e.g. a web service). So an asynchronous action's run() method is supposed
to just send a signal to something that is capable of doing the required job.
Once the 3rd party has done the job, it takes responsibility for sending the result of
the action back to Mistral via the Mistral API. Effectively, the 3rd party just needs
to update the state of the corresponding action execution object. To make that possible
it must know the corresponding action execution id.
It's worth noting that, from the Mistral engine's perspective, the scheme is essentially
the same for synchronous and asynchronous actions. If an action is synchronous,
then the executor immediately sends the result back to the Mistral engine via the RPC
mechanism (most often with a message queue as a transport) after the action completes. But
the engine itself doesn't proactively wait for anything; its architecture is fully based on
asynchronous messaging. So for an asynchronous action the only change is that the
executor is not responsible for sending the action result; something else takes over.
Let's see what we need to keep in mind when working with asynchronous actions.
******
How to
******
Currently, Mistral comes with one asynchronous action out of the box, "mistral_http".
There's also the "async_noop" action, which is also asynchronous but is mostly useful
for testing purposes because it does nothing. "mistral_http" is an asynchronous
version of the "http" action that sends HTTP requests. Asynchrony is controlled by
the action's is_sync() method, which should return *True* for synchronous actions and
*False* for asynchronous ones.
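As an illustration, a custom asynchronous action could look roughly like the sketch
below. It follows the plugin style described in :doc:`/contributor/creating_custom_action`;
the class name, constructor parameter and the omitted HTTP call are illustrative
placeholders, not part of Mistral itself.
.. code-block:: python

    from mistral.actions import base


    class SignalThirdPartyAction(base.Action):
        """Minimal sketch of an asynchronous action (illustrative only)."""

        def __init__(self, url):
            # URL of the 3rd party system that will do the real work.
            self.url = url

        def is_sync(self):
            # Tell Mistral not to expect a result from the executor;
            # a 3rd party is expected to deliver it later via the Mistral API.
            return False

        def run(self):
            # Just send a signal to the 3rd party (e.g. an HTTP request,
            # omitted here) and return immediately without a result.
            pass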
Let's see how "mistral_http" action works and how to use it step by step.
Imagine we have a simple web service, playing the role of the 3rd party system
mentioned before, accessible at http://my.webservice.com. If we send an HTTP
request to that URL, our web service will do something useful. To keep it
simple, let's say our web service just calculates the sum of two numbers provided
as request parameters "a" and "b".
1. Workflow example
===================
.. code-block:: yaml
---
version: '2.0'

my_workflow:
  tasks:
    one_plus_two:
      action: mistral_http url=http://my.webservice.com
      input:
        params:
          a: 1
          b: 2
So our workflow has just one task, "one_plus_two", that sends a request to our web
service and passes parameters "a" and "b" in a query string. Note that we specify
"url" right after the action name but "params" in a special section "input". This is
because there's currently no one-line syntax for dictionaries in Mistral. But both
"url" and "params" are basically just parameters of the "mistral_http" action.
It is important to know that when the "mistral_http" action sends a request, it includes
special HTTP headers that help identify the action execution object. These headers are:
- **Mistral-Workflow-Name**
- **Mistral-Workflow-Execution-Id**
- **Mistral-Task-Id**
- **Mistral-Action-Execution-Id**
- **Mistral-Callback-URL**
The most important one is "Mistral-Action-Execution-Id", which contains the id of
the action execution that we need to calculate a result for. Using that id, a 3rd party
can deliver the result back to Mistral once it's calculated. If the 3rd party is a
computer system, it can just call the Mistral API via HTTP using the
"Mistral-Callback-URL" header, which contains a base URL. A human can also do
it; the simplest way is to use the Mistral CLI.
Of course, this is a practically meaningless example. It doesn't make sense to use
asynchronous actions for simple arithmetic operations. Real examples where asynchronous
actions are needed include:
- **Analysis of big data volumes**. E.g. we need to run an external reporting tool.
- **Human interaction**. E.g. an administrator needs to approve allocation of resources.
In general, this can be anything that takes significant time, such as hours, days
or weeks. Sometimes the duration of a job may even be unpredictable (in practice,
though, it is reasonable to limit such jobs with a timeout policy).
The key point here is that Mistral shouldn't wait for the completion of such a
job while holding the resources needed for it in memory.
An important aspect of using asynchronous actions is that even when we interact
with 3rd party computer systems, a human can still trigger action completion by
simply calling the Mistral API.
2. Pushing action result to Mistral
===================================
Using CLI:
.. code-block:: console
$ mistral action-execution-update <id> --state SUCCESS --output 3
This command will update the "state" and "output" of the action execution object with
the corresponding id. That way Mistral will know the result of this action
and decide how to proceed with the workflow execution.
Using raw HTTP::
POST <Mistral-Callback-URL>/v2/action-executions/<id>
{
"state": "SUCCESS",
"output": 3
}
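The same request can be sent with curl, for example. This is a sketch mirroring the
call above; the callback URL and the action execution id come from the HTTP headers
described earlier.
.. code-block:: console

    $ curl -X POST "<Mistral-Callback-URL>/v2/action-executions/<id>" \
           -H "Content-Type: application/json" \
           -d '{"state": "SUCCESS", "output": 3}'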

View File

@ -1,53 +0,0 @@
============================
How to write a Custom Action
============================
1. Write a class inherited from mistral.actions.base.Action
.. code-block:: python
from mistral.actions import base


class RunnerAction(base.Action):
    def __init__(self, param):
        # store the incoming params
        self.param = param

    def run(self):
        # return your results here
        return {'status': 0}
2. Publish the class in a namespace (in your ``setup.cfg``)
.. code-block:: ini
[entry_points]
mistral.actions =
    example.runner = my.mistral_plugins.somefile:RunnerAction
3. Reinstall Mistral if it was installed system-wide (not in a virtualenv).
4. Run the db-sync tool via either
.. code-block:: console
$ tools/sync_db.sh --config-file <path-to-config>
or
.. code-block:: console
$ mistral-db-manage --config-file <path-to-config> populate
5. Now you can call the action ``example.runner``
.. code-block:: yaml
my_workflow:
  tasks:
    my_action_task:
      action: example.runner
      input:
        param: avalue_to_pass_in

View File

@ -1,65 +0,0 @@
Mistral Debugging Guide
=======================
To debug using a local engine and executor without dependencies such as
RabbitMQ, make sure your ``/etc/mistral/mistral.conf`` has the following
settings::
[DEFAULT]
rpc_backend = fake
[pecan]
auth_enable = False
and run the following command in *pdb*, *PyDev* or *PyCharm*::
mistral/cmd/launch.py --server all --config-file /etc/mistral/mistral.conf --use-debugger
.. note::
In PyCharm, you also need to enable the Gevent compatibility flag in
Settings -> Build, Execution, Deployment -> Python Debugger -> Gevent
compatible. Without this setting, PyCharm will not show variable values
and become unstable during debugging.
Running unit tests in PyCharm
-----------------------------
In order to be able to conveniently run unit tests, you need to:
1. Set unit tests as the default runner:
Settings -> Tools -> Python Integrated Tools ->
Default test runner: Unittests
2. Enable test detection for all classes:
Run/Debug Configurations -> Defaults -> Python tests -> Unittests -> uncheck
Inspect only subclasses of unittest.TestCase
Running examples
----------------
To run the examples find them in mistral-extra repository
(https://github.com/openstack/mistral-extra) and follow the instructions on
each example.
Tests
-----
You can run some of the functional tests in non-openstack mode locally. To do
this:
#. set ``auth_enable = False`` in the ``mistral.conf`` and restart Mistral
#. execute::
$ ./run_functional_tests.sh
To run tests for only one version, you need to specify it::
$ bash run_functional_tests.sh v1
More information about automated tests for Mistral can be found on
`Mistral Wiki <https://wiki.openstack.org/wiki/Mistral/Testing>`_.

View File

@ -1,13 +0,0 @@
Mistral Devstack Installation
=============================
1. Download DevStack::
$ git clone https://github.com/openstack-dev/devstack.git
$ cd devstack
2. Add this repo as an external repository by editing the ``localrc`` file::
enable_plugin mistral https://github.com/openstack/mistral
3. Run ``stack.sh``

View File

@ -1,175 +0,0 @@
===================================
How to write a custom YAQL function
===================================
********
Tutorial
********
1. Create a new Python project, an empty folder, containing a basic ``setup.py`` file.
.. code-block:: bash
$ mkdir my_project
$ cd my_project
$ vim setup.py
.. code-block:: python
try:
    from setuptools import setup, find_packages
except ImportError:
    from distutils.core import setup, find_packages

setup(
    name="project_name",
    version="0.1.0",
    packages=find_packages(),
    install_requires=["mistral", "yaql"],
    entry_points={
        "mistral.yaql_functions": [
            "random_uuid = my_package.sub_package.yaql:random_uuid_"
        ]
    }
)
Publish the ``random_uuid_`` function in the ``entry_points`` section, in the
``mistral.yaql_functions`` namespace in ``setup.py``. This function will be
defined later.
Note that the distribution name (``project_name`` here) will be used by pip and must
not overlap with other installed packages; it may be replaced by something else.
The Python package name (``my_package`` here) may overlap with other
packages, but module paths (``.py`` files) may not.
For example, it is possible to have a ``mistral`` package (though not
recommended), but there must not be a ``mistral/version.py`` file, which
would overlap with the file existing in the original ``mistral`` package.
``yaql`` and ``mistral`` are the required packages. ``mistral`` is necessary
in this example only because calls to the Mistral Python DB API are made.
For each entry point, the syntax is:
.. code-block:: python
"<name_of_YAQL_expression> = <path.to.module>:<function_name>"
``stevedore`` will detect all the entry points and make them available to
all Python applications needing them. Using this feature, there is no need
to modify Mistral's core code.
2. Create a package folder.
A package folder is a directory with an ``__init__.py`` file. Create a file
that will contain the custom YAQL functions. There are no restrictions on
the paths or file names used.
.. code-block:: bash
$ mkdir -p my_package/sub_package
$ touch my_package/__init__.py
$ touch my_package/sub_package/__init__.py
3. Write a function in ``yaql.py``.
The function may take ``context`` as its first argument to have the current
YAQL context available inside the function.
.. code-block:: bash
$ cd my_package/sub_package
$ vim yaql.py
.. code-block:: python
from uuid import uuid5, UUID
from time import time


def random_uuid_(context):
    """generate a UUID using the execution ID and the clock"""
    # fetch the current workflow execution ID found in the context
    execution_id = context['__execution']['id']
    time_str = str(time())
    execution_uuid = UUID(execution_id)
    return uuid5(execution_uuid, time_str)
This function returns a random UUID using the current workflow execution ID
as a namespace.
The ``context`` argument will be passed to the function by the Mistral YAQL engine.
It is invisible to the user. It contains variables from the current
task execution scope, such as ``__execution``, which is a dictionary with
information about the current workflow execution, such as its ``id``.
Note that errors can be raised and, if raised, will be displayed in the task execution
state information. Any valid Python primitives may be returned.
The ``context`` argument is optional. There can be as many arguments as needed;
even variable arguments such as ``*args`` or keyword arguments such as
``**kwargs`` can be used as function arguments.
For more information about YAQL, read the `official YAQL documentation <http://yaql.readthedocs.io/en/latest/>`_.
4. Install ``pip`` and ``setuptools``.
.. code-block:: bash
$ curl https://bootstrap.pypa.io/get-pip.py | python
$ pip install --upgrade setuptools
$ cd -
5. Install the package (note that there is a dot ``.`` at the end of the line).
.. code-block:: bash
$ pip install .
6. The YAQL function can be called in Mistral using its name ``random_uuid``.
The function name in Python ``random_uuid_`` does not matter, only the entry
point name ``random_uuid`` does.
.. code-block:: yaml
my_workflow:
  tasks:
    my_action_task:
      action: std.echo
      publish:
        random_id: <% random_uuid() %>
      input:
        output: "hello world"
****************
Updating changes
****************
After creating new functions or modifying the code, re-run
``pip install .`` and restart Mistral.
***********
Development
***********
While developing, it is sufficient to add the root source folder (the parent
folder of ``my_package``) to the ``PYTHONPATH`` environment variable and to add the
line ``random_uuid = my_package.sub_package.yaql:random_uuid_`` to the Mistral
entry points in the ``mistral.yaql_functions`` namespace. For example, if the path
to the parent folder of ``my_package`` is ``/path/to/my_project``:
.. code-block:: bash
$ export PYTHONPATH=$PYTHONPATH:/path/to/my_project
$ vim $(find / -name "mistral.*egg-info*")/entry_points.txt
.. code-block:: ini
[entry_points]
mistral.yaql_functions =
    random_uuid = my_package.sub_package.yaql:random_uuid_

View File

@ -1,12 +0,0 @@
Developer's Reference
=====================
.. toctree::
:maxdepth: 3
creating_custom_action
extending_yaql
asynchronous_actions
devstack
debug
troubleshooting

View File

@ -1,71 +0,0 @@
Troubleshooting And Debugging
=============================
Mistral-Dashboard debug instructions
------------------------------------
**Pycharm**
Debugging OpenStack Mistral-Dashboard is the same
as debugging OpenStack Horizon.
The following instructions should get you set up to debug both in the same run.
Set PyCharm debug settings:
1. Under File > Settings > Languages and Frameworks > Django, enter the following:
a. Check "Enable Django Support"
b. Django project root: your file system path to Horizon project root
c. Settings: openstack_dashboard/settings.py (under your Horizon folder)
d. Manage script: manage.py (also in your horizon folder)
e. Click OK
.. image:: ../img/Mistral_dashboard_django_settings.png
2. Open the debug configurations menu using the tiny arrow pointing down,
to the left of the "play" icon, or under the Run menu
.. image:: ../img/Pycharm_run_config_menu.png
3. In the new window, click the green plus icon and then select "Django server"
to create a new Django Server configuration.
4. In the new window that appears:
a. Name that configuration Horizon
b. Enter some port so it won't run on the default (for example - port: 4000)
.. image:: ../img/Mistral_dashboard_debug_config.png
5. Click on Environment variables button, then in the new window:
a. Make sure you have PYTHONUNBUFFERED set as 1
b. Create a new pair - DJANGO_SETTINGS_MODULE : openstack_dashboard.settings
c. When finished click OK.
.. image:: ../img/Mistral_dashboard_environment_variables.png
You should now be able to debug and run the project using PyCharm.
PyCharm will listen to any changes you make
and restart the Horizon server automatically.
**Note**: When executing the project via PyCharm Run / Debug,
you could get an error page
after trying to log in: "Page not found (404)".
To resolve that, remove the port from the browser URL bar,
then log in.
You should be able to log in without it.
After a successful login, bring the port back; it will continue your session.
**Further notes**
- If you need help with PyCharm and general debugging, please refer to:
`JetBrains PyCharm developer guide <https://www.jetbrains.com/pycharm/help/debugging.html>`_
- If you would like to manually restart the apache server,
open a terminal and run::
$ sudo service apache2 restart
*(if not under Ubuntu, replace "sudo" with an equivalent command)*

View File

@ -1,4 +0,0 @@
Mistral Cookbooks
=================
- `Mistral for Administration (aka Cloud Cron) <https://wiki.openstack.org/wiki/Mistral/Cookbooks/AdministrationCloudCron>`_

Binary files not shown (10 documentation images deleted, ranging from 3.3 KiB to 47 KiB).

View File

@ -1,66 +0,0 @@
Welcome to Mistral's documentation!
===================================
Mistral is the OpenStack workflow service. This project aims to provide
a mechanism to define tasks and workflows without writing code, manage
and execute them in the cloud environment.
Overview
--------
.. toctree::
:maxdepth: 1
overview
quickstart
architecture
terminology/index
main_features
cookbooks
User guide
----------
**Installation**
.. toctree::
:maxdepth: 2
install/index
configuration/index
admin/upgrade_guide
**API**
.. toctree::
:maxdepth: 2
api/index
**Mistral Workflow Language**
.. toctree::
:maxdepth: 2
admin/dsl_v2
**CLI**
.. toctree::
:maxdepth: 1
cli/index
Developer guide
---------------
.. toctree::
:maxdepth: 2
contributor/index
Indices and tables
==================
* :ref:`genindex`
* :ref:`search`

View File

@ -1,59 +0,0 @@
====================================
Mistral Dashboard Installation Guide
====================================
Mistral dashboard is the plugin for Horizon that makes it possible to control
Mistral objects through a web user interface.
Setup Instructions
------------------
These instructions assume that Horizon is already installed and that its installation
folder is <horizon>. Detailed information on how to install Horizon can be
found at `Horizon Installation <https://docs.openstack.org/horizon/latest/contributor/quickstart.html#setup>`_
The installation folder of Mistral Dashboard will be referred to as <mistral-dashboard>.
The following should get you started:
1. Clone the repository into your local OpenStack directory::
$ git clone https://github.com/openstack/mistral-dashboard.git
2. Install mistral-dashboard::
$ sudo pip install -e <mistral-dashboard>
Or if you're planning to run Horizon server in a virtual environment (see below)::
$ tox -evenv -- pip install -e ../mistral-dashboard/
and then::
$ cp -b <mistral-dashboard>/mistraldashboard/enabled/_50_mistral.py <horizon>/openstack_dashboard/local/enabled/_50_mistral.py
3. Since Mistral only supports Identity v3, you must ensure that the dashboard
points to the proper OPENSTACK_KEYSTONE_URL in the
<horizon>/openstack_dashboard/local/local_settings.py file::
OPENSTACK_API_VERSIONS = {
"identity": 3,
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
4. Also, make sure you have changed OPENSTACK_HOST to point to your Keystone
server and check all endpoints are accessible. You may want to change
OPENSTACK_ENDPOINT_TYPE to "publicURL" if some of them are not.
5. When you're ready, you need to either restart your Apache::
$ sudo service apache2 restart
or run the development server (in case you have decided to use local horizon)::
$ cd ../horizon/
$ tox -evenv -- python manage.py runserver
Debug instructions
------------------
Please refer to :doc:`Mistral Troubleshooting <../contributor/troubleshooting>`

View File

@ -1,9 +0,0 @@
Mistral User Guide
==================
.. toctree::
:maxdepth: 1
installation_guide
dashboard_guide
mistralclient_guide

View File

@ -1,177 +0,0 @@
Mistral Installation Guide
==========================
Prerequisites
-------------
It is necessary to install some specific system libraries before installing Mistral.
They can be installed on most popular operating systems using their package
manager (for Ubuntu - *apt*, for Fedora - *dnf*, for CentOS - *yum*, for Mac OS -
*brew* or *macports*).
The list of needed packages is shown below:
1. **python-dev**
2. **python-setuptools**
3. **python-pip**
4. **libffi-dev**
5. **libxslt1-dev (or libxslt-dev)**
6. **libxml2-dev**
7. **libyaml-dev**
8. **libssl-dev**
In case of Ubuntu, just run::
$ apt-get install python-dev python-setuptools python-pip libffi-dev libxslt1-dev \
libxml2-dev libyaml-dev libssl-dev
**NOTE:** **Mistral can be used without authentication at all or it can work
with OpenStack.** In the case of OpenStack, it works **only with Keystone v3**; make
sure **Keystone v3** is installed.
Installation
------------
**NOTE**: If it is needed to install Mistral using devstack, please refer to :doc:`Mistral Devstack Installation </contributor/devstack>`
First of all, clone the repo and go to the repo directory::
$ git clone https://github.com/openstack/mistral.git
$ cd mistral
Generate config::
$ tox -egenconfig
Configure Mistral as needed. The configuration file is located in
``etc/mistral.conf.sample``. You will need to modify the configuration options
and then copy it into ``/etc/mistral/mistral.conf``.
For details see :doc:`Mistral Configuration Guide </configuration/index>`
**Virtualenv installation**::
$ tox
This will install necessary virtual environments and run all the project tests.
Installing virtual environments may take significant time (~10-15 mins).
**Local installation**::
$ pip install -e .
or::
$ pip install -r requirements.txt
$ python setup.py install
**NOTE**: The difference between *pip install -e* and *setup.py install*: **pip install -e**
works very similarly to **setup.py install** or the EasyInstall tool, except
that it doesn't actually install anything. Instead, it creates a special
.egg-link file in the deployment directory that links to your project's
source code.
Before the first run
--------------------
After installation you will see **mistral-server** and **mistral-db-manage** commands
in your environment, either in system or virtual environment.
**NOTE**: When using **virtualenv**, all Mistral-related commands are available via
**tox -evenv --**. For example, *mistral-server* is available via
*tox -evenv -- mistral-server*.
The **mistral-db-manage** command can be used for migrations.
To update the database to the latest revision, type::
$ mistral-db-manage --config-file <path-to-mistral.conf> upgrade head
Before starting the Mistral server, run the *mistral-db-manage populate* command.
It prepares the DB and populates it with all the standard actions and standard
workflows which Mistral provides for all Mistral users::
$ mistral-db-manage --config-file <path-to-mistral.conf> populate
For more detailed information about *mistral-db-manage* script please see :doc:`Mistral Upgrade Guide </admin/upgrade_guide>`.
**NOTE**: For users who want a dry run with the **SQLite** database backend (not
used in production), *mistral-db-manage* is not recommended for database
initialization because of `SQLite limitations <http://www.sqlite.org/omitted.html>`_.
Please use the sync_db script described below for database initialization instead.
**If you use virtualenv**::
$ tools/sync_db.sh --config-file <path-to-mistral.conf>
**Or run sync_db directly**::
$ python tools/sync_db.py --config-file <path-to-mistral.conf>
Running Mistral API server
--------------------------
To run Mistral API server perform the following command in a shell::
$ mistral-server --server api --config-file <path-to-mistral.conf>
Running Mistral Engines
-----------------------
To run Mistral Engine perform the following command in a shell::
$ mistral-server --server engine --config-file <path-to-mistral.conf>
Running Mistral Task Executors
------------------------------
To run Mistral Task Executor instance perform the following command in a shell::
$ mistral-server --server executor --config-file <path-to-mistral.conf>
Note that at least one Engine instance and one Executor instance should be
running so that workflow tasks are processed by Mistral.
Running Multiple Mistral Servers Under the Same Process
-------------------------------------------------------
To run more than one server (API, Engine, or Task Executor) on the same process,
perform the following command in a shell::
$ mistral-server --server api,engine --config-file <path-to-mistral.conf>
The --server command line option can be a comma-delimited list. The valid
options are "all" (the default if not specified) or any combination of "api",
"engine", and "executor". It's important to note that the "fake" transport for
the rpc_backend defined in the config file should only be used if "all" the
Mistral servers are launched in the same process. Otherwise, messages do not
get delivered if the Mistral servers are launched in different processes
because the "fake" transport uses an in-process queue.
Mistral And Docker
------------------
Please first refer to the `installation steps for docker <https://docs.docker.com/installation/>`_.
To build the image from the mistral source, change directory to the root
directory of the Mistral git repository and run::
$ docker build -t <Name of image> .
In case you want a pre-built image, you can download it from the `openstack tarballs source <https://tarballs.openstack.org/mistral/images/mistral-docker.tar.gz>`_.
To load this image into the docker registry, please run the following command::
$ docker load -i '<path of mistral-docker.tar.gz>'
The Mistral Docker image is configured to store the database in the user's home
directory. For persistence of this data, you may want to keep this directory
outside of the container. This may be done with the following steps::
$ sudo mkdir '<user-defined-directory>'
$ docker run -it -v '<user-defined-directory>':/home/mistral <Name of image>
More about docker: https://www.docker.com/
**NOTE:** This docker image uses an **SQLite** database, so it cannot be used in a
production environment. If you want to use it in a production environment,
put a customized mistral.conf into '<user-defined-directory>'.
Mistral Client Installation
---------------------------
Please refer to :doc:`Mistral Client / CLI Guide <../cli/index>`

View File

@ -1,145 +0,0 @@
Mistral Client Installation Guide
=================================
To install ``python-mistralclient``, it is required to have ``pip``
(in most cases). Make sure that ``pip`` is installed. Then type::
$ pip install python-mistralclient
Or, if it is needed to install ``python-mistralclient`` from master branch,
type::
$ pip install git+https://github.com/openstack/python-mistralclient.git
After ``python-mistralclient`` is installed you will see command ``mistral``
in your environment.
Configure authentication against Keystone
-----------------------------------------
If Keystone is used for authentication in Mistral, then the environment should
have auth variables::
$ export OS_AUTH_URL=http://<Keystone_host>:5000/v2.0
$ export OS_TENANT_NAME=tenant
$ export OS_USERNAME=admin
$ export OS_PASSWORD=secret
$ export OS_MISTRAL_URL=http://<Mistral host>:8989/v2 (optional, by default URL=http://localhost:8989/v2)
and in the case when you are authenticating against keystone over https::
$ export OS_CACERT=<path_to_ca_cert>
.. note:: In the client, we can use both Keystone auth versions - v2.0 and v3. But the server supports only v3.
You can see the list of available commands by typing::
$ mistral --help
To make sure Mistral client works, type::
$ mistral workbook-list
Configure authentication against Keycloak
-----------------------------------------
Mistral also supports authentication against Keycloak server via OpenID Connect protocol.
In order to use it on the client side the environment should look as follows::
$ export MISTRAL_AUTH_TYPE=keycloak-oidc
$ export OS_AUTH_URL=https://<Keycloak-server-host>:<Keycloak-server-port>/auth
$ export OS_TENANT_NAME=my_keycloak_realm
$ export OS_USERNAME=admin
$ export OS_PASSWORD=secret
$ export OPENID_CLIENT_ID=my_keycloak_client
$ export OPENID_CLIENT_SECRET=my_keycloak_client_secret
$ export OS_MISTRAL_URL=http://<Mistral host>:8989/v2 (optional, by default URL=http://localhost:8989/v2)
.. note:: Variables OS_TENANT_NAME, OS_USERNAME, OS_PASSWORD are used for both Keystone and Keycloak
authentication. In the case of Keycloak, OS_TENANT_NAME needs to correspond to a Keycloak realm. Unlike
Keystone, Keycloak requires registering, in advance, a client that accesses resources (the Mistral server in
our case) protected by Keycloak. For this reason, the OPENID_CLIENT_ID and
OPENID_CLIENT_SECRET variables should be assigned the correct values as registered in Keycloak.
Similar to Keystone, the OS_CACERT variable can also be added to provide a certificate for SSL/TLS
verification::
$ export OS_CACERT=<path_to_ca_cert>
In order to disable SSL/TLS certificate verification, the MISTRALCLIENT_INSECURE variable needs to be set
to True::
$ export MISTRALCLIENT_INSECURE=True
Targeting non-preconfigured clouds
----------------------------------
Mistral is capable of executing workflows on external OpenStack clouds,
different from the one defined in the `mistral.conf` file in the
`keystone_authtoken` section. (More detail in the :doc:`/configuration/index`).
For example, the Mistral server may be configured to authenticate with the
`http://keystone1.example.com` cloud while the user wants to execute the workflow
on the `http://keystone2.example.com` cloud.
The mistral.conf will look like::
[keystone_authtoken]
auth_uri = http://keystone1.example.com:5000/v3
...
The client side parameters will be::
$ export OS_AUTH_URL=http://keystone1.example.com:5000/v3
$ export OS_USERNAME=mistral_user
...
$ export OS_TARGET_AUTH_URL=http://keystone2.example.com:5000/v3
$ export OS_TARGET_USERNAME=cloud_user
...
.. note:: Every `OS_*` parameter has an `OS_TARGET_*` correspondent. For more
detail, check out `mistral --help`
The `OS_*` parameters are used to authenticate and authorize the user with
Mistral, that is, to check if the user is allowed to utilize the Mistral
service, whereas the `OS_TARGET_*` parameters are used to define the user that
executes the workflow on the external cloud, keystone2.example.com.
Use cases
^^^^^^^^^
**Authenticate in Mistral and execute OpenStack actions with different users**
As a user of Mistral, I want to execute a workflow with a different user on the
cloud.
**Execute workflows on any OpenStack cloud**
As a user of Mistral, I want to execute a workflow on a cloud of my choice.
Special cases
^^^^^^^^^^^^^
**Using Mistral with zero OpenStack configuration**:
With the targeting feature, it is possible to execute a workflow on any
arbitrary cloud without additional configuration on the Mistral server side.
If authentication is turned off in the Mistral server (Pecan's
`auth_enable = False` option in `mistral.conf`), there is no need to set the
`keystone_authtoken` section. It is possible to have Mistral use an external
OpenStack cloud even when it isn't deployed in an OpenStack environment (i.e.
no Keystone integration).
With this setup, the following call will return the heat stack list::
$ mistral \
--os-target-auth-url=http://keystone2.example.com:5000/v3 \
--os-target-username=testuser \
--os-target-tenant=testtenant \
--os-target-password="MistralRuleZ" \
run-action heat.stacks_list
This setup is particularly useful when Mistral is used in standalone mode, when
the Mistral service is not part of the OpenStack cloud and runs separately.
Note that only the `OS_TARGET_*` parameters enable this operation.

View File

@ -1,324 +0,0 @@
Mistral Main Features
=====================
Task result / Data flow
-----------------------
Mistral supports transferring data from one task to another. In other words,
if *taskA* produces a value then *taskB* which follows *taskA* can use it.
In order to use this data Mistral relies on a query language called `YAQL <https://github.com/openstack/yaql>`_.
YAQL is a powerful yet simple tool that allows the user to filter information,
transform data and call functions. Find more information about it in the
`YAQL official documentation <http://yaql.readthedocs.org>`_ . This mechanism
for transferring data plays a central role in the workflow concept and is
referred to as Data Flow.
Below is a simple example of what Mistral Data Flow looks like from the Mistral
Workflow Language perspective:
.. code-block:: mistral
version: '2.0'

my_workflow:
  input:
    - host
    - username
    - password

  tasks:
    task1:
      action: std.ssh host=<% $.host %> username=<% $.username %> password=<% $.password %>
      input:
        cmd: "cd ~ && ls"
      on-complete: task2

    task2:
      action: do_something data=<% task(task1).result %>
The task called "task1" produces a result that contains a list of the files in
a user's home folder on a host (both username and host are provided as workflow
input) and the task "task2" uses this data via the YAQL expression
"task(task1).result". "task()" here is a function registered in YAQL by
Mistral to get information about a task by its name.
Task affinity
-------------
Task affinity is a feature which could be useful for executing particular
tasks on specific Mistral executors. In fact, there are 2 cases:
1. You need to execute the task on a single executor.
2. You need to execute the task on any executor within a named group.
To enable the task affinity feature, edit the "host" property in the
"executor" section of the configuration file::
[executor]
host = my_favorite_executor
Then start (restart) the executor. Use the "target" task property to specify
this executor in Mistral Workflow Language::
... Workflow YAML ...
task1:
  ...
  target: my_favorite_executor
... Workflow YAML ...
Task policies
-------------
Any Mistral task, regardless of its workflow type, can optionally have
policies configured. Policies control the flow of the task - for example,
a policy can delay task execution before the task starts or after the task
completes.
YAML example
^^^^^^^^^^^^
.. code-block:: yaml
my_task:
  action: my_action
  pause-before: true
  wait-before: 2
  wait-after: 4
  timeout: 30
  retry:
    count: 10
    delay: 20
    break-on: <% $.my_var = true %>
There are different types of policies in Mistral.
1. **pause-before**
Specifies whether Mistral Engine should put the workflow on pause or not
before starting a task.
2. **wait-before**
Specifies a delay in seconds that Mistral Engine should wait before starting
a task.
3. **wait-after**
Specifies a delay in seconds that Mistral Engine should wait after a task
has completed before starting the tasks specified in *'on-success'*,
*'on-error'* or *'on-complete'*.
4. **timeout**
Specifies a period of time in seconds after which a task will be failed
automatically by the engine if it hasn't completed.
5. **retry**
Specifies a pattern for how the task should be repeated.
* *count* - Specifies a maximum number of times that a task can be repeated.
* *delay* - Specifies a delay in seconds between subsequent task iterations.
* *break-on* - Specifies a YAQL expression that will break the iteration loop
if it evaluates to *'true'*. If it fires then the task is considered to
have experienced an error.
* *continue-on* - Specifies a YAQL expression that will continue the iteration
loop if it evaluates to *'true'*. If it fires then the task is considered
successful.
A retry policy can also be configured on a single line, as follows
.. code-block:: yaml
task1:
  action: my_action
  retry: count=10 delay=5 break-on=<% $.foo = 'bar' %>
All parameter values for any policy can be defined as YAQL expressions.
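For instance, a policy value can be taken from the workflow context. The sketch below
assumes hypothetical workflow variables ``timeout_seconds`` and ``retry_count``:
.. code-block:: yaml

    my_task:
      action: my_action
      timeout: <% $.timeout_seconds %>
      retry:
        count: <% $.retry_count %>
        delay: 2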
Join
----
The join flow control allows synchronizing multiple parallel workflow branches
and aggregating their data.
**Full join (join: all)**.
YAML example
^^^^^^^^^^^^
.. code-block:: yaml
register_vm_in_load_balancer:
  ...
  on-success:
    - wait_for_all_registrations

register_vm_in_dns:
  ...
  on-success:
    - wait_for_all_registrations

try_to_do_something_without_registration:
  ...
  on-error:
    - wait_for_all_registrations

wait_for_all_registrations:
  join: all
  action: send_email
When a task has the property *"join"* assigned the value *"all"*, the task will
run only if all upstream tasks (ones that lead to this task) are completed
and the corresponding conditions have triggered. Task A is considered an upstream
task of Task B if Task A has Task B mentioned in any of its *"on-success"*,
*"on-error"* or *"on-complete"* clauses, regardless of YAQL guard expressions.
**Partial join (join: 2)**
YAML example
^^^^^^^^^^^^
.. code-block:: yaml
register_vm_in_load_balancer:
  ...
  on-success:
    - wait_for_two_registrations

register_vm_in_dns:
  ...
  on-success:
    - wait_for_two_registrations

register_vm_in_zabbix:
  ...
  on-success:
    - wait_for_two_registrations

wait_for_two_registrations:
  join: 2
  action: send_email
When a task has a numeric value assigned to the property *"join"*, the
task will run once at least this number of upstream tasks have completed and
the corresponding conditions have triggered. In the example above, the task
"wait_for_two_registrations" will run if any two of the "register_vm_xxx"
tasks are complete.
**Discriminator (join: one)**
The discriminator is a special case of the partial join where the *"join"* property has the value 1.
In this case, instead of 1, it is possible to specify the special string value *"one"*,
which is introduced for symmetry with *"all"*. However, it's up to the user whether to use *"1"* or *"one"*.
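A minimal sketch of a discriminator task is shown below. The task name is illustrative;
its upstream tasks would point to it via their *"on-success"* clauses as in the
examples above.
.. code-block:: yaml

    wait_for_first_registration:
      join: one
      action: send_email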
Processing collections (with-items)
-----------------------------------
YAML example
^^^^^^^^^^^^
.. code-block:: yaml
---
version: '2.0'

create_vms:
  description: Creating multiple virtual servers using "with-items".

  input:
    - vm_names
    - image_ref
    - flavor_ref

  output:
    vm_ids: <% $.vm_ids %>

  tasks:
    create_servers:
      with-items: vm_name in <% $.vm_names %>
      action: nova.servers_create name=<% $.vm_name %> image=<% $.image_ref %> flavor=<% $.flavor_ref %>
      publish:
        vm_ids: <% $.create_servers.id %>
      on-success:
        - wait_for_servers

    wait_for_servers:
      with-items: vm_id in <% $.vm_ids %>
      action: nova.servers_find id=<% $.vm_id %> status='ACTIVE'
      retry:
        delay: 5
        count: <% $.vm_names.len() * 10 %>
The workflow *"create_vms"* in this example creates as many virtual servers
as we provide in the *"vm_names"* input parameter. E.g., if we specify
*vm_names=["vm1", "vm2"]* then it'll create servers with these names based on
the same image and flavor. This is possible because we are using the *"with-items"*
keyword, which makes the action or workflow associated with a task run multiple times.
The value of the *"with-items"* task property contains an expression in the
form: **<variable_name> in <% YAQL_expression %>**.
The most common form is
.. code-block:: yaml
with-items:
  - var1 in <% YAQL_expression_1 %>
  - var2 in <% YAQL_expression_2 %>
  ...
  - varN in <% YAQL_expression_N %>
where the collections expressed as YAQL_expression_1, YAQL_expression_2, ...,
YAQL_expression_N must have equal sizes. When a task gets started, Mistral
will iterate over all collections in parallel, i.e. the number of iterations
will be equal to the length of any of the collections.
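As a sketch, a task iterating over two collections of equal size might look as
follows; the task name, variable names and the action with its parameters are
illustrative placeholders only:
.. code-block:: yaml

    attach_volumes:
      with-items:
        - vm_id in <% $.vm_ids %>
        - volume_id in <% $.volume_ids %>
      action: my_action server=<% $.vm_id %> volume=<% $.volume_id %>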
Note that in the *"with-items"* case, the task result (accessible in the workflow
context as <% $.task_name %>) will be a list containing the results of the
corresponding action/workflow calls. If at least one action/workflow call
fails, the whole task will go into the *ERROR* state. It's also possible to
apply a retry policy to tasks with a *"with-items"* property. In this case the
retry policy will relaunch all action/workflow calls according to the
*"with-items"* configuration. Other policies can also be used in the same way
as with regular non-*"with-items"* tasks.
Execution expiration policy
---------------------------
When Mistral is used in production it can be difficult to control the number
of completed workflow executions. By default Mistral will store all
executions indefinitely and over time the number stored will accumulate. This
can be resolved by setting an expiration policy.
**By default this feature is disabled.**
This policy defines the maximum age of an execution since the last update time
(in minutes) and the maximum number of finished executions. Each evaluation will
enforce these conditions, so expired executions (older than specified) will
be deleted, and the number of executions in a finished state (regardless of expiration)
will be limited to max_finished_executions.
To enable the policy, edit the Mistral configuration file and specify
``evaluation_interval`` and at least one of the ``older_than``
or ``max_finished_executions`` options.
.. code-block:: cfg
[execution_expiration_policy]
evaluation_interval = 120 # 2 hours
older_than = 10080 # 1 week
max_finished_executions = 500
- **evaluation_interval**
The evaluation interval defines how frequently Mistral will check and ensure
the above mentioned constraints. In the above example it is set to two hours,
so every two hours Mistral will remove executions older than 1 week, and
keep only the 500 latest finished executions.
- **older_than**
Defines the maximum age of an execution in minutes since it was last
updated. It must be greater or equal to ``1``.
- **max_finished_executions**
Defines the maximum number of finished executions.
It must be greater or equal to ``1``.

View File

@ -1,81 +0,0 @@
Mistral Overview
================
What is Mistral?
----------------
Mistral is a workflow service. Most business processes consist of multiple
distinct interconnected steps that need to be executed in a particular order
in a distributed environment. A user can describe such a process as a set of
tasks and their transitions. After that, it is possible to upload such a
description to Mistral, which will take care of state management, correct
execution order, parallelism, synchronization and high availability. Mistral
also provides flexible task scheduling so that it can run a process according
to a specified schedule (for example, every Sunday at 4.00pm) instead of
running it immediately. In Mistral terminology such a set of tasks and
relations between them is called a **workflow**.
Main use cases
--------------
Task scheduling - Cloud Cron
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A user can use Mistral to schedule tasks to run within a cloud. Tasks can be
anything from executing local processes (shell scripts, binaries) on specified
virtual instances to calling REST APIs accessible in a cloud environment. They
can also be tasks related to cloud management like creating or terminating
virtual instances. It is important that several tasks can be combined in a
single workflow and run in a scheduled manner (for example, on Sundays at 2.00
am). Mistral will take care of their parallel execution (if it's logically
possible) and fault tolerance, and will provide workflow execution
management/monitoring capabilities (stop, resume, current status, errors and
other statistics).
Cloud environment deployment
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A user or a framework can use Mistral to specify workflows needed for
deploying environments consisting of multiple VMs and applications.
Long-running business process
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A user makes a request to run a complex multi-step business process and
wants it to be fault-tolerant so that if the execution crashes at some point
on one node then another active node of the system can automatically take on
and continue from the exact same point where it stopped. In this use case the
user splits the business process into a set of tasks and lets Mistral handle
them, in the sense that it serves as a coordinator and decides what particular
task should be started at what time, so Mistral calls back with "Execute
action X, here is the data". If an application that executes action X dies,
then another instance takes over responsibility to continue the work.
Big Data analysis & reporting
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A data analyst can use Mistral as a tool for data crawling. For example,
in order to prepare a financial report the whole set of steps for gathering
and processing required report data can be represented as a graph of related
Mistral tasks. As with other cases, Mistral makes sure to supply fault
tolerance, high availability and scalability.
Live migration
^^^^^^^^^^^^^^
A user specifies tasks for VM live migration triggered upon an event from
Ceilometer (CPU consumption 100%).
Rationale
---------
The main idea behind the Mistral service includes the following main points:
- Ability to upload custom workflow definitions.
- The actual task execution may not be performed by the service itself.
The service can rather serve as a coordinator for other worker processes
that do the actual work, and notify back about task execution results.
In other words, task execution may be asynchronous, thus providing
flexibility for plugging in any domain specific handling and opportunities
to make this service scalable and highly available.
- The service provides a notion of **task action**, which is a pluggable piece
of logic that a workflow task is associated with. Out of the box, the service
provides a set of standard actions for user convenience. However, the user
can create custom actions based on the standard action pack.

View File

@ -1,170 +0,0 @@
Quick Start
===========
Prerequisites
-------------
Before you start following this guide, make sure you have completed these
three prerequisites.
Install and run Mistral
~~~~~~~~~~~~~~~~~~~~~~~
Go through the installation manual: :doc:`Mistral Installation Guide <install/index>`
Install Mistral client
~~~~~~~~~~~~~~~~~~~~~~
To install mistralclient, please refer to :doc:`Mistral Client / CLI Guide <cli/index>`
Export Keystone credentials
~~~~~~~~~~~~~~~~~~~~~~~~~~~
To use the OpenStack command line tools you should specify environment
variables with the configuration details for your OpenStack installation. The
following example assumes that the Identity service is at ``127.0.0.1:5000``,
with a user ``admin`` in the ``admin`` tenant whose password is ``password``:
.. code-block:: bash
$ export OS_AUTH_URL=http://127.0.0.1:5000/v2.0/
$ export OS_TENANT_NAME=admin
$ export OS_USERNAME=admin
$ export OS_PASSWORD=password
Write a workflow
----------------
For example, we have the following workflow.
.. code-block:: mistral
---
version: "2.0"

my_workflow:
  type: direct

  input:
    - names

  tasks:
    task1:
      with-items: name in <% $.names %>
      action: std.echo output=<% $.name %>
      on-success: task2

    task2:
      action: std.echo output="Done"
This simple workflow iterates through a list of names in ``task1`` (using
`with-items`), stores them as a task result (using the `std.echo` action) and
then stores the word "Done" as a result of the second task (`task2`).
To learn more about the Mistral Workflows and what you can do, read the
:doc:`Mistral Workflow Language specification <admin/dsl_v2>`
Upload the workflow
-------------------
Use the *Mistral CLI* to create the workflow::
$ mistral workflow-create <workflow.yaml>
The output should look similar to this::
+------------------------------------+-------------+--------+---------+---------------------+------------+
|ID | Name | Tags | Input | Created at | Updated at |
+------------------------------------+-------------+--------+---------+---------------------+------------+
|9b719d62-2ced-47d3-b500-73261bb0b2ad| my_workflow | <none> | names | 2015-08-13 08:44:49 | None |
+------------------------------------+-------------+--------+---------+---------------------+------------+
Run the workflow and check the result
-------------------------------------
Use the *Mistral CLI* to start the new workflow, passing in a list of names
as JSON::
$ mistral execution-create my_workflow '{"names": ["John", "Mistral", "Ivan", "Crystal"]}'
Make sure the output is like the following::
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| ID | 49213eb5-196c-421f-b436-775849b55040 |
| Workflow ID | 9b719d62-2ced-47d3-b500-73261bb0b2ad |
| Workflow name | my_workflow |
| Description | |
| Task Execution ID | <none> |
| State | RUNNING |
| State info | None |
| Created at | 2017-03-06 11:24:10 |
| Updated at | 2017-03-06 11:24:10 |
+-------------------+--------------------------------------+
After a moment, check the status of the workflow execution (replace the
example execution id with the ID output above)::
$ mistral execution-get 49213eb5-196c-421f-b436-775849b55040
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| ID | 49213eb5-196c-421f-b436-775849b55040 |
| Workflow ID | 9b719d62-2ced-47d3-b500-73261bb0b2ad |
| Workflow name | my_workflow |
| Description | |
| Task Execution ID | <none> |
| State | SUCCESS |
| State info | None |
| Created at | 2017-03-06 11:24:10 |
| Updated at | 2017-03-06 11:24:20 |
+-------------------+--------------------------------------+
The status of each **task** also can be checked::
$ mistral task-list 49213eb5-196c-421f-b436-775849b55040
+--------------------------------------+-------+---------------+--------------------------------------+---------+------------+---------------------+---------------------+
| ID | Name | Workflow name | Execution ID | State | State info | Created at | Updated at |
+--------------------------------------+-------+---------------+--------------------------------------+---------+------------+---------------------+---------------------+
| f639e7a9-9609-468e-aa08-7650e1472efe | task1 | my_workflow | 49213eb5-196c-421f-b436-775849b55040 | SUCCESS | None | 2017-03-06 11:24:11 | 2017-03-06 11:24:17 |
| d565c5a0-f46f-4ebe-8655-9eb6796307a3 | task2 | my_workflow | 49213eb5-196c-421f-b436-775849b55040 | SUCCESS | None | 2017-03-06 11:24:17 | 2017-03-06 11:24:18 |
+--------------------------------------+-------+---------------+--------------------------------------+---------+------------+---------------------+---------------------+
Check the result of task *'task1'*::
$ mistral task-get-result f639e7a9-9609-468e-aa08-7650e1472efe
[
"John",
"Mistral",
"Ivan",
"Crystal"
]
If needed, we can go deeper and look at a list of the results of the
**action_executions** of a single task::
$ mistral action-execution-list f639e7a9-9609-468e-aa08-7650e1472efe
+--------------------------------------+----------+---------------+-----------+--------------------------------------+---------+----------+---------------------+---------------------+
| ID | Name | Workflow name | Task name | Task ID | State | Accepted | Created at | Updated at |
+--------------------------------------+----------+---------------+-----------+--------------------------------------+---------+----------+---------------------+---------------------+
| 4e0a60be-04df-42d7-aa59-5107e599d079 | std.echo | my_workflow | task1 | f639e7a9-9609-468e-aa08-7650e1472efe | SUCCESS | True | 2017-03-06 11:24:12 | 2017-03-06 11:24:16 |
| 5bd95da4-9b29-4a79-bcb1-298abd659bd6 | std.echo | my_workflow | task1 | f639e7a9-9609-468e-aa08-7650e1472efe | SUCCESS | True | 2017-03-06 11:24:12 | 2017-03-06 11:24:16 |
| 6ae6c19e-b51b-4910-9e0e-96c788093715 | std.echo | my_workflow | task1 | f639e7a9-9609-468e-aa08-7650e1472efe | SUCCESS | True | 2017-03-06 11:24:12 | 2017-03-06 11:24:16 |
| bed5a6a2-c1d8-460f-a2a5-b36f72f85e19 | std.echo | my_workflow | task1 | f639e7a9-9609-468e-aa08-7650e1472efe | SUCCESS | True | 2017-03-06 11:24:12 | 2017-03-06 11:24:17 |
+--------------------------------------+----------+---------------+-----------+--------------------------------------+---------+----------+---------------------+---------------------+
Check the result of the first **action_execution**::
$ mistral action-execution-get-output 4e0a60be-04df-42d7-aa59-5107e599d079
{
"result": "John"
}
**Congratulations! Now you are ready to use OpenStack Workflow Service!**

View File

@ -1,47 +0,0 @@
Actions
=======
An action is a particular instruction associated with a task that will be
performed when the task runs. For instance: running a shell script, making an
HTTP request, or sending a signal to an external system. Actions can be
synchronous or asynchronous.
With synchronous actions, Mistral will send a signal to the Mistral Executor
and wait for a result. Once the Executor completes the action, the result will
be sent to the Mistral Engine.
With asynchronous actions, Mistral will send a signal to a third party service
and wait for a corresponding action result to be delivered back via the Mistral
API. Once the signal has been sent, Mistral isn't responsible for the state and
result of the action. The third-party service should send a request to the
Mistral API and provide information corresponding to the *action execution* and
its state and result.
.. image:: /img/Mistral_actions.png
:doc:`How to work with asynchronous actions </contributor/asynchronous_actions>`
System actions
--------------
System actions are provided by Mistral out of the box and are available to all
users. Additional actions can be added via the custom action plugin mechanism.
:doc:`How to write an Action Plugin </contributor/creating_custom_action>`
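As a rough illustration (not part of the original documentation), a minimal
custom synchronous action might look like the sketch below; it assumes the
``mistral_lib`` action interface (``run``/``test``) used elsewhere in this
repository, and such a class would still need to be registered as an action
plugin as described in the linked guide::

    from mistral_lib import actions


    class GreetAction(actions.Action):
        """Hypothetical action that builds a greeting string."""

        def __init__(self, name):
            self.name = name

        def run(self, context):
            # Synchronous actions return their result directly.
            return "Hello, %s!" % self.name

        def test(self, context):
            # Representative result for 'dry-run' mode.
            return "Hello, test!"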
Ad-hoc actions
--------------
Ad-hoc actions are defined in YAML files by users. They wrap existing actions
and their main goal is to simplify using the same action multiple times. For
example, if the same HTTP request is used in multiple workflows, it can be
defined in one place and then re-used without the need to duplicate all of the
parameters.
For more about actions, see :ref:`actions-dsl`.
.. note::
Nested ad-hoc actions (i.e. ad-hoc actions wrapping around other ad-hoc
actions) are not currently supported.

View File

@ -1,12 +0,0 @@
Cron-triggers
=============
A cron trigger is an object that allows running a workflow on a schedule. The
user specifies which workflow needs to be run, with what input, and how often
it should be run.
.. image:: /img/Mistral_cron_trigger.png
:align: center
A cron pattern is used to describe the frequency of execution in Mistral.
For more about cron patterns, refer to `Cron expression <https://en.wikipedia.org/wiki/Cron#CRON_expression>`_
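For a rough illustration (not part of the original documentation) of how a
cron pattern maps to concrete run times, the ``croniter`` library that
Mistral relies on for cron patterns can be queried directly; the pattern and
dates below are arbitrary examples::

    from datetime import datetime

    from croniter import croniter

    # "At minute 0 of every hour".
    pattern = '0 * * * *'

    itr = croniter(pattern, datetime(2017, 3, 6, 11, 24))

    print(itr.get_next(datetime))  # 2017-03-06 12:00:00
    print(itr.get_next(datetime))  # 2017-03-06 13:00:00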

View File

@ -1,61 +0,0 @@
Executions
==========
Executions are runtime objects that reflect information about the progress
and state of a concrete execution type.
Workflow execution
------------------
A particular execution of a specific workflow. When a user submits a workflow
to run, Mistral creates an object in the database for the execution of this
workflow. It contains all information about the workflow itself, the execution
progress and state, and the input and output data. A workflow execution
contains at least one *task execution*.
A workflow execution can be in one of a number of predefined states reflecting
its current status:
* **RUNNING** - workflow is currently being executed.
* **PAUSED** - workflow is paused.
* **SUCCESS** - workflow has finished successfully.
* **ERROR** - workflow has finished with an error.
Task execution
--------------
Defines a workflow execution step. It has a state and result.
**Task state**
A task can be in one of a number of predefined states reflecting its current
status:
* **IDLE** - task is not started yet; probably not all requirements are
satisfied.
* **WAITING** - task execution object has been created but it is not ready to
start because some preconditions are not met. **NOTE:** The task may never
run just because some of the preconditions may never be met.
* **RUNNING_DELAYED** - task was in the running state before and the task
execution has been delayed for a precise amount of time.
* **RUNNING** - task is currently being executed.
* **SUCCESS** - task has finished successfully.
* **ERROR** - task has finished with an error.
All the actual task states belonging to the current execution are persisted
in the DB.
The task result is an aggregation of all *action executions* belonging to the
current *task execution*. Usually one *task execution* has at least one
*action execution*, but if a task executes a nested workflow, the
*task execution* won't have any *action executions*. Instead, there will be
at least one nested *workflow execution*.
Action execution
----------------
The execution of a specific action. For details about actions, please refer
to :ref:`actions-dsl`.
An action execution has a state, input and output data.
Usually an action execution belongs to a task execution, but Mistral is also
able to run standalone action executions.
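A rough programmatic sketch (not from the original documentation) of this
life cycle using python-mistralclient is shown below; the authentication
parameters and keyword argument names are assumptions and depend on the
deployment::

    import time

    from mistralclient.api import client as mistral_client

    # Placeholder credentials; adjust to your cloud.
    mistral = mistral_client.client(
        username='admin',
        api_key='secret',
        project_name='admin',
        auth_url='http://localhost:5000/v3'
    )

    # Start a workflow execution.
    wf_ex = mistral.executions.create('my_workflow', workflow_input={})

    # Poll the workflow execution state until it reaches a terminal state.
    while mistral.executions.get(wf_ex.id).state in ('RUNNING', 'PAUSED'):
        time.sleep(1)

    # Inspect the task executions belonging to this workflow execution.
    for task_ex in mistral.tasks.list(workflow_execution_id=wf_ex.id):
        print(task_ex.name, task_ex.state)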

View File

@ -1,11 +0,0 @@
Mistral Terminology
===================
.. toctree::
:maxdepth: 3
workbooks
workflows
actions
executions
cron_triggers

View File

@ -1,83 +0,0 @@
Workbooks
=========
Using workbooks, users can combine multiple entities of any type (workflows
and actions) into one document and upload it to the Mistral service. When
uploading a workbook, Mistral will parse it and save its workflows and actions
as independent objects which will be accessible via their own API endpoints
(/workflows and /actions). Once that's done, the workbook itself is out of the
picture: users can start workflows and reference workflows/actions as if they
had been uploaded without a workbook in the first place. However, to modify
these individual objects, the user can modify the same workbook definition and
re-upload it to Mistral (or, of course, modify the objects independently).
**Namespacing**
One thing worth noting is that when using a workbook, Mistral uses its name
as a prefix for generating the final names of the workflows and actions
included in the workbook. To illustrate this principle, let's take a look at
the figure below:
.. image:: /img/Mistral_workbook_namespacing.png
:align: center
So after a workbook has been uploaded, its workflows and actions become
independent objects, but with slightly different names: for example, a
workflow ``local_workflow1`` from a workbook named ``my_workbook`` becomes
``my_workbook.local_workflow1``.
YAML example
^^^^^^^^^^^^
::
---
version: '2.0'
name: my_workbook
description: My set of workflows and ad-hoc actions
workflows:
local_workflow1:
type: direct
tasks:
task1:
action: local_action str1='Hi' str2=' Mistral!'
on-complete:
- task2
task2:
action: global_action
...
local_workflow2:
type: reverse
tasks:
task1:
workflow: local_workflow1
task2:
workflow: global_workflow param1='val1' param2='val2'
requires: [task1]
...
actions:
local_action:
input:
- str1
- str2
base: std.echo output="<% $.str1 %><% $.str2 %>"
**NOTE:** Even though the names of objects inside workbooks change upon
uploading, Mistral allows referencing between those objects using the local
names declared in the original workbook.
**Attributes**
* **name** - Workbook name. **Required.**
* **description** - Workbook description. *Optional*.
* **tags** - String with arbitrary comma-separated values. *Optional*.
* **workflows** - Dictionary containing workflow definitions. *Optional*.
* **actions** - Dictionary containing ad-hoc action definitions. *Optional*.
For more details about Mistral Workflow Language itself, please see
:doc:`Mistral Workflow Language specification </admin/dsl_v2>`
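As a rough sketch (not from the original documentation), a workbook
definition like the one above could be uploaded with python-mistralclient;
the client factory arguments and the ``workbooks.create`` call are
assumptions based on the /workbooks endpoint mentioned earlier::

    from mistralclient.api import client as mistral_client

    # Placeholder credentials; adjust to your cloud.
    mistral = mistral_client.client(
        username='admin',
        api_key='secret',
        project_name='admin',
        auth_url='http://localhost:5000/v3'
    )

    # Upload the workbook definition from a local YAML file.
    with open('my_workbook.yaml') as f:
        mistral.workbooks.create(f.read())

    # After uploading, the contained workflows are addressable by their
    # prefixed names, e.g. 'my_workbook.local_workflow1'.
    mistral.executions.create('my_workbook.local_workflow1')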

View File

@ -1,140 +0,0 @@
Mistral Workflows
=================
A workflow is the main building block of Mistral Workflow Language, the reason
why the project exists. A workflow represents a process that can be described
in various ways and that does some job of interest to the end user.
Each workflow consists of tasks (at least one) describing what exact steps
should be taken during workflow execution.
YAML example
^^^^^^^^^^^^
::
---
version: '2.0'
create_vm:
  description: Simple workflow sample
  type: direct
  input: # Input parameter declarations
    - vm_name
    - image_ref
    - flavor_ref
  output: # Output definition
    vm_id: <% $.vm_id %>
  tasks:
    create_server:
      action: nova.servers_create name=<% $.vm_name %> image=<% $.image_ref %> flavor=<% $.flavor_ref %>
      publish:
        vm_id: <% $.id %>
      on-success:
        - wait_for_instance
    wait_for_instance:
      action: nova.servers_find id=<% $.vm_id %> status='ACTIVE'
      retry:
        delay: 5
        count: 15
Workflow types
--------------
Mistral Workflow Language v2 introduces different workflow types and the
structure of each workflow type varies according to its semantics. Currently,
Mistral provides two workflow types:
- `Direct workflow <#direct-workflow>`__
- `Reverse workflow <#reverse-workflow>`__
See corresponding sections for details.
Direct workflow
---------------
A direct workflow consists of tasks combined in a graph where each next
task starts after another one, depending on the produced result. So a direct
workflow has a notion of transition. A direct workflow is considered
completed if there aren't any transitions left that could be used to
jump to the next tasks.
.. image:: /img/Mistral_direct_workflow.png
YAML example
^^^^^^^^^^^^
::
---
version: '2.0'
create_vm_and_send_email:
  type: direct
  input:
    - vm_name
    - image_id
    - flavor_id
  output:
    result: <% $.vm_id %>
  tasks:
    create_vm:
      action: nova.servers_create name=<% $.vm_name %> image=<% $.image_id %> flavor=<% $.flavor_id %>
      publish:
        vm_id: <% $.id %>
      on-error:
        - send_error_email
      on-success:
        - send_success_email
    send_error_email:
      action: send_email to='admin@mysite.org' body='Failed to create a VM'
      on-complete:
        - fail
    send_success_email:
      action: send_email to='admin@mysite.org' body='Vm is successfully created and its id is <% $.vm_id %>'
Reverse workflow
----------------
In a reverse workflow, all relationships in the workflow task graph are
dependencies. In order to run this type of workflow, we need to specify a
task that needs to be completed; it can be conventionally called the 'target
task'. When the Mistral Engine starts a workflow, it recursively identifies
all the dependencies that need to be completed first.
.. image:: /img/Mistral_reverse_workflow.png
The figure explains how a reverse workflow works. In the example, task
**T1** is chosen as the target task. So when the workflow starts, Mistral will
run only tasks **T7**, **T8**, **T5**, **T6**, **T2** and **T1** in the
specified order (starting from tasks that have no dependencies). Tasks
**T3** and **T4** won't be a part of this workflow because there's no
route in the directed graph from **T1** to **T3** or **T4**.
YAML example
^^^^^^^^^^^^
::
---
version: '2.0'
create_vm_and_send_email:
  type: reverse
  input:
    - vm_name
    - image_id
    - flavor_id
  output:
    result: <% $.vm_id %>
  tasks:
    create_vm:
      action: nova.servers_create name=<% $.vm_name %> image=<% $.image_id %> flavor=<% $.flavor_id %>
      publish:
        vm_id: <% $.id %>
    search_for_ip:
      action: nova.floating_ips_findall instance_id=null
      publish:
        vm_ip: <% $[0].ip %>
    associate_ip:
      action: nova.servers_add_floating_ip server=<% $.vm_id %> address=<% $.vm_ip %>
      requires: [search_for_ip]
    send_email:
      action: send_email to='admin@mysite.org' body='Vm is created and id <% $.vm_id %> and ip address <% $.vm_ip %>'
      requires: [create_vm, associate_ip]
For more details about Mistral Workflow Language itself, please see
:doc:`Mistral Workflow Language specification </admin/dsl_v2>`

View File

@ -1,16 +0,0 @@
#!/bin/bash -xe
# TODO (akovi): This script is needed practically only for the CI builds.
# Should be moved to some other place
# install docker
curl -fsSL https://get.docker.com/ | sh
sudo service docker restart
sudo -E docker pull ubuntu:14.04
# build image
sudo -E tools/docker/build.sh
sudo -E docker save mistral-all | gzip > mistral-docker.tar.gz

View File

@ -1,6 +0,0 @@
- event_types:
- compute.instance.create.*
properties:
resource_id: <% $.payload.instance_id %>
project_id: <% $.context.project_id %>
user_id: <% $.context.user_id %>

View File

@ -1,32 +0,0 @@
[loggers]
keys=root
[handlers]
keys=consoleHandler, fileHandler
[formatters]
keys=verboseFormatter, simpleFormatter
[logger_root]
level=DEBUG
handlers=consoleHandler, fileHandler
[handler_consoleHandler]
class=StreamHandler
level=INFO
formatter=simpleFormatter
args=(sys.stdout,)
[handler_fileHandler]
class=FileHandler
level=INFO
formatter=verboseFormatter
args=("/var/log/mistral.log",)
[formatter_verboseFormatter]
format=%(asctime)s %(thread)s %(levelname)s %(module)s [-] %(message)s
datefmt=
[formatter_simpleFormatter]
format=%(asctime)s %(levelname)s [-] %(message)s
datefmt=

View File

@ -1,32 +0,0 @@
[loggers]
keys=root
[handlers]
keys=consoleHandler, fileHandler
[formatters]
keys=verboseFormatter, simpleFormatter
[logger_root]
level=DEBUG
handlers=consoleHandler, fileHandler
[handler_consoleHandler]
class=StreamHandler
level=INFO
formatter=simpleFormatter
args=(sys.stdout,)
[handler_fileHandler]
class=logging.handlers.RotatingFileHandler
level=INFO
formatter=verboseFormatter
args=("/var/log/mistral.log", "a", 10485760, 5)
[formatter_verboseFormatter]
format=%(asctime)s %(thread)s %(levelname)s %(module)s [-] %(message)s
datefmt=
[formatter_simpleFormatter]
format=%(asctime)s %(levelname)s [-] %(message)s
datefmt=

View File

@ -1,66 +0,0 @@
{
"admin_only": "is_admin:True",
"admin_or_owner": "is_admin:True or project_id:%(project_id)s",
"default": "rule:admin_or_owner",
"action_executions:delete": "rule:admin_or_owner",
"action_execution:create": "rule:admin_or_owner",
"action_executions:get": "rule:admin_or_owner",
"action_executions:list": "rule:admin_or_owner",
"action_executions:update": "rule:admin_or_owner",
"actions:create": "rule:admin_or_owner",
"actions:delete": "rule:admin_or_owner",
"actions:get": "rule:admin_or_owner",
"actions:list": "rule:admin_or_owner",
"actions:update": "rule:admin_or_owner",
"cron_triggers:create": "rule:admin_or_owner",
"cron_triggers:delete": "rule:admin_or_owner",
"cron_triggers:get": "rule:admin_or_owner",
"cron_triggers:list": "rule:admin_or_owner",
"environments:create": "rule:admin_or_owner",
"environments:delete": "rule:admin_or_owner",
"environments:get": "rule:admin_or_owner",
"environments:list": "rule:admin_or_owner",
"environments:update": "rule:admin_or_owner",
"executions:create": "rule:admin_or_owner",
"executions:delete": "rule:admin_or_owner",
"executions:get": "rule:admin_or_owner",
"executions:list": "rule:admin_or_owner",
"executions:update": "rule:admin_or_owner",
"members:create": "rule:admin_or_owner",
"members:delete": "rule:admin_or_owner",
"members:get": "rule:admin_or_owner",
"members:list": "rule:admin_or_owner",
"members:update": "rule:admin_or_owner",
"services:list": "rule:admin_or_owner",
"tasks:get": "rule:admin_or_owner",
"tasks:list": "rule:admin_or_owner",
"tasks:update": "rule:admin_or_owner",
"workbooks:create": "rule:admin_or_owner",
"workbooks:delete": "rule:admin_or_owner",
"workbooks:get": "rule:admin_or_owner",
"workbooks:list": "rule:admin_or_owner",
"workbooks:update": "rule:admin_or_owner",
"workflows:create": "rule:admin_or_owner",
"workflows:delete": "rule:admin_or_owner",
"workflows:get": "rule:admin_or_owner",
"workflows:list": "rule:admin_or_owner",
"workflows:list:all_projects": "rule:admin_only",
"workflows:update": "rule:admin_or_owner",
"event_triggers:create": "rule:admin_or_owner",
"event_triggers:delete": "rule:admin_or_owner",
"event_triggers:get": "rule:admin_or_owner",
"event_triggers:list": "rule:admin_or_owner",
"event_triggers:list:all_projects": "rule:admin_only",
"event_triggers:update": "rule:admin_or_owner"
}

View File

@ -1,62 +0,0 @@
[loggers]
keys=workflow_trace,profiler_trace,root
[handlers]
keys=consoleHandler, wfTraceFileHandler, profilerFileHandler, fileHandler
[formatters]
keys=wfFormatter, profilerFormatter, simpleFormatter, verboseFormatter
[logger_workflow_trace]
level=INFO
handlers=consoleHandler, wfTraceFileHandler
qualname=workflow_trace
[logger_profiler_trace]
level=INFO
handlers=profilerFileHandler
qualname=profiler_trace
[logger_root]
level=INFO
handlers=fileHandler
[handler_fileHandler]
class=FileHandler
level=INFO
formatter=verboseFormatter
args=("/var/log/mistral.log",)
[handler_consoleHandler]
class=StreamHandler
level=INFO
formatter=simpleFormatter
args=(sys.stdout,)
[handler_wfTraceFileHandler]
class=FileHandler
level=INFO
formatter=wfFormatter
args=("/var/log/mistral_wf_trace.log",)
[handler_profilerFileHandler]
class=FileHandler
level=INFO
formatter=profilerFormatter
args=("/var/log/mistral_osprofile.log",)
[formatter_verboseFormatter]
format=%(asctime)s %(thread)s %(levelname)s %(module)s [-] %(message)s
datefmt=
[formatter_simpleFormatter]
format=%(asctime)s %(levelname)s [-] %(message)s
datefmt=
[formatter_wfFormatter]
format=%(asctime)s WF [-] %(message)s
datefmt=
[formatter_profilerFormatter]
format=%(message)s
datefmt=

View File

@ -1,62 +0,0 @@
[loggers]
keys=workflow_trace,profiler_trace,root
[handlers]
keys=consoleHandler, wfTraceFileHandler, profilerFileHandler, fileHandler
[formatters]
keys=wfFormatter, profilerFormatter, simpleFormatter, verboseFormatter
[logger_workflow_trace]
level=INFO
handlers=consoleHandler, wfTraceFileHandler
qualname=workflow_trace
[logger_profiler_trace]
level=INFO
handlers=profilerFileHandler
qualname=profiler_trace
[logger_root]
level=INFO
handlers=fileHandler
[handler_fileHandler]
class=logging.handlers.RotatingFileHandler
level=INFO
formatter=verboseFormatter
args=("/var/log/mistral.log", "a", 10485760, 5)
[handler_consoleHandler]
class=StreamHandler
level=INFO
formatter=simpleFormatter
args=(sys.stdout,)
[handler_wfTraceFileHandler]
class=logging.handlers.RotatingFileHandler
level=INFO
formatter=wfFormatter
args=("/var/log/mistral_wf_trace.log", "a", 10485760, 5)
[handler_profilerFileHandler]
class=logging.handlers.RotatingFileHandler
level=INFO
formatter=profilerFormatter
args=("/var/log/mistral_osprofile.log", "a", 10485760, 5)
[formatter_verboseFormatter]
format=%(asctime)s %(thread)s %(levelname)s %(module)s [-] %(message)s
datefmt=
[formatter_simpleFormatter]
format=%(asctime)s %(levelname)s [-] %(message)s
datefmt=
[formatter_wfFormatter]
format=%(asctime)s WF [-] %(message)s
datefmt=
[formatter_profilerFormatter]
format=%(message)s
datefmt=

View File

@ -1,34 +0,0 @@
#!/bin/bash
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# This script is executed inside post_test_hook function in devstack gate.
set -ex
sudo chmod -R a+rw /opt/stack/
(cd $BASE/new/tempest/; sudo virtualenv .venv)
source $BASE/new/tempest/.venv/bin/activate
(cd $BASE/new/tempest/; sudo pip install -r requirements.txt -r test-requirements.txt)
sudo pip install nose
sudo pip install numpy
sudo cp $BASE/new/tempest/etc/logging.conf.sample $BASE/new/tempest/etc/logging.conf
(cd $BASE/new/mistral/; sudo pip install -r requirements.txt -r test-requirements.txt)
(cd $BASE/new/mistral/; sudo python setup.py install)
export TOX_TESTENV_PASSENV=ZUUL_PROJECT
(cd $BASE/new/tempest/; sudo -E testr init)
(cd $BASE/new/tempest/; sudo -E tox -eall-plugin mistral)

View File

@ -1,41 +0,0 @@
#!/bin/bash
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# How many seconds to wait for the API to be responding before giving up
API_RESPONDING_TIMEOUT=20
if ! timeout ${API_RESPONDING_TIMEOUT} sh -c "until curl --output /dev/null --silent --head --fail http://localhost:8989; do sleep 1; done"; then
echo "Mistral API failed to respond within ${API_RESPONDING_TIMEOUT} seconds"
exit 1
fi
echo "Successfully contacted Mistral API"
# Where tempest code lives
TEMPEST_DIR=${TEMPEST_DIR:-/opt/stack/new/tempest}
# Path to directory with tempest.conf file, otherwise it will
# take relative path from where the run tests command is being executed.
export TEMPEST_CONFIG_DIR=${TEMPEST_CONFIG_DIR:-$TEMPEST_DIR/etc/}
echo "Tempest configuration file directory: $TEMPEST_CONFIG_DIR"
# Where mistral code and mistralclient code live
MISTRAL_DIR=/opt/stack/new/mistral
MISTRALCLIENT_DIR=/opt/stack/new/python-mistralclient
# Define PYTHONPATH
export PYTHONPATH=$PYTHONPATH:$TEMPEST_DIR
pwd
nosetests -sv mistral_tempest_tests/tests/

View File

View File

@ -1,26 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""oslo.i18n integration module.
See https://docs.openstack.org/oslo.i18n/latest/user/usage.html
"""
import oslo_i18n
DOMAIN = 'mistral'
_translators = oslo_i18n.TranslatorFactory(domain=DOMAIN)
# The primary translation function using the well-known name "_"
_ = _translators.primary

View File

@ -1,28 +0,0 @@
# Copyright 2014 - Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_utils import importutils
def construct_action_class(action_class_str, attributes):
# Rebuild action class and restore attributes.
action_class = importutils.import_class(action_class_str)
unique_action_class = type(
action_class.__name__,
(action_class,),
attributes
)
return unique_action_class
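# Hypothetical usage sketch (not part of the original module); the class path
# and the attribute below are examples only, mirroring how the OpenStack
# action generators attach 'client_method_name' to generated classes:
#
#     cls = construct_action_class(
#         'mistral.actions.openstack.actions.NovaAction',
#         {'client_method_name': 'servers.list'}
#     )
#
#     # 'cls' is a new subclass of NovaAction carrying the restored attribute.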

View File

@ -1,31 +0,0 @@
# Copyright 2014 - Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import abc
class ActionGenerator(object):
"""Action generator.
Action generator uses some data to build Action classes
dynamically.
"""
@abc.abstractmethod
def create_actions(self, *args, **kwargs):
"""Constructs classes of needed action.
return: list of actions dicts containing name, class,
description and parameter info.
"""
pass
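# A rough illustrative subclass (not part of the original module); the action
# name and description are hypothetical, and the returned dict layout follows
# the format produced by the OpenStack generators in this package:
#
#     class StaticActionGenerator(ActionGenerator):
#
#         def create_actions(self, *args, **kwargs):
#             from mistral.actions import std_actions
#
#             return [{
#                 'class': std_actions.EchoAction,
#                 'name': 'example.echo',
#                 'description': 'Hypothetical echo action.',
#                 'arg_list': 'output'
#             }]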

View File

@ -1,91 +0,0 @@
# Copyright 2014 - Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import abc
import warnings
warnings.warn(
"mistral.actions.Action is deprecated as of the 5.0.0 release in favor of "
"mistral_lib. It will be removed in a future release.", DeprecationWarning
)
class Action(object):
"""Action.
Action is a means in Mistral to perform some useful work associated with
a workflow during its execution. Every workflow task is configured with
an action and when the task runs it eventually delegates to the action.
When that happens, task parameters get evaluated (calculating expressions,
if any) and are treated as action parameters. So, in general-purpose
programming language terminology, an action is a method declaration and a
task is a method call.
Base action class initializer doesn't have arguments. However, concrete
action classes may have any number of parameters defining action behavior.
These parameters must correspond to parameters declared in action
specification (e.g. using DSL or others).
The action initializer may have a conventional argument with the name
"action_context". If it is present, then the action factory will fill it with
a dictionary containing contextual information like the execution identifier,
workbook name and other data that may be needed for some specific action
implementations.
"""
@abc.abstractmethod
def run(self):
"""Run action logic.
:return: Result of the action. Note that for asynchronous actions
it should always be None; however, even if it's not None it will be
ignored by the caller.
Result can be of two types:
1) Any serializable value meaningful from a user perspective (such
as string, number or dict).
2) Instance of {mistral.workflow.utils.Result} which has field "data"
for success result and field "error" for keeping so called "error
result" like HTTP error code and similar. Using the second type
allows to communicate a result even in case of error and hence to have
conditions in "on-error" clause of direct workflows. Depending on
particular action semantics one or another option may be preferable.
In case if action failed and there's no need to communicate any error
result this method should throw a ActionException.
"""
pass
@abc.abstractmethod
def test(self):
"""Returns action test result.
This method runs in test mode as a test version of method run() to
generate and return a representative test result. It's basically a
contract for action 'dry-run' behavior specifically useful for
testing and workflow designing purposes.
:return: Representative action result.
"""
pass
def is_sync(self):
"""Returns True if the action is synchronous, otherwise False.
:return: True if the action is synchronous and method run() returns
final action result. Otherwise returns False which means that
a result of method run() should be ignored and a real action
result is supposed to be delivered in an asynchronous manner
using public API. By default, if a concrete implementation
doesn't override this method then the action is synchronous.
"""
return True

View File

@ -1,42 +0,0 @@
# Copyright 2014 - Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_utils import importutils
from mistral.actions.openstack.action_generator import base
SUPPORTED_MODULES = [
'Nova', 'Glance', 'Keystone', 'Heat', 'Neutron', 'Cinder', 'Ceilometer',
'Trove', 'Ironic', 'Baremetal Introspection', 'Swift', 'Zaqar', 'Barbican',
'Mistral', 'Designate', 'Magnum', 'Murano', 'Tacker', 'Aodh', 'Gnocchi',
]
def all_generators():
for mod_name in SUPPORTED_MODULES:
prefix = mod_name.replace(' ', '')
mod_namespace = mod_name.lower().replace(' ', '_')
mod_cls_name = 'mistral.actions.openstack.actions.%sAction' % prefix
mod_action_cls = importutils.import_class(mod_cls_name)
generator_cls_name = '%sActionGenerator' % prefix
yield type(
generator_cls_name,
(base.OpenStackActionGenerator,),
{
'action_namespace': mod_namespace,
'base_action_class': mod_action_cls
}
)
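# Hypothetical usage sketch (not part of the original module): iterate the
# dynamically built generator classes and collect every action definition
# they produce (this requires the OpenStack action mapping file and the
# corresponding python clients to be importable):
#
#     for generator_cls in all_generators():
#         for action in generator_cls.create_actions():
#             print(action['name'], action['arg_list'])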

View File

@ -1,168 +0,0 @@
# Copyright 2014 - Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import os
from oslo_config import cfg
from oslo_log import log as logging
import pkg_resources as pkg
from mistral.actions import action_generator
from mistral.utils import inspect_utils as i_u
from mistral import version
CONF = cfg.CONF
LOG = logging.getLogger(__name__)
def get_mapping():
def delete_comment(map_part):
for key, value in map_part.items():
if isinstance(value, dict):
delete_comment(value)
if '_comment' in map_part:
del map_part['_comment']
package = version.version_info.package
if os.path.isabs(CONF.openstack_actions_mapping_path):
mapping_file_path = CONF.openstack_actions_mapping_path
else:
path = CONF.openstack_actions_mapping_path
mapping_file_path = pkg.resource_filename(package, path)
LOG.info("Processing OpenStack action mapping from file: %s",
mapping_file_path)
with open(mapping_file_path) as fh:
mapping = json.load(fh)
for k, v in mapping.items():
if isinstance(v, dict):
delete_comment(v)
return mapping
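# For illustration only (not part of the original module): the mapping file
# is expected to be JSON of the form {namespace: {action_name: client_method}}
# where client_method is a dotted path on the python client; the values below
# are hypothetical examples:
#
#     {
#         "_comment": "Maps OpenStack action namespaces to client methods",
#         "nova": {
#             "servers_list": "servers.list",
#             "servers_create": "servers.create"
#         }
#     }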
class OpenStackActionGenerator(action_generator.ActionGenerator):
"""OpenStackActionGenerator.
Base generator for all OpenStack actions,
creates a client method declaration using
specific python-client and sets needed arguments
to actions.
"""
action_namespace = None
base_action_class = None
@classmethod
def prepare_action_inputs(cls, origin_inputs, added=[]):
"""Modify action input string.
Sometimes we need to change the default action input definition for
OpenStack actions in order to make the workflow more powerful.
Examples::
>>> prepare_action_inputs('a,b,c', added=['region=RegionOne'])
a, b, c, region=RegionOne
>>> prepare_action_inputs('a,b,c=1', added=['region=RegionOne'])
a, b, region=RegionOne, c=1
>>> prepare_action_inputs('a,b,c=1,**kwargs',
added=['region=RegionOne'])
a, b, region=RegionOne, c=1, **kwargs
>>> prepare_action_inputs('**kwargs', added=['region=RegionOne'])
region=RegionOne, **kwargs
>>> prepare_action_inputs('', added=['region=RegionOne'])
region=RegionOne
:param origin_inputs: A string consists of action inputs, separated by
comma.
:param added: (Optional) A list of params to add to input string.
:return: The new action input string.
"""
if not origin_inputs:
return ", ".join(added)
inputs = [i.strip() for i in origin_inputs.split(',')]
kwarg_index = None
for index, input in enumerate(inputs):
if "=" in input:
kwarg_index = index
if "**" in input:
kwarg_index = index - 1
kwarg_index = len(inputs) if kwarg_index is None else kwarg_index
kwarg_index = kwarg_index + 1 if kwarg_index < 0 else kwarg_index
for a in added:
if "=" not in a:
inputs.insert(0, a)
kwarg_index += 1
else:
inputs.insert(kwarg_index, a)
return ", ".join(inputs)
@classmethod
def create_action_class(cls, method_name):
if not method_name:
return None
action_class = type(str(method_name), (cls.base_action_class,),
{'client_method_name': method_name})
return action_class
@classmethod
def create_actions(cls):
mapping = get_mapping()
method_dict = mapping.get(cls.action_namespace, {})
action_classes = []
for action_name, method_name in method_dict.items():
class_ = cls.create_action_class(method_name)
try:
client_method = class_.get_fake_client_method()
except Exception:
LOG.exception("Failed to create action: %s.%s" %
(cls.action_namespace, action_name))
continue
arg_list = i_u.get_arg_list_as_str(client_method)
# Support specifying region for OpenStack actions.
modules = CONF.openstack_actions.modules_support_region
if cls.action_namespace in modules:
arg_list = cls.prepare_action_inputs(
arg_list,
added=['action_region=""']
)
description = i_u.get_docstring(client_method)
action_classes.append(
{
'class': class_,
'name': "%s.%s" % (cls.action_namespace, action_name),
'description': description,
'arg_list': arg_list,
}
)
return action_classes

View File

@ -1,845 +0,0 @@
# Copyright 2014 - Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import functools
from oslo_config import cfg
from oslo_log import log
from oslo_utils import importutils
from keystoneclient.auth import identity
from keystoneclient import httpclient
from mistral.actions.openstack import base
from mistral.utils import inspect_utils
from mistral.utils.openstack import keystone as keystone_utils
LOG = log.getLogger(__name__)
CONF = cfg.CONF
IRONIC_API_VERSION = '1.22'
"""The default microversion to pass to Ironic API.
1.22 corresponds to Newton final.
"""
def _try_import(module_name):
try:
return importutils.try_import(module_name)
except Exception as e:
msg = 'Unable to load module "%s". %s' % (module_name, str(e))
LOG.error(msg)
return None
aodhclient = _try_import('aodhclient.v2.client')
barbicanclient = _try_import('barbicanclient.client')
ceilometerclient = _try_import('ceilometerclient.v2.client')
cinderclient = _try_import('cinderclient.v2.client')
designateclient = _try_import('designateclient.v1')
glanceclient = _try_import('glanceclient.v2.client')
gnocchiclient = _try_import('gnocchiclient.v1.client')
heatclient = _try_import('heatclient.v1.client')
ironic_inspector_client = _try_import('ironic_inspector_client.v1')
ironicclient = _try_import('ironicclient.v1.client')
keystoneclient = _try_import('keystoneclient.v3.client')
magnumclient = _try_import('magnumclient.v1.client')
mistralclient = _try_import('mistralclient.api.v2.client')
muranoclient = _try_import('muranoclient.v1.client')
neutronclient = _try_import('neutronclient.v2_0.client')
novaclient = _try_import('novaclient.client')
senlinclient = _try_import('senlinclient.v1.client')
swift_client = _try_import('swiftclient.client')
tackerclient = _try_import('tackerclient.v1_0.client')
troveclient = _try_import('troveclient.v1.client')
zaqarclient = _try_import('zaqarclient.queues.v2.client')
class NovaAction(base.OpenStackAction):
_service_type = 'compute'
@classmethod
def _get_client_class(cls):
return novaclient.Client
def _create_client(self, context):
LOG.debug("Nova action security context: %s" % context)
return novaclient.Client(2, **self.get_session_and_auth(context))
@classmethod
def _get_fake_client(cls):
return cls._get_client_class()(2)
class GlanceAction(base.OpenStackAction):
_service_type = 'image'
@classmethod
def _get_client_class(cls):
return glanceclient.Client
def _create_client(self, context):
LOG.debug("Glance action security context: %s" % context)
glance_endpoint = self.get_service_endpoint()
return self._get_client_class()(
glance_endpoint.url,
region_name=glance_endpoint.region,
**self.get_session_and_auth(context)
)
@classmethod
def _get_fake_client(cls):
return cls._get_client_class()("fake_endpoint")
class KeystoneAction(base.OpenStackAction):
_service_type = 'identity'
@classmethod
def _get_client_class(cls):
return keystoneclient.Client
def _create_client(self, context):
LOG.debug("Keystone action security context: %s" % context)
kwargs = self.get_session_and_auth(context)
# NOTE(akovi): the endpoint in the token messes up
# keystone. The auth parameter should not be provided for
# these operations.
kwargs.pop('auth')
client = self._get_client_class()(**kwargs)
return client
@classmethod
def _get_fake_client(cls):
# Here we need to replace httpclient authenticate method temporarily
authenticate = httpclient.HTTPClient.authenticate
httpclient.HTTPClient.authenticate = lambda x: True
fake_client = cls._get_client_class()()
# Once we have the fake client, restore the authenticate method
httpclient.HTTPClient.authenticate = authenticate
return fake_client
class CeilometerAction(base.OpenStackAction):
_service_type = 'metering'
@classmethod
def _get_client_class(cls):
return ceilometerclient.Client
def _create_client(self, context):
LOG.debug("Ceilometer action security context: %s" % context)
ceilometer_endpoint = self.get_service_endpoint()
endpoint_url = keystone_utils.format_url(
ceilometer_endpoint.url,
{'tenant_id': context.project_id}
)
return self._get_client_class()(
endpoint_url,
region_name=ceilometer_endpoint.region,
token=context.auth_token,
username=context.user_name,
insecure=context.insecure
)
@classmethod
def _get_fake_client(cls):
return cls._get_client_class()("")
class HeatAction(base.OpenStackAction):
_service_type = 'orchestration'
@classmethod
def _get_client_class(cls):
return heatclient.Client
def _create_client(self, context):
LOG.debug("Heat action security context: %s" % context)
heat_endpoint = self.get_service_endpoint()
endpoint_url = keystone_utils.format_url(
heat_endpoint.url,
{
'tenant_id': context.project_id,
'project_id': context.project_id
}
)
return self._get_client_class()(
endpoint_url,
region_name=heat_endpoint.region,
**self.get_session_and_auth(context)
)
@classmethod
def _get_fake_client(cls):
return cls._get_client_class()("")
class NeutronAction(base.OpenStackAction):
_service_type = 'network'
@classmethod
def _get_client_class(cls):
return neutronclient.Client
def _create_client(self, context):
LOG.debug("Neutron action security context: %s" % context)
neutron_endpoint = self.get_service_endpoint()
return self._get_client_class()(
endpoint_url=neutron_endpoint.url,
region_name=neutron_endpoint.region,
token=context.auth_token,
auth_url=context.auth_uri,
insecure=context.insecure
)
class CinderAction(base.OpenStackAction):
_service_type = 'volumev2'
@classmethod
def _get_client_class(cls):
return cinderclient.Client
def _create_client(self, context):
LOG.debug("Cinder action security context: %s" % context)
cinder_endpoint = self.get_service_endpoint()
cinder_url = keystone_utils.format_url(
cinder_endpoint.url,
{
'tenant_id': context.project_id,
'project_id': context.project_id
}
)
client = self._get_client_class()(
context.user_name,
context.auth_token,
project_id=context.project_id,
auth_url=cinder_url,
region_name=cinder_endpoint.region,
insecure=context.insecure
)
client.client.auth_token = context.auth_token
client.client.management_url = cinder_url
return client
@classmethod
def _get_fake_client(cls):
return cls._get_client_class()()
class MistralAction(base.OpenStackAction):
_service_type = 'workflowv2'
@classmethod
def _get_client_class(cls):
return mistralclient.Client
def _create_client(self, context):
LOG.debug("Mistral action security context: %s" % context)
session_and_auth = self.get_session_and_auth(context)
return self._get_client_class()(
mistral_url=session_and_auth['auth'].endpoint,
**session_and_auth
)
@classmethod
def _get_fake_client(cls):
return cls._get_client_class()()
class TroveAction(base.OpenStackAction):
_service_type = 'database'
@classmethod
def _get_client_class(cls):
return troveclient.Client
def _create_client(self, context):
LOG.debug("Trove action security context: %s" % context)
trove_endpoint = self.get_service_endpoint()
trove_url = keystone_utils.format_url(
trove_endpoint.url,
{'tenant_id': context.project_id}
)
client = self._get_client_class()(
context.user_name,
context.auth_token,
project_id=context.project_id,
auth_url=trove_url,
region_name=trove_endpoint.region,
insecure=context.insecure
)
client.client.auth_token = context.auth_token
client.client.management_url = trove_url
return client
@classmethod
def _get_fake_client(cls):
return cls._get_client_class()("fake_user", "fake_passwd")
class IronicAction(base.OpenStackAction):
_service_name = 'ironic'
@classmethod
def _get_client_class(cls):
return ironicclient.Client
def _create_client(self, context):
LOG.debug("Ironic action security context: %s" % context)
ironic_endpoint = self.get_service_endpoint()
return self._get_client_class()(
ironic_endpoint.url,
token=context.auth_token,
region_name=ironic_endpoint.region,
os_ironic_api_version=IRONIC_API_VERSION,
insecure=context.insecure
)
@classmethod
def _get_fake_client(cls):
return cls._get_client_class()("http://127.0.0.1:6385/")
class BaremetalIntrospectionAction(base.OpenStackAction):
@classmethod
def _get_client_class(cls):
return ironic_inspector_client.ClientV1
@classmethod
def _get_fake_client(cls):
try:
# ironic-inspector client tries to get and validate its own
# version when created. This might require checking the keystone
# catalog if the ironic-inspector server is not listening on the
# localhost IP address. Thus, we get a session for this case.
sess = keystone_utils.get_admin_session()
return cls._get_client_class()(session=sess)
except Exception as e:
LOG.warning("There was an error trying to create the "
"ironic-inspector client using a session: %s" % str(e))
# If it's not possible to establish a keystone session, attempt to
# create a client without it. This should fall back to where the
# ironic-inspector client tries to get its own version on the
# default IP address.
LOG.debug("Attempting to create the ironic-inspector client "
"without a session.")
return cls._get_client_class()()
def _create_client(self, context):
LOG.debug(
"Baremetal introspection action security context: %s" % context)
inspector_endpoint = keystone_utils.get_endpoint_for_project(
service_type='baremetal-introspection'
)
return self._get_client_class()(
api_version=1,
inspector_url=inspector_endpoint.url,
auth_token=context.auth_token,
)
class SwiftAction(base.OpenStackAction):
@classmethod
def _get_client_class(cls):
return swift_client.Connection
def _create_client(self, context):
LOG.debug("Swift action security context: %s" % context)
swift_endpoint = keystone_utils.get_endpoint_for_project('swift')
kwargs = {
'preauthurl': swift_endpoint.url % {
'tenant_id': context.project_id
},
'preauthtoken': context.auth_token,
'insecure': context.insecure
}
return self._get_client_class()(**kwargs)
class ZaqarAction(base.OpenStackAction):
@classmethod
def _get_client_class(cls):
return zaqarclient.Client
def _create_client(self, context):
LOG.debug("Zaqar action security context: %s" % context)
zaqar_endpoint = keystone_utils.get_endpoint_for_project(
service_type='messaging')
keystone_endpoint = keystone_utils.get_keystone_endpoint_v2()
opts = {
'os_auth_token': context.auth_token,
'os_auth_url': keystone_endpoint.url,
'os_project_id': context.project_id,
'insecure': context.insecure,
}
auth_opts = {'backend': 'keystone', 'options': opts}
conf = {'auth_opts': auth_opts}
return self._get_client_class()(zaqar_endpoint.url, conf=conf)
@classmethod
def _get_fake_client(cls):
return cls._get_client_class()("")
@classmethod
def _get_client_method(cls, client):
method = getattr(cls, cls.client_method_name)
# We can't use partial as it's not supported by getargspec
@functools.wraps(method)
def wrap(*args, **kwargs):
return method(client, *args, **kwargs)
arguments = inspect_utils.get_arg_list_as_str(method)
# Remove client
wrap.__arguments__ = arguments.split(', ', 1)[1]
return wrap
@staticmethod
def queue_messages(client, queue_name, **params):
"""Gets a list of messages from the queue.
:param client: the Zaqar client
:type client: zaqarclient.queues.client
:param queue_name: Name of the target queue.
:type queue_name: `six.string_type`
:param params: Filters to use for getting messages.
:type params: **kwargs dict
:returns: List of messages.
:rtype: `list`
"""
queue = client.queue(queue_name)
return queue.messages(**params)
@staticmethod
def queue_post(client, queue_name, messages):
"""Posts one or more messages to a queue.
:param client: the Zaqar client
:type client: zaqarclient.queues.client
:param queue_name: Name of the target queue.
:type queue_name: `six.string_type`
:param messages: One or more messages to post.
:type messages: `list` or `dict`
:returns: A dict with the result of this operation.
:rtype: `dict`
"""
queue = client.queue(queue_name)
return queue.post(messages)
@staticmethod
def queue_pop(client, queue_name, count=1):
"""Pop `count` messages from the queue.
:param client: the Zaqar client
:type client: zaqarclient.queues.client
:param queue_name: Name of the target queue.
:type queue_name: `six.string_type`
:param count: Number of messages to pop.
:type count: int
:returns: List of messages.
:rtype: `list`
"""
queue = client.queue(queue_name)
return queue.pop(count)
class BarbicanAction(base.OpenStackAction):
@classmethod
def _get_client_class(cls):
return barbicanclient.Client
def _create_client(self, context):
LOG.debug("Barbican action security context: %s" % context)
barbican_endpoint = keystone_utils.get_endpoint_for_project('barbican')
keystone_endpoint = keystone_utils.get_keystone_endpoint_v2()
auth = identity.v2.Token(
auth_url=keystone_endpoint.url,
tenant_name=context.user_name,
token=context.auth_token,
tenant_id=context.project_id
)
return self._get_client_class()(
project_id=context.project_id,
endpoint=barbican_endpoint.url,
auth=auth,
insecure=context.insecure
)
@classmethod
def _get_fake_client(cls):
return cls._get_client_class()(
project_id="1",
endpoint="http://127.0.0.1:9311"
)
@classmethod
def _get_client_method(cls, client):
if cls.client_method_name != "secrets_store":
return super(BarbicanAction, cls)._get_client_method(client)
method = getattr(cls, cls.client_method_name)
@functools.wraps(method)
def wrap(*args, **kwargs):
return method(client, *args, **kwargs)
arguments = inspect_utils.get_arg_list_as_str(method)
# Remove client.
wrap.__arguments__ = arguments.split(', ', 1)[1]
return wrap
@staticmethod
def secrets_store(client,
name=None,
payload=None,
algorithm=None,
bit_length=None,
secret_type=None,
mode=None, expiration=None):
"""Create and Store a secret in Barbican.
:param client: the Barbican client
:type client: barbicanclient.client
:param name: A friendly name for the Secret
:type name: string
:param payload: The unencrypted secret data
:type payload: string
:param algorithm: The algorithm associated with this secret key
:type algorithm: string
:param bit_length: The bit length of this secret key
:type bit_length: int
:param secret_type: The secret type for this secret key
:type secret_type: string
:param mode: The algorithm mode used with this secret key
:type mode: string
:param expiration: The expiration time of the secret in ISO 8601 format
:type expiration: string
:returns: A new Secret object
:rtype: class:`barbicanclient.secrets.Secret'
"""
entity = client.secrets.create(
name,
payload,
algorithm,
bit_length,
secret_type,
mode,
expiration
)
entity.store()
return entity._get_formatted_entity()
class DesignateAction(base.OpenStackAction):
_service_type = 'dns'
@classmethod
def _get_client_class(cls):
return designateclient.Client
def _create_client(self, context):
LOG.debug("Designate action security context: %s" % context)
designate_endpoint = self.get_service_endpoint()
designate_url = keystone_utils.format_url(
designate_endpoint.url,
{'tenant_id': context.project_id}
)
client = self._get_client_class()(
endpoint=designate_url,
tenant_id=context.project_id,
auth_url=context.auth_uri,
region_name=designate_endpoint.region,
service_type='dns',
insecure=context.insecure
)
client.client.auth_token = context.auth_token
client.client.management_url = designate_url
return client
@classmethod
def _get_fake_client(cls):
return cls._get_client_class()()
class MagnumAction(base.OpenStackAction):
@classmethod
def _get_client_class(cls):
return magnumclient.Client
def _create_client(self, context):
LOG.debug("Magnum action security context: %s" % context)
keystone_endpoint = keystone_utils.get_keystone_endpoint_v2()
auth_url = keystone_endpoint.url
magnum_url = keystone_utils.get_endpoint_for_project('magnum').url
return self._get_client_class()(
magnum_url=magnum_url,
auth_token=context.auth_token,
project_id=context.project_id,
user_id=context.user_id,
auth_url=auth_url,
insecure=context.insecure
)
@classmethod
def _get_fake_client(cls):
return cls._get_client_class()(auth_url='X', magnum_url='X')
class MuranoAction(base.OpenStackAction):
_service_name = 'murano'
@classmethod
def _get_client_class(cls):
return muranoclient.Client
def _create_client(self, context):
LOG.debug("Murano action security context: %s" % context)
keystone_endpoint = keystone_utils.get_keystone_endpoint_v2()
murano_endpoint = self.get_service_endpoint()
return self._get_client_class()(
endpoint=murano_endpoint.url,
token=context.auth_token,
tenant=context.project_id,
region_name=murano_endpoint.region,
auth_url=keystone_endpoint.url,
insecure=context.insecure
)
@classmethod
def _get_fake_client(cls):
return cls._get_client_class()("http://127.0.0.1:8082/")
class TackerAction(base.OpenStackAction):
_service_name = 'tacker'
@classmethod
def _get_client_class(cls):
return tackerclient.Client
def _create_client(self, context):
LOG.debug("Tacker action security context: %s" % context)
keystone_endpoint = keystone_utils.get_keystone_endpoint_v2()
tacker_endpoint = self.get_service_endpoint()
return self._get_client_class()(
endpoint_url=tacker_endpoint.url,
token=context.auth_token,
tenant_id=context.project_id,
region_name=tacker_endpoint.region,
auth_url=keystone_endpoint.url,
insecure=context.insecure
)
@classmethod
def _get_fake_client(cls):
return cls._get_client_class()()
class SenlinAction(base.OpenStackAction):
_service_name = 'senlin'
@classmethod
def _get_client_class(cls):
return senlinclient.Client
def _create_client(self, context):
LOG.debug("Senlin action security context: %s" % context)
keystone_endpoint = keystone_utils.get_keystone_endpoint_v2()
senlin_endpoint = self.get_service_endpoint()
return self._get_client_class()(
endpoint_url=senlin_endpoint.url,
token=context.auth_token,
tenant_id=context.project_id,
region_name=senlin_endpoint.region,
auth_url=keystone_endpoint.url,
insecure=context.insecure
)
@classmethod
def _get_fake_client(cls):
return cls._get_client_class()("http://127.0.0.1:8778")
class AodhAction(base.OpenStackAction):
_service_type = 'alarming'
@classmethod
def _get_client_class(cls):
return aodhclient.Client
def _create_client(self, context):
LOG.debug("Aodh action security context: %s" % context)
aodh_endpoint = self.get_service_endpoint()
endpoint_url = keystone_utils.format_url(
aodh_endpoint.url,
{'tenant_id': context.project_id}
)
return self._get_client_class()(
endpoint_url,
region_name=aodh_endpoint.region,
token=context.auth_token,
username=context.user_name,
insecure=context.insecure
)
@classmethod
def _get_fake_client(cls):
return cls._get_client_class()()
class GnocchiAction(base.OpenStackAction):
_service_type = 'metric'
@classmethod
def _get_client_class(cls):
return gnocchiclient.Client
def _create_client(self, context):
LOG.debug("Gnocchi action security context: %s" % context)
gnocchi_endpoint = self.get_service_endpoint()
endpoint_url = keystone_utils.format_url(
gnocchi_endpoint.url,
{'tenant_id': context.project_id}
)
return self._get_client_class()(
endpoint_url,
region_name=gnocchi_endpoint.region,
token=context.auth_token,
username=context.user_name
)
@classmethod
def _get_fake_client(cls):
return cls._get_client_class()()

View File

@ -1,136 +0,0 @@
# Copyright 2014 - Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import abc
import inspect
import traceback
from oslo_log import log
from mistral import exceptions as exc
from mistral.utils.openstack import keystone as keystone_utils
from mistral_lib import actions
LOG = log.getLogger(__name__)
class OpenStackAction(actions.Action):
"""OpenStack Action.
OpenStack Action is the basis of all OpenStack-specific actions,
which are constructed via OpenStack Action generators.
"""
_kwargs_for_run = {}
client_method_name = None
_service_name = None
_service_type = None
_client_class = None
def __init__(self, **kwargs):
self._kwargs_for_run = kwargs
self.action_region = self._kwargs_for_run.pop('action_region', None)
@abc.abstractmethod
def _create_client(self, context):
"""Creates client required for action operation."""
return None
@classmethod
def _get_client_class(cls):
return cls._client_class
@classmethod
def _get_client_method(cls, client):
hierarchy_list = cls.client_method_name.split('.')
attribute = client
for attr in hierarchy_list:
attribute = getattr(attribute, attr)
return attribute
@classmethod
def _get_fake_client(cls):
"""Returns python-client instance which initiated via wrong args.
It is needed for getting client-method args and description for
saving into DB.
"""
# Default is simple _get_client_class instance
return cls._get_client_class()()
@classmethod
def get_fake_client_method(cls):
return cls._get_client_method(cls._get_fake_client())
def _get_client(self, context):
"""Returns python-client instance via cache or creation
Gets client instance according to specific OpenStack Service
(e.g. Nova, Glance, Heat, Keystone etc)
"""
return self._create_client(context)
def get_session_and_auth(self, context):
"""Get keystone session and auth parameters.
:param context: the action context
:return: dict that can be used to initialize service clients
"""
return keystone_utils.get_session_and_auth(
service_name=self._service_name,
service_type=self._service_type,
region_name=self.action_region,
context=context)
def get_service_endpoint(self):
"""Get OpenStack service endpoint.
'service_name' and 'service_type' are defined in specific OpenStack
service action.
"""
endpoint = keystone_utils.get_endpoint_for_project(
service_name=self._service_name,
service_type=self._service_type,
region_name=self.action_region
)
return endpoint
def run(self, context):
try:
method = self._get_client_method(self._get_client(context))
result = method(**self._kwargs_for_run)
if inspect.isgenerator(result):
return [v for v in result]
return result
except Exception as e:
# Print the traceback for the last exception so that we can see
# where the issue comes from.
LOG.warning(traceback.format_exc())
raise exc.ActionException(
"%s.%s failed: %s" %
(self.__class__.__name__, self.client_method_name, str(e))
)
def test(self, context):
return dict(
zip(self._kwargs_for_run, ['test'] * len(self._kwargs_for_run))
)
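# Illustrative wiring sketch (not part of the original module): a concrete
# OpenStack action class defines a _create_client() and carries a
# 'client_method_name' (normally attached dynamically by the action
# generators). The values below are hypothetical examples:
#
#     class ExampleComputeAction(OpenStackAction):
#         _service_type = 'compute'
#         client_method_name = 'servers.list'
#
#         def _create_client(self, context):
#             # A real implementation returns an authenticated client,
#             # as the concrete classes in actions.py do.
#             raise NotImplementedError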

File diff suppressed because it is too large Load Diff

View File

@ -1,503 +0,0 @@
# Copyright 2014 - Mirantis, Inc.
# Copyright 2014 - StackStorm, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from email import header
from email.mime import text
import json
import requests
import six
import smtplib
import time
from mistral import exceptions as exc
from mistral.utils import javascript
from mistral.utils import ssh_utils
from mistral_lib import actions
from oslo_log import log as logging
LOG = logging.getLogger(__name__)
class EchoAction(actions.Action):
"""Echo action.
This action just returns a configured value as a result without doing
anything else. The value of such an action implementation is that it
can be used in development (for testing), demonstration and designing
of workflows themselves, where the echo action can play the role of a
temporary stub.
"""
def __init__(self, output):
self.output = output
def run(self, context):
LOG.info('Running echo action [output=%s]' % self.output)
return self.output
def test(self, context):
return 'Echo'
class NoOpAction(actions.Action):
"""No-operation action.
This action does nothing. It can be mostly useful for testing and
debugging purposes.
"""
def __init__(self):
pass
def run(self, context):
LOG.info('Running no-op action')
return None
def test(self, context):
return None
class AsyncNoOpAction(NoOpAction):
"""Asynchronous no-operation action."""
def is_sync(self):
return False
class FailAction(actions.Action):
"""'Always fail' action.
This action just always throws an instance of ActionException.
This behavior is useful in a number of cases, especially when we need to
test a scenario in which some of the workflow tasks fail.
"""
def __init__(self):
pass
def run(self, context):
LOG.info('Running fail action.')
raise exc.ActionException('Fail action expected exception.')
def test(self, context):
raise exc.ActionException('Fail action expected exception.')
class HTTPAction(actions.Action):
"""Constructs an HTTP action.
:param url: URL for the new HTTP request.
:param method: (optional, 'GET' by default) method for the new HTTP
request.
:param params: (optional) Dictionary or bytes to be sent in the
query string for the HTTP request.
:param body: (optional) Dictionary, bytes, or file-like object to send
in the body of the HTTP request.
:param headers: (optional) Dictionary of HTTP Headers to send with
the HTTP request.
:param cookies: (optional) Dict or CookieJar object to send with
the HTTP request.
:param auth: (optional) Auth tuple to enable Basic/Digest/Custom
HTTP Auth.
:param timeout: (optional) Float describing the timeout of the request
in seconds.
:param allow_redirects: (optional) Boolean. Set to True if POST/PUT/DELETE
redirect following is allowed.
:param proxies: (optional) Dictionary mapping protocol to the URL of
the proxy.
:param verify: (optional) if ``True``, the SSL cert will be verified.
A CA_BUNDLE path can also be provided.
"""
def __init__(self,
url,
method="GET",
params=None,
body=None,
headers=None,
cookies=None,
auth=None,
timeout=None,
allow_redirects=None,
proxies=None,
verify=None):
if auth and len(auth.split(':')) == 2:
self.auth = (auth.split(':')[0], auth.split(':')[1])
else:
self.auth = auth
if isinstance(headers, dict):
for key, val in headers.items():
if isinstance(val, (six.integer_types, float)):
headers[key] = str(val)
self.url = url
self.method = method
self.params = params
self.body = json.dumps(body) if isinstance(body, dict) else body
self.headers = headers
self.cookies = cookies
self.timeout = timeout
self.allow_redirects = allow_redirects
self.proxies = proxies
self.verify = verify
def run(self, context):
LOG.info("Running HTTP action "
"[url=%s, method=%s, params=%s, body=%s, headers=%s,"
" cookies=%s, auth=%s, timeout=%s, allow_redirects=%s,"
" proxies=%s, verify=%s]" %
(self.url,
self.method,
self.params,
self.body,
self.headers,
self.cookies,
self.auth,
self.timeout,
self.allow_redirects,
self.proxies,
self.verify))
try:
resp = requests.request(
self.method,
self.url,
params=self.params,
data=self.body,
headers=self.headers,
cookies=self.cookies,
auth=self.auth,
timeout=self.timeout,
allow_redirects=self.allow_redirects,
proxies=self.proxies,
verify=self.verify
)
except Exception as e:
raise exc.ActionException("Failed to send HTTP request: %s" % e)
LOG.info(
"HTTP action response:\n%s\n%s" % (resp.status_code, resp.content)
)
# TODO(akuznetsova): Need to refactor Mistral serialiser and
# deserializer to have an ability to pass needed encoding and work
# with it. Now it can process only default 'utf-8' encoding.
# Appropriate bug #1676411 was created.
# Represent important resp data as a dictionary.
try:
content = resp.json(encoding=resp.encoding)
except Exception as e:
LOG.debug("HTTP action response is not json.")
content = resp.content
if content and resp.encoding != 'utf-8':
content = content.decode(resp.encoding).encode('utf-8')
_result = {
'content': content,
'status': resp.status_code,
'headers': dict(resp.headers.items()),
'url': resp.url,
'history': resp.history,
'encoding': resp.encoding,
'reason': resp.reason,
'cookies': dict(resp.cookies.items()),
'elapsed': resp.elapsed.total_seconds()
}
if resp.status_code not in range(200, 307):
return actions.Result(error=_result)
return _result
def test(self, context):
# TODO(rakhmerov): Implement.
return None
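# Usage sketch (illustrative only; the URL is a placeholder): for a status
# code in the 200-306 range run() above returns a plain dict with 'content',
# 'status', 'headers', etc., otherwise the same dict is wrapped in an error
# Result.
http = HTTPAction(
    url='http://example.com/api/items',
    method='GET',
    timeout=5,
)
# http.run(None) performs the real request, e.g.
# {'status': 200, 'content': ..., 'headers': {...}, ...}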
class MistralHTTPAction(HTTPAction):
def __init__(self,
action_context,
url,
method="GET",
params=None,
body=None,
headers=None,
cookies=None,
auth=None,
timeout=None,
allow_redirects=None,
proxies=None,
verify=None):
actx = action_context
headers = headers or {}
headers.update({
'Mistral-Workflow-Name': actx.get('workflow_name'),
'Mistral-Workflow-Execution-Id': actx.get('workflow_execution_id'),
'Mistral-Task-Id': actx.get('task_id'),
'Mistral-Action-Execution-Id': actx.get('action_execution_id'),
'Mistral-Callback-URL': actx.get('callback_url'),
})
super(MistralHTTPAction, self).__init__(
url,
method,
params,
body,
headers,
cookies,
auth,
timeout,
allow_redirects,
proxies,
verify,
)
def is_sync(self):
return False
def test(self, context):
return None
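# Illustration (values invented): given an action context like the one
# below, __init__ above forwards it to the callee as HTTP headers
# (Mistral-Workflow-Name, Mistral-Task-Id, Mistral-Callback-URL, ...) so
# the remote service can call back into Mistral when it finishes.
example_action_context = {
    'workflow_name': 'my_wf',
    'workflow_execution_id': '<execution-uuid>',
    'task_id': '<task-uuid>',
    'action_execution_id': '<action-execution-uuid>',
    'callback_url': '/v2/action_executions/<action-execution-uuid>',
}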
class SendEmailAction(actions.Action):
def __init__(self, from_addr, to_addrs, smtp_server,
smtp_password=None, subject=None, body=None):
# TODO(dzimine): validate parameters
# Task invocation parameters.
self.to = to_addrs
self.subject = subject or "<No subject>"
self.body = body or "<No body>"
# Action provider settings.
self.smtp_server = smtp_server
self.sender = from_addr
self.password = smtp_password
def run(self, context):
LOG.info("Sending email message "
"[from=%s, to=%s, subject=%s, using smtp=%s, body=%s...]" %
(self.sender, self.to, self.subject,
self.smtp_server, self.body[:128]))
message = text.MIMEText(self.body, _charset='utf-8')
message['Subject'] = header.Header(self.subject, 'utf-8')
message['From'] = self.sender
message['To'] = ', '.join(self.to)
try:
s = smtplib.SMTP(self.smtp_server)
if self.password is not None:
# Sequence to request TLS connection and log in (RFC-2487).
s.ehlo()
s.starttls()
s.ehlo()
s.login(self.sender, self.password)
s.sendmail(from_addr=self.sender,
to_addrs=self.to,
msg=message.as_string())
except (smtplib.SMTPException, IOError) as e:
raise exc.ActionException("Failed to send an email message: %s"
% e)
def test(self, context):
# Just logging the operation since this action is not supposed
# to return a result.
LOG.info("Sending email message "
"[from=%s, to=%s, subject=%s, using smtp=%s, body=%s...]" %
(self.sender, self.to, self.subject,
self.smtp_server, self.body[:128]))
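# Usage sketch (addresses and SMTP host are placeholders): when
# smtp_password is given, run() above upgrades the connection with STARTTLS
# and logs in before sending.
email = SendEmailAction(
    from_addr='noreply@example.com',
    to_addrs=['ops@example.com'],
    smtp_server='localhost',
    subject='Workflow finished',
    body='All tasks succeeded.',
)
# email.run(None) would perform the actual SMTP delivery.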
class SSHAction(actions.Action):
"""Runs Secure Shell (SSH) command on provided single or multiple hosts.
Either a single host or a list of hosts can be provided in the action
parameter 'host'. For a single host the action result is a single value;
otherwise it is a list of results in the same order as the provided
hosts.
"""
@property
def _execute_cmd_method(self):
return ssh_utils.execute_command
def __init__(self, cmd, host, username,
password=None, private_key_filename=None):
self.cmd = cmd
self.host = host
self.username = username
self.password = password
self.private_key_filename = private_key_filename
self.params = {
'cmd': self.cmd,
'host': self.host,
'username': self.username,
'password': self.password,
'private_key_filename': self.private_key_filename
}
def run(self, context):
def raise_exc(parent_exc=None):
message = ("Failed to execute ssh cmd "
"'%s' on %s" % (self.cmd, self.host))
if parent_exc:
message += "\nException: %s" % str(parent_exc)
raise exc.ActionException(message)
try:
results = []
if not isinstance(self.host, list):
self.host = [self.host]
for host_name in self.host:
self.params['host'] = host_name
status_code, result = self._execute_cmd_method(**self.params)
if status_code > 0:
return raise_exc()
else:
results.append(result)
if len(results) > 1:
return results
return result
except Exception as e:
return raise_exc(parent_exc=e)
def test(self, context):
# TODO(rakhmerov): Implement.
return None
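# Usage sketch (hosts and credentials are placeholders): with a list of
# hosts run() above returns a list of results in the same order; with a
# single host it returns just that host's result.
ssh = SSHAction(
    cmd='uptime',
    host=['10.0.0.5', '10.0.0.6'],
    username='stack',
    private_key_filename='/home/stack/.ssh/id_rsa',
)
# ssh.run(None) -> [<output from 10.0.0.5>, <output from 10.0.0.6>]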
class SSHProxiedAction(SSHAction):
@property
def _execute_cmd_method(self):
return ssh_utils.execute_command_via_gateway
def __init__(self, cmd, host, username, private_key_filename,
gateway_host, gateway_username=None,
password=None, proxy_command=None):
super(SSHProxiedAction, self).__init__(
cmd,
host,
username,
password,
private_key_filename
)
self.gateway_host = gateway_host
self.gateway_username = gateway_username
self.params.update(
{
'gateway_host': gateway_host,
'gateway_username': gateway_username,
'proxy_command': proxy_command
}
)
class JavaScriptAction(actions.Action):
"""Evaluates given JavaScript.
"""
def __init__(self, script, context=None):
"""Context here refers to a javasctript context
Not the usual mistral context. That is passed during the run method
"""
self.script = script
self.js_context = context
def run(self, context):
try:
script = """function f() {
%s
}
f()
""" % self.script
return javascript.evaluate(script, self.js_context)
except Exception as e:
raise exc.ActionException("JavaScriptAction failed: %s" % str(e))
def test(self, context):
return self.script
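# Usage sketch (illustrative): the script is wrapped in a JavaScript
# function and its return value becomes the action result, provided a
# JavaScript engine is available to mistral.utils.javascript.evaluate().
js = JavaScriptAction(script='return 1 + 2;')
# js.run(None) -> 3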
class SleepAction(actions.Action):
"""Sleep action.
This action sleeps for given amount of seconds. It can be mostly useful
for testing and debugging purposes.
"""
def __init__(self, seconds=1):
try:
self._seconds = int(seconds)
self._seconds = 0 if self._seconds < 0 else self._seconds
except ValueError:
self._seconds = 0
def run(self, context):
LOG.info('Running sleep action [seconds=%s]' % self._seconds)
time.sleep(self._seconds)
return None
def test(self, context):
time.sleep(1)
return None
class TestDictAction(actions.Action):
"""Generates test dict."""
def __init__(self, size=0, key_prefix='', val=''):
self.size = size
self.key_prefix = key_prefix
self.val = val
def run(self, context):
LOG.info(
'Running test_dict action [size=%s, key_prefix=%s, val=%s]' %
(self.size, self.key_prefix, self.val)
)
res = {}
for i in range(self.size):
res['%s%s' % (self.key_prefix, i)] = self.val
return res
def test(self, context):
return {}
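# Illustration (not part of the original file): std.test_dict builds a
# dictionary with `size` keys, each named key_prefix + index and mapped to
# the same value.
test_dict = TestDictAction(size=3, key_prefix='k', val='v')
assert test_dict.run(None) == {'k0': 'v', 'k1': 'v', 'k2': 'v'}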

View File

@ -1,117 +0,0 @@
# Copyright 2013 - Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Access Control API server."""
from keystonemiddleware import auth_token
from oslo_config import cfg
from oslo_policy import policy
from mistral import exceptions as exc
_ENFORCER = None
def setup(app):
if cfg.CONF.pecan.auth_enable and cfg.CONF.auth_type == 'keystone':
conf = dict(cfg.CONF.keystone_authtoken)
# Change auth decisions of requests to the app itself.
conf.update({'delay_auth_decision': True})
# NOTE(rakhmerov): Policy enforcement works only if Keystone
# authentication is enabled. No support for other authentication
# types at this point.
_ensure_enforcer_initialization()
return auth_token.AuthProtocol(app, conf)
else:
return app
def enforce(action, context, target=None, do_raise=True,
exc=exc.NotAllowedException):
"""Verifies that the action is valid on the target in this context.
:param action: String, representing the action to be checked.
This should be colon separated for clarity.
i.e. ``workflows:create``
:param context: Mistral context.
:param target: Dictionary, representing the object of the action.
For object creation, this should be a dictionary
representing the location of the object.
e.g. ``{'project_id': context.project_id}``
:param do_raise: if True (the default), raises specified exception.
:param exc: Exception to be raised if not authorized. Default is
mistral.exceptions.NotAllowedException.
:return: returns True if authorized and False if not authorized and
do_raise is False.
"""
if cfg.CONF.auth_type != 'keystone':
# Policy enforcement is currently supported only with Keystone
# authentication.
return
target_obj = {
'project_id': context.project_id,
'user_id': context.user_id,
}
target_obj.update(target or {})
policy_context = context.to_policy_values()
# Because the policy.json example in the Mistral repo still uses the rule
# 'is_admin: True', we insert the 'is_admin' key into the default policy
# values.
policy_context['is_admin'] = context.is_admin
_ensure_enforcer_initialization()
return _ENFORCER.enforce(
action,
target_obj,
policy_context,
do_raise=do_raise,
exc=exc
)
def _ensure_enforcer_initialization():
global _ENFORCER
if not _ENFORCER:
_ENFORCER = policy.Enforcer(cfg.CONF)
_ENFORCER.load_rules()
def get_limited_to(headers):
"""Return the user and project the request should be limited to.
:param headers: HTTP headers dictionary
:return: A tuple of (user, project), set to None if there's no limit on
one of these.
"""
return headers.get('X-User-Id'), headers.get('X-Project-Id')
def get_limited_to_project(headers):
"""Return the project the request should be limited to.
:param headers: HTTP headers dictionary
:return: A project, or None if there's no limit on it.
"""
return get_limited_to(headers)[1]
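# Hypothetical usage sketch (not part of the original module) mirroring how
# the API controllers call enforce(); the policy action and target are
# examples, and the check is a no-op unless Keystone authentication is
# configured.
def _example_enforce_usage():
    from mistral import context
    ctx = context.ctx()
    return enforce('workflows:create', ctx,
                   target={'project_id': ctx.project_id})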

View File

@ -1,94 +0,0 @@
# Copyright 2013 - Mirantis, Inc.
# Copyright 2016 - Brocade Communications Systems, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_config import cfg
import oslo_middleware.cors as cors_middleware
import osprofiler.web
import pecan
from mistral.api import access_control
from mistral import config as m_config
from mistral import context as ctx
from mistral.db.v2 import api as db_api_v2
from mistral.rpc import base as rpc
from mistral.service import coordination
from mistral.services import periodic
def get_pecan_config():
# Set up the pecan configuration.
opts = cfg.CONF.pecan
cfg_dict = {
"app": {
"root": opts.root,
"modules": opts.modules,
"debug": opts.debug,
"auth_enable": opts.auth_enable
}
}
return pecan.configuration.conf_from_dict(cfg_dict)
def setup_app(config=None):
if not config:
config = get_pecan_config()
m_config.set_config_defaults()
app_conf = dict(config.app)
db_api_v2.setup_db()
if not app_conf.pop('disable_cron_trigger_thread', False):
periodic.setup()
coordination.Service('api_group').register_membership()
app = pecan.make_app(
app_conf.pop('root'),
hooks=lambda: [ctx.ContextHook(), ctx.AuthHook()],
logging=getattr(config, 'logging', {}),
**app_conf
)
# Set up access control.
app = access_control.setup(app)
# TODO(rakhmerov): need to get rid of this call.
# Set up RPC related flags in config
rpc.get_transport()
# Set up profiler.
if cfg.CONF.profiler.enabled:
app = osprofiler.web.WsgiMiddleware(
app,
hmac_keys=cfg.CONF.profiler.hmac_keys,
enabled=cfg.CONF.profiler.enabled
)
# Create a CORS wrapper, and attach mistral-specific defaults that must be
# included in all CORS responses.
return cors_middleware.CORS(app, cfg.CONF)
def init_wsgi():
# By default, oslo.config parses the CLI args if no args are provided.
# As a result, invoking this WSGI script from gunicorn leads to an error,
# with argparse complaining that the CLI options have already been parsed.
m_config.parse_args(args=[])
return setup_app()
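# Sketch (not part of the original file; the module path used on the
# command line is an assumption): a WSGI server such as gunicorn only needs
# a module-level 'application' object, which init_wsgi() above provides:
#
#     application = init_wsgi()
#
# The server can then be started with something like
#     gunicorn --workers 4 '<wsgi_module>:application'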

View File

@ -1,169 +0,0 @@
# Copyright 2013 - Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
from wsme import types as wtypes
from mistral import utils
class Resource(wtypes.Base):
"""REST API Resource."""
_wsme_attributes = []
def to_dict(self):
d = {}
for attr in self._wsme_attributes:
attr_val = getattr(self, attr.name)
if not isinstance(attr_val, wtypes.UnsetType):
d[attr.name] = attr_val
return d
@classmethod
def from_tuples(cls, tuple_iterator):
obj = cls()
for col_name, col_val in tuple_iterator:
if hasattr(obj, col_name):
# Convert all datetime values to strings.
setattr(obj, col_name, utils.datetime_to_str(col_val))
return obj
@classmethod
def from_dict(cls, d):
return cls.from_tuples(d.items())
@classmethod
def from_db_model(cls, db_model):
return cls.from_tuples(db_model.iter_columns())
def __str__(self):
"""WSME based implementation of __str__."""
res = "%s [" % type(self).__name__
first = True
for attr in self._wsme_attributes:
if not first:
res += ', '
else:
first = False
res += "%s='%s'" % (attr.name, getattr(self, attr.name))
return res + "]"
def to_json(self):
return json.dumps(self.to_dict())
@classmethod
def get_fields(cls):
obj = cls()
return [attr.name for attr in obj._wsme_attributes]
class ResourceList(Resource):
"""Resource containing the list of other resources."""
next = wtypes.text
"""A link to retrieve the next subset of the resource list"""
@property
def collection(self):
return getattr(self, self._type)
@classmethod
def convert_with_links(cls, resources, limit, url=None, fields=None,
**kwargs):
resource_list = cls()
setattr(resource_list, resource_list._type, resources)
resource_list.next = resource_list.get_next(
limit,
url=url,
fields=fields,
**kwargs
)
return resource_list
def has_next(self, limit):
"""Return whether resources has more items."""
return len(self.collection) and len(self.collection) == limit
def get_next(self, limit, url=None, fields=None, **kwargs):
"""Return a link to the next subset of the resources."""
if not self.has_next(limit):
return wtypes.Unset
q_args = ''.join(
['%s=%s&' % (key, value) for key, value in kwargs.items()]
)
resource_args = (
'?%(args)slimit=%(limit)d&marker=%(marker)s' %
{
'args': q_args,
'limit': limit,
'marker': self.collection[-1].id
}
)
# 'fields' is handled specially here; we can move it above once it is
# supported by all resource queries.
if fields:
resource_args += '&fields=%s' % fields
next_link = "%(host_url)s/v2/%(resource)s%(args)s" % {
'host_url': url,
'resource': self._type,
'args': resource_args
}
return next_link
def to_dict(self):
d = {}
for attr in self._wsme_attributes:
attr_val = getattr(self, attr.name)
if isinstance(attr_val, list):
if isinstance(attr_val[0], Resource):
d[attr.name] = [v.to_dict() for v in attr_val]
elif not isinstance(attr_val, wtypes.UnsetType):
d[attr.name] = attr_val
return d
class Link(Resource):
"""Web link."""
href = wtypes.text
target = wtypes.text
rel = wtypes.text
@classmethod
def sample(cls):
return cls(href='http://example.com/here',
target='here', rel='self')
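# Illustration (not in the original file): once a Resource subclass has been
# registered with WSME (which happens when it appears in a wsexpose
# signature), it can be round-tripped through plain dicts and JSON with the
# helpers above, e.g.
#
#     link = Link(href='http://example.com/here', target='here', rel='self')
#     link.to_dict()   # {'href': 'http://example.com/here',
#                      #  'target': 'here', 'rel': 'self'}
#     Link.from_dict(link.to_dict()).to_json()  # same JSON document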

View File

@ -1,78 +0,0 @@
# Copyright 2013 - Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_log import log as logging
import pecan
from wsme import types as wtypes
import wsmeext.pecan as wsme_pecan
from mistral.api.controllers import resource
from mistral.api.controllers.v2 import root as v2_root
LOG = logging.getLogger(__name__)
API_STATUS = wtypes.Enum(str, 'SUPPORTED', 'CURRENT', 'DEPRECATED')
class APIVersion(resource.Resource):
"""An API Version."""
id = wtypes.text
"The version identifier."
status = API_STATUS
"The status of the API (SUPPORTED, CURRENT or DEPRECATED)."
links = wtypes.ArrayType(resource.Link)
"The link to the versioned API."
@classmethod
def sample(cls):
return cls(
id='v1.0',
status='CURRENT',
links=[
resource.Link(target_name='v1', rel="self",
href='http://example.com:9777/v1')
]
)
class APIVersions(resource.Resource):
"""API Versions."""
versions = wtypes.ArrayType(APIVersion)
@classmethod
def sample(cls):
v2 = APIVersion(id='v2.0', status='CURRENT', rel="self",
href='http://example.com:9777/v2')
return cls(versions=[v2])
class RootController(object):
v2 = v2_root.Controller()
@wsme_pecan.wsexpose(APIVersions)
def index(self):
LOG.debug("Fetching API versions.")
host_url_v2 = '%s/%s' % (pecan.request.host_url, 'v2')
api_v2 = APIVersion(
id='v2.0',
status='CURRENT',
links=[resource.Link(href=host_url_v2, target='v2',
rel="self",)]
)
return APIVersions(versions=[api_v2])
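# Illustrative response shape for GET / as produced by index() above
# (host and port are placeholders):
#
# {
#     "versions": [
#         {
#             "id": "v2.0",
#             "status": "CURRENT",
#             "links": [
#                 {"href": "http://<host>:<port>/v2",
#                  "target": "v2",
#                  "rel": "self"}
#             ]
#         }
#     ]
# }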

View File

@ -1,222 +0,0 @@
# Copyright 2014 - Mirantis, Inc.
# Copyright 2015 Huawei Technologies Co., Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_log import log as logging
import pecan
from pecan import hooks
from pecan import rest
from wsme import types as wtypes
import wsmeext.pecan as wsme_pecan
from mistral.api import access_control as acl
from mistral.api.controllers.v2 import resources
from mistral.api.controllers.v2 import types
from mistral.api.controllers.v2 import validation
from mistral.api.hooks import content_type as ct_hook
from mistral import context
from mistral.db.v2 import api as db_api
from mistral import exceptions as exc
from mistral.lang import parser as spec_parser
from mistral.services import actions
from mistral.utils import filter_utils
from mistral.utils import rest_utils
LOG = logging.getLogger(__name__)
class ActionsController(rest.RestController, hooks.HookController):
# TODO(nmakhotkin): Have a discussion with pecan/WSME folks in order
# to have requests and response of different content types. Then
# delete ContentTypeHook.
__hooks__ = [ct_hook.ContentTypeHook("application/json", ['POST', 'PUT'])]
validate = validation.SpecValidationController(
spec_parser.get_action_list_spec_from_yaml)
@rest_utils.wrap_wsme_controller_exception
@wsme_pecan.wsexpose(resources.Action, wtypes.text)
def get(self, identifier):
"""Return the named action.
:param identifier: ID or name of the Action to get.
"""
acl.enforce('actions:get', context.ctx())
LOG.info("Fetch action [identifier=%s]", identifier)
db_model = db_api.get_action_definition(identifier)
return resources.Action.from_db_model(db_model)
@rest_utils.wrap_pecan_controller_exception
@pecan.expose(content_type="text/plain")
def put(self, identifier=None):
"""Update one or more actions.
:param identifier: Optional. If provided, it is the UUID or name of an
action. Only one action can be updated with the identifier param.
NOTE: The request text is allowed to have definitions
of multiple actions. In that case they will all be updated.
"""
acl.enforce('actions:update', context.ctx())
definition = pecan.request.text
LOG.info("Update action(s) [definition=%s]", definition)
scope = pecan.request.GET.get('scope', 'private')
if scope not in resources.SCOPE_TYPES.values:
raise exc.InvalidModelException(
"Scope must be one of the following: %s; actual: "
"%s" % (resources.SCOPE_TYPES.values, scope)
)
with db_api.transaction():
db_acts = actions.update_actions(
definition,
scope=scope,
identifier=identifier
)
action_list = [
resources.Action.from_db_model(db_act) for db_act in db_acts
]
return resources.Actions(actions=action_list).to_json()
@rest_utils.wrap_pecan_controller_exception
@pecan.expose(content_type="text/plain")
def post(self):
"""Create a new action.
NOTE: The request text is allowed to have definitions
of multiple actions. In that case they will all be created.
"""
acl.enforce('actions:create', context.ctx())
definition = pecan.request.text
scope = pecan.request.GET.get('scope', 'private')
pecan.response.status = 201
if scope not in resources.SCOPE_TYPES.values:
raise exc.InvalidModelException(
"Scope must be one of the following: %s; actual: "
"%s" % (resources.SCOPE_TYPES.values, scope)
)
LOG.info("Create action(s) [definition=%s]", definition)
with db_api.transaction():
db_acts = actions.create_actions(definition, scope=scope)
action_list = [
resources.Action.from_db_model(db_act) for db_act in db_acts
]
return resources.Actions(actions=action_list).to_json()
@rest_utils.wrap_wsme_controller_exception
@wsme_pecan.wsexpose(None, wtypes.text, status_code=204)
def delete(self, identifier):
"""Delete the named action.
:param identifier: Name or UUID of the action to delete.
"""
acl.enforce('actions:delete', context.ctx())
LOG.info("Delete action [identifier=%s]", identifier)
with db_api.transaction():
db_model = db_api.get_action_definition(identifier)
if db_model.is_system:
msg = "Attempt to delete a system action: %s" % identifier
raise exc.DataAccessException(msg)
db_api.delete_action_definition(identifier)
@rest_utils.wrap_wsme_controller_exception
@wsme_pecan.wsexpose(resources.Actions, types.uuid, int, types.uniquelist,
types.list, types.uniquelist, wtypes.text,
wtypes.text, resources.SCOPE_TYPES, wtypes.text,
wtypes.text, wtypes.text, wtypes.text, wtypes.text,
wtypes.text)
def get_all(self, marker=None, limit=None, sort_keys='name',
sort_dirs='asc', fields='', created_at=None, name=None,
scope=None, tags=None, updated_at=None,
description=None, definition=None, is_system=None, input=None):
"""Return all actions.
:param marker: Optional. Pagination marker for large data sets.
:param limit: Optional. Maximum number of resources to return in a
single result. Default value is None for backward
compatibility.
:param sort_keys: Optional. Columns to sort results by.
Default: name.
:param sort_dirs: Optional. Directions to sort corresponding to
sort_keys, "asc" or "desc" can be chosen.
Default: asc.
:param fields: Optional. A specified list of fields of the resource to
be returned. 'id' will be included automatically in
fields if it's provided, since it will be used when
constructing 'next' link.
:param name: Optional. Keep only resources with a specific name.
:param scope: Optional. Keep only resources with a specific scope.
:param definition: Optional. Keep only resources with a specific
definition.
:param is_system: Optional. Keep only system actions or ad-hoc
actions (if False).
:param input: Optional. Keep only resources with a specific input.
:param description: Optional. Keep only resources with a specific
description.
:param tags: Optional. Keep only resources containing specific tags.
:param created_at: Optional. Keep only resources created at a specific
time and date.
:param updated_at: Optional. Keep only resources with specific latest
update time and date.
"""
acl.enforce('actions:list', context.ctx())
filters = filter_utils.create_filters_from_request_params(
created_at=created_at,
name=name,
scope=scope,
tags=tags,
updated_at=updated_at,
description=description,
definition=definition,
is_system=is_system,
input=input
)
LOG.info("Fetch actions. marker=%s, limit=%s, sort_keys=%s, "
"sort_dirs=%s, filters=%s", marker, limit, sort_keys,
sort_dirs, filters)
return rest_utils.get_all(
resources.Actions,
resources.Action,
db_api.get_action_definitions,
db_api.get_action_definition_by_id,
marker=marker,
limit=limit,
sort_keys=sort_keys,
sort_dirs=sort_dirs,
fields=fields,
**filters
)
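# Hypothetical client-side sketch (not part of the original module) of the
# listing endpoint implemented by get_all() above; host, token and filter
# values are placeholders, and 8989 assumes the default API port.
def _example_list_actions(token):
    import requests
    resp = requests.get(
        'http://<mistral-host>:8989/v2/actions',
        params={'limit': 10, 'sort_keys': 'name', 'sort_dirs': 'asc'},
        headers={'X-Auth-Token': token},
    )
    # The payload looks like {'actions': [...], 'next': '<next page link>'}
    return resp.json()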

View File

@ -1,426 +0,0 @@
# Copyright 2015 - Mirantis, Inc.
# Copyright 2016 - Brocade Communications Systems, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_config import cfg
from oslo_log import log as logging
from pecan import rest
from wsme import types as wtypes
import wsmeext.pecan as wsme_pecan
from mistral.api import access_control as acl
from mistral.api.controllers.v2 import resources
from mistral.api.controllers.v2 import types
from mistral import context
from mistral.db.v2 import api as db_api
from mistral import exceptions as exc
from mistral.rpc import clients as rpc
from mistral.utils import filter_utils
from mistral.utils import rest_utils
from mistral.workflow import states
from mistral_lib import actions as ml_actions
LOG = logging.getLogger(__name__)
def _load_deferred_output_field(action_ex):
# We need to refer to this lazy-load field explicitly in
# order to make sure that it is correctly loaded.
hasattr(action_ex, 'output')
def _get_action_execution(id):
with db_api.transaction():
return _get_action_execution_resource(db_api.get_action_execution(id))
def _get_action_execution_resource(action_ex):
_load_deferred_output_field(action_ex)
return _get_action_execution_resource_for_list(action_ex)
def _get_action_execution_resource_for_list(action_ex):
# TODO(nmakhotkin): Get rid of using dicts for constructing resources.
# TODO(nmakhotkin): Use db_model for this instead.
res = resources.ActionExecution.from_db_model(action_ex)
task_name = (action_ex.task_execution.name
if action_ex.task_execution else None)
setattr(res, 'task_name', task_name)
return res
def _get_action_executions(task_execution_id=None, marker=None, limit=None,
sort_keys='created_at', sort_dirs='asc',
fields='', include_output=False, **filters):
"""Return all action executions.
Returns resources whose project_id is the same as the requester's, or
whose project_id is different but whose scope is public.
:param marker: Optional. Pagination marker for large data sets.
:param limit: Optional. Maximum number of resources to return in a
single result. Default value is None for backward
compatibility.
:param sort_keys: Optional. Columns to sort results by.
Default: created_at, which is backward compatible.
:param sort_dirs: Optional. Directions to sort corresponding to
sort_keys, "asc" or "desc" can be chosen.
Default: asc. The length of sort_dirs can be equal
or less than that of sort_keys.
:param fields: Optional. A specified list of fields of the resource to
be returned. 'id' will be included automatically in
fields if it's provided, since it will be used when
constructing 'next' link.
:param filters: Optional. A list of filters to apply to the result.
"""
if task_execution_id:
filters['task_execution_id'] = task_execution_id
if include_output:
resource_function = _get_action_execution_resource
else:
resource_function = _get_action_execution_resource_for_list
return rest_utils.get_all(
resources.ActionExecutions,
resources.ActionExecution,
db_api.get_action_executions,
db_api.get_action_execution,
resource_function=resource_function,
marker=marker,
limit=limit,
sort_keys=sort_keys,
sort_dirs=sort_dirs,
fields=fields,
**filters
)
class ActionExecutionsController(rest.RestController):
@rest_utils.wrap_wsme_controller_exception
@wsme_pecan.wsexpose(resources.ActionExecution, wtypes.text)
def get(self, id):
"""Return the specified action_execution.
:param id: UUID of action execution to retrieve
"""
acl.enforce('action_executions:get', context.ctx())
LOG.info("Fetch action_execution [id=%s]", id)
return _get_action_execution(id)
@rest_utils.wrap_wsme_controller_exception
@wsme_pecan.wsexpose(resources.ActionExecution,
body=resources.ActionExecution, status_code=201)
def post(self, action_ex):
"""Create new action_execution.
:param action_ex: Action to execute
"""
acl.enforce('action_executions:create', context.ctx())
LOG.info("Create action_execution [action_execution=%s]", action_ex)
name = action_ex.name
description = action_ex.description or None
action_input = action_ex.input or {}
params = action_ex.params or {}
if not name:
raise exc.InputException(
"Please provide at least action name to run action."
)
values = rpc.get_engine_client().start_action(
name,
action_input,
description=description,
**params
)
return resources.ActionExecution.from_dict(values)
@rest_utils.wrap_wsme_controller_exception
@wsme_pecan.wsexpose(
resources.ActionExecution,
wtypes.text,
body=resources.ActionExecution
)
def put(self, id, action_ex):
"""Update the specified action_execution.
:param id: UUID of action execution to update
:param action_ex: Action execution for update
"""
acl.enforce('action_executions:update', context.ctx())
LOG.info(
"Update action_execution [id=%s, action_execution=%s]"
% (id, action_ex)
)
output = action_ex.output
if action_ex.state == states.SUCCESS:
result = ml_actions.Result(data=output)
elif action_ex.state == states.ERROR:
if not output:
output = 'Unknown error'
result = ml_actions.Result(error=output)
elif action_ex.state == states.CANCELLED:
result = ml_actions.Result(cancel=True)
else:
raise exc.InvalidResultException(
"Error. Expected one of %s, actual: %s" % (
[states.SUCCESS, states.ERROR, states.CANCELLED],
action_ex.state
)
)
values = rpc.get_engine_client().on_action_complete(id, result)
return resources.ActionExecution.from_dict(values)
@rest_utils.wrap_wsme_controller_exception
@wsme_pecan.wsexpose(resources.ActionExecutions, types.uuid, int,
types.uniquelist, types.list, types.uniquelist,
wtypes.text, wtypes.text, wtypes.text,
wtypes.text, wtypes.text, wtypes.text, types.uuid,
wtypes.text, wtypes.text, bool, types.jsontype,
types.jsontype, types.jsontype, wtypes.text, bool)
def get_all(self, marker=None, limit=None, sort_keys='created_at',
sort_dirs='asc', fields='', created_at=None, name=None,
tags=None, updated_at=None, workflow_name=None,
task_name=None, task_execution_id=None, state=None,
state_info=None, accepted=None, input=None, output=None,
params=None, description=None, include_output=False):
"""Return all tasks within the execution.
Where project_id is the same as the requester or
project_id is different but the scope is public.
:param marker: Optional. Pagination marker for large data sets.
:param limit: Optional. Maximum number of resources to return in a
single result. Default value is None for backward
compatibility.
:param sort_keys: Optional. Columns to sort results by.
Default: created_at, which is backward compatible.
:param sort_dirs: Optional. Directions to sort corresponding to
sort_keys, "asc" or "desc" can be chosen.
Default: asc. The length of sort_dirs can be equal
or less than that of sort_keys.
:param fields: Optional. A specified list of fields of the resource to
be returned. 'id' will be included automatically in
fields if it's provided, since it will be used when
constructing 'next' link.
:param name: Optional. Keep only resources with a specific name.
:param workflow_name: Optional. Keep only resources with a specific
workflow name.
:param task_name: Optional. Keep only resources with a specific
task name.
:param task_execution_id: Optional. Keep only resources within a
specific task execution.
:param state: Optional. Keep only resources with a specific state.
:param state_info: Optional. Keep only resources with specific state
information.
:param accepted: Optional. Keep only resources that have (or have not)
been accepted, depending on the value.
:param input: Optional. Keep only resources with a specific input.
:param output: Optional. Keep only resources with a specific output.
:param params: Optional. Keep only resources with specific parameters.
:param description: Optional. Keep only resources with a specific
description.
:param tags: Optional. Keep only resources containing specific tags.
:param created_at: Optional. Keep only resources created at a specific
time and date.
:param updated_at: Optional. Keep only resources with specific latest
update time and date.
:param include_output: Optional. Include the output for all executions
in the list
"""
acl.enforce('action_executions:list', context.ctx())
filters = filter_utils.create_filters_from_request_params(
created_at=created_at,
name=name,
tags=tags,
updated_at=updated_at,
workflow_name=workflow_name,
task_name=task_name,
task_execution_id=task_execution_id,
state=state,
state_info=state_info,
accepted=accepted,
input=input,
output=output,
params=params,
description=description
)
LOG.info("Fetch action_executions. marker=%s, limit=%s, "
"sort_keys=%s, sort_dirs=%s, filters=%s",
marker, limit, sort_keys, sort_dirs, filters)
return _get_action_executions(
marker=marker,
limit=limit,
sort_keys=sort_keys,
sort_dirs=sort_dirs,
fields=fields,
include_output=include_output,
**filters
)
@rest_utils.wrap_wsme_controller_exception
@wsme_pecan.wsexpose(None, wtypes.text, status_code=204)
def delete(self, id):
"""Delete the specified action_execution.
:param id: UUID of action execution to delete
"""
acl.enforce('action_executions:delete', context.ctx())
LOG.info("Delete action_execution [id=%s]", id)
if not cfg.CONF.api.allow_action_execution_deletion:
raise exc.NotAllowedException("Action execution deletion is not "
"allowed.")
with db_api.transaction():
action_ex = db_api.get_action_execution(id)
if action_ex.task_execution_id:
raise exc.NotAllowedException(
"Only ad-hoc action execution can be deleted."
)
if not states.is_completed(action_ex.state):
raise exc.NotAllowedException(
"Only completed action execution can be deleted."
)
return db_api.delete_action_execution(id)
class TasksActionExecutionController(rest.RestController):
@rest_utils.wrap_wsme_controller_exception
@wsme_pecan.wsexpose(resources.ActionExecutions, types.uuid, types.uuid,
int, types.uniquelist, types.list, types.uniquelist,
wtypes.text, types.uniquelist, wtypes.text,
wtypes.text, wtypes.text, wtypes.text, wtypes.text,
wtypes.text, bool, types.jsontype, types.jsontype,
types.jsontype, wtypes.text, bool)
def get_all(self, task_execution_id, marker=None, limit=None,
sort_keys='created_at', sort_dirs='asc', fields='',
created_at=None, name=None, tags=None,
updated_at=None, workflow_name=None, task_name=None,
state=None, state_info=None, accepted=None, input=None,
output=None, params=None, description=None,
include_output=None):
"""Return all tasks within the execution.
Where project_id is the same as the requester or
project_id is different but the scope is public.
:param task_execution_id: Keep only resources within a specific task
execution.
:param marker: Optional. Pagination marker for large data sets.
:param limit: Optional. Maximum number of resources to return in a
single result. Default value is None for backward
compatibility.
:param sort_keys: Optional. Columns to sort results by.
Default: created_at, which is backward compatible.
:param sort_dirs: Optional. Directions to sort corresponding to
sort_keys, "asc" or "desc" can be chosen.
Default: asc. The length of sort_dirs can be equal
or less than that of sort_keys.
:param fields: Optional. A specified list of fields of the resource to
be returned. 'id' will be included automatically in
fields if it's provided, since it will be used when
constructing 'next' link.
:param name: Optional. Keep only resources with a specific name.
:param workflow_name: Optional. Keep only resources with a specific
workflow name.
:param task_name: Optional. Keep only resources with a specific
task name.
:param state: Optional. Keep only resources with a specific state.
:param state_info: Optional. Keep only resources with specific state
information.
:param accepted: Optional. Keep only resources that have (or have not)
been accepted, depending on the value.
:param input: Optional. Keep only resources with a specific input.
:param output: Optional. Keep only resources with a specific output.
:param params: Optional. Keep only resources with specific parameters.
:param description: Optional. Keep only resources with a specific
description.
:param tags: Optional. Keep only resources containing specific tags.
:param created_at: Optional. Keep only resources created at a specific
time and date.
:param updated_at: Optional. Keep only resources with specific latest
update time and date.
:param include_output: Optional. Include the output for all executions
in the list
"""
acl.enforce('action_executions:list', context.ctx())
filters = filter_utils.create_filters_from_request_params(
created_at=created_at,
name=name,
tags=tags,
updated_at=updated_at,
workflow_name=workflow_name,
task_name=task_name,
task_execution_id=task_execution_id,
state=state,
state_info=state_info,
accepted=accepted,
input=input,
output=output,
params=params,
description=description
)
LOG.info("Fetch action_executions. marker=%s, limit=%s, "
"sort_keys=%s, sort_dirs=%s, filters=%s",
marker, limit, sort_keys, sort_dirs, filters)
return _get_action_executions(
marker=marker,
limit=limit,
sort_keys=sort_keys,
sort_dirs=sort_dirs,
fields=fields,
include_output=include_output,
**filters
)
@rest_utils.wrap_wsme_controller_exception
@wsme_pecan.wsexpose(resources.ActionExecution, wtypes.text, wtypes.text)
def get(self, task_execution_id, action_ex_id):
"""Return the specified action_execution.
:param task_execution_id: Task execution UUID
:param action_ex_id: Action execution UUID
"""
acl.enforce('action_executions:get', context.ctx())
LOG.info("Fetch action_execution [id=%s]", action_ex_id)
return _get_action_execution(action_ex_id)
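# Hypothetical client-side sketch (not part of the original module) for the
# ad-hoc creation endpoint implemented by ActionExecutionsController.post()
# above; host and token are placeholders, 8989 assumes the default API port
# and the body format is an assumption.
def _example_run_adhoc_action(token):
    import requests
    return requests.post(
        'http://<mistral-host>:8989/v2/action_executions',
        json={
            'name': 'std.echo',
            'input': {'output': 'Hello!'},
            'params': {'save_result': True},
        },
        headers={'X-Auth-Token': token},
    )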

View File

@ -1,176 +0,0 @@
# Copyright 2014 - Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_log import log as logging
from pecan import rest
from wsme import types as wtypes
import wsmeext.pecan as wsme_pecan
from mistral.api import access_control as acl
from mistral.api.controllers.v2 import resources
from mistral.api.controllers.v2 import types
from mistral import context
from mistral.db.v2 import api as db_api
from mistral.services import triggers
from mistral.utils import filter_utils
from mistral.utils import rest_utils
LOG = logging.getLogger(__name__)
class CronTriggersController(rest.RestController):
@rest_utils.wrap_wsme_controller_exception
@wsme_pecan.wsexpose(resources.CronTrigger, wtypes.text)
def get(self, name):
"""Returns the named cron_trigger.
:param name: Name of cron trigger to retrieve
"""
acl.enforce('cron_triggers:get', context.ctx())
LOG.info('Fetch cron trigger [name=%s]' % name)
db_model = db_api.get_cron_trigger(name)
return resources.CronTrigger.from_db_model(db_model)
@rest_utils.wrap_wsme_controller_exception
@wsme_pecan.wsexpose(
resources.CronTrigger,
body=resources.CronTrigger,
status_code=201
)
def post(self, cron_trigger):
"""Creates a new cron trigger.
:param cron_trigger: Required. Cron trigger structure.
"""
acl.enforce('cron_triggers:create', context.ctx())
LOG.info('Create cron trigger: %s' % cron_trigger)
values = cron_trigger.to_dict()
db_model = triggers.create_cron_trigger(
values['name'],
values.get('workflow_name'),
values.get('workflow_input'),
values.get('workflow_params'),
values.get('pattern'),
values.get('first_execution_time'),
values.get('remaining_executions'),
workflow_id=values.get('workflow_id')
)
return resources.CronTrigger.from_db_model(db_model)
@rest_utils.wrap_wsme_controller_exception
@wsme_pecan.wsexpose(None, wtypes.text, status_code=204)
def delete(self, name):
"""Delete cron trigger.
:param name: Name of cron trigger to delete
"""
acl.enforce('cron_triggers:delete', context.ctx())
LOG.info("Delete cron trigger [name=%s]" % name)
triggers.delete_cron_trigger(name)
@rest_utils.wrap_wsme_controller_exception
@wsme_pecan.wsexpose(resources.CronTriggers, types.uuid, int,
types.uniquelist, types.list, types.uniquelist,
wtypes.text, wtypes.text, types.uuid, types.jsontype,
types.jsontype, resources.SCOPE_TYPES, wtypes.text,
wtypes.IntegerType(minimum=1), wtypes.text,
wtypes.text, wtypes.text, wtypes.text)
def get_all(self, marker=None, limit=None, sort_keys='created_at',
sort_dirs='asc', fields='', name=None, workflow_name=None,
workflow_id=None, workflow_input=None, workflow_params=None,
scope=None, pattern=None, remaining_executions=None,
first_execution_time=None, next_execution_time=None,
created_at=None, updated_at=None):
"""Return all cron triggers.
:param marker: Optional. Pagination marker for large data sets.
:param limit: Optional. Maximum number of resources to return in a
single result. Default value is None for backward
compatibility.
:param sort_keys: Optional. Columns to sort results by.
Default: created_at, which is backward compatible.
:param sort_dirs: Optional. Directions to sort corresponding to
sort_keys, "asc" or "desc" can be chosen.
Default: asc. The length of sort_dirs can be equal
or less than that of sort_keys.
:param fields: Optional. A specified list of fields of the resource to
be returned. 'id' will be included automatically in
fields if it's provided, since it will be used when
constructing 'next' link.
:param name: Optional. Keep only resources with a specific name.
:param workflow_name: Optional. Keep only resources with a specific
workflow name.
:param workflow_id: Optional. Keep only resources with a specific
workflow ID.
:param workflow_input: Optional. Keep only resources with a specific
workflow input.
:param workflow_params: Optional. Keep only resources with specific
workflow parameters.
:param scope: Optional. Keep only resources with a specific scope.
:param pattern: Optional. Keep only resources with a specific pattern.
:param remaining_executions: Optional. Keep only resources with a
specific number of remaining executions.
:param first_execution_time: Optional. Keep only resources with a
specific time and date of first execution.
:param next_execution_time: Optional. Keep only resources with a
specific time and date of next execution.
:param created_at: Optional. Keep only resources created at a specific
time and date.
:param updated_at: Optional. Keep only resources with specific latest
update time and date.
"""
acl.enforce('cron_triggers:list', context.ctx())
filters = filter_utils.create_filters_from_request_params(
created_at=created_at,
name=name,
updated_at=updated_at,
workflow_name=workflow_name,
workflow_id=workflow_id,
workflow_input=workflow_input,
workflow_params=workflow_params,
scope=scope,
pattern=pattern,
remaining_executions=remaining_executions,
first_execution_time=first_execution_time,
next_execution_time=next_execution_time
)
LOG.info(
"Fetch cron triggers. marker=%s, limit=%s, sort_keys=%s, "
"sort_dirs=%s, filters=%s",
marker, limit, sort_keys, sort_dirs, filters
)
return rest_utils.get_all(
resources.CronTriggers,
resources.CronTrigger,
db_api.get_cron_triggers,
db_api.get_cron_trigger,
marker=marker,
limit=limit,
sort_keys=sort_keys,
sort_dirs=sort_dirs,
fields=fields,
**filters
)
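# Hypothetical client-side sketch (not part of the original module) for the
# creation endpoint implemented by post() above; names, the cron pattern
# and the endpoint are placeholders, 8989 assumes the default API port.
def _example_create_cron_trigger(token):
    import requests
    return requests.post(
        'http://<mistral-host>:8989/v2/cron_triggers',
        json={
            'name': 'nightly_cleanup',
            'workflow_name': 'cleanup_wf',
            'pattern': '0 2 * * *',
            'workflow_input': {'older_than_days': 7},
        },
        headers={'X-Auth-Token': token},
    )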

View File

@ -1,191 +0,0 @@
# Copyright 2015 - StackStorm, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
from oslo_log import log as logging
from pecan import rest
from wsme import types as wtypes
import wsmeext.pecan as wsme_pecan
from mistral.api import access_control as acl
from mistral.api.controllers.v2 import resources
from mistral.api.controllers.v2 import types
from mistral import context
from mistral.db.v2 import api as db_api
from mistral import exceptions as exceptions
from mistral.utils import filter_utils
from mistral.utils import rest_utils
LOG = logging.getLogger(__name__)
class EnvironmentController(rest.RestController):
@rest_utils.wrap_wsme_controller_exception
@wsme_pecan.wsexpose(resources.Environments, types.uuid, int,
types.uniquelist, types.list, types.uniquelist,
wtypes.text, wtypes.text, types.jsontype,
resources.SCOPE_TYPES, wtypes.text, wtypes.text)
def get_all(self, marker=None, limit=None, sort_keys='created_at',
sort_dirs='asc', fields='', name=None, description=None,
variables=None, scope=None, created_at=None, updated_at=None):
"""Return all environments.
Returns resources whose project_id is the same as the requester's, or
whose project_id is different but whose scope is public.
:param marker: Optional. Pagination marker for large data sets.
:param limit: Optional. Maximum number of resources to return in a
single result. Default value is None for backward
compatibility.
:param sort_keys: Optional. Columns to sort results by.
Default: created_at, which is backward compatible.
:param sort_dirs: Optional. Directions to sort corresponding to
sort_keys, "asc" or "desc" can be chosen.
Default: asc. The length of sort_dirs can be equal
or less than that of sort_keys.
:param fields: Optional. A specified list of fields of the resource to
be returned. 'id' will be included automatically in
fields if it's provided, since it will be used when
constructing 'next' link.
:param name: Optional. Keep only resources with a specific name.
:param description: Optional. Keep only resources with a specific
description.
:param variables: Optional. Keep only resources with specific
variables.
:param scope: Optional. Keep only resources with a specific scope.
:param created_at: Optional. Keep only resources created at a specific
time and date.
:param updated_at: Optional. Keep only resources with specific latest
update time and date.
"""
acl.enforce('environments:list', context.ctx())
filters = filter_utils.create_filters_from_request_params(
created_at=created_at,
name=name,
updated_at=updated_at,
description=description,
variables=variables,
scope=scope
)
LOG.info("Fetch environments. marker=%s, limit=%s, sort_keys=%s, "
"sort_dirs=%s, filters=%s", marker, limit, sort_keys,
sort_dirs, filters)
return rest_utils.get_all(
resources.Environments,
resources.Environment,
db_api.get_environments,
db_api.get_environment,
marker=marker,
limit=limit,
sort_keys=sort_keys,
sort_dirs=sort_dirs,
fields=fields,
**filters
)
@rest_utils.wrap_wsme_controller_exception
@wsme_pecan.wsexpose(resources.Environment, wtypes.text)
def get(self, name):
"""Return the named environment.
:param name: Name of environment to retrieve
"""
acl.enforce('environments:get', context.ctx())
LOG.info("Fetch environment [name=%s]" % name)
db_model = db_api.get_environment(name)
return resources.Environment.from_db_model(db_model)
@rest_utils.wrap_wsme_controller_exception
@wsme_pecan.wsexpose(
resources.Environment,
body=resources.Environment,
status_code=201
)
def post(self, env):
"""Create a new environment.
:param env: Required. Environment structure to create
"""
acl.enforce('environments:create', context.ctx())
LOG.info("Create environment [env=%s]" % env)
self._validate_environment(
json.loads(wsme_pecan.pecan.request.body.decode()),
['name', 'description', 'variables']
)
db_model = db_api.create_environment(env.to_dict())
return resources.Environment.from_db_model(db_model)
@rest_utils.wrap_wsme_controller_exception
@wsme_pecan.wsexpose(resources.Environment, body=resources.Environment)
def put(self, env):
"""Update an environment.
:param env: Required. Environment structure to update
"""
acl.enforce('environments:update', context.ctx())
if not env.name:
raise exceptions.InputException(
'Name of the environment is not provided.'
)
LOG.info("Update environment [name=%s, env=%s]" % (env.name, env))
definition = json.loads(wsme_pecan.pecan.request.body.decode())
definition.pop('name')
self._validate_environment(
definition,
['description', 'variables', 'scope']
)
db_model = db_api.update_environment(env.name, env.to_dict())
return resources.Environment.from_db_model(db_model)
@rest_utils.wrap_wsme_controller_exception
@wsme_pecan.wsexpose(None, wtypes.text, status_code=204)
def delete(self, name):
"""Delete the named environment.
:param name: Name of environment to delete
"""
acl.enforce('environments:delete', context.ctx())
LOG.info("Delete environment [name=%s]" % name)
db_api.delete_environment(name)
@staticmethod
def _validate_environment(env_dict, legal_keys):
if env_dict is None:
return
if set(env_dict) - set(legal_keys):
raise exceptions.InputException(
"Please, check your environment definition. Only: "
"%s are allowed as definition keys." % legal_keys
)
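# Illustration (values invented): a creation request body that passes
# _validate_environment() above may only use the keys 'name', 'description'
# and 'variables'; any extra key raises InputException.
_example_valid_env = {
    'name': 'staging',
    'description': 'Connection settings for the staging region',
    'variables': {'nova_endpoint': 'http://staging.example.com:8774/v2'},
}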

View File

@ -1,152 +0,0 @@
# Copyright 2016 - IBM Corp.
# Copyright 2016 Catalyst IT Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_log import log as logging
from pecan import rest
import wsmeext.pecan as wsme_pecan
from mistral.api import access_control as acl
from mistral.api.controllers.v2 import resources
from mistral.api.controllers.v2 import types
from mistral import context as auth_ctx
from mistral.db.v2 import api as db_api
from mistral import exceptions as exc
from mistral.services import triggers
from mistral.utils import rest_utils
LOG = logging.getLogger(__name__)
UPDATE_NOT_ALLOWED = ['exchange', 'topic', 'event']
CREATE_MANDATORY = set(['exchange', 'topic', 'event', 'workflow_id'])
class EventTriggersController(rest.RestController):
@rest_utils.wrap_wsme_controller_exception
@wsme_pecan.wsexpose(resources.EventTrigger, types.uuid)
def get(self, id):
"""Returns the specified event_trigger."""
acl.enforce('event_triggers:get', auth_ctx.ctx())
LOG.info('Fetch event trigger [id=%s]', id)
db_model = db_api.get_event_trigger(id)
return resources.EventTrigger.from_db_model(db_model)
@rest_utils.wrap_wsme_controller_exception
@wsme_pecan.wsexpose(resources.EventTrigger, body=resources.EventTrigger,
status_code=201)
def post(self, event_trigger):
"""Creates a new event trigger."""
acl.enforce('event_triggers:create', auth_ctx.ctx())
values = event_trigger.to_dict()
input_keys = [k for k in values if values[k]]
if CREATE_MANDATORY - set(input_keys):
raise exc.EventTriggerException(
"Params %s must be provided for creating event trigger." %
CREATE_MANDATORY
)
LOG.info('Create event trigger: %s', values)
db_model = triggers.create_event_trigger(
values.get('name', ''),
values.get('exchange'),
values.get('topic'),
values.get('event'),
values.get('workflow_id'),
workflow_input=values.get('workflow_input'),
workflow_params=values.get('workflow_params'),
)
return resources.EventTrigger.from_db_model(db_model)
@rest_utils.wrap_wsme_controller_exception
@wsme_pecan.wsexpose(resources.EventTrigger, types.uuid,
body=resources.EventTrigger)
def put(self, id, event_trigger):
"""Updates an existing event trigger.
The exchange, topic and event can not be updated. The right way to
change them is to delete the event trigger first, then create a new
event trigger with new params.
"""
acl.enforce('event_triggers:update', auth_ctx.ctx())
values = event_trigger.to_dict()
for field in UPDATE_NOT_ALLOWED:
if values.get(field):
raise exc.EventTriggerException(
"Can not update fields %s of event trigger." %
UPDATE_NOT_ALLOWED
)
LOG.info('Update event trigger: [id=%s, values=%s]', id, values)
with db_api.transaction():
db_api.ensure_event_trigger_exists(id)
db_model = triggers.update_event_trigger(id, values)
return resources.EventTrigger.from_db_model(db_model)
@rest_utils.wrap_wsme_controller_exception
@wsme_pecan.wsexpose(None, types.uuid, status_code=204)
def delete(self, id):
"""Delete event trigger."""
acl.enforce('event_triggers:delete', auth_ctx.ctx())
LOG.info("Delete event trigger [id=%s]", id)
with db_api.transaction():
event_trigger = db_api.get_event_trigger(id)
            triggers.delete_event_trigger(event_trigger.to_dict())

@rest_utils.wrap_wsme_controller_exception
@wsme_pecan.wsexpose(resources.EventTriggers, types.uuid, int,
types.uniquelist, types.list, types.uniquelist,
bool, types.jsontype)
def get_all(self, marker=None, limit=None, sort_keys='created_at',
sort_dirs='asc', fields='', all_projects=False, **filters):
"""Return all event triggers."""
acl.enforce('event_triggers:list', auth_ctx.ctx())
if all_projects:
acl.enforce('event_triggers:list:all_projects', auth_ctx.ctx())
LOG.info(
"Fetch event triggers. marker=%s, limit=%s, sort_keys=%s, "
"sort_dirs=%s, fields=%s, all_projects=%s, filters=%s", marker,
limit, sort_keys, sort_dirs, fields, all_projects, filters
)
return rest_utils.get_all(
resources.EventTriggers,
resources.EventTrigger,
db_api.get_event_triggers,
db_api.get_event_trigger,
resource_function=None,
marker=marker,
limit=limit,
sort_keys=sort_keys,
sort_dirs=sort_dirs,
fields=fields,
all_projects=all_projects,
**filters
)
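
For reference, here is a minimal client-side sketch of how the endpoints above might be exercised. It assumes the controller is mounted at /v2/event_triggers on the usual Mistral API port and that a Keystone token is already available; the concrete exchange, topic, event and workflow values are placeholders, not values taken from this change.

# A hedged usage sketch; adjust the URL and token for a real deployment.
import requests

BASE_URL = 'http://localhost:8989/v2/event_triggers'  # assumed endpoint
HEADERS = {'X-Auth-Token': '<token>', 'Content-Type': 'application/json'}

# Create a trigger: every field in CREATE_MANDATORY must be present.
trigger = {
    'name': 'vm_created_trigger',
    'exchange': 'nova',
    'topic': 'notifications',
    'event': 'compute.instance.create.end',
    'workflow_id': '<workflow-uuid>',
}
resp = requests.post(BASE_URL, json=trigger, headers=HEADERS)
resp.raise_for_status()
trigger_id = resp.json()['id']

# exchange/topic/event are in UPDATE_NOT_ALLOWED, so a PUT may only touch
# other fields, e.g. the name.
requests.put('%s/%s' % (BASE_URL, trigger_id),
             json={'name': 'renamed_trigger'}, headers=HEADERS)

# Delete the trigger when it is no longer needed.
requests.delete('%s/%s' % (BASE_URL, trigger_id), headers=HEADERS)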

View File

@ -1,329 +0,0 @@
# Copyright 2013 - Mirantis, Inc.
# Copyright 2015 - StackStorm, Inc.
# Copyright 2015 Huawei Technologies Co., Ltd.
# Copyright 2016 - Brocade Communications Systems, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from oslo_log import log as logging
from pecan import rest
from wsme import types as wtypes
import wsmeext.pecan as wsme_pecan

from mistral.api import access_control as acl
from mistral.api.controllers.v2 import resources
from mistral.api.controllers.v2 import task
from mistral.api.controllers.v2 import types
from mistral import context
from mistral.db.v2 import api as db_api
from mistral import exceptions as exc
from mistral.rpc import clients as rpc
from mistral.services import workflows as wf_service
from mistral.utils import filter_utils
from mistral.utils import rest_utils
from mistral.workflow import states

LOG = logging.getLogger(__name__)

STATE_TYPES = wtypes.Enum(
    str,
    states.IDLE,
    states.RUNNING,
    states.SUCCESS,
    states.ERROR,
    states.PAUSED,
    states.CANCELLED
)


def _get_execution_resource(wf_ex):
    # We need to refer to this lazy-load field explicitly in
    # order to make sure that it is correctly loaded.
    hasattr(wf_ex, 'output')

    return resources.Execution.from_db_model(wf_ex)


# TODO(rakhmerov): Make sure to make all needed renaming on public API.
class ExecutionsController(rest.RestController):
    tasks = task.ExecutionTasksController()

@rest_utils.wrap_wsme_controller_exception
@wsme_pecan.wsexpose(resources.Execution, wtypes.text)
def get(self, id):
"""Return the specified Execution.
:param id: UUID of execution to retrieve.
"""
acl.enforce("executions:get", context.ctx())
LOG.info("Fetch execution [id=%s]", id)
with db_api.transaction():
wf_ex = db_api.get_workflow_execution(id)
# If a single object is requested we need to explicitly load
# 'output' attribute. We don't do this for collections to reduce
# amount of DB queries and network traffic.
hasattr(wf_ex, 'output')
            return resources.Execution.from_db_model(wf_ex)

@rest_utils.wrap_wsme_controller_exception
@wsme_pecan.wsexpose(
resources.Execution,
wtypes.text,
body=resources.Execution
)
def put(self, id, wf_ex):
"""Update the specified workflow execution.
:param id: UUID of execution to update.
:param wf_ex: Execution object.
"""
acl.enforce('executions:update', context.ctx())
        LOG.info('Update execution [id=%s, execution=%s]', id, wf_ex)
with db_api.transaction():
db_api.ensure_workflow_execution_exists(id)
delta = {}
if wf_ex.state:
delta['state'] = wf_ex.state
if wf_ex.description:
delta['description'] = wf_ex.description
if wf_ex.params and wf_ex.params.get('env'):
delta['env'] = wf_ex.params.get('env')
# Currently we can change only state, description, or env.
            if not delta:
                raise exc.InputException(
                    'At least one of the properties state, description '
                    'or env must be provided for update.'
                )
# Description cannot be updated together with state.
if delta.get('description') and delta.get('state'):
raise exc.InputException(
'The property description must be updated '
'separately from state.'
)
            # If the state is also being changed, env may only be updated
            # together with a transition to RUNNING (i.e. on resume).
if (delta.get('env') and
delta.get('state') and delta['state'] != states.RUNNING):
raise exc.InputException(
'The property env can only be updated when workflow '
'execution is not running or on resume from pause.'
)
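            # How each accepted property is applied below:
            #   description                      -> direct DB update
            #   env without a state change       -> update via the workflow service
            #   state=PAUSED                     -> pause the workflow through the engine
            #   state=RUNNING (optionally + env) -> resume the workflow through the engine
            #   state=SUCCESS/ERROR/CANCELLED    -> stop the workflow through the engine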
if delta.get('description'):
wf_ex = db_api.update_workflow_execution(
id,
{'description': delta['description']}
)
if not delta.get('state') and delta.get('env'):
wf_ex = db_api.get_workflow_execution(id)
wf_ex = wf_service.update_workflow_execution_env(
wf_ex,
delta.get('env')
)
if delta.get('state'):
if states.is_paused(delta.get('state')):
wf_ex = rpc.get_engine_client().pause_workflow(id)
elif delta.get('state') == states.RUNNING:
wf_ex = rpc.get_engine_client().resume_workflow(
id,
env=delta.get('env')
)
elif states.is_completed(delta.get('state')):
msg = wf_ex.state_info if wf_ex.state_info else None
wf_ex = rpc.get_engine_client().stop_workflow(
id,
delta.get('state'),
msg
)
else:
                    # Reject any other target state with an explicit message.
                    raise exc.InputException(
                        "Cannot change state to %s. Allowed states are: %s" % (
wf_ex.state,
', '.join([
states.RUNNING,
states.PAUSED,
states.SUCCESS,
states.ERROR,
states.CANCELLED
])
)
)
return resources.Execution.from_dict(
wf_ex if isinstance(wf_ex, dict) else wf_ex.to_dict()
)
@rest_utils.wrap_wsme_controller_exception
@wsme_pecan.wsexpose(
resources.Execution,
body=resources.Execution,
status_code=201
)
def post(self, wf_ex):
"""Create a new Execution.
:param wf_ex: Execution object with input content.
"""
acl.enforce('executions:create', context.ctx())
LOG.info("Create execution [execution=%s]", wf_ex)
engine = rpc.get_engine_client()
exec_dict = wf_ex.to_dict()
if not (exec_dict.get('workflow_id')
or exec_dict.get('workflow_name')):
raise exc.WorkflowException(
"Workflow ID or workflow name must be provided. Workflow ID is"
" recommended."
)
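        # Prefer workflow_id when it was supplied; fall back to workflow_name
        # only if the ID is absent from the request body.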
result = engine.start_workflow(
exec_dict.get('workflow_id', exec_dict.get('workflow_name')),
exec_dict.get('input'),
exec_dict.get('description', ''),
**exec_dict.get('params') or {}
)
        return resources.Execution.from_dict(result)

@rest_utils.wrap_wsme_controller_exception
@wsme_pecan.wsexpose(None, wtypes.text, status_code=204)
def delete(self, id):
"""Delete the specified Execution.
:param id: UUID of execution to delete.
"""
acl.enforce('executions:delete', context.ctx())
LOG.info("Delete execution [id=%s]", id)
        return db_api.delete_workflow_execution(id)

@rest_utils.wrap_wsme_controller_exception
@wsme_pecan.wsexpose(resources.Executions, types.uuid, int,
types.uniquelist, types.list, types.uniquelist,
wtypes.text, types.uuid, wtypes.text, types.jsontype,
types.uuid, STATE_TYPES, wtypes.text, types.jsontype,
types.jsontype, wtypes.text, wtypes.text, bool,
types.uuid, bool)
def get_all(self, marker=None, limit=None, sort_keys='created_at',
sort_dirs='asc', fields='', workflow_name=None,
workflow_id=None, description=None, params=None,
task_execution_id=None, state=None, state_info=None,
input=None, output=None, created_at=None, updated_at=None,
include_output=None, project_id=None, all_projects=False):
"""Return all Executions.
:param marker: Optional. Pagination marker for large data sets.
:param limit: Optional. Maximum number of resources to return in a
single result. Default value is None for backward
compatibility.
:param sort_keys: Optional. Columns to sort results by.
Default: created_at, which is backward compatible.
:param sort_dirs: Optional. Directions to sort corresponding to
sort_keys, "asc" or "desc" can be chosen.
                          Default: asc. The length of sort_dirs can be equal
or less than that of sort_keys.
:param fields: Optional. A specified list of fields of the resource to
be returned. 'id' will be included automatically in
fields if it's provided, since it will be used when
constructing 'next' link.
:param workflow_name: Optional. Keep only resources with a specific
workflow name.
:param workflow_id: Optional. Keep only resources with a specific
workflow ID.
:param description: Optional. Keep only resources with a specific
description.
:param params: Optional. Keep only resources with specific parameters.
:param task_execution_id: Optional. Keep only resources with a
specific task execution ID.
:param state: Optional. Keep only resources with a specific state.
:param state_info: Optional. Keep only resources with specific
state information.
:param input: Optional. Keep only resources with a specific input.
:param output: Optional. Keep only resources with a specific output.
:param created_at: Optional. Keep only resources created at a specific
time and date.
:param updated_at: Optional. Keep only resources with specific latest
update time and date.
:param include_output: Optional. Include the output for all executions
in the list.
        :param project_id: Optional. Only return executions belonging to the
                           given project. Admin required.
:param all_projects: Optional. Get resources of all projects. Admin
required.
"""
acl.enforce('executions:list', context.ctx())
if all_projects or project_id:
acl.enforce('executions:list:all_projects', context.ctx())
filters = filter_utils.create_filters_from_request_params(
created_at=created_at,
workflow_name=workflow_name,
workflow_id=workflow_id,
params=params,
task_execution_id=task_execution_id,
state=state,
state_info=state_info,
input=input,
output=output,
updated_at=updated_at,
description=description,
project_id=project_id
)
LOG.info(
"Fetch executions. marker=%s, limit=%s, sort_keys=%s, "
"sort_dirs=%s, filters=%s, all_projects=%s", marker, limit,
sort_keys, sort_dirs, filters, all_projects
)
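        # Loading 'output' requires an extra lazy-load query per execution,
        # so it is only done when the caller explicitly asks for it.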
if include_output:
resource_function = _get_execution_resource
else:
resource_function = None
return rest_utils.get_all(
resources.Executions,
resources.Execution,
db_api.get_workflow_executions,
db_api.get_workflow_execution,
resource_function=resource_function,
marker=marker,
limit=limit,
sort_keys=sort_keys,
sort_dirs=sort_dirs,
fields=fields,
all_projects=all_projects,
**filters
)
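
Likewise, a minimal sketch of driving the executions endpoints defined above, assuming the controller is mounted at /v2/executions and that a Keystone token is already at hand; every concrete value below is a placeholder rather than something taken from this change.

# A hedged usage sketch; adjust the URL and token for a real deployment.
import requests

BASE_URL = 'http://localhost:8989/v2/executions'  # assumed endpoint
HEADERS = {'X-Auth-Token': '<token>', 'Content-Type': 'application/json'}

# Start a workflow execution; workflow_id is preferred over workflow_name.
resp = requests.post(
    BASE_URL,
    json={'workflow_id': '<workflow-uuid>', 'input': {'vm_name': 'test'}},
    headers=HEADERS,
)
resp.raise_for_status()
ex_id = resp.json()['id']

# Pause, then resume with an updated environment. An env update combined
# with a state change is only accepted when the new state is RUNNING.
requests.put('%s/%s' % (BASE_URL, ex_id),
             json={'state': 'PAUSED'}, headers=HEADERS)
requests.put('%s/%s' % (BASE_URL, ex_id),
             json={'state': 'RUNNING', 'params': {'env': {'region': 'dev'}}},
             headers=HEADERS)

# List executions, asking for the lazily loaded output as well.
listing = requests.get(BASE_URL, params={'include_output': 'true'},
                       headers=HEADERS)
print(listing.json()['executions'])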

Some files were not shown because too many files have changed in this diff.