Retire Paunch

Change-Id: I8b2e9e20e477f2f00ad922f03e82114ff13212fe
Emilien Macchi 2020-06-05 15:24:51 -04:00
parent 324d5f67df
commit d0e81c22ca
80 changed files with 4 additions and 5693 deletions

@@ -1,6 +0,0 @@
[run]
branch = True
source = paunch
[report]
ignore_errors = True

.gitignore

@@ -1,60 +0,0 @@
*.py[cod]
# C extensions
*.so
# Packages
*.egg*
*.egg-info
dist
build
eggs
parts
bin
var
sdist
develop-eggs
.installed.cfg
lib
lib64
# Installer logs
pip-log.txt
# Unit test / coverage reports
cover/
.coverage*
!.coveragerc
.tox
nosetests.xml
.stestr
.venv
# Translations
*.mo
# Mr Developer
.mr.developer.cfg
.project
.pydevproject
# Complexity
output/*.html
output/*/index.html
# Sphinx
doc/build
# pbr generates these
AUTHORS
ChangeLog
RELEASENOTES.rst
# Editors
*~
.*.swp
.*sw?
# Files created by releasenotes build
releasenotes/build
releasenotes/notes/reno.cache

@@ -1,3 +0,0 @@
# Format is:
# <preferred e-mail> <other e-mail 1>
# <preferred e-mail> <other e-mail 2>

@@ -1,3 +0,0 @@
[DEFAULT]
test_path=${TEST_PATH:-./paunch/tests}
top_dir=./

@@ -1,17 +0,0 @@
If you would like to contribute to the development of OpenStack, you must
follow the steps on this page:
https://docs.openstack.org/infra/manual/developers.html
If you already have a good understanding of how the system works and your
OpenStack accounts are set up, you can skip to the development workflow
section of this documentation to learn how changes to OpenStack should be
submitted for review via the Gerrit tool:
https://docs.openstack.org/infra/manual/developers.html#development-workflow
Pull requests submitted through GitHub will be ignored.
Bugs should be filed on Launchpad, not GitHub:
https://bugs.launchpad.net/paunch

@@ -1,4 +0,0 @@
paunch Style Commandments
===============================================
Read the OpenStack Style Commandments https://docs.openstack.org/hacking/latest/

LICENSE

@@ -1,176 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

@@ -1,6 +0,0 @@
include AUTHORS
include ChangeLog
exclude .gitignore
exclude .gitreview
global-exclude *.pyc

@@ -1,295 +1,7 @@
======
paunch
======
This project is no longer maintained.
It was replaced by the "tripleo_container_manage" Ansible role in tripleo-ansible.
The contents of this repository are still available in the Git source code management system. To see the contents of this repository before it reached its end of life, please check out the previous commit with "git checkout HEAD^1".
Utility to launch and manage containers using YAML based configuration data
* Free software: Apache license
* Documentation: https://docs.openstack.org/developer/paunch
* Source: https://opendev.org/openstack/paunch
* Bugs: https://bugs.launchpad.net/paunch
* Release Notes: https://docs.openstack.org/releasenotes/paunch
Features
--------
* Single host only, operations are performed via the podman client.
* Zero external state, only labels on running containers are used when
determining which containers an operation will perform on.
* Single threaded and blocking, containers which are not configured to detach
will halt further configuration until they exit.
* Co-exists with other container configuration tools. Only containers created
by paunch will be modified by paunch. Unique container names are assigned if
the desired name is taken, and containers are renamed when the desired name
becomes available.
* Accessible via the ``paunch`` command line utility, or by importing the
python package ``paunch``.
* Builtin ``debug`` command lets you see how individual containers are run,
get configuration information for them, and run them any way you need to.
Running Paunch Commands
-----------------------
The only state that paunch is aware of is the labels that it sets on running
containers, so it is up to the user to keep track of what paunch configs
*should* be running so that others can be deleted on cleanup. For these
examples we're going to store that state in a simple text file:
::
$ touch paunch-state.txt
We'll start off by deleting any containers that were started by previous calls
to ``paunch apply``:
::
$ paunch --verbose cleanup $(cat paunch-state.txt)
Next we'll apply a simple hello-world config found in
``examples/hello-world.yml`` which contains the following:
::
hello:
image: hello-world
detach: false
Applied by running:
::
$ paunch --verbose apply --file examples/hello-world.yml --config-id hi
$ echo hi >> paunch-state.txt
A container called ``hello`` will be created, print a Hello World message, then
exit. You can confirm that it still exists by running ``podman ps -a``.
Now let's try running the exact same ``paunch apply`` command:
::
$ paunch --verbose apply --file examples/hello-world.yml --config-id hi
This will not make any changes at all due to the idempotency behaviour of
paunch.
Let's try again with a unique ``--config-id``:
::
$ paunch --verbose apply --file examples/hello-world.yml --config-id hi-again
$ echo hi-again >> paunch-state.txt
Doing a ``podman ps -a`` now will show that there are 2 containers, one
called ``hello`` and the other called ``hello-(random suffix)``. Let's delete
the one associated with the ``hi`` config-id:
::
$ cat paunch-state.txt
$ echo hi-again > paunch-state.txt
$ cat paunch-state.txt
$ paunch --verbose cleanup $(cat paunch-state.txt)
Doing a ``podman ps -a`` will show that the original ``hello`` container has been
deleted and ``hello-(random suffix)`` has been renamed to ``hello``.
Generally ``paunch cleanup`` will be run first to delete containers for configs
that are no longer applied. Then a series of ``paunch apply`` commands can be run.
If these ``apply`` calls are part of a live upgrade where a mixture of old and
new containers are left running, the upgrade can be completed in the next run
of ``paunch cleanup`` with the updated list of config-id state.
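The cleanup contract can be sketched as a tiny standalone function: any container whose ``config_id`` label is missing from the state list is a candidate for deletion. This is a simplified illustration, not paunch's actual implementation; the container records are stand-ins for ``podman inspect`` output:

```python
def cleanup_plan(containers, kept_config_ids):
    """Return names of containers to delete: those whose config_id
    label is not in the list passed to `paunch cleanup`."""
    return [c["name"] for c in containers
            if c["labels"].get("config_id") not in kept_config_ids]

running = [
    {"name": "hello", "labels": {"config_id": "hi"}},
    {"name": "hello-abc123", "labels": {"config_id": "hi-again"}},
]
# paunch-state.txt now only lists "hi-again":
print(cleanup_plan(running, ["hi-again"]))  # ['hello']
```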
Paunch can also be used as a library by other tools. By default, running the
``paunch`` command won't affect containers created by those tools, since a
different ``managed_by`` label is set on them. For example, if you wanted to
run paunch commands masquerading as the
`heat-agents <https://opendev.org/openstack/heat-agents/src/branch/master/>`_
`docker-cmd hook <https://opendev.org/openstack/heat-agents/src/branch/master/heat-config-docker-cmd>`_
then you can run:
::
paunch --verbose apply --file examples/hello-world.yml --config-id hi --managed-by docker-cmd
This will result in a ``hello`` container being run, which will be deleted the
next time the ``docker-cmd`` hook does its own ``cleanup`` run since it won't
be aware of a ``config_id`` called ``hi``.
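The ``managed_by`` scoping described above can be illustrated with a small standalone sketch. The simplified container records stand in for ``podman inspect`` output; this is not paunch's actual code:

```python
# Sketch: how a paunch-style tool scopes operations by the managed_by label.
containers = [
    {"name": "hello", "labels": {"managed_by": "paunch", "config_id": "hi"}},
    {"name": "web", "labels": {"managed_by": "docker-cmd", "config_id": "site"}},
    {"name": "db", "labels": {}},  # not created by any config tool
]

def owned_by(containers, managed_by):
    """Return only the containers this tool is allowed to modify."""
    return [c for c in containers
            if c["labels"].get("managed_by") == managed_by]

print([c["name"] for c in owned_by(containers, "paunch")])      # ['hello']
print([c["name"] for c in owned_by(containers, "docker-cmd")])  # ['web']
```

Because ``db`` carries no ``managed_by`` label, neither tool will touch it, which is how paunch co-exists with other container configuration tools.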
Idempotency Behaviour
---------------------
In many cases the user will want to use the same ``--config-id`` with changed
config data. The aim of the idempotency behaviour is to leave containers
running when their config has not changed, but replace containers which have
modified config.
When ``paunch apply`` is run with the same ``--config-id`` but modified config
data, the following logic is applied:
* For each existing container with a matching config_id and managed_by:
* delete containers which no longer exist in config
* delete containers with missing config_data label
* delete containers where config_data label differs from current config
* Do a full rename to desired names since deletes have occurred
* Only create containers from config if there is no container running with that name
* ``exec`` actions will be run regardless, so commands they run may require
their own idempotency behaviour
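The delete/create decision above can be sketched as a standalone function. This is a simplified illustration, not paunch's actual implementation, and comparing configs via a JSON-serialized ``config_data`` label is an assumption of this sketch:

```python
import json

def plan_apply(existing, config):
    """Decide which containers to delete and which to create,
    mirroring the idempotency rules above. `existing` maps container
    name to its config_data label (or None if the label is missing);
    `config` is the desired state for this config_id."""
    to_delete, to_create = [], []
    for name, labelled in existing.items():
        if name not in config:          # no longer exists in config
            to_delete.append(name)
        elif labelled is None:          # missing config_data label
            to_delete.append(name)
        elif labelled != json.dumps(config[name], sort_keys=True):
            to_delete.append(name)      # config_data label differs
    for name in config:
        # only create if no container is (still) running with that name
        if name not in existing or name in to_delete:
            to_create.append(name)
    return sorted(to_delete), sorted(to_create)

existing = {
    "hello": json.dumps({"image": "hello-world", "detach": False},
                        sort_keys=True),
    "old": json.dumps({"image": "busybox"}, sort_keys=True),
}
config = {"hello": {"image": "hello-world", "detach": False},
          "fresh": {"image": "alpine"}}
print(plan_apply(existing, config))  # (['old'], ['fresh'])
```

Note that ``hello`` is left untouched because its stored config matches the desired config exactly.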
Only configuration data is used to determine whether something has changed to
trigger replacing the container during ``apply``. This means that changing the
contents of a file referred to in ``env_file`` will *not* trigger replacement
unless something else changes in the configuration data (such as the path
specified in ``env_file``).
The most common reason to restart containers is to have them running with an
updated image. As such it is recommended that floating image tags such as
``latest`` are not used when specifying the ``image``; instead, change the
release version tag in the configuration data to propagate image changes to
the running containers.
Debugging with Paunch
---------------------
The ``paunch debug`` command allows you to perform specific actions on a given
container. This can be used to:
* Run a container with a specific configuration.
* Dump the configuration of a given container in either json or yaml.
* Output the podman command line used to start the container.
* Run a container with any configuration additions you wish such that you can
run it with a shell as any user etc.
The configuration options you will likely be interested in here include:
::
--file <file> YAML or JSON file containing configuration data
--action <name> Action can be one of: "dump-json", "dump-yaml",
"print-cmd", or "run"
--container <name> Name of the container you wish to manipulate
--interactive Run container in interactive mode - modifies config
and execution of container
--shell Similar to interactive but drops you into a shell
--user <name> Start container as the specified user
--overrides <name> JSON configuration information used to override
default config values
``file`` is the name of the configuration file to use
containing the configuration for the container you wish to use.
Here is an example of using ``paunch debug`` to start a root shell inside the
test container:
::
# paunch debug --file examples/hello-world.yml --interactive --shell --user root --container hello --action run
This will drop you into an interactive session inside the hello world
container, starting /bin/bash running as root.
To see how this container is started normally:
::
# paunch debug --file examples/hello-world.yml --container hello --action print-cmd
You can also dump the configuration of this container to a file so you can
edit it and rerun it with a different configuration. This is more useful
when there are multiple configurations in a single file:
::
# paunch debug --file examples/hello-world.yml --container hello --action dump-json > hello.json
You can then use ``hello.json`` as your ``--file`` argument after
editing it to your liking.
You can also add any configuration elements you wish on the command line
to test paunch or debug containers etc. In this example I'm running
the hello container with ``net=host``.
::
# paunch debug --file examples/hello-world.yml --overrides '{"net": "host"}' --container hello --action run
Configuration Format
--------------------
The current format is loosely based on a subset of the `docker-compose v1
format <https://docs.docker.com/compose/compose-file/compose-file-v1/>`_ with
modifications. The intention is for the format to evolve to faithfully
implement existing formats such as the
`Kubernetes Pod format <https://kubernetes.io/docs/concepts/workloads/pods/pod/>`_.
The top level of the YAML format is a dict where the keys (generally)
correspond to the name of the container to be created. The following config
creates 2 containers called ``hello1`` and ``hello2``:
::
hello1:
image: hello-world
hello2:
image: hello-world
Each value is a dict specifying the arguments that are used when the
container is launched. Supported keys which comply with the docker-compose v1
format are as follows:
command:
String or list. Overrides the default command.
detach:
Boolean, defaults to true. If true the container is run in the background. If
false then paunch will block until the container has exited.
environment:
List of the format ['KEY1=value1', 'KEY2=value2']. Sets environment variables
that are available to the process launched in the container.
env_file:
List of file paths containing line delimited environment variables.
image:
String, mandatory. Specify the image to start the container from. Can either
be a repository/tag or a partial image ID.
net:
String. Set the network mode for the container.
pid:
String. Set the PID mode for the container.
uts:
String. Set the UTS namespace for the container.
privileged:
Boolean, defaults to false. If true, give extended privileges to this container.
restart:
String. Restart policy to apply when a container exits.
remove:
Boolean. Remove the container after running.
interactive:
Boolean. Run the container in interactive mode.
tty:
Boolean. Allocate a tty to interact with the container.
user:
String. Sets the username or UID used and optionally the groupname or GID for
the specified command.
volumes:
List of strings. Specify the bind mounts for this container.
volumes_from:
List of strings. Mount volumes from the specified container(s).
log_tag:
String. Set the log tag for the specified container.
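As an illustration, a config dict can be sanity-checked against the keys above before handing it to paunch. This validation helper is hypothetical (paunch itself does not ship it); ``SUPPORTED_KEYS`` simply mirrors the list above:

```python
# Hypothetical pre-flight check for a docker-compose v1 style config dict.
SUPPORTED_KEYS = {
    "command", "detach", "environment", "env_file", "image", "net",
    "pid", "uts", "privileged", "restart", "remove", "interactive",
    "tty", "user", "volumes", "volumes_from", "log_tag",
}

config = {
    "web": {
        "image": "nginx:1.25.3",  # pinned release tag, not a floating one
        "net": "host",
        "environment": ["PORT=8080"],
        "volumes": ["/srv/www:/usr/share/nginx/html:ro"],
    }
}

for name, args in config.items():
    if "image" not in args:
        raise ValueError(f"{name}: 'image' is mandatory")
    unknown = set(args) - SUPPORTED_KEYS
    if unknown:
        raise ValueError(f"{name}: unsupported keys {sorted(unknown)}")
print("config OK")
```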
For any further questions, please email openstack-dev@lists.openstack.org or join #openstack-dev on Freenode.

@@ -1,2 +0,0 @@
[python: **.py]

@@ -1,4 +0,0 @@
# Even if we do not build doc with python 2, we have to
# push that content due to some global check.
sphinx>=2.0.0,!=2.1.0 # BSD
openstackdocstheme>=2.2.1 # Apache-2.0

@@ -1,78 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
sys.path.insert(0, os.path.abspath('../..'))
# -- General configuration ----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'sphinx.ext.autodoc',
'openstackdocstheme',
]
# autodoc generation is a bit aggressive and a nuisance when doing heavy
# text edit cycles.
# execute "export SPHINX_DEBUG=1" in your terminal to disable
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# openstackdocstheme options
openstackdocs_repo_name = 'openstack/paunch'
openstackdocs_bug_project = 'paunch'
openstackdocs_bug_tag = ''
# General information about the project.
copyright = u'2016, OpenStack Foundation'
# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'native'
# -- Options for HTML output --------------------------------------------------
# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
html_theme = 'openstackdocs'
# html_static_path = ['static']
# Output file base name for HTML help builder.
htmlhelp_basename = 'paunchdoc'
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
('index',
'paunch.tex',
u'paunch Documentation',
u'OpenStack Foundation', 'manual'),
]
# Example configuration for intersphinx: refer to the Python standard library.
#intersphinx_mapping = {'http://docs.python.org/': None}

@@ -1,4 +0,0 @@
============
Contributing
============
.. include:: ../../CONTRIBUTING.rst

@@ -1,25 +0,0 @@
.. paunch documentation master file, created by
sphinx-quickstart on Tue Jul 9 22:26:36 2013.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to paunch's documentation!
========================================================
Contents:
.. toctree::
:maxdepth: 2
readme
installation
usage
contributing
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

@@ -1,12 +0,0 @@
============
Installation
============
At the command line::
$ pip install paunch
Or, if you have virtualenvwrapper installed::
$ mkvirtualenv paunch
$ pip install paunch

@@ -1 +0,0 @@
.. include:: ../../README.rst

@@ -1,7 +0,0 @@
========
Usage
========
To use paunch in a project::
import paunch

@@ -1,4 +0,0 @@
hello:
image: hello-world
detach: false

@@ -1,259 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
'''Stable library interface to managing containers with paunch.'''
import json
import pbr.version
import yaml
from paunch.builder import compose1
from paunch.builder import podman
from paunch import runner
from paunch.utils import common
__version__ = pbr.version.VersionInfo('paunch').version_string()
def apply(config_id, config, managed_by, labels=None, cont_cmd='podman',
default_runtime=None, log_level=None, log_file=None,
cont_log_path=None, healthcheck_disabled=False, cleanup=True):
"""Execute supplied container configuration.
:param str config_id: Unique config ID, should not be re-used until any
running containers with that config ID have been
deleted.
:param dict config: Configuration data describing container actions to
apply.
:param str managed_by: Name of the tool managing the containers. Only
containers labelled with this will be modified.
:param dict labels: Optional keys/values of labels to apply to containers
created with this invocation.
:param str cont_cmd: Optional override to the container command to run.
:param str default_runtime: (deprecated) does nothing.
:param int log_level: optional log level for loggers
:param str log_file: optional log file for messages
:param str cont_log_path: optional log path for containers. Works only for
podman engine. Must be an absolute path.
:param bool healthcheck_disabled: optional boolean to disable container
healthcheck.
:param bool cleanup: optional boolean to delete containers missing in the
config.
:returns: (list, list, int) lists of stdout and stderr for each execution,
and a single return code representing the
overall success of the apply.
:rtype: tuple
"""
log = common.configure_logging(__name__, log_level, log_file)
if default_runtime:
log.warning("DEPRECATION: 'default_runtime' does nothing, "
"use 'cont_cmd' instead")
if cont_cmd == 'podman':
r = runner.PodmanRunner(managed_by, cont_cmd=cont_cmd, log=log)
builder = podman.PodmanBuilder(
config_id=config_id,
config=config,
runner=r,
labels=labels,
log=log,
cont_log_path=cont_log_path,
healthcheck_disabled=healthcheck_disabled,
cleanup=cleanup
)
else:
r = runner.DockerRunner(managed_by, cont_cmd=cont_cmd, log=log)
builder = compose1.ComposeV1Builder(
config_id=config_id,
config=config,
runner=r,
labels=labels,
log=log,
cleanup=cleanup
)
return builder.apply()
def cleanup(config_ids, managed_by, cont_cmd='podman', default_runtime=None,
log_level=None, log_file=None):
"""Delete containers no longer applied, rename others to preferred name.
:param list config_ids: List of config IDs still applied. All containers
managed by this tool will be deleted if their
config ID is not specified in this list.
:param str managed_by: Name of the tool managing the containers. Only
containers labelled with this will be modified.
:param str cont_cmd: Optional override to the container command to run.
:param str default_runtime: (deprecated) does nothing.
:param int log_level: optional log level for loggers
:param str log_file: optional log file for messages
"""
log = common.configure_logging(__name__, log_level, log_file)
if default_runtime:
log.warning("DEPRECATION: 'default_runtime' does nothing, "
"use 'cont_cmd' instead")
if cont_cmd == 'podman':
r = runner.PodmanRunner(managed_by, cont_cmd=cont_cmd, log=log)
else:
r = runner.DockerRunner(managed_by, cont_cmd=cont_cmd, log=log)
r.delete_missing_configs(config_ids)
r.rename_containers()
def list(managed_by, cont_cmd='podman', default_runtime=None,
log_level=None, log_file=None):
"""List all containers associated with all config IDs.
:param str managed_by: Name of the tool managing the containers. Only
containers labelled with this will be modified.
:param str cont_cmd: Optional override to the container command to run.
:param str default_runtime: (deprecated) does nothing.
:param int log_level: optional log level for loggers
:param str log_file: optional log file for messages
:returns: a dict where the key is the config ID and the value is a list of
'podman inspect' dicts for each container.
:rtype: defaultdict(list)
"""
log = common.configure_logging(__name__, log_level, log_file)
if default_runtime:
log.warning("DEPRECATION: 'default_runtime' does nothing, "
"use 'cont_cmd' instead")
if cont_cmd == 'podman':
r = runner.PodmanRunner(managed_by, cont_cmd=cont_cmd, log=log)
else:
r = runner.DockerRunner(managed_by, cont_cmd=cont_cmd, log=log)
return r.list_configs()
def debug(config_id, container_name, action, config, managed_by, labels=None,
cont_cmd='podman', default_runtime=None, log_level=None,
log_file=None):
"""Execute supplied container configuration.
:param str config_id: Unique config ID, should not be re-used until any
running containers with that config ID have been
deleted.
:param str container_name: Name of the container in the config you
wish to manipulate.
:param str action: Action to take.
:param dict config: Configuration data describing container actions to
apply.
:param str managed_by: Name of the tool managing the containers. Only
containers labeled with this will be modified.
:param dict labels: Optional keys/values of labels to apply to containers
created with this invocation.
:param str cont_cmd: Optional override to the container command to run.
:param str default_runtime: (deprecated) does nothing.
:param int log_level: optional log level for loggers
:param str log_file: optional log file for messages
:returns: integer return value from running command or failure for any
other reason.
:rtype: int
"""
log = common.configure_logging(__name__, log_level, log_file)
if default_runtime:
log.warning("DEPRECATION: 'default_runtime' does nothing, "
"use 'cont_cmd' instead")
if cont_cmd == 'podman':
r = runner.PodmanRunner(managed_by, cont_cmd=cont_cmd, log=log)
builder = podman.PodmanBuilder(
config_id=config_id,
config=config,
runner=r,
labels=labels,
log=log
)
else:
r = runner.DockerRunner(managed_by, cont_cmd=cont_cmd, log=log)
builder = compose1.ComposeV1Builder(
config_id=config_id,
config=config,
runner=r,
labels=labels,
log=log
)
if action == 'print-cmd':
uname = r.unique_container_name(container_name)
cmd = [
r.cont_cmd,
'run',
'--name',
uname
]
builder.label_arguments(cmd, container_name)
builder.container_run_args(cmd, container_name, uname)
if '--health-cmd' in cmd:
health_check_arg_index = cmd.index('--health-cmd') + 1
# The argument given needs to be quoted to work properly with a
# copy and paste of the full command.
try:
cmd[health_check_arg_index] = (
'"%s"' % cmd[health_check_arg_index])
except IndexError:
log.warning("No argument provided to --health-cmd.")
print(' '.join(cmd))
elif action == 'run':
uname = r.unique_container_name(container_name)
cmd = [
r.cont_cmd,
'run',
'--name',
uname
]
builder.label_arguments(cmd, container_name)
if builder.container_run_args(cmd, container_name, uname):
return r.execute_interactive(cmd, log)
elif action == 'dump-yaml':
print(yaml.safe_dump(config, default_flow_style=False))
elif action == 'dump-json':
print(json.dumps(config, indent=4))
else:
        raise ValueError('action should be one of: "dump-json", "dump-yaml", '
                         '"print-cmd", or "run"')
def delete(config_ids, managed_by, cont_cmd='podman', default_runtime=None,
log_level=None, log_file=None):
"""Delete containers with the specified config IDs.
:param list config_ids: List of config IDs to delete the containers for.
:param str managed_by: Name of the tool managing the containers. Only
containers labelled with this will be modified.
:param str cont_cmd: Optional override to the container command to run.
    :param str default_runtime: (deprecated) does nothing.
    :param int log_level: optional log level for loggers
    :param str log_file: optional log file for messages
    """
log = common.configure_logging(__name__, log_level, log_file)
if default_runtime:
log.warning("DEPRECATION: 'default_runtime' does nothing, "
"use 'cont_cmd' instead")
if not config_ids:
        log.warning('No config IDs specified')
if cont_cmd == 'podman':
r = runner.PodmanRunner(managed_by, cont_cmd=cont_cmd, log=log)
else:
r = runner.DockerRunner(managed_by, cont_cmd=cont_cmd, log=log)
for conf_id in config_ids:
r.remove_containers(conf_id)
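For reference, the config files consumed by the commands above map container names to container definitions; a minimal illustrative sketch (field names taken from the builders in this commit, values hypothetical):

```yaml
# Illustrative only: one entry per container; keys mirror the
# compose-v1/podman builder options (image, command, volumes, ...).
haproxy:
  start_order: 0
  image: docker.io/library/haproxy:latest
  command: ['haproxy', '-f', '/etc/haproxy/haproxy.cfg']
  net: host
  privileged: false
  restart: always
  volumes:
    - /etc/haproxy:/etc/haproxy:ro
  environment:
    KOLLA_CONFIG_STRATEGY: COPY_ALWAYS
  healthcheck:
    test: /bin/true
    interval: 30s
```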


@ -1,43 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""Utility to launch and manage containers using
YAML based configuration data"""

import sys

from cliff.app import App
from cliff.commandmanager import CommandManager

import paunch
class PaunchApp(App):
def __init__(self):
super(PaunchApp, self).__init__(
description=__doc__,
version=paunch.__version__,
command_manager=CommandManager('paunch'),
deferred_help=True,
)
def main(argv=sys.argv[1:]):
myapp = PaunchApp()
return myapp.run(argv)
if __name__ == '__main__':
sys.exit(main(sys.argv[1:]))


@ -1,463 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
import distutils.spawn
import itertools
import json
import os
import re
import shutil
import tenacity
import yaml
from paunch.utils import common
from paunch.utils import systemd
class BaseBuilder(object):
def __init__(self, config_id, config, runner, labels, log=None,
cont_log_path=None, healthcheck_disabled=False, cleanup=True):
self.config_id = config_id
self.config = config
self.labels = labels
self.runner = runner
# Leverage pre-configured logger
self.log = log or common.configure_logging(__name__)
self.cont_log_path = cont_log_path
self.healthcheck_disabled = healthcheck_disabled
self.cleanup = cleanup
if os.path.isfile('/var/lib/tripleo-config/.ansible-managed'):
msg = ('Containers were previously deployed with '
'tripleo-ansible, paunch CLI can not be used.')
raise RuntimeError(msg)
self.log.warning('Paunch is deprecated and has been replaced by '
'tripleo_container_manage role in tripleo-ansible.')
def apply(self):
stdout = []
stderr = []
pull_returncode = self.pull_missing_images(stdout, stderr)
if pull_returncode != 0:
return stdout, stderr, pull_returncode
deploy_status_code = 0
key_fltr = lambda k: self.config[k].get('start_order', 0)
failed_containers = []
container_names = self.runner.container_names(self.config_id)
# Cleanup containers missing from the config.
# Also applying new containers configs is an opportunity for
# renames to their preferred names.
changed = self.delete_missing(container_names)
changed |= self.runner.rename_containers()
if changed:
# If anything has been changed, refresh the container_names
container_names = self.runner.container_names(self.config_id)
desired_names = set([cn[-1] for cn in container_names])
for container in sorted(self.config, key=key_fltr):
            # Before creating the container, figure out if it needs to be
            # removed because its configuration has changed.
# If anything has been deleted, refresh the container_names/desired
if self.delete_updated(container, container_names):
container_names = self.runner.container_names(self.config_id)
desired_names = set([cn[-1] for cn in container_names])
self.log.debug("Running container: %s" % container)
cconfig = self.config[container]
action = cconfig.get('action', 'run')
restart = cconfig.get('restart', 'none')
exit_codes = cconfig.get('exit_codes', [0])
container_name = self.runner.unique_container_name(container)
systemd_managed = (restart != 'none'
and self.runner.cont_cmd == 'podman'
and action == 'run')
start_cmd = 'create' if systemd_managed else 'run'
# When upgrading from Docker to Podman, we want to stop the
# container that runs under Docker first before starting it with
# Podman. The container will be removed later in THT during
# upgrade_tasks.
if self.runner.cont_cmd == 'podman' and \
os.path.exists('/var/run/docker.sock'):
self.runner.stop_container(container, 'docker', quiet=True)
if action == 'run':
if container in desired_names:
self.log.debug('Skipping existing container: %s' %
container)
continue
c_name = self.runner.discover_container_name(
container, self.config_id) or container
cmd = [
self.runner.cont_cmd,
start_cmd,
'--name',
c_name
]
self.label_arguments(cmd, container)
self.log.debug("Start container {} as {}.".format(container,
c_name))
validations_passed = self.container_run_args(
cmd, container, c_name)
elif action == 'exec':
# for exec, the first argument is the fixed named container
# used when running the command into the running container.
                # use the discovered container name to manipulate the
                # real (delegate) container representing the fixed named one
command = self.command_argument(cconfig.get('command'))
if command:
c_name = self.runner.discover_container_name(
command[0], self.config_id)
else:
c_name = self.runner.discover_container_name(
container, self.config_id)
# Before running the exec, we want to make sure the container
# is running.
# https://bugs.launchpad.net/bugs/1839559
if not c_name or not self.runner.container_running(c_name):
                    msg = ('Failed to apply action exec for '
                           'container: %s' % container)
raise RuntimeError(msg)
cmd = [self.runner.cont_cmd, 'exec']
validations_passed = self.cont_exec_args(cmd,
container,
c_name)
if not validations_passed:
self.log.debug('Validations failed. Skipping container: %s' %
container)
failed_containers.append(container)
continue
(cmd_stdout, cmd_stderr, returncode) = self.runner.execute(
cmd, self.log)
if cmd_stdout:
stdout.append(cmd_stdout)
if cmd_stderr:
stderr.append(cmd_stderr)
if returncode not in exit_codes:
self.log.error("Error running %s. [%s]\n" % (cmd, returncode))
self.log.error("stdout: %s" % cmd_stdout)
self.log.error("stderr: %s" % cmd_stderr)
deploy_status_code = returncode
else:
self.log.debug('Completed $ %s' % ' '.join(cmd))
self.log.info("stdout: %s" % cmd_stdout)
self.log.info("stderr: %s" % cmd_stderr)
if systemd_managed:
systemd.service_create(container=container_name,
cconfig=cconfig,
log=self.log)
if (not self.healthcheck_disabled and
'healthcheck' in cconfig):
check = cconfig.get('healthcheck')['test']
systemd.healthcheck_create(container=container_name,
log=self.log, test=check)
systemd.healthcheck_timer_create(
container=container_name,
cconfig=cconfig,
log=self.log)
if failed_containers:
message = (
"The following containers failed validations "
"and were not started: {}".format(
', '.join(failed_containers)))
self.log.error(message)
# The message is also added to stderr so that it's returned and
# logged by the paunch module for ansible
stderr.append(message)
deploy_status_code = 1
return stdout, stderr, deploy_status_code
def delete_missing(self, container_names):
deleted = False
for cn in container_names:
container = cn[0]
# if the desired name is not in the config, delete it
if cn[-1] not in self.config:
if self.cleanup:
self.log.debug("Deleting container (removed): "
"%s" % container)
self.runner.remove_container(container)
deleted = True
else:
self.log.debug("Skipping container (cleanup disabled): "
"%s" % container)
return deleted
def delete_updated(self, container, container_names):
# If a container is not deployed, there is nothing to delete
if (container not in
list(itertools.chain.from_iterable(container_names))):
return False
ex_data_str = self.runner.inspect(
container, '{{index .Config.Labels "config_data"}}')
if not ex_data_str:
if self.cleanup:
self.log.debug("Deleting container (no_config_data): "
"%s" % container)
self.runner.remove_container(container)
return True
else:
self.log.debug("Skipping container (cleanup disabled): "
"%s" % container)
try:
ex_data = yaml.safe_load(str(ex_data_str))
except Exception:
ex_data = None
new_data = self.config[container]
if new_data != ex_data:
self.log.debug("Deleting container (changed config_data): %s"
% container)
self.runner.remove_container(container)
return True
return False
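The config-drift check in delete_updated above reduces to comparing the container's stored `config_data` label against the desired config. A stdlib-only sketch (the real code parses the label with yaml.safe_load, which also accepts the JSON that label_arguments writes; this sketch uses json instead):

```python
import json


def needs_redeploy(existing_label_str, new_config):
    """Decide whether a deployed container's recorded config differs
    from the desired one (sketch of delete_updated's comparison)."""
    if not existing_label_str:
        # no config_data label recorded: treat as changed
        return True
    try:
        existing = json.loads(existing_label_str)
    except ValueError:
        existing = None
    return new_config != existing
```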
def label_arguments(self, cmd, container):
if self.labels:
for i, v in self.labels.items():
cmd.extend(['--label', '%s=%s' % (i, v)])
cmd.extend([
'--label',
'config_id=%s' % self.config_id,
'--label',
'container_name=%s' % container,
'--label',
'managed_by=%s' % self.runner.managed_by,
'--label',
'config_data=%s' % json.dumps(self.config.get(container))
])
def boolean_arg(self, cconfig, cmd, key, arg):
if cconfig.get(key, False):
cmd.append(arg)
def string_arg(self, cconfig, cmd, key, arg, transform=None):
if key in cconfig:
if transform:
value = transform(cconfig[key])
else:
value = cconfig[key]
cmd.append('%s=%s' % (arg, value))
def list_or_string_arg(self, cconfig, cmd, key, arg):
if key not in cconfig:
return
value = cconfig[key]
if not isinstance(value, list):
value = [value]
for v in value:
if v:
cmd.append('%s=%s' % (arg, v))
def list_arg(self, cconfig, cmd, key, arg):
if key not in cconfig:
return
value = cconfig[key]
for v in value:
if v:
cmd.append('%s=%s' % (arg, v))
def list_or_dict_arg(self, cconfig, cmd, key, arg):
if key not in cconfig:
return
value = cconfig[key]
if isinstance(value, dict):
for k, v in sorted(value.items()):
if v:
cmd.append('%s=%s=%s' % (arg, k, v))
elif k:
cmd.append('%s=%s' % (arg, k))
elif isinstance(value, list):
for v in value:
if v:
cmd.append('%s=%s' % (arg, v))
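The *_arg helpers above flatten a container config dict into CLI flags. Module-level sketches (self dropped, behavior otherwise as in string_arg and list_or_dict_arg), with illustrative inputs:

```python
def string_arg(cconfig, cmd, key, arg, transform=None):
    # append '--flag=value' when the config key is present
    if key in cconfig:
        value = transform(cconfig[key]) if transform else cconfig[key]
        cmd.append('%s=%s' % (arg, value))


def list_or_dict_arg(cconfig, cmd, key, arg):
    # dicts become sorted '--flag=K=V' entries; lists one '--flag=v' each
    value = cconfig.get(key)
    if isinstance(value, dict):
        for k, v in sorted(value.items()):
            if v:
                cmd.append('%s=%s=%s' % (arg, k, v))
            elif k:
                cmd.append('%s=%s' % (arg, k))
    elif isinstance(value, list):
        for v in value:
            if v:
                cmd.append('%s=%s' % (arg, v))


cmd = []
string_arg({'user': 'nova'}, cmd, 'user', '--user')
list_or_dict_arg({'environment': {'B': '2', 'A': '1'}}, cmd,
                 'environment', '--env')
# cmd == ['--user=nova', '--env=A=1', '--env=B=2']
```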
def cont_exec_args(self, cmd, container, delegate=None):
"""Prepare the exec command args, from the container configuration.
:param cmd: The list of command options to be modified
:param container: A dict with container configurations
:param delegate: A predictable/unique name of the actual container
:returns: True if configuration is valid, otherwise False
"""
if delegate and container != delegate:
self.log.debug("Container {} has a delegate "
"{}".format(container, delegate))
cconfig = self.config[container]
if 'privileged' in cconfig:
cmd.append('--privileged=%s' % str(cconfig['privileged']).lower())
if 'user' in cconfig:
cmd.append('--user=%s' % cconfig['user'])
self.list_or_dict_arg(cconfig, cmd, 'environment', '--env')
command = self.command_argument(cconfig.get('command'))
# for exec, the first argument is the container name,
# make sure the correct one is used
if command:
if not delegate:
command[0] = self.runner.discover_container_name(
command[0], self.config_id)
else:
command[0] = delegate
cmd.extend(command)
return True
def pull_missing_images(self, stdout, stderr):
images = set()
for container in self.config:
cconfig = self.config[container]
image = cconfig.get('image')
if image:
images.add(image)
returncode = 0
for image in sorted(images):
# only pull if the image does not exist locally
if self.runner.cont_cmd == 'docker':
if self.runner.inspect(image,
output_format='exists',
o_type='image'):
continue
else:
img_exist = self.runner.image_exist(image)
if img_exist:
continue
try:
(cmd_stdout, cmd_stderr) = self._pull(image)
except PullException as e:
returncode = e.rc
cmd_stdout = e.stdout
cmd_stderr = e.stderr
self.log.error("Error pulling %s. [%s]\n" %
(image, returncode))
self.log.error("stdout: %s" % e.stdout)
self.log.error("stderr: %s" % e.stderr)
else:
self.log.debug('Pulled %s' % image)
self.log.info("stdout: %s" % cmd_stdout)
self.log.info("stderr: %s" % cmd_stderr)
if cmd_stdout:
stdout.append(cmd_stdout)
if cmd_stderr:
stderr.append(cmd_stderr)
return returncode
@tenacity.retry( # Retry up to 4 times with jittered exponential backoff
reraise=True,
wait=tenacity.wait_random_exponential(multiplier=1, max=10),
stop=tenacity.stop_after_attempt(4)
)
def _pull(self, image):
cmd = [self.runner.cont_cmd, 'pull', image]
(stdout, stderr, rc) = self.runner.execute(cmd, self.log)
if rc != 0:
raise PullException(stdout, stderr, rc)
return stdout, stderr
@staticmethod
def command_argument(command):
if not command:
return []
if not isinstance(command, list):
return command.split()
return command
def lower(self, a):
return str(a).lower()
def which(self, program):
try:
pgm = shutil.which(program)
except AttributeError:
pgm = distutils.spawn.find_executable(program)
return pgm
def duration(self, a):
if isinstance(a, (int, float)):
return a
# match groups of the format 5h34m56s
        m = re.match(r'^'
                     r'(([\d\.]+)h)?'
                     r'(([\d\.]+)m)?'
                     r'(([\d\.]+)s)?'
                     r'(([\d\.]+)ms)?'
                     r'(([\d\.]+)us)?'
                     r'$', a)
if not m:
# fallback to parsing string as a number
return float(a)
n = 0.0
if m.group(2):
n += 3600 * float(m.group(2))
if m.group(4):
n += 60 * float(m.group(4))
if m.group(6):
n += float(m.group(6))
if m.group(8):
n += float(m.group(8)) / 1000.0
if m.group(10):
n += float(m.group(10)) / 1000000.0
return n
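The duration parser above can be exercised in isolation; a standalone sketch of the same grouped-regex approach (the group indices mirror the method's regex):

```python
import re


def duration(a):
    """Parse a compose-style duration like '1h30m15s' into seconds
    (standalone sketch of BaseBuilder.duration)."""
    if isinstance(a, (int, float)):
        return a
    m = re.match(r'^(([\d.]+)h)?(([\d.]+)m)?(([\d.]+)s)?'
                 r'(([\d.]+)ms)?(([\d.]+)us)?$', a)
    if not m:
        # fallback to parsing the string as a plain number
        return float(a)
    n = 0.0
    # inner groups 2/4/6/8/10 hold the numbers for h/m/s/ms/us
    for group, scale in ((2, 3600), (4, 60), (6, 1), (8, 1e-3), (10, 1e-6)):
        if m.group(group):
            n += scale * float(m.group(group))
    return n
```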
def validate_volumes(self, volumes):
"""Validate volume sources
Validates that the source volume either exists on the filesystem
or is a valid container volume. Since podman will error if the
source volume filesystem path doesn't exist, we want to catch the
error before podman.
:param: volumes: list of volume mounts in the format of "src:path"
"""
valid = True
for volume in volumes:
if not volume:
# ignore when volume is ''
continue
src_path = volume.split(':', 1)[0]
check = self.runner.validate_volume_source(src_path)
if not check:
self.log.error("%s is not a valid volume source" % src_path)
valid = False
return valid
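A standalone sketch of the volume check above; note the real implementation delegates to runner.validate_volume_source, which can also accept named container volumes, while this sketch only checks the filesystem:

```python
import os


def validate_volume_sources(volumes):
    """Check that each 'src:path[:opts]' mount has an existing source
    (filesystem-only sketch of validate_volumes)."""
    valid = True
    for volume in volumes:
        if not volume:
            continue  # empty strings are ignored, as in the original
        src_path = volume.split(':', 1)[0]
        if not os.path.exists(src_path):
            valid = False
    return valid
```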
class PullException(Exception):
def __init__(self, stdout, stderr, rc):
self.stdout = stdout
self.stderr = stderr
self.rc = rc


@ -1,128 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from paunch.builder import base
from paunch.utils import common
class ComposeV1Builder(base.BaseBuilder):
def __init__(self, config_id, config, runner, labels=None, log=None,
cleanup=True):
        super(ComposeV1Builder, self).__init__(config_id, config, runner,
                                               labels, log, cleanup=cleanup)
def container_run_args(self, cmd, container, delegate=None):
"""Prepare the run command args, from the container configuration.
:param cmd: The list of command options to be modified
:param container: A dict with container configurations
        :param delegate: A compatibility parameter for podman, does nothing
            here
:returns: True if configuration is valid, otherwise False
"""
if delegate and container != delegate:
            self.log.debug("Delegate {} of container {} has no special "
                           "meaning in this context and will be "
                           "ignored".format(delegate, container))
cconfig = self.config[container]
if cconfig.get('detach', True):
cmd.append('--detach=true')
self.list_or_string_arg(cconfig, cmd, 'env_file', '--env-file')
self.list_or_dict_arg(cconfig, cmd, 'environment', '--env')
self.boolean_arg(cconfig, cmd, 'remove', '--rm')
self.boolean_arg(cconfig, cmd, 'interactive', '--interactive')
self.boolean_arg(cconfig, cmd, 'tty', '--tty')
self.string_arg(cconfig, cmd, 'net', '--net')
self.string_arg(cconfig, cmd, 'ipc', '--ipc')
self.string_arg(cconfig, cmd, 'pid', '--pid')
self.string_arg(cconfig, cmd, 'uts', '--uts')
# TODO(sbaker): implement ulimits property, deprecate this ulimit
# property
for u in cconfig.get('ulimit', []):
if u:
cmd.append('--ulimit=%s' % u)
if 'healthcheck' in cconfig:
hconfig = cconfig['healthcheck']
if 'test' in hconfig:
cmd.append('--health-cmd')
cmd.append(str(hconfig['test']))
if 'interval' in hconfig:
cmd.append('--health-interval=%s' % hconfig['interval'])
if 'timeout' in hconfig:
cmd.append('--health-timeout=%s' % hconfig['timeout'])
if 'retries' in hconfig:
cmd.append('--health-retries=%s' % hconfig['retries'])
self.string_arg(cconfig, cmd, 'privileged', '--privileged', self.lower)
self.string_arg(cconfig, cmd, 'restart', '--restart')
self.string_arg(cconfig, cmd, 'user', '--user')
self.list_arg(cconfig, cmd, 'group_add', '--group-add')
self.list_arg(cconfig, cmd, 'volumes', '--volume')
self.list_arg(cconfig, cmd, 'volumes_from', '--volumes-from')
# TODO(sbaker): deprecate log_tag, implement log_driver, log_opt
if 'log_tag' in cconfig:
cmd.append('--log-opt=tag=%s' % cconfig['log_tag'])
self.string_arg(cconfig, cmd, 'cpu_shares', '--cpu-shares')
self.string_arg(cconfig, cmd, 'mem_limit', '--memory')
self.string_arg(cconfig, cmd, 'memswap_limit', '--memory-swap')
self.string_arg(cconfig, cmd, 'mem_swappiness', '--memory-swappiness')
self.list_or_string_arg(cconfig, cmd, 'security_opt', '--security-opt')
self.string_arg(cconfig, cmd, 'stop_signal', '--stop-signal')
self.string_arg(cconfig, cmd, 'hostname', '--hostname')
for extra_host in cconfig.get('extra_hosts', []):
if extra_host:
cmd.append('--add-host=%s' % extra_host)
if 'cpuset_cpus' in cconfig:
            # 'all' is a special value to directly configure all CPUs
            # that are available. Without specifying --cpuset-cpus, we'll
            # let the container engine figure out what CPUs are online.
# https://bugs.launchpad.net/tripleo/+bug/1868135
# https://bugzilla.redhat.com/show_bug.cgi?id=1813091
if cconfig['cpuset_cpus'] != 'all':
cmd.append('--cpuset-cpus=%s' % cconfig['cpuset_cpus'])
else:
cmd.append('--cpuset-cpus=%s' % common.get_cpus_allowed_list())
self.string_arg(cconfig, cmd,
'stop_grace_period', '--stop-timeout',
self.duration)
self.list_arg(cconfig, cmd, 'cap_add', '--cap-add')
self.list_arg(cconfig, cmd, 'cap_drop', '--cap-drop')
# TODO(sbaker): add missing compose v1 properties:
# cgroup_parent
# devices
# dns, dns_search
# entrypoint
# expose
# extra_hosts
# labels
# ports
# stop_signal
# volume_driver
# cpu_quota
# domainname
# hostname
# mac_address
# memory_reservation
# kernel_memory
# read_only
# shm_size
# stdin_open
# working_dir
cmd.append(cconfig.get('image', ''))
cmd.extend(self.command_argument(cconfig.get('command')))
return self.validate_volumes(cconfig.get('volumes', []))
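The healthcheck branch above maps a compose-v1 `healthcheck` dict onto `--health-*` flags; as a standalone sketch of that mapping:

```python
def health_args(hconfig):
    """Map a compose-v1 'healthcheck' dict to container CLI flags
    (sketch of the branch in container_run_args above)."""
    cmd = []
    if 'test' in hconfig:
        # the test command is passed as a separate argument, not key=value
        cmd += ['--health-cmd', str(hconfig['test'])]
    for key in ('interval', 'timeout', 'retries'):
        if key in hconfig:
            cmd.append('--health-%s=%s' % (key, hconfig[key]))
    return cmd
```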


@ -1,116 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
import os
from paunch.builder import base
from paunch.utils import common
class PodmanBuilder(base.BaseBuilder):
def __init__(self, config_id, config, runner, labels=None, log=None,
cont_log_path=None, healthcheck_disabled=False, cleanup=True):
super(PodmanBuilder, self).__init__(config_id, config, runner,
labels, log, cont_log_path,
healthcheck_disabled, cleanup)
def container_run_args(self, cmd, container, delegate=None):
"""Prepare the run command args, from the container configuration.
:param cmd: The list of command options to be modified
:param container: A dict with container configurations
:param delegate: A predictable/unique name of the actual container
:returns: True if configuration is valid, otherwise False
"""
if delegate and container != delegate:
self.log.debug("Container {} has a delegate "
"{}".format(container, delegate))
if not delegate:
delegate = container
cconfig = self.config[container]
# write out a pid file so we can restart the container delegate
# via systemd
cmd.append('--conmon-pidfile=/var/run/{}.pid'.format(delegate))
if cconfig.get('detach', True):
cmd.append('--detach=true')
if self.cont_log_path is not None:
if os.path.isabs(self.cont_log_path):
if not os.path.exists(self.cont_log_path):
os.makedirs(self.cont_log_path)
log_path = os.path.join(self.cont_log_path, delegate)
logging = ['--log-driver', 'k8s-file',
'--log-opt', 'path=%s.log' % log_path]
cmd.extend(logging)
else:
raise ValueError('cont_log_path passed but not absolute.')
self.list_or_string_arg(cconfig, cmd, 'env_file', '--env-file')
self.list_or_dict_arg(cconfig, cmd, 'environment', '--env')
self.boolean_arg(cconfig, cmd, 'remove', '--rm')
self.boolean_arg(cconfig, cmd, 'interactive', '--interactive')
self.boolean_arg(cconfig, cmd, 'tty', '--tty')
self.string_arg(cconfig, cmd, 'net', '--net')
self.string_arg(cconfig, cmd, 'ipc', '--ipc')
self.string_arg(cconfig, cmd, 'pid', '--pid')
self.string_arg(cconfig, cmd, 'uts', '--uts')
# TODO(sbaker): implement ulimits property, deprecate this ulimit
# property
for u in cconfig.get('ulimit', []):
if u:
cmd.append('--ulimit=%s' % u)
self.string_arg(cconfig, cmd, 'privileged', '--privileged', self.lower)
self.string_arg(cconfig, cmd, 'user', '--user')
self.list_arg(cconfig, cmd, 'group_add', '--group-add')
self.list_arg(cconfig, cmd, 'volumes', '--volume')
self.list_arg(cconfig, cmd, 'volumes_from', '--volumes-from')
# TODO(sbaker): deprecate log_tag, implement log_driver, log_opt
if 'log_tag' in cconfig:
cmd.append('--log-opt=tag=%s' % cconfig['log_tag'])
self.string_arg(cconfig, cmd, 'cpu_shares', '--cpu-shares')
self.string_arg(cconfig, cmd, 'mem_limit', '--memory')
self.string_arg(cconfig, cmd, 'memswap_limit', '--memory-swap')
self.string_arg(cconfig, cmd, 'mem_swappiness', '--memory-swappiness')
self.list_or_string_arg(cconfig, cmd, 'security_opt', '--security-opt')
self.string_arg(cconfig, cmd, 'stop_signal', '--stop-signal')
self.string_arg(cconfig, cmd, 'hostname', '--hostname')
for extra_host in cconfig.get('extra_hosts', []):
if extra_host:
cmd.append('--add-host=%s' % extra_host)
if 'cpuset_cpus' in cconfig:
            # 'all' is a special value to directly configure all CPUs
            # that are available. Without specifying --cpuset-cpus, we'll
            # let the container engine figure out what CPUs are online.
# https://bugs.launchpad.net/tripleo/+bug/1868135
# https://bugzilla.redhat.com/show_bug.cgi?id=1813091
if cconfig['cpuset_cpus'] != 'all':
cmd.append('--cpuset-cpus=%s' % cconfig['cpuset_cpus'])
else:
cmd.append('--cpuset-cpus=%s' % common.get_cpus_allowed_list())
self.string_arg(cconfig, cmd,
'stop_grace_period', '--stop-timeout',
self.duration)
self.list_arg(cconfig, cmd, 'cap_add', '--cap-add')
self.list_arg(cconfig, cmd, 'cap_drop', '--cap-drop')
self.string_arg(cconfig, cmd, 'check_interval', '--check-interval')
cmd.append(cconfig.get('image', ''))
cmd.extend(self.command_argument(cconfig.get('command')))
return self.validate_volumes(cconfig.get('volumes', []))


@ -1,390 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
import collections
from cliff import command
from cliff import lister
import json
import paunch
from paunch import utils
class Apply(command.Command):
log = None
def get_parser(self, prog_name):
parser = super(Apply, self).get_parser(prog_name)
parser.add_argument(
'--file',
metavar='<file>',
required=True,
help=('YAML or JSON file or directory containing configuration '
'data'),
)
parser.add_argument(
'--label',
metavar='<label=value>',
dest='labels',
default=[],
action='append',
help=('Extra labels to apply to containers in this config, in the '
'form --label label=value --label label2=value2.'),
)
parser.add_argument(
'--managed-by',
metavar='<name>',
dest='managed_by',
default='paunch',
help=('Override the name of the tool managing the containers'),
)
parser.add_argument(
'--config-id',
metavar='<name>',
dest='config_id',
required=True,
help=('ID to assign to containers'),
)
parser.add_argument(
'--default-runtime',
dest='default_runtime',
default='podman',
choices=['docker', 'podman'],
help=('Default runtime for containers. Can be docker or podman.'),
)
parser.add_argument(
'--container-log-path',
dest='cont_log_path',
default=None,
help=('Absolute directory path for container log. Works only for '
'podman container engine')
)
parser.add_argument(
'--healthcheck-disabled',
dest='healthcheck_disabled',
action='store_true',
default=False,
            help=('Whether or not to disable the container healthchecks')
)
parser.add_argument(
'--cleanup',
dest='cleanup',
action='store_true',
default=True,
            help=('Whether or not to delete containers missing from the '
                  'config')
)
return parser
def take_action(self, parsed_args):
(self.log, log_file, log_level) = \
utils.common.configure_logging_from_args(__name__, self.app_args)
labels = collections.OrderedDict()
for l in parsed_args.labels:
            k, v = l.split('=', 1)
labels[k] = v
config_path = parsed_args.file
config = utils.common.load_config(config_path)
stdout, stderr, rc = paunch.apply(
parsed_args.config_id,
config,
managed_by=parsed_args.managed_by,
labels=labels,
cont_cmd=parsed_args.default_runtime,
log_level=log_level,
log_file=log_file,
cont_log_path=parsed_args.cont_log_path,
healthcheck_disabled=parsed_args.healthcheck_disabled,
cleanup=parsed_args.cleanup
)
return rc
class Cleanup(command.Command):
log = None
def get_parser(self, prog_name):
parser = super(Cleanup, self).get_parser(prog_name)
parser.add_argument(
'config_id',
metavar='<config_id>',
nargs='*',
help=('Identifiers for the configs which still apply, all others '
'will be deleted.'),
)
parser.add_argument(
'--managed-by',
metavar='<name>',
dest='managed_by',
default='paunch',
help=('Override the name of the tool managing the containers'),
)
parser.add_argument(
'--default-runtime',
dest='default_runtime',
default='podman',
choices=['docker', 'podman'],
help=('Default runtime for containers. Can be docker or podman.'),
)
return parser
def take_action(self, parsed_args):
(self.log, log_file, log_level) = \
utils.common.configure_logging_from_args(__name__, self.app_args)
paunch.cleanup(
parsed_args.config_id,
managed_by=parsed_args.managed_by,
cont_cmd=parsed_args.default_runtime,
log_level=log_level,
log_file=log_file
)
class Delete(command.Command):
log = None
def get_parser(self, prog_name):
parser = super(Delete, self).get_parser(prog_name)
parser.add_argument(
'config_id',
nargs='*',
metavar='<config_id>',
help=('Identifier for the config to delete the containers for'),
)
parser.add_argument(
'--managed-by',
metavar='<name>',
dest='managed_by',
default='paunch',
help=('Override the name of the tool managing the containers'),
)
parser.add_argument(
'--default-runtime',
dest='default_runtime',
default='podman',
choices=['docker', 'podman'],
help=('Default runtime for containers. Can be docker or podman.'),
)
return parser
def take_action(self, parsed_args):
(self.log, log_file, log_level) = \
utils.common.configure_logging_from_args(__name__, self.app_args)
paunch.delete(
parsed_args.config_id,
parsed_args.managed_by,
cont_cmd=parsed_args.default_runtime,
log_level=log_level,
log_file=log_file
)
class Debug(command.Command):
log = None
def get_parser(self, prog_name):
parser = super(Debug, self).get_parser(prog_name)
parser.add_argument(
'--file',
metavar='<file>',
required=True,
help=('YAML or JSON file or directory containing configuration '
'data'),
)
parser.add_argument(
'--label',
metavar='<label=value>',
dest='labels',
default=[],
action='append',
help=('Extra labels to apply to containers in this config, in the '
'form --label label=value --label label2=value2.'),
)
parser.add_argument(
'--managed-by',
metavar='<name>',
dest='managed_by',
default='paunch',
help=('Override the name of the tool managing the containers')
)
parser.add_argument(
'--action',
metavar='<name>',
dest='action',
default='print-cmd',
help=('Action can be one of: "dump-json", "dump-yaml", '
'"print-cmd", or "run"')
)
parser.add_argument(
'--container',
metavar='<name>',
dest='container_name',
required=True,
help=('Name of the container you wish to manipulate')
)
parser.add_argument(
'--interactive',
dest='interactive',
action='store_true',
default=False,
help=('Run container in interactive mode - modifies config and '
'execution of container')
)
parser.add_argument(
'--shell',
dest='shell',
action='store_true',
default=False,
help=('Similar to interactive but drops you into a shell')
)
parser.add_argument(
'--user',
metavar='<name>',
dest='user',
default='',
help=('Start container as the specified user')
)
parser.add_argument(
'--overrides',
metavar='<name>',
dest='overrides',
default='',
help=('JSON configuration information used to override default '
'config values')
)
parser.add_argument(
'--config-id',
metavar='<name>',
dest='config_id',
required=False,
default='debug',
help=('ID to assign to containers')
)
parser.add_argument(
'--default-runtime',
dest='default_runtime',
default='podman',
choices=['docker', 'podman'],
help=('Default runtime for containers. Can be docker or podman.'),
)
return parser
def take_action(self, parsed_args):
(self.log, log_file, log_level) = \
utils.common.configure_logging_from_args(__name__, self.app_args)
labels = collections.OrderedDict()
for l in parsed_args.labels:
            k, v = l.split('=', 1)
labels[k] = v
container_name = parsed_args.container_name
config_path = parsed_args.file
config = utils.common.load_config(config_path, container_name)
cconfig = {}
cconfig[container_name] = config[container_name]
if parsed_args.interactive or parsed_args.shell:
iconfig = {
"interactive": True,
"tty": True,
"restart": "no",
"detach": False,
"remove": True
}
cconfig[container_name].update(iconfig)
if parsed_args.shell:
sconfig = {"command": "/bin/bash"}
cconfig[container_name].update(sconfig)
if parsed_args.user:
rconfig = {"user": parsed_args.user}
cconfig[container_name].update(rconfig)
conf_overrides = []
if parsed_args.overrides:
conf_overrides = json.loads(parsed_args.overrides)
cconfig[container_name].update(conf_overrides)
paunch.debug(
parsed_args.config_id,
container_name,
parsed_args.action,
cconfig,
parsed_args.managed_by,
labels=labels,
cont_cmd=parsed_args.default_runtime,
log_level=log_level,
log_file=log_file
)
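The flag-to-config layering in `take_action` above can be replayed in isolation; the sample `config`, flag values, and override JSON below are hypothetical stand-ins for the parsed arguments:

```python
import json

# Hypothetical container config and debug-flag values; mirrors how
# take_action layers --interactive/--shell/--user/--overrides onto
# one container's config before invoking the requested action.
config = {'image': 'centos:7', 'restart': 'always'}
interactive, shell, user = True, True, 'stack'
overrides = '{"privileged": true}'

if interactive or shell:
    # interactive mode: attach a TTY and make the container ephemeral
    config.update({'interactive': True, 'tty': True, 'restart': 'no',
                   'detach': False, 'remove': True})
if shell:
    config.update({'command': '/bin/bash'})
if user:
    config.update({'user': user})
if overrides:
    config.update(json.loads(overrides))

print(config['restart'], config['command'], config['user'])  # no /bin/bash stack
```

Later updates win, so `--overrides` JSON can undo anything the earlier flags set.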
class List(lister.Lister):
log = None
def get_parser(self, prog_name):
parser = super(List, self).get_parser(prog_name)
parser.add_argument(
'--managed-by',
metavar='<name>',
dest='managed_by',
default='paunch',
help=('Override the name of the tool managing the containers'),
)
parser.add_argument(
'--default-runtime',
dest='default_runtime',
default='podman',
choices=['docker', 'podman'],
help=('Default runtime for containers. Can be docker or podman.'),
)
return parser
def take_action(self, parsed_args):
(self.log, log_file, log_level) = \
utils.common.configure_logging_from_args(__name__, self.app_args)
configs = paunch.list(
parsed_args.managed_by,
cont_cmd=parsed_args.default_runtime,
log_level=log_level,
log_file=log_file
)
columns = [
'config',
'container',
'image',
'command',
'status',
]
data = []
for k, v in configs.items():
for i in v:
# docker has a leading slash in the name, strip it
if parsed_args.default_runtime == 'docker':
name = i.get('Name', '/')[1:]
else:
name = i.get('Name', '')
cmd = ' '.join(i.get('Config', {}).get('Cmd', []))
image = i.get('Config', {}).get('Image')
status = i.get('State', {}).get('Status')
data.append((k, name, image, cmd, status))
return columns, data


@ -1,17 +0,0 @@
# Copyright 2018 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
LOG_FILE = '/var/log/paunch.log'
SYSTEMD_DIR = '/etc/systemd/system/'


@ -1,512 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
import collections
import jmespath
import json
import os
import random
import string
import subprocess
import time
from paunch.builder import podman
from paunch.utils import common
from paunch.utils import systemctl
from paunch.utils import systemd
class BaseRunner(object):
def __init__(self, managed_by, cont_cmd, log=None, cont_log_path=None,
healthcheck_disabled=False):
self.managed_by = managed_by
self.cont_cmd = cont_cmd
# Leverage pre-configured logger
self.log = log or common.configure_logging(__name__)
self.cont_log_path = cont_log_path
self.healthcheck_disabled = healthcheck_disabled
if self.cont_cmd == 'docker':
self.log.warning('docker runtime is deprecated in Stein '
'and will be removed in Train.')
@staticmethod
def execute(cmd, log=None, quiet=False):
if not log:
log = common.configure_logging(__name__)
if not quiet:
log.debug('$ %s' % ' '.join(cmd))
subproc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
cmd_stdout, cmd_stderr = subproc.communicate()
if subproc.returncode != 0:
log.error('Error executing %s: returned %s' % (cmd,
subproc.returncode))
if not quiet:
log.debug(cmd_stdout)
log.debug(cmd_stderr)
return (cmd_stdout.decode('utf-8'),
cmd_stderr.decode('utf-8'),
subproc.returncode)
@staticmethod
def execute_interactive(cmd, log=None):
if not log:
log = common.configure_logging(__name__)
log.debug('$ %s' % ' '.join(cmd))
return subprocess.call(cmd)
def current_config_ids(self):
# List all config_id labels for managed containers
# FIXME(bogdando): remove once we have it fixed:
# https://github.com/containers/libpod/issues/1729
if self.cont_cmd == 'docker':
fmt = '{{.Label "config_id"}}'
else:
fmt = '{{.Labels.config_id}}'
cmd = [
self.cont_cmd, 'ps', '-a',
'--filter', 'label=managed_by=%s' % self.managed_by,
'--format', fmt
]
cmd_stdout, cmd_stderr, returncode = self.execute(cmd, self.log)
results = cmd_stdout.split()
if returncode != 0 or not results or results == ['']:
# NOTE(bogdando): also match the historical default
# managed_by=paunch label, so that older configs are identified too
cmd = [
self.cont_cmd, 'ps', '-a',
'--filter', 'label=managed_by=paunch',
'--format', fmt
]
cmd_stdout, cmd_stderr, returncode = self.execute(cmd, self.log)
if returncode != 0:
return set()
results += cmd_stdout.split()
return set(results)
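The two-pass lookup in `current_config_ids` (current `managed_by` label first, then the historical default) boils down to a set union; `config_ids` and the label data below are hypothetical, with the `ps` queries replaced by a dict lookup:

```python
def config_ids(by_label, managed_by, fallback='paunch'):
    # by_label maps a managed_by value to the config_id of each of its
    # containers; run the primary query, fall back to the historical
    # default only when it yields nothing, and deduplicate via a set,
    # as current_config_ids does.
    results = list(by_label.get(managed_by, []))
    if not results:
        results += by_label.get(fallback, [])
    return set(results)

containers = {'tripleo': ['step_1', 'step_1', 'step_2'],
              'paunch': ['legacy']}
print(sorted(config_ids(containers, 'tripleo')))   # ['step_1', 'step_2']
print(sorted(config_ids(containers, 'missing')))   # ['legacy']
```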
def containers_in_config(self, conf_id):
cmd = [
self.cont_cmd, 'ps', '-q', '-a',
'--filter', 'label=managed_by=%s' % self.managed_by,
'--filter', 'label=config_id=%s' % conf_id
]
cmd_stdout, cmd_stderr, returncode = self.execute(cmd, self.log)
results = cmd_stdout.split()
if returncode != 0 or not results or results == ['']:
# NOTE(bogdando): also match the historical default
# managed_by=paunch label, so that older configs are identified too
cmd = [
self.cont_cmd, 'ps', '-q', '-a',
'--filter', 'label=managed_by=paunch',
'--filter', 'label=config_id=%s' % conf_id
]
cmd_stdout, cmd_stderr, returncode = self.execute(cmd, self.log)
if returncode != 0:
return []
results += cmd_stdout.split()
return results
def inspect(self, name, output_format=None, o_type='container',
quiet=False):
# In podman, if we're being asked to inspect a container image, we
# want to verify that the image exists before inspecting it.
# Context: https://github.com/containers/libpod/issues/1845
if o_type == 'image':
if not self.image_exist(name):
return
cmd = [self.cont_cmd, 'inspect', '--type', o_type]
if output_format:
cmd.append('--format')
cmd.append(output_format)
cmd.append(name)
(cmd_stdout, cmd_stderr, returncode) = self.execute(
cmd, self.log, quiet)
if returncode != 0:
return
try:
if output_format:
return cmd_stdout
else:
return json.loads(cmd_stdout)[0]
except Exception as e:
self.log.error('Problem parsing %s inspect: %s' %
(self.cont_cmd, e))
def unique_container_name(self, container):
container_name = container
if self.cont_cmd == 'docker':
while self.inspect(container_name, output_format='exists',
quiet=True):
suffix = ''.join(random.choice(
string.ascii_lowercase + string.digits) for i in range(8))
container_name = '%s-%s' % (container, suffix)
break
else:
while self.container_exist(container_name, quiet=True):
suffix = ''.join(random.choice(
string.ascii_lowercase + string.digits) for i in range(8))
container_name = '%s-%s' % (container, suffix)
break
return container_name
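The suffixing scheme in `unique_container_name` can be sketched on its own; `unique_name` and its `exists` callback are hypothetical stand-ins for the runner's inspect/exists checks:

```python
import random
import string

def unique_name(base, exists=lambda name: False):
    # Append one random 8-character suffix when the base name is already
    # taken, matching the single-pass (break-after-one-iteration)
    # behaviour of unique_container_name.
    name = base
    if exists(name):
        suffix = ''.join(random.choice(
            string.ascii_lowercase + string.digits) for _ in range(8))
        name = '%s-%s' % (base, suffix)
    return name

free = unique_name('nova_api')
taken = unique_name('nova_api', exists=lambda n: n == 'nova_api')
print(free)   # nova_api
print(taken)  # e.g. nova_api-7fk3q9d1
```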
def discover_container_name(self, container, cid):
cmd = [
self.cont_cmd,
'ps',
'-a',
'--filter',
'label=container_name=%s' % container,
'--filter',
'label=config_id=%s' % cid,
'--format',
'{{.Names}}'
]
(cmd_stdout, cmd_stderr, returncode) = self.execute(cmd, self.log)
if returncode == 0:
names = cmd_stdout.split()
if names:
return names[0]
self.log.warning('Did not find container with "%s" - retrying without '
'config_id' % cmd)
cmd = [
self.cont_cmd,
'ps',
'-a',
'--filter',
'label=container_name=%s' % container,
'--format',
'{{.Names}}'
]
(cmd_stdout, cmd_stderr, returncode) = self.execute(cmd, self.log)
if returncode == 0:
names = cmd_stdout.split()
if names:
return names[0]
self.log.warning('Did not find container with "%s"' % cmd)
def delete_missing_configs(self, config_ids):
if not config_ids:
config_ids = []
for conf_id in self.current_config_ids():
if conf_id not in config_ids:
self.log.debug('%s no longer exists, deleting containers' %
conf_id)
self.remove_containers(conf_id)
def discover_container_config(self, configs, container, name):
'''Find the paunch and runtime configs of a container by name.'''
for conf_id in self.current_config_ids():
query = ("[] | [?(Name=='%s' && "
"Config.Labels.container_name=='%s' && "
"Config.Labels.config_id=='%s')]" %
(container, name, conf_id))
runtime_conf = None
try:
runtime_conf = jmespath.search(query,
configs[conf_id])[0]
result = (conf_id, runtime_conf)
except Exception:
self.log.error("Failed searching container %s "
"for config %s" % (container, conf_id))
result = (None, None)
if runtime_conf:
self.log.debug("Found container %s "
"for config %s" % (container, conf_id))
break
return result
def list_configs(self):
configs = collections.defaultdict(list)
for conf_id in self.current_config_ids():
for container in self.containers_in_config(conf_id):
configs[conf_id].append(self.inspect(container))
return configs
def container_names(self, conf_id=None):
# list every container name, and its container_name label
# FIXME(bogdando): remove once we have it fixed:
# https://github.com/containers/libpod/issues/1729
if self.cont_cmd == 'docker':
fmt = '{{.Label "container_name"}}'
else:
fmt = '{{.Labels.container_name}}'
cmd = [
self.cont_cmd, 'ps', '-a',
'--filter', 'label=managed_by=%s' % self.managed_by
]
if conf_id:
cmd.extend((
'--filter', 'label=config_id=%s' % conf_id
))
cmd.extend((
'--format', '{{.Names}} %s' % fmt
))
cmd_stdout, cmd_stderr, returncode = self.execute(cmd, self.log)
results = cmd_stdout.split("\n")
if returncode != 0 or not results or results == ['']:
# NOTE(bogdando): also match the historical default
# managed_by=paunch label, so that older configs are identified too
cmd = [
self.cont_cmd, 'ps', '-a',
'--filter', 'label=managed_by=paunch'
]
if conf_id:
cmd.extend((
'--filter', 'label=config_id=%s' % conf_id
))
cmd.extend((
'--format', '{{.Names}} %s' % fmt
))
cmd_stdout, cmd_stderr, returncode = self.execute(cmd, self.log)
if returncode != 0:
return []
results += cmd_stdout.split("\n")
result = []
for line in results:
if line:
result.append(line.split())
return result
def remove_containers(self, conf_id):
for container in self.containers_in_config(conf_id):
self.remove_container(container)
def remove_container(self, container):
if self.cont_cmd == 'podman':
systemd.service_delete(container=container, log=self.log)
self.execute([self.cont_cmd, 'stop', container], self.log)
cmd = [self.cont_cmd, 'rm', container]
cmd_stdout, cmd_stderr, returncode = self.execute(cmd, self.log)
if returncode != 0:
self.log.error('Error removing container '
'gracefully: %s' % container)
self.log.error(cmd_stderr)
cmd = [self.cont_cmd, 'rm', '-f', container]
cmd_stdout, cmd_stderr, returncode = self.execute(cmd, self.log)
if returncode != 0:
self.log.error('Error removing container: %s' % container)
self.log.error(cmd_stderr)
def stop_container(self, container, cont_cmd=None, quiet=False):
cont_cmd = cont_cmd or self.cont_cmd
cmd = [cont_cmd, 'stop', container]
cmd_stdout, cmd_stderr, returncode = self.execute(cmd, quiet=quiet)
if returncode != 0 and not quiet:
self.log.error('Error stopping container: %s' % container)
self.log.error(cmd_stderr)
def rename_containers(self):
current_containers = []
need_renaming = {}
renamed = False
for entry in self.container_names():
current_containers.append(entry[0])
# ignore if container_name label not set
if len(entry) < 2:
continue
# ignore if desired name is already actual name
if entry[0] == entry[-1]:
continue
need_renaming[entry[0]] = entry[-1]
for current, desired in sorted(need_renaming.items()):
if desired in current_containers:
self.log.info('Cannot rename "%s" since "%s" still exists' % (
current, desired))
else:
self.log.info('Renaming "%s" to "%s"' % (current, desired))
self.rename_container(current, desired)
renamed = True
current_containers.append(desired)
return renamed
def validate_volume_source(self, volume):
"""Validate the provided volume source.
This checks that the provided volume either exists on the filesystem
or is a container volume.
:param: volume: string containing either a filesystem path or a
container volume name
"""
if os.path.exists(volume):
return True
if os.path.sep in volume:
# if we get here and have a path separator, let's skip the
# container lookup because container volumes won't have / in them.
self.log.debug('Path separator found in volume (%s), but it does not '
'exist on the file system' % volume)
return False
self.log.debug('Running volume lookup for "%s"' % volume)
filter_opt = '--filter=name={}'.format(volume)
cmd = [self.cont_cmd, 'volume', 'ls', '-q', filter_opt]
cmd_stdout, cmd_stderr, returncode = self.execute(cmd)
if returncode != 0:
self.log.error('Error during volume verification')
self.log.error(cmd_stderr)
return False
return (volume in set(cmd_stdout.split()))
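The decision tree in `validate_volume_source` reduces to three cases; `classify_volume_source` and `named_volumes` below are hypothetical, with the runtime's `volume ls` lookup replaced by an in-memory list:

```python
import os

def classify_volume_source(volume, named_volumes):
    # 1) an existing filesystem path is always valid;
    # 2) a path-like string that does not exist cannot be a named
    #    volume, since volume names never contain a path separator;
    # 3) otherwise consult the known named volumes.
    if os.path.exists(volume):
        return 'path'
    if os.path.sep in volume:
        return 'invalid'
    return 'volume' if volume in named_volumes else 'invalid'

print(classify_volume_source(os.sep, []))              # path
print(classify_volume_source('/no/such/dir', ['db']))  # invalid
print(classify_volume_source('db', ['db']))            # volume
```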
class DockerRunner(BaseRunner):
def __init__(self, managed_by, cont_cmd=None, log=None):
cont_cmd = cont_cmd or 'docker'
super(DockerRunner, self).__init__(managed_by, cont_cmd, log)
def rename_container(self, container, name):
cmd = [self.cont_cmd, 'rename', container, name]
cmd_stdout, cmd_stderr, returncode = self.execute(cmd, self.log)
if returncode != 0:
self.log.error('Error renaming container: %s' % container)
self.log.error(cmd_stderr)
def image_exist(self, name, quiet=False):
self.log.warning("image_exist isn't supported "
"by %s" % self.cont_cmd)
return True
def container_exist(self, name, quiet=False):
self.log.warning("container_exist isn't supported "
"by %s" % self.cont_cmd)
return True
def container_running(self, container):
self.log.warning("container_running isn't supported "
"by %s" % self.cont_cmd)
return True
class PodmanRunner(BaseRunner):
def __init__(self, managed_by, cont_cmd=None, log=None,
cont_log_path=None, healthcheck_disabled=False):
cont_cmd = cont_cmd or 'podman'
super(PodmanRunner, self).__init__(managed_by, cont_cmd, log,
cont_log_path, healthcheck_disabled)
def rename_container(self, container, name):
# TODO(emilien) podman doesn't support rename, we'll handle it
# in paunch itself for now
configs = self.list_configs()
config_id, config = self.discover_container_config(
configs, container, name)
# Get config_data dict by the discovered conf ID,
# paunch needs it for maintaining idempotency within a conf ID
filter_names = ("[] | [?(Name!='%s' && "
"Config.Labels.config_id=='%s')]"
".Name" % (container, config_id))
filter_cdata = ("[] | [?(Name!='%s' && "
"Config.Labels.config_id=='%s')]"
".Config.Labels.config_data" % (container, config_id))
names = None
cdata = None
try:
names = jmespath.search(filter_names, configs[config_id])
cdata = jmespath.search(filter_cdata, configs[config_id])
except jmespath.exceptions.LexerError:
self.log.error("Failed to rename a container %s into %s: "
"used a bad search pattern" % (container, name))
return
if not names or not cdata:
self.log.error("Failed to rename a container %s into %s: "
"no config_data was found" % (container, name))
return
# Rename the wanted container in the config_data fetched from the
# discovered config
config_data = dict(zip(names, map(json.loads, cdata)))
config_data[name] = json.loads(
config.get('Config').get('Labels').get('config_data'))
# Re-apply a container under its amended name using the fetched configs
self.log.debug("Renaming a container known as %s into %s, "
"via re-applying its original config" %
(container, name))
self.log.debug("Removing the destination container %s" % name)
self.stop_container(name)
self.remove_container(name)
self.log.debug("Removing a container known as %s" % container)
self.stop_container(container)
self.remove_container(container)
builder = podman.PodmanBuilder(
config_id=config_id,
config=config_data,
runner=self,
labels=None,
log=self.log,
cont_log_path=self.cont_log_path,
healthcheck_disabled=self.healthcheck_disabled
)
builder.apply()
def image_exist(self, name, quiet=False):
cmd = ['podman', 'image', 'exists', name]
(_, _, returncode) = self.execute(cmd, self.log, quiet)
return returncode == 0
def container_exist(self, name, quiet=False):
cmd = ['podman', 'container', 'exists', name]
(_, _, returncode) = self.execute(cmd, self.log, quiet)
return returncode == 0
def container_running(self, container):
service_name = 'tripleo_' + container + '.service'
try:
systemctl.is_active(service_name)
self.log.debug('Unit %s is running' % service_name)
return True
except systemctl.SystemctlException:
chk_cmd = [
self.cont_cmd,
'ps',
'--filter',
'label=container_name=%s' % container,
'--format',
'{{.Names}}'
]
cmd_stdout = ''
returncode = -1
count = 1
while (not cmd_stdout or returncode != 0) and count <= 5:
self.log.warning('Attempt %i to check if %s is '
'running' % (count, container))
# at the first retry, we will force a sync with the OCI runtime
if self.cont_cmd == 'podman' and count == 2:
chk_cmd.append('--sync')
(cmd_stdout, cmd_stderr, returncode) = self.execute(chk_cmd,
self.log)
if returncode != 0:
self.log.warning('Attempt %i Error when running '
'%s:' % (count, chk_cmd))
self.log.warning(cmd_stderr)
else:
if not cmd_stdout:
self.log.warning('Attempt %i Container %s '
'is not running' % (count, container))
count += 1
time.sleep(0.2)
# return True if ps ran successfully and returned a container name.
return (cmd_stdout and returncode == 0)
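The retry loop in `container_running` can be distilled as below; `check_running` and its `probe` callback are hypothetical stand-ins for the `ps` invocation, with `probe` returning `(stdout, returncode)` for each attempt:

```python
import time

def check_running(probe, max_attempts=5, delay=0.0):
    # Retry until the probe prints a container name with a zero return
    # code, or the attempt budget is exhausted, as container_running does.
    out, rc = '', -1
    count = 1
    while (not out or rc != 0) and count <= max_attempts:
        out, rc = probe(count)
        count += 1
        time.sleep(delay)
    return bool(out) and rc == 0

attempts = iter([('', 1), ('', 0), ('keystone', 0)])
print(check_running(lambda n: next(attempts)))  # True, on the third attempt
```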


@ -1,23 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2010-2011 OpenStack Foundation
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslotest import base
class TestCase(base.BaseTestCase):
"""Test case base class for all unit tests."""


@ -1,619 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import collections
import inspect
import json
import mock
import tenacity
from paunch.builder import base as basebuilder
from paunch.builder import compose1
from paunch import runner
from paunch.tests import base
class TestBaseBuilder(base.TestCase):
@mock.patch("psutil.Process.cpu_affinity", return_value=[0, 1, 2, 3])
@mock.patch("paunch.builder.base.BaseBuilder.delete_updated",
return_value=False)
def test_apply(self, mock_delete_updated, mock_cpu):
orig_call = tenacity.wait.wait_random_exponential.__call__
orig_argspec = inspect.getargspec(orig_call)
config = {
'one': {
'start_order': 0,
'image': 'centos:7',
},
'two': {
'start_order': 1,
'image': 'centos:7',
},
'three': {
'start_order': 2,
'image': 'centos:6',
},
'four': {
'start_order': 10,
'image': 'centos:7',
},
'four_ls': {
'action': 'exec',
'start_order': 20,
'command': ['four', 'ls', '-l', '/']
}
}
r = runner.DockerRunner(managed_by='tester', cont_cmd='docker')
exe = mock.Mock()
exe.side_effect = [
('exists', '', 0), # inspect for image centos:6
('', '', 1), # inspect for missing image centos:7
('Pulled centos:7', 'ouch', 1), # pull centos:6 fails
('Pulled centos:7', '', 0), # pull centos:6 succeeds
# container_names for delete_missing (twice by managed_by)
('', '', 0),
('''five five
six six
two two
three-12345678 three''', '', 0),
('', '', 0), # stop five
('', '', 0), # rm five
('', '', 0), # stop six
('', '', 0), # rm six
# container_names for rename_containers
('three-12345678 three', '', 0),
('', '', 0), # rename three
# desired/container_names to be refreshed after delete/rename
('three three', '', 0), # renamed three already exists
('Created one-12345678', '', 0),
('Created two-12345678', '', 0),
('Created four-12345678', '', 0),
('a\nb\nc', '', 0) # exec four
]
r.discover_container_name = lambda n, c: '%s-12345678' % n
r.unique_container_name = lambda n: '%s-12345678' % n
r.execute = exe
with mock.patch('tenacity.wait.wait_random_exponential.__call__') as f:
f.return_value = 0
with mock.patch('inspect.getargspec') as mock_args:
mock_args.return_value = orig_argspec
builder = compose1.ComposeV1Builder('foo', config, r)
stdout, stderr, deploy_status_code = builder.apply()
self.assertEqual(0, deploy_status_code)
self.assertEqual([
'Pulled centos:7',
'Created one-12345678',
'Created two-12345678',
'Created four-12345678',
'a\nb\nc'
], stdout)
self.assertEqual([], stderr)
exe.assert_has_calls([
# inspect existing image centos:6
mock.call(
['docker', 'inspect', '--type', 'image',
'--format', 'exists', 'centos:6'], mock.ANY, False
),
# inspect and pull missing image centos:7
mock.call(
['docker', 'inspect', '--type', 'image',
'--format', 'exists', 'centos:7'], mock.ANY, False
),
# first pull attempt fails
mock.call(
['docker', 'pull', 'centos:7'], mock.ANY
),
# second pull attempt succeeds
mock.call(
['docker', 'pull', 'centos:7'], mock.ANY
),
# container_names for delete_missing
mock.call(
['docker', 'ps', '-a',
'--filter', 'label=managed_by=tester',
'--filter', 'label=config_id=foo',
'--format', '{{.Names}} {{.Label "container_name"}}'],
mock.ANY
),
mock.call(
['docker', 'ps', '-a',
'--filter', 'label=managed_by=paunch',
'--filter', 'label=config_id=foo',
'--format', '{{.Names}} {{.Label "container_name"}}'],
mock.ANY
),
# rm containers missing in config
mock.call(['docker', 'stop', 'five'], mock.ANY),
mock.call(['docker', 'rm', 'five'], mock.ANY),
mock.call(['docker', 'stop', 'six'], mock.ANY),
mock.call(['docker', 'rm', 'six'], mock.ANY),
# container_names for rename
mock.call(
['docker', 'ps', '-a',
'--filter', 'label=managed_by=tester',
'--format', '{{.Names}} {{.Label "container_name"}}'],
mock.ANY
),
# rename three from an ephemeral to the static name
mock.call(['docker', 'rename', 'three-12345678', 'three'],
mock.ANY),
# container_names to be refreshed after delete/rename
mock.call(
['docker', 'ps', '-a',
'--filter', 'label=managed_by=tester',
'--filter', 'label=config_id=foo',
'--format', '{{.Names}} {{.Label "container_name"}}'],
mock.ANY
),
# run one
mock.call(
['docker', 'run', '--name', 'one-12345678',
'--label', 'config_id=foo',
'--label', 'container_name=one',
'--label', 'managed_by=tester',
'--label', 'config_data=%s' % json.dumps(config['one']),
'--detach=true', '--cpuset-cpus=0,1,2,3',
'centos:7'], mock.ANY
),
# run two
mock.call(
['docker', 'run', '--name', 'two-12345678',
'--label', 'config_id=foo',
'--label', 'container_name=two',
'--label', 'managed_by=tester',
'--label', 'config_data=%s' % json.dumps(config['two']),
'--detach=true', '--cpuset-cpus=0,1,2,3',
'centos:7'], mock.ANY
),
# run four
mock.call(
['docker', 'run', '--name', 'four-12345678',
'--label', 'config_id=foo',
'--label', 'container_name=four',
'--label', 'managed_by=tester',
'--label', 'config_data=%s' % json.dumps(config['four']),
'--detach=true', '--cpuset-cpus=0,1,2,3',
'centos:7'], mock.ANY
),
# execute within four
mock.call(
['docker', 'exec', 'four-12345678', 'ls', '-l',
'/'], mock.ANY
),
])
@mock.patch("psutil.Process.cpu_affinity", return_value=[0, 1, 2, 3])
@mock.patch("paunch.runner.BaseRunner.container_names")
@mock.patch("paunch.runner.BaseRunner.discover_container_name",
return_value='one')
def test_apply_idempotency(self, mock_dname, mock_cnames, mock_cpu):
config = {
# running with the same config and given an ephemeral name
'one': {
'start_order': 0,
'image': 'centos:7',
},
# not running yet
'two': {
'start_order': 1,
'image': 'centos:7',
},
# running, but with a different config
'three': {
'start_order': 2,
'image': 'centos:7',
},
# not running yet
'four': {
'start_order': 10,
'image': 'centos:7',
},
'one_ls': {
'action': 'exec',
'start_order': 20,
'command': ['one', 'ls', '-l', '/']
}
# five is running but is not managed by us
}
# represents the state before and after renaming/removing things
mock_cnames.side_effect = (
# delete_missing
[['five', 'five'], ['one-12345678', 'one'], ['three', 'three']],
# rename_containers
[['one-12345678', 'one']],
# refresh container_names/desired after del/rename
[['one', 'one'], ['three', 'three']],
# refresh container_names/desired after delete_updated
[['one', 'one']]
)
r = runner.DockerRunner(managed_by='tester', cont_cmd='docker')
exe = mock.Mock()
exe.side_effect = [
# inspect for image centos:7
('exists', '', 0),
# stop five
('', '', 0),
# rm five
('', '', 0),
('', '', 0), # ps for rename one
# inspect one
('{"start_order": 0, "image": "centos:7"}', '', 0),
('Created two-12345678', '', 0),
# inspect three
('{"start_order": 42, "image": "centos:7"}', '', 0),
# stop three, changed config data
('', '', 0),
# rm three, changed config data
('', '', 0),
('Created three-12345678', '', 0),
('Created four-12345678', '', 0),
('a\nb\nc', '', 0) # exec one
]
r.discover_container_name = lambda n, c: '%s-12345678' % n
r.unique_container_name = lambda n: '%s-12345678' % n
r.execute = exe
builder = compose1.ComposeV1Builder('foo', config, r)
stdout, stderr, deploy_status_code = builder.apply()
self.assertEqual(0, deploy_status_code)
self.assertEqual([
'Created two-12345678',
'Created three-12345678',
'Created four-12345678',
'a\nb\nc'
], stdout)
self.assertEqual([], stderr)
exe.assert_has_calls([
# inspect image centos:7
mock.call(
['docker', 'inspect', '--type', 'image',
'--format', 'exists', 'centos:7'], mock.ANY, False
),
# rm containers not in config
mock.call(['docker', 'stop', 'five'], mock.ANY),
mock.call(['docker', 'rm', 'five'], mock.ANY),
# rename one from an ephemeral to the static name
mock.call(['docker', 'rename', 'one-12345678', 'one'],
mock.ANY),
# check the renamed one, config hasn't changed
mock.call(['docker', 'inspect', '--type', 'container',
'--format', '{{index .Config.Labels "config_data"}}',
'one'], mock.ANY, False),
# don't run one, its already running
# run two
mock.call(
['docker', 'run', '--name', 'two-12345678',
'--label', 'config_id=foo',
'--label', 'container_name=two',
'--label', 'managed_by=tester',
'--label', 'config_data=%s' % json.dumps(config['two']),
'--detach=true', '--cpuset-cpus=0,1,2,3',
'centos:7'], mock.ANY
),
# rm three, changed config
mock.call(['docker', 'inspect', '--type', 'container',
'--format', '{{index .Config.Labels "config_data"}}',
'three'], mock.ANY, False),
mock.call(['docker', 'stop', 'three'], mock.ANY),
mock.call(['docker', 'rm', 'three'], mock.ANY),
# run three
mock.call(
['docker', 'run', '--name', 'three-12345678',
'--label', 'config_id=foo',
'--label', 'container_name=three',
'--label', 'managed_by=tester',
'--label', 'config_data=%s' % json.dumps(config['three']),
'--detach=true', '--cpuset-cpus=0,1,2,3',
'centos:7'], mock.ANY
),
# run four
mock.call(
['docker', 'run', '--name', 'four-12345678',
'--label', 'config_id=foo',
'--label', 'container_name=four',
'--label', 'managed_by=tester',
'--label', 'config_data=%s' % json.dumps(config['four']),
'--detach=true', '--cpuset-cpus=0,1,2,3',
'centos:7'], mock.ANY
),
# FIXME(bogdando): shall exec ls in the renamed one!
# Why discover_container_name is never called to get it as c_name?
mock.call(
['docker', 'exec', 'one-12345678', 'ls', '-l',
'/'], mock.ANY
),
])
def test_apply_failed_pull(self):
orig_call = tenacity.wait.wait_random_exponential.__call__
orig_argspec = inspect.getargspec(orig_call)
config = {
'one': {
'start_order': 0,
'image': 'centos:7',
},
'two': {
'start_order': 1,
'image': 'centos:7',
},
'three': {
'start_order': 2,
'image': 'centos:6',
},
'four': {
'start_order': 10,
'image': 'centos:7',
},
'four_ls': {
'action': 'exec',
'start_order': 20,
'command': ['four', 'ls', '-l', '/']
}
}
r = runner.DockerRunner(managed_by='tester', cont_cmd='docker')
exe = mock.Mock()
exe.side_effect = [
('exists', '', 0), # inspect for image centos:6
('', '', 1), # inspect for missing image centos:7
('Pulling centos:7', 'ouch', 1), # pull centos:7 failure
('Pulling centos:7', 'ouch', 1), # pull centos:7 retry 2
('Pulling centos:7', 'ouch', 1), # pull centos:7 retry 3
('Pulling centos:7', 'ouch', 1), # pull centos:7 retry 4
]
r.execute = exe
with mock.patch('tenacity.wait.wait_random_exponential.__call__') as f:
f.return_value = 0
with mock.patch('inspect.getargspec') as mock_args:
mock_args.return_value = orig_argspec
builder = compose1.ComposeV1Builder('foo', config, r)
stdout, stderr, deploy_status_code = builder.apply()
self.assertEqual(1, deploy_status_code)
self.assertEqual(['Pulling centos:7'], stdout)
self.assertEqual(['ouch'], stderr)
exe.assert_has_calls([
# inspect existing image centos:6
mock.call(
['docker', 'inspect', '--type', 'image',
'--format', 'exists', 'centos:6'], mock.ANY, False
),
# inspect and pull missing image centos:7
mock.call(
['docker', 'inspect', '--type', 'image',
'--format', 'exists', 'centos:7'], mock.ANY, False
),
mock.call(
['docker', 'pull', 'centos:7'], mock.ANY
),
])
@mock.patch('paunch.runner.DockerRunner', autospec=True)
def test_label_arguments(self, runner):
r = runner.return_value
r.managed_by = 'tester'
builder = compose1.ComposeV1Builder('foo', {}, r)
cmd = []
builder.label_arguments(cmd, 'one')
self.assertEqual(
['--label', 'config_id=foo',
'--label', 'container_name=one',
'--label', 'managed_by=tester',
'--label', 'config_data=null'],
cmd)
labels = collections.OrderedDict()
labels['foo'] = 'bar'
labels['bar'] = 'baz'
builder = compose1.ComposeV1Builder('foo', {}, r, labels=labels)
cmd = []
builder.label_arguments(cmd, 'one')
self.assertEqual(
['--label', 'foo=bar',
'--label', 'bar=baz',
'--label', 'config_id=foo',
'--label', 'container_name=one',
'--label', 'managed_by=tester',
'--label', 'config_data=null'],
cmd)
@mock.patch('paunch.runner.DockerRunner', autospec=True)
def test_durations(self, runner):
config = {
'a': {'stop_grace_period': 123},
'b': {'stop_grace_period': 123.5},
'c': {'stop_grace_period': '123.3'},
'd': {'stop_grace_period': '2.5s'},
'e': {'stop_grace_period': '10s'},
'f': {'stop_grace_period': '1m30s'},
'g': {'stop_grace_period': '2h32m'},
'h': {'stop_grace_period': '5h34m56s'},
'i': {'stop_grace_period': '1h1m1s1ms1us'},
}
builder = compose1.ComposeV1Builder('foo', config, runner)
result = {
'a': '--stop-timeout=123',
'b': '--stop-timeout=123.5',
'c': '--stop-timeout=123.3',
'd': '--stop-timeout=2.5',
'e': '--stop-timeout=10.0',
'f': '--stop-timeout=90.0',
'g': '--stop-timeout=9120.0',
'h': '--stop-timeout=20096.0',
'i': '--stop-timeout=3661.001001',
}
for container, arg in result.items():
cmd = []
builder.container_run_args(cmd, container)
self.assertIn(arg, cmd)
@mock.patch('paunch.runner.DockerRunner', autospec=True)
@mock.patch("psutil.Process.cpu_affinity", return_value=[0, 1, 2, 3])
def test_container_run_args_lists(self, mock_cpu, runner):
config = {
'one': {
'image': 'centos:7',
'detach': False,
'command': 'ls -l /foo',
'remove': True,
'tty': True,
'interactive': True,
'environment': ['FOO=BAR', 'BAR=BAZ'],
'env_file': ['/tmp/foo.env', '/tmp/bar.env'],
'ulimit': ['nofile=1024', 'nproc=1024'],
'volumes': ['/foo:/foo:rw', '/bar:/bar:ro'],
'volumes_from': ['two', 'three'],
'group_add': ['docker', 'zuul'],
'cap_add': ['SYS_ADMIN', 'SETUID'],
'cap_drop': ['NET_RAW']
}
}
builder = compose1.ComposeV1Builder('foo', config, runner)
cmd = ['docker', 'run', '--name', 'one']
builder.container_run_args(cmd, 'one')
self.assertEqual(
['docker', 'run', '--name', 'one',
'--env-file=/tmp/foo.env', '--env-file=/tmp/bar.env',
'--env=FOO=BAR', '--env=BAR=BAZ',
'--rm', '--interactive', '--tty',
'--ulimit=nofile=1024', '--ulimit=nproc=1024',
'--group-add=docker', '--group-add=zuul',
'--volume=/foo:/foo:rw', '--volume=/bar:/bar:ro',
'--volumes-from=two', '--volumes-from=three',
'--cpuset-cpus=0,1,2,3',
'--cap-add=SYS_ADMIN', '--cap-add=SETUID', '--cap-drop=NET_RAW',
'centos:7', 'ls', '-l', '/foo'],
cmd
)
@mock.patch('paunch.runner.DockerRunner', autospec=True)
def test_container_run_args_lists_with_cpu_and_dict_env(self, runner):
config = {
'one': {
'image': 'centos:7',
'detach': False,
'command': 'ls -l /foo',
'remove': True,
'tty': True,
'interactive': True,
'environment': {'BAR': 'BAZ', 'FOO': 'BAR', 'SINGLE': ''},
'env_file': ['/tmp/foo.env', '/tmp/bar.env'],
'ulimit': ['nofile=1024', 'nproc=1024'],
'volumes': ['/foo:/foo:rw', '/bar:/bar:ro'],
'volumes_from': ['two', 'three'],
'group_add': ['docker', 'zuul'],
'cap_add': ['SYS_ADMIN', 'SETUID'],
'cap_drop': ['NET_RAW'],
'cpuset_cpus': '0-2',
}
}
builder = compose1.ComposeV1Builder('foo', config, runner)
cmd = ['docker', 'run', '--name', 'one']
builder.container_run_args(cmd, 'one')
self.assertEqual(
['docker', 'run', '--name', 'one',
'--env-file=/tmp/foo.env', '--env-file=/tmp/bar.env',
'--env=BAR=BAZ', '--env=FOO=BAR', '--env=SINGLE',
'--rm', '--interactive', '--tty',
'--ulimit=nofile=1024', '--ulimit=nproc=1024',
'--group-add=docker', '--group-add=zuul',
'--volume=/foo:/foo:rw', '--volume=/bar:/bar:ro',
'--volumes-from=two', '--volumes-from=three',
'--cpuset-cpus=0-2',
'--cap-add=SYS_ADMIN', '--cap-add=SETUID', '--cap-drop=NET_RAW',
'centos:7', 'ls', '-l', '/foo'],
cmd
)
@mock.patch('paunch.runner.DockerRunner', autospec=True)
def test_cont_exec_args(self, runner):
r = runner.return_value
r.discover_container_name.return_value = 'one-12345678'
config = {
'one': {
'command': 'ls -l /foo',
'privileged': True,
'environment': {'FOO': 'BAR'},
'user': 'bar'
}
}
self.builder = compose1.ComposeV1Builder(
'foo', config, runner.return_value)
cmd = ['docker', 'exec']
self.builder.cont_exec_args(cmd, 'one', 'one-12345678')
self.assertEqual(
['docker', 'exec',
'--privileged=true', '--user=bar',
'--env=FOO=BAR',
'one-12345678', '-l', '/foo'],
cmd
)
def test_command_argument(self):
b = compose1.ComposeV1Builder
self.assertEqual([], b.command_argument(None))
self.assertEqual([], b.command_argument(''))
self.assertEqual([], b.command_argument([]))
self.assertEqual(
['ls', '-l', '/foo-bar'],
b.command_argument(['ls', '-l', '/foo-bar'])
)
self.assertEqual(
['ls', '-l', '/foo-bar'],
b.command_argument('ls -l /foo-bar')
)
self.assertEqual(
['ls', '-l', '/foo bar'],
b.command_argument(['ls', '-l', '/foo bar'])
)
# don't expect quoted spaces to do the right thing
self.assertEqual(
['ls', '-l', '"/foo', 'bar"'],
b.command_argument('ls -l "/foo bar"')
)
class TestVolumeChecks(base.TestCase):
@mock.patch('paunch.runner.PodmanRunner', autospec=True)
def test_validate_volumes(self, runner):
runner.validate_volume_source.return_value = True
builder = basebuilder.BaseBuilder('test', {}, runner, {})
volumes = ['', '/var:/var', 'test:/bar']
self.assertTrue(builder.validate_volumes(volumes))
    @mock.patch('paunch.runner.PodmanRunner', autospec=True)
    def test_validate_volumes_empty(self, runner):
        builder = basebuilder.BaseBuilder('test', {}, runner, {})
volumes = []
self.assertTrue(builder.validate_volumes(volumes))
@mock.patch('paunch.runner.PodmanRunner', autospec=True)
def test_validate_volumes_fail(self, runner):
runner.validate_volume_source.return_value = False
builder = basebuilder.BaseBuilder('test', {}, runner, {})
volumes = ['/var:/var']
self.assertFalse(builder.validate_volumes(volumes))

View File

@ -1,127 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from paunch.builder import compose1
from paunch.tests import test_builder_base as tbb
class TestComposeV1Builder(tbb.TestBaseBuilder):
@mock.patch('paunch.runner.DockerRunner', autospec=True)
@mock.patch("psutil.Process.cpu_affinity", return_value=[0, 1, 2, 3])
def test_cont_run_args(self, mock_cpu, runner):
config = {
'one': {
'image': 'centos:7',
'privileged': True,
'user': 'bar',
'net': 'host',
'ipc': 'host',
'pid': 'container:bar',
'uts': 'host',
'restart': 'always',
'healthcheck': {
'test': '/bin/true',
'interval': '30s',
'timeout': '10s',
'retries': 3
},
'env_file': '/tmp/foo.env',
'log_tag': '{{.ImageName}}/{{.Name}}/{{.ID}}',
'cpu_shares': 600,
'cpuset_cpus': 'all',
'mem_limit': '1G',
'memswap_limit': '1G',
'mem_swappiness': '60',
'security_opt': 'label:disable',
'cap_add': ['SYS_ADMIN', 'SETUID'],
'cap_drop': ['NET_RAW'],
'hostname': 'foohostname',
'extra_hosts': [
'foohost:127.0.0.1',
'barhost:127.0.0.2'
]
}
}
runner.validate_volume_source.return_value = True
builder = compose1.ComposeV1Builder('foo', config, runner)
cmd = ['docker', 'run', '--name', 'one']
builder.container_run_args(cmd, 'one')
self.assertEqual(
['docker', 'run', '--name', 'one',
'--detach=true', '--env-file=/tmp/foo.env',
'--net=host', '--ipc=host', '--pid=container:bar',
'--uts=host', '--health-cmd', '/bin/true',
'--health-interval=30s',
'--health-timeout=10s', '--health-retries=3',
'--privileged=true', '--restart=always', '--user=bar',
'--log-opt=tag={{.ImageName}}/{{.Name}}/{{.ID}}',
'--cpu-shares=600',
'--memory=1G',
'--memory-swap=1G',
'--memory-swappiness=60',
'--security-opt=label:disable',
'--hostname=foohostname',
'--add-host=foohost:127.0.0.1',
'--add-host=barhost:127.0.0.2',
'--cap-add=SYS_ADMIN', '--cap-add=SETUID', '--cap-drop=NET_RAW',
'centos:7'],
cmd
)
@mock.patch('paunch.runner.DockerRunner', autospec=True)
@mock.patch("psutil.Process.cpu_affinity", return_value=[0, 1, 2, 3])
def test_cont_run_args_validation_true(self, mock_cpu, runner):
config = {
'one': {
'image': 'foo',
'volumes': ['/foo:/foo:rw', '/bar:/bar:ro'],
}
}
runner.validate_volume_source.return_value = True
builder = compose1.ComposeV1Builder('foo', config, runner)
cmd = ['docker']
self.assertTrue(builder.container_run_args(cmd, 'one'))
self.assertEqual(
['docker', '--detach=true',
'--volume=/foo:/foo:rw', '--volume=/bar:/bar:ro',
'--cpuset-cpus=0,1,2,3',
'foo'],
cmd
)
@mock.patch('paunch.runner.DockerRunner', autospec=True)
@mock.patch("psutil.Process.cpu_affinity", return_value=[0, 1, 2, 3])
def test_cont_run_args_validation_false(self, mock_cpu, runner):
config = {
'one': {
'image': 'foo',
'volumes': ['/foo:/foo:rw', '/bar:/bar:ro'],
}
}
runner.validate_volume_source.return_value = False
builder = compose1.ComposeV1Builder('foo', config, runner)
cmd = ['docker']
self.assertFalse(builder.container_run_args(cmd, 'one'))
self.assertEqual(
['docker', '--detach=true',
'--volume=/foo:/foo:rw', '--volume=/bar:/bar:ro',
'--cpuset-cpus=0,1,2,3', 'foo'],
cmd
)

View File

@ -1,118 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from paunch.builder import podman
from paunch.tests import test_builder_base as base
class TestPodmanBuilder(base.TestBaseBuilder):
@mock.patch("psutil.Process.cpu_affinity", return_value=[0, 1, 2, 3])
def test_cont_run_args(self, mock_cpu):
config = {
'one': {
'image': 'centos:7',
'privileged': True,
'user': 'bar',
'net': 'host',
'ipc': 'host',
'pid': 'container:bar',
'uts': 'host',
'restart': 'always',
'env_file': '/tmp/foo.env',
'log_tag': '{{.ImageName}}/{{.Name}}/{{.ID}}',
'cpu_shares': 600,
'mem_limit': '1G',
'memswap_limit': '1G',
'mem_swappiness': '60',
'security_opt': 'label:disable',
'cap_add': ['SYS_ADMIN', 'SETUID'],
'cap_drop': ['NET_RAW'],
'hostname': 'foohostname',
'extra_hosts': [
'foohost:127.0.0.1',
'barhost:127.0.0.2'
]
}
}
builder = podman.PodmanBuilder('foo', config, None)
cmd = ['podman', 'run', '--name', 'one']
builder.container_run_args(cmd, 'one')
self.assertEqual(
['podman', 'run', '--name', 'one',
'--conmon-pidfile=/var/run/one.pid',
'--detach=true', '--env-file=/tmp/foo.env',
'--net=host', '--ipc=host', '--pid=container:bar',
'--uts=host', '--privileged=true', '--user=bar',
'--log-opt=tag={{.ImageName}}/{{.Name}}/{{.ID}}',
'--cpu-shares=600',
'--memory=1G',
'--memory-swap=1G',
'--memory-swappiness=60',
'--security-opt=label:disable',
'--hostname=foohostname',
'--add-host=foohost:127.0.0.1',
'--add-host=barhost:127.0.0.2',
'--cpuset-cpus=0,1,2,3',
'--cap-add=SYS_ADMIN', '--cap-add=SETUID', '--cap-drop=NET_RAW',
'centos:7'],
cmd
)
@mock.patch("psutil.Process.cpu_affinity",
return_value=[0, 1, 2, 3, 4, 5, 6, 7])
@mock.patch('paunch.runner.PodmanRunner', autospec=True)
def test_cont_run_args_validation_true(self, runner, mock_cpu):
config = {
'one': {
'image': 'foo',
'volumes': ['/foo:/foo:rw', '/bar:/bar:ro'],
}
}
runner.validate_volume_source.return_value = True
builder = podman.PodmanBuilder('foo', config, runner)
cmd = ['podman']
self.assertTrue(builder.container_run_args(cmd, 'one'))
self.assertEqual(
['podman', '--conmon-pidfile=/var/run/one.pid', '--detach=true',
'--volume=/foo:/foo:rw', '--volume=/bar:/bar:ro',
'--cpuset-cpus=0,1,2,3,4,5,6,7', 'foo'],
cmd
)
@mock.patch("psutil.Process.cpu_affinity",
return_value=[0, 1, 2, 3, 4, 5, 6, 7])
@mock.patch('paunch.runner.PodmanRunner', autospec=True)
def test_cont_run_args_validation_false(self, runner, mock_cpu):
config = {
'one': {
'image': 'foo',
'volumes': ['/foo:/foo:rw', '/bar:/bar:ro'],
}
}
runner.validate_volume_source.return_value = False
builder = podman.PodmanBuilder('foo', config, runner)
cmd = ['podman']
self.assertFalse(builder.container_run_args(cmd, 'one'))
self.assertEqual(
['podman', '--conmon-pidfile=/var/run/one.pid', '--detach=true',
'--volume=/foo:/foo:rw', '--volume=/bar:/bar:ro',
'--cpuset-cpus=0,1,2,3,4,5,6,7', 'foo'],
cmd
)

View File

@ -1,225 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
import paunch
from paunch.tests import base
class TestPaunchDockerRuntime(base.TestCase):
@mock.patch('paunch.builder.compose1.ComposeV1Builder', autospec=True)
@mock.patch('paunch.runner.DockerRunner', autospec=True)
def test_apply(self, runner, builder):
paunch.apply('foo', {'bar': 'baz'}, 'tester', cont_cmd='docker')
runner.assert_called_once_with('tester', cont_cmd='docker',
log=mock.ANY)
builder.assert_called_once_with(
config_id='foo',
config={'bar': 'baz'},
runner=runner.return_value,
labels=None,
log=mock.ANY,
cleanup=True
)
builder.return_value.apply.assert_called_once_with()
@mock.patch('paunch.builder.compose1.ComposeV1Builder', autospec=True)
@mock.patch('paunch.runner.DockerRunner', autospec=True)
def test_apply_labels(self, runner, builder):
paunch.apply(
config_id='foo',
config={'bar': 'baz'},
cont_cmd='docker',
managed_by='tester',
labels={'bink': 'boop'})
runner.assert_called_once_with('tester', cont_cmd='docker',
log=mock.ANY)
builder.assert_called_once_with(
config_id='foo',
config={'bar': 'baz'},
runner=runner.return_value,
labels={'bink': 'boop'},
log=mock.ANY,
cleanup=True
)
builder.return_value.apply.assert_called_once_with()
@mock.patch('paunch.runner.DockerRunner', autospec=True)
def test_cleanup(self, runner):
paunch.cleanup(['foo', 'bar'], 'tester', cont_cmd='docker')
runner.assert_called_once_with('tester', cont_cmd='docker',
log=mock.ANY)
runner.return_value.delete_missing_configs.assert_called_once_with(
['foo', 'bar'])
runner.return_value.rename_containers.assert_called_once_with()
@mock.patch('paunch.runner.DockerRunner', autospec=True)
def test_list(self, runner):
paunch.list('tester', cont_cmd='docker')
runner.assert_called_once_with('tester', cont_cmd='docker',
log=mock.ANY)
runner.return_value.list_configs.assert_called_once_with()
@mock.patch('paunch.runner.DockerRunner', autospec=True)
def test_delete(self, runner):
paunch.delete(['foo', 'bar'], 'tester', cont_cmd='docker')
runner.assert_called_once_with('tester', cont_cmd='docker',
log=mock.ANY)
runner.return_value.remove_containers.assert_has_calls([
mock.call('foo'), mock.call('bar')
])
@mock.patch('paunch.builder.compose1.ComposeV1Builder', autospec=True)
@mock.patch('paunch.runner.DockerRunner')
def test_debug(self, runner, builder):
paunch.debug('foo', 'testcont', 'run', {'bar': 'baz'}, 'tester',
cont_cmd='docker', log_level=42, log_file='/dev/null')
builder.assert_called_once_with(
config_id='foo',
config={'bar': 'baz'},
runner=runner.return_value,
labels=None,
log=mock.ANY
)
runner.assert_called_once_with('tester', cont_cmd='docker',
log=mock.ANY)
class TestPaunchPodmanRuntime(base.TestCase):
@mock.patch('paunch.builder.podman.PodmanBuilder', autospec=True)
@mock.patch('paunch.runner.PodmanRunner', autospec=True)
def test_apply(self, runner, builder):
paunch.apply(
config_id='foo',
config={'bar': 'baz'},
managed_by='tester',
labels=None,
cont_cmd='podman',
cont_log_path=None,
healthcheck_disabled=False)
runner.assert_called_once_with('tester', cont_cmd='podman',
log=mock.ANY)
builder.assert_called_once_with(
config_id='foo',
config={'bar': 'baz'},
runner=runner.return_value,
labels=None,
log=mock.ANY,
cont_log_path=None,
healthcheck_disabled=False,
cleanup=True
)
builder.return_value.apply.assert_called_once_with()
@mock.patch('paunch.builder.podman.PodmanBuilder', autospec=True)
@mock.patch('paunch.runner.PodmanRunner', autospec=True)
def test_apply_container_log(self, runner, builder):
paunch.apply(
config_id='foo',
config={'bar': 'baz'},
managed_by='tester',
labels=None,
cont_cmd='podman',
cont_log_path='/var/log',
healthcheck_disabled=False)
runner.assert_called_once_with('tester', cont_cmd='podman',
log=mock.ANY)
builder.assert_called_once_with(
config_id='foo',
config={'bar': 'baz'},
runner=runner.return_value,
labels=None,
log=mock.ANY,
cont_log_path='/var/log',
healthcheck_disabled=False,
cleanup=True
)
builder.return_value.apply.assert_called_once_with()
@mock.patch('paunch.builder.podman.PodmanBuilder', autospec=True)
@mock.patch('paunch.runner.PodmanRunner', autospec=True)
def test_apply_labels(self, runner, builder):
paunch.apply(
config_id='foo',
config={'bar': 'baz'},
managed_by='tester',
labels={'bink': 'boop'},
cont_cmd='podman',
cont_log_path=None,
healthcheck_disabled=False)
runner.assert_called_once_with('tester', cont_cmd='podman',
log=mock.ANY)
builder.assert_called_once_with(
config_id='foo',
config={'bar': 'baz'},
runner=runner.return_value,
labels={'bink': 'boop'},
log=mock.ANY,
cont_log_path=None,
healthcheck_disabled=False,
cleanup=True
)
builder.return_value.apply.assert_called_once_with()
@mock.patch('paunch.runner.PodmanRunner', autospec=True)
def test_cleanup(self, runner):
paunch.cleanup(
['foo', 'bar'],
managed_by='tester',
cont_cmd='podman')
runner.assert_called_once_with('tester', cont_cmd='podman',
log=mock.ANY)
runner.return_value.delete_missing_configs.assert_called_once_with(
['foo', 'bar'])
runner.return_value.rename_containers.assert_called_once_with()
@mock.patch('paunch.runner.PodmanRunner', autospec=True)
def test_list(self, runner):
paunch.list('tester', cont_cmd='podman')
runner.assert_called_once_with('tester', cont_cmd='podman',
log=mock.ANY)
runner.return_value.list_configs.assert_called_once_with()
@mock.patch('paunch.runner.PodmanRunner', autospec=True)
def test_delete(self, runner):
paunch.delete(
['foo', 'bar'],
managed_by='tester',
cont_cmd='podman')
runner.assert_called_once_with('tester', cont_cmd='podman',
log=mock.ANY)
runner.return_value.remove_containers.assert_has_calls([
mock.call('foo'), mock.call('bar')
])
@mock.patch('paunch.builder.podman.PodmanBuilder', autospec=True)
@mock.patch('paunch.runner.PodmanRunner')
def test_debug(self, runner, builder):
paunch.debug('foo', 'testcont', 'run', {'bar': 'baz'}, 'tester',
labels=None, cont_cmd='podman',
log_level=42, log_file='/dev/null')
builder.assert_called_once_with(
config_id='foo',
config={'bar': 'baz'},
runner=runner.return_value,
labels=None,
log=mock.ANY
)
runner.assert_called_once_with('tester', cont_cmd='podman',
log=mock.ANY)

View File

@ -1,460 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from testtools import matchers
from paunch import runner
from paunch.tests import base
class TestBaseRunner(base.TestCase):
def setUp(self):
super(TestBaseRunner, self).setUp()
self.runner = runner.DockerRunner('tester')
self.podman_runner = runner.PodmanRunner('tester')
def mock_execute(self, popen, stdout, stderr, returncode):
subproc = mock.Mock()
subproc.returncode = returncode
subproc.communicate.return_value = (stdout.encode('utf-8'),
stderr.encode('utf-8'))
popen.return_value = subproc
def assert_execute(self, popen, cmd):
popen.assert_called_with(cmd, stderr=-1, stdout=-1)
@mock.patch('subprocess.Popen')
def test_execute(self, popen):
self.mock_execute(popen, 'The stdout', 'The stderr', 0)
self.assertEqual(
('The stdout', 'The stderr', 0),
self.runner.execute(['ls', '-l'])
)
self.assert_execute(popen, ['ls', '-l'])
@mock.patch('subprocess.Popen')
def test_current_config_ids_docker(self, popen):
self.mock_execute(popen, 'one\ntwo\nthree', '', 0)
self.assertEqual(
set(('one', 'two', 'three')),
self.runner.current_config_ids()
)
self.assert_execute(
popen, ['docker', 'ps', '-a', '--filter',
'label=managed_by=tester',
'--format', '{{.Label "config_id"}}']
)
@mock.patch('subprocess.Popen')
def test_current_config_ids_podman(self, popen):
self.mock_execute(popen, 'one\ntwo\nthree', '', 0)
self.assertEqual(
set(('one', 'two', 'three')),
self.podman_runner.current_config_ids()
)
self.assert_execute(
popen, ['podman', 'ps', '-a', '--filter',
'label=managed_by=tester',
'--format', '{{.Labels.config_id}}']
)
@mock.patch('subprocess.Popen')
def test_containers_in_config(self, popen):
self.mock_execute(popen, 'one\ntwo\nthree', '', 0)
self.runner.remove_container = mock.Mock()
result = self.runner.containers_in_config('foo')
self.assert_execute(
popen, ['docker', 'ps', '-q', '-a',
'--filter', 'label=managed_by=tester',
'--filter', 'label=config_id=foo']
)
self.assertEqual(['one', 'two', 'three'], result)
@mock.patch('subprocess.Popen')
def test_inspect(self, popen):
self.mock_execute(popen, '[{"foo": "bar"}]', '', 0)
self.assertEqual(
{"foo": "bar"},
self.runner.inspect('one')
)
self.assert_execute(
popen, ['docker', 'inspect', '--type', 'container', 'one']
)
@mock.patch('subprocess.Popen')
def test_inspect_format(self, popen):
self.mock_execute(popen, 'bar', '', 0)
self.assertEqual(
"bar",
self.runner.inspect('one', output_format='{{foo}}')
)
self.assert_execute(
popen, ['docker', 'inspect', '--type', 'container',
'--format', '{{foo}}', 'one']
)
def test_unique_container_name(self):
self.runner.inspect = mock.Mock()
self.runner.inspect.return_value = None
self.assertEqual('foo', self.runner.unique_container_name('foo'))
self.runner.inspect.side_effect = ['exists', 'exists', None]
name = self.runner.unique_container_name('foo')
name_pattern = '^foo-[a-z0-9]{8}$'
self.assertThat(name, matchers.MatchesRegex(name_pattern))
@mock.patch('subprocess.Popen')
def test_discover_container_name(self, popen):
self.mock_execute(popen, 'one-12345678', '', 0)
self.assertEqual(
'one-12345678',
self.runner.discover_container_name('one', 'foo')
)
self.assert_execute(
popen, ['docker', 'ps', '-a',
'--filter', 'label=container_name=one',
'--filter', 'label=config_id=foo',
'--format', '{{.Names}}']
)
@mock.patch('subprocess.Popen')
def test_discover_container_name_empty(self, popen):
self.mock_execute(popen, '', '', 0)
self.assertEqual(
None,
self.runner.discover_container_name('one', 'foo')
)
@mock.patch('subprocess.Popen')
def test_discover_container_name_error(self, popen):
self.mock_execute(popen, '', 'ouch', 1)
self.assertEqual(
None,
self.runner.discover_container_name('one', 'foo')
)
@mock.patch('subprocess.Popen')
def test_delete_missing_configs_docker(self, popen):
self.mock_execute(popen, 'one\ntwo\nthree\nfour', '', 0)
self.runner.remove_containers = mock.Mock()
self.runner.delete_missing_configs(['two', 'three'])
self.assert_execute(
popen, ['docker', 'ps', '-a', '--filter',
'label=managed_by=tester',
'--format', '{{.Label "config_id"}}']
)
# containers one and four will be deleted
self.runner.remove_containers.assert_has_calls([
mock.call('one'), mock.call('four')
], any_order=True)
@mock.patch('subprocess.Popen')
def test_delete_missing_configs_podman(self, popen):
self.mock_execute(popen, 'one\ntwo\nthree\nfour', '', 0)
self.podman_runner.remove_containers = mock.Mock()
self.podman_runner.delete_missing_configs(['two', 'three'])
self.assert_execute(
popen, ['podman', 'ps', '-a', '--filter',
'label=managed_by=tester',
'--format', '{{.Labels.config_id}}']
)
# containers one and four will be deleted
self.podman_runner.remove_containers.assert_has_calls([
mock.call('one'), mock.call('four')
], any_order=True)
@mock.patch('subprocess.Popen')
def test_list_configs_docker(self, popen):
self.mock_execute(popen, 'one\ntwo\nthree', '', 0)
self.runner.inspect = mock.Mock(
return_value={'e': 'f'})
self.runner.containers_in_config = mock.Mock(
return_value=['a', 'b', 'c'])
result = self.runner.list_configs()
self.assert_execute(
popen, ['docker', 'ps', '-a', '--filter',
'label=managed_by=tester',
'--format', '{{.Label "config_id"}}']
)
self.runner.containers_in_config.assert_has_calls([
mock.call('one'), mock.call('two'), mock.call('three')
], any_order=True)
self.runner.inspect.assert_has_calls([
mock.call('a'), mock.call('b'), mock.call('c'),
mock.call('a'), mock.call('b'), mock.call('c'),
mock.call('a'), mock.call('b'), mock.call('c')
])
self.assertEqual({
'one': [{'e': 'f'}, {'e': 'f'}, {'e': 'f'}],
'two': [{'e': 'f'}, {'e': 'f'}, {'e': 'f'}],
'three': [{'e': 'f'}, {'e': 'f'}, {'e': 'f'}]
}, result)
@mock.patch('subprocess.Popen')
def test_list_configs_podman(self, popen):
self.mock_execute(popen, 'one\ntwo\nthree', '', 0)
self.podman_runner.inspect = mock.Mock(
return_value={'e': 'f'})
self.podman_runner.containers_in_config = mock.Mock(
return_value=['a', 'b', 'c'])
result = self.podman_runner.list_configs()
self.assert_execute(
popen, ['podman', 'ps', '-a', '--filter',
'label=managed_by=tester',
'--format', '{{.Labels.config_id}}']
)
self.podman_runner.containers_in_config.assert_has_calls([
mock.call('one'), mock.call('two'), mock.call('three')
], any_order=True)
self.podman_runner.inspect.assert_has_calls([
mock.call('a'), mock.call('b'), mock.call('c'),
mock.call('a'), mock.call('b'), mock.call('c'),
mock.call('a'), mock.call('b'), mock.call('c')
])
self.assertEqual({
'one': [{'e': 'f'}, {'e': 'f'}, {'e': 'f'}],
'two': [{'e': 'f'}, {'e': 'f'}, {'e': 'f'}],
'three': [{'e': 'f'}, {'e': 'f'}, {'e': 'f'}]
}, result)
@mock.patch('subprocess.Popen')
def test_remove_containers(self, popen):
self.mock_execute(popen, 'one\ntwo\nthree', '', 0)
self.runner.remove_container = mock.Mock()
self.runner.remove_containers('foo')
self.assert_execute(
popen, ['docker', 'ps', '-q', '-a',
'--filter', 'label=managed_by=tester',
'--filter', 'label=config_id=foo']
)
self.runner.remove_container.assert_has_calls([
mock.call('one'), mock.call('two'), mock.call('three')
])
@mock.patch('subprocess.Popen')
def test_remove_container(self, popen):
self.mock_execute(popen, '', '', 0)
self.runner.remove_container('one')
self.assert_execute(
popen, ['docker', 'rm', 'one']
)
@mock.patch('subprocess.Popen')
def test_stop_container(self, popen):
self.mock_execute(popen, '', '', 0)
self.runner.stop_container('one')
self.assert_execute(
popen, ['docker', 'stop', 'one']
)
@mock.patch('subprocess.Popen')
def test_stop_container_override(self, popen):
self.mock_execute(popen, '', '', 0)
self.runner.stop_container('one', 'podman')
self.assert_execute(
popen, ['podman', 'stop', 'one']
)
@mock.patch('subprocess.Popen')
def test_container_names_docker(self, popen):
ps_result = '''one one
two-12345678 two
two two
three-12345678 three
four-12345678 four
'''
self.mock_execute(popen, ps_result, '', 0)
names = list(self.runner.container_names())
self.assert_execute(
popen, ['docker', 'ps', '-a',
'--filter', 'label=managed_by=tester',
'--format', '{{.Names}} {{.Label "container_name"}}']
)
self.assertEqual([
['one', 'one'],
['two-12345678', 'two'],
['two', 'two'],
['three-12345678', 'three'],
['four-12345678', 'four']
], names)
@mock.patch('subprocess.Popen')
def test_container_names_podman(self, popen):
ps_result = '''one one
two-12345678 two
two two
three-12345678 three
four-12345678 four
'''
self.mock_execute(popen, ps_result, '', 0)
names = list(self.podman_runner.container_names())
self.assert_execute(
popen, ['podman', 'ps', '-a',
'--filter', 'label=managed_by=tester',
'--format', '{{.Names}} {{.Labels.container_name}}']
)
self.assertEqual([
['one', 'one'],
['two-12345678', 'two'],
['two', 'two'],
['three-12345678', 'three'],
['four-12345678', 'four']
], names)
@mock.patch('subprocess.Popen')
def test_container_names_by_conf_id_docker(self, popen):
ps_result = '''one one
two-12345678 two
'''
self.mock_execute(popen, ps_result, '', 0)
names = list(self.runner.container_names('abc'))
self.assert_execute(
popen, ['docker', 'ps', '-a',
'--filter', 'label=managed_by=tester',
'--filter', 'label=config_id=abc',
'--format', '{{.Names}} {{.Label "container_name"}}']
)
self.assertEqual([
['one', 'one'],
['two-12345678', 'two']
], names)
@mock.patch('subprocess.Popen')
def test_container_names_by_conf_id_podman(self, popen):
ps_result = '''one one
two-12345678 two
'''
self.mock_execute(popen, ps_result, '', 0)
names = list(self.podman_runner.container_names('abc'))
self.assert_execute(
popen, ['podman', 'ps', '-a',
'--filter', 'label=managed_by=tester',
'--filter', 'label=config_id=abc',
'--format', '{{.Names}} {{.Labels.container_name}}']
)
self.assertEqual([
['one', 'one'],
['two-12345678', 'two']
], names)
@mock.patch('os.path.exists', return_value=True)
def test_validate_volume_source_file(self, exists_mock):
self.assertTrue(self.podman_runner.validate_volume_source('/tmp'))
@mock.patch('os.path.exists', return_value=False)
def test_validate_volume_source_file_fail(self, exists_mock):
self.assertFalse(self.podman_runner.validate_volume_source('/nope'))
@mock.patch('os.path.exists', return_value=False)
@mock.patch('subprocess.Popen')
def test_validate_volume_source_container(self, popen, exists_mock):
ps_result = '''foo
foobar
'''
self.mock_execute(popen, ps_result, '', 0)
self.assertTrue(self.podman_runner.validate_volume_source('foo'))
@mock.patch('os.path.exists', return_value=False)
@mock.patch('subprocess.Popen')
def test_validate_volume_source_container_fail(self, popen, exists_mock):
ps_result = ''
self.mock_execute(popen, ps_result, '', 0)
self.assertFalse(self.podman_runner.validate_volume_source('foo'))
class TestDockerRunner(TestBaseRunner):
@mock.patch('subprocess.Popen')
def test_rename_containers(self, popen):
ps_result = '''one one
two-12345678 two
two two
three-12345678 three
four-12345678 four
'''
self.mock_execute(popen, ps_result, '', 0)
self.runner.rename_container = mock.Mock()
self.runner.rename_containers()
self.assert_execute(
popen, ['docker', 'ps', '-a',
'--filter', 'label=managed_by=tester',
'--format', '{{.Names}} {{.Label "container_name"}}']
)
        # only containers three-12345678 and four-12345678 will be renamed
self.runner.rename_container.assert_has_calls([
mock.call('three-12345678', 'three'),
mock.call('four-12345678', 'four')
], any_order=True)
class TestPodmanRunner(TestBaseRunner):
@mock.patch('subprocess.Popen')
def test_image_exist(self, popen):
self.mock_execute(popen, '', '', 0)
self.runner = runner.PodmanRunner('tester')
self.runner.image_exist('one')
self.assert_execute(
popen, ['podman', 'image', 'exists', 'one']
)
@mock.patch('subprocess.Popen')
def test_container_exist(self, popen):
self.mock_execute(popen, '', '', 0)
self.runner = runner.PodmanRunner('tester')
self.runner.container_exist('one')
self.assert_execute(
popen, ['podman', 'container', 'exists', 'one']
)

View File

@ -1,112 +0,0 @@
# Copyright 2019 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from paunch.tests import base
from paunch.utils import common
class TestUtilsCommonCpu(base.TestCase):
@mock.patch("psutil.Process.cpu_affinity", return_value=[0, 1, 2, 3])
def test_get_cpus_allowed_list(self, mock_cpu):
expected_list = '0,1,2,3'
actual_list = common.get_cpus_allowed_list()
self.assertEqual(actual_list, expected_list)
class TestUtilsCommonConfig(base.TestCase):
def setUp(self):
super(TestUtilsCommonConfig, self).setUp()
self.config_content = "{'image': 'docker.io/haproxy'}"
self.config_override = {'haproxy': {'image': 'quay.io/haproxy'}}
self.open_func = 'paunch.utils.common.open'
self.expected_config = {'haproxy': {'image': 'docker.io/haproxy'}}
self.expected_config_over = {'haproxy': {'image': 'quay.io/haproxy'}}
self.container = 'haproxy'
self.old_config_file = '/var/lib/tripleo-config/' + \
'hashed-container-startup-config-step_1.json'
self.old_config_content = "{'haproxy': {'image': 'docker.io/haproxy'}}"
@mock.patch('os.path.isdir')
def test_load_config_dir_with_name(self, mock_isdir):
mock_isdir.return_value = True
mock_open = mock.mock_open(read_data=self.config_content)
with mock.patch(self.open_func, mock_open):
self.assertEqual(
self.expected_config,
common.load_config('/config_dir', self.container))
@mock.patch('os.path.isdir')
@mock.patch('glob.glob')
def test_load_config_dir_without_name(self, mock_glob, mock_isdir):
mock_isdir.return_value = True
mock_glob.return_value = ['hashed-haproxy.json']
mock_open = mock.mock_open(read_data=self.config_content)
with mock.patch(self.open_func, mock_open):
self.assertEqual(
self.expected_config,
common.load_config('/config_dir'))
@mock.patch('os.path.isdir')
def test_load_config_file_with_name(self, mock_isdir):
mock_isdir.return_value = False
mock_open = mock.mock_open(read_data=self.config_content)
with mock.patch(self.open_func, mock_open):
self.assertEqual(
self.expected_config,
common.load_config('/config_dir/haproxy.json', self.container))
@mock.patch('os.path.isdir')
def test_load_config_file_without_name(self, mock_isdir):
mock_isdir.return_value = False
mock_open = mock.mock_open(read_data=self.config_content)
with mock.patch(self.open_func, mock_open):
self.assertEqual(
self.expected_config,
common.load_config('/config_dir/haproxy.json'))
@mock.patch('os.path.isdir')
def test_load_config_file_backward_compat_with_name(self, mock_isdir):
mock_isdir.return_value = False
mock_open = mock.mock_open(read_data=self.old_config_content)
with mock.patch(self.open_func, mock_open):
self.assertEqual(
self.expected_config,
common.load_config(self.old_config_file, self.container))
@mock.patch('os.path.isdir')
@mock.patch('glob.glob')
def test_load_config_file_backward_compat_without_name(self, mock_glob,
mock_isdir):
mock_isdir.return_value = False
mock_glob.return_value = ['hashed-haproxy.json']
mock_open = mock.mock_open(read_data=self.old_config_content)
with mock.patch(self.open_func, mock_open):
self.assertEqual(
self.expected_config,
common.load_config(self.old_config_file))
@mock.patch('os.path.isdir')
def test_load_config_dir_with_name_and_override(self, mock_isdir):
mock_isdir.return_value = True
mock_open = mock.mock_open(read_data=self.config_content)
with mock.patch(self.open_func, mock_open):
self.assertEqual(
self.expected_config_over,
common.load_config('/config_dir', self.container,
self.config_override))


@ -1,87 +0,0 @@
# Copyright 2018 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from paunch.tests import base
from paunch.utils import systemctl
class TestUtilsSystemctl(base.TestCase):
def test_format_service_name(self):
expected = 'test.service'
self.assertEqual(expected, systemctl.format_name('test'))
self.assertEqual(expected, systemctl.format_name(expected))
@mock.patch('subprocess.check_call', autospec=True)
def test_stop(self, mock_subprocess_check_call):
test = 'test'
systemctl.stop(test)
mock_subprocess_check_call.assert_has_calls([
mock.call(['systemctl', 'stop', test]),
])
@mock.patch('subprocess.check_call', autospec=True)
def test_daemon_reload(self, mock_subprocess_check_call):
systemctl.daemon_reload()
mock_subprocess_check_call.assert_has_calls([
mock.call(['systemctl', 'daemon-reload']),
])
@mock.patch('subprocess.check_call', autospec=True)
def test_is_active(self, mock_subprocess_check_call):
systemctl.is_active('foo')
mock_subprocess_check_call.assert_has_calls([
mock.call(['systemctl', 'is-active', '-q', 'foo']),
])
@mock.patch('subprocess.check_call', autospec=True)
def test_enable(self, mock_subprocess_check_call):
test = 'test'
systemctl.enable(test, now=True)
mock_subprocess_check_call.assert_has_calls([
mock.call(['systemctl', 'enable', '--now', test]),
])
systemctl.enable(test)
mock_subprocess_check_call.assert_has_calls([
mock.call(['systemctl', 'enable', '--now', test]),
])
systemctl.enable(test, now=False)
mock_subprocess_check_call.assert_has_calls([
mock.call(['systemctl', 'enable', test]),
])
@mock.patch('subprocess.check_call', autospec=True)
def test_disable(self, mock_subprocess_check_call):
test = 'test'
systemctl.disable(test)
mock_subprocess_check_call.assert_has_calls([
mock.call(['systemctl', 'disable', test]),
])
@mock.patch('subprocess.check_call', autospec=True)
def test_add_requires(self, mock_subprocess_check_call):
test = 'test'
requires = "foo"
systemctl.add_requires(test, requires)
mock_subprocess_check_call.assert_has_calls([
mock.call(['systemctl', 'add-requires', test, requires]),
])
requires = ["foo", "bar"]
systemctl.add_requires(test, requires)
mock_subprocess_check_call.assert_has_calls([
mock.call(['systemctl', 'add-requires', test, "foo", "bar"]),
])


@ -1,166 +0,0 @@
# Copyright 2018 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
import os
import tempfile
from paunch.tests import base
from paunch.utils import systemd
class TestUtilsSystemd(base.TestCase):
@mock.patch('shutil.rmtree', autospec=True)
@mock.patch('os.path.exists', autospec=True)
@mock.patch('subprocess.check_call', autospec=True)
@mock.patch('os.chmod')
def test_service_create(self, mock_chmod, mock_subprocess_check_call,
mock_exists, mock_rmtree):
container = 'my_app'
service = 'tripleo_' + container
cconfig = {'depends_on': ['something'], 'restart': 'unless-stopped',
'stop_grace_period': '15'}
tempdir = tempfile.mkdtemp()
systemd.service_create(container, cconfig, tempdir)
sysd_unit_f = tempdir + service + '.service'
unit = open(sysd_unit_f, 'rt').read()
self.assertIn('Wants=something.service', unit)
self.assertIn('Restart=always', unit)
self.assertIn('ExecStop=/usr/bin/podman stop -t 15 my_app', unit)
self.assertIn('PIDFile=/var/run/my_app.pid', unit)
mock_chmod.assert_has_calls([mock.call(sysd_unit_f, 420)])
mock_subprocess_check_call.assert_has_calls([
mock.call(['systemctl', 'daemon-reload']),
mock.call(['systemctl', 'enable', '--now', service]),
])
mock_rmtree.assert_has_calls([
mock.call(sysd_unit_f + '.requires')
])
os.rmdir(tempdir)
@mock.patch('subprocess.check_call', autospec=True)
@mock.patch('os.chmod')
def test_svc_extended_create(self, mock_chmod, mock_subprocess_check_call):
container = 'my_app'
service = 'tripleo_' + container
cconfig = {'depends_on': ['something'], 'restart': 'unless-stopped',
'stop_grace_period': '15',
'systemd_exec_flags': {'RootDirectory': '/srv',
'LimitCPU': '60',
'RuntimeDirectory': 'my_app foo/bar'}
}
tempdir = tempfile.mkdtemp()
systemd.service_create(container, cconfig, tempdir)
sysd_unit_f = tempdir + service + '.service'
unit = open(sysd_unit_f, 'rt').read()
self.assertIn('RootDirectory=/srv', unit)
self.assertIn('LimitCPU=60', unit)
self.assertIn('RuntimeDirectory=my_app foo/bar', unit)
os.rmdir(tempdir)
@mock.patch('shutil.rmtree', autospec=True)
@mock.patch('os.remove', autospec=True)
@mock.patch('os.path.exists', autospec=True)
@mock.patch('os.path.isfile', autospec=True)
@mock.patch('subprocess.check_call', autospec=True)
def test_service_delete(self, mock_subprocess_check_call, mock_isfile,
mock_exists, mock_rm, mock_rmtree):
mock_isfile.return_value = True
container = 'my_app'
service = 'tripleo_' + container
tempdir = tempfile.mkdtemp()
service_requires_d = service + '.service.requires'
systemd.service_delete(container, tempdir)
mock_rm.assert_has_calls([
mock.call(tempdir + service + '.service'),
mock.call(tempdir + service + '_healthcheck.service'),
mock.call(tempdir + service + '_healthcheck.timer'),
])
mock_subprocess_check_call.assert_has_calls([
mock.call(['systemctl', 'stop', service + '.service']),
mock.call(['systemctl', 'disable', service + '.service']),
mock.call(['systemctl', 'stop', service + '_healthcheck.service']),
mock.call(['systemctl', 'disable', service +
'_healthcheck.service']),
mock.call(['systemctl', 'stop', service + '_healthcheck.timer']),
mock.call(['systemctl', 'disable', service +
'_healthcheck.timer']),
])
mock_rmtree.assert_has_calls([
mock.call(os.path.join(tempdir, service_requires_d)),
])
@mock.patch('os.chmod')
def test_healthcheck_create(self, mock_chmod):
container = 'my_app'
service = 'tripleo_' + container
tempdir = tempfile.mkdtemp()
healthcheck = service + '_healthcheck.service'
sysd_unit_f = tempdir + healthcheck
systemd.healthcheck_create(container, tempdir)
unit = open(sysd_unit_f, 'rt').read()
self.assertIn('Requisite=tripleo_my_app.service', unit)
self.assertIn('ExecStart=/usr/bin/podman exec --user root my_app '
'/openstack/healthcheck', unit)
mock_chmod.assert_has_calls([mock.call(sysd_unit_f, 420)])
@mock.patch('os.chmod')
def test_healthcheck_create_command(self, mock_chmod):
container = 'my_app'
service = 'tripleo_' + container
tempdir = tempfile.mkdtemp()
healthcheck = service + '_healthcheck.service'
sysd_unit_f = tempdir + healthcheck
check = '/foo/bar baz'
systemd.healthcheck_create(container, tempdir, test=check)
unit = open(sysd_unit_f, 'rt').read()
self.assertIn('ExecStart=/usr/bin/podman exec --user root my_app '
'/foo/bar baz', unit)
@mock.patch('subprocess.check_call', autospec=True)
@mock.patch('os.chmod')
def test_healthcheck_timer_create(self, mock_chmod,
mock_subprocess_check_call):
container = 'my_app'
service = 'tripleo_' + container
cconfig = {'check_interval': '15'}
tempdir = tempfile.mkdtemp()
healthcheck_timer = service + '_healthcheck.timer'
sysd_unit_f = tempdir + healthcheck_timer
systemd.healthcheck_timer_create(container, cconfig, tempdir)
unit = open(sysd_unit_f, 'rt').read()
self.assertIn('PartOf=%s.service' % service, unit)
self.assertIn('OnActiveSec=120', unit)
self.assertIn('OnUnitActiveSec=15', unit)
mock_chmod.assert_has_calls([mock.call(sysd_unit_f, 420)])
mock_subprocess_check_call.assert_has_calls([
mock.call(['systemctl', 'enable', '--now', healthcheck_timer]),
mock.call(['systemctl', 'add-requires', service + '.service',
healthcheck_timer]),
mock.call(['systemctl', 'daemon-reload']),
])


@ -1,154 +0,0 @@
# Copyright 2018 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import glob
import logging
import os
import psutil
import re
import sys
import yaml
from paunch import constants
from paunch import utils
def configure_logging(name, level=3, log_file=None):
'''Mimic oslo_log default levels and formatting for the logger. '''
log = logging.getLogger(name)
if level and level > 2:
ll = logging.DEBUG
elif level and level == 2:
ll = logging.INFO
else:
ll = logging.WARNING
log.setLevel(ll)
handler = logging.StreamHandler(sys.stderr)
handler.setLevel(ll)
if log_file:
fhandler = logging.FileHandler(log_file)
formatter = logging.Formatter(
'%(asctime)s.%(msecs)03d %(process)d %(levelname)s '
'%(name)s [ ] %(message)s',
'%Y-%m-%d %H:%M:%S')
fhandler.setLevel(ll)
fhandler.setFormatter(formatter)
log.addHandler(fhandler)
log.addHandler(handler)
log.propagate = False
return log
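The verbosity mapping at the top of configure_logging can be sketched on its own (a minimal sketch; `level_for` is a hypothetical helper name, not part of the module):

```python
import logging

def level_for(level):
    # Mirror the level mapping in configure_logging above: 3 and higher
    # is DEBUG, exactly 2 is INFO, anything else falls back to WARNING.
    if level and level > 2:
        return logging.DEBUG
    elif level and level == 2:
        return logging.INFO
    return logging.WARNING
```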
def configure_logging_from_args(name, app_args):
    # verbosity is 1 by default, 2 with --verbose, and 4-5 with --debug
log_level = (app_args.verbose_level +
int(app_args.debug) * 3)
# if executed as root log to specified file or default log file
if os.getuid() == 0:
log_file = app_args.log_file or constants.LOG_FILE
else:
log_file = app_args.log_file
log = utils.common.configure_logging(
__name__, log_level, log_file)
return (log, log_file, log_level)
def get_cpus_allowed_list(**args):
"""Returns the process's Cpus_allowed on which CPUs may be scheduled.
    :return: Value for Cpus_allowed, e.g. '0,1,2,3'
"""
return ','.join([str(c) for c in psutil.Process().cpu_affinity()])
def load_config(config, name=None, overrides=None):
container_config = {}
if overrides is None:
overrides = {}
if os.path.isdir(config):
        # When the user gives a config directory and specifies a container
        # name, we return the config for that specific container.
if name:
cf = 'hashed-' + name + '.json'
with open(os.path.join(config, cf), 'r') as f:
container_config[name] = {}
container_config[name].update(yaml.safe_load(f))
        # When the user gives a config directory without a container name,
        # we return all container configs in that directory.
else:
config_files = glob.glob(os.path.join(config, 'hashed-*.json'))
for cf in config_files:
with open(os.path.join(config, cf), 'r') as f:
name = os.path.basename(os.path.splitext(
cf.replace('hashed-', ''))[0])
container_config[name] = {}
container_config[name].update(yaml.safe_load(f))
else:
        # Backward compatibility: users can still use the old path;
        # paunch will recognize it and find the right container config.
old_format = '/var/lib/tripleo-config/hashed-container-startup-config'
if config.startswith(old_format):
step = re.search('/var/lib/tripleo-config/'
'hashed-container-startup-config-step'
'_(.+).json', config).group(1)
# If a name is specified, we return the container config for that
# specific container.
if name:
new_path = os.path.join(
'/var/lib/tripleo-config/container_startup_config',
'step_' + step, 'hashed-' + name + '.json')
with open(new_path, 'r') as f:
c_config = yaml.safe_load(f)
container_config[name] = {}
container_config[name].update(c_config[name])
# When no name is specified, we return all container configs in
# the file.
else:
new_path = os.path.join(
'/var/lib/tripleo-config/container_startup_config',
'step_' + step)
config_files = glob.glob(os.path.join(new_path,
'hashed-*.json'))
for cf in config_files:
with open(os.path.join(new_path, cf), 'r') as f:
name = os.path.basename(os.path.splitext(
cf.replace('hashed-', ''))[0])
c_config = yaml.safe_load(f)
container_config[name] = {}
container_config[name].update(c_config[name])
        # When the user gives a file path that isn't in the old format,
        # we assume the new format, where the file name is the container
        # name.
else:
if not name:
# No name was given, we'll guess it with file name
name = os.path.basename(os.path.splitext(
config.replace('hashed-', ''))[0])
with open(os.path.join(config), 'r') as f:
container_config[name] = {}
container_config[name].update(yaml.safe_load(f))
# Overrides
for k in overrides.keys():
if k in container_config:
for mk, mv in overrides[k].items():
container_config[k][mk] = mv
return container_config
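The override merge in the final loop can be exercised in isolation (a minimal sketch; `merge_overrides` is a hypothetical name, shallow-merging per-container keys exactly as the loop above does):

```python
def merge_overrides(container_config, overrides):
    # Shallow-merge per-container overrides into the loaded config,
    # mirroring the final loop of load_config: only containers already
    # present in the config are touched.
    for name, opts in overrides.items():
        if name in container_config:
            for key, value in opts.items():
                container_config[name][key] = value
    return container_config

config = {'haproxy': {'image': 'docker.io/haproxy', 'restart': 'always'}}
overrides = {'haproxy': {'image': 'quay.io/haproxy'},
             'mysql': {'image': 'quay.io/mysql'}}  # ignored: not in config
merged = merge_overrides(config, overrides)
```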


@ -1,92 +0,0 @@
# Copyright 2018 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import subprocess
import tenacity
from paunch.utils import common
class SystemctlException(Exception):
pass
def systemctl(cmd, log=None):
log = log or common.configure_logging(__name__)
if not isinstance(cmd, list):
raise SystemctlException("systemctl cmd passed must be a list")
cmd.insert(0, 'systemctl')
log.debug("Executing: {}".format(" ".join(cmd)))
try:
subprocess.check_call(cmd)
except subprocess.CalledProcessError as err:
raise SystemctlException(str(err))
def format_name(name):
return name if name.endswith('.service') else name + ".service"
def stop(service, log=None):
systemctl(['stop', service], log)
def daemon_reload(log=None):
systemctl(['daemon-reload'], log)
def reset_failed(service, log=None):
systemctl(['reset-failed', service], log)
def is_active(service, log=None):
systemctl(['is-active', '-q', service], log)
# NOTE(bogdando): this implements a crash-loop with reset-failed
# counters approach that provides feature parity with classic rate
# limiting, should we want to implement that for the systemctl
# command wrapper instead.
@tenacity.retry( # Retry up to 5 times with jittered exponential backoff
reraise=True,
retry=tenacity.retry_if_exception_type(
SystemctlException
),
wait=tenacity.wait_random_exponential(multiplier=1, max=10),
stop=tenacity.stop_after_attempt(5)
)
def enable(service, now=True, log=None):
cmd = ['enable']
if now:
cmd.append('--now')
cmd.append(service)
try:
systemctl(cmd, log)
except SystemctlException as err:
# Reset failure counters for the service unit and retry
reset_failed(service, log)
raise SystemctlException(str(err))
def disable(service, log=None):
systemctl(['disable', service], log)
def add_requires(target, units, log=None):
cmd = ['add-requires', target]
if isinstance(units, list):
cmd.extend(units)
else:
cmd.append(units)
systemctl(cmd, log)
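The pure parts of this module are easy to exercise without a running systemd; a minimal sketch (`build_add_requires` is a hypothetical name mirroring add_requires minus the subprocess call):

```python
def format_name(name):
    # Same normalization as format_name above: append '.service'
    # only when the suffix is missing.
    return name if name.endswith('.service') else name + '.service'

def build_add_requires(target, units):
    # Mirror the command construction in add_requires: a single unit
    # name or a list of names both become trailing arguments.
    cmd = ['systemctl', 'add-requires', target]
    if isinstance(units, list):
        cmd.extend(units)
    else:
        cmd.append(units)
    return cmd
```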


@ -1,252 +0,0 @@
# Copyright 2018 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import shutil
from paunch import constants
from paunch.utils import common
from paunch.utils import systemctl
DROP_IN_MARKER_FILE = '/etc/sysconfig/podman_drop_in'
def service_create(container, cconfig, sysdir=constants.SYSTEMD_DIR, log=None):
"""Create a service in systemd
:param container: container name
:type container: String
:param cconfig: container configuration
:type cconfig: Dictionary
:param sysdir: systemd unit files directory
:type sysdir: String
:param log: optional pre-defined logger for messages
:type log: logging.RootLogger
"""
log = log or common.configure_logging(__name__)
    # We prefix the systemd services so we can identify them easily:
    # e.g. systemctl list-unit-files | grep tripleo_
    # The prefix also avoids conflicts when rpms installed on the host
    # have the same service name as their container. For example, the
    # haproxy rpm and the haproxy container would otherwise share a
    # service name, which would cause trouble when the operator removes
    # the rpms during a cleanup.
service = 'tripleo_' + container
wants = " ".join(systemctl.format_name(str(x)) for x in
cconfig.get('depends_on', []))
restart = cconfig.get('restart', 'always')
stop_grace_period = cconfig.get('stop_grace_period', '10')
# Please refer to systemd.exec documentation for those entries
# https://www.freedesktop.org/software/systemd/man/systemd.exec.html
sys_exec = cconfig.get('systemd_exec_flags', {})
# SystemD doesn't have the equivalent of docker unless-stopped.
# Let's force 'always' so containers aren't restarted when stopped by
# systemd, but restarted when in failure. Also this code is only for
# podman now, so nothing changed for Docker deployments.
if restart == 'unless-stopped':
restart = 'always'
# If the service depends on other services, it must be stopped
# in a specific order. The host can be configured to prevent
# systemd from stopping the associated systemd scopes too early,
# so make sure to generate the start command accordingly.
if (len(cconfig.get('depends_on', [])) > 0 and
os.path.exists(DROP_IN_MARKER_FILE)):
start_cmd = '/usr/libexec/paunch-start-podman-container %s' % container
else:
start_cmd = '/usr/bin/podman start %s' % container
sysd_unit_f = sysdir + systemctl.format_name(service)
log.debug('Creating systemd unit file: %s' % sysd_unit_f)
s_config = {
'name': container,
'start_cmd': start_cmd,
'wants': wants,
'restart': restart,
'stop_grace_period': stop_grace_period,
'sys_exec': '\n'.join(['%s=%s' % (x, y) for x, y in sys_exec.items()]),
}
# Ensure we don't have some trailing .requires directory and content for
# this service
if os.path.exists(sysd_unit_f + '.requires'):
shutil.rmtree(sysd_unit_f + '.requires')
with open(sysd_unit_f, 'w') as unit_file:
os.chmod(unit_file.name, 0o644)
unit_file.write("""[Unit]
Description=%(name)s container
After=paunch-container-shutdown.service
Wants=%(wants)s
[Service]
Restart=%(restart)s
ExecStart=%(start_cmd)s
ExecReload=/usr/bin/podman kill --signal HUP %(name)s
ExecStop=/usr/bin/podman stop -t %(stop_grace_period)s %(name)s
KillMode=none
Type=forking
PIDFile=/var/run/%(name)s.pid
%(sys_exec)s
[Install]
WantedBy=multi-user.target""" % s_config)
try:
systemctl.daemon_reload()
systemctl.enable(service, now=True)
except systemctl.SystemctlException:
log.exception("systemctl failed")
raise
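The unit rendering above is plain %-interpolation, so it can be checked without systemd; a minimal sketch with a trimmed-down template (same placeholder keys as s_config above, but only a subset of the real directives):

```python
# Trimmed-down version of the unit template in service_create,
# using the same placeholder keys as s_config.
TEMPLATE = """[Unit]
Description=%(name)s container
Wants=%(wants)s
[Service]
Restart=%(restart)s
ExecStart=%(start_cmd)s
ExecStop=/usr/bin/podman stop -t %(stop_grace_period)s %(name)s
"""

s_config = {
    'name': 'my_app',
    'start_cmd': '/usr/bin/podman start my_app',
    'wants': 'something.service',
    'restart': 'always',
    'stop_grace_period': '15',
}
unit = TEMPLATE % s_config
```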
def service_delete(container, sysdir=constants.SYSTEMD_DIR, log=None):
"""Delete a service in systemd
:param container: container name
:type container: String
:param sysdir: systemd unit files directory
:type sysdir: string
:param log: optional pre-defined logger for messages
:type log: logging.RootLogger
"""
log = log or common.configure_logging(__name__)
# prefix is explained in the service_create().
service = 'tripleo_' + container
sysd_unit_f = systemctl.format_name(service)
sysd_health_f = systemctl.format_name(service + '_healthcheck')
sysd_timer_f = service + '_healthcheck.timer'
sysd_health_req_d = sysd_unit_f + '.requires'
for sysd_f in sysd_unit_f, sysd_health_f, sysd_timer_f:
if os.path.isfile(sysdir + sysd_f):
log.debug('Stopping and disabling systemd service for %s' %
service)
try:
systemctl.stop(sysd_f)
systemctl.disable(sysd_f)
except systemctl.SystemctlException:
log.exception("systemctl failed")
raise
log.debug('Removing systemd unit file %s' % sysd_f)
os.remove(sysdir + sysd_f)
else:
log.info('No systemd unit file was found for %s' % sysd_f)
# Now that the service is removed, we can remove its ".requires"
if os.path.exists(os.path.join(sysdir, sysd_health_req_d)):
log.info('Removing healthcheck require for %s' % service)
shutil.rmtree(os.path.join(sysdir, sysd_health_req_d))
def healthcheck_create(container, sysdir='/etc/systemd/system/',
log=None, test='/openstack/healthcheck'):
"""Create a healthcheck for a service in systemd
:param container: container name
:type container: String
:param sysdir: systemd unit files directory
:type sysdir: String
:param log: optional pre-defined logger for messages
:type log: logging.RootLogger
:param test: optional test full command
:type test: String
"""
log = log or common.configure_logging(__name__)
service = 'tripleo_' + container
healthcheck = systemctl.format_name(service + '_healthcheck')
sysd_unit_f = sysdir + healthcheck
log.debug('Creating systemd unit file: %s' % sysd_unit_f)
s_config = {
'name': container,
'service': service,
'restart': 'restart',
'test': test,
}
with open(sysd_unit_f, 'w') as unit_file:
os.chmod(unit_file.name, 0o644)
unit_file.write("""[Unit]
Description=%(name)s healthcheck
After=paunch-container-shutdown.service %(service)s.service
Requisite=%(service)s.service
[Service]
Type=oneshot
ExecStart=/usr/bin/podman exec --user root %(name)s %(test)s
SyslogIdentifier=healthcheck_%(name)s
[Install]
WantedBy=multi-user.target
""" % s_config)
def healthcheck_timer_create(container, cconfig, sysdir='/etc/systemd/system/',
log=None):
"""Create a systemd timer for a healthcheck
:param container: container name
:type container: String
:param cconfig: container configuration
:type cconfig: Dictionary
:param sysdir: systemd unit files directory
:type sysdir: string
:param log: optional pre-defined logger for messages
:type log: logging.RootLogger
"""
log = log or common.configure_logging(__name__)
service = 'tripleo_' + container
healthcheck_timer = service + '_healthcheck.timer'
sysd_timer_f = sysdir + healthcheck_timer
log.debug('Creating systemd timer file: %s' % sysd_timer_f)
interval = cconfig.get('check_interval', 60)
s_config = {
'name': container,
'service': service,
'interval': interval,
'randomize': int(interval) * 3 / 4
}
with open(sysd_timer_f, 'w') as timer_file:
os.chmod(timer_file.name, 0o644)
timer_file.write("""[Unit]
Description=%(name)s container healthcheck
PartOf=%(service)s.service
[Timer]
OnActiveSec=120
OnUnitActiveSec=%(interval)s
RandomizedDelaySec=%(randomize)s
[Install]
WantedBy=timers.target""" % s_config)
try:
systemctl.enable(healthcheck_timer, now=True)
systemctl.add_requires(systemctl.format_name(service),
healthcheck_timer)
systemctl.daemon_reload()
except systemctl.SystemctlException:
log.exception("systemctl failed")
raise
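The interval arithmetic above can be isolated (`timer_settings` is a hypothetical helper name; note that check_interval may arrive as a string, as in the unit tests earlier in this change):

```python
def timer_settings(cconfig):
    # Mirror healthcheck_timer_create: the default interval is 60
    # seconds, and the randomized delay window is three quarters of it.
    interval = cconfig.get('check_interval', 60)
    return {
        'interval': interval,
        'randomize': int(interval) * 3 / 4,
    }
```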


@ -1,5 +0,0 @@
---
features:
- |
The log_tag option was added to the container definition. This will make
paunch add the --log-opt tag=<value> option to the run command.


@ -1,5 +0,0 @@
---
features:
- |
Add `--security-opt=xxx` option for the run action. Allows defining
security options, such as turning SELinux labels on or off.


@ -1,6 +0,0 @@
---
features:
- |
docker learns 'hostname' which maps to docker run --hostname and
'extra_hosts' mapping to docker run --add-host.


@ -1,8 +0,0 @@
---
features:
- |
Add `--ulimit=xxx` option for paunch run action. Using
this option, multiple ulimits can be set for the container.
For example, `--ulimit=nproc=1024 --ulimit=nofile=1024` will
set the nproc and nofile limits to 1024 for the container.


@ -1,4 +0,0 @@
---
features:
- |
Add an option to configure the container's UTS namespace.


@ -1,5 +0,0 @@
---
features:
- |
Add `--cpu-shares=xxx` option for the run action. Allows defining
an upper `cpu.shares` limit in the cpu cgroup.


@ -1,8 +0,0 @@
---
features:
- |
Add `--memory=x` option for the run action. This allows
setting constraints on max memory usage, which is `memory.limit_in_bytes`
in memory cgroup.
Also added `--memory-swap` and `--memory-swappiness` options to control
swap settings.


@ -1,3 +0,0 @@
---
features:
- Allows configuring the healthcheck as set in tripleo-heat-templates


@ -1,5 +0,0 @@
---
deprecations:
- |
The `default_runtime` ABI parameter is deprecated, use cont_cmd instead.
The default-runtime CLI argument remains unchanged.


@ -1,12 +0,0 @@
---
prelude: >
Paunch has been replaced by the tripleo_container_manage role in
tripleo-ansible during the Ussuri cycle.
It is no longer tested in this version and will eventually be removed.
It is strongly encouraged to switch to the Ansible role to manage
containers, which should be the default if you deploy TripleO from
master at this time.
If you see the deprecation warning, the EnablePaunch parameter may be
set to True even though the default has changed.
Paunch remains supported in Ussuri and earlier releases, but not in
Victoria and later.


@ -1,4 +0,0 @@
---
deprecations:
- |
Docker runtime is deprecated in Stein and will be removed in Train.


@ -1,5 +0,0 @@
---
upgrade:
- |
Python 2.7 support has been dropped. The minimum version of Python now
supported by paunch is Python 3.6.


@ -1,5 +0,0 @@
---
features:
- |
When deploying with Podman, we can disable the container healthchecks
by using paunch apply --healthcheck-disabled.


@ -1,5 +0,0 @@
---
features:
- |
paunch learns 'hostname' which maps to podman run --hostname and
'extra_hosts' mapping to podman run --add-host.


@ -1,4 +0,0 @@
---
fixes:
- |
Fixed ``--labels`` so it can take multiple values.


@ -1,5 +0,0 @@
---
other:
- |
Logging verbosity and destination file can be controlled with
``--debug``, ``--verbose`` and ``--log-file``.


@ -1,7 +0,0 @@
---
features:
- Add a new "cont_log_path" option. It must point to a directory where the
  container engine will put logs issued from the containers' standard output.
  It works only with podman.
- Add a new "--container-log-path" CLI argument to set cont_log_path
  directly.


@ -1,4 +0,0 @@
---
features:
- |
Podman is now the default container runtime, replacing Docker.


@ -1,13 +0,0 @@
---
features:
- |
For all containers managed by podman, we'll create a systemd unit file
so the containers automatically start at boot and restart at failure.
When the container is removed, we'll disable and stop the service, then
remove the systemd unit file.
We prefix the SystemD service so we can identify them better.
It will help to not conflict when rpms are installed on the host and
have the same service name as their container name.
For example haproxy rpm and haproxy container would have the same
service name so the prefix will help to not having this conflict
when removing the rpms during a cleanup by the operator.


@ -1,6 +0,0 @@
---
features:
- All referenced images are now pulled as the first step of applying a
config. Any pull failures will result in failing with an error before any of
the config is applied. This is especially helpful for detached containers,
where pull failures will not be captured during the apply.


@ -1,3 +0,0 @@
---
features:
- Adds a new systemd_exec_flags parameter for paunch-managed systemd units


@ -1,268 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Glance Release Notes documentation build configuration file, created by
# sphinx-quickstart on Tue Nov 3 17:40:50 2015.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
# sys.path.insert(0, os.path.abspath('.'))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'reno.sphinxext',
'openstackdocstheme',
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
# source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
copyright = u'2016, OpenStack Foundation'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
# The full version, including alpha/beta/rc tags.
release = ''
# The short X.Y version.
version = ''
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
# language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
# Else, today_fmt is used as the format for a strftime call.
# today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []
# The reST default role (used for this markup: `text`) to use for all
# documents.
# default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
# add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
# add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
# show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'native'
# A list of ignored prefixes for module index sorting.
# modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
# keep_warnings = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'openstackdocs'
# openstackdocstheme options
openstackdocs_repo_name = 'openstack/paunch'
openstackdocs_bug_project = 'paunch'
openstackdocs_bug_tag = 'docs'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
# html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
# html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
# html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
# html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
# html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
# html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}
# If false, no module index is generated.
# html_domain_indices = True
# If false, no index is generated.
# html_use_index = True
# If true, the index is split into individual pages for each letter.
# html_split_index = False
# If true, links to the reST sources are added to the pages.
# html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
# html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
# html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'GlanceReleaseNotesdoc'
# -- Options for LaTeX output ---------------------------------------------
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
('index', 'GlanceReleaseNotes.tex', u'Glance Release Notes Documentation',
u'Glance Developers', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False
# If true, show page references after internal links.
# latex_show_pagerefs = False
# If true, show URL addresses after external links.
# latex_show_urls = False
# Documents to append as an appendix to all manuals.
# latex_appendices = []
# If false, no module index is generated.
# latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'glancereleasenotes', u'Glance Release Notes Documentation',
[u'Glance Developers'], 1)
]
# If true, show URL addresses after external links.
# man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'GlanceReleaseNotes', u'Glance Release Notes Documentation',
u'Glance Developers', 'GlanceReleaseNotes',
'One line description of project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
# texinfo_appendices = []
# If false, no module index is generated.
# texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
# texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
# texinfo_no_detailmenu = False
# -- Options for Internationalization output ------------------------------
locale_dirs = ['locale/']


@@ -1,15 +0,0 @@
============================================
paunch Release Notes
============================================
.. toctree::
:maxdepth: 1
unreleased
ussuri
train
stein
rocky
queens
pike


@@ -1,6 +0,0 @@
===================================
Pike Series Release Notes
===================================
.. release-notes::
:branch: stable/pike


@ -1,6 +0,0 @@
===================================
Queens Series Release Notes
===================================
.. release-notes::
:branch: stable/queens


@@ -1,6 +0,0 @@
===================================
Rocky Series Release Notes
===================================
.. release-notes::
:branch: stable/rocky


@@ -1,6 +0,0 @@
===================================
Stein Series Release Notes
===================================
.. release-notes::
:branch: stable/stein


@@ -1,6 +0,0 @@
==========================
Train Series Release Notes
==========================
.. release-notes::
:branch: stable/train


@@ -1,5 +0,0 @@
==============================
Current Series Release Notes
==============================
.. release-notes::


@@ -1,6 +0,0 @@
===========================
Ussuri Series Release Notes
===========================
.. release-notes::
:branch: stable/ussuri


@@ -1,10 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
pbr>=2.0.0,!=2.1.0 # Apache-2.0
cliff>=2.6.0 # Apache-2.0
tenacity>=3.2.1 # Apache-2.0
jmespath>=0.9.0 # MIT
psutil>=3.2.2 # BSD


@@ -1,50 +0,0 @@
[metadata]
name = paunch
summary = Utility to launch and manage containers using YAML based configuration data
description-file =
README.rst
author = OpenStack
author-email = openstack-discuss@lists.openstack.org
home-page = https://docs.openstack.org/paunch/latest/
python-requires = >=3.6
classifier =
Environment :: OpenStack
Intended Audience :: Information Technology
Intended Audience :: System Administrators
License :: OSI Approved :: Apache Software License
Operating System :: POSIX :: Linux
Programming Language :: Python
Programming Language :: Python :: Implementation :: CPython
Programming Language :: Python :: 3 :: Only
Programming Language :: Python :: 3
Programming Language :: Python :: 3.6
Programming Language :: Python :: 3.7
[files]
packages =
paunch
[entry_points]
console_scripts =
paunch = paunch.__main__:main
paunch =
apply = paunch.cmd:Apply
cleanup = paunch.cmd:Cleanup
delete = paunch.cmd:Delete
list = paunch.cmd:List
debug = paunch.cmd:Debug
[compile_catalog]
directory = paunch/locale
domain = paunch
[update_catalog]
domain = paunch
output_dir = paunch/locale
input_file = paunch/locale/paunch.pot
[extract_messages]
keywords = _ gettext ngettext l_ lazy_gettext
mapping_file = babel.cfg
output_file = paunch/locale/paunch.pot


@@ -1,20 +0,0 @@
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import setuptools
setuptools.setup(
setup_requires=['pbr>=1.8'],
pbr=True)


@@ -1,15 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
hacking>=0.12.0,!=0.13.0,<0.14 # Apache-2.0
coverage>=4.0,!=4.4 # Apache-2.0
python-subunit>=0.0.18 # Apache-2.0/BSD
oslotest>=1.10.0 # Apache-2.0
stestr>=2.0.0 # Apache-2.0
testscenarios>=0.4 # Apache-2.0/BSD
testtools>=1.4.0 # MIT
# releasenotes
reno>=3.1.0 # Apache-2.0

tox.ini

@@ -1,59 +0,0 @@
[tox]
minversion = 3.1.1
envlist = pep8,py37
skipsdist = False
ignore_basepython_conflict = True
[testenv]
basepython = python3
setenv = VIRTUAL_ENV={envdir}
deps = -c{env:UPPER_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master}
-r{toxinidir}/requirements.txt
-r{toxinidir}/test-requirements.txt
install_command = pip install -U {opts} {packages}
whitelist_externals =
bash
commands =
python -m paunch --version
paunch --version
stestr run {posargs}
[testenv:pep8]
commands = flake8 {posargs}
[testenv:venv]
commands = {posargs}
[testenv:cover]
setenv =
PYTHON=coverage run --source paunch --parallel-mode
HOME={envdir}
commands =
coverage erase
stestr run --color {posargs}
coverage combine
coverage html -d cover
coverage xml -o cover/coverage.xml
coverage report
[testenv:docs]
deps = -r{toxinidir}/doc/requirements.txt
-r{toxinidir}/test-requirements.txt
commands =
sphinx-build -a -E -W -d doc/build/doctrees -b html doc/source doc/build/html
[testenv:releasenotes]
deps = -r{toxinidir}/doc/requirements.txt
commands =
sphinx-build -a -E -W -d releasenotes/build/doctrees -b html releasenotes/source releasenotes/build/html
[testenv:debug]
commands = oslo_debug_helper {posargs}
[flake8]
# E123, E125 skipped as they are invalid PEP-8.
show-source = True
ignore = E123,E125
builtins = _
exclude=.venv,.git,.tox,dist,doc,*lib/python*,*egg,build


@@ -1,26 +0,0 @@
- project:
templates:
- check-requirements
- openstack-python3-ussuri-jobs
- publish-openstack-docs-pti
- release-notes-jobs-python3
check:
jobs:
- tripleo-ci-centos-8-scenario004-standalone:
dependencies:
- openstack-tox-pep8
- openstack-tox-py36
- openstack-tox-py37
irrelevant-files: &standalone_ignored
- ^.*\.md$
- ^.*\.rst$
- ^doc/.*$
- ^docs/.*$
- ^releasenotes/.*$
- ^test-requirements.txt$
- tox.ini
gate:
queue: tripleo
jobs:
- tripleo-ci-centos-8-scenario004-standalone:
irrelevant-files: *standalone_ignored