With the update of ansible-lint to version >=6.0.0, a lot of new
linters were added that are enabled by default. In order to comply
with the linter rules we're applying changes to the role.
With that we also update the metadata to reflect the current state.
Change-Id: I6b4b83ec472d4a3de9139eb4e7c7b2dc8a9fc260
In case venv_build_group is not present in the inventory or does not
contain a single host, it does not make sense to attempt building wheels.
With that we're changing the default behaviour to avoid building wheels
when no potential targets exist.
Change-Id: Ifd0e80dd1d1f002a1b80f57b53b81ca9110e719c
In order to be flexible and get rid of the hardcoded repo_all group, a new
variable named `venv_build_group` was added. It's set to repo_all by
default, so the current behaviour remains unchanged.
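A minimal sketch of the new default, assuming it lives in the role's
`defaults/main.yml`:

```yaml
# defaults/main.yml (sketch): the group whose hosts are used as wheel
# build targets; repo_all preserves the previously hardcoded behaviour.
venv_build_group: repo_all
```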
Change-Id: I30c2c26abeb103de63aff0946ec7783f902886b8
At the moment we have 2 slightly different variations of merging
distro_arch - one through `-` and another through `_`. This patch
aligns the usage and re-uses `_venv_build_dist_arch`.
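A sketch of the kind of unified helper variable this refers to; the
exact fact names used by the role are assumptions here:

```yaml
# Merge distro version and architecture in a single place so every
# consumer uses the same separator (fact names are illustrative):
_venv_build_dist_arch: >-
  {{ ansible_facts['distribution_version'] }}_{{ ansible_facts['architecture'] }}
```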
Change-Id: If478ab0f0c0a5acc8974a69a11453377c98c28be
Change venv_build_targets data structure to a single-level dict
with keys of the form <distro_ver>_<arch> instead of nested dicts.
When adding new architectures to the old structure, previous
entries for other architectures were overwritten, leaving only
the last seen architecture for each distro version. This could
result in a "Dict object has no attribute ..." error when trying
to build a venv for any other architecture.
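A hypothetical illustration of the structure change (host names and
keys are examples, not values taken from the role):

```yaml
# Old nested structure: defining a second architecture for the same
# distro version replaced the whole entry, losing the first one:
# venv_build_targets:
#   "22.04":
#     x86_64: repo-host-1
#
# New single-level structure keyed as <distro_ver>_<arch>, so every
# combination keeps its own entry:
venv_build_targets:
  "22.04_x86_64": repo-host-1
  "22.04_aarch64": repo-host-2
```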
Closes-Bug: #2018012
Change-Id: I8ddabf996559b5300b52cad1649d8657889337cd
Due to some corner cases that are possible with the current logic,
it was decided to simplify it and always build wheels regardless of host
count. While the runtime might take a bit longer, it's always better for
scaling up to already have wheels prepared.
Closes-Bug: #2004252
Change-Id: I5f53db8476eb394516fb35d593932d2552b95a57
It's not always enough to provide extra arguments for pip installation.
For use cases like isolated installations, and for some specific packages
that still utilize the deprecated setuptools.installer instead of a
PEP 517 installer, you might find it impossible to provide some
easy_install options without a configuration file that would have to be
created by the deployer as a pre-step. However, these options can be
covered with environment variables.
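As a hedged sketch of what this enables: environment variables supplied
by the deployer are applied to the pip install task. The variable name
`venv_build_env` and the task shape are assumptions for illustration;
`PIP_NO_BUILD_ISOLATION` is a real pip environment knob:

```yaml
# Sketch: apply deployer-supplied environment variables when
# installing python packages into the venv.
- name: Install python packages into the venv
  ansible.builtin.pip:
    name: "{{ venv_pip_packages }}"
    virtualenv: "{{ venv_install_dest_path }}"
  environment: "{{ venv_build_env | default({}) }}"
```

With, for example, `venv_build_env: {PIP_NO_BUILD_ISOLATION: "false"}`
set in user variables.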
Change-Id: I9a060cbcdf9f5c54efd423a4b4fe32b418377f86
With the fix to the multi-distro support logic, the venv_wheel_build_enable
logic stopped working for metal deployments that have more than 1 host.
Previously, we were checking that we do not build wheels if the build_host
is the same as the inventory_hostname, which is now true for all metal
deployments. However, other hosts do expect wheels to be present, which
results in installation failure.
According to the original idea, we need to build wheels only when we have
more than 1 host with the same arch/distro combination; otherwise it does
not make sense to build wheels, and it is faster to install from the tarball.
To achieve that, we build a mapping of all distro/arch combinations and
enable wheel building only when the distro-arch combination
is not unique for the host we're playing against.
This might be overkill, since in the scenario of having only a single host
with a unique distro-arch combination the overhead of building wheels for
it would be quite small; the alternative would be to hardcode `True` for
venv_wheel_build_enable.
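An illustrative Jinja sketch of the uniqueness check described above
(variable and fact names are assumptions):

```yaml
# Collect the distro-arch combination of every play host and enable
# wheel builds only when this host's combination appears more than once:
venv_wheel_build_enable: >-
  {{ (ansible_play_hosts_all
      | map('extract', hostvars, '_venv_build_dist_arch')
      | select('equalto', _venv_build_dist_arch)
      | list | length) > 1 }}
```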
Closes-Bug: #1989506
Depends-On: https://review.opendev.org/c/openstack/openstack-ansible-os_placement/+/858258
Change-Id: I762e36acf76729fd61f28ca1b03bc9f562b5db0a
This patch aims to split functionality of ``venv_rebuild`` into 2
separate parts - rebuilding venv and rebuilding wheels. That will give
more control over what needs to be done.
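A sketch of the resulting toggles; the name of the second variable is
an assumption:

```yaml
# Force re-creation of the venv itself:
venv_rebuild: false
# Force re-building of the wheels independently of the venv:
venv_wheel_rebuild: false
```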
Change-Id: Ie14f12c6756cd1f866b660acc8fd6aa5695f6c33
Related-Bug: #1914301
The conditionals for package installation are getting more complex
than can be handled with the ternary operator, so create per-distro
vars files to describe these differences.
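For example, a per-distro vars file might look like the following
sketch (the file name and package names are assumptions):

```yaml
# vars/debian.yml (illustrative): packages that previously had to be
# selected with ternary expressions in defaults.
_venv_build_distro_package_list:
  - gcc
  - libffi-dev
  - python3-dev
```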
Change-Id: I3c2164ca8f610fbef88744d9acfca4e926059c81
python3-venv is a requirement for destination hosts, not only the build host.
Thus, we introduce a new variable `venv_install_base_distro_package_list`
which acts in a similar way to `venv_build_base_distro_package_list`,
except it installs packages on the install hosts as well.
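A minimal sketch of the new variable, assuming a Debian-family default
(any contents beyond python3-venv are assumptions):

```yaml
# Installed on both build and install hosts, since creating a venv
# needs the python3-venv package everywhere:
venv_install_base_distro_package_list:
  - python3-venv
```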
Change-Id: I64843518b39f930f6983ab51bb3d053c04e6b8ed
This addresses an issue when upgrading between different operating
systems. In this case, the requirements files already exist for
the old OS, which prevents new wheels from being created for the
new OS. By using a different requirements path for each OS this is
avoided.
As part of this, the repeated variable construction for the
requirements path is factored out into a vars file.
Change-Id: I881a40fee31df78bf96e451509671543a49520d9
When this role is used alongside the OSA repo server it is
important to be able to control file and directory permissions.
This patch adds variables to optionally set these when creating
requirements files and wheels.
Change-Id: I1131a7e9aa2345d4c07a88a2ad2c8a36e7a35d2b
We always want to default to building virtual environments in
Python 3.
Depends-On: I08e04c198654c15fdb5e11a638da1f8626b0186c
Change-Id: I6c1165cd359c2df77f7bf495cca7b043858aa893
In a mixed py2/py3 setup the repo server currently tries to use
the same wheel build venv for both py2 and py3.
Change-Id: I70d3430cda52bf96c5e5f3b46ac3907375507f5c
In the previous repo build process, we had global constraints which
overrode upper constraints and anything set in the roles. This was
essential for two purposes:
1. To enable us to pin things that were not in upper constraints. eg: pip,
setuptools, wheel
2. To enable us to pin things which were in upper constraints, but broken.
This would usually be a temporary measure until upper constraints was
fixed.
This patch implements a new variable 'venv_build_global_constraints' which
is a list of constraints to be applied globally for all venvs. This list
will be used to produce a file in the venv suffixed with
'-global-constraints.txt' and will be used on the pip command line when
building the wheels and when installing packages.
We also ensure that all constraints are used when both building and
installing pip, setuptools and wheel into the venv.
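An illustrative use of the new variable (the pins shown are
placeholders, not recommendations):

```yaml
venv_build_global_constraints:
  - "pip==23.1"       # pin something not present in upper constraints
  - "setuptools<68"   # temporarily pin a broken upper-constraints entry
  - "wheel==0.40.0"
```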
Change-Id: I9ae3ef19c863b9237a51d2fcd6f4ebce1a9ebad7
We use the bool filter to make sure that venv_wheel_build_enable
is evaluated as a boolean, rather than a string or something else.
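For example, the conditional is written as (sketch):

```yaml
when: venv_wheel_build_enable | bool
```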
Change-Id: Ice305d2239e55be7e1a6f30628b72cf442553fd5
If we're not building wheels inside a repo_server, then there's
no need to point towards the links and the trusted host because
there's nothing to find there anyway.
Change-Id: Ia977edbb75451f285abbdb64ac1249115ed52a5c
To make it easier to follow up on later, we add a
reference to the upstream bug. This will allow us
to determine whether the bug is fixed in the future,
so that we can remove the workaround.
Change-Id: I7f0e69f3e30bd2ed18c0f340968520792a3c1f46
There are sometimes buggy releases of pip shipped by the
distributions which don't react well to pip.conf config
options such as extra index URLs, which make it impossible
to upgrade it.
This adds a variable which allows working around this
issue by ignoring the site config to do the upgrade.
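A hedged sketch of the workaround; the toggle and path variable names
are assumptions, while setting `PIP_CONFIG_FILE` to `/dev/null` is
pip's documented way to disable loading of configuration files:

```yaml
# Upgrade pip while optionally ignoring any site-wide pip configuration:
- name: Upgrade pip inside the venv
  ansible.builtin.pip:
    name: pip
    state: latest
    virtualenv: "{{ venv_install_dest_path }}"
  environment: >-
    {{ {'PIP_CONFIG_FILE': '/dev/null'}
       if (venv_pip_upgrade_ignore_site_config | default(false) | bool)
       else {} }}
```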
Change-Id: I5266d827f19e14c6313b11408913e2f754befaca
If a package (like stackviz) should be installed using
a tarball, rather than from a git source, then it is
better to be able to disable the wheel build process,
constraints usage, etc and just install it from the
tarball.
Change-Id: Id1dc504586a3a1bbd7a161b7367606ced3789043
Currently a configured pip.conf is required in order to make
use of the repo server in an OSA build. This ensures that it
is used, but not exclusively used, if it is available to source
python packages.
The implementation allows it to still respect any configured pip
configuration on the host, so we should be able to remove the OSA
pip.conf with this in place and still maintain the use of the local
repo.
Change-Id: Ia6f2cf86f77ee380fce2d1ecc89e1cb4341e39df
Currently the role expects all constraints to be applied
in pip arguments provided to the role, which means that
there is some pre-processing required outside the role
for this to happen.
In order to pave the way to replace repo_build with this
role, we need to be able to apply constraints and maintain
idempotency even when building from a git source.
To achieve this we ensure that we build the wheels in a
temporary location, then use the resulting wheels for the
specific service to build a service-specific set of
requirements and constraints. To enable idempotency, we
only rebuild the wheels if the requirements/constraints
change.
We use slurp to collect the constraints and re-implement
them when installing the venv on the target hosts. This
prevents us having to inform the venv build role about
the repo server URL. We may change this at a later date
in order to facilitate a centralised repo server for
multiple regions.
Change-Id: I7c467e3a9e6627b75664b94f6b8e3232975171a7
Currently python venv build targets run within the first repo server
when > 0 repo servers are found, or on localhost; however, this is
sub-optimal, especially in environments with mixed architectures and
operating systems.
This change sets the python venv build target to a repo server node of
the same OS family and CPU arch when one is available; otherwise the
build process is performed on the target host, instead of falling back
to localhost, which in many cases is a bastion server.
Change-Id: Ibc30bb90ab1ce1a074d8e93a2d2b36f4dcefb90c
Signed-off-by: cloudnull <kevin@cloudnull.com>
Some python packages have C bindings which tend to be very
particular about the version of their underlying shared libraries.
To ensure things run smoothly for stable releases, we opt to
use the distro packages for these python packages and symlink the
appropriate python library files and their bindings into the venv.
This functionality is required for libvirt and ceph and is used
across multiple roles.
Change-Id: Ib5b7fa1d06abe1e1bb4f14aea7de4207b61aca88
There are some packages which absolutely must be there
for all wheel builds, or for installing without wheels.
Without them, pip is totally unable to compile the
package due to missing headers or tooling.
This patch adds a default, minimal, set of compilers
and python headers.
Rather than use include_vars, with_first_found as we
do in most other roles, we use vars/main and a dict
based on ansible_os_family. The role is often included
by other roles, and we'd rather not risk the search
path being incorrect (there are constant bugs related
to this in ansible). Using this mechanism takes away
the need for an include_vars task and avoids any pathing
issues.
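The mechanism described above might look like this sketch in
`vars/main.yml` (variable names and package lists are illustrative):

```yaml
# vars/main.yml: no include_vars task needed; the right list is
# selected by ansible_os_family at use time.
_venv_build_default_distro_packages:
  Debian:
    - gcc
    - python3-dev
  RedHat:
    - gcc
    - python3-devel
venv_build_default_distro_package_list: >-
  {{ _venv_build_default_distro_packages[ansible_facts['os_family']] }}
```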
Change-Id: I4ef11e47e4d3fe5adc65e9888e660a5a121d205b
Currently all packages provided to the role (build and install)
are installed on both the build host and the install host. This
was done as a temporary measure to allow us time to ensure that
each role separates these, but now that there are some
installations using this it's causing conflicts in services and
packages.
We should rather do the right thing, and fix the roles which need
the packages separated.
If there is no wheel build host, the venv_build_distro_package_list
packages will need to be installed on the target host, otherwise
the sdist install will not work, so we cater for that in this patch
too.
Depends-On: https://review.openstack.org/612704
Depends-On: https://review.openstack.org/613256
Change-Id: I04100a1073dddb06775d8583104bfd6ef4b3213a
Add an option to define a default set of python packages to install
within a virtual environment. This can be used to install a package
within a virtual environment that may be outside of a normal package
list but is needed for a given service.
Change-Id: Ic2dc024049062ad9be396a1f71435f661576e91b
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
Once we remove the repo build process, this needs to point
to the location where wheels are served by pypiserver.
For now we use this to ensure that we don't rebuild the wheels
that were already built by the repo build process. It has also
been found that pypiserver hangs when it encounters duplicated
wheels.
Change-Id: I86510cb7407e4ee69a376fc64ba5b8f5676f0bff
Python venvs are not particularly portable. To overcome this
problem we have a bunch of stuff implemented to fix them
after we unpack them. This ranges from relatively simple
things like re-implementing python in the venv and changing
the shebangs for everything in it, to more complex things
like having to package a venv per distro/architecture.
All of this can be eliminated by simplifying the mechanism
into just creating the venv on the target host, and installing
the python packages into it. To help speed up the build, we
can simply build wheels beforehand and store them on a web
server somewhere.
This patch implements the changes to:
1. Do away with packaging the venv.
2. Keep it simple. We install into the venv, and that's that.
3. Add a toggle to rebuild the venv if you'd like to.
4. Use import_tasks with tags for each stage so that it's
easy to skip a portion of what executes.
Change-Id: I708b5cf32e5cce6a18624d0b3be0cd4c828ad389
The variable name is venv_distro_cache_valid_time
rather than distro_cache_valid_time. Also, this
variable is only used by apt, not zypper.
Change-Id: I71682b7d6bf183967d06f485772760876e2c8df6
1. Variables have been renamed to make it easier to
understand their purpose.
2. Unnecessary variables have been removed.
3. The role no longer caters to installing pip packages
on the host. This should never be necessary - if it
is, then something should do so beforehand.
4. The expected versions of pip/virtualenv are documented
and a check has been added to ensure that they exist.
5. The handler has been named to make debug logs less
confusing.
6. The default storage path for venvs/wheels is no longer
opinionated. If paths based on distro/architecture are
required then different paths should be provided to
the role.
Change-Id: I9eb96e9db22f918b00456af943d81f66050107ce
In order to enable the ability to build venvs on a target,
then serve them via http(s) from a central storage host,
we enable the ability to set the path to be a URL. If this
is done then the venvs will be sourced from that URL when
installing.
Change-Id: I166de361f568bfdf28c0f5d916ff8175827c45f4
When using this role, a playbook can be used to orchestrate
doing all the builds on a single host. There is no need to
orchestrate this inside the role.
In order to gain the efficiencies of building everything on
a single host, a playbook can/should be used with the setting
'venv_reuse_build_only: yes', which will not continue to the
install stage.
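An illustrative playbook pattern (group and role names are
assumptions):

```yaml
# Build everything once on the first build host, skipping the
# install stage:
- hosts: repo_all[0]
  roles:
    - role: python_venv_build
      vars:
        venv_reuse_build_only: yes
```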
Change-Id: Ib6e13ca7623942f8b0663b54528599b56b05fb19
In order to speed up the venv build process, especially
when building multiple venvs, we implement the ability
to build the wheels prior to building the venvs. Once
the wheels are built, they are pulled back to the deploy
host so that they can be re-used for the next venv build
on another host.
By default we ensure that the location separates wheels
per distribution and architecture to prevent re-use of
wheels between them which can sometimes cause problems.
Change-Id: I0c99ea8e0b57130511704659a12c904b98ba5bcd
Sometimes venvs are not very reusable across distributions
and architectures, so to ensure this doesn't happen we
store them in entirely different paths.
In order to ensure that the build tasks are entirely skipped
when a package venv is re-used, the build and install stages
are split.
The ability to re-use venvs is also now able to be toggled.
Disabling this feature would set the build to always happen,
catering to environments where a service venv is always
deployed to the same folder (eg: stateless hypervisors with
squashfs partitions).
The ability to set constraints, etc is changed to a generalised
set of arguments that can be passed to the pip install task.
When pulling the packaged venv to the deployment host, the creation
of the local file on the deploy host is done using the user that is
running ansible. If ansible is being run by a non-root user, then
the folder that's created to store the files will not have the right
permissions and the fetch will fail.
As such, when acting on the deploy host we should always ensure that
we provide the correct rights to the user running ansible. We do this
by using a lookup to figure out which user is executing the playbook,
then setting the ownership of the folder to that user. We also use a
lookup to determine that user's home directory and default to using
a subdirectory of that folder for the cache. Both lookups have options
to fall back to in case the environment variables used are not available.
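The lookups described above can be sketched as follows (variable names
and fallback values are assumptions for illustration):

```yaml
# Owner of fetched files: the user actually running ansible,
# falling back to root if $USER is unset:
venv_build_deploy_host_user: "{{ lookup('env', 'USER') | default('root', true) }}"
# Cache location under that user's home directory:
venv_build_deploy_host_cache: "{{ lookup('env', 'HOME') | default('/root', true) }}/.cache/venvs"
```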