This is a follow-up patch for Rocky, which as an EL derivative also follows
the naming convention where distribution_version contains the minor
version, while we need to build only against major versions of distros.
With that, the only distro for which we need the full distro version is Ubuntu.
Change-Id: I62f69bc31ed04ab65a167d07de44067fcaa74a66
Debian includes the minor version in the distribution_version fact,
producing versions like "12.2" or "11.4". This causes issues after a
Debian point release, since there might be no repo container to satisfy
such a specific request across the board (e.g. having repo 11.3
while a compute host reports 11.4). Thus we take only the major version
into account for these distros. The same approach has been taken for
building wheels in project-config [1]
[1] https://review.opendev.org/c/openstack/project-config/+/897545
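The major-version truncation described above can be sketched as follows (a minimal illustration, not the role's actual Jinja expression; the function name is invented):

```python
# Hypothetical helper mirroring the change: truncate a distribution_version
# fact such as "12.2" or "9.3" to its major component, except for Ubuntu,
# where the full version (e.g. "22.04") is still required.
def repo_distro_version(distribution, distribution_version):
    if distribution.lower() == "ubuntu":
        return distribution_version
    return distribution_version.split(".")[0]

print(repo_distro_version("Debian", "12.2"))   # Debian builds target "12"
print(repo_distro_version("Rocky", "9.3"))     # EL derivatives target "9"
print(repo_distro_version("Ubuntu", "22.04"))  # Ubuntu keeps the minor version
```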
Change-Id: Iaf6ed4dd5e01b25b5935b959c73ab657cfefef47
With the update of ansible-lint to version >=6.0.0, many new
linters were added that are enabled by default. In order to comply
with the linter rules, we apply changes to the role.
With that we also update the metadata to reflect the current state.
Change-Id: I6b4b83ec472d4a3de9139eb4e7c7b2dc8a9fc260
In order to be flexible and get rid of the hardcoded repo_all group, a new
variable named `venv_build_group` was added. It is set to repo_all by
default, so the current behaviour remains unchanged.
Change-Id: I30c2c26abeb103de63aff0946ec7783f902886b8
At the moment we have two slightly different ways of merging distro and
arch: one joined with `-` and another with `_`. This patch
aligns their usage and re-uses _venv_build_dist_arch.
Change-Id: If478ab0f0c0a5acc8974a69a11453377c98c28be
Change venv_build_targets data structure to a single-level dict
with keys of the form <distro_ver>_<arch> instead of nested dicts.
When adding new architectures to the old structure, previous
entries for other architectures were overwritten, leaving only
the last seen architecture for each distro version. This could
result in a "Dict object has no attribute ..." error when trying
to build a venv for any other architecture.
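An illustrative sketch of the data-structure change (the keys and values here are invented for illustration, not the role's actual variable contents):

```python
# Old style: nested dicts keyed distro -> arch. Merging in a new arch
# replaced the inner dict, so earlier architectures were lost.
old_style = {"ubuntu-22.04": {"aarch64": "repo2"}}  # x86_64 entry was overwritten

# New style: a flat dict keyed "<distro_ver>_<arch>" keeps every
# combination side by side, so no entry can shadow another.
new_style = {
    "ubuntu-22.04_x86_64": "repo1",
    "ubuntu-22.04_aarch64": "repo2",
}
print(new_style["ubuntu-22.04_x86_64"])
```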
Closes-Bug: #2018012
Change-Id: I8ddabf996559b5300b52cad1649d8657889337cd
Due to some corner cases that are possible with the current logic,
it was decided to simplify it and always build wheels regardless of host
count. While runtime might take a bit longer, it is always better when
scaling up to already have wheels prepared.
Closes-Bug: #2004252
Change-Id: I5f53db8476eb394516fb35d593932d2552b95a57
With the fix to the multi-distro support logic, the venv_wheel_build_enable
logic stopped working for metal deployments that have more than one host.
We were skipping the wheel build whenever build_host is the same as
inventory_hostname, which is now true for all metal deployments.
However, other hosts expect wheels to be present, which
results in installation failures.
According to the original idea, we need to build wheels only when we have
more than one host with the same arch/distro combination; otherwise building
wheels does not make sense and installing from source is faster.
To achieve that, we build a mapping of all distro/arch combinations and
enable wheel building only when the distro-arch combination
is not unique for the host we're playing against.
This might be overkill, since in the scenario of a single host with a
unique distro-arch combination the overhead of building wheels for
it would be quite small; the alternative would be to hardcode `True` for
venv_wheel_build_enable.
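The uniqueness check above can be sketched like this (a minimal Python illustration of the logic; the function and host names are invented, the real implementation lives in the role's Jinja/Ansible conditionals):

```python
from collections import Counter

# Wheels are only worth building when more than one play host shares the
# same (distro, arch) combination; a unique combination installs from source.
def wheel_build_enabled(play_hosts, this_host):
    combos = Counter(play_hosts.values())
    return combos[play_hosts[this_host]] > 1

hosts = {
    "compute1": ("ubuntu-22.04", "x86_64"),
    "compute2": ("ubuntu-22.04", "x86_64"),
    "arm-node": ("ubuntu-22.04", "aarch64"),
}
print(wheel_build_enabled(hosts, "compute1"))  # combination is shared
print(wheel_build_enabled(hosts, "arm-node"))  # combination is unique
```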
Closes-Bug: #1989506
Depends-On: https://review.opendev.org/c/openstack/openstack-ansible-os_placement/+/858258
Change-Id: I762e36acf76729fd61f28ca1b03bc9f562b5db0a
There may be multiple architectures and OS versions in
ansible_play_hosts. However, with run_once we build wheels only
for one selected distro and do not respect multi-arch/multi-distro
setups.
Instead of run_once, we need to select a single (and first in play)
host for each architecture and distro combination and delegate wheel
building to it. That is needed because venv_build_host is selected based
on the facts gathered for the current inventory_hostname and will
depend on its arch/distro.
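The per-combination host selection can be sketched as follows (host names and facts are invented for illustration; the role expresses this with Ansible group/fact lookups rather than Python):

```python
# Pick the first host in play order for each (distro, arch) combination;
# wheel building is then delegated to each of those hosts instead of a
# single run_once host.
def build_delegates(play_hosts):
    seen = {}
    for host, combo in play_hosts.items():  # dicts preserve play order
        seen.setdefault(combo, host)        # first host per combination wins
    return set(seen.values())

hosts = {
    "repo1": ("ubuntu-22.04", "x86_64"),
    "repo2": ("ubuntu-22.04", "x86_64"),
    "repo3": ("debian-12", "aarch64"),
}
print(sorted(build_delegates(hosts)))  # one builder per combination
```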
Change-Id: I492d17169538ad2768e28f7c48314bdec407ab36
Closes-Bug: #1964535
The conditionals for package installation are getting more complex
than can be handled with the ternary operator, so create per distro
vars files to describe these differences.
Change-Id: I3c2164ca8f610fbef88744d9acfca4e926059c81
We create the virtualenv with a separate command since we need to create
constraints before running pip. Meanwhile, when using pyenv
it becomes messier to use virtualenv, as that would require
setting virtualenv_command, and you can't define virtualenv_python then.
Using executable requires setuptools to be present
for the ansible_python_interpreter.
Change-Id: I5bf617ad0eca3dd5e58e25af7b44d536dc4579d3
python3-venv is a requirement for destination hosts, not only the build host.
Thus, we introduce a new variable `venv_install_base_distro_package_list`,
which acts in a similar way to `venv_build_base_distro_package_list`
except that it installs packages on the install hosts as well.
Change-Id: I64843518b39f930f6983ab51bb3d053c04e6b8ed
This addresses an issue when upgrading between different operating
systems. In this case, the requirements files already exist for
the old OS, which prevents new wheels from being created for the
new OS. By using a different requirements path for each OS this is
avoided.
As part of this, the repeated variable construction for the
requirements path is factored out into a vars file.
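An illustrative construction of an OS-specific requirements path (the exact path layout and variable names in the role differ; this only shows why keying the path on distro/version/arch lets artifacts for the old and new OS coexist during an upgrade):

```python
import os

# Each OS/arch combination gets its own requirements path, so files left
# over from the old OS never shadow the ones for the new OS.
def requirements_path(base, distro, version, arch):
    return os.path.join(base, f"{distro}-{version}-{arch}", "requirements.txt")

print(requirements_path("/var/www/repo", "ubuntu", "22.04", "x86_64"))
print(requirements_path("/var/www/repo", "ubuntu", "24.04", "x86_64"))
```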
Change-Id: I881a40fee31df78bf96e451509671543a49520d9
This patch removes empty records from the venv_pip_packages list
to avoid the pip error they cause.
It also introduces the _venv_pip_packages variable, which optimizes the
process by doing the union and sort only once.
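The cleanup can be sketched like this (a minimal illustration with an invented function name; the role does this with Jinja filters):

```python
# Merge several package lists into one: drop empty records (which would
# make pip error out), then union and sort a single time.
def merged_pip_packages(*package_lists):
    merged = set()
    for pkgs in package_lists:
        merged.update(p for p in pkgs if p)  # skip "" and None entries
    return sorted(merged)

print(merged_pip_packages(["pip", "", "wheel"], ["setuptools", "pip", None]))
```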
Change-Id: Ic94f5a00346e47c394bd2cefc1cfca4ed8c3bdef
ujson 2.0.1 has a dependency on the 'double-conversion' library,
which requires a C++ compiler to build. Ensure that g++ is available,
which also ensures that the gcc compiler is present.
Change-Id: Ie83b41e129942ad8255fed5fb14ab328c2943a77
The first repo server is the only one configured to synchronise
packages to the other repo servers. To ensure that the build
happens on the first server, we reverse the order in which they
are processed in the loop.
This does not solve the multi-architecture use-case. A solution
for that will have to follow.
Change-Id: Ie6bd16ac08164f7dba71339858b98137f61ad716
Currently the python venv build targets the first repo server when one or
more repo servers are found, or localhost otherwise; this is sub-optimal,
especially in environments with mixed architectures and operating systems.
This change sets the python venv build target to a repo server of the
same OS family and CPU arch when one is available; otherwise the build process
is performed on the target host instead of falling back to localhost, which
in many cases is a bastion server.
Change-Id: Ibc30bb90ab1ce1a074d8e93a2d2b36f4dcefb90c
Signed-off-by: cloudnull <kevin@cloudnull.com>
There are some packages which absolutely must be there
for all wheel builds, or for installing without wheels.
Without them, pip is totally unable to compile the
package due to missing headers or tooling.
This patch adds a default, minimal, set of compilers
and python headers.
Rather than using include_vars with with_first_found as we
do in most other roles, we use vars/main and a dict
based on ansible_os_family. The role is often included
by other roles, and we'd rather not risk the search
path being incorrect (there are recurring bugs related
to this in Ansible). Using this mechanism removes
the need for an include_vars task and avoids any pathing
issues.
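The vars/main approach can be sketched in Python terms (the package lists below are illustrative, not the role's actual defaults):

```python
# One dict keyed on the OS family, looked up directly, instead of
# searching the filesystem for per-distro vars files at include time.
_base_packages = {
    "Debian": ["gcc", "g++", "python3-dev"],
    "RedHat": ["gcc", "gcc-c++", "python3-devel"],
}

def base_packages(os_family):
    return _base_packages[os_family]

print(base_packages("Debian"))
```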
Change-Id: I4ef11e47e4d3fe5adc65e9888e660a5a121d205b
Python venvs are not particularly portable. To overcome this
problem we have a bunch of stuff implemented to fix them
after we unpack them. This ranges from relatively simple
things like re-implementing python in the venv and changing
the shebangs for everything in it, to more complex things
like having to package a venv per distro/architecture.
All of this can be eliminated by simplifying the mechanism
into just creating the venv on the target host, and installing
the python packages into it. To help speed up the build, we
can simply build wheels beforehand and store them on a web
server somewhere.
This patch implements the changes to:
1. Do away with packaging the venv.
2. Keep it simple. We install into the venv, and that's that.
3. Add a toggle to rebuild the venv if you'd like to.
4. Use import_tasks with tags for each stage so that it's
easy to skip a portion of what executes.
Change-Id: I708b5cf32e5cce6a18624d0b3be0cd4c828ad389
1. Variables have been renamed to make it easier to
understand their purpose.
2. Unnecessary variables have been removed.
3. The role no longer caters to installing pip packages
on the host. This should never be necessary - if it
is, then something should do so beforehand.
4. The expected versions of pip/virtualenv are documented
and a check has been added to ensure that they exist.
5. The handler has been named to make debug logs less
confusing.
6. The default storage path for venvs/wheels is no longer
opinionated. If paths based on distro/architecture are
required then different paths should be provided to
the role.
Change-Id: I9eb96e9db22f918b00456af943d81f66050107ce