We've just hit an issue where the Ceph repo added by RDO upgraded
setuptools to a version that pip cannot uninstall. This happens mainly
because we're still installing all the Kuryr dependencies into the
system's site-packages. This commit switches us to creating and using a
virtualenv, giving a clean environment in which all the dependencies
are installed through pip.
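A minimal sketch of the approach, with an assumed base image and venv path (the actual Dockerfiles may differ):

```dockerfile
FROM quay.io/centos/centos:stream8

# Create the virtualenv; /opt/venv is an illustrative location.
RUN python3 -m venv /opt/venv

# Installing through the venv's pip keeps everything out of the system
# site-packages, so the RPM-managed setuptools is never touched.
RUN /opt/venv/bin/pip install --upgrade pip \
    && /opt/venv/bin/pip install kuryr-kubernetes

# Make the venv's binaries the default ones.
ENV PATH="/opt/venv/bin:$PATH"
```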
Change-Id: Ieb9fd5ed0251425e9fe172e4a93ad048768ce785
Looks like RDO repos upgrade setuptools to a version that causes
problems when we attempt to upgrade it through pip. This commit pins
the RPM setuptools to a version available in the baseos repos.
This isn't really a great solution (we should probably just start
running in a virtualenv), but let's try to unblock the gates with it.
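One possible way to pin the RPM, sketched with the dnf versionlock plugin; the mechanism and package names here are assumptions, not necessarily what this commit does:

```dockerfile
# Install the baseos setuptools plus the versionlock plugin, then
# freeze setuptools so later repo additions can't upgrade it.
RUN dnf install -y python3-setuptools python3-dnf-plugin-versionlock \
    && dnf versionlock add python3-setuptools
```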
Change-Id: I0b9e02ef1e9227497082a4a3cd803c52ad2789fb
Due to the EOL of CentOS 8 we need to move forward and adopt CentOS
Stream for the container images of the Kuryr CNI and controller.
A modification was applied to work around (hopefully transient) issues
with variable expansion in yum/dnf, so the affected variables need to
be replaced manually.
And finally, the deprecated yum was switched to dnf.
Change-Id: I19c16877e8ba6f401c9d76ed70b2380c4e3cfbe0
This forces golang modules, which we don't support, to be off. That's
important to allow kuryr-cni to be built with Go 1.16, where modules
are on by default.
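The change boils down to disabling module mode for the build; a sketch with illustrative paths:

```dockerfile
# Go 1.16 turns modules on by default; force the legacy GOPATH build.
ENV GO111MODULE=off
RUN go build -o /usr/bin/kuryr-cni ./kuryr_cni
```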
Change-Id: I058ab8d9e5e7df37efeee278ff4652de5f6861f3
Seems like quay.io/app-sre is no longer available to the public and
builds fail. This commit fixes that by using registry.centos.org to get
the centos:8 container image and our own quay.io/kuryr to host the
golang:1.15 image.
Change-Id: I044092e83b1a525ffd7692971a2e3313dfa1e421
Seems like we were still using Ussuri's RDO to get the openvswitch
package, and now that it's broken this has become an issue. This commit
updates it to use the Victoria RDO release.
Change-Id: Ide317ac064dcc2a1a2e2bdbf8129bd9021f57a0d
Using the --no-cache-dir flag with pip install makes sure packages
downloaded by pip are not cached on the system. This is a best practice
that ensures packages are fetched from the repo instead of a local
cache. Furthermore, in the case of Docker containers, disabling
caching reduces the image size.
In terms of stats, the savings depend on the number of Python packages
multiplied by their respective sizes; e.g. for heavy packages with a
lot of dependencies, not caching pip packages saves a lot of space.
More detailed information can be found at
https://medium.com/sciforce/strategies-of-docker-images-optimization-2ca9cc5719b6
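For example, as a Dockerfile line (the requirements file name is illustrative):

```dockerfile
# Nothing is written to pip's cache directory, so the image layer
# contains only the installed packages themselves.
RUN pip install --no-cache-dir -r requirements.txt
```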
Change-Id: I35b33ea50afce70b687762dba8b18f3f2be60e03
Signed-off-by: Pratik Raj <rajpratik71@gmail.com>
Turns out upgrading pip enables the grpcio PyPI package to use wheels
to install binaries, avoiding the need to compile it every time. This
saves a ton of time when building containers.
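A sketch of the relevant Dockerfile change (illustrative, not the exact lines from the commit):

```dockerfile
# A new enough pip resolves grpcio to a prebuilt manylinux wheel
# instead of compiling the C extension from source.
RUN pip install --upgrade pip \
    && pip install grpcio
```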
Change-Id: I6e4a5f9fddd24b8e88c62b444e8b305ade3f7f2a
Dependencies needed by openvswitch are not present on
the rdo-release-train-1 rpm. We need to update it
to make sure they're present.
Change-Id: I5050d7b7e49f2d0126c9daf449f20aa7d84331c6
Somehow an update to centos repos and the fact that docker.io centos
containers weren't updated for a while broke us. To fix this we need to
make sure RPMs in the container are upgraded, otherwise `yum history
undo last` fails miserably with missing packages errors.
Also this commit makes sure installation dies when we're unable to build
containers.
Change-Id: I29e19e13aa22047bfa07817a7794fc18612bbc32
Seems like grpcio 1.28.1 requires C++ compiler to build. This commit
fixes our build issues by adding one to the containers.
Change-Id: I8421d066160774431f72e38d36b32870f7f56b4c
Make the package repo (by default RDO master) an ARG so it becomes
configurable during docker build, which is needed for offline or
specific-version builds.
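A sketch of the pattern, with an assumed variable name and URL rather than the exact ones from the commit:

```dockerfile
# Overridable at build time, e.g.:
#   docker build --build-arg RDO_REPO_RPM=<mirror-url> .
ARG RDO_REPO_RPM=https://rdoproject.org/repos/rdo-release.rpm
RUN yum install -y ${RDO_REPO_RPM}
```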
Change-Id: I001dc69ec51b893070895e0fbb37aab8640e6fe4
Add DPDK support for nested K8s pods. The patch includes a new VIF
driver on the controller and a new CNI binding driver.
This patch introduces a dependency on os-vif 1.12.0, since it adds a
new VIF type.
Change-Id: I6be9110192f524325e24fb97d905faff86d0cfef
Implements: blueprint nested-dpdk-support
Co-Authored-By: Kural Ramakrishnan <kuralamudhan.ramakrishnan@intel.com>
Co-Authored-By: Marco Chiappero <marco.chiappero@intel.com>
Signed-off-by: Alexey Perevalov <a.perevalov@samsung.com>
Signed-off-by: Danil Golov <d.golov@samsung.com>
Ussuri is the release in which we drop Python 2 support, as its EOL is
pretty close now. This commit does so in kuryr-kubernetes by removing
Python 2 unit test jobs, switching all tempest jobs to Python 3,
removing Python 3-specific jobs and updating the Dockerfiles to
centos:8, which includes Python 3 out of the box.
Also, the CentOS 7 job is removed from the check queue as it doesn't
seem to play well with Python 3. A CentOS 8 job will get created soon.
Change-Id: Id9983d2fd83cef89e3198b2760816cf4a851008b
In a rather desperate attempt to shrink our container images, this
commit adds `yum/dnf clean all` to the building process. This helps to
save around 100 MB in the case of centos-based images.
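The cleanup has to happen in the same RUN as the install, otherwise the package cache is already baked into an earlier layer; a sketch with illustrative package names:

```dockerfile
# `dnf clean all` in the same RUN keeps the metadata and package cache
# out of the resulting image layer entirely.
RUN dnf install -y python3 openvswitch \
    && dnf clean all
```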
Change-Id: I2aaadab4ffec6e0ad744e82fc9145cd86e14a224
This commit reimplements kuryr-cni, the actual CNI plugin that gets
called by kubelet and passes the request to kuryr-daemon, in golang.
This means that it can be injected as a binary without any
dependencies, instead of using a bash script that looks up the ID of
the kuryr-daemon container and does `docker exec` to run the Python
kuryr-cni inside it. The obvious advantage is removing the constraint
that python, curl and docker/runc binaries must be available on every
K8s host that runs Kuryr. This enables integration with Magnum, where
kubelet runs in such a minimal container. Besides that, injecting a
binary is way more elegant and less error-prone.
The golang implementation should keep compatibility with the Python
one. Also, currently only containerized jobs are switched to use it,
so the Python implementation is still kept in the repo. I'm not against
removing it in the very near future.
Please note that there is an important limitation in comparison to the
Python implementation: with the golang binary running on the K8s host
we don't have easy access to kuryr.conf, meaning that localhost:50036
is currently hardcoded as the kuryr-daemon endpoint. This should be
fixed by putting the configured endpoint into the 10-kuryr.conf file
that gets injected onto the host by the cni_ds_init script.
Implements: blueprint golang-kuryr-cni
Change-Id: Ia241fb5b2937c63d3ed6e3de1ac3003e370e4db6
We use git.openstack.org/cgit to fetch global upper-constraints.txt file
in our Dockerfiles. That is currently only a redirect and we should get
it switched to use opendev.org infra. This commit does so.
Change-Id: I32945c6b5426b6274c180a4a90dad09c414977b2
The PodResources client can be used by the sriov CNI to obtain devices
allocated to a container by the sriov device plugin.
The KubeletPodResources service is still in alpha, so it has to be
explicitly enabled in the kubelet feature gates:
kubelet --feature-gates KubeletPodResources=true
A new config option 'kubelet_root_dir' is added to the 'sriov' section,
defaulting to kubelet's default root dir '/var/lib/kubelet'. In case
kubelet is started with a non-default root directory passed via the
'--root-dir' option, the same value should be configured in
'kubelet_root_dir'.
Note that if the sriov binding driver is used inside a container, the
'kubelet_root_dir'/pod-resources directory should be mounted into that
container in order to allow communication with kubelet via the gRPC
protocol over the unix domain socket.
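The resulting option, sketched as a kuryr.conf fragment (the value shown is kubelet's default):

```ini
[sriov]
# Must match kubelet's --root-dir if a non-default one is used.
kubelet_root_dir = /var/lib/kubelet
```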
Partial-Bug: 1826865
Depends-On: https://review.openstack.org/#/c/652629
Change-Id: Icf088b839db079efe9c7647c31be4ead867ed32b
Signed-off-by: Ilya Maximets <i.maximets@samsung.com>
This commit does several cleanups in the Dockerfiles that we have:
* git is removed from the images after the Kuryr packages installation
* jq and wget are removed from the kuryr-cni image as they are no
  longer used
* explicit setuptools installation is no longer required
* the raw Kuryr code is removed from the images after it's `pip
  install`ed
* the unnecessary VOLUME line is removed from the kuryr-cni Dockerfile
* the CNI_CONFIG_DIR and CNI_BIN_DIR build arguments are removed from
  the kuryr-cni Dockerfile as they are not used anywhere. Initially we
  kept them to allow the deployer to tell where the host's
  /etc/cni/net.d and /opt/cni/bin will be mounted, but one of the
  refactorings of cni_ds_init must have stopped depending on them and
  we simply started to expect the mounts to be in the same paths as on
  the host. We can continue to do that.
The build_cni_daemonset_image script was created back when we had a
multi-stage build of the kuryr-cni image. This is no longer the case
and building the image is as easy as:
`docker build -f cni.Dockerfile .`
Given that, this commit removes the script and updates the
documentation to recommend using `docker build` directly.
Change-Id: Ib1807344ede11ec6845e5f09c5a87c29a779af03
This commit fixes container creation, which is broken due to centos
shipping an older version of setuptools. [1]
[1] https://github.com/openaps/openaps/issues/95
Closes-Bug: 1778048
Change-Id: Ifbce25a5a49cbc3df58850b10819843e640c8f26
Upper constraints weren't applied to the installation in the
Dockerfiles, so pip installed dependencies beyond them. This commit
fixes that by defining a new env var for the constraints file and
passing it to pip.
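A sketch of the mechanism, with an assumed variable name rather than the exact one from the commit:

```dockerfile
# Overridable at build time with --build-arg; pip's -c option applies
# the upper-constraints file to every resolved dependency.
ARG UPPER_CONSTRAINTS_FILE=https://releases.openstack.org/constraints/upper/master
RUN pip install -c ${UPPER_CONSTRAINTS_FILE} .
```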
Closes-Bug: #1763752
Change-Id: Id126fee033db6f150ad95c94682eb56b4b2cea03
This commit changes the way kuryr-cni is executed in containerized
deployments. Now it'll use the `docker exec` command to execute
kuryr-cni inside the CNI container. This should make it easier for
deployers to consume.
To be able to do this I needed to stop mounting the host's /etc
directory. I believe this was unnecessary and was blocking curl from
working in isolation from the host OS.
Closes-Bug: 1757531
Change-Id: I373d65536a43eab98f0fc708936b97637f82eaff
This commit implements what was discussed on the PTG, i.e. deprecation
of running Kuryr-Kubernetes without kuryr-daemon services. This commit
includes changes in configuration defaults, sample local.conf files,
documentation, gates and a release note explaining the change.
Change-Id: I152c81797cb83237af4917a4487cb1f1918270aa
This commit adds creating a directory for lockfiles in the CNI Docker
image. As in oslo.concurrency `lock_path` option defaults to
`OSLO_LOCK_PATH` environment variable, this variable is also set to
point to that directory.
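Sketched as Dockerfile lines (the directory path is an assumption):

```dockerfile
# oslo.concurrency's lock_path option defaults to $OSLO_LOCK_PATH, so
# pointing the variable at a pre-created directory is enough.
RUN mkdir -p /var/lib/kuryr/lock
ENV OSLO_LOCK_PATH=/var/lib/kuryr/lock
```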
Change-Id: Ia69c75c34a8da4281414395805f4927de1e91a39
Closes-Bug: 1754636
This commit changes the way we produce the kuryr-cni Docker image.
Previously we distributed the kuryr-driver as a pyinstaller binary
that contained a Python 3 interpreter and all the dependencies. This
binary was called from CNI. That approach had some disadvantages, the
major ones being a complicated build procedure and false-positive
BrokenPipeError tracebacks in the kubelet logs.
This commit implements distributing the kuryr-driver as a virtualenv
with kuryr-kubernetes and all the dependencies installed. That
virtualenv is then copied onto the host system and CNI can easily
activate it and run the kuryr-cni binary. This should solve the issues
caused by pyinstaller.
Closes-Bug: 1747058
Change-Id: I65b01ba27cbe39b66f0a972d12f3abc166934e62
This commit implements kuryr-daemon support when
KURYR_K8S_CONTAINERIZED_DEPLOYMENT=True. It's done by:
* The CNI docker image installs the Kuryr-Kubernetes pip package and
  adds execution of kuryr-daemon to the entrypoint script.
* The host's /proc and /var/run/openvswitch are mounted into the CNI
  container.
* The code is changed to use /host_proc instead of /proc when in a
  container (it's impossible to mount the host's /proc onto the
  container's /proc).
Implements: blueprint cni-split-exec-daemon
Change-Id: I9155a2cba28f578cee129a4c40066209f7ab543d
Containerized deployment through DevStack had two bugs related to
mismatches in handling environment variables in the Dockerfiles:
1. cni.Dockerfile was using ENV vars to define the CNI bin and conf
directories, but DevStack wasn't setting them correctly when building
the images. This resulted in CNI binaries and configs ending up in the
wrong directories when deploying through DevStack. This is fixed by
passing $CNI_BIN_DIR and $CNI_CONF_DIR into the build function.
2. The cni_builder script used $CNI_BIN_DIR_PATH, but it was only
defined in cni.Dockerfile and was missing from cni_builder.Dockerfile.
This resulted in a malformed kuryr-cni script that pointed to a
non-existing "/kuryr-cni-bin" file. This is fixed by adding those ENV
vars to cni_builder.Dockerfile.
Change-Id: I4833124231f256b74f80bd5fee732686bffab77e
Closes-Bug: 1718137
Make the CNI config and binary locations parametrized so it is suitable
for more kinds of deployment.
Implements: blueprint kubeadminstallable
Change-Id: I01c7540641fe120faec902008ebd842339b50384
Signed-off-by: Antoni Segura Puimedon <antonisp@celebdor.com>
Co-Authored-By: Michał Dulko <mdulko@redhat.com>