10-kuryr.conflist file from the template file kuryr.conflist.template
1. Currently kubelet's CNI config file in kuryr is 10-kuryr.conf.
Kubernetes supports config files with two suffixes, ".conf" and
".conflist"; the latter can be a list containing multiple CNI plugins.
So I want to rename 10-kuryr.conf to 10-kuryr.conflist to satisfy
the need for multiple-plugin support.
2. If I install kuryr-cni by pod, 10-kuryr.conf is only copied
and overwritten from the kuryr/cni image container. What I
expect is that we can customize the config file more freely. So I
think we can add another file, kuryr.conflist.template, as the
template, and then generate the 10-kuryr.conflist from it.
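The difference between the two formats, and the template rendering described above, can be sketched as follows. The field values and the appended-plugin mechanism are illustrative assumptions, not the exact contents of kuryr.conflist.template:

```python
import json

# A ".conflist" holds a chain of CNI plugins under "plugins", while a
# ".conf" describes a single plugin. Values below are illustrative.
TEMPLATE = {
    "cniVersion": "0.3.1",
    "name": "kuryr",
    "plugins": [
        {"type": "kuryr-cni", "kuryr_conf": "/etc/kuryr/kuryr.conf"},
    ],
}


def render_conflist(template, extra_plugins=None):
    """Render a 10-kuryr.conflist from the template dict, optionally
    appending user-supplied plugins to the chain (the point of moving
    away from the single-plugin .conf format)."""
    result = dict(template)
    result["plugins"] = list(template["plugins"]) + list(extra_plugins or [])
    return json.dumps(result, indent=2)
```

For example, `render_conflist(TEMPLATE, [{"type": "portmap"}])` would produce a chain running kuryr-cni first and portmap after it.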
Change-Id: Ie3669db4e60a57b24012124cd24b524eb03f55cf
This commit reimplements kuryr-cni, i.e. the actual CNI plugin
that gets called by kubelet and passes the request to kuryr-daemon, in
golang. This means that it can be injected as a binary without any
dependencies, instead of using a bash script that looks up the ID of
the kuryr-daemon container and does `docker exec` to run the Python
kuryr-cni inside it. The obvious advantage of that is removing the
constraint that python, curl and docker/runc binaries must be
available on any K8s host that runs Kuryr. This enables integration
with Magnum, where kubelet runs in such a minimal container. Besides
that, injecting a binary is way more elegant and less error-prone.
The golang implementation should keep compatibility with the Python
one. Also, currently only containerized jobs are switched to use it,
so the Python implementation is still kept in the repo. I'm not
against removing it in the very near future.
Please note that there is an important limitation in comparison to the
Python implementation: with the golang binary running on the K8s host,
we don't have easy access to kuryr.conf, meaning that localhost:50036
is currently hardcoded as the kuryr-daemon endpoint. This should be
fixed by putting the configured endpoint into the 10-kuryr.conf file
that gets injected onto the host by the cni_ds_init script.
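The plugin's job of forwarding the CNI request to the daemon can be sketched like this (in Python for brevity, though the binary is golang). The /addNetwork and /delNetwork paths and the payload shape are assumptions for illustration, not the exact daemon API:

```python
import json
import urllib.request

# The hardcoded endpoint mentioned in the commit message.
DAEMON_URL = "http://localhost:50036"


def build_daemon_request(verb, cni_env, stdin_config):
    """Build the HTTP request the CNI binary would send to kuryr-daemon:
    the network config read from stdin, enriched with the CNI_* variables
    kubelet sets in the plugin's environment."""
    payload = json.loads(stdin_config)
    payload["CNI_IFNAME"] = cni_env.get("CNI_IFNAME")
    payload["CNI_CONTAINERID"] = cni_env.get("CNI_CONTAINERID")
    path = {"ADD": "/addNetwork", "DEL": "/delNetwork"}[verb]
    return urllib.request.Request(
        DAEMON_URL + path,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
```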
Implements: blueprint golang-kuryr-cni
Change-Id: Ia241fb5b2937c63d3ed6e3de1ac3003e370e4db6
Deploying without kuryr-daemon has been deprecated since Rocky, and
the Rocky release notes announced that it would be removed. This
commit removes all the code that allows it, and updates the
documentation, DevStack plugin and gate definitions.
Implements: blueprint remove-non-daemon
Change-Id: I65598d4a6ecb5c3dfde04dc5fefd7b02fc72a0cb
If your kuryr-cni pod definition consists of multiple containers (e.g.
you've added an initContainer), the cni_ds_init script is prone to
errors when looking up the container running kuryr-daemon. This is
because it only looks at the pod name and namespace. This commit
extends the lookup by adding container.name=kuryr-cni to the
conditions.
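The lookup conditions can be sketched as Docker API label filters; this is a sketch of the idea, not the literal code in cni_ds_init:

```python
def daemon_container_filters(pod_name, pod_namespace,
                             container_name="kuryr-cni"):
    """Filters for looking up the kuryr-daemon container via the Docker
    API, using the labels kubelet puts on containers it starts. Adding
    io.kubernetes.container.name makes the lookup unambiguous when the
    pod also runs other containers, such as an initContainer."""
    return {
        "label": [
            "io.kubernetes.pod.name=%s" % pod_name,
            "io.kubernetes.pod.namespace=%s" % pod_namespace,
            "io.kubernetes.container.name=%s" % container_name,
        ]
    }
```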
Change-Id: Ib45f6f9f366a62d373c6f37def07e7dcc862726d
Closes-Bug: 1808969
This commit does several cleanups to the Dockerfiles that we have:
* git is removed from the images after Kuryr packages installation
* jq and wget are removed from the kuryr-cni image as they are no
  longer used
* explicit setuptools installation is no longer required
* raw Kuryr code is removed from the images after it's `pip install`ed
* the unnecessary VOLUME line is removed from the kuryr-cni Dockerfile
* the CNI_CONFIG_DIR and CNI_BIN_DIR build arguments are removed from
  the kuryr-cni Dockerfile as they are not used anywhere. Initially we
  kept them to allow the deployer to tell where the host's
  /etc/cni/net.d and /opt/cni/bin would be mounted, but one of the
  refactorings of cni_ds_init must have stopped depending on them and
  we simply started to expect the mounts to be at the same paths as on
  the host. We can continue to do that.
The build_cni_daemonset_image script was created back when we had a
multi-stage build of the kuryr-cni image. This is no longer the case
and building the image is as easy as:
`docker build -f cni.Dockerfile .`
Given that, this commit removes the script and updates the
documentation to recommend using `docker build` directly.
Change-Id: Ib1807344ede11ec6845e5f09c5a87c29a779af03
This commit adds support for cri-o by changing the binary used to run
the CNI plugin to runc, falling back to docker only when runc is not
available.
DevStack support for installing and configuring Kubernetes with cri-o
is also added.
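The fallback can be sketched as follows (a sketch of the logic described above; the real implementation lives in the injected shell script, and `which` is injectable here purely for testability):

```python
import shutil


def pick_exec_binary(which=shutil.which):
    """Pick the binary used to exec into the CNI container: prefer runc
    (the only option on cri-o hosts) and fall back to docker when runc
    is not on the PATH."""
    for binary in ("runc", "docker"):
        if which(binary):
            return binary
    raise RuntimeError("neither runc nor docker is available")
```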
Implements: blueprint crio-support
Depends-On: Ib049d66058429e499f5d0932c4a749820bec73ff
Depends-On: Ic3c7d355a455298f43e37fb2aceddfd1e7eefaf2
Change-Id: I081edf0dbd4eb57826399c4820376381950080ed
This makes the right main process be reported for the CNI daemon
container.
Closes-Bug: 1792539
Change-Id: Ic57fbe20b7bf396ea92e0c2cbcca42814ae2a119
Signed-off-by: Antoni Segura Puimedon <celebdor@gmail.com>
In the Kuryr CNI container's entrypoint we were talking to the K8s API
to get the current container's CONTAINERID. This worked fine in most
cases, but in busier environments the value may not be saved into the
K8s API yet, and we end up with "null" as the CONTAINERID. This
obviously breaks the kuryr-cni script that's being injected onto the
host.
Instead of implementing retries on "null", this commit takes another
approach and fetches the CONTAINERID from the Docker API.
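The Docker API lookup can be sketched as a query against the engine's `GET /containers/json` endpoint; the exact labels used are an assumption for illustration:

```python
import json
import urllib.parse


def containers_json_query(pod_name, pod_namespace):
    """Query string for Docker's GET /containers/json endpoint,
    selecting this pod's containers by the labels kubelet sets. Unlike
    the K8s API, Docker knows the container ID as soon as the container
    runs, so there is no window where the ID is still "null"."""
    filters = {
        "label": [
            "io.kubernetes.pod.name=%s" % pod_name,
            "io.kubernetes.pod.namespace=%s" % pod_namespace,
        ]
    }
    return "/containers/json?" + urllib.parse.urlencode(
        {"filters": json.dumps(filters)})
```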
Closes-Bug: 1777133
Change-Id: If0bbd55c4dc03077132b140a9a12cf6bd0f0cd03
This commit changes the way kuryr-cni is executed in containerized
deployments. Now it uses the `docker exec` command to execute
kuryr-cni inside the CNI container. This should make it easier for
deployers to consume.
To be able to make these changes I needed to stop mounting the host's
/etc directory. I believe this was unnecessary and was blocking curl
from working in isolation from the host OS.
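The `docker exec` invocation can be sketched as below; the command construction is a sketch of the approach, not the literal script that was added:

```python
def build_exec_cmd(container_id, cni_env, plugin="kuryr-cni"):
    """Command line for running the Python kuryr-cni inside the already
    running CNI container via `docker exec`, forwarding only the CNI_*
    variables kubelet sets for the plugin."""
    cmd = ["docker", "exec", "-i"]
    for key, value in sorted(cni_env.items()):
        if key.startswith("CNI_"):
            cmd += ["--env", "%s=%s" % (key, value)]
    return cmd + [container_id, plugin]
```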
Closes-Bug: 1757531
Change-Id: I373d65536a43eab98f0fc708936b97637f82eaff
This commit changes the way we produce the kuryr-cni Docker image.
Previously we distributed the kuryr-driver as a pyinstaller binary
that contained a Python 3 interpreter and all the dependencies. This
binary was called from CNI. That approach had some disadvantages, the
major ones being a complicated build procedure and false-positive
BrokenPipeError tracebacks in kubelet logs.
This commit implements distributing the kuryr-driver as a virtualenv
with kuryr-kubernetes and all the dependencies installed. That
virtualenv is then copied onto the host system, and CNI can easily
activate it and run the kuryr-cni binary. This should solve the issues
caused by pyinstaller.
Closes-Bug: 1747058
Change-Id: I65b01ba27cbe39b66f0a972d12f3abc166934e62
This commit implements kuryr-daemon support when
KURYR_K8S_CONTAINERIZED_DEPLOYMENT=True. It's done by:
* The CNI docker image installs the Kuryr-Kubernetes pip package and
  adds execution of kuryr-daemon to the entrypoint script.
* The host's /proc and /var/run/openvswitch are mounted into the CNI
  container.
* Code is changed to use /host_proc instead of /proc when in a
  container (it's impossible to mount the host's /proc over the
  container's /proc).
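The path switch in the last point can be sketched as follows; detecting the container case by the mount's existence is an illustrative simplification, not necessarily how the code decides:

```python
import os


def proc_path():
    """Return the procfs path to use: inside the CNI container the
    host's /proc is mounted at /host_proc (mounting it over the
    container's own /proc is impossible), so prefer it when present."""
    return "/host_proc" if os.path.isdir("/host_proc") else "/proc"
```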
Implements: blueprint cni-split-exec-daemon
Change-Id: I9155a2cba28f578cee129a4c40066209f7ab543d
Turns out the 99-loopback.conf file isn't really required when running
kuryr-kubernetes. This commit removes its installation from the
DevStack plugin and the CNI Docker image.
Change-Id: I8f2097287df907675c4113cd225a7ee9f6cd7ef1
Make the CNI config and binary locations parameterized so the setup is
suitable for more kinds of deployments.
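A minimal sketch of such parameterization, assuming environment variables named after the CNI_CONFIG_DIR/CNI_BIN_DIR convention seen elsewhere in the repo:

```python
import os


def cni_paths():
    """Resolve the CNI config and binary directories from the
    environment, falling back to the usual defaults, so deployments
    like kubeadm can relocate them."""
    return (os.environ.get("CNI_CONFIG_DIR", "/etc/cni/net.d"),
            os.environ.get("CNI_BIN_DIR", "/opt/cni/bin"))
```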
Implements: blueprint kubeadminstallable
Change-Id: I01c7540641fe120faec902008ebd842339b50384
Signed-off-by: Antoni Segura Puimedon <antonisp@celebdor.com>
Co-Authored-By: Michał Dulko <mdulko@redhat.com>