* in 1.20 the insecure port 8080 is not supported anymore
** use only 6443
** change all health probes to use kubectl and port 6443
* configure the signing key in API
story: 2008524
task: 41731
Change-Id: Ibaf1840214016d2dd6ac15e2137eb3cd3d767889
Signed-off-by: Spyros Trigazis <spyridon.trigazis@cern.ch>
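A hedged sketch of what such a probe might look like after the change (the kubeconfig path is illustrative, not the actual template contents): query apiserver health through the secure port with kubectl instead of the insecure localhost:8080 endpoint removed in 1.20.

```sh
# Illustrative only: check /healthz via the secure port 6443 with the
# admin kubeconfig; the insecure 8080 endpoint no longer exists in 1.20.
kubectl --kubeconfig /etc/kubernetes/admin.conf \
    get --raw='/healthz' --server=https://127.0.0.1:6443
```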
Without this, heat container agents using kubectl version
1.18.x (e.g. ussuri-dev) fail because they do not have the correct
KUBECONFIG in the environment.
Task: 39938
Story: 2007591
Change-Id: Ifc212478ae09c658adeb6ba4c8e8afc8943e3977
A regression was introduced by I970c1b91254d2a375192420a9169f3a629c56ce7
which means that deployments where use_podman is unspecified or false
fail because `podman image inspect` is not scoped by this check.
Story: 2007001
Task: 38844
Change-Id: I6a08312693caf8a52174a1ff199d205d54076ee9
Signed-off-by: Bharat Kunwar <brtknr@bath.edu>
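A minimal sketch (not the actual template code) of the scoping the fix implies: podman-specific commands must run only when use_podman is enabled, otherwise `podman image inspect` runs on docker-based deployments and fails. The variable and function names here are assumptions.

```shell
#!/bin/sh
# Hypothetical guard: default to docker behaviour unless the cluster
# was deployed with use_podman=true.
use_podman="${USE_PODMAN:-false}"

image_inspect_cmd() {
    # Emit the command that would run instead of executing it, so the
    # scoping logic is visible without a container runtime installed.
    if [ "$use_podman" = "true" ]; then
        echo "podman image inspect hyperkube"
    else
        echo "skip: docker deployment, no podman inspect"
    fi
}

image_inspect_cmd
```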
Since we're using a public container registry as the default registry,
it would be nice to have a verification for the image's digest.
Kubernetes already supports this, so users can simply use a format like
@sha256:xxx for those addons' tags. This patch introduces the support
for hyperkube based on podman and the fedora coreos driver.
Task: 37776
Story: 2007001
Change-Id: I970c1b91254d2a375192420a9169f3a629c56ce7
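For illustration, a digest-pinned image under this change might look like the following cluster label; the tag and digest values are placeholders, not real ones.

```
# Pin hyperkube by digest instead of a mutable tag; kubernetes accepts
# image references of the form name@sha256:<digest>.
kube_tag: "v1.15.7@sha256:xxx"
```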
In the event that master/minion instances restart before the
heat container agent bootstrapping is complete, it is safer to enable
all the services before starting them so that they can restore normal
function after reboot.
story: 2007031
task: 37835
Change-Id: Ic5c7851d6603d23e554b2df88b5deefb30dd74b9
Signed-off-by: Bharat Kunwar <brtknr@bath.edu>
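The ordering above can be sketched as follows; the service names are illustrative, not the exact list from the templates.

```sh
# Enable all units first so they come back after an unexpected reboot,
# then start them; a reboot between the two loops is now safe.
for service in etcd kube-apiserver kube-controller-manager kubelet; do
    systemctl enable "${service}"
done
for service in etcd kube-apiserver kube-controller-manager kubelet; do
    systemctl start "${service}"
done
```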
Using the atomic cli to install kubelet breaks mount
propagation of secrets, configmaps and so on. Using podman
in a systemd unit works.
Additionally, with this change all atomic commands are dropped, and
containers are pulled from gcr.io (official kubernetes containers).
Finally, after this patch we can use fedora coreos as a drop-in
replacement simply by starting the heat-agent with ignition.
* Drop del of docker0
This command to remove docker0 was carried over from
earlier versions of docker. It is not an issue
anymore.
story: 2006459
task: 36871
Change-Id: I2ed8e02f5295e48d371ac9e1aff2ad5d30d0c2bd
Signed-off-by: Spyros Trigazis <spyridon.trigazi@cern.ch>
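A minimal sketch of the podman-in-systemd approach described above, assuming a hyperkube image from gcr.io; the unit name, image tag, flags and mounts are illustrative, not the actual template contents.

```ini
[Unit]
Description=kubelet via podman
After=network-online.target

[Service]
# Run kubelet from the official gcr.io hyperkube image; running it as a
# plain systemd unit keeps mount propagation of secrets and configmaps
# working, unlike the atomic cli install.
ExecStartPre=-/usr/bin/podman rm --force kubelet
ExecStart=/usr/bin/podman run --name kubelet --privileged --net host \
    --pid host \
    --volume /etc/kubernetes:/etc/kubernetes:ro \
    k8s.gcr.io/hyperkube:v1.15.7 kubelet
Restart=always

[Install]
WantedBy=multi-user.target
```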
Due to [0], we cannot label nodes with
node-role.kubernetes.io/master="". We need to do it through the
kubernetes API.
[0] https://github.com/kubernetes/kubernetes/issues/75457
story: 2006459
task: 36872
Change-Id: I2dc2a125c49f9fc33aa02d3d0c99a5bb0eec1156
Signed-off-by: Spyros Trigazis <spyridon.trigazi@cern.ch>
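The API-based labelling can be sketched as below; the kubeconfig path and node-name derivation are assumptions.

```sh
# kubelet can no longer self-apply node-role labels [0], so apply the
# master label through the API after the node registers.
kubectl --kubeconfig /etc/kubernetes/admin.conf \
    label node "$(hostname -s)" node-role.kubernetes.io/master="" --overwrite
```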
Rolling upgrade is an important feature for a managed k8s service.
At this stage, two use cases will be covered:
1. Upgrade the base operating system
2. Upgrade the k8s version
Known limitation: when doing an operating system upgrade, there is no
chance to call kubectl drain to evict pods on that node.
Task: 30185
Story: 2002210
Change-Id: Ibbed59bc135969174a20e5243ff8464908801a23
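The k8s-version upgrade path can be sketched node by node as below; note that, per the limitation above, the drain step has no equivalent hook in the OS-upgrade case.

```sh
# Illustrative rolling-upgrade flow for a single node.
kubectl drain "${NODE}" --ignore-daemonsets --delete-local-data
# ... upgrade the node (new image or new k8s version) ...
kubectl uncordon "${NODE}"
```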
Add kubelet on the master nodes. This work was
already done for calico; this patch applies the
same config for the other network drivers as well.
story: 2003521
task: 24797
Change-Id: Id33fb59ef23da740712d9a9b7ec4205bd6579b35
1. Pods with host networking cannot reach coredns or any svc, or resolve
their own hostname.
2. If webhooks are deployed in the cluster, the apiserver needs to
contact them, which means kube-proxy is required on the master node with
the cluster-cidr set.
Change-Id: Icb8e7c3b8c75a3ab087c818c8580c0c8a9111d30
story: 2003460
task: 24719
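Point 2 above can be sketched as the following kube-proxy invocation on the master; the kubeconfig path and CIDR value are illustrative.

```sh
# kube-proxy on the master with the cluster CIDR set, so the apiserver
# can reach webhook services and hostNetwork pods can reach svc IPs.
kube-proxy --kubeconfig=/etc/kubernetes/proxy.conf \
    --cluster-cidr=10.100.0.0/16
```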
By the current design, pods under kube-system run on minion nodes. And
given that we're not running kubelet on the master node, calico-node is
not running on the k8s master node. As a result, kubectl proxy does not
work for accessing the dashboard. It is confirmed with the calico team
that the calico-node container must be running on the master node if
users want to use kubectl proxy, see [1]. So the solution is to enable
kubelet on the master but disallow other pods from being scheduled there
with taints/tolerations.
Besides, this patch includes another fix for running calico on
Fedora Atomic. Because Fedora Atomic uses NetworkManager, it
manipulates the routing table for interfaces in the default network
namespace, where Calico veth pairs are anchored for connections to
containers. This can interfere with the Calico agent's ability to
route correctly. Please see more information about this at [2].
[1] https://docs.projectcalico.org/v3.0/getting-started/kubernetes/
installation/integration#about-the-calico-components
[2] https://docs.projectcalico.org/master/usage/troubleshooting/
#configure-networkmanager
Closes-Bug: #1751978
Change-Id: Iacd964806a28b3ca6ba3e037c60060f0957d44aa
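The taint/toleration arrangement can be sketched as below; the taint key and value are assumptions, not necessarily the ones used.

```yaml
# Taint the master, e.g.:
#   kubectl taint nodes master-0 dedicated=master:NoSchedule
# then let calico-node (and only pods carrying this toleration) run there:
tolerations:
- key: dedicated
  value: master
  effect: NoSchedule
```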
Add a new label 'cert_manager_api' to kubernetes clusters to
enable/disable the kubernetes certificate manager api.
The same cluster cert/key pair is used by this api. The heat agent is used
to install the key on the master node(s), as this is required for kubernetes
to later sign new certificate requests.
The master template init order is changed so that the heat agent is launched
prior to enabling the services: the controller manager requires the CA key
to be locally available before it is launched.
Change-Id: Ibf85147316e3a194d8a3f92cbb4ae9ce8e16c98f
Partial-Bug: #1734318
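For illustration, the label would be set at cluster creation time, e.g. (cluster and template names are placeholders):

```sh
# Enable the kubernetes certificate manager api on a new cluster.
openstack coe cluster create my-cluster \
    --cluster-template k8s-template \
    --labels cert_manager_api=true
```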
Because the fedora atomic driver required several small connected
patches, this change squashes 4 smaller patches.
Patch 1:
k8s: Do not start kubelet and kube-proxy on master
Patch [1] missed the removal of kubelet and kube-proxy from
enable-services-master.sh, and therefore they are started if they
exist in the image, or the script fails.
[1] https://review.openstack.org/#/c/533593/
Closes-Bug: #1726482
Patch 2:
k8s: Set require-kubeconfig when needed
From kubernetes 1.8 [1] --require-kubeconfig is deprecated and
in kubernetes 1.9 it is removed.
Add --require-kubeconfig only for k8s <= 1.8.
[1] https://github.com/kubernetes/kubernetes/issues/36745
Closes-Bug: #1718926
https://review.openstack.org/#/c/534309/
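The version gate described above can be sketched as follows; the helper name is an assumption, not the template's actual code. --require-kubeconfig is deprecated in kubernetes 1.8 and removed in 1.9, so the flag is added only for k8s <= 1.8.

```shell
#!/bin/sh
needs_require_kubeconfig() {
    # True when the given version sorts at or below the 1.8 series.
    [ "$(printf '%s\n' "$1" "1.8.99" | sort -V | head -n1)" = "$1" ]
}

KUBELET_ARGS=""
if needs_require_kubeconfig "${KUBE_VERSION:-1.9.0}"; then
    KUBELET_ARGS="${KUBELET_ARGS} --require-kubeconfig"
fi
echo "kubelet args:${KUBELET_ARGS}"
```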
Patch 3:
k8s_fedora: Add RBAC configuration
* Make certificates and kubeconfigs compatible
with NodeAuthorizer [1].
* Add CoreDNS roles and rolebindings.
* Create the system:kube-apiserver-to-kubelet ClusterRole.
* Bind the system:kube-apiserver-to-kubelet ClusterRole to
the kubernetes user.
* remove creation of kube-system namespaces, it is created
by default
* update client cert generation in the conductor with
kubernetes' requirements
* Add --insecure-bind-address=127.0.0.1 to work on
multi-master too. The controller manager on each
node needs to contact the apiserver (on the same node)
on 127.0.0.1:8080
[1] https://kubernetes.io/docs/admin/authorization/node/
Closes-Bug: #1742420
Depends-On: If43c3d0a0d83c42ff1fceffe4bcc333b31dbdaab
https://review.openstack.org/#/c/527103/
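The ClusterRoleBinding bullet above can be sketched as the following manifest, assuming the apiserver presents a client certificate for the user "kubernetes" as described; the binding name is illustrative.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubernetes
```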
Patch 4:
k8s_fedora: Update coredns config to pass e2e
To pass the e2e conformance tests, coredns needs to
be configured with the kubernetes plugin's "pods verified"
mode. Otherwise, pods won't be resolvable [1].
[1] https://github.com/coredns/coredns/tree/master/plugin/kubernetes
https://review.openstack.org/#/c/528566/
Closes-Bug: #1738633
Change-Id: Ibd5245ca0f5a11e1d67a2514cebb2ffe8aa5e7de
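A sketch of the CoreDNS configuration in question, assuming the standard kubernetes plugin syntax; the zone names are the usual defaults, not necessarily the template's.

```
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        # Answer pod A-record queries only for IPs that actually
        # belong to a pod in the matching namespace.
        pods verified
    }
    forward . /etc/resolv.conf
}
```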
Following up of https://review.openstack.org/#/c/487943
Depends-On: I9a7d00cddb456b885b6de28cfb3d33d2e16cc348
Implements: blueprint run-kube-as-container
Change-Id: Icddb8ed1598f2ba1f782622f86fb6083953c3b3f
Following up of https://review.openstack.org/#/c/487357
Depends-On: I22918c0b06ca34d96ee68ac43fabcd5c0b281950
Implements: blueprint run-kube-as-container
Change-Id: I9a7d00cddb456b885b6de28cfb3d33d2e16cc348
The 2 k8s atomic drivers we currently support are bundled in the
same driver. This breaks ironic support with the stevedore
work I'm currently doing.
With stevedore, we can choose only one driver based on the
server_type, os and coe. We won't be able to pick a driver and
then choose an implementation based on server_type.
Partially-Implements: blueprint magnum-baremetal-full-support
Co-Authored-By: Spyros Trigazis <strigazi@gmail.com>
Change-Id: Ic1b8103551f48f85baa2ed9ff32d5b70b1fab84e
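A hypothetical sketch of the stevedore-style selection: each (server_type, os, coe) combination maps to exactly one driver entry point, so vm- and baremetal-based implementations must be separate drivers. Entry point names and module paths below are illustrative.

```ini
[entry_points]
magnum.drivers =
    k8s_fedora_atomic_v1 = magnum.drivers.k8s_fedora_atomic_v1.driver:Driver
    k8s_fedora_ironic_v1 = magnum.drivers.k8s_fedora_ironic_v1.driver:Driver
```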