Set resource requests for system pods to
guarantee at least some amount of resources.
This prevents them from being starved of
CPU/memory when running alongside resource
intensive workloads in the cluster and
gives them a higher quality of service class.
metrics-server:
100m/200Mi recommended for up to 100 node clusters.
https://github.com/kubernetes-sigs/metrics-server#scaling
openstack-cloud-controller-manager:
200m CPU taken from example manifests.
kubernetes-dashboard:
100m/100Mi taken from helm chart defaults.
heapster:
100m/128Mi taken from helm chart defaults.
influxdb:
100m/256Mi taken from influx helm chart defaults.
grafana (for influxdb):
100m/200Mi same as monitoring grafana.
ingress-traefik:
100m/50Mi taken from helm chart defaults.
cluster-autoscaler:
100m/300Mi taken from helm chart defaults.
csi-cinder-nodeplugin:
25m CPU on both containers to ensure
Burstable QoS class.
csi-cinder-controllerplugin:
20m CPU on all containers to ensure
Burstable QoS class.
tiller-deploy:
25m CPU to ensure it can always handle
the readiness probe.
octavia-ingress-controller:
50m CPU, just a guess really.
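The QoS classes mentioned above follow mechanically from how requests and limits are set. A minimal sketch of the kubelet's classification rules, assuming illustrative container specs (this is not Magnum or Kubernetes code):

```python
def qos_class(containers):
    """Return the QoS class for a pod given its containers' resource specs.

    Guaranteed: every container has cpu and memory limits equal to requests.
    Burstable:  at least one container sets any cpu/memory request or limit.
    BestEffort: no requests or limits anywhere in the pod.
    """
    any_set = False
    guaranteed = True
    for c in containers:
        req, lim = c.get("requests", {}), c.get("limits", {})
        if req or lim:
            any_set = True
        for res in ("cpu", "memory"):
            # Guaranteed requires the request to exist and match the limit.
            if req.get(res) is None or req.get(res) != lim.get(res):
                guaranteed = False
    if not any_set:
        return "BestEffort"
    return "Guaranteed" if guaranteed else "Burstable"
```

Setting only a request (as done for csi-cinder and tiller above) is enough to lift a pod out of BestEffort into Burstable, which is the goal of this change.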
Story: 2008825
Task: 42290
Change-Id: Ifcd764c00d7046744ba63609078cc6c5d02fdc1c
* in 1.20 the insecure port 8080 is not supported anymore
** use only 6443
** change all health probes to use kubectl and 6443
* configure the signing key in the API

story: 2008524
task: 41731
Change-Id: Ibaf1840214016d2dd6ac15e2137eb3cd3d767889
Signed-off-by: Spyros Trigazis <spyridon.trigazis@cern.ch>
Without this, heat container agents using kubectl version
1.18.x (e.g. ussuri-dev) fail because they do not have the correct
KUBECONFIG in the environment.
Task: 39938
Story: 2007591
Change-Id: Ifc212478ae09c658adeb6ba4c8e8afc8943e3977
Heapster has been deprecated for a while and the new k8s dashboard
2.0.0 version supports metrics-server now. So it's time to upgrade
the default k8s dashboard to v2.0.0.
Task: 39101
Story: 2007256
Change-Id: I02f8cb77b472142f42ecc59a339555e60f5f38d0
Add an ARCH parameter to handle arch-specific things, mostly the
docker image repo names.
Because not all of the docker images magnum uses support a multi-arch
manifest [1], like kubernetes-dashboard, the arch name needs to be
specified in the docker image repo name.
[1]
https://kubernetes.io/docs/concepts/containers/images/#building-multi-architecture-images-with-manifests
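The repo-name handling described above can be sketched as follows; the function name, registry, and the set of multi-arch images are hypothetical, not the actual Heat template logic:

```python
# Images published under a multi-arch manifest list need no arch suffix
# (illustrative set, not the real inventory).
MULTI_ARCH_IMAGES = {"coredns"}

def image_ref(registry, name, tag, arch):
    """Build an image reference, appending the arch to the repo name
    only for images that are not published as multi-arch manifests
    (e.g. kubernetes-dashboard)."""
    if name in MULTI_ARCH_IMAGES:
        return f"{registry}/{name}:{tag}"
    return f"{registry}/{name}-{arch}:{tag}"
```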
Change-Id: Iccb3a030aefd2d4e55a455d1a0401cbc4eb7fd14
Task: 37884
Story: 2007026
monitoring-influxdb and monitoring-grafana were
missing the selector.
Kubernetes 1.16+ requires it.
story: 2006459
task: 38376
Change-Id: Iab5205cc84bad30890db7fad380fb02f6ba23786
Signed-off-by: Spyros Trigazis <spyridon.trigazis@cern.ch>
- Start workers as soon as the master VM is created, rather than
waiting for all the services to be ready.
- Move all the SoftwareDeployment outside of kubemaster stack.
- Tweak the scripts in SoftwareDeployment so that they can be combined
into a single script.
Story: 2004573
Task: 28347
Change-Id: Ie48861253615c8f60b34a2c1e9ad6b91d3ae685e
Co-Authored-By: Lingxian Kong <anlin.kong@gmail.com>
Currently, Magnum is using the k8s API /version endpoint to check
API availability, which is not a good way because /version only
reflects whether the basic k8s API is working or not, and it will
return a response even when the etcd service is down. This patch
fixes it by using /healthz instead of /version.
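The idea behind the fix can be sketched like this: poll /healthz (which reports overall apiserver health, including etcd) and require the literal "ok" body, instead of /version. The URL handling and timeout here are illustrative, not Magnum's actual poller:

```python
import urllib.request

def api_healthy(base_url, timeout=5):
    """Return True only if the apiserver's /healthz endpoint answers
    200 with body "ok"; any connection error or non-ok body counts
    as unhealthy (unlike /version, which answers even when etcd is down)."""
    try:
        with urllib.request.urlopen(base_url + "/healthz",
                                    timeout=timeout) as resp:
            return resp.status == 200 and resp.read().strip() == b"ok"
    except OSError:
        # Covers connection refused, timeouts, and HTTP errors.
        return False
```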
Task: 22566
Story: 1775759
Change-Id: I45a1bd48a22842a251dafa6c349f0022fd319e3f
When creating a multi-master cluster, all master nodes will attempt to
create kubernetes resources in the cluster at the same time, like
coredns, the dashboard, calico etc. This race condition shouldn't be
a problem when doing declarative calls instead of imperative (kubectl
apply instead of create). However, due to [1], kubectl fails to apply
the changes and the deployment scripts fail, causing cluster creation
to fail in the case of Heat SoftwareDeployments. This patch passes the
ResourceGroup index of every master so that resource creation will be
attempted only from the first master node.
[1] https://github.com/kubernetes/kubernetes/issues/44165
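The gating described above amounts to a single index check; a minimal sketch with a hypothetical helper name (the real change lives in the Heat templates and deployment scripts):

```python
def should_create_resources(master_index):
    """Only the master with ResourceGroup index 0 runs `kubectl apply`
    for the cluster addons (coredns, dashboard, calico, ...); every
    other master skips that step, avoiding the concurrent-apply race."""
    return int(master_index) == 0
```

Heat passes the index as a string in practice, hence the `int()` coercion.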
Task: 21673
Story: 1775759
Change-Id: I83f78022481aeef945334c37ac6c812bba9791fd
Add an admin service account and give it the
cluster role. It can be used to access apps
with token authentication, like the
kubernetes-dashboard.
Remove the cluster role from the dashboard service account.
Change-Id: I7980c0e72b0d71921e42af7338d02b8a1e563c34
Closes-Bug: #1766284
Due to several small connected patches for the
fedora atomic driver, this patch bundles 4 smaller patches.
Patch 1:
k8s: Do not start kubelet and kube-proxy on master
Patch [1] misses the removal of kubelet and kube-proxy from
enable-services-master.sh, so they are started if they
exist in the image, or the script fails.
[1] https://review.openstack.org/#/c/533593/
Closes-Bug: #1726482
Patch 2:
k8s: Set require-kubeconfig when needed
From kubernetes 1.8 [1] --require-kubeconfig is deprecated and
in kubernetes 1.9 it is removed.
Add --require-kubeconfig only for k8s <= 1.8.
[1] https://github.com/kubernetes/kubernetes/issues/36745
Closes-Bug: #1718926
https://review.openstack.org/#/c/534309/
Patch 3:
k8s_fedora: Add RBAC configuration
* Make certificates and kubeconfigs compatible
with NodeAuthorizer [1].
* Add CoreDNS roles and rolebindings.
* Create the system:kube-apiserver-to-kubelet ClusterRole.
* Bind the system:kube-apiserver-to-kubelet ClusterRole to
the kubernetes user.
* Remove creation of the kube-system namespace; it is created
by default.
* Update client cert generation in the conductor to match
kubernetes' requirements.
* Add --insecure-bind-address=127.0.0.1 to work on
multi-master too. The controller manager on each
node needs to contact the apiserver (on the same node)
on 127.0.0.1:8080
[1] https://kubernetes.io/docs/admin/authorization/node/
Closes-Bug: #1742420
Depends-On: If43c3d0a0d83c42ff1fceffe4bcc333b31dbdaab
https://review.openstack.org/#/c/527103/
Patch 4:
k8s_fedora: Update coredns config to pass e2e
To pass the e2e conformance tests, coredns needs to
be configured with POD-MODE verified. Otherwise, pods
won't be resolvable [1].
[1] https://github.com/coredns/coredns/tree/master/plugin/kubernetes
https://review.openstack.org/#/c/528566/
Closes-Bug: #1738633
Change-Id: Ibd5245ca0f5a11e1d67a2514cebb2ffe8aa5e7de
Add a label to prefix all container images used by magnum:
* kubernetes components
* coredns
* node-exporter
* kubernetes-dashboard
Using this label, all containers will be pulled from the specified
registry and group in the registry.
TODO:
* grafana
* prometheus
Closes-Bug: #1712810
Change-Id: Iefe02f5ebc97787ee80431e0f16f73ae8444bdc0
kube-ui [2] is deprecated and has not been actively maintained for a
long time. The kubernetes dashboard [1] has many more features and is
actively maintained.
With this patch kube-ui is removed and the kubernetes dashboard is
added and enabled in the k8s cluster by default. To disable it, set
the label 'kube_dashboard_enabled' to False.
Reference:
[1] https://github.com/kubernetes/dashboard
[2] https://github.com/kubernetes/kube-ui
Change-Id: I8864c097a3da6a602e0f25d3ff8ade788aa134a9
Implements: blueprint add-kube-dashboard