The current code runs k8s-on-openstack's ansible playbooks from
within an ansible task, which makes debugging failures especially
difficult.
Instead, move the prep task to update-system-config, which will
ensure the repo is cloned, and move the post task to its own
playbook. The cinder storage class k8s action can be removed
entirely, as it's handled in the rook playbook.
Then just run the k8s-on-openstack playbook as usual, but without
the cd first so that our normal ansible.cfg works.
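As a sketch of the prep half, the repo clone that moves into
update-system-config might look like the following; the destination
path is illustrative, not the actual location used:

```yaml
# Hypothetical task for update-system-config: make sure the
# k8s-on-openstack repo is present so its playbook can be run
# directly (no cd needed, our normal ansible.cfg applies).
- name: Clone k8s-on-openstack
  git:
    repo: https://github.com/infraly/k8s-on-openstack
    dest: /opt/k8s-on-openstack
```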
Change-Id: I6015e58daa940914d46602a2cb64ecac5d59fa2e
k8s-on-openstack uses the baked-in ubuntu user and ssh keypairs
to interact with the host. Our other roles assume that we'll be
logging in directly as root.
Run base-repos logging in as ubuntu with become: true set, so that
we can overwrite root's ssh key with the one allowing direct login
from bridge.
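A minimal sketch of what such a play header could look like; the
host group name here is hypothetical:

```yaml
# Illustrative play: connect with the image's default ubuntu user
# and escalate, so that base-repos can replace root's
# authorized_keys with the key allowing direct login from bridge.
- hosts: k8s-nodes        # hypothetical group name
  remote_user: ubuntu
  become: true
  roles:
    - base-repos
```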
Change-Id: I98e91e0a9e5f4a44fcad8f22a0f710ce2c4138e0
Add the gitea k8s cluster to root's .kube/config file on bridge.
No default context is set, in order to force us to explicitly
specify a context for every command (so that we do not
inadvertently deploy something on the wrong k8s cluster).
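As a hedged sketch, the resulting kubeconfig would resemble the
fragment below; the cluster, context, and user names are
placeholders, not the real entries:

```yaml
# Sketch of ~root/.kube/config after the gitea cluster is added.
# Note there is deliberately no current-context entry, so every
# kubectl invocation must pass --context explicitly.
apiVersion: v1
kind: Config
clusters:
  - name: gitea
    cluster:
      server: https://...
contexts:
  - name: gitea
    context:
      cluster: gitea
      user: gitea-admin
users:
  - name: gitea-admin
    user: {}
```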
Change-Id: I53368c76e6f5b3ab45b1982e9a977f9ce9f08581
The gitea service needs an HA shared filesystem, which is provided by
cephfs and managed by rook.io.
It also needs a database service, which is provided by
percona-xtradb-cluster.
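To illustrate the shared-filesystem requirement, the gitea pods
would claim cephfs storage roughly as below; the claim name, class
name, and size are hypothetical:

```yaml
# Sketch: a ReadWriteMany claim backed by the rook-managed cephfs
# storage class, so all gitea replicas mount the same filesystem.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gitea-shared
spec:
  accessModes:
    - ReadWriteMany      # HA: shared across replicas
  storageClassName: cephfs
  resources:
    requests:
      storage: 50Gi
```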
Change-Id: Ie019c2e24c3780cec2468a00987dba4ac34ed570
In order to make sure we don't accidentally get broken by any
upstream patches, pin k8s-on-openstack to a specific sha.
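The pin could be expressed through ansible's git module as in this
sketch; the sha and path shown are placeholders, not the actual
pinned revision:

```yaml
# Illustrative: pin the clone to a fixed commit so upstream
# changes cannot break us unexpectedly.
- name: Clone k8s-on-openstack at a pinned sha
  git:
    repo: https://github.com/infraly/k8s-on-openstack
    dest: /opt/k8s-on-openstack
    version: 0123456789abcdef0123456789abcdef01234567  # placeholder sha
```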
Change-Id: Iabd80a7f95646304ed293fe11bed3a9260411705
The k8s-on-openstack project produces an opinionated kubernetes
deployment that is correctly set up to integrate with OpenStack.
All of the patches we've submitted to adapt it to our environment
have landed upstream, so just consume it directly.
It's possible we might want to take a more hands-on forky approach in
the future, but for now it seems fairly stable.
Change-Id: I4ff605b6a947ab9b9f3d0a73852dde74c705979f