From 362f604d859e5df0a7df9b5cd8dfc599fffeb8ac Mon Sep 17 00:00:00 2001
From: Stuart Grace
Date: Tue, 23 Apr 2024 17:52:12 +0100
Subject: [PATCH] Clarifications to mcapi_vexxhost README

Fix a few typos and omissions in README.rst and bring config examples
into line with config files used for functional tests.

Change-Id: I0e36d725d2ef1f9bc94c0ae0d8435054793b12f4
---
 mcapi_vexxhost/doc/source/README.rst | 54 ++++++++++++++++------------
 1 file changed, 32 insertions(+), 22 deletions(-)

diff --git a/mcapi_vexxhost/doc/source/README.rst b/mcapi_vexxhost/doc/source/README.rst
index 0c97311e..963f07aa 100644
--- a/mcapi_vexxhost/doc/source/README.rst
+++ b/mcapi_vexxhost/doc/source/README.rst
@@ -57,6 +57,8 @@ Pre-requisites
 OpenStack-Ansible Integration
 -----------------------------
 
+NOTE: The example configuration files shown below are suitable for use with an openstack-ansible All-In-One (AIO) build and can be found at openstack-ansible-ops/mcapi_vexxhost/playbooks/files/openstack_deploy/
+
 The playbooks are distributed as an ansible collection, and integrate
 with Openstack-Ansible by adding the collection to the deployment host
 by adding the following to `/etc/openstack_deploy/user-collection-requirements.yml`
@@ -120,7 +122,8 @@ Specify the deployment of the control plane k8s cluster in
     - hosts
 
 Define the physical hosts that will host the controlplane k8s
-cluster, this example is for an all-in-one deployment and should
+cluster in `/etc/openstack_deploy/openstack_user_config.yml`. This
+example is for an all-in-one deployment and should
 be adjusted to match a real deployment with multiple hosts if
 high availability is required.
 
@@ -131,7 +134,7 @@ high availability is required.
       ip: 172.29.236.100
 
 Integrate the control plane k8s cluster with the haproxy loadbalancer
-in `/etc/openstack-deploy/group_vars/k8s_all/haproxy_service.yml`
+in `/etc/openstack_deploy/group_vars/k8s_all/haproxy_service.yml`
 
 .. code-block:: yaml
 
@@ -166,7 +169,7 @@ in `/etc/openstack-deploy/group_vars/k8s_all/haproxy_service.yml`
     - "{{ haproxy_k8s_service | combine(haproxy_k8s_service_overrides | default({})) }}"
 
 Configure the LXC container that will host the control plane k8s cluster to
-be suitable for running nested containers in `/etc/openstack-deploy/group_vars/k8s_all/main.yml`
+be suitable for running nested containers in `/etc/openstack_deploy/group_vars/k8s_all/main.yml`
 
 .. code-block:: yaml
 
@@ -178,7 +181,7 @@ be suitable for running nested containers in `/etc/openstack-deploy/group_vars/k
     - "proc:rw"
     - "sys:rw"
 
-Set up config-overrides for the magnum service in `/etc/openstack-deploy/user_variables_magnum.yml`.
+Set up config-overrides for the magnum service in `/etc/openstack_deploy/user_variables_magnum.yml`.
 Adjust the images and flavors here as necessary, these are just for
 demonstration. Upload as many
 images as you need for the different workload cluster kubernetes versions.
@@ -208,22 +211,6 @@ images as you need for the different workload cluster kubernetes versions.
       ram: 4096
       vcpus: 2
 
- Set up config-overrides for the control plane k8s cluster in /etc/openstack-deploy/user_variables_k8s.yml`
- Attention must be given to the SSL configuration. Users and workload clusters will
- interact with the external endpoint and must trust the SSL certificate. The magnum
- service and cluster-api can be configured to interact with either the external or
- internal endpoint and must trust the SSL certificiate. Depending on the environment,
- these may be derived from different certificate authorities.
-
- .. code-block:: yaml
-
-    # connect ansible group, host and network addresses into control plane k8s deployment
-    kubernetes_control_plane_group: k8s_all
-    kubelet_hostname: "{{ ansible_facts['hostname'] }}"
-    kubelet_node_ip: "{{ management_address }}"
-    kubernetes_hostname: "{{ internal_lb_vip_address }}"
-    kubernetes_non_init_namespace: true
-
 # install the vexxhost magnum-cluster-api plugin into the magnum venv
 magnum_user_pip_packages:
   - git+https://github.com/vexxhost/magnum-cluster-api@main#egg=magnum-cluster-api
@@ -245,6 +232,23 @@ images as you need for the different workload cluster kubernetes versions.
 # store certificates in the magnum database instead of barbican
 cert_manager_type: x509keypair
 
+ Set up config-overrides for the control plane k8s cluster in `/etc/openstack_deploy/user_variables_k8s.yml`
+ Attention must be given to the SSL configuration. Users and workload clusters will
+ interact with the external endpoint and must trust the SSL certificate. The magnum
+ service and cluster-api can be configured to interact with either the external or
+ internal endpoint and must trust the SSL certificate. Depending on the environment,
+ these may be derived from different certificate authorities.
+
+ .. code-block:: yaml
+
+    # connect ansible group, host and network addresses into control plane k8s deployment
+    kubernetes_control_plane_group: k8s_all
+    kubelet_hostname: "{{ ansible_facts['hostname'] }}"
+    kubelet_node_ip: "{{ management_address }}"
+    kubernetes_hostname: "{{ internal_lb_vip_address }}"
+    kubernetes_non_init_namespace: true
+
 # Pick a range of addresses for the control plane k8s cluster cilium
 # network that do not collide with anything else in the deployment
 cilium_ipv4_cidr: 172.29.200.0/22
@@ -265,7 +269,13 @@ For a new deployment
 Run the OSA playbooks/setup.yml playbooks as usual, following the
 normal deployment guide.
 
-Run the magnum-cluster-api deployment
+Ensure that additional python modules required for ansible are present:
+
+.. code-block:: bash
+
+   /opt/ansible-runtime/bin/pip install docker-image-py
+
+Run the magnum-cluster-api deployment:
 
 .. code-block:: bash
 
@@ -320,7 +330,7 @@ It will then deploy the workload k8s cluster using magnum, and run
 a sonobouy "quick mode" test of the workload cluster.
 
 This playbook is intended to be used on an openstack-ansible
-all-in-one deployment.
+all-in-one deployment with no public network configured.
 
 Use Magnum to create a workload cluster
 ---------------------------------------