At this point we don't include them in
any pipelines. This will be done in later
commits, once the 2024.1 images are ready.
Change-Id: Ie7aa59685898f065b7054b5072cb2fe0ab706b1c
MetalLB is an L2/L3 load balancer that we
use for exposing the OpenStack services
outside the cluster.
Before that we used to deploy the ingress-nginx
controller in the host network namespace.
Change-Id: I9fdb5f1b2f9403ce04f9d34b1792a0f29f55d879
Recently we refactored the deploy-env role, which
among other things can deploy an OpenStack provider
network gateway.
Depends-On: I41f0353b286f817cb562b3bd59992e4baa473568
Change-Id: Iece2cc83c68cc282389f8380ceebeebf17f788fb
- The Open vSwitch agent init script skips attaching
an interface if it does not exist. And the compute-kit.sh
script deploys Neutron with
auto_bridge_add: {"br-ex": "provider1"}
where "provider1" is a tap interface that is going
to be created while deploying the test env.
- The Heat test script checks only public endpoints
- Add a 1+2 node nodeset. The primary node is used
as a client node and is not a member of the K8s cluster.
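The auto_bridge_add override mentioned above can be sketched as a Neutron
values fragment; the `conf.auto_bridge_add` path follows the commit text,
while the surrounding structure is an assumption based on a typical
openstack-helm values layout:

```yaml
# Hedged sketch of a Neutron values override; only the auto_bridge_add
# mapping comes from the commit text, the rest is assumed layout.
conf:
  auto_bridge_add:
    br-ex: provider1   # "provider1" is the tap interface created by the test env
```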
Change-Id: If7c8763dd619dec31f9d141f21399d159395049a
Recent kernel changes in ioctl made it incompatible
with parted, which is used by the OSH ceph-osd chart when
deploying Ceph on a loop device.
The issue appears on Ubuntu Jammy.
See details here https://bugs.launchpad.net/ubuntu/+source/linux/+bug/2049689
We temporarily disable jobs that utilize ceph-osd on Jammy.
Change-Id: Ic4657ee7c71a46e56f98b1f6ef8ad0a434593c06
We recently reworked all the deployment jobs
so they use the `deploy-env` Ansible role, which works
for both multi-node and single-node environments.
This means there is no need to have different sets
of scripts for these two cases.
Also, when we deploy OpenStack components it is better
to have values overrides for different scenarios rather
than different sets of scripts. Here we remove unused
deployment scripts, which in many cases duplicated
the code base.
We will be cleaning up the code base even further to
provide an excellent user experience.
Change-Id: Iacda03964a4dd0e60873593df9f590ce20504f2f
The change updates all deployment jobs so they use the
deploy-env role, which leverages kubeadm to deploy K8s.
This role works for both single-node and multi-node
inventories.
Also, all jobs are reorganized to improve job
maintenance. The check pipeline runs tests for the three
most recent releases: Yoga, Zed, and 2023.1.
We are focusing on 2023.1, for which we run both
Focal and Jammy jobs.
Change-Id: Ibba9b72876b11484fd7cc2e4710e92f964f15cc3
At this point it requires an NFS provisioner that provides
ReadWriteMany volumes for VNF packages and CSAR files;
the same storage class is also used for logs.
This patch also adds a job that only deploys Tacker but
does not test it in any way. This job is put into the experimental
pipeline.
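The ReadWriteMany requirement above can be illustrated with a minimal PVC
sketch; the claim name, size, and storage class name are placeholders for
whatever the NFS provisioner actually registers:

```yaml
# Hedged illustration: a claim that only an RWX-capable backend (such as
# the NFS provisioner mentioned above) can satisfy.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tacker-csar-files
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  resources:
    requests:
      storage: 10Gi
```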
Co-authored-by: Vladimir Kozhukalov <kozhukalov@gmail.com>
Story: 2010682
Task: 47771
Change-Id: I56d7ba489746ab4f818086440a7783f4b1ecb292
The compute-kit jobs are used to test new images
which are published to the buildset registry. We have
to configure containerd, which is used for multinode
compute-kit jobs, to use this buildset registry.
The use-buildset-registry role that we used before
does not properly configure containerd, so we
extended the deploy-docker playbook to configure
both the buildset registry and the registry mirror
if they are defined.
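A hedged sketch of the kind of registry mirror stanza the playbook manages
in containerd's CRI plugin configuration; the mirror URL is a placeholder,
and we write a local sample file here rather than the real
/etc/containerd/config.toml:

```shell
# Hedged sketch: the registry mirror configuration shape for containerd's
# CRI plugin. In a real deployment this stanza lives in
# /etc/containerd/config.toml and containerd is restarted afterwards.
cat > ./config.toml.sample <<'EOF'
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
  endpoint = ["https://mirror.example.org", "https://registry-1.docker.io"]
EOF
```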
Change-Id: Idb892a3fcaf51385998d466dbdff8de36d9dd338
The openstack-helm-tls job is updated to use the 2023.1 release.
We also renamed the TLS job to openstack-helm-tls-2023-1-ubuntu_focal
to make it more convenient. And since this feature is important,
we add the job to the check pipeline.
Change-Id: I5fdba8e5a4c6497c352bf4f1e3d2c7ab7e2a3076
For recent releases we use 32GB nodes for compute-kit
jobs. The number of such nodes is extremely limited,
so we'd better use multinode nodesets for compute-kit
jobs.
We deploy K8s using kubeadm and then set labels on the
K8s nodes so charts can use these labels for node selectors.
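The labels-to-node-selector link can be sketched as a values fragment;
the `openstack-control-plane=enabled` label is the conventional
openstack-helm one, but treat the exact keys as an assumption rather
than a copy of the role:

```yaml
# Hedged sketch: charts resolve their node selectors from values like
# these, matching labels set on the K8s nodes after kubeadm bootstrap.
labels:
  api:
    node_selector_key: openstack-control-plane
    node_selector_value: enabled
```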
We deploy the L3 agent only on the node where we run the test scripts.
This is because we want the test virtual router to always be created
on this node. Otherwise an L2 overlay would need to be created
to emulate the provider network (will be implemented later).
Glance is deployed without backend storage (will be fixed later).
Change-Id: Id2eb639fb67d41006940a7d7b45a865b2f1124f7
The change https://review.opendev.org/c/openstack/project-config/+/888901
introduces two new labels ubuntu-focal-32GB and ubuntu-jammy-32GB.
These labels are available in only one region (Vexxhost ca-ymq-1)
and the number of such nodes is extremely limited. So we are
going to switch compute-kit jobs to multinode nodesets ASAP and
this is in progress at the moment.
This change is a temporary solution to unblock all our
activities.
Change-Id: I2c2a54f4a90656bc6fb441b8a794744e4a636cb6
OpenStack releases older than Yoga are now in
extended maintenance. To reduce the CI
footprint we don't run test jobs for older
releases as part of the check/gate pipelines.
Instead we are going to run those jobs as
part of the periodic-weekly pipeline.
See the detailed description of extended maintenance
status here
https://docs.openstack.org/project-team-guide/stable-branches.html#maintenance-phases
Change-Id: Ie227e7d2dd297b6095a40f6114ef6b0a2f226790
- Also run the last two test scripts in the compute-kit job
sequentially. This is handy since it allows us to see
what is happening during the test run. Both of these
test scripts usually take just a few minutes, but if
we run them using the Ansible async feature and one of
the scripts fails, then we are forced to wait for
a long timeout.
Change-Id: I75b8fde3ec4e3355319b1c3f257e2d76c36f6aa4
Also a new nodeset was temporarily added.
The AIO compute-kit jobs for recent releases require
a huge node to work reliably. We'll remove the temporary nodeset
once this is merged:
https://review.opendev.org/c/openstack/openstack-helm-infra/+/884989
Change-Id: I7572fc39a8f6248ff7dac44f20076ba74a3499fc
This change removes the ussuri-bionic jobs from check in zuul
for openstack-helm. Ussuri is the only release that is still
using bionic images and this change is part of our effort to
stay up-to-date.
A future change will update the charts to remove ussuri overrides
to reflect this.
Change-Id: I18a55426d92654e7baa422ad92ea9f092d854460
This change updates the Tungsten Fabric job to use a newer version
of OpenStack, Wallaby, and moves it from the periodic to the
experimental pipeline.
Change-Id: I191bdaedba507ee76c04b2a2143362e772bcabc9
The Xena and Yoga jobs have been unstable lately; the compute-kit
job does not run reliably in Zuul. While we diagnose and fix the
issue, this change comments out both X & Y release jobs to reduce
the number of blocked developers and wasted rechecks.
Change-Id: I53f1a9cd8c24939cf73729c5c2a8bb674403fdd6
This verifies that making a configuration change to one of Umbrella's
subcharts results in only the application (DaemonSet, Deployment or
StatefulSet) for that subchart being updated. No other subchart's
application should be updated.
This only validates subcharts from openstack-helm-infra.
Validating the remaining subcharts from openstack-helm will
be done in the future.
OpenStack Umbrella's default values for rabbitmq were configured
to use a host path. This is so rabbitmq retains its data
between StatefulSet changes; otherwise, components fail to authenticate
with rabbitmq after the rabbitmq pods have been recreated. The
OpenStack Umbrella chart will use the `standard` storage class
by default since that's what is provisioned via minikube.
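A hedged sketch of a rabbitmq values override pinning persistence to a
specific storage class; the key names are assumed from a typical
openstack-helm-infra layout, not copied from the chart:

```yaml
# Hedged sketch: persist rabbitmq data across StatefulSet changes by
# requesting a volume from the `standard` class (minikube's default).
volume:
  enabled: true
  class_name: standard
  size: 5Gi
```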
Change-Id: I8570e32e6ae03563037608d337f31066edf29755
If a Helm upgrade is performed on the OpenStack Umbrella chart using
the exact same configuration as the first release, then it's expected
for no DaemonSets, Deployments, or StatefulSets to be updated.
This did not work as expected.
A few changes were required to support this desired behavior:
1. Update glance's configmap-etc.yaml to trim whitespace and convert
YAML comment to Helm template comment. Before this change, Helm
rendered the template with the YAML comment and a newline for the
install phase. On upgrades, Helm rendered the template without the
YAML comment and newline causing the hash of configmap-etc to change,
thus causing the glance-api Deployment to update.
2. Update openstack.sh script to create a randomly generated memcache
secret for glance. Without this change, the glance-api deployment
changes each time since Helm randomly generates a new memcache
secret if not provided.
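The idea behind change 2 can be sketched in shell: generate the memcache
secret once and pass it explicitly on every upgrade, instead of letting
Helm randomize a new one each time. The exact values path it would feed
(e.g. a memcache_secret_key override) is an assumption:

```shell
# Hedged sketch: create the secret once so repeated `helm upgrade` runs
# see the same value and the glance-api Deployment hash stays stable.
# The secret would then be passed via a --set/values override (path assumed).
MEMCACHE_SECRET_KEY="$(openssl rand -hex 16)"
echo "memcache secret length: ${#MEMCACHE_SECRET_KEY}"
```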
This behavior is enforced via a new test script,
validate-umbrella-upgrade-no-side-effects.sh.
The following jobs are always recreated due to hooks:
- keystone-bootstrap
- keystone-credential-setup
- keystone-db-init
- keystone-db-sync
- keystone-domain-manage
- keystone-fernet-setup
- keystone-rabbit-init
- rabbitmq-cluster-wait
Some Jobs are created via CronJobs and could be created during
validation. So far, heat-engine-cleaner has been seen, but others
could be caught too.
So the validation script ignores these pod changes by ignoring
whether Jobs were recreated. Besides, recreated Jobs should not
impact the OpenStack deployment.
Change-Id: Iffaa346d814b8d0a3e2292849943219f70d50a23