We have not been using it for a while, since we switched
to the upstream ingress-nginx some time ago.
Change-Id: I2afe101cec2ddc562190812fc27bb3fad11469f1
This change adds a deployment script that can be used to migrate a
Ceph cluster deployed with the legacy openstack-helm-infra Ceph
charts to Rook. This process is disruptive. The Ceph cluster goes
down and comes back up multiple times during the migration, but the
end result is a Rook-deployed Ceph cluster with the original
cluster FSID and all OSD data intact.
Change-Id: Ied8ff94f25cd792a9be9f889bb6fdabc45a57f2e
The quay.io/airshipit/kubernetes-entrypoint:v1.0.0 image format is
deprecated and no longer supported by the Docker registry.
This is a temporary fix to download the image from a third-party repo
until we update the quay.io/airshipit/kubernetes-entrypoint:v1.0.0 image.
The deprecation message is as follows:
[DEPRECATION NOTICE] Docker Image Format v1 and Docker
Image manifest version 2, schema 1 support is disabled
by default and will be removed in an upcoming release.
Suggest the author of quay.io/airshipit/kubernetes-entrypoint:v1.0.0
to upgrade the image to the OCI Format or Docker Image
manifest v2, schema 2. More information at
https://docs.docker.com/go/deprecated-image-specs/
The docker-registry container must not start
until the docker-images PVC is bound.
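One way to enforce this ordering (a sketch only; the image and resource
names are assumptions, not the chart's actual values) is an init
container that blocks until the PVC reports the Bound phase:

```yaml
initContainers:
  - name: wait-for-docker-images-pvc
    image: bitnami/kubectl:latest
    command:
      - /bin/sh
      - -c
      - |
        # Block startup until the docker-images PVC is Bound
        until kubectl get pvc docker-images \
            -o jsonpath='{.status.phase}' | grep -q Bound; do
          sleep 5
        done
```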
Change-Id: I6bff98aa7d0b23e13a17a038f3039b7956703d40
This change updates the Ceph images to 18.2.2 images patched with a
fix for https://tracker.ceph.com/issues/63684. It also reverts the
package repository in the deployment scripts to use the debian-reef
directory on download.ceph.com instead of debian-18.2.1. The issue
with the repo that prompted the previous change to debian-18.2.1
has been resolved and the more generic debian-reef directory may
now be used again.
Change-Id: I85be0cfa73f752019fc3689887dbfd36cec3f6b2
Fixes issue where override files for OS charts were
missing due to specifying the wrong project directory.
Change-Id: I4af6715a33c7de43068ed76a8115c12a2c0969ed
This PS changes the Ceph repo from debian-reef to
debian-18.2.1 due to some issues with the debian-reef
folder at https://download.ceph.com/
Change-Id: I31c501541b54d9253c334b56df975bddb13bbaeb
This change updates Rook to the 1.13.3 release. It also increases
the memory limit for ceph-mon pods deployed by Rook to prevent
pod restarts due to liveness probe failures that sometimes result
from probes causing ceph-mon pods to hit their memory limit.
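A values override along these lines raises the mon memory limit (a
sketch only; the exact key layout and the chosen sizes depend on the
rook-ceph-cluster chart version in use):

```yaml
cephClusterSpec:
  resources:
    mon:
      requests:
        memory: "512Mi"
      limits:
        # Raised so liveness probes do not push ceph-mon over the limit
        memory: "1Gi"
```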
Change-Id: Ib7d28fd866a51cbc5ad0d7320ae2ef4a831276aa
When using Rook for managing Ceph clusters we have
to provision a minimal set of assets (keys, endpoints, etc.)
to make Openstack-Helm charts work with these Ceph clusters.
Rook provides CRDs that can be used for managing Ceph assets
like pools/keyrings/buckets etc., but Openstack-Helm cannot
utilize these CRDs. Supporting these CRDs in OSH would
require lots of conditionals in OSH templates, since
we still want OSH to work with the OSH ceph-* charts.
Change-Id: If7fe29052640e48c37b653e13a74d95e360a6d16
This PS adds a mariadb-cluster chart based on mariadb-operator. For
some backward compatibility this PS also adds the mariadb-backup chart
and the prometheus-mysql-exporter chart as separate charts.
Change-Id: I3f652375cce2e3b45e095e08d2e6f4ae73b8d8f0
This PR synchronizes this script with the one
used in the openstack-helm repo.
Let's use the same script in both repos.
The related PR for the openstack-helm repo
is coming.
Change-Id: I5cfaad8ebfd08790ecabb3e8fa480a7bf2bb7e1e
We don't need this for tests and it is better to
keep the test env minimal since the test hardware
is limited.
Change-Id: I0b3f663408c1ef57ad25a4d031b706cb6abc87a9
When using Rook for managing Ceph we can use
Rook CRDs to create S3 buckets and users.
This PR adds a bucket claim template to the
elasticsearch chart. Rook creates a bucket for
a bucket claim and also creates a secret
containing the credentials needed to access this
bucket. So we also add a snippet to expose
these credentials via environment variables to
the containers where they are needed.
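As a sketch (claim and storage class names are illustrative, not the
chart's actual values), the claim and the env snippet look roughly like
this; Rook publishes the S3 credentials in a secret named after the claim:

```yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: elasticsearch-bucket
spec:
  generateBucketName: elasticsearch
  storageClassName: rook-ceph-bucket
---
# Container env fragment exposing the generated credentials
env:
  - name: AWS_ACCESS_KEY_ID
    valueFrom:
      secretKeyRef:
        name: elasticsearch-bucket
        key: AWS_ACCESS_KEY_ID
  - name: AWS_SECRET_ACCESS_KEY
    valueFrom:
      secretKeyRef:
        name: elasticsearch-bucket
        key: AWS_SECRET_ACCESS_KEY
```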
Change-Id: Ic5cd35a5c64a914af97d2b3cfec21dbe399c0f14
- In case we deploy Ceph in a multi-node env, we have
to prepare the loop devices on all nodes. For this
we moved the loop device setup to the deploy-env
Ansible role.
For simplicity we need the same device on all nodes,
so we create a loop device with a big
minor number (/dev/loop100 by default), hoping
that only low minor numbers are likely to be busy.
- For test jobs we don't need to use different devices
for OSD data and metadata. There is no
benefit from this for the test environment,
so let's keep it simple and put both OSD data and metadata
on the same device.
- In a multi-node env the Ceph cluster members need to
see each other, so let's use the pod network CIDR.
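A minimal sketch of what the deploy-env role's loop device tasks might
look like (task names, the backing file path, and the 10G size are
assumptions for illustration):

```yaml
- name: Create a sparse backing file for the Ceph OSD loop device
  command: truncate -s 10G /var/lib/ceph-osd.img
  args:
    creates: /var/lib/ceph-osd.img

- name: Create the loop device node with a high minor number
  command: mknod /dev/loop100 b 7 100
  args:
    creates: /dev/loop100

- name: Attach the backing file to /dev/loop100
  command: losetup /dev/loop100 /var/lib/ceph-osd.img
```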
Change-Id: I493b6c31d97ff2fc4992c6bb1994d0c73320cd7b
The motivation is to reduce the code base and get rid
of unnecessary duplication. This PR moves the bandit
tasks from the osh-infra-bandit.yaml playbook
to the osh-bandit role. Then we can use this role for the
same job in OSH.
Change-Id: I9489a8c414e6679186e6c399243a7c0838df812a
Roll back Rook in the openstack-support-rook Zuul job to the 1.12.4
release to work around a problem with ceph-rook-exporter resource
conflicts while the issue is investigated further.
Change-Id: Idabc1814e9b8665c0ce63e2efd5ad94bf193f97a
This change adds an openstack-support-rook zuul job to test
deploying Ceph using the upstream Rook helm charts found in the
https://charts.rook.io/release repository. Minor changes to the
storage keyring manager job and the mon discovery service in the
ceph-mon chart are also included to allow the ceph-mon chart to be
used to generate auth keys and deploy the mon discovery service
necessary for OpenStack.
Change-Id: Iee4174dc54b6a7aac6520c448a54adb1325cccab
To make the jobs easier to maintain, all experimental
jobs (those which are not run in the check and gate pipelines)
are moved to a separate file. They will be revised later
to use the same deploy-env role.
Also, since many charts use Openstack images for testing, this
PR adds 2023.1 Ubuntu Focal overrides for all these charts.
Change-Id: I4a6fb998c7eb1026b3c05ddd69f62531137b6e51
This PS replaces the deprecated kubernetes.io/ingress.class annotation with
the spec.ingressClassName field, which references an IngressClass
resource containing additional Ingress configuration, including the
name of the Ingress controller.
https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#deprecating-the-ingress-class-annotation
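The change is essentially the following (a generic sketch with the
"nginx" class name as an example, not the exact chart template):

```yaml
# Before: deprecated annotation on the Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"

# After: explicit reference to an IngressClass resource
spec:
  ingressClassName: "nginx"
```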
Change-Id: I9953d966b4f9f7b1692b39f36f434f5055317025
Co-authored-by: Sergiy Markin <smarkin@mirantis.com>
Co-authored-by: Leointii Istomin <listomin@mirantis.com>
Signed-off-by: Anselme, Schubert (sa246v) <sa246v@att.com>
story: 2010785
task: 48210
There were a bunch of stories like 2010785, and in most
cases users face a conflict between the pip and apt package
management systems. We can either use --ignore-installed
or use a Python virtualenv. The second option does not contradict
the first one.
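A minimal sketch of the virtualenv approach (the path is illustrative):
tools installed into the venv no longer collide with apt-managed
site-packages.

```shell
# Create an isolated environment; --without-pip keeps this working even
# when the distro's ensurepip module is not installed
python3 -m venv --without-pip /tmp/osh-venv
# Tools run from the venv's bin/ do not touch apt-managed packages
/tmp/osh-venv/bin/python --version
```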
Change-Id: I345e887b3f35f1d1d6c86cc40a29ff0b1920a1f1
This reverts commit 8e96a91ffa.
Reason for revert: The change broke the compute-kit tests.
The deployment of all Openstack components is successful, but then
when we create networks and a VM, neutron-dhcp-agent crashes. It is
still not clear why this happens. Let's revert this change and figure
out what is going on.
Change-Id: I07082511cd168560c8fe8dce3421e37fc402a1ae
This PS upgrades the following components:
- minikube to 1.29.0
- kubernetes to 1.26.3
- calico to 3.25
- coredns to 1.9.4
Also this PS adds cri-dockerd, required for kubernetes newer than 1.24,
and adds recursive response to coredns.
Change-Id: Ie8aa43642de5dfa69ed72fadbfd943b578a80a74
This change updates all Ceph image references to use Focal images
for all charts in openstack-helm-infra.
Change-Id: I759d3bdcf1ff332413e14e367d702c3b4ec0de44
The Pacific release of Ceph disabled 1x replication by default, and
some of the gate scripts were not updated to allow this explicitly.
Some gate jobs fail in some configurations as a result, so this
change adds 'mon_allow_pool_size_one = true' to those Ceph gate
scripts that don't already have it, along with
--yes-i-really-mean-it added to commands that set pool size.
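The resulting settings look like this (fragment; the pool name in the
comment is illustrative):

```ini
# ceph.conf fragment: allow pools with a single replica in test gates
[global]
mon_allow_pool_size_one = true
```

With this option set, a command such as
`ceph osd pool set <pool> size 1 --yes-i-really-mean-it` is accepted
instead of being rejected by the monitors.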
Change-Id: I5fb08d3bb714f1b67294bb01e17e8a5c1ddbb73a
This is for convenience when running deployment scripts
manually. On Ubuntu, loop0 and loop1 could be in use
by snaps, so we find free loop devices before trying
to use them.
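A sketch of the lookup (the device name returned varies by host):

```shell
# Ask the kernel for the first loop device that is not currently
# attached, instead of hard-coding /dev/loop0 or /dev/loop1,
# which are often taken by snap mounts on Ubuntu
LOOP_DEV=$(losetup -f)
echo "Using ${LOOP_DEV}"
```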
Change-Id: Iec54c0decd3a401c99f4770187d81f370bcee24c
When deploying Ceph on loop devices, we need lvm2
to be installed on the host to create the necessary
device links like /dev/<vgname>/<lvname>
Change-Id: I5dabbc080aa45b28c1dd5e1d883f9d45affdf60f