This change removes the ceph* Dockerfiles and the related kolla Python
code.
The Kolla-Ansible part has already been removed, and Kolla-Ansible was
the only consumer.
Change-Id: I9568261fdbbe4b9156d0e07c414ec911ca2e8557
This patch adds a new function in extend_start.sh for OSD creation.
It supports not only loop devices but also any other device whose name
ends with a digit, for which the kernel inserts a 'p' before the
partition number (e.g. /dev/loop0p1).
Change-Id: Iee5f8b8581d70166de6eba1bdc9e42766fe8cb48
Closes-Bug: #1847014
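The naming rule can be sketched as follows (the function name is illustrative, not the actual extend_start.sh code): partitions of a device whose name ends in a digit get a 'p' separator, others do not.

```shell
# Hypothetical helper, not the actual extend_start.sh function:
# build the partition node path for both naming schemes.
partition_path() {
    dev="$1"    # whole-disk device, e.g. /dev/sdb or /dev/loop0
    num="$2"    # partition number
    case "$dev" in
        *[0-9]) echo "${dev}p${num}" ;;  # name ends in a digit -> 'p' infix
        *)      echo "${dev}${num}" ;;
    esac
}
```

For example, `partition_path /dev/sdb 1` prints /dev/sdb1, while `partition_path /dev/loop0 1` prints /dev/loop0p1.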
The changes in the following commit did not completely solve the
problem of osd initialization failure:
https://review.opendev.org/652612
To solve this problem, a partprobe call consistent with ceph-disk is
added, followed by a loop that polls for the new partition once per
second; OSD initialization continues as soon as the partition appears.
Change-Id: I0ca255c6358132d9e3acfa6b610b70a78756512c
Closes-bug: #1824787
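The retry logic described above can be sketched like this (a hedged sketch; the function and argument names are assumptions, not kolla's exact code):

```shell
# Sketch: re-read the partition tables as ceph-disk does, then poll
# once per second until the kernel exposes the partition node.
wait_for_partition() {
    part="$1"          # expected partition node, e.g. /dev/sdb1
    tries="${2:-30}"   # give up after roughly this many seconds
    partprobe || true  # ask the kernel to re-read partition tables
    i=0
    while [ ! -b "$part" ]; do
        i=$((i + 1))
        if [ "$i" -ge "$tries" ]; then
            return 1   # partition never appeared
        fi
        sleep 1
    done
    return 0
}
```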
The CentOS Stein OpenStack distribution depends on the Ceph Nautilus
release, while Kolla currently deploys Luminous on CentOS.
This change switches CentOS and OracleLinux builds to use Ceph Nautilus.
Support for auth UID has been removed in the Nautilus release [1][2], so
we have removed the --set-uid argument from calls to ceph-authtool.
Also, ceph-osd in bootstrap mode fails when no mon config is provided,
so we added --no-mon-config (the mon config is injected later, after
bootstrap).
Due to ceph-nfs build issues (the Ceph upstream nfs-ganesha RPMs relied
on an older version of userspace-rcu than the CentOS Storage SIG
packages provide), we need to move to the Ceph upstream RPM repos.
[1] d6def8ba11
[2] http://docs.ceph.com/docs/master/releases/nautilus/
Co-Authored-By: Michal Nasiadka <michal.nasiadka@nokia.com>
Change-Id: I000398f587c5f4d6cc8995e34e162eebc77bc3e3
Implements: blueprint centos-ceph-nautilus
When deploying an OSD, if the user does not use an extra block
partition, kolla automatically partitions the disk and then cleans up
the data on the disk partition. Sometimes the kernel's view of the
partitions is not updated in time, causing an error because the
partition cannot be found.
This commit fixes the problem.
Change-Id: I14708f38614dcb75268c2f460ae3d921748c2d10
Closes-bug: #1824787
When deploying a Ceph OSD with kolla, every OSD deployment moves the
host bucket under the default root in the crush map.
As a result, any manual crush map adjustments have to be redone after
adding or repairing an OSD.
This commit fixes the problem.
Change-Id: Ifdc3a1fd5fe37da529b2aee9811b12f744cff3bf
Closes-bug: #1821681
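The fix can be pictured as a guard of this shape (purely illustrative; kolla's real check consults the live crush map via the ceph CLI): only move the host bucket under the default root when it is not already placed.

```shell
# Illustrative idempotency check: given the host and the buckets already
# present under the target root, decide whether a crush move is needed.
crush_move_needed() {
    host="$1"; shift
    for existing in "$@"; do
        if [ "$existing" = "$host" ]; then
            return 1   # bucket already placed; leave the crush map alone
        fi
    done
    return 0           # host bucket not found; safe to add and move it
}
```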
The current bluestore disk label naming is inconsistent with
filestore. In the filestore format, partitions belonging to the same
OSD share the same prefix and differ only in their suffixes.
This patch makes bluestore disk naming follow the same convention.
Change-Id: I090cf055ebedc555b5ada35e140b7a7bb2a4cf8f
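The shared-prefix convention can be sketched as follows (the label strings and helper name are illustrative, not kolla's exact GPT labels):

```shell
# All partitions of one OSD share a prefix; the suffix encodes the role
# (e.g. B for block, W for block.wal, D for block.db, empty for data).
osd_part_label() {
    prefix="$1"   # common per-OSD label prefix
    role="$2"     # role suffix; may be empty for the data partition
    echo "${prefix}${role:+_${role}}"
}
```

All labels produced for one OSD then share the same prefix, matching the filestore scheme.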
Enhance the deployment of Kolla-Ceph bluestore OSDs.
A bluestore OSD deployment includes up to 4 partitions:
* one partition is for bluestore OSD information
* one partition is for bluestore block
* one partition is for bluestore block.wal
* one partition is for bluestore block.db
Bluestore OSDs can also be deployed on loop devices.
Partially-Implements: blueprint kolla-ceph-bluestore
Change-Id: I00eaa600a5e9ad4c1ebca2eeb523bca3d7a25128
Signed-off-by: tone.zhang <tone.zhang@arm.com>
Enable Kolla Ceph to deploy bluestore OSDs. With this patch, Kolla
Ceph can deploy bluestore OSDs across one, two or three storage
devices.
Before deploying a bluestore OSD, please prepare the devices. Refer
to [1] for the details of device initialization.
extend_start.sh: initialize and start bluestore OSD
find_disk.py: search the devices for bluestore OSD
[1]: specs/kolla-ceph-bluestore.rst
Partially-Implements: blueprint kolla-ceph-bluestore
Change-Id: I832f490de63e1aeb68814697cda610a51b622c1f
Signed-off-by: Tone Zhang <tone.zhang@arm.com>
In most cases, the disks used by Ceph have different sizes. Using the
default weight of 1 may block the cluster when one disk becomes full.
Using the disk size as the OSD weight is more reasonable.
TrivialFix
Change-Id: Ib875c7289188cbb9380355baf0c8048f1eb09332
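Size-based weighting follows Ceph's usual convention of one crush-weight unit per TiB of capacity; a minimal sketch (the helper name is an assumption, not kolla's exact code):

```shell
# Crush weight proportional to capacity: bytes -> TiB, 4 decimal places.
osd_weight_from_bytes() {
    awk -v b="$1" 'BEGIN { printf "%.4f", b / (1024 ^ 4) }'
}
```

A 1 TiB disk thus gets weight 1.0000 and a 512 GiB disk gets 0.5000, so fuller, smaller disks no longer receive the same share of data as larger ones.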
This allows us to specify external journals for OSDs, which can greatly
improve performance when the external journals are on solid-state
drives.
The new lookup and startup methods fix the previous races we had
preventing osds from being created properly.
This retains the same functionality as before and is completely
compatible with the previous method and labels; however, it does set
new labels for all newly bootstrapped OSDs, due to a limit on the
length of a GPT partition name.
Closes-Bug: #1558853
DocImpact
Partially-Implements: blueprint ceph-improvements
Change-Id: I61fd10cb35c67dabc53bd82270f26909ef51fc38
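On the GPT limitation mentioned above: the partition-name field of a GPT entry is 72 bytes, i.e. 36 UTF-16 code units, so an ASCII label longer than 36 characters cannot fit. A quick check (the helper name is illustrative):

```shell
# Succeed iff an ASCII label fits in a GPT partition name field
# (72 bytes = 36 UTF-16 code units).
fits_gpt_name() {
    [ "${#1}" -le 36 ]
}
```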
This was an attempt to get storage_interface to work properly, but that
work will not be completed and functional this cycle. There are design
topics around it, brought to light by the RAX gate failures, that still
need to be discussed.
TrivialFix
Change-Id: I65579f9e0e0dcf3fa51c0ea031ff474145457c40
This makes sure Ceph has a quorum and the cluster is functional
before attempting to use it. We also make sure udev has time to create
its links by looping a few times. This resolves the races found in the
bootstrap process.
TrivialFix
Change-Id: Ia4624916feb5c80b2a067e5a62c176c1a5dea460
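The udev-settling loop can be modelled with a generic retry helper (a sketch; kolla's actual loop is inlined in its bootstrap script):

```shell
# Retry a command up to N times, sleeping 1s between attempts, to give
# udev time to create its device links.
retry() {
    tries="$1"; shift
    i=1
    while ! "$@"; do
        if [ "$i" -ge "$tries" ]; then
            return 1   # command never succeeded
        fi
        i=$((i + 1))
        sleep 1
    done
    return 0
}
```

Usage would look like `retry 5 test -L /dev/disk/by-partlabel/SOME_LABEL` (the path is illustrative).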
Introduces a new flag to bootstrap cache devices
DocImpact
Partially-Implements: blueprint ceph-improvements
Change-Id: I09b5a0d5c61b3465237e5f01dc10120725561cd3
The majority of the start.sh code is identical. This removes that
duplicate code while still maintaining the ability to call code in a
specific container.
The start.sh is moved into /usr/local/bin/kolla_start in the container.
The extend_start.sh script is called by the kolla_start script at the
location /usr/local/bin/kolla_extend_start. It always exists because
we create a no-op kolla_extend_start in the base directory. We override
it with extend_start.sh in a specific image should we need to.
Of note, the neutron-agents container is exempt from this new
structure due to it being a fat container.
Additionally, we fix the inconsistent permissions throughout: repo
files are set to 644 and the scripts to 755 via a Docker RUN command,
to ensure someone's local permission change won't break the upstream
containers.
Change-Id: I7da8d19965463ad30ee522a71183e3f092e0d6ad
Closes-Bug: #1501295
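The layering can be demonstrated in miniature (the temp file stands in for an image-specific extend_start.sh; the real paths from the text are /usr/local/bin/kolla_start and /usr/local/bin/kolla_extend_start):

```shell
# kolla_start always sources kolla_extend_start; the base image ships a
# no-op version and specific images overwrite it when they need setup.
extend=$(mktemp)
echo 'echo "image-specific setup ran"' > "$extend"  # a per-image override
output=$(. "$extend")   # what kolla_start effectively does
rm -f "$extend"
echo "$output"
```

Because the no-op version always exists, kolla_start can source the file unconditionally, with no per-image branching.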