Commit Graph

492 Commits

Author SHA1 Message Date
Luciano Lo Giudice 1ee3d04fda First rewrite of ceph-mon with operator framework
This patchset implements the first rewrite of the charm using the
operator framework by simply calling into the hooks.
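
A minimal sketch of that hook-dispatch approach, assuming the ops
framework; the class and hook names are illustrative, not the
charm's actual code:

    import os
    import subprocess

    from ops.charm import CharmBase
    from ops.main import main

    class CephMonCharm(CharmBase):
        # Thin ops shim that re-uses the legacy hook scripts.
        def __init__(self, *args):
            super().__init__(*args)
            self.framework.observe(self.on.install, self._call_hook)
            self.framework.observe(self.on.config_changed, self._call_hook)

        def _call_hook(self, event):
            # Map the ops event name back to the hook file name.
            hook = event.handle.kind.replace('_', '-')
            subprocess.check_call([os.path.join('hooks', hook)])

    if __name__ == '__main__':
        main(CephMonCharm)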

This change also includes functional validation of charm upgrades
from the previous stable charm to the locally built charm.

Fix tempest breakage for python < 3.8

Co-authored-by: Chris MacNaughton <chris.macnaughton@canonical.com>

Change-Id: I61308bb2900134ea163d9e92444066a3cb0de43d
func-test-pr: https://github.com/openstack-charmers/zaza-openstack-tests/pull/849
2022-08-19 19:00:56 -03:00
Chris MacNaughton a1d0518c80 Disable insecure global-id reclamation
Closes-Bug: #1929262
Change-Id: Id9f4cfdd70bab0090b66cbc8aeb258936cbf909e
2022-08-16 16:56:37 -04:00
Hicham El Gharbi dfbda68e1a Create NRPE check to verify ceph daemons versions
This NRPE check reports whether the versions of the cluster daemons
have diverged.

WARN - any minor version has diverged
WARN - any versions are 1 release behind the mon
CRIT - any versions are 2 releases behind the mon
CRIT - any versions are ahead of the mon

A Juju action, 'get-versions-report', is also provided, which gives
users a quick way to see the daemon versions running on cluster
hosts.
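
A hedged sketch of that threshold logic (names are illustrative, not
the charm's actual code):

    OK, WARN, CRIT = 0, 1, 2

    def version_status(mon_release, daemon_release, minor_diverged=False):
        # Releases behind the mon; negative means the daemon is ahead.
        delta = mon_release - daemon_release
        if delta >= 2 or delta < 0:
            return CRIT   # two releases behind, or ahead of the mon
        if delta == 1 or minor_diverged:
            return WARN   # one release behind, or minor divergence
        return OK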

Closes-Bug: #1943628
Change-Id: I41b5c8576dc9cf885fa813a93e6d51e8804eb9d8
2022-07-19 12:18:06 +02:00
Billy Olsen d72e8db254 Updates for jammy enablement
- charmcraft: build-on 20.04 -> run-on 20.04/22.04 [*archs]
- Refresh tox targets
- Drop impish bundles and OSCI testing
- Add jammy metadata
- Default source is yoga
- Resync charmhelpers and charms.ceph

Change-Id: Ib62d7f882f22146419dfe920045b73452f9af2cb
2022-04-07 14:16:20 +01:00
Chris MacNaughton b6bcec8072 Resolve type change in Ceph Quincy for enabled_manager_modules
Change-Id: I4f81391e51312ec5795e3a3b840b2461e48cb3c4
2022-04-01 13:15:45 +00:00
Chris MacNaughton c07fb2dc6a Remove functionality for auth-supported
Closes-Bug: #1841445
Change-Id: I394d025ff5c0b4a73c6683d67b0949484a5924a1
2022-03-22 11:30:32 +01:00
Luciano Lo Giudice 5be69f4b17 Update osd-removal permissions for Ceph Pacific
For Luminous, read permissions for the mgr were enough, but for
Pacific and beyond we need broader permissions.

Change-Id: If9f3934d299a9d118832f54dd88afc920adce959
2022-03-16 16:18:36 -03:00
Luciano Lo Giudice 3f2730a93d Add 'mgr' permissions in removal key
These additional changes are needed for the 'ok-to-stop' and
'safe-to-destroy' commands to work correctly within OSD units
and prevent them from hanging indefinitely.

Change-Id: Ic0e1933bcba76126717f439dd5175d1fe835a807
2022-03-10 20:29:46 -03:00
Luciano Lo Giudice f280b5b412 Enhance the permissions for the removal key
In addition to the enabled commands, we also need the OSD
command 'safe-to-stop' to fully implement OSD removal.

Change-Id: I4ff51182148d25f07f5f2de2342cc970ffc1b7d9
2022-03-07 12:34:29 -03:00
Corey Bryant d695ab315f Fix handling of profile-name
The current code:
self.profile_name = op.get('crush-profile', profile_name)

will only default to profile_name if the 'crush-profile' key
doesn't exist in the op dictionary. If the 'crush-profile' key
exists and is set to None, the default profile_name is not used.

This change will use the default profile_name in both cases.
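
The difference is easy to demonstrate with plain dict semantics
(illustrative values):

    profile_name = 'replicated_rule'     # the default
    op = {'crush-profile': None}         # key present but unset

    # dict.get only falls back when the key is absent:
    assert op.get('crush-profile', profile_name) is None

    # 'or' also covers a present-but-None value:
    assert (op.get('crush-profile') or profile_name) == profile_name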

A full charm-helpers sync is done here.

Closes-Bug: #1960622
Change-Id: If9749e16eadfab5523d06c82f3899a83b8c6fdc1
2022-02-18 12:26:18 -05:00
Zuul 3c5f539b15 Merge "Create a new key to support OSD disk removal" 2022-02-15 11:17:55 +00:00
Aqsa Malik da798bdd95 Add profile-name parameter in create-pool action
This change adds a profile name parameter in the create-pool action that
allows a replicated pool to be created with a CRUSH profile other than
the default replicated_rule.

Closes-Bug: #1905573

Change-Id: Ib21ded8f4a977b4a2d57c6b6b4bb82721b12c4ea
2022-02-11 16:35:30 +01:00
Luciano Lo Giudice 701e1107f2 Create a new key to support OSD disk removal
This patchset creates a new key that permits ceph-mon units to
execute a set of commands that is needed to properly implement
full disk removal from within ceph-osd units.

Change-Id: Ib959e81833eb2094d02c7bdd507b1c8b7fbcd3db
2022-01-27 01:08:45 -03:00
Samuel Walladge 48c52fafdd Display information if missing OSD relation
When ceph-mon is blocked waiting for enough OSDs to become
available, it displays a message to that effect. But this is
misleading if ceph-mon has not been related to ceph-osd. So if the
two are not related and ceph-mon is waiting for OSDs, display a
message about the missing relation instead.
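
A minimal sketch of the resulting status logic, assuming
charmhelpers' hookenv and an 'osd' relation endpoint (the messages
are illustrative):

    from charmhelpers.core.hookenv import relation_ids, status_set

    def assess_osd_status(ready_osds, expected_osds):
        if not relation_ids('osd'):
            status_set('blocked', 'Missing relation: ceph-osd')
        elif ready_osds < expected_osds:
            status_set('waiting',
                       'Waiting for %d OSDs (have %d)'
                       % (expected_osds, ready_osds))
        else:
            status_set('active', 'Unit is ready')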

Closes-Bug: #1886558
Change-Id: Ic5ee9d33d2bb874af7fc7c325773f88c5661fcc6
2022-01-13 14:44:56 +10:30
Alex Kavanagh 9e5f668299 Fix get_mon_map() for octopus and later
The "ceph mon_status" command seems to have disappeared on octopus and
later, and is replaced by "ceph quorum_status". This changes the
get_mon_map() function to detect the underlying ceph version and do the
right thing.
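
A sketch of that version detection, assuming charmhelpers'
cmp_pkgrevno; the rest is illustrative, not the actual helper:

    import json
    import subprocess

    from charmhelpers.core.host import cmp_pkgrevno

    def get_mon_map(service='admin'):
        # 'ceph mon_status' went away in Octopus (15.x).
        cmd = ('quorum_status' if cmp_pkgrevno('ceph', '15.0.0') >= 0
               else 'mon_status')
        out = subprocess.check_output(
            ['ceph', '--id', service, cmd, '--format=json'])
        return json.loads(out.decode('utf-8'))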

Note that the fix is actually in charm-helpers, and this has been
manually synced into the charm [1].

[1] https://github.com/juju/charm-helpers/pull/659

Change-Id: I59cf6fc19cf2a91b0aef37059cdb0ed37379b5cb
Closes-Bug: #1951094
2021-11-26 12:31:43 +00:00
Corey Bryant 7cd789601d Add yoga bundles and release-tool syncs
* charm-helpers sync for classic charms
* sync from release-tools
* switch to release-specific zosci functional tests
* run focal-ussuri as smoke tests
* remove trusty, xenial, and groovy metadata/tests
* drop py35 and add py39
* charms.ceph sync

Change-Id: I214c0517b223da5fce9e942269fd8703422d1a2b
2021-11-17 13:46:05 -05:00
Zuul 1e148346b7 Merge "Add balancer module support for 'upmap'" 2021-10-05 22:29:16 +00:00
Luciano Lo Giudice 691605e6fc Add balancer module support for 'upmap'
This allows the user to change the configuration parameter
'balancer-mode' via Juju in order to set the balancer mode for Ceph.
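
A hedged sketch of applying that mode via the Ceph CLI; the handler
wiring is illustrative:

    import subprocess

    def set_balancer_mode(mode):
        # e.g. mode comes from the 'balancer-mode' config option.
        if not mode:
            return
        subprocess.check_call(['ceph', 'balancer', 'mode', mode])
        subprocess.check_call(['ceph', 'balancer', 'on'])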

Change-Id: I60dbd5f163e0c9d004275eff65db7ada41ad2660
Closes-Bug: #1888914
2021-10-04 11:53:21 -03:00
Alex Kavanagh 061e726ee8 charm-helpers sync for 21-10
Change-Id: I7e6c2303ae2eed691475a1c5209c27b9173e2bf2
2021-10-04 12:29:00 +01:00
Alex Kavanagh cb580a0b91 Add xena bundles
- add non-voting focal-xena bundle
- add non-voting impish-xena bundle
- charm-helpers sync for new charm-helpers changes
- update tox/pip.sh to ensure setuptools<50.0.0

Change-Id: If511b7fee8cf676b6ba7017aa60fe916ac9a26d9
2021-09-21 14:11:55 +01:00
Dmitrii Shcherbakov 82743ab7e5 Notify more relations when cluster is bootstrapped
Currently mon_relation only calls notify_rbd_mirrors when the
cluster is already bootstrapped, which leads to broker requests not
being handled for other relations in some cases.

The change also moves the bootstrap attempt code into a separate
function and adds unit tests for mon_relation to cover different
branches for various inputs.

Closes-Bug: #1942224
Change-Id: Id9b611d128acb7d49a9a9ad9c096b232fefd6c68
2021-09-01 23:26:57 +03:00
Liam Young be716fea82 Add support for dashboard relation
Add support for the dashboard relation. The relation enables the
mons to signal to the dashboard that the cluster is ready.

Change-Id: I279142d386a8bf369c0b9dff3b7be9d65f314bf5
2021-08-19 12:14:08 +00:00
Chris MacNaughton abb6871226 Only check for expected-osd-count until it has been met
Checking for enough OSDs to be presented by OSD units is
necessary during deploy time to ensure that clients can
correctly connect and perform their operations; however,
checking is useless post-deploy and can be harmful during
replacement operations.

This change introduces a restriction that this check should
only be done on the leader, and ensures that the check is
short-circuited once it has passed.
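
A sketch of the leader-only, short-circuiting check, assuming
charmhelpers' unitdata; the kv flag name is an assumption:

    from charmhelpers.core.hookenv import is_leader
    from charmhelpers.core.unitdata import kv

    def expected_osd_count_met(current, expected):
        db = kv()
        if not is_leader() or db.get('osd-count-met'):
            return True           # non-leaders skip; once met, always met
        if current >= expected:
            db.set('osd-count-met', True)
            db.flush()
            return True
        return False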

Closes-Bug: #1938970
Change-Id: Ie285bbc34692964acb35315f866fe617b0ef1305
2021-08-05 15:52:24 -05:00
Corey Bryant c21d0c562c c-h sync - restore proxy env vars for add-apt-repository
Change-Id: Id4886deff01dbf04861d8815d6816f13b0c6b735
2021-06-04 19:13:10 +00:00
Cornellius Metto 320ddae827 Add configuration options for disk usage alerting thresholds
The ceph cluster degrades to HEALTH_{WARN|CRIT} when the following
default thresholds are breached:

mon data avail warn = 30
mon data avail crit = 5

- These thresholds can be conservative. It might be desirable
  to change them.
- A specific common scenario is when ceph-mon units are run in lxd
  containers which report the disk usage of the underlying host. The
  underlying host may have its own monitoring and its own
  thresholds which can lead to duplicate or conflicting alerts.
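
For example, a hypothetical ceph.conf override tightening both
thresholds (the values are illustrative; the keys are the upstream
options quoted above):

    [mon]
    mon data avail warn = 15
    mon data avail crit = 3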


Closes-Bug: #1890777
Change-Id: I13e35be71697b98b19260970bcf9812a43ef9369
2021-05-12 17:25:50 +03:00
Alex Kavanagh e7aeaaf088 21.04 sync - add 'hirsute' in UBUNTU_RELEASES
The 'hirsute' key in c-h/core/host_factory/ubuntu.py:
UBUNTU_RELEASES had been missed out, and is needed for
hirsute support in many of the charms.  This sync is to
add just that key.  See also [1]

Note that this sync is only for classic charms.

[1] https://github.com/juju/charm-helpers/pull/598

Change-Id: I89019b188806981b786a9fa9a292195b52078193
2021-04-11 16:50:42 +01:00
Zuul cb5bd5a981 Merge "21.04 libraries freeze for charms on master branch" 2021-04-06 20:26:07 +00:00
Alex Kavanagh f180a2e729 21.04 libraries freeze for charms on master branch
* charm-helpers sync for classic charms
* build.lock file for reactive charms
* ensure tox.ini is from release-tools
* ensure requirements.txt files are from release-tools
* On reactive charms:
  - ensure stable/21.04 branch for charms.openstack
  - ensure stable/21.04 branch for charm-helpers

Change-Id: I7de7b61d63aef57c3242631b969d5bb54fe76ab1
2021-04-03 20:21:50 +01:00
Hemanth Nakkina 7bf9c879a0 Optimizations to reduce charm upgrade time
Charm upgrades take a long time for ceph-mon to reach the idle state
when there are a large number of OSDs and ceph clients and the
osd-relation data changes.
In these cases osd_relation triggers notify_client(), which takes a
significant amount of time because client_relation() is executed for
every related unit on each client relation. Some of the function
calls and ceph commands can be executed once per relation, or once
per notify_client call, instead of for every related unit.

ceph.get_named_key() is currently executed for every related unit,
and each execution takes a long time as it runs a minimum of two
ceph commands. This patch reduces the number of calls to
ceph.get_named_key() to once per relation.
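
A sketch of that per-relation memoisation; the names are
assumptions, and fetch stands in for ceph.get_named_key:

    _key_cache = {}

    def get_named_key_once(name, fetch):
        # fetch runs a minimum of two ceph commands, so call it at
        # most once per key name per hook invocation.
        if name not in _key_cache:
            _key_cache[name] = fetch(name)
        return _key_cache[name]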

Partial-Bug: #1913992
Change-Id: Ic455cd7c4876efafee221bc6e7a5ec61fee5643f
2021-04-03 17:22:45 +00:00
Liam Young 4b23b9dc34 Handle requests if they have errored
When checking whether a request is a duplicate, handle the case
where the request has errored and does not have an id.

Closes-Bug: #1918143
Change-Id: I314b3658b57fdfaa77f4c1a0c1139e6b7dd4b1c4
2021-03-08 19:22:44 +00:00
Pedro 490bba8497 Block same broker_req from running twice
The goal is to avoid broker request retries for every hook called
on ceph-mon.
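
A minimal sketch of the dedup guard, assuming charmhelpers'
unitdata; the key layout is an assumption:

    from charmhelpers.core.unitdata import kv

    def is_duplicate_broker_req(rid, req_id):
        db = kv()
        key = 'processed-broker-req-%s' % rid
        if db.get(key) == req_id:
            return True           # already handled; skip the retry
        db.set(key, req_id)
        db.flush()
        return False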

Change-Id: Iefd8d305409ba97804795eef9a5add81103d05a2
Closes-Bug: #1773910
Partial-Bug: #1913992
2021-02-18 12:00:12 +05:30
Alex Kavanagh 6c7a168f81 Hotfix charmhelpers sync for local_address() fix
The network-get --primary-address juju-info fails on pre-2.8.?
versions of juju.  This results in a NoNetworkBinding error.
Fall back to unit_get() in local_address() if that occurs.

Change-Id: I39648aa65299c77abe5790e5ac3cc29541142d46
2021-01-20 12:20:09 +00:00
Alex Kavanagh 7ac9cc68d0 Updates for testing period for 20.01 release
Includes updates to charmhelpers/charms.openstack for cert_utils
and unit-get for the install hook error on Juju 2.9

* charm-helpers sync for classic charms
* rebuild for reactive charms
* ensure tox.ini is from release-tools
* ensure requirements.txt files are from release-tools
* On reactive charms:
  - ensure master branch for charms.openstack
  - ensure master branch for charm-helpers

Change-Id: Ib761eb90cbc87620c990add6da7f4184e7dcb453
2021-01-16 12:57:34 +00:00
Alex Kavanagh 65d23459da Updates for testing period for 20.01 release
* charm-helpers sync for classic charms
* rebuild for reactive charms
* ensure tox.ini is from release-tools
* ensure requirements.txt files are from release-tools
* On reactive charms:
  - ensure master branch for charms.openstack
  - ensure master branch for charm-helpers

Change-Id: Ieb501893e211e442398a03b338072705a9d8b51a
2021-01-12 15:28:24 +00:00
Marius Oprin 00e7129d87 Sync charmhelpers
A recent charmhelpers change forwards the broker requests to the
ceph-rbd-mirror charm with information about the RBD mirroring mode.
This is needed for the Cinder Ceph Replication spec.

Change-Id: I1d2b5351574a8741e55a8e6482d0c4a168562050
Co-authored-by: Ionut Balutiou <ibalutoiu@cloudbasesolutions.com>
2020-11-23 17:56:59 +02:00
Aurelien Lourot e87ef6ec00 Add Groovy to the test gate
Also sync libraries

Change-Id: I4e9d276edde7fb46ebf6b641edb3ad5df86cd040
2020-11-12 11:23:01 +01:00
Frode Nordahl a165b2a3bb Forward broker requests on rbd-mirror relation
To be able to support mirroring of pools with advanced features
the RBD Mirror charm needs more information about the intent
behind pools requested in a deployment. We solve this by
forwarding all the broker requests in the deployment.

It is up to the consumer of the rbd-mirror relation to filter
the requests and relay the ones eligible for use on a remote
cluster.

Change-Id: I16196053bee93bdc4e5c62f5467d9e786b047b30
2020-11-05 10:55:42 +01:00
Chris MacNaughton e1fe9d1a3d Batch update to land Ubuntu Groovy support into the charms
Cherry-Pick from 09752a1527

Change-Id: I1baf66021be290985c58a7ea5e484fd80e09b4cf
2020-10-12 11:20:57 +02:00
Liam Young d875568ad7 Support ceph client over CMRs
Support Ceph clients over CMRs if and only if the
permit-insecure-cmr config option has been set to true; otherwise go
into a blocked state.

To support CMR clients, try to get the client service name from
relation data first before falling back to the remote unit name.
Using the remote unit name fails when the clients are connecting via
a cross-model relation.
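
A sketch of that fallback order; the relation key name is an
assumption:

    def client_service_name(relation_data, remote_unit):
        # Over a CMR the remote unit name is an opaque token, so
        # prefer the name the client publishes on the relation.
        name = relation_data.get('application-name')
        return name if name else remote_unit.split('/')[0]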

The clients side change is here: https://github.com/juju/charm-helpers/pull/481

Change-Id: If9616170b8af9eac309dc6e8edd670fb5cfd8e0f
Closes-Bug: #1780712
2020-10-01 12:27:01 +00:00
James Page 56946244f9 Make EC profiles immutable
Changing an existing EC profile can have some nasty side effects,
including crashing OSDs (which is why it's guarded with a --force).
Update the ceph helper to log a warning and return if an EC profile
already exists, effectively making them immutable and avoiding
any related issues.
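
A hedged sketch of that guard; the helper names are illustrative:

    import logging

    def ensure_erasure_profile(name, attrs, profile_exists, create_profile):
        # profile_exists/create_profile stand in for the real ceph helpers.
        if profile_exists(name):
            logging.warning("EC profile %s already exists; refusing to "
                            "modify it (changing it can crash OSDs)", name)
            return
        create_profile(name, attrs)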

Reconfiguration of a pool would be undertaken using actions:

  - create new EC profile
  - create new pool using new EC profile
  - copy data from old pool to new pool
  - rename old pool
  - rename new pool to original pool name

this obviously requires an outage in the consuming application.

Change-Id: Ifb3825750f0299589f404e06103d79e393d608f3
Closes-Bug: 1897517
2020-09-28 11:46:51 +01:00
Ponnuvel Palaniyappan 60a9a4f27a Remove chrony if inside a container
When running ceph-mon in containers, best practice is
to have chrony/ntp configured and installed on the bare
metal and then have the container trust the system
clock, as the container should not manage the system
clock.

The chrony package gets installed automatically as a dependency of
other packages; this change removes it.
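
A minimal sketch, assuming charmhelpers' is_container and apt_purge:

    from charmhelpers.core.host import is_container
    from charmhelpers.fetch import apt_purge

    def remove_unwanted_packages():
        # In a container the host owns the clock; drop chrony if present.
        if is_container():
            apt_purge(['chrony'], fatal=False)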

Also contains related changes for charms.ceph.

Change-Id: If8beb28ea5b5e6317180e52c3e32463e472276f4
Closes-Bug: #1852441
Depends-On: Ie3c9c5899c1d46edd21c32868938d3290db321e7
2020-09-15 09:11:47 +01:00
Frode Nordahl 210493277e Add BlueStore Compression support
Sync in updates from charm-helpers and charms.ceph.

Remove unit tests that belong to charms.ceph.

Depends-On: Ibec4e3221387199adbc1a920e130975d7b25343c
Change-Id: I153c22efb952fc38c5e3d36eed5d85c953e695f7
2020-08-26 15:33:27 +02:00
Frode Nordahl c0113217bf Unpin flake8, fix lint
Change-Id: Iab73f1127bfbdf11626727f3044366d2e5745439
2020-08-24 10:54:54 +02:00
James Page 4fd788d3a2 Updates for improved EC support
Sync charmhelpers and charms.openstack to pickup changes for
improved Erasure Coded pool support.

Update action code for EC profile creation for extended
option support and other charmhelpers changes.

Depends-On: I2547933964849f7af1c623b2fbc014fb332839ef
Change-Id: Iec4de19f7b39f0b08158d96c5cc1561b40aefa10
2020-08-07 15:24:59 +01:00
Alex Kavanagh e60d30630f Release sync for 20.08
- Classic charms: sync charm-helpers.
- Classic ceph based charms:  also sync charms.ceph
- Reactive charms: trigger a rebuild
- sync tox.ini
- sync requirements.txt and test-requirements.txt

Change-Id: Ifd8c81255770f95980c7fd4117e6f07e44eea2ee
2020-07-27 20:49:41 +01:00
Corey Bryant 0c318c9bc3 Sync charm-helpers for Victoria/Groovy updates
This sync picks up the release and version details for Victoria/Groovy.

Change-Id: I8b4c3046101ad41004b2fa5108b5aafed6c75070
2020-07-13 18:59:11 +00:00
Zuul 74ad5886a9 Merge "Updates for 20.08 cycle start for groovy and libs" 2020-06-08 08:19:37 +00:00
Chris MacNaughton 1f85b7c001 The prometheus relation should not use the cluster public address
Change-Id: I2a43fc77f1f8bc4c16aeeecb0ba9a37615642d1c
2020-06-04 10:36:13 +02:00
Alex Kavanagh eb0efab571 Updates for 20.08 cycle start for groovy and libs
- Adds groovy to the series in the metadata
- Classic charms: sync charm-helpers.
- Classic ceph based charms:  also sync charms.ceph
- Reactive charms: trigger a rebuild

Change-Id: I2f7aaaa327a82f85e9b90b9369c81db86d848324
2020-06-02 14:28:08 +01:00
James Page 8c7516ebe0 Set sane min pg size with autoscaler
Resync charmhelpers to pick up behavioural changes in
autoscaler configuration for pools.

This ensures that pools with a calculated pg count below the
default of 32 get a min size set to match the calculated count,
avoiding scale-up to 32 pgs.
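
A sketch of the resulting behaviour using the pg_num_min pool
property; the function name and wiring are illustrative:

    import subprocess

    def pin_min_pgs(pool, calculated_pgs, default_min=32):
        # Stop the autoscaler growing a small pool up to the default.
        if calculated_pgs < default_min:
            subprocess.check_call(
                ['ceph', 'osd', 'pool', 'set', pool,
                 'pg_num_min', str(calculated_pgs)])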

This avoids exceeding the max pgs per osd limit in smaller test
clusters.

Change-Id: Ic34d029dae9c67dbba3e2e502d7c9ac4576fcfa5
Closes-Bug: 1872748
2020-05-11 12:48:11 +01:00