Use the remove command (instead of rm) which is supported in
Ceph firefly or later (which covers all current deployment targets).
This resolves an issue with the ceph -> ceph-mon/ceph-osd migration
process for Luminous based deployments.
Change-Id: I127930e7c4b80465796b8270a9966b08f7c03037
Closes-Bug: 1729370
Add new relation to support bootstrapping a new deployment
of the ceph-mon charm from an existing ceph charm deployment,
supporting migration away from the deprecated ceph charm.
Each member of the existing ceph application will present
the required fsid and monitor-secret values, as well as its
public address so that the related ceph-mon units can
correctly seed from the existing MON cluster.
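The data each existing ceph unit presents might be assembled as in this minimal Python sketch (the function name and relation keys are illustrative assumptions, not the charm's actual API):

```python
def bootstrap_relation_data(fsid, monitor_secret, public_addr):
    """Assemble the data an existing ceph unit presents on the relation
    so that related ceph-mon units can seed from the running MON cluster.
    Key names here are illustrative, not the charm's exact wire format."""
    return {
        'fsid': fsid,
        'monitor-secret': monitor_secret,
        'ceph-public-address': public_addr,
    }
```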
Provide a stop hook implementation which leaves OSD
services running but will remove the ceph.conf provided
directly from this charm, falling back to ceph.conf provided
by other charms installed on the same machine. MON and MGR
services will be shut down and disabled.
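The stop-hook flow can be sketched as below (function and service names are assumptions for illustration; the real hook uses the charm's own helpers):

```python
import os


def stop_hook(conf_path='/etc/ceph/ceph.conf', dry_run=True):
    """Illustrative stop-hook flow: leave OSD services running, remove
    this charm's ceph.conf so a ceph.conf provided by other charms on
    the same machine takes over, then stop and disable MON and MGR.
    Returns the list of actions for inspection; with dry_run=True no
    filesystem change is made."""
    actions = [('remove', conf_path)]
    if not dry_run and os.path.exists(conf_path):
        os.remove(conf_path)
    for svc in ('ceph-mon', 'ceph-mgr'):
        actions.append(('stop', svc))
        actions.append(('disable', svc))
    return actions
```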
Closes-Bug: 1665159
Change-Id: I9bd1d7630a8eff53c65cb0f07d17e095fc7f32a9
Depends-On: Iac34d1bee4b51b55dfb3d14d315aae8526a0893c
Bring ceph charm inline with ceph-mon and ceph-osd charms,
supporting all upgrades paths for trusty and xenial deployments.
Change-Id: I8284e1f9b583b34cb68babec69407edc14c04930
Closes-Bug: 1662863
Add highly experimental support for bluestore storage format for
OSD devices; this is disabled by default and should only be enabled
in deployments where loss of data does not present a problem!
Change-Id: I67323e26a4698de4e08c8c755db232399f7fed02
Depends-On: I36f7aa9d7b96ec5c9eaa7a3a970593f9ca14cb34
This change skips over any device that does not start with a leading
folder separator ('/'). Allowing such entries causes an OSD to be
created out of the charm directory. This can be caused by something as
innocuous as 2 spaces between devices. The result is that the root
device is also running an OSD, which is undesirable.
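The filtering can be sketched as a one-line check (the function name is illustrative; splitting on a single space shows how doubled spaces produce bogus empty entries):

```python
def filter_osd_devices(raw_config):
    """Keep only absolute device paths from a space-separated config
    value. Entries without a leading '/' (including the empty strings
    produced by doubled spaces) would otherwise cause an OSD to be
    created out of the charm directory on the root device."""
    return [d for d in raw_config.split(' ') if d.startswith('/')]
```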
Change-Id: I5b52096da0b6f100ae9835c339905585425b27ae
Closes-Bug: 1652175
Only check for upgrade requests if the local unit is installed
and bootstrapped, avoiding attempts to upgrade on initial
execution of config-changed for trusty UCA pockets.
Note that the upgrade process relies on a running ceph cluster.
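The gating logic amounts to a simple conjunction, sketched here (function name and return values are illustrative, not the charm's actual hook code):

```python
def maybe_check_for_upgrade(installed, bootstrapped):
    """Only consider source upgrades once the local unit is installed
    and bootstrapped; the upgrade path relies on a running ceph
    cluster, so earlier config-changed runs must skip the check."""
    if installed and bootstrapped:
        return 'check-for-upgrade'
    return 'skip-upgrade-check'
```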
Change-Id: Ia3efe2f8cfdac4317809681e7d169725c6bd9ef2
Closes-Bug: 1662943
This function is no longer necessary as we do
not need to ensure that the remote units can
create their own pools.
Change-Id: I7e46b97ad2bb18a6e11a393a34f40e9bf51445c7
Partial-Bug: 1424771
Addition of configurable availability_zone allows the
administrator to deploy Ceph with two dimensions of
crush locations, one from config and one from Juju's
availability zone.
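Combining the two dimensions might look like this sketch (the bucket types 'row' and 'rack' and the function name are illustrative assumptions, not the charm's exact CRUSH layout):

```python
def crush_location(config_az, juju_az, hostname):
    """Build a CRUSH location string with two dimensions: one zone from
    charm config and one from Juju's availability zone. Bucket types
    used here are illustrative."""
    parts = ['root=default']
    if config_az:
        parts.append('row={}'.format(config_az))
    if juju_az:
        parts.append('rack={}'.format(juju_az))
    parts.append('host={}'.format(hostname))
    return ' '.join(parts)
```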
Change-Id: Icd0ee2eeaea8bad2b78f2ed46176084e01601261
The nrpe ceph status script relies on lockfile-create, but
lockfile-progs (package containing lockfile-create) was missing from
the install. Install it when related to nagios, and on upgrade-charm
when related to nagios.
Change-Id: I0addf9993d486a4d305dd554237efe554d4608d4
Closes-Bug: 1629104
The rolling upgrade code sets keys in the ceph mon
cluster to discover whether it can upgrade itself. This
patch addresses an issue where the upgrade code was not
taking into account multiple upgrades to newer ceph versions
in a row.
Change-Id: Icae681e1817ce50039ef22a0677398fe84057bf7
Juju 2.0 provides support for display of the version of
an application deployed by a charm in juju status.
Insert the application_version_set function into the
existing assess_status function - this gets called after
all hook executions, and periodically after that, so any
changes in package versions due to normal system updates
will also be reflected in the status output.
This review also includes a resync of charm-helpers to
pickup hookenv support for this feature.
Change-Id: I22763c26a28d397688f02845f0acb8021320a5ae
This includes a resync of charms_ceph which raises the directory up one
level. The charms_ceph change that we're syncing in renames the ceph.py
file to __init__.py, removing the second level of namespacing.
Change-Id: I8773a26266a2a13f92083e89db957a6454df9bb3
This change ensures that when ceph is upgraded from an
older version that uses root to a newer version that
uses ceph as the process owner that all directories
are chowned.
As the ceph charm can also host OSD processes, ensure that
any ceph-osd daemons are stopped and started during the
upgrade process.
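A minimal sketch of the ownership fix, assuming the upgrade helper walks ceph's directories (uid/gid lookup and the ceph-osd stop/start handling are elided; the injectable chown parameter exists only to make the sketch testable):

```python
import os


def chown_ceph_dirs(dirs, uid, gid, chown=os.chown):
    """Recursively change ownership of ceph's directories from root to
    the ceph user when upgrading to a release whose daemons run as
    ceph. Returns the directories visited."""
    changed = []
    for d in dirs:
        for root, subdirs, files in os.walk(d):
            for name in subdirs + files:
                chown(os.path.join(root, name), uid, gid)
            chown(root, uid, gid)
            changed.append(root)
    return changed
```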
Change-Id: Ief3fd6352b440b7740965746cd0d1d846c647f84
Closes-Bug: 1600338
This change moves our ceph.py and ceph_broker.py into
a separate repository that we can share between various
ceph related Juju projects, along with a Makefile
change to use a new git_sync file to partially sync
a git repository into a specified path.
Change-Id: I8942d2f3411acec197fd6b854c1d9e50457502a5
The pause and resume actions shell out to the ceph command to run
OSD operations (in/out).
Because the default cephx key given out by the monitor cluster does
not contain the correct permissions, these commands fail.
Use the osd-upgrade user which has the correct permissions.
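The resulting command line can be sketched as follows (the helper name is illustrative; `--id osd-upgrade` selects the cephx user whose key carries the needed permissions):

```python
def osd_cmd(osd_id, action):
    """Build the ceph CLI invocation for marking an OSD in or out,
    authenticating as the osd-upgrade cephx user rather than the
    default key, which lacks the required permissions."""
    assert action in ('in', 'out')
    return ['ceph', '--id', 'osd-upgrade', 'osd', action, str(osd_id)]
```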
Closes-Bug: 1602826
Depends-On: I6af43b61149c6eeeeb5c77950701194beda2da71
Change-Id: I95bedcdea622fbf2fd799e63932cedd0d577568a
All contributions to this charm were made under Canonical
copyright; switch to Apache-2.0 license as agreed so we
can move forward with official project status.
In order to make this change, this commit also drops the
inclusion of upstart configurations for very early versions
of Ceph (argonaut), as they are no longer required.
Change-Id: I5e5db16b6f04ee8282275e9fa63a8d864c5b51ec
Adds a new config-flags option to the charm that
supports setting a dictionary of ceph configuration
settings that will be applied to ceph.conf.
This implementation supports config sections so that
settings can be applied to any section supported by
the ceph.conf template in the charm.
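Rendering such a dictionary into ceph.conf sections might look like this sketch (the function name is an assumption; the real charm applies the settings through its ceph.conf template):

```python
def render_config_flags(flags):
    """Render a config-flags dictionary of {section: {key: value}}
    into ceph.conf-style INI text, sorted for deterministic output."""
    lines = []
    for section, options in sorted(flags.items()):
        lines.append('[{}]'.format(section))
        for key, value in sorted(options.items()):
            lines.append('{} = {}'.format(key, value))
    return '\n'.join(lines)
```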
Change-Id: I5ed2530b1e06a4565029d62124b469b97e17d342
Closes-Bug: 1522375
The radosgw relation was only providing information
when executed by a leader unit. This patch ensures
that the minimum info is provided regardless.
Closes-Bug: 1570823
Change-Id: Ice044350c0c7b30ce65554b2ba5476d537588126
Juju 2.0 provides support for network spaces, allowing
charm authors to support direct binding of relations and
extra-bindings onto underlying network spaces.
Add public and cluster extra bindings to this charm to
support separation of client facing and cluster network
traffic using Juju network spaces.
Existing network configuration options will still be
preferred over any Juju provided network bindings, ensuring
that upgrades to existing deployments don't break.
Change-Id: I4df75a40f5308f701f15c45d3d7b1df1e03832ad
This changeset provides pause and resume actions to the ceph charm.
The pause action issues a 'ceph osd out <local_id>' for each of the
ceph osd ids that are on the unit. The action does not stop the
ceph osd processes.
Note that if the pause-health action is NOT used on the ceph charm then the
cluster will start trying to rebalance the PGs across the remaining OSDs. If
the cluster reaches its 'full ratio' then this will be a breaking action.
The charm does NOT check for this eventuality.
The resume action issues a 'ceph osd in <local_id>' for each of the
local ceph osd processes on the unit.
The charm 'remembers' that a pause action was issued, and if
successful, it shows a 'maintenance' workload status as a reminder.
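The action pair can be sketched as below (function names and the state dict are illustrative; commands are returned rather than executed, and the OSD processes themselves are never stopped):

```python
def pause(local_osd_ids, state):
    """Mark each local OSD 'out' and remember the pause so the charm
    can report a 'maintenance' workload status."""
    state['paused'] = True
    return [['ceph', 'osd', 'out', str(i)] for i in local_osd_ids]


def resume(local_osd_ids, state):
    """Mark each local OSD 'in' again and clear the pause marker."""
    state['paused'] = False
    return [['ceph', 'osd', 'in', str(i)] for i in local_osd_ids]


def workload_status(state):
    """Show 'maintenance' while a successful pause is remembered."""
    return 'maintenance' if state.get('paused') else 'active'
```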
Change-Id: Ic5b5b33e59e72e13843d874a08e3d142a1befde3
This change adds functionality to allow the ceph monitor cluster to
upgrade in a serial rolled fashion. This will use the ceph monitor
cluster itself as a locking mechanism and only allows 1 ceph monitor
at a time to upgrade. If a monitor has been waiting on the previous
server for more than 10 minutes and hasn't seen it finish it will
assume it died during the upgrade and proceed with its own upgrade.
Limitations of this patch: as long as the monitor cluster does not
split-brain this should work fine. It also assumes that NTP keeps
clocks across the ceph cluster fairly accurate.
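The ten-minute wait policy can be sketched as follows (a sketch of the locking policy only; the real charm stores upgrade markers in the mon cluster itself, and the injectable clock/sleep parameters exist only to make the sketch testable):

```python
import time


def wait_for_previous(done_check, timeout=600, interval=10,
                      clock=time.monotonic, sleep=time.sleep):
    """Wait up to `timeout` seconds (default 10 minutes) for the
    previous monitor to finish its upgrade; if it never does, assume
    it died mid-upgrade and proceed with our own upgrade anyway."""
    start = clock()
    while clock() - start < timeout:
        if done_check():
            return 'previous-finished'
        sleep(interval)
    return 'timed-out-proceeding'
```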
Change-Id: I04e227ea0713f0eaa4b4d78ad856e3f06fa7f225
Add charmhelpers.contrib.hardening and calls to install,
config-changed, upgrade-charm and update-status hooks. Also
add new config option to allow one or more hardening
modules to be applied at runtime.
Change-Id: I230b8c11ba32395b708b9300d6b91dad728194e8
Currently a message is logged when no storage is found on a
ceph node, but it does not move the charm into the
'blocked' state.
An is_storage_fine function has been created in ceph_hooks.py
to check the storage devices, and it is now used in
assess_status.
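A minimal sketch of the intended behaviour (helper logic and status messages are illustrative, not the charm's exact code):

```python
def is_storage_fine(devices):
    """Return True when at least one usable storage device is known."""
    return bool(devices)


def assess_status(devices):
    """Set the workload status to 'blocked' when no usable storage
    devices are found, instead of only logging a message."""
    if not is_storage_fine(devices):
        return ('blocked', 'No storage devices detected')
    return ('active', 'Unit is ready')
```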
Change-Id: I790fde0280060fa220ee83de2ad2319ac2c77230
Closes-Bug: 1424510