This patchset implements key rotation in the ceph-radosgw charm,
by replacing the keyring file if it exists and the ceph-mon
relation reports a new key.
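The replacement logic can be sketched as follows; the helper name and keyring layout are illustrative, not the charm's actual identifiers:

```python
import os


def maybe_rotate_key(new_key, keyring_path):
    """Rewrite the keyring file if it exists and the mon-reported key changed."""
    if not new_key or not os.path.exists(keyring_path):
        return False  # nothing reported yet, or no keyring to replace
    with open(keyring_path) as f:
        if new_key in f.read():
            return False  # key unchanged, nothing to do
    with open(keyring_path, 'w') as f:
        f.write('[client.rgw]\n\tkey = {}\n'.format(new_key))
    return True
```

A rotation is thus a no-op unless the keyring already exists and the key actually differs.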
Change-Id: I447b5f827e39118e7dbd430b1c63b3ec4ea3e176
func-test-pr: https://github.com/openstack-charmers/zaza-openstack-tests/pull/1195
This is a rebuild/make sync for charms to pick up the charm-helpers fix for
any inadvertent accesses of ['ca'] in the relation data before it is available
from vault in the certificates relation. The fix in charm-helpers is in [1].
[1] https://github.com/juju/charm-helpers/pull/824
Closes-Bug: #2028683
Change-Id: Ie05a9ff536700282dc0c66816b50efee5da62767
The latest Ceph versions forbid pool names that start with a dot.
Since the RadosGW charm makes extensive use of such pool names, this
patchset fixes that issue.
In addition, the Ceph libraries are synced as well, since they
were outdated.
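The rename rule can be sketched as below, with a hypothetical helper name; the charm's real migration logic is more involved:

```python
def ensure_valid_pool_name(name):
    """Drop the leading dot that newer Ceph releases reject in pool names."""
    return name[1:] if name.startswith('.') else name
```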
Change-Id: I50112480bb3669de08ee85a9bf9a594b379e9ec3
* sync charm-helpers to classic charms
* change openstack-origin/source default to quincy
* add mantic to metadata series
* align testing with bobcat
* add new bobcat bundles
* add bobcat bundles to tests.yaml
* add bobcat tests to osci.yaml
* update build-on and run-on bases
* drop kinetic
* add additional unit test https mocks needed since
charm-helpers commit 6064a34627882d1c8acf74644c48d05db67ee3b4
* update charmcraft_channel to 2.x/stable
Change-Id: I2d9c41c294668c3bb7fcba253adb8bc0c939d150
This option is required for server-side encryption to be allowed
when radosgw is behind a reverse proxy,
as is the case here when certificates are configured and apache2 is running.
ref. https://docs.ceph.com/en/latest/radosgw/encryption/
It is safe to always enable when https is configured in the charm,
because it will be securely behind the reverse proxy in the unit.
This option must not be enabled when https is not configured in the charm,
because this would allow clients to spoof headers.
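The gating described above can be sketched as follows (helper name hypothetical; the option itself, `rgw trust forwarded https`, is from the linked Ceph docs):

```python
def forwarded_https_ctxt(https_configured):
    """Trust forwarded-HTTPS headers only when behind our own proxy.

    Safe when the charm terminates TLS in apache2 inside the unit;
    dangerous otherwise, since clients could spoof the headers.
    """
    if https_configured:
        return {'rgw trust forwarded https': 'true'}
    return {}
```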
Closes-Bug: #2021560
Change-Id: I940f9b2f424a3d98936b5f185bf8f87b71091317
A new relation with primary/secondary nomenclature is added and the
old master/slave relation is marked as *Deprecated*. In the future,
the master/slave relation will be removed completely.
Change-Id: I9cda48b74a20aaa9a41baedc79332bfaf13951d3
func-test-pr: https://github.com/openstack-charmers/zaza-openstack-tests/pull/926
Adds an implementation of the relation-departed hooks to cleanly remove
participant sites from the multisite system. Replication between
the zones is stopped and both zones split off to continue as
separate master zones.
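The departure handling might look roughly like the following sequence of `radosgw-admin` calls, a sketch under the assumption that promoting the local zone and committing a new period is sufficient; it is not the charm's exact code:

```python
def split_zone_cmds(zone):
    """Commands to let a departing zone carry on as its own master zone."""
    return [
        # Promote the local zone to master/default in its own right.
        ['radosgw-admin', 'zone', 'modify', '--rgw-zone', zone,
         '--master', '--default'],
        # Commit the configuration change as a new period.
        ['radosgw-admin', 'period', 'update', '--commit'],
    ]
```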
Change-Id: I420f7933db55f3004f752949b5c09b1b79774f64
func-test-pr: https://github.com/openstack-charmers/zaza-openstack-tests/pull/863
* sync charm-helpers to classic charms
* change openstack-origin/source default to zed
* align testing with zed
* add new zed bundles
* add zed bundles to tests.yaml
* add zed tests to osci.yaml and .zuul.yaml
* update build-on and run-on bases
* add bindep.txt for py310
* sync tox.ini and requirements.txt for ruamel
* use charmcraft_channel 2.0/stable
* drop reactive plugin overrides
* move interface/layer env vars to charmcraft.yaml
Change-Id: Ieb1ef7b7ab76775f5769621a6a7cbcfb18c40b7f
Multisite config values (realm, zonegroup, zone) are written
to ceph.conf as the defaults without verifying their existence, which
causes failures for commands that rely on those default values.
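The fix amounts to emitting a default only when the named entity actually exists; a sketch with hypothetical names:

```python
def multisite_defaults(config, realms, zonegroups, zones):
    """Return only the rgw defaults whose realm/zonegroup/zone exist."""
    ctxt = {}
    if config.get('realm') in realms:
        ctxt['rgw realm'] = config['realm']
    if config.get('zonegroup') in zonegroups:
        ctxt['rgw zonegroup'] = config['zonegroup']
    if config.get('zone') in zones:
        ctxt['rgw zone'] = config['zone']
    return ctxt
```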
Closes-Bug: #1987127
Change-Id: I0ab4df34f0000339227e5d5b80352355ea7bd36e
1.) Currently, multisite can only be configured when the system is being
deployed from scratch. Migration works by renaming the existing
Zones/Zonegroups (Z/ZG) to the Juju config values on the primary site
before the secondary site pulls the realm data, and then renaming and
configuring the secondary Zone accordingly.
During migration:
2.) If multiple Z/ZG not matching the config values are present at
the primary site, the leader unit will block and prompt use of
'force-enable-multisite', which renames and configures the selected Z/ZG
according to the multisite config values.
3.) If the site being added as a secondary already contains buckets,
the unit will block and prompt the operator to purge all such buckets
before proceeding.
Closes-Bug: #1959837
Change-Id: I01a4c1c4551c797f0a32951dfbde8a1a4126c2d6
func-test-pr: https://github.com/openstack-charmers/zaza-openstack-tests/pull/840
This patchset adds two additional keys to the ceph.conf file,
which are used in multisite configurations when present.
Change-Id: I51ca46bbb3479cb73ec4d9966208ed794f0ed774
Closes-Bug: #1975857
Ceph radosgw supports [0] the swift health check endpoint
"/swift/healthcheck". This change adds the haproxy
configuration [1] necessary to take the response of "GET
/swift/healthcheck" into account when determining the health
of a radosgw service.
For testing, I verified that:
- HAProxy starts and responds to requests normally with this
configuration.
- Servers with status != 2xx or 3xx are removed from the
backend.
- Servers that take too long to respond are also removed
from the backend. The default timeout value is 2s.
[0] https://tracker.ceph.com/issues/11682
[1] https://www.haproxy.com/documentation/hapee/2-0r1/onepage/#4.2-option%20httpchk
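A minimal haproxy fragment along these lines (backend name illustrative; the charm's generated configuration carries more settings):

```
backend ceph-radosgw
    option httpchk GET /swift/healthcheck
    http-check expect rstatus ^[23]
    timeout check 2s
```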
Closes-Bug: 1946280
Change-Id: I82634255ca3423fec3fc15c1e714dcb31db5da7a
Fix the create_system_user method so it returns the access_key
and secret when a user is created.
This patch also includes the following changes:
* Improve logging of multisite methods to help with debugging issues.
* Fix multisite relations in bundles.
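The returned credentials come from the JSON that `radosgw-admin user create` prints; the parsing can be sketched as (helper name hypothetical):

```python
def extract_user_keys(user_json):
    """Pull (access_key, secret_key) from radosgw-admin user JSON output."""
    keys = user_json.get('keys') or []
    if not keys:
        return None, None
    return keys[0].get('access_key'), keys[0].get('secret_key')
```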
Func-Test-Pr: https://github.com/openstack-charmers/zaza-openstack-tests/pull/667
Closes-Bug: #1950329
Change-Id: I0528fe7f4a89c69f2790a0e472f6f43e23c2de19
Add a radosgw-user relation to allow charms to request a user. The
requesting charm should supply the 'system-role' key in the app
relation data bag to indicate whether the requested user should
be a system user. This charm creates the user if it does not exist,
or looks up the user's credentials if it does. The username and
credentials are then passed back to the requester via the
app relation data bag. The unit's radosgw URL and daemon id
are also passed back, this time using the unit relation data
bag.
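The split between the two data bags can be sketched as follows (key names illustrative, not the interface's exact schema):

```python
def build_user_reply(username, access_key, secret_key, unit_url, daemon_id):
    """App data carries the credentials; unit data carries per-unit details."""
    app_data = {
        'uid': username,
        'access-key': access_key,
        'secret-key': secret_key,
    }
    unit_data = {
        'url': unit_url,
        'daemon-id': daemon_id,
    }
    return app_data, unit_data
```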
Change-Id: Ieff1943b02f490559ccd245f60b744fb76a5d832
When the MonContext becomes incomplete during regular operation,
for example due to the replacement of an existing mon unit after a failure,
Ceph RadosGW should be able to continue while the new mon
bootstraps itself into the cluster. By ensuring that the
context can complete with one of the mons not reporting an
FSID, the remaining members of the monitor cluster can
support the continued functioning of RadosGW.
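The relaxation can be sketched as accepting a single agreed FSID even when some mons have not reported yet (function name hypothetical):

```python
def agreed_fsid(mon_unit_data):
    """Return the fsid if all reporting mons agree, tolerating silent units."""
    fsids = {d.get('fsid') for d in mon_unit_data if d.get('fsid')}
    return fsids.pop() if len(fsids) == 1 else None
```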
Closes-Bug: #1938919
Change-Id: I293224f46d06cc427b2d3c8f4ae65366ed06909e
When radosgw packages are upgraded, the radosgw service needs to
be restarted by the charm. Check to see that packages were installed
on the upgrade path and if so, restart the radosgw service.
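The check boils down to comparing package versions captured before and after the upgrade; a sketch with hypothetical names:

```python
def services_to_restart(pre_versions, post_versions, services=('radosgw',)):
    """Restart the listed services only if any tracked package changed."""
    changed = any(pre_versions.get(pkg) != ver
                  for pkg, ver in post_versions.items())
    return list(services) if changed else []
```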
Change-Id: I61055ea4605a9a7c490c18f611d0eb583c617ce3
Closes-Bug: #1906707
Guarantee that the object-store URL is updated when the certificates
relation is completed.
Sync release-tools tox and requirements
Change-Id: I4ca967f2c5c5eedfc56969785fcf23e4063d2a78
Introduce support for the beast web frontend for the Ceph
RADOS Gateway which brings improvements to speed and scalability.
Default behaviour is changed in that for Octopus and later
(aside from some unsupported architectures) beast is enabled by
default; for older releases civetweb is still used.
This may be overridden using the 'http-frontend' configuration
option which accepts either 'beast' or 'civetweb' as valid
values. 'beast' is only supported with Ceph Mimic or later.
Closes-Bug: 1865396
Change-Id: Ib73e58e21219eca611cd4293da69bf80040f5803
Ceph RGW checks the revocation list every 600 seconds. This is not
required for non-PKI tokens, and PKI tokens were removed in the OpenStack
Pike release. The result is unnecessary log noise in Ceph and Keystone.
Set the rgw keystone revocation interval to 0 in ceph conf. This
parameter was also removed upstream as of Ceph Octopus, so ensure
it is not added from the Octopus release onwards.
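Since Ceph release names are alphabetically ordered, the release gate can be sketched with a plain string comparison (helper name hypothetical):

```python
def revocation_ctxt(ceph_release):
    """Emit a zero revocation interval only where the option still exists."""
    if ceph_release < 'octopus':  # Ceph release names sort alphabetically
        return {'rgw keystone revocation interval': '0'}
    return {}
```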
Closes-Bug: #1758982
Change-Id: Iaeb10dc25bb52df9dd3746ecf4fe5859d4efd459
Ceph RADOS gateway >= Mimic has an additional metadata pool (otp).
Add this to the broker request to ensure that it's created correctly
by the ceph-mon application rather than being auto-created by the
radosgw application.
Change-Id: I5e9b4e449bd1bc300225d223329bb62f3a381705
Closes-Bug: 1921453
Fix an oversight and regression in commit c97fced794
("Close previously opened ports on port config change").
The comparison between an integer and a string (returned
by .split()) always evaluates as unequal, and thus 'port 80'
is closed when upgrading the charm.
Make sure the types are set to str. Right now it should
only be needed for port and not opened_port_number; but
let's future proof both sides of the comparison.
(Update: using str() vs int(), as apparently int() might
fail but str() should always work no matter what it gets;
thanks, Alex Kavanagh!)
Before:
$ juju run --unit ceph-radosgw/0 opened-ports
80/tcp
$ juju upgrade-charm --path . ceph-radosgw
$ juju run --unit ceph-radosgw/0 opened-ports
$
@ log:
2021-04-05 15:08:04 INFO juju-log Closed port 80 in favor of port 80
$ python3 -q
>>> x=80
>>> y='80/tcp'
>>> z=y.split('/')[0]
>>> z
'80'
>>> x
80
>>> x != z
True
>>> x=str(x)
>>> x != z
False
After:
$ juju run --unit ceph-radosgw/1 opened-ports
80/tcp
$ juju upgrade-charm --path . ceph-radosgw
$ juju run --unit ceph-radosgw/1 opened-ports
80/tcp
Signed-off-by: Mauricio Faria de Oliveira <mfo@canonical.com>
Change-Id: I2bcdfec1459ea45d8f57b850b7fd935c360cc7c1
The 'hirsute' key in c-h/core/host_factory/ubuntu.py:
UBUNTU_RELEASES had been missed out, and is needed for
hirsute support in many of the charms. This sync is to
add just that key. See also [1]
Note that this sync is only for classic charms.
[1] https://github.com/juju/charm-helpers/pull/598
Change-Id: I8544b62b2c7e5f38488f564af57dbe815638bf32
Older charms pass endpoint data with the legacy method, without
a service prefix (e.g., `admin_url` instead of `swift_admin_url`).
After a charm upgrade the endpoint data is set with the new method,
with a service prefix; however, the legacy endpoint data is still
there, as it has not been removed.
The keystone charm checks first for the legacy method, and if
it's found, the new method is ignored and any endpoint changes
made on the new charm (e.g., port) are not applied.
So make sure to remove the legacy endpoint settings from the
relation, so that the keystone charm can pick up e.g. port changes,
and can even set up the s3 endpoint after charm upgrades between
the legacy method and the new method.
Simplified test case:
- Old charm:
$ juju deploy cs:ceph-radosgw-285 # + keystone/percona-cluster
$ openstack endpoint list --service swift
| ... | http://10.5.2.210:80/swift |
$ juju config ceph-radosgw port=1111
$ openstack endpoint list --service swift
| ... | http://10.5.2.210:1111/swift |
- New charm:
$ juju upgrade-charm ceph-radosgw
$ juju config ceph-radosgw port=2222
unit-keystone-0: 12:37:16 INFO unit.keystone/0.juju-log identity-service:6:
{'admin_url': 'http://10.5.2.210:1111/swift', ...
'swift_admin_url': 'http://10.5.2.210:2222/swift',
'service': 'swift', ...}
$ openstack endpoint list --service swift
| ... | http://10.5.2.210:1111/swift |
- Patched charm:
$ juju upgrade-charm --path ~/charm-ceph-radosgw ceph-radosgw
$ juju config ceph-radosgw port=3333
...
unit-keystone-0: 12:40:46 INFO unit.keystone/0.juju-log identity-service:6:
endpoint: s3
{'admin_url': 'http://10.5.2.210:3333/', ..., 'service': 's3'}
endpoint: swift
{'admin_url': 'http://10.5.2.210:3333/swift', ..., 'service': 'swift'}
$ openstack endpoint list --service swift
| ... | http://10.5.2.210:3333/swift |
$ openstack endpoint list --service s3
| ... | http://10.5.2.210:3333/ |
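The cleanup amounts to explicitly unsetting the legacy (unprefixed) keys on the relation; a sketch, assuming that setting a key to None removes it from the relation data:

```python
LEGACY_KEYS = ('service', 'admin_url', 'internal_url', 'public_url', 'region')


def scrub_legacy_endpoint_keys(relation_data):
    """Blank out unprefixed endpoint keys so keystone uses prefixed ones."""
    for key in LEGACY_KEYS:
        if key in relation_data:
            relation_data[key] = None
    return relation_data
```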
Signed-off-by: Mauricio Faria de Oliveira <mfo@canonical.com>
Closes-bug: #1887722
Change-Id: Iaf3005b6507914004b6c9dcbb77957e0230fb4f4
* charm-helpers sync for classic charms
* build.lock file for reactive charms
* ensure tox.ini is from release-tools
* ensure requirements.txt files are from release-tools
* On reactive charms:
- ensure stable/21.04 branch for charms.openstack
- ensure stable/21.04 branch for charm-helpers
Change-Id: I6c46959aa659454d28880e375e3488058227dca7
When the charm config option `port` is changed,
the previously opened port is not closed.
This leads to leaked open ports (a potential security
issue) and a long Ports field in status output after tests:
Test:
$ juju config ceph-radosgw port=1111
$ juju config ceph-radosgw port=2222
$ juju config ceph-radosgw port=3333
$ juju status ceph-radosgw
...
Unit Workload Agent Machine Public address Ports Message
ceph-radosgw/1* blocked idle 3 10.5.2.210
80/tcp,1111/tcp,2222/tcp,3333/tcp Missing relations: mon
...
$ juju run --unit ceph-radosgw/1 'opened-ports'
80/tcp
1111/tcp
2222/tcp
3333/tcp
Patched:
$ juju run --unit ceph-radosgw/1 'opened-ports'
80/tcp
1111/tcp
1234/tcp
2222/tcp
3333/tcp
33331/tcp
33332/tcp
33334/tcp
$ juju config ceph-radosgw port=33335
$ juju run --unit ceph-radosgw/1 'opened-ports'
33335/tcp
$ juju status ceph-radosgw
...
Unit Workload Agent Machine Public address Ports
Message
ceph-radosgw/1* blocked idle 3 10.5.2.210 33335/tcp
Missing relations: mon
@ unit log
2021-03-24 13:20:51 INFO juju-log Closed port 80 in favor of port 33335
2021-03-24 13:20:51 INFO juju-log Closed port 1111 in favor of port 33335
2021-03-24 13:20:51 INFO juju-log Closed port 1234 in favor of port 33335
2021-03-24 13:20:51 INFO juju-log Closed port 2222 in favor of port 33335
2021-03-24 13:20:52 INFO juju-log Closed port 3333 in favor of port 33335
2021-03-24 13:20:52 INFO juju-log Closed port 33331 in favor of port 33335
2021-03-24 13:20:52 INFO juju-log Closed port 33332 in favor of port 33335
2021-03-24 13:20:52 INFO juju-log Closed port 33334 in favor of port 33335
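The behaviour shown in the unit log can be sketched as a reconciliation between the currently opened ports and the configured one (helper name hypothetical):

```python
def reconcile_ports(opened, port):
    """Return (to_close, to_open) so only the configured port remains.

    Entries in 'opened' follow juju's 'opened-ports' format, e.g. '80/tcp'.
    """
    desired = '{}/tcp'.format(port)
    to_close = [p for p in opened if p != desired]
    to_open = [] if desired in opened else [desired]
    return to_close, to_open
```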
Signed-off-by: Mauricio Faria de Oliveira <mfo@canonical.com>
Closes-bug: #1921131
Change-Id: I5ac4b66137faffee82ae0f1e13718f21274f1f56