Commit Graph

292 Commits

Author SHA1 Message Date
Luciano Lo Giudice 940be7fdfc Implement key rotation on the ceph-radosgw charm
This patchset implements key rotation in the ceph-radosgw charm by
replacing the keyring file if it exists and the ceph-mon relation
reports a new key.

Change-Id: I447b5f827e39118e7dbd430b1c63b3ec4ea3e176
func-test-pr: https://github.com/openstack-charmers/zaza-openstack-tests/pull/1195
2024-04-16 14:44:53 -03:00
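
A minimal sketch of the rotation flow described above, assuming a hypothetical keyring path, section name, and restart callback rather than the charm's actual helpers:

    # Hedged sketch: if the ceph-mon relation reports a key that differs
    # from the one on disk, rewrite the keyring and restart the gateway.
    import os

    KEYRING = '/etc/ceph/keyring.rados.gateway'   # assumed path, for illustration

    def maybe_rotate_key(new_key, restart_services):
        if not os.path.exists(KEYRING):
            return False                          # no keyring yet, nothing to replace
        with open(KEYRING) as f:
            if new_key in f.read():
                return False                      # mon reported the same key
        with open(KEYRING, 'w') as f:
            # Section name is an assumption; real keyrings name the rgw client.
            f.write('[client.rgw]\n\tkey = {}\n'.format(new_key))
        restart_services()                        # gateway picks up the new key
        return True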
Shunde Zhang 6f2a7540e8 Add a config option for virtual hosted bucket
func-test-pr: https://github.com/openstack-charmers/zaza-openstack-tests/pull/1187

Closes-Bug: #1871745
Change-Id: I295baab496d1eb95daaa8073d4119d01b90d0b38
2024-04-05 16:17:08 +11:00
Peter Sabaini 92caaa710b Initial support for the s3 interface
Implement initial support for the s3 interface here:
https://github.com/canonical/charm-relation-interfaces/tree/main/interfaces/s3/v0

Drive-by: fully qualify rename.sh in allowlist_externals

Change-Id: I8a78c41840c529cf2c35f487739c0397e4374f97
2024-02-02 12:09:04 +01:00
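
A hedged sketch of the provider side of that interface: publish S3 connection details on the app databag once a requirer asks for a bucket. The field names are modelled on the linked s3/v0 specification but should be treated as assumptions here, as should the helper names:

    def publish_s3_connection_info(relation, this_app, endpoint, bucket, creds):
        # Assumed databag keys, loosely following the s3/v0 interface spec.
        relation.data[this_app].update({
            'endpoint': endpoint,            # e.g. "https://rgw.example.com:443"
            'bucket': bucket,                # bucket name the requirer asked for
            'access-key': creds['access_key'],
            'secret-key': creds['secret_key'],
        })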
Luciano Lo Giudice 56c95bac5b Sync charm libraries
Change-Id: I3cc5a774f0d4fec2eb7fb719579df6fce24167ef
2023-09-13 14:10:47 -03:00
Alex Kavanagh 40782db848 Ensure get_requests_for_local_unit doesn't fail on incomplete relation
This is a rebuild/make sync for charms to pick up the charmhelpers fix
for inadvertent accesses of ['ca'] in the relation data before it is
available from vault in the certificates relation.  The fix in
charmhelpers is in [1].

[1] https://github.com/juju/charm-helpers/pull/824
Closes-Bug: #2028683

Change-Id: Ie05a9ff536700282dc0c66816b50efee5da62767
2023-08-21 11:20:37 +01:00
Luciano Lo Giudice fd4497f8dc Fix pool names in RadosGW charm
The latest Ceph versions forbid pool names that start with a dot.
Since the RadosGW charm makes extensive use of such pool names, this
patchset fixes that issue.

In addition, the Ceph libraries are synced, since they were outdated.

Change-Id: I50112480bb3669de08ee85a9bf9a594b379e9ec3
2023-08-09 11:36:16 -03:00
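
One plausible shape of such a sanitisation step, offered only as a hedged sketch; the actual patch may rename or remap pools differently:

    # Hedged sketch: normalise legacy dot-prefixed RGW pool names
    # (e.g. ".rgw.root") into forms newer Ceph releases accept.
    def normalized_pool_name(name):
        return name.lstrip('.')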
Corey Bryant 37cb69d7f8 Add 2023.2 Bobcat support
* sync charm-helpers to classic charms
* change openstack-origin/source default to quincy
* add mantic to metadata series
* align testing with bobcat
* add new bobcat bundles
* add bobcat bundles to tests.yaml
* add bobcat tests to osci.yaml
* update build-on and run-on bases
* drop kinetic
* add additional unit test https mocks needed since
  charm-helpers commit 6064a34627882d1c8acf74644c48d05db67ee3b4
* update charmcraft_channel to 2.x/stable

Change-Id: I2d9c41c294668c3bb7fcba253adb8bc0c939d150
2023-08-02 14:10:40 -04:00
Samuel Walladge 541ceec401 Enable rgw trust forwarded https when https proxy
This option is required for server-side encryption to be allowed
if radosgw is behind a reverse proxy,
such as here when certificates are configured and apache2 is running.

ref. https://docs.ceph.com/en/latest/radosgw/encryption/

It is safe to always enable when https is configured in the charm,
because it will be securely behind the reverse proxy in the unit.
This option must not be enabled when https is not configured in the charm,
because this would allow clients to spoof headers.

Closes-Bug: #2021560
Change-Id: I940f9b2f424a3d98936b5f185bf8f87b71091317
2023-05-31 14:16:47 +09:30
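
The ceph.conf knob in question is 'rgw trust forwarded https' (per the linked docs); a sketch of the guard the message describes, with the helper name being an assumption:

    # Sketch: emit 'rgw trust forwarded https' only when the charm has TLS
    # configured, i.e. Apache terminates HTTPS in front of radosgw. Option
    # documented at https://docs.ceph.com/en/latest/radosgw/encryption/
    def rgw_trust_ctxt(https_configured):
        if https_configured:
            return {'rgw trust forwarded https': 'true'}
        # Omit the option entirely otherwise, so clients cannot spoof
        # X-Forwarded-* headers.
        return {}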
Zuul 4484b0f0ed Merge "Removes stderr pipe from _check_output" 2023-05-26 17:36:22 +00:00
Felipe Reyes 8e9d25f72e Charm-helpers sync
charm-helpers sync to pick up Antelope UCA support

Change-Id: Ie649be98ecd338b6441a59a0ad32aa696fc8ca99
2023-04-06 15:57:18 -04:00
Utkarsh Bhatt b76b1df0dd Removes stderr pipe from _check_output
Change-Id: Ia6e838d607fecb9b391ebc450d611af1865b2eab
2023-02-01 17:08:03 +05:30
utkarshbhatthere 367a2aedcb Adds primary/secondary multisite relation
A new relation with primary/secondary nomenclature is added and the
old master/slave relation is marked as *Deprecated*. In the future,
the master/slave relation will be removed completely.

Change-Id: I9cda48b74a20aaa9a41baedc79332bfaf13951d3
func-test-pr: https://github.com/openstack-charmers/zaza-openstack-tests/pull/926
2022-09-23 18:17:42 +00:00
Zuul c1c9531a7f Merge "Adds support for scaling down multisite rgw system" 2022-09-12 18:48:30 +00:00
utkarshbhatthere cb70cf4c5f Adds support for scaling down multisite rgw system
Adds relation-departed hooks to cleanly remove participant sites from
the multisite system. Replication between the zones is stopped and
both zones split up to continue as separate master zones.

Change-Id: I420f7933db55f3004f752949b5c09b1b79774f64
func-test-pr: https://github.com/openstack-charmers/zaza-openstack-tests/pull/863
2022-09-08 08:09:21 +00:00
Zuul 349f90828e Merge "Add Kinetic and Zed support" 2022-08-30 03:56:38 +00:00
Corey Bryant 51f59879d3 Add Kinetic and Zed support
* sync charm-helpers to classic charms
* change openstack-origin/source default to zed
* align testing with zed
* add new zed bundles
* add zed bundles to tests.yaml
* add zed tests to osci.yaml and .zuul.yaml
* update build-on and run-on bases
* add bindep.txt for py310
* sync tox.ini and requirements.txt for ruamel
* use charmcraft_channel 2.0/stable
* drop reactive plugin overrides
* move interface/layer env vars to charmcraft.yaml

Change-Id: Ieb1ef7b7ab76775f5769621a6a7cbcfb18c40b7f
2022-08-26 18:40:29 +00:00
utkarshbhatthere e97e3607e2 Adds existence verification for config values
Multisite config values (realm, zonegroup, zone) are written to
ceph.conf as the defaults without verifying their existence, which
causes failures for commands that use the default values.

Closes-Bug: #1987127
Change-Id: I0ab4df34f0000339227e5d5b80352355ea7bd36e
2022-08-24 18:35:44 +05:30
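
A hedged sketch of that verification step; the output shape of 'radosgw-admin <entity> list' is assumed to be a JSON object with a plural key per entity and should be checked against the installed Ceph release:

    # Sketch: verify that a configured realm/zonegroup/zone actually exists
    # before writing it into ceph.conf as the default.
    import json
    import subprocess

    def multisite_entity_exists(entity, name):
        # entity is one of 'realm', 'zonegroup', 'zone'
        out = subprocess.check_output(
            ['radosgw-admin', entity, 'list'], text=True)
        return name in json.loads(out).get(entity + 's', [])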
utkarshbhatthere 44fee84d4d Adds support for migration to multi-site system.
1.) Currently, multi-site can only be configured when the system is
being deployed from scratch. Migration works by renaming the existing
Zone/Zonegroups (Z/ZG) to the Juju config values on the primary site
before the secondary site pulls the realm data, and then renaming and
configuring the secondary Zone accordingly.

During migration:
2.) If multiple Z/ZG not matching the config values are present at
the primary site, the leader unit will block and prompt the use of
'force-enable-multisite', which renames and configures the selected
Z/ZG according to the multisite config values.

3.) If the site being added as a secondary already contains buckets,
the unit will block and prompt the operator to purge all such buckets
before proceeding.

Closes-Bug: #1959837
Change-Id: I01a4c1c4551c797f0a32951dfbde8a1a4126c2d6
func-test-pr: https://github.com/openstack-charmers/zaza-openstack-tests/pull/840
2022-08-07 13:32:37 +05:30
Luciano Lo Giudice 5c4cab3f82 Add the 'zonegroup' and 'realm' keys to ceph.conf file
This patchset adds these 2 additional keys to the ceph.conf file,
which are used in multisite configurations when present.

Change-Id: I51ca46bbb3479cb73ec4d9966208ed794f0ed774
Closes-Bug: #1975857
2022-05-31 18:08:13 -03:00
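
A sketch of the context addition the message describes, assuming the keys mirror the ceph.conf option names rgw_zonegroup and rgw_realm and that the charm config options are named 'zonegroup' and 'realm':

    # Sketch: add zonegroup/realm to the template context only when the
    # corresponding config options are set, so they are rendered into
    # ceph.conf for multisite deployments.
    def multisite_keys_ctxt(config):
        ctxt = {}
        for key in ('zonegroup', 'realm'):
            if config.get(key):
                ctxt['rgw_' + key] = config[key]
        return ctxt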
Ethan Myers 2bda1f68a6 Add a config option for relaxed s3 bucket names.
Closes-Bug: #1926498
Change-Id: I4b329f3327a0e91ccd9f65841cc5d62736918a85
2022-05-19 15:02:03 +00:00
Chris MacNaughton b069b3aa4c Multisite replication should use public, rather than internal, networks
Closes-Bug: #1960520
Change-Id: Ie2954a9a59acbc384c18c901e2d324ee003d7108
2022-04-18 07:13:35 -07:00
Chris MacNaughton 1f4dbd3a5d Updates for jammy enablement
- charmcraft: build-on 20.04 -> run-on 20.04/22.04 [*archs]
- Refresh tox targets
- Drop impish bundles and OSCI testing
- Add jammy metadata
- Default source is yoga
- Charmhelpers and charms.ceph sync

Change-Id: I39f091db8ef8f18c0a40d4e46d54dfc964c03d70
2022-04-08 10:23:48 +01:00
Zuul 25885fdd0e Merge "Ceph Quincy dropped support for the Civetweb http frontend." 2022-03-10 16:06:31 +00:00
Chris MacNaughton b00783c14c Ceph Quincy dropped support for the Civetweb http frontend.
Change-Id: I2428cd34110fbc8f7775eb79fe70c34a4eafe3eb
2022-02-24 11:41:11 +01:00
Cornellius Metto 31a4584169 Enable HAProxy HTTP Health Checks
Ceph radosgw supports [0] the swift health check endpoint
"/swift/healthcheck". This change adds the haproxy
configuration [1] necessary to take the response of "GET
/swift/healthcheck" into account when determining the health
of a radosgw service.

For testing, I verified that:
- HAProxy starts and responds to requests normally with this
  configuration.
- Servers with status != 2xx or 3xx are removed from the
  backend.
- Servers that take too long to respond are also removed
  from the backend. The default timeout value is 2s.

[0] https://tracker.ceph.com/issues/11682
[1] https://www.haproxy.com/documentation/hapee/2-0r1/onepage/#4.2-option%20httpchk

Closes-Bug: 1946280
Change-Id: I82634255ca3423fec3fc15c1e714dcb31db5da7a
2022-02-18 12:50:54 +03:00
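
The relevant HAProxy directives (per the linked documentation), expressed here as the backend options a charm-side HAProxy context might emit; the plumbing around them is an assumption:

    # Sketch: switch HAProxy from a plain TCP check to an HTTP check
    # against the Swift healthcheck endpoint; only 2xx/3xx responses keep
    # a server in the backend.
    RGW_HEALTH_CHECK_OPTIONS = [
        'option httpchk GET /swift/healthcheck',
        'http-check expect rstatus ^[23]',
    ]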
Cornellius Metto 158c42b4a9 Charmhelpers Sync: https health checks for HAProxy
Change-Id: I4848168b5a45c3430dec58f786d4453e40539361
2022-02-07 13:04:35 +03:00
Zuul 6dd6144f66 Merge "Fix create_system_user so it returns creds" 2021-12-13 14:35:59 +00:00
Liam Young 083a0e6722 Fix create_system_user so it returns creds
Fix the create_system_user method so it returns the access_key
and secret when a user is created.

This patch also includes the following changes:

* Improve logging of multisite methods to help with debugging issues.
* Fix multisite relations in bundles.

Func-Test-Pr: https://github.com/openstack-charmers/zaza-openstack-tests/pull/667
Closes-Bug: #1950329
Change-Id: I0528fe7f4a89c69f2790a0e472f6f43e23c2de19
2021-12-02 17:37:55 -03:00
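
A hedged sketch of returning the credentials by parsing radosgw-admin's JSON output; the charm's real helper and logging differ, but the field names match the user-create output:

    # Sketch: create a system user and hand back its credentials.
    import json
    import subprocess

    def create_system_user(uid):
        out = subprocess.check_output(
            ['radosgw-admin', 'user', 'create',
             '--uid={}'.format(uid),
             '--display-name={}'.format(uid),
             '--system'], text=True)
        keys = json.loads(out)['keys'][0]     # first key pair for the new user
        return keys['access_key'], keys['secret_key']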
Corey Bryant 3a27c7090e Add yoga bundles and release-tool syncs
* charm-helpers sync for classic charms
* sync from release-tools
* switch to release-specific zosci functional tests
* run focal-ussuri as smoke tests
* remove trusty, xenial, and groovy metadata/tests
* drop py35 and add py39
* charms.ceph sync

Change-Id: I8b0ac822cdf37d70ac39f1b115f95a448afb624d
2021-11-22 15:22:23 -05:00
Alex Kavanagh d15ac894a9 Add xena bundles
- add non-voting focal-xena bundle
- add non-voting impish-xena bundle
- rebuild to pick up charm-helpers changes
- update tox/pip.sh to ensure setuptools<50.0.0
- Remove redundant (and failing) IdentityContext tests
- Remove EOL groovy-* gate tests.

Change-Id: I32c8195ff76164de565e6af7c329645be40769f1
Co-authored-by: Aurelien Lourot <aurelien.lourot@canonical.com>
2021-10-05 19:15:09 +01:00
Zuul cebcc73380 Merge "Restart radosgw services on upgrade" 2021-09-30 11:06:19 +00:00
Zuul 13402ca8f8 Merge "Update catalog entry on addition of certificates" 2021-09-28 23:41:22 +00:00
Zuul 2311e64e24 Merge "Add radosgw-user relation" 2021-09-06 15:24:30 +00:00
Liam Young fa1e41e2f8 Add radosgw-user relation
Add a radosgw-user relation to allow charms to request a user. The
requesting charm should supply the 'system-role' key in the app
relation data bag to indicate whether the requested user should
be a system user. This charm creates the user if it does not exist
or looks up the user's credentials if it does. The username and
credentials are then passed back to the requestor via the app
relation data bag. The unit's radosgw URL and daemon ID are also
passed back, this time using the unit relation data bag.

Change-Id: Ieff1943b02f490559ccd245f60b744fb76a5d832
2021-09-06 13:11:31 +00:00
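
A hedged sketch of the provider side of that exchange, using ops-style databag accessors; 'system-role' is named in the message, while the reply keys and helpers are illustrative assumptions:

    def handle_radosgw_user_request(relation, this_app, this_unit,
                                    get_or_create_user, rgw_url, daemon_id):
        # Requirer signals on the app databag whether a system user is wanted.
        wants_system = relation.data[relation.app].get('system-role') == 'true'
        username, access_key, secret = get_or_create_user(system=wants_system)
        relation.data[this_app].update({      # credentials on the app databag
            'uid': username,
            'access-key': access_key,
            'secret-key': secret,
        })
        relation.data[this_unit].update({     # per-unit details on the unit databag
            'url': rgw_url,
            'daemon-id': daemon_id,
        })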
Chris MacNaughton d77a751287 The MonContext can be complete when not all mons have provided an fsid
When the MonContext becomes incomplete during regular operation, for
example due to the replacement of an existing mon unit after a
failure, Ceph RadosGW should be able to continue while the new mon
bootstraps itself into the cluster. By ensuring that the context can
complete with one of the mons not reporting an FSID, the remaining
members of the monitor cluster can keep RadosGW functioning.

Closes-Bug: #1938919
Change-Id: I293224f46d06cc427b2d3c8f4ae65366ed06909e
2021-08-05 12:03:02 -05:00
Billy Olsen 1d41112ce2 Restart radosgw services on upgrade
When radosgw packages are upgraded, the radosgw service needs to be
restarted by the charm. Check whether packages were installed on the
upgrade path and, if so, restart the radosgw service.

Change-Id: I61055ea4605a9a7c490c18f611d0eb583c617ce3
Closes-Bug: #1906707
2021-07-26 11:38:39 -07:00
David Ames 1874911cd1 Update catalog entry on addition of certificates
Guarantee that the object-store URL is updated when the certificates
relation is completed.

Sync release-tools tox and requirements

Change-Id: I4ca967f2c5c5eedfc56969785fcf23e4063d2a78
2021-07-22 14:06:29 -07:00
James Page c634aba6fd Enable support for beast frontend
Introduce support for the beast web frontend for the Ceph RADOS
Gateway, which brings improvements in speed and scalability.

The default behaviour changes: for Octopus and later (aside from some
unsupported architectures) beast is enabled by default; for older
releases civetweb is still used.

This may be overridden using the 'http-frontend' configuration
option, which accepts either 'beast' or 'civetweb' as valid
values.  'beast' is only supported with Ceph Mimic or later.

Closes-Bug: 1865396
Change-Id: Ib73e58e21219eca611cd4293da69bf80040f5803
2021-07-07 12:44:53 +00:00
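
A sketch of the default-selection logic the message describes; the release-comparison helper and the unsupported-architecture list are assumptions for illustration:

    UNSUPPORTED_BEAST_ARCHES = ('s390x',)     # illustrative assumption

    def http_frontend(configured, ceph_release, arch, release_at_least):
        if configured:
            return configured                 # operator override wins
        if (release_at_least(ceph_release, 'octopus')
                and arch not in UNSUPPORTED_BEAST_ARCHES):
            return 'beast'
        return 'civetweb'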
Zuul 0f1b77b7d5 Merge "set rgw keystone revocation interval to 0" 2021-06-15 03:21:15 +00:00
Zuul d99b2d6ba4 Merge "Enable object versioning for a container" 2021-06-11 14:41:29 +00:00
Zuul 56e517aae2 Merge "c-h sync - restore proxy env vars for add-apt-repository" 2021-06-04 18:26:13 +00:00
Hemanth Nakkina d9cc3f3bfb set rgw keystone revocation interval to 0
Ceph RGW checks the revocation list every 600 seconds. This is not
required for non-PKI tokens, and PKI tokens were removed in the
OpenStack Pike release; the checks only produce unnecessary logs in
ceph and keystone.

Set the rgw keystone revocation interval to 0 in the ceph conf. This
parameter was also removed upstream as of Ceph Octopus, so ensure it
is not added from the Octopus release onwards.

Closes-Bug: #1758982
Change-Id: Iaeb10dc25bb52df9dd3746ecf4fe5859d4efd459
2021-05-21 12:35:18 +05:30
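
A sketch of the resulting conditional; the release-comparison helper is an assumption:

    # Only emit the revocation-interval override on releases that still
    # recognise the option (it was removed in Octopus).
    def revocation_ctxt(ceph_release, release_before):
        if release_before(ceph_release, 'octopus'):
            return {'rgw keystone revocation interval': 0}
        return {}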
Corey Bryant 147589b87b c-h sync - restore proxy env vars for add-apt-repository
Change-Id: I083014f4886f2c7df643ca67a324bdc2f476cf81
2021-05-13 08:45:52 -04:00
James Page 15d7a9d827 Add otp pool to broker request
Ceph RADOS gateway >= Mimic has an additional metadata pool (otp).

Add this to the broker request to ensure that it's created correctly
by the ceph-mon application rather than being auto-created by the
radosgw application.

Change-Id: I5e9b4e449bd1bc300225d223329bb62f3a381705
Closes-Bug: 1921453
2021-04-13 11:06:43 +01:00
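
A hedged sketch of adding that pool to the broker request via charm-helpers' CephBrokerRq; the pool-name prefix and weight are illustrative, not the charm's actual values:

    from charmhelpers.contrib.storage.linux.ceph import CephBrokerRq

    def otp_pool_request(zone='default', replicas=3):
        # Ask ceph-mon to create the otp metadata pool (Mimic+) instead of
        # letting radosgw auto-create it.
        rq = CephBrokerRq()
        rq.add_op_create_replicated_pool(
            name='{}.rgw.otp'.format(zone),
            replica_count=replicas,
            weight=0.10,                      # illustrative weight
        )
        return rq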
Mauricio Faria de Oliveira 358121d74c Ensure identical types for comparing port number/string
Fix an oversight and regression in commit c97fced794
("Close previously opened ports on port config change").

The comparison between an integer and a string (returned by .split())
is always unequal, and thus when upgrading the charm, port 80 is
closed.

Make sure the types are set to str. Right now it should only be
needed for port and not opened_port_number, but let's future-proof
both sides of the comparison.

(Update: using str() vs int() as apparently int() might
fail but str() should always work no matter what it got;
thanks, Alex Kavanagh!)

Before:

    $ juju run --unit ceph-radosgw/0 opened-ports
    80/tcp

    $ juju upgrade-charm --path . ceph-radosgw

    $ juju run --unit ceph-radosgw/0 opened-ports
    $

    @ log:
    2021-04-05 15:08:04 INFO juju-log Closed port 80 in favor of port 80

    $ python3 -q
    >>> x=80
    >>> y='80/tcp'
    >>> z=y.split('/')[0]
    >>> z
    '80'
    >>> x
    80
    >>> x != z
    True
    >>> x=str(x)
    >>> x != z
    False

After:

    $ juju run --unit ceph-radosgw/1 opened-ports
    80/tcp

    $ juju upgrade-charm --path . ceph-radosgw

    $ juju run --unit ceph-radosgw/1 opened-ports
    80/tcp

Signed-off-by: Mauricio Faria de Oliveira <mfo@canonical.com>
Change-Id: I2bcdfec1459ea45d8f57b850b7fd935c360cc7c1
2021-04-12 11:57:20 -03:00
Alex Kavanagh 73c096998b 21.04 sync - add 'hirsute' in UBUNTU_RELEASES
The 'hirsute' key in c-h/core/host_factory/ubuntu.py:
UBUNTU_RELEASES had been missed out, and is needed for
hirsute support in many of the charms.  This sync is to
add just that key.  See also [1]

Note that this sync is only for classic charms.

[1] https://github.com/juju/charm-helpers/pull/598

Change-Id: I8544b62b2c7e5f38488f564af57dbe815638bf32
2021-04-11 16:51:23 +01:00
Zuul ff894ba97a Merge "21.04 libraries freeze for charms on master branch" 2021-04-06 19:11:30 +00:00
Mauricio Faria de Oliveira 44936bf9cd Remove endpoint settings without service prefix on config-changed
Older charms pass endpoint data with the legacy method, without a
service prefix (e.g., `admin_url` instead of `swift_admin_url`).

After a charm upgrade the endpoint data is set with the new method,
with a service prefix; however, the legacy endpoint data is still
there as it has not been removed.

The keystone charm checks first for the legacy method, and if it's
found, the new method is ignored and any endpoint changes made on the
new charm (e.g., port) are not applied.

So make sure to remove the legacy endpoint settings from the
relation, so the keystone charm can pick up e.g. port changes, and
even set up the s3 endpoint after charm upgrades between the legacy
method and the new method.

Simplified test case:

- Old charm:

    $ juju deploy cs:ceph-radosgw-285 # + keystone/percona-cluster

    $ openstack endpoint list --service swift
    | ... | http://10.5.2.210:80/swift    |

    $ juju config ceph-radosgw port=1111

    $ openstack endpoint list --service swift
    | ... | http://10.5.2.210:1111/swift    |

- New charm:

    $ juju upgrade-charm ceph-radosgw

    $ juju config ceph-radosgw port=2222

    unit-keystone-0: 12:37:16 INFO unit.keystone/0.juju-log identity-service:6:
     {'admin_url': 'http://10.5.2.210:1111/swift', ...
      'swift_admin_url': 'http://10.5.2.210:2222/swift',
      'service': 'swift', ...}

    $ openstack endpoint list --service swift
    | ... | http://10.5.2.210:1111/swift    |

- Patched charm:

    $ juju upgrade-charm --path ~/charm-ceph-radosgw ceph-radosgw

    $ juju config ceph-radosgw port=3333
    ...
    unit-keystone-0: 12:40:46 INFO unit.keystone/0.juju-log identity-service:6:
     endpoint: s3
     {'admin_url': 'http://10.5.2.210:3333/', ..., 'service': 's3'}
     endpoint: swift
     {'admin_url': 'http://10.5.2.210:3333/swift', ..., 'service': 'swift'}

    $ openstack endpoint list --service swift
    | ... | http://10.5.2.210:3333/swift    |

    $ openstack endpoint list --service s3
    | ... | http://10.5.2.210:3333/ |

Signed-off-by: Mauricio Faria de Oliveira <mfo@canonical.com>
Closes-bug: #1887722
Change-Id: Iaf3005b6507914004b6c9dcbb77957e0230fb4f4
2021-04-05 09:13:35 -03:00
Alex Kavanagh 916fbd4474 21.04 libraries freeze for charms on master branch
* charm-helpers sync for classic charms
* build.lock file for reactive charms
* ensure tox.ini is from release-tools
* ensure requirements.txt files are from release-tools
* On reactive charms:
  - ensure stable/21.04 branch for charms.openstack
  - ensure stable/21.04 branch for charm-helpers

Change-Id: I6c46959aa659454d28880e375e3488058227dca7
2021-04-03 20:22:37 +01:00
Mauricio Faria de Oliveira c97fced794 Close previously opened ports on port config change
When the charm config option `port` is changed,
the previously opened port is not closed.

This leads to leaks of open ports (a potential security
issue) and a long Ports field in status after tests:

Test:

    $ juju config ceph-radosgw port=1111
    $ juju config ceph-radosgw port=2222
    $ juju config ceph-radosgw port=3333

    $ juju status ceph-radosgw
    ...
    Unit Workload Agent Machine Public address Ports Message
    ceph-radosgw/1* blocked idle 3 10.5.2.210
    80/tcp,1111/tcp,2222/tcp,3333/tcp Missing relations: mon
    ...

    $ juju run --unit ceph-radosgw/1 'opened-ports'
    80/tcp
    1111/tcp
    2222/tcp
    3333/tcp

Patched:

    $ juju run --unit ceph-radosgw/1 'opened-ports'
    80/tcp
    1111/tcp
    1234/tcp
    2222/tcp
    3333/tcp
    33331/tcp
    33332/tcp
    33334/tcp

    $ juju config ceph-radosgw port=33335

    $ juju run --unit ceph-radosgw/1 'opened-ports'
    33335/tcp

    $ juju status ceph-radosgw
    ...
    Unit             Workload  Agent  Machine  Public address  Ports
    Message
    ceph-radosgw/1*  blocked   idle   3        10.5.2.210      33335/tcp
    Missing relations: mon

    @ unit log
    2021-03-24 13:20:51 INFO juju-log Closed port 80 in favor of port 33335
    2021-03-24 13:20:51 INFO juju-log Closed port 1111 in favor of port 33335
    2021-03-24 13:20:51 INFO juju-log Closed port 1234 in favor of port 33335
    2021-03-24 13:20:51 INFO juju-log Closed port 2222 in favor of port 33335
    2021-03-24 13:20:52 INFO juju-log Closed port 3333 in favor of port 33335
    2021-03-24 13:20:52 INFO juju-log Closed port 33331 in favor of port 33335
    2021-03-24 13:20:52 INFO juju-log Closed port 33332 in favor of port 33335
    2021-03-24 13:20:52 INFO juju-log Closed port 33334 in favor of port 33335

Signed-off-by: Mauricio Faria de Oliveira <mfo@canonical.com>
Closes-bug: #1921131
Change-Id: I5ac4b66137faffee82ae0f1e13718f21274f1f56
2021-03-24 12:06:21 -03:00