Add support for vault key management with vaultlocker

vaultlocker provides support for storage of encryption keys
for LUKS-based dm-crypt devices in HashiCorp Vault.

Add support for this key management approach for Ceph
Luminous or later. Applications will block until vault
has been initialized and unsealed, at which point OSD devices
will be prepared and booted into the Ceph cluster.

The dm-crypt layer is placed between the block device
partition and the top-level LVM PV used to create the VGs
and LVs that support OSD operation.

Vaultlocker enables a systemd unit for each encrypted
block device to perform unlocking when the unit reboots;
ceph-volume will then detect the new VGs/LVs and boot the
ceph-osd processes as required.

Note that vault/vaultlocker usage is only supported with
ceph-volume, which was introduced into the Ubuntu packages
as of the 12.2.4 point release for Luminous. If vault is
configured as the key manager in deployments using older
versions, a hook error will be raised with a blocked
status message to this effect.
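
The version gate described in the note above is implemented by the new
use_vaultlocker() helper in hooks/ceph_hooks.py; a condensed sketch of the
logic (the real helper compares against ceph.VAULT_KEY_MANAGER rather than
the literal 'vault' used here for brevity)::

    from charmhelpers.core.hookenv import config, status_set
    from charmhelpers.core.host import cmp_pkgrevno

    def use_vaultlocker():
        """Return True when vault should manage dm-crypt keys (sketch)."""
        if (config('osd-encrypt') and
                config('osd-encrypt-keymanager') == 'vault'):
            if cmp_pkgrevno('ceph', '12.2.4') < 0:
                # ceph-volume (required for vaultlocker) arrived in 12.2.4
                msg = 'vault usage only supported with ceph >= 12.2.4'
                status_set('blocked', msg)
                raise ValueError(msg)
            return True
        return False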

Change-Id: I713492d1fd8d371439e96f9eae824b4fe7260e47
Depends-On: If73e7bd518a7bc60c2db08e2aa3a93dcfe79c0dd
Depends-On: https://github.com/juju/charm-helpers/pull/159
James Page 2018-04-09 12:32:18 +01:00
parent 901b8731d4
commit 2069e620b7
25 changed files with 3157 additions and 22 deletions

.gitignore

@ -9,3 +9,5 @@ bin
.unit-state.db
.idea
func-results.json
*__pycache__
.settings


@ -10,32 +10,32 @@ available in a Ceph cluster.
Usage
=====
The charm also supports specification of the storage devices to use in the ceph
cluster::
osd-devices:
A list of devices that the charm will attempt to detect, initialise and
activate as ceph storage.
This can be a superset of the actual storage devices presented to
each service unit and can be changed post ceph-osd deployment using
`juju set`.
For example::
ceph-osd:
osd-devices: /dev/vdb /dev/vdc /dev/vdd /dev/vde
Boot things up by using::
juju deploy -n 3 --config ceph.yaml ceph
You can then deploy this charm by simply doing::
juju deploy -n 10 --config ceph.yaml ceph-osd
juju add-relation ceph-osd ceph
Once the ceph charm has bootstrapped the cluster, it will notify the ceph-osd
charm which will scan for the configured storage devices and add them to the
pool of available storage.
@ -80,9 +80,39 @@ The AppArmor profile(s) which are generated by the charm should NOT yet be used
- With Bluestore enabled.
Block Device Encryption
=======================
The ceph-osd charm supports encryption of underlying block devices supporting OSDs.
To use the 'native' key management approach (where dm-crypt keys are stored in the
ceph-mon cluster), simply set the 'osd-encrypt' configuration option::
ceph-osd:
options:
osd-encrypt: True
**NOTE:** This is supported for Ceph Jewel or later.
Alternatively, encryption keys can be stored in Vault; this requires deployment of
the vault charm (and associated initialization of vault - see the Vault charm for
details) and configuration of the 'osd-encrypt' and 'osd-encrypt-keymanager'
options::
ceph-osd:
options:
osd-encrypt: True
osd-encrypt-keymanager: vault
**NOTE:** This option is only supported with Ceph Luminous or later.
**NOTE:** Changing these options post deployment will only take effect for any
new block devices added to the ceph-osd application; existing OSD devices will
not be encrypted.
Contact Information
===================
Author: James Page <james.page@ubuntu.com>
Report bugs at: http://bugs.launchpad.net/charms/+source/ceph-osd/+filebug
Location: http://jujucharms.com/charms/ceph-osd
Report bugs at: http://bugs.launchpad.net/charm-ceph-osd/+filebug
Location: http://jujucharms.com/ceph-osd


@ -10,7 +10,7 @@ include:
- cluster
- contrib.python.packages
- contrib.storage.linux
- contrib.openstack.alternatives
- contrib.openstack
- contrib.network.ip
- contrib.openstack:
- alternatives


@ -148,6 +148,17 @@ options:
.
Specifying this option on a running Ceph OSD node will have no effect
until new disks are added, at which point new disks will be encrypted.
osd-encrypt-keymanager:
type: string
default: ceph
description: |
Keymanager to use for storage of dm-crypt keys used for OSD devices;
by default 'ceph' itself will be used for storage of keys, making use
of the key/value storage provided by the ceph-mon cluster.
.
Alternatively 'vault' may be used for storage of dm-crypt keys. Both
approaches ensure that keys are never written to the local filesystem.
This also requires a relation to the vault charm.
crush-initial-weight:
type: float
default:
@ -248,7 +259,7 @@ options:
Availability Zone instead of specifically by host.
availability_zone:
type: string
default:
default:
description: |
Custom availability zone to provide to Ceph for the OSD placement
max-sectors-kb:


@ -13,6 +13,9 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import base64
import json
import glob
import os
import shutil
@ -34,6 +37,7 @@ from charmhelpers.core.hookenv import (
relation_ids,
related_units,
relation_get,
relation_set,
Hooks,
UnregisteredHookError,
service_name,
@ -49,7 +53,8 @@ from charmhelpers.core.host import (
service_reload,
service_restart,
add_to_updatedb_prunepath,
restart_on_change
restart_on_change,
write_file,
)
from charmhelpers.fetch import (
add_source,
@ -59,7 +64,9 @@ from charmhelpers.fetch import (
get_upstream_version,
)
from charmhelpers.core.sysctl import create as create_sysctl
from charmhelpers.contrib.openstack.context import AppArmorContext
from charmhelpers.contrib.openstack.context import (
AppArmorContext,
)
from utils import (
get_host_ip,
get_networks,
@ -74,12 +81,15 @@ from charmhelpers.contrib.openstack.alternatives import install_alternative
from charmhelpers.contrib.network.ip import (
get_ipv6_addr,
format_ipv6_addr,
get_relation_ip,
)
from charmhelpers.contrib.storage.linux.ceph import (
CephConfContext)
from charmhelpers.contrib.charmsupport import nrpe
from charmhelpers.contrib.hardening.harden import harden
import charmhelpers.contrib.openstack.vaultlocker as vaultlocker
hooks = Hooks()
STORAGE_MOUNT_PATH = '/var/lib/ceph'
@ -170,6 +180,23 @@ class CephOsdAppArmorContext(AppArmorContext):
return self.ctxt
def use_vaultlocker():
"""Determine whether vaultlocker should be used for OSD encryption
:returns: whether vaultlocker should be used for key management
:rtype: bool
:raises: ValueError if vaultlocker is enabled but ceph < 12.2.4"""
if (config('osd-encrypt') and
config('osd-encrypt-keymanager') == ceph.VAULT_KEY_MANAGER):
if cmp_pkgrevno('ceph', '12.2.4') < 0:
msg = ('vault usage only supported with ceph >= 12.2.4')
status_set('blocked', msg)
raise ValueError(msg)
else:
return True
return False
def install_apparmor_profile():
"""
Install ceph apparmor profiles and configure
@ -365,6 +392,14 @@ def check_overlap(journaldevs, datadevs):
@hooks.hook('config-changed')
@harden()
def config_changed():
# Determine whether vaultlocker is required and install
if use_vaultlocker():
installed = len(filter_installed_packages(['vaultlocker'])) == 0
if not installed:
add_source('ppa:openstack-charmers/vaultlocker')
apt_update(fatal=True)
apt_install('vaultlocker', fatal=True)
# Check if an upgrade was requested
check_for_upgrade()
@ -390,6 +425,18 @@ def config_changed():
@hooks.hook('storage.real')
def prepare_disks_and_activate():
# NOTE: vault/vaultlocker preflight check
vault_kv = vaultlocker.VaultKVContext(vaultlocker.VAULTLOCKER_BACKEND)
context = vault_kv()
if use_vaultlocker() and not vault_kv.complete:
log('Deferring OSD preparation as vault not ready',
level=DEBUG)
return
elif use_vaultlocker() and vault_kv.complete:
log('Vault ready, writing vaultlocker configuration',
level=DEBUG)
vaultlocker.write_vaultlocker_conf(context)
osd_journal = get_journal_devices()
check_overlap(osd_journal, set(get_devices()))
log("got journal devs: {}".format(osd_journal), level=DEBUG)
@ -407,7 +454,8 @@ def prepare_disks_and_activate():
osd_journal, config('osd-reformat'),
config('ignore-device-errors'),
config('osd-encrypt'),
config('bluestore'))
config('bluestore'),
config('osd-encrypt-keymanager'))
# Make it fast!
if config('autotune'):
ceph.tune_dev(dev)
@ -536,6 +584,26 @@ def update_nrpe_config():
nrpe_setup.write()
@hooks.hook('secrets-storage-relation-joined')
def secrets_storage_joined(relation_id=None):
relation_set(relation_id=relation_id,
secret_backend='charm-vaultlocker',
isolated=True,
access_address=get_relation_ip('secrets-storage'),
hostname=socket.gethostname())
@hooks.hook('secrets-storage-relation-changed')
def secrets_storage_changed():
vault_ca = relation_get('vault_ca')
if vault_ca:
vault_ca = base64.decodestring(json.loads(vault_ca).encode())
write_file('/usr/local/share/ca-certificates/vault-ca.crt',
vault_ca, perms=0o644)
subprocess.check_call(['update-ca-certificates', '--fresh'])
prepare_disks_and_activate()
VERSION_PACKAGE = 'ceph-common'
@ -559,6 +627,15 @@ def assess_status():
status_set('waiting', 'Incomplete relation: monitor')
return
# Check for vault
if use_vaultlocker():
if not relation_ids('secrets-storage'):
status_set('blocked', 'Missing relation: vault')
return
if not vaultlocker.vault_relation_complete():
status_set('waiting', 'Incomplete relation: vault')
return
# Check for OSD device creation parity i.e. at least some devices
# must have been presented and used for this charm to be operational
running_osds = ceph.get_running_osds()


@ -0,0 +1,13 @@
# Copyright 2014-2015 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


@ -0,0 +1,354 @@
# Copyright 2014-2015 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import os
import re
import sys
import six
from collections import OrderedDict
from charmhelpers.contrib.amulet.deployment import (
AmuletDeployment
)
from charmhelpers.contrib.openstack.amulet.utils import (
OPENSTACK_RELEASES_PAIRS
)
DEBUG = logging.DEBUG
ERROR = logging.ERROR
class OpenStackAmuletDeployment(AmuletDeployment):
"""OpenStack amulet deployment.
This class inherits from AmuletDeployment and has additional support
that is specifically for use by OpenStack charms.
"""
def __init__(self, series=None, openstack=None, source=None,
stable=True, log_level=DEBUG):
"""Initialize the deployment environment."""
super(OpenStackAmuletDeployment, self).__init__(series)
self.log = self.get_logger(level=log_level)
self.log.info('OpenStackAmuletDeployment: init')
self.openstack = openstack
self.source = source
self.stable = stable
def get_logger(self, name="deployment-logger", level=logging.DEBUG):
"""Get a logger object that will log to stdout."""
log = logging
logger = log.getLogger(name)
fmt = log.Formatter("%(asctime)s %(funcName)s "
"%(levelname)s: %(message)s")
handler = log.StreamHandler(stream=sys.stdout)
handler.setLevel(level)
handler.setFormatter(fmt)
logger.addHandler(handler)
logger.setLevel(level)
return logger
def _determine_branch_locations(self, other_services):
"""Determine the branch locations for the other services.
Determine if the local branch being tested is derived from its
stable or next (dev) branch, and based on this, use the corresponding
stable or next branches for the other_services."""
self.log.info('OpenStackAmuletDeployment: determine branch locations')
# Charms outside the ~openstack-charmers
base_charms = {
'mysql': ['trusty'],
'mongodb': ['trusty'],
'nrpe': ['trusty', 'xenial'],
}
for svc in other_services:
# If a location has been explicitly set, use it
if svc.get('location'):
continue
if svc['name'] in base_charms:
# NOTE: not all charms have support for all series we
# want/need to test against, so fix to most recent
# that each base charm supports
target_series = self.series
if self.series not in base_charms[svc['name']]:
target_series = base_charms[svc['name']][-1]
svc['location'] = 'cs:{}/{}'.format(target_series,
svc['name'])
elif self.stable:
svc['location'] = 'cs:{}/{}'.format(self.series,
svc['name'])
else:
svc['location'] = 'cs:~openstack-charmers-next/{}/{}'.format(
self.series,
svc['name']
)
return other_services
def _add_services(self, this_service, other_services, use_source=None,
no_origin=None):
"""Add services to the deployment and optionally set
openstack-origin/source.
:param this_service dict: Service dictionary describing the service
whose amulet tests are being run
:param other_services dict: List of service dictionaries describing
the services needed to support the target
service
:param use_source list: List of services which use the 'source' config
option rather than 'openstack-origin'
:param no_origin list: List of services which do not support setting
the Cloud Archive.
Service Dict:
{
'name': str charm-name,
'units': int number of units,
'constraints': dict of juju constraints,
'location': str location of charm,
}
eg
this_service = {
'name': 'openvswitch-odl',
'constraints': {'mem': '8G'},
}
other_services = [
{
'name': 'nova-compute',
'units': 2,
'constraints': {'mem': '4G'},
'location': cs:~bob/xenial/nova-compute
},
{
'name': 'mysql',
'constraints': {'mem': '2G'},
},
{'neutron-api-odl'}]
use_source = ['mysql']
no_origin = ['neutron-api-odl']
"""
self.log.info('OpenStackAmuletDeployment: adding services')
other_services = self._determine_branch_locations(other_services)
super(OpenStackAmuletDeployment, self)._add_services(this_service,
other_services)
services = other_services
services.append(this_service)
use_source = use_source or []
no_origin = no_origin or []
# Charms which should use the source config option
use_source = list(set(
use_source + ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
'ceph-osd', 'ceph-radosgw', 'ceph-mon',
'ceph-proxy', 'percona-cluster', 'lxd']))
# Charms which can not use openstack-origin, ie. many subordinates
no_origin = list(set(
no_origin + ['cinder-ceph', 'hacluster', 'neutron-openvswitch',
'nrpe', 'openvswitch-odl', 'neutron-api-odl',
'odl-controller', 'cinder-backup', 'nexentaedge-data',
'nexentaedge-iscsi-gw', 'nexentaedge-swift-gw',
'cinder-nexentaedge', 'nexentaedge-mgmt']))
if self.openstack:
for svc in services:
if svc['name'] not in use_source + no_origin:
config = {'openstack-origin': self.openstack}
self.d.configure(svc['name'], config)
if self.source:
for svc in services:
if svc['name'] in use_source and svc['name'] not in no_origin:
config = {'source': self.source}
self.d.configure(svc['name'], config)
def _configure_services(self, configs):
"""Configure all of the services."""
self.log.info('OpenStackAmuletDeployment: configure services')
for service, config in six.iteritems(configs):
self.d.configure(service, config)
def _auto_wait_for_status(self, message=None, exclude_services=None,
include_only=None, timeout=None):
"""Wait for all units to have a specific extended status, except
for any defined as excluded. Unless specified via message, any
status containing any case of 'ready' will be considered a match.
Examples of message usage:
Wait for all unit status to CONTAIN any case of 'ready' or 'ok':
message = re.compile('.*ready.*|.*ok.*', re.IGNORECASE)
Wait for all units to reach this status (exact match):
message = re.compile('^Unit is ready and clustered$')
Wait for all units to reach any one of these (exact match):
message = re.compile('Unit is ready|OK|Ready')
Wait for at least one unit to reach this status (exact match):
message = {'ready'}
See Amulet's sentry.wait_for_messages() for message usage detail.
https://github.com/juju/amulet/blob/master/amulet/sentry.py
:param message: Expected status match
:param exclude_services: List of juju service names to ignore,
not to be used in conjunction with include_only.
:param include_only: List of juju service names to exclusively check,
not to be used in conjunction with exclude_services.
:param timeout: Maximum time in seconds to wait for status match
:returns: None. Raises if timeout is hit.
"""
if not timeout:
timeout = int(os.environ.get('AMULET_SETUP_TIMEOUT', 1800))
self.log.info('Waiting for extended status on units for {}s...'
''.format(timeout))
all_services = self.d.services.keys()
if exclude_services and include_only:
raise ValueError('exclude_services can not be used '
'with include_only')
if message:
if isinstance(message, re._pattern_type):
match = message.pattern
else:
match = message
self.log.debug('Custom extended status wait match: '
'{}'.format(match))
else:
self.log.debug('Default extended status wait match: contains '
'READY (case-insensitive)')
message = re.compile('.*ready.*', re.IGNORECASE)
if exclude_services:
self.log.debug('Excluding services from extended status match: '
'{}'.format(exclude_services))
else:
exclude_services = []
if include_only:
services = include_only
else:
services = list(set(all_services) - set(exclude_services))
self.log.debug('Waiting up to {}s for extended status on services: '
'{}'.format(timeout, services))
service_messages = {service: message for service in services}
# Check for idleness
self.d.sentry.wait(timeout=timeout)
# Check for error states and bail early
self.d.sentry.wait_for_status(self.d.juju_env, services, timeout=timeout)
# Check for ready messages
self.d.sentry.wait_for_messages(service_messages, timeout=timeout)
self.log.info('OK')
def _get_openstack_release(self):
"""Get openstack release.
Return an integer representing the enum value of the openstack
release.
"""
# Must be ordered by OpenStack release (not by Ubuntu release):
for i, os_pair in enumerate(OPENSTACK_RELEASES_PAIRS):
setattr(self, os_pair, i)
releases = {
('trusty', None): self.trusty_icehouse,
('trusty', 'cloud:trusty-kilo'): self.trusty_kilo,
('trusty', 'cloud:trusty-liberty'): self.trusty_liberty,
('trusty', 'cloud:trusty-mitaka'): self.trusty_mitaka,
('xenial', None): self.xenial_mitaka,
('xenial', 'cloud:xenial-newton'): self.xenial_newton,
('xenial', 'cloud:xenial-ocata'): self.xenial_ocata,
('xenial', 'cloud:xenial-pike'): self.xenial_pike,
('xenial', 'cloud:xenial-queens'): self.xenial_queens,
('yakkety', None): self.yakkety_newton,
('zesty', None): self.zesty_ocata,
('artful', None): self.artful_pike,
('bionic', None): self.bionic_queens,
}
return releases[(self.series, self.openstack)]
def _get_openstack_release_string(self):
"""Get openstack release string.
Return a string representing the openstack release.
"""
releases = OrderedDict([
('trusty', 'icehouse'),
('xenial', 'mitaka'),
('yakkety', 'newton'),
('zesty', 'ocata'),
('artful', 'pike'),
('bionic', 'queens'),
])
if self.openstack:
os_origin = self.openstack.split(':')[1]
return os_origin.split('%s-' % self.series)[1].split('/')[0]
else:
return releases[self.series]
def get_ceph_expected_pools(self, radosgw=False):
"""Return a list of expected ceph pools in a ceph + cinder + glance
test scenario, based on OpenStack release and whether ceph radosgw
is flagged as present or not."""
if self._get_openstack_release() == self.trusty_icehouse:
# Icehouse
pools = [
'data',
'metadata',
'rbd',
'cinder-ceph',
'glance'
]
elif (self.trusty_kilo <= self._get_openstack_release() <=
self.zesty_ocata):
# Kilo through Ocata
pools = [
'rbd',
'cinder-ceph',
'glance'
]
else:
# Pike and later
pools = [
'cinder-ceph',
'glance'
]
if radosgw:
pools.extend([
'.rgw.root',
'.rgw.control',
'.rgw',
'.rgw.gc',
'.users.uid'
])
return pools

File diff suppressed because it is too large


@ -0,0 +1,16 @@
# Copyright 2014-2015 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# dummy __init__.py to fool syncer into thinking this is a syncable python
# module


@ -0,0 +1,13 @@
# Copyright 2016 Canonical Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


@ -0,0 +1,265 @@
# Copyright 2014-2016 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Copyright 2016 Canonical Ltd.
#
# Authors:
# Openstack Charmers <
#
"""
Helpers for high availability.
"""
import json
import re
from charmhelpers.core.hookenv import (
log,
relation_set,
charm_name,
config,
status_set,
DEBUG,
WARNING,
)
from charmhelpers.core.host import (
lsb_release
)
from charmhelpers.contrib.openstack.ip import (
resolve_address,
is_ipv6,
)
from charmhelpers.contrib.network.ip import (
get_iface_for_address,
get_netmask_for_address,
)
from charmhelpers.contrib.hahelpers.cluster import (
get_hacluster_config
)
JSON_ENCODE_OPTIONS = dict(
sort_keys=True,
allow_nan=False,
indent=None,
separators=(',', ':'),
)
class DNSHAException(Exception):
"""Raised when an error occurs setting up DNS HA
"""
pass
def update_dns_ha_resource_params(resources, resource_params,
relation_id=None,
crm_ocf='ocf:maas:dns'):
""" Configure DNS-HA resources based on provided configuration and
update resource dictionaries for the HA relation.
@param resources: Pointer to dictionary of resources.
Usually instantiated in ha_joined().
@param resource_params: Pointer to dictionary of resource parameters.
Usually instantiated in ha_joined()
@param relation_id: Relation ID of the ha relation
@param crm_ocf: Corosync Open Cluster Framework resource agent to use for
DNS HA
"""
_relation_data = {'resources': {}, 'resource_params': {}}
update_hacluster_dns_ha(charm_name(),
_relation_data,
crm_ocf)
resources.update(_relation_data['resources'])
resource_params.update(_relation_data['resource_params'])
relation_set(relation_id=relation_id, groups=_relation_data['groups'])
def assert_charm_supports_dns_ha():
"""Validate prerequisites for DNS HA
The MAAS client is only available on Xenial or greater
:raises DNSHAException: if release is < 16.04
"""
if lsb_release().get('DISTRIB_RELEASE') < '16.04':
msg = ('DNS HA is only supported on 16.04 and greater '
'versions of Ubuntu.')
status_set('blocked', msg)
raise DNSHAException(msg)
return True
def expect_ha():
""" Determine if the unit expects to be in HA
Check for VIP or dns-ha settings which indicate the unit should expect to
be related to hacluster.
@returns boolean
"""
return config('vip') or config('dns-ha')
def generate_ha_relation_data(service):
""" Generate relation data for ha relation
Based on configuration options and unit interfaces, generate a json
encoded dict of relation data items for the hacluster relation,
providing configuration for DNS HA or VIP's + haproxy clone sets.
@returns dict: json encoded data for use with relation_set
"""
_haproxy_res = 'res_{}_haproxy'.format(service)
_relation_data = {
'resources': {
_haproxy_res: 'lsb:haproxy',
},
'resource_params': {
_haproxy_res: 'op monitor interval="5s"'
},
'init_services': {
_haproxy_res: 'haproxy'
},
'clones': {
'cl_{}_haproxy'.format(service): _haproxy_res
},
}
if config('dns-ha'):
update_hacluster_dns_ha(service, _relation_data)
else:
update_hacluster_vip(service, _relation_data)
return {
'json_{}'.format(k): json.dumps(v, **JSON_ENCODE_OPTIONS)
for k, v in _relation_data.items() if v
}
def update_hacluster_dns_ha(service, relation_data,
crm_ocf='ocf:maas:dns'):
""" Configure DNS-HA resources based on provided configuration
@param service: Name of the service being configured
@param relation_data: Pointer to dictionary of relation data.
@param crm_ocf: Corosync Open Cluster Framework resource agent to use for
DNS HA
"""
# Validate the charm environment for DNS HA
assert_charm_supports_dns_ha()
settings = ['os-admin-hostname', 'os-internal-hostname',
'os-public-hostname', 'os-access-hostname']
# Check which DNS settings are set and update dictionaries
hostname_group = []
for setting in settings:
hostname = config(setting)
if hostname is None:
log('DNS HA: Hostname setting {} is None. Ignoring.'
''.format(setting),
DEBUG)
continue
m = re.search('os-(.+?)-hostname', setting)
if m:
endpoint_type = m.group(1)
# resolve_address's ADDRESS_MAP uses 'int' not 'internal'
if endpoint_type == 'internal':
endpoint_type = 'int'
else:
msg = ('Unexpected DNS hostname setting: {}. '
'Cannot determine endpoint_type name'
''.format(setting))
status_set('blocked', msg)
raise DNSHAException(msg)
hostname_key = 'res_{}_{}_hostname'.format(service, endpoint_type)
if hostname_key in hostname_group:
log('DNS HA: Resource {}: {} already exists in '
'hostname group - skipping'.format(hostname_key, hostname),
DEBUG)
continue
hostname_group.append(hostname_key)
relation_data['resources'][hostname_key] = crm_ocf
relation_data['resource_params'][hostname_key] = (
'params fqdn="{}" ip_address="{}"'
.format(hostname, resolve_address(endpoint_type=endpoint_type,
override=False)))
if len(hostname_group) >= 1:
log('DNS HA: Hostname group is set with {} as members. '
'Informing the ha relation'.format(' '.join(hostname_group)),
DEBUG)
relation_data['groups'] = {
'grp_{}_hostnames'.format(service): ' '.join(hostname_group)
}
else:
msg = 'DNS HA: Hostname group has no members.'
status_set('blocked', msg)
raise DNSHAException(msg)
def update_hacluster_vip(service, relation_data):
""" Configure VIP resources based on provided configuration
@param service: Name of the service being configured
@param relation_data: Pointer to dictionary of relation data.
"""
cluster_config = get_hacluster_config()
vip_group = []
for vip in cluster_config['vip'].split():
if is_ipv6(vip):
res_neutron_vip = 'ocf:heartbeat:IPv6addr'
vip_params = 'ipv6addr'
else:
res_neutron_vip = 'ocf:heartbeat:IPaddr2'
vip_params = 'ip'
iface = (get_iface_for_address(vip) or
config('vip_iface'))
netmask = (get_netmask_for_address(vip) or
config('vip_cidr'))
if iface is not None:
vip_key = 'res_{}_{}_vip'.format(service, iface)
if vip_key in vip_group:
if vip not in relation_data['resource_params'][vip_key]:
vip_key = '{}_{}'.format(vip_key, vip_params)
else:
log("Resource '%s' (vip='%s') already exists in "
"vip group - skipping" % (vip_key, vip), WARNING)
continue
relation_data['resources'][vip_key] = res_neutron_vip
relation_data['resource_params'][vip_key] = (
'params {ip}="{vip}" cidr_netmask="{netmask}" '
'nic="{iface}"'.format(ip=vip_params,
vip=vip,
iface=iface,
netmask=netmask)
)
vip_group.append(vip_key)
if len(vip_group) >= 1:
relation_data['groups'] = {
'grp_{}_vips'.format(service): ' '.join(vip_group)
}


@ -0,0 +1,178 @@
#!/usr/bin/python
#
# Copyright 2017 Canonical Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import six
from charmhelpers.fetch import apt_install
from charmhelpers.contrib.openstack.context import IdentityServiceContext
from charmhelpers.core.hookenv import (
log,
ERROR,
)
def get_api_suffix(api_version):
"""Return the formatted api suffix for the given version
@param api_version: version of the keystone endpoint
@returns the api suffix formatted according to the given api
version
"""
return 'v2.0' if api_version in (2, "2", "2.0") else 'v3'
def format_endpoint(schema, addr, port, api_version):
"""Return a formatted keystone endpoint
@param schema: http or https
@param addr: ipv4/ipv6 host of the keystone service
@param port: port of the keystone service
@param api_version: 2 or 3
@returns a fully formatted keystone endpoint
"""
return '{}://{}:{}/{}/'.format(schema, addr, port,
get_api_suffix(api_version))
def get_keystone_manager(endpoint, api_version, **kwargs):
"""Return a keystonemanager for the correct API version
@param endpoint: the keystone endpoint to point client at
@param api_version: version of the keystone api the client should use
@param kwargs: token or username/tenant/password information
@returns keystonemanager class used for interrogating keystone
"""
if api_version == 2:
return KeystoneManager2(endpoint, **kwargs)
if api_version == 3:
return KeystoneManager3(endpoint, **kwargs)
raise ValueError('No manager found for api version {}'.format(api_version))
def get_keystone_manager_from_identity_service_context():
"""Return a keystonmanager generated from a
instance of charmhelpers.contrib.openstack.context.IdentityServiceContext
@returns keystonamenager instance
"""
context = IdentityServiceContext()()
if not context:
msg = "Identity service context cannot be generated"
log(msg, level=ERROR)
raise ValueError(msg)
endpoint = format_endpoint(context['service_protocol'],
context['service_host'],
context['service_port'],
context['api_version'])
if context['api_version'] in (2, "2.0"):
api_version = 2
else:
api_version = 3
return get_keystone_manager(endpoint, api_version,
username=context['admin_user'],
password=context['admin_password'],
tenant_name=context['admin_tenant_name'])
class KeystoneManager(object):
def resolve_service_id(self, service_name=None, service_type=None):
"""Find the service_id of a given service"""
services = [s._info for s in self.api.services.list()]
service_name = service_name.lower()
for s in services:
name = s['name'].lower()
if service_type and service_name:
if (service_name == name and service_type == s['type']):
return s['id']
elif service_name and service_name == name:
return s['id']
elif service_type and service_type == s['type']:
return s['id']
return None
def service_exists(self, service_name=None, service_type=None):
"""Determine if the given service exists on the service list"""
return self.resolve_service_id(service_name, service_type) is not None
class KeystoneManager2(KeystoneManager):
def __init__(self, endpoint, **kwargs):
try:
from keystoneclient.v2_0 import client
from keystoneclient.auth.identity import v2
from keystoneclient import session
except ImportError:
if six.PY2:
apt_install(["python-keystoneclient"], fatal=True)
else:
apt_install(["python3-keystoneclient"], fatal=True)
from keystoneclient.v2_0 import client
from keystoneclient.auth.identity import v2
from keystoneclient import session
self.api_version = 2
token = kwargs.get("token", None)
if token:
api = client.Client(endpoint=endpoint, token=token)
else:
auth = v2.Password(username=kwargs.get("username"),
password=kwargs.get("password"),
tenant_name=kwargs.get("tenant_name"),
auth_url=endpoint)
sess = session.Session(auth=auth)
api = client.Client(session=sess)
self.api = api
class KeystoneManager3(KeystoneManager):
def __init__(self, endpoint, **kwargs):
try:
from keystoneclient.v3 import client
from keystoneclient.auth import token_endpoint
from keystoneclient import session
from keystoneclient.auth.identity import v3
except ImportError:
if six.PY2:
apt_install(["python-keystoneclient"], fatal=True)
else:
apt_install(["python3-keystoneclient"], fatal=True)
from keystoneclient.v3 import client
from keystoneclient.auth import token_endpoint
from keystoneclient import session
from keystoneclient.auth.identity import v3
self.api_version = 3
token = kwargs.get("token", None)
if token:
auth = token_endpoint.Token(endpoint=endpoint,
token=token)
sess = session.Session(auth=auth)
else:
auth = v3.Password(auth_url=endpoint,
user_id=kwargs.get("username"),
password=kwargs.get("password"),
project_id=kwargs.get("tenant_name"))
sess = session.Session(auth=auth)
self.api = client.Client(session=sess)


@ -0,0 +1,16 @@
# Copyright 2014-2015 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# dummy __init__.py to fool syncer into thinking this is a syncable python
# module


@ -0,0 +1,379 @@
# Copyright 2014-2015 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import six
from charmhelpers.fetch import apt_install, apt_update
from charmhelpers.core.hookenv import (
log,
ERROR,
INFO,
TRACE
)
from charmhelpers.contrib.openstack.utils import OPENSTACK_CODENAMES
try:
from jinja2 import FileSystemLoader, ChoiceLoader, Environment, exceptions
except ImportError:
apt_update(fatal=True)
if six.PY2:
apt_install('python-jinja2', fatal=True)
else:
apt_install('python3-jinja2', fatal=True)
from jinja2 import FileSystemLoader, ChoiceLoader, Environment, exceptions
class OSConfigException(Exception):
pass
def get_loader(templates_dir, os_release):
"""
Create a jinja2.ChoiceLoader containing template dirs up to
and including os_release. If directory template directory
is missing at templates_dir, it will be omitted from the loader.
templates_dir is added to the bottom of the search list as a base
loading dir.
A charm may also ship a templates dir with this module
and it will be appended to the bottom of the search list, eg::
hooks/charmhelpers/contrib/openstack/templates
:param templates_dir (str): Base template directory containing release
sub-directories.
:param os_release (str): OpenStack release codename to construct template
loader.
:returns: jinja2.ChoiceLoader constructed with a list of
jinja2.FilesystemLoaders, ordered in descending
order by OpenStack release.
"""
tmpl_dirs = [(rel, os.path.join(templates_dir, rel))
for rel in six.itervalues(OPENSTACK_CODENAMES)]
if not os.path.isdir(templates_dir):
log('Templates directory not found @ %s.' % templates_dir,
level=ERROR)
raise OSConfigException
# the bottom contains templates_dir and possibly a common templates dir
# shipped with the helper.
loaders = [FileSystemLoader(templates_dir)]
helper_templates = os.path.join(os.path.dirname(__file__), 'templates')
if os.path.isdir(helper_templates):
loaders.append(FileSystemLoader(helper_templates))
for rel, tmpl_dir in tmpl_dirs:
if os.path.isdir(tmpl_dir):
loaders.insert(0, FileSystemLoader(tmpl_dir))
if rel == os_release:
break
# demote this log to the lowest level; we don't really need to see these
# logs in production even when debugging.
log('Creating choice loader with dirs: %s' %
[l.searchpath for l in loaders], level=TRACE)
return ChoiceLoader(loaders)
class OSConfigTemplate(object):
"""
Associates a config file template with a list of context generators.
Responsible for constructing a template context based on those generators.
"""
def __init__(self, config_file, contexts, config_template=None):
self.config_file = config_file
if hasattr(contexts, '__call__'):
self.contexts = [contexts]
else:
self.contexts = contexts
self._complete_contexts = []
self.config_template = config_template
def context(self):
ctxt = {}
for context in self.contexts:
_ctxt = context()
if _ctxt:
ctxt.update(_ctxt)
# track interfaces for every complete context.
[self._complete_contexts.append(interface)
for interface in context.interfaces
if interface not in self._complete_contexts]
return ctxt
def complete_contexts(self):
'''
Return a list of interfaces that have satisfied contexts.
'''
if self._complete_contexts:
return self._complete_contexts
self.context()
return self._complete_contexts
@property
def is_string_template(self):
""":returns: Boolean if this instance is a template initialised with a string"""
return self.config_template is not None
class OSConfigRenderer(object):
"""
This class provides a common templating system to be used by OpenStack
charms. It is intended to help charms share common code and templates,
and ease the burden of managing config templates across multiple OpenStack
releases.
Basic usage::
# import some common context generates from charmhelpers
from charmhelpers.contrib.openstack import context
# Create a renderer object for a specific OS release.
configs = OSConfigRenderer(templates_dir='/tmp/templates',
openstack_release='folsom')
# register some config files with context generators.
configs.register(config_file='/etc/nova/nova.conf',
contexts=[context.SharedDBContext(),
context.AMQPContext()])
configs.register(config_file='/etc/nova/api-paste.ini',
contexts=[context.IdentityServiceContext()])
configs.register(config_file='/etc/haproxy/haproxy.conf',
contexts=[context.HAProxyContext()])
configs.register(config_file='/etc/keystone/policy.d/extra.cfg',
contexts=[context.ExtraPolicyContext()
context.KeystoneContext()],
config_template=hookenv.config('extra-policy'))
# write out a single config
configs.write('/etc/nova/nova.conf')
# write out all registered configs
configs.write_all()
**OpenStack Releases and template loading**
When the object is instantiated, it is associated with a specific OS
release. This dictates how the template loader will be constructed.
The constructed loader attempts to load the template from several places
in the following order:
- from the most recent OS release-specific template dir (if one exists)
- the base templates_dir
- a template directory shipped in the charm with this helper file.
For the example above, '/tmp/templates' contains the following structure::
/tmp/templates/nova.conf
/tmp/templates/api-paste.ini
/tmp/templates/grizzly/api-paste.ini
/tmp/templates/havana/api-paste.ini
Since it was registered with the grizzly release, it first searches
the grizzly directory for nova.conf, then the templates dir.
When writing api-paste.ini, it will find the template in the grizzly
directory.
If the object were created with folsom, it would fall back to the
base templates dir for its api-paste.ini template.
This system should help manage changes in config files through
openstack releases, allowing charms to fall back to the most recently
updated config template for a given release
The haproxy.conf, since it is not shipped in the templates dir, will
be loaded from the module directory's template directory, eg
$CHARM/hooks/charmhelpers/contrib/openstack/templates. This allows
us to ship common templates (haproxy, apache) with the helpers.
**Context generators**
Context generators are used to generate template contexts during hook
execution. Doing so may require inspecting service relations, charm
config, etc. When registered, a config file is associated with a list
of generators. When a template is rendered and written, all context
generates are called in a chain to generate the context dictionary
passed to the jinja2 template. See context.py for more info.
"""
def __init__(self, templates_dir, openstack_release):
if not os.path.isdir(templates_dir):
log('Could not locate templates dir %s' % templates_dir,
level=ERROR)
raise OSConfigException
self.templates_dir = templates_dir
self.openstack_release = openstack_release
self.templates = {}
self._tmpl_env = None
if None in [Environment, ChoiceLoader, FileSystemLoader]:
# if this code is running, the object is created pre-install hook.
# jinja2 shouldn't get touched until the module is reloaded on next
# hook execution, with proper jinja2 bits successfully imported.
if six.PY2:
apt_install('python-jinja2')
else:
apt_install('python3-jinja2')
def register(self, config_file, contexts, config_template=None):
"""
Register a config file with a list of context generators to be called
during rendering.
config_template can be used to load a template from a string instead of
using template loaders and template files.
:param config_file (str): a path where a config file will be rendered
:param contexts (list): a list of context dictionaries with kv pairs
:param config_template (str): an optional template string to use
"""
self.templates[config_file] = OSConfigTemplate(
config_file=config_file,
contexts=contexts,
config_template=config_template
)
log('Registered config file: {}'.format(config_file),
level=INFO)
def _get_tmpl_env(self):
if not self._tmpl_env:
loader = get_loader(self.templates_dir, self.openstack_release)
self._tmpl_env = Environment(loader=loader)
def _get_template(self, template):
self._get_tmpl_env()
template = self._tmpl_env.get_template(template)
log('Loaded template from {}'.format(template.filename),
level=INFO)
return template
def _get_template_from_string(self, ostmpl):
'''
Get a jinja2 template object from a string.
:param ostmpl: OSConfigTemplate to use as a data source.
'''
self._get_tmpl_env()
template = self._tmpl_env.from_string(ostmpl.config_template)
log('Loaded a template from a string for {}'.format(
ostmpl.config_file),
level=INFO)
return template
def render(self, config_file):
if config_file not in self.templates:
log('Config not registered: {}'.format(config_file), level=ERROR)
raise OSConfigException
ostmpl = self.templates[config_file]
ctxt = ostmpl.context()
if ostmpl.is_string_template:
template = self._get_template_from_string(ostmpl)
log('Rendering from a string template: '
'{}'.format(config_file),
level=INFO)
else:
_tmpl = os.path.basename(config_file)
try:
template = self._get_template(_tmpl)
except exceptions.TemplateNotFound:
# if no template is found with basename, try looking
# for it using a munged full path, eg:
# /etc/apache2/apache2.conf -> etc_apache2_apache2.conf
_tmpl = '_'.join(config_file.split('/')[1:])
try:
template = self._get_template(_tmpl)
except exceptions.TemplateNotFound as e:
log('Could not load template from {} by {} or {}.'
''.format(
self.templates_dir,
os.path.basename(config_file),
_tmpl
),
level=ERROR)
raise e
log('Rendering from template: {}'.format(config_file),
level=INFO)
return template.render(ctxt)
def write(self, config_file):
"""
Write a single config file, raises if config file is not registered.
"""
if config_file not in self.templates:
log('Config not registered: %s' % config_file, level=ERROR)
raise OSConfigException
_out = self.render(config_file)
if six.PY3:
_out = _out.encode('UTF-8')
with open(config_file, 'wb') as out:
out.write(_out)
log('Wrote template %s.' % config_file, level=INFO)
def write_all(self):
"""
Write out all registered config files.
"""
[self.write(k) for k in six.iterkeys(self.templates)]
def set_release(self, openstack_release):
"""
Resets the template environment and generates a new template loader
based on a the new openstack release.
"""
self._tmpl_env = None
self.openstack_release = openstack_release
self._get_tmpl_env()
def complete_contexts(self):
'''
Returns a list of context interfaces that yield a complete context.
'''
interfaces = []
[interfaces.extend(i.complete_contexts())
for i in six.itervalues(self.templates)]
return interfaces
def get_incomplete_context_data(self, interfaces):
'''
Return dictionary of relation status of interfaces and any missing
required context data. Example:
{'amqp': {'missing_data': ['rabbitmq_password'], 'related': True},
'zeromq-configuration': {'related': False}}
'''
incomplete_context_data = {}
for i in six.itervalues(self.templates):
for context in i.contexts:
for interface in interfaces:
related = False
if interface in context.interfaces:
related = context.get_related()
missing_data = context.missing_data
if missing_data:
incomplete_context_data[interface] = {'missing_data': missing_data}
if related:
if incomplete_context_data.get(interface):
incomplete_context_data[interface].update({'related': True})
else:
incomplete_context_data[interface] = {'related': True}
else:
incomplete_context_data[interface] = {'related': False}
return incomplete_context_data

View File

@ -0,0 +1,126 @@
# Copyright 2018 Canonical Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import os
import charmhelpers.contrib.openstack.alternatives as alternatives
import charmhelpers.contrib.openstack.context as context
import charmhelpers.core.hookenv as hookenv
import charmhelpers.core.host as host
import charmhelpers.core.templating as templating
import charmhelpers.core.unitdata as unitdata
VAULTLOCKER_BACKEND = 'charm-vaultlocker'
class VaultKVContext(context.OSContextGenerator):
"""Vault KV context for interaction with vault-kv interfaces"""
interfaces = ['secrets-storage']
def __init__(self, secret_backend=None):
super(context.OSContextGenerator, self).__init__()
self.secret_backend = (
secret_backend or 'charm-{}'.format(hookenv.service_name())
)
def __call__(self):
db = unitdata.kv()
last_token = db.get('last-token')
secret_id = db.get('secret-id')
for relation_id in hookenv.relation_ids(self.interfaces[0]):
for unit in hookenv.related_units(relation_id):
data = hookenv.relation_get(unit=unit,
rid=relation_id)
vault_url = data.get('vault_url')
role_id = data.get('{}_role_id'.format(hookenv.local_unit()))
token = data.get('{}_token'.format(hookenv.local_unit()))
if all([vault_url, role_id, token]):
token = json.loads(token)
vault_url = json.loads(vault_url)
# Tokens may change when secret_id's are being
# reissued - if so use token to get new secret_id
if token != last_token:
secret_id = retrieve_secret_id(
url=vault_url,
token=token
)
db.set('secret-id', secret_id)
db.set('last-token', token)
db.flush()
ctxt = {
'vault_url': vault_url,
'role_id': json.loads(role_id),
'secret_id': secret_id,
'secret_backend': self.secret_backend,
}
vault_ca = data.get('vault_ca')
if vault_ca:
ctxt['vault_ca'] = json.loads(vault_ca)
self.complete = True
return ctxt
return {}
def write_vaultlocker_conf(context, priority=100):
"""Write vaultlocker configuration to disk and install alternative
:param context: Dict of data from vault-kv relation
:ptype: context: dict
:param priority: Priority of alternative configuration
:ptype: priority: int"""
charm_vl_path = "/var/lib/charm/{}/vaultlocker.conf".format(
hookenv.service_name()
)
host.mkdir(os.path.dirname(charm_vl_path), perms=0o700)
templating.render(source='vaultlocker.conf.j2',
target=charm_vl_path,
context=context, perms=0o600),
alternatives.install_alternative('vaultlocker.conf',
'/etc/vaultlocker/vaultlocker.conf',
charm_vl_path, priority)
def vault_relation_complete(backend=None):
"""Determine whether vault relation is complete
:param backend: Name of secrets backend requested
:ptype backend: string
:returns: whether the relation to vault is complete
:rtype: bool"""
vault_kv = VaultKVContext(secret_backend=backend or VAULTLOCKER_BACKEND)
vault_kv()
return vault_kv.complete
# TODO: contrib a high level unwrap method to hvac that works
def retrieve_secret_id(url, token):
"""Retrieve a response-wrapped secret_id from Vault
:param url: URL to Vault Server
:ptype url: str
:param token: One shot Token to use
:ptype token: str
:returns: secret_id to use for Vault Access
:rtype: str"""
import hvac
client = hvac.Client(url=url, token=token)
response = client._post('/v1/sys/wrapping/unwrap')
if response.status_code == 200:
data = response.json()
return data['data']['secret_id']
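
A minimal usage sketch for this module, mirroring how the ceph-osd hooks in
this change consume it (the hook function name here is hypothetical)::

    import charmhelpers.contrib.openstack.vaultlocker as vaultlocker
    from charmhelpers.core.hookenv import log, DEBUG

    def prepare_storage():
        # Gather vault_url, role_id and a one-shot token from the
        # secrets-storage relation; a secret_id is resolved when complete.
        vault_kv = vaultlocker.VaultKVContext(vaultlocker.VAULTLOCKER_BACKEND)
        context = vault_kv()
        if not vault_kv.complete:
            # Vault has not been initialized/unsealed yet - retry later
            log('Deferring storage preparation until vault is ready',
                level=DEBUG)
            return
        # Render /var/lib/charm/<service>/vaultlocker.conf and register it
        # as the /etc/vaultlocker/vaultlocker.conf alternative.
        vaultlocker.write_vaultlocker_conf(context)
        # ... proceed to encrypt and prepare block devices ...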


@ -0,0 +1 @@
ceph_hooks.py


@ -0,0 +1 @@
ceph_hooks.py


@ -0,0 +1 @@
ceph_hooks.py


@ -0,0 +1 @@
ceph_hooks.py


@ -1364,17 +1364,27 @@ def add_keyring_to_ceph(keyring, secret, hostname, path, done, init_marker):
else:
service_restart('ceph-mon-all')
# NOTE(jamespage): Later ceph releases require explicit
# call to ceph-create-keys to setup the
# admin keys for the cluster; this command
# will wait for quorum in the cluster before
# returning.
# NOTE(fnordahl): Explicitly run `ceph-create-keys` for older
# ceph releases too. This improves bootstrap
# resilience as the charm will wait for
# presence of peer units before attempting
# to bootstrap. Note that charms deploying
# ceph-mon service should disable running of
# `ceph-create-keys` service in init system.
cmd = ['ceph-create-keys', '--id', hostname]
if cmp_pkgrevno('ceph', '12.0.0') >= 0:
# NOTE(jamespage): Later ceph releases require explicit
# call to ceph-create-keys to setup the
# admin keys for the cluster; this command
# will wait for quorum in the cluster before
# returning.
# NOTE(fnordahl): The default timeout in ceph-create-keys of 600
# seconds is not adequate for all situations.
# seconds is not adequate. Increase timeout when
# timeout parameter available. For older releases
# we rely on retry_on_exception decorator.
# LP#1719436
cmd = ['ceph-create-keys', '--id', hostname, '--timeout', '1800']
subprocess.check_call(cmd)
cmd.extend(['--timeout', '1800'])
subprocess.check_call(cmd)
_client_admin_keyring = '/etc/ceph/ceph.client.admin.keyring'
osstat = os.stat(_client_admin_keyring)
if not osstat.st_size:


@ -27,6 +27,8 @@ extra-bindings:
requires:
mon:
interface: ceph-osd
secrets-storage:
interface: vault-kv
storage:
osd-devices:
type: block


@ -0,0 +1,6 @@
# vaultlocker configuration from ceph-osd charm
[vault]
url = {{ vault_url }}
approle = {{ role_id }}
backend = {{ secret_backend }}
secret_id = {{ secret_id }}


@ -40,6 +40,7 @@ import novaclient
import pika
import swiftclient
from charmhelpers.core.decorators import retry_on_exception
from charmhelpers.contrib.amulet.utils import (
AmuletUtils
)
@ -423,6 +424,7 @@ class OpenStackAmuletUtils(AmuletUtils):
self.log.debug('Checking if tenant exists ({})...'.format(tenant))
return tenant in [t.name for t in keystone.tenants.list()]
@retry_on_exception(num_retries=5, base_delay=1)
def keystone_wait_for_propagation(self, sentry_relation_pairs,
api_version):
"""Iterate over list of sentry and relation tuples and verify that


@ -517,3 +517,82 @@ class CephHooksTestCase(unittest.TestCase):
subprocess.check_call.assert_called_once_with(
['udevadm', 'control', '--reload-rules']
)
@patch.object(ceph_hooks, 'relation_get')
@patch.object(ceph_hooks, 'relation_set')
@patch.object(ceph_hooks, 'prepare_disks_and_activate')
@patch.object(ceph_hooks, 'get_relation_ip')
@patch.object(ceph_hooks, 'socket')
class SecretsStorageTestCase(unittest.TestCase):
def test_secrets_storage_relation_joined(self,
_socket,
_get_relation_ip,
_prepare_disks_and_activate,
_relation_set,
_relation_get):
_get_relation_ip.return_value = '10.23.1.2'
_socket.gethostname.return_value = 'testhost'
ceph_hooks.secrets_storage_joined()
_get_relation_ip.assert_called_with('secrets-storage')
_relation_set.assert_called_with(
relation_id=None,
secret_backend='charm-vaultlocker',
isolated=True,
access_address='10.23.1.2',
hostname='testhost'
)
_socket.gethostname.assert_called_once_with()
def test_secrets_storage_relation_changed(self,
_socket,
_get_relation_ip,
_prepare_disks_and_activate,
_relation_set,
_relation_get):
_relation_get.return_value = None
ceph_hooks.secrets_storage_changed()
_prepare_disks_and_activate.assert_called_once_with()
@patch.object(ceph_hooks, 'cmp_pkgrevno')
@patch.object(ceph_hooks, 'config')
class VaultLockerTestCase(unittest.TestCase):
def test_use_vaultlocker(self, _config, _cmp_pkgrevno):
_test_data = {
'osd-encrypt': True,
'osd-encrypt-keymanager': 'vault',
}
_config.side_effect = lambda x: _test_data.get(x)
_cmp_pkgrevno.return_value = 1
self.assertTrue(ceph_hooks.use_vaultlocker())
def test_use_vaultlocker_no_encryption(self, _config, _cmp_pkgrevno):
_test_data = {
'osd-encrypt': False,
'osd-encrypt-keymanager': 'vault',
}
_config.side_effect = lambda x: _test_data.get(x)
_cmp_pkgrevno.return_value = 1
self.assertFalse(ceph_hooks.use_vaultlocker())
def test_use_vaultlocker_not_vault(self, _config, _cmp_pkgrevno):
_test_data = {
'osd-encrypt': True,
'osd-encrypt-keymanager': 'ceph',
}
_config.side_effect = lambda x: _test_data.get(x)
_cmp_pkgrevno.return_value = 1
self.assertFalse(ceph_hooks.use_vaultlocker())
def test_use_vaultlocker_old_version(self, _config, _cmp_pkgrevno):
_test_data = {
'osd-encrypt': True,
'osd-encrypt-keymanager': 'vault',
}
_config.side_effect = lambda x: _test_data.get(x)
_cmp_pkgrevno.return_value = -1
self.assertRaises(ValueError,
ceph_hooks.use_vaultlocker)


@ -32,6 +32,8 @@ TO_PATCH = [
'get_conf',
'application_version_set',
'get_upstream_version',
'vaultlocker',
'use_vaultlocker',
]
CEPH_MONS = [
@ -47,6 +49,7 @@ class ServiceStatusTestCase(test_utils.CharmTestCase):
super(ServiceStatusTestCase, self).setUp(hooks, TO_PATCH)
self.config.side_effect = self.test_config.get
self.get_upstream_version.return_value = '10.2.2'
self.use_vaultlocker.return_value = False
def test_assess_status_no_monitor_relation(self):
self.relation_ids.return_value = []
@ -77,6 +80,40 @@ class ServiceStatusTestCase(test_utils.CharmTestCase):
self.get_conf.return_value = 'monitor-bootstrap-key'
self.ceph.get_running_osds.return_value = ['12345',
'67890']
self.get_upstream_version.return_value = '12.2.4'
hooks.assess_status()
self.status_set.assert_called_with('active', mock.ANY)
self.application_version_set.assert_called_with('10.2.2')
self.application_version_set.assert_called_with('12.2.4')
def test_assess_status_monitor_vault_missing(self):
_test_relations = {
'mon': ['mon:1'],
}
self.relation_ids.side_effect = lambda x: _test_relations.get(x, [])
self.related_units.return_value = CEPH_MONS
self.vaultlocker.vault_relation_complete.return_value = False
self.use_vaultlocker.return_value = True
self.get_conf.return_value = 'monitor-bootstrap-key'
self.ceph.get_running_osds.return_value = ['12345',
'67890']
self.get_upstream_version.return_value = '12.2.4'
hooks.assess_status()
self.status_set.assert_called_with('blocked', mock.ANY)
self.application_version_set.assert_called_with('12.2.4')
def test_assess_status_monitor_vault_incomplete(self):
_test_relations = {
'mon': ['mon:1'],
'secrets-storage': ['secrets-storage:6']
}
self.relation_ids.side_effect = lambda x: _test_relations.get(x, [])
self.related_units.return_value = CEPH_MONS
self.vaultlocker.vault_relation_complete.return_value = False
self.use_vaultlocker.return_value = True
self.get_conf.return_value = 'monitor-bootstrap-key'
self.ceph.get_running_osds.return_value = ['12345',
'67890']
self.get_upstream_version.return_value = '12.2.4'
hooks.assess_status()
self.status_set.assert_called_with('waiting', mock.ANY)
self.application_version_set.assert_called_with('12.2.4')