Policyd override implementation

This patchset implements policy overrides for cinder.  It uses the
code in charmhelpers.

It also fixes several bugs in the bundles where the actual version of
cinder that was being installed was the distro default rather than the
one that the bundle described.

Change-Id: Ic979dcb96ddb931fadb1fa4a4b36108244ddf306
Closes-Bug: #1741723
Alex Kavanagh 2019-10-01 15:37:29 +01:00
parent cbff05ee1a
commit 6ee32006e5
12 changed files with 149 additions and 17 deletions

View File

@@ -28,7 +28,7 @@ Basic, all-in-one using local storage and iSCSI
 ===============================================
 The api server, scheduler and volume service are all deployed into the same
-unit. Local storage will be initialized as a LVM phsyical device, and a volume
+unit. Local storage will be initialized as a LVM physical device, and a volume
 group initialized. Instance volumes will be created locally as logical volumes
 and exported to instances via iSCSI. This is ideal for small-scale deployments
 or testing:
@@ -79,7 +79,7 @@ All-in-one using Ceph-backed RBD volumes
 All 3 services can be deployed to the same unit, but instead of relying
 on local storage to back volumes an external Ceph cluster is used. This
-allows scalability and redundancy needs to be satisified and Cinder's RBD
+allows scalability and redundancy needs to be satisfied and Cinder's RBD
 driver used to create, export and connect volumes to instances. This assumes
 a functioning Ceph cluster has already been deployed using the official Ceph
 charm and a relation exists between the Ceph service and the nova-compute
@@ -112,7 +112,7 @@ block-device: When using local storage, a block device should be specified to
              all nodes that the service may be deployed to.
 overwrite: Whether or not to wipe local storage that of data that may prevent
-           it from being initialized as a LVM phsyical device. This includes
+           it from being initialized as a LVM physical device. This includes
            filesystems and partition tables. *CAUTION*
 enabled-services: Can be used to separate cinder services between service
@@ -155,17 +155,22 @@ set
 Network Space support
 ---------------------
-This charm supports the use of Juju Network Spaces, allowing the charm to be bound to network space configurations managed directly by Juju. This is only supported with Juju 2.0 and above.
+This charm supports the use of Juju Network Spaces, allowing the charm to be
+bound to network space configurations managed directly by Juju. This is only
+supported with Juju 2.0 and above.
 
-API endpoints can be bound to distinct network spaces supporting the network separation of public, internal and admin endpoints.
+API endpoints can be bound to distinct network spaces supporting the network
+separation of public, internal and admin endpoints.
 
-Access to the underlying MySQL instance can also be bound to a specific space using the shared-db relation.
+Access to the underlying MySQL instance can also be bound to a specific space
+using the shared-db relation.
 
 To use this feature, use the --bind option when deploying the charm:
 
     juju deploy cinder --bind "public=public-space internal=internal-space admin=admin-space shared-db=internal-space"
 
-alternatively these can also be provided as part of a juju native bundle configuration:
+Alternatively these can also be provided as part of a juju native bundle
+configuration:
 
     cinder:
      charm: cs:xenial/cinder
@@ -176,6 +181,53 @@ alternatively these can also be provided as part of a juju native bundle configu
       internal: internal-space
       shared-db: internal-space
 
-NOTE: Spaces must be configured in the underlying provider prior to attempting to use them.
+NOTE: Spaces must be configured in the underlying provider prior to attempting
+to use them.
 
-NOTE: Existing deployments using os-*-network configuration options will continue to function; these options are preferred over any network space binding provided if set.
+NOTE: Existing deployments using os-*-network configuration options will
+continue to function; these options are preferred over any network space
+binding provided if set.
Policy Overrides
================
This feature allows for policy overrides using the `policy.d` directory. This
is an **advanced** feature and the policies that the OpenStack service supports
should be clearly and unambiguously understood before trying to override, or
add to, the default policies that the service uses. The charm also has some
policy defaults. They should also be understood before being overridden.
> **Caution**: It is possible to break the system (for tenants and other
services) if policies are incorrectly applied to the service.
Policy overrides are YAML files that contain rules that will add to, or
override, existing policy rules in the service. The `policy.d` directory is
a place to put the YAML override files. This charm owns the
`/etc/cinder/policy.d` directory, and as such, any manual changes to it will
be overwritten on charm upgrades.
Overrides are provided to the charm using a Juju resource called
`policyd-override`. The resource is a ZIP file. This file, say
`overrides.zip`, is attached to the charm by:
juju attach-resource cinder policyd-override=overrides.zip
The policy override is enabled in the charm using:
juju config cinder use-policyd-override=true
When `use-policyd-override` is `True` the status line of the charm will be
prefixed with `PO:` indicating that policies have been overridden. If the
installation of the policy override YAML files failed for any reason then the
status line will be prefixed with `PO (broken):`. The log file for the charm
will indicate the reason. No policy override files are installed if `PO
(broken):` is shown. The status line indicates that the overrides are broken,
not that the policy for the service has failed. The policy will be the defaults
for the charm and service.
Policy overrides on one service may affect the functionality of another
service. Therefore, it may be necessary to provide policy overrides for
multiple service charms to achieve a consistent set of policies across the
OpenStack system. The charms for the other services that may need overrides
should be checked to ensure that they support overrides before proceeding.
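As a sketch of the workflow described above, an `overrides.zip` resource can be assembled with Python's standard `zipfile` module before being attached with `juju attach-resource`. The file name `volume.yaml` and the rule shown are illustrative placeholders, not defaults shipped by the charm:

```python
import zipfile

# Hypothetical override rule; real rule names depend on the cinder
# policies you intend to override.
override_rules = 'volume:create: "rule:admin_api"\n'

# The resource is a plain ZIP file containing one or more YAML files.
with zipfile.ZipFile('overrides.zip', 'w') as zf:
    zf.writestr('volume.yaml', override_rules)

# Confirm the archive contains the YAML entry the charm expects.
with zipfile.ZipFile('overrides.zip') as zf:
    names = zf.namelist()
print(names)
```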

View File

@@ -17,6 +17,7 @@ import contextlib
 import os
 import six
 import shutil
+import sys
 import yaml
 import zipfile
@@ -296,15 +297,28 @@ def maybe_do_policyd_overrides(openstack_release,
         restarted.
     :type restart_handler: Union[None, Callable[]]
     """
+    hookenv.log("Running maybe_do_policyd_overrides",
+                level=POLICYD_LOG_LEVEL_DEFAULT)
+    if not is_policyd_override_valid_on_this_release(openstack_release):
+        hookenv.log("... policy overrides not valid on this release: {}"
+                    .format(openstack_release),
+                    level=POLICYD_LOG_LEVEL_DEFAULT)
+        return
     config = hookenv.config()
     try:
         if not config.get(POLICYD_CONFIG_NAME, False):
-            remove_policy_success_file()
             clean_policyd_dir_for(service, blacklist_paths)
+            if (os.path.isfile(_policy_success_file()) and
+                    restart_handler is not None and
+                    callable(restart_handler)):
+                restart_handler()
+            remove_policy_success_file()
             return
-    except Exception:
-        return
-    if not is_policyd_override_valid_on_this_release(openstack_release):
+    except Exception as e:
+        hookenv.log("... ERROR: Exception is: {}".format(str(e)),
+                    level=POLICYD_CONFIG_NAME)
+        import traceback
+        hookenv.log(traceback.format_exc(), level=POLICYD_LOG_LEVEL_DEFAULT)
         return
     # from now on it should succeed; if it doesn't then status line will show
     # broken.
@@ -345,16 +359,30 @@ def maybe_do_policyd_overrides_on_config_changed(openstack_release,
         restarted.
     :type restart_handler: Union[None, Callable[]]
     """
-    if not is_policyd_override_valid_on_this_release(openstack_release):
-        return
+    hookenv.log("Running maybe_do_policyd_overrides_on_config_changed",
+                level=POLICYD_LOG_LEVEL_DEFAULT)
     config = hookenv.config()
     try:
         if not config.get(POLICYD_CONFIG_NAME, False):
-            remove_policy_success_file()
             clean_policyd_dir_for(service, blacklist_paths)
+            if (os.path.isfile(_policy_success_file()) and
+                    restart_handler is not None and
+                    callable(restart_handler)):
+                restart_handler()
+            remove_policy_success_file()
             return
     except Exception:
+        hookenv.log("... ERROR: Exception is: {}".format(str(e)),
+                    level=POLICYD_CONFIG_NAME)
+        import traceback
+        hookenv.log(traceback.format_exc(), level=POLICYD_LOG_LEVEL_DEFAULT)
         return
     # if the policyd overrides have been performed just return
     if os.path.isfile(_policy_success_file()):
+        hookenv.log("... already setup, so skipping.",
+                    level=POLICYD_LOG_LEVEL_DEFAULT)
         return
     maybe_do_policyd_overrides(
         openstack_release, service, blacklist_paths, blacklist_keys,
@@ -430,8 +458,13 @@ def _yamlfiles(zipfile):
     """
     l = []
     for infolist_item in zipfile.infolist():
-        if infolist_item.is_dir():
-            continue
+        try:
+            if infolist_item.is_dir():
+                continue
+        except AttributeError:
+            # fallback to "old" way to determine dir entry for pre-py36
+            if infolist_item.filename.endswith('/'):
+                continue
         _, name_ext = os.path.split(infolist_item.filename)
         name, ext = os.path.splitext(name_ext)
         ext = ext.lower()
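The try/except added in this hunk works because `ZipInfo.is_dir()` only exists from Python 3.6; on older interpreters the code falls back to the trailing-slash naming convention for directory entries. A standalone sketch of the same check against an in-memory archive:

```python
import io
import zipfile

def zipinfo_is_dir(info):
    """Report whether a ZipInfo entry is a directory entry.

    ZipInfo.is_dir() was added in Python 3.6; older interpreters
    must fall back to the trailing-slash convention instead.
    """
    try:
        return info.is_dir()
    except AttributeError:
        return info.filename.endswith('/')

# Build an archive holding one directory entry and one file entry.
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as zf:
    zf.writestr('policy.d/', b'')
    zf.writestr('policy.d/volume.yaml', b'rule: "!"')

with zipfile.ZipFile(buf) as zf:
    flags = [zipinfo_is_dir(item) for item in zf.infolist()]
print(flags)
```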
@@ -511,7 +544,7 @@ def clean_policyd_dir_for(service, keep_paths=None):
     path = policyd_dir_for(service)
     if not os.path.exists(path):
         ch_host.mkdir(path, owner=service, group=service, perms=0o775)
-    _scanner = os.scandir if six.PY3 else _py2_scandir
+    _scanner = os.scandir if sys.version_info > (3, 4) else _py2_scandir
     for direntry in _scanner(path):
         # see if the path should be kept.
         if direntry.path in keep_paths:
@@ -641,6 +674,7 @@ def process_policy_resource_file(resource_file,
     :returns: True if the processing was successful, False if not.
     :rtype: boolean
     """
+    hookenv.log("Running process_policy_resource_file", level=hookenv.DEBUG)
     blacklist_paths = blacklist_paths or []
     completed = False
     try:

View File

@@ -352,3 +352,11 @@ options:
       order for this charm to function correctly, the privacy extension must be
       disabled and a non-temporary address must be configured/available on
       your network interface.
+  use-policyd-override:
+    type: boolean
+    default: False
+    description: |
+      If True then use the resource file named 'policyd-override' to install
+      override YAML files in the service's policy.d directory. The resource
+      file should be a ZIP file containing at least one yaml file with a .yaml
+      or .yml extension. If False then remove the overrides.
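Per the option description, a valid resource must contain at least one `.yaml` or `.yml` entry. A hedged sketch of that validity check (the helper name is invented for illustration; the charm's own validation logic differs):

```python
import io
import os
import zipfile

def contains_yaml_override(fileobj):
    """Return True if the ZIP archive holds at least one file whose
    extension is .yaml or .yml, as the option description requires.
    Illustrative helper only, not charm code."""
    with zipfile.ZipFile(fileobj) as zf:
        for info in zf.infolist():
            ext = os.path.splitext(info.filename)[1].lower()
            if ext in ('.yaml', '.yml'):
                return True
    return False

# Demo against an in-memory archive with a single override file.
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as zf:
    zf.writestr('volume.yaml', 'rule: "!"')
ok = contains_yaml_override(buf)
print(ok)
```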

View File

@ -139,6 +139,11 @@ from charmhelpers.contrib.openstack.context import ADDRESS_TYPES
from charmhelpers.contrib.charmsupport import nrpe
from charmhelpers.contrib.hardening.harden import harden
from charmhelpers.contrib.openstack.policyd import (
maybe_do_policyd_overrides,
maybe_do_policyd_overrides_on_config_changed,
)
hooks = Hooks()
CONFIGS = register_configs()
@@ -162,6 +167,11 @@ def install():
     if run_in_apache():
         disable_package_apache_site()
+    # call the policy overrides handler which will install any policy overrides
+    maybe_do_policyd_overrides(
+        os_release('cinder-common'),
+        'cinder',
+        restart_handler=lambda: service_restart('cinder-api'))
 
 
 @hooks.hook('config-changed')
@@ -230,6 +240,12 @@ def config_changed():
     for rid in relation_ids('identity-service'):
         identity_joined(rid=rid)
+    # call the policy overrides handler which will install any policy overrides
+    maybe_do_policyd_overrides_on_config_changed(
+        os_release('cinder-common'),
+        'cinder',
+        restart_handler=lambda: service_restart('cinder-api'))
 
 
 @hooks.hook('shared-db-relation-joined')
 def db_joined():
@@ -553,6 +569,11 @@ def upgrade_charm():
         juju_log("Package purge detected, restarting services")
         for s in services():
             service_restart(s)
+    # call the policy overrides handler which will install any policy overrides
+    maybe_do_policyd_overrides(
+        os_release('cinder-common'),
+        'cinder',
+        restart_handler=lambda: service_restart('cinder-api'))
 
 
 @hooks.hook('storage-backend-relation-changed')

View File

@@ -48,3 +48,8 @@ requires:
 peers:
   cluster:
     interface: cinder-ha
+resources:
+  policyd-override:
+    type: file
+    filename: policyd-override.zip
+    description: The policy.d overrides file

View File

@@ -104,6 +104,7 @@ applications:
     series: bionic
     num_units: 1
     options:
+      openstack-origin: cloud:bionic-rocky
       block-device: /dev/vdb
       glance-api-version: 2
       overwrite: "true"

View File

@@ -104,6 +104,7 @@ applications:
     series: bionic
     num_units: 1
     options:
+      openstack-origin: cloud:bionic-stein
       block-device: /dev/vdb
       glance-api-version: 2
       overwrite: "true"

View File

@@ -104,6 +104,7 @@ applications:
     series: xenial
     num_units: 1
     options:
+      openstack-origin: cloud:xenial-ocata
       block-device: /dev/vdb
       glance-api-version: 2
       overwrite: "true"

View File

@@ -104,6 +104,7 @@ applications:
     series: xenial
     num_units: 1
     options:
+      openstack-origin: cloud:xenial-pike
       block-device: /dev/vdb
       glance-api-version: 2
       overwrite: "true"

View File

@@ -104,6 +104,7 @@ applications:
     series: xenial
     num_units: 1
     options:
+      openstack-origin: cloud:xenial-queens
       block-device: /dev/vdb
       glance-api-version: 2
       overwrite: "true"

View File

@@ -23,3 +23,7 @@ configure:
 tests:
   - zaza.openstack.charm_tests.cinder.tests.CinderTests
   - zaza.openstack.charm_tests.cinder.tests.SecurityTests
+  - zaza.openstack.charm_tests.policyd.tests.CinderTests
+tests_options:
+  policyd:
+    service: cinder

View File

@@ -82,6 +82,9 @@ TO_PATCH = [
     'openstack_upgrade_available',
     'os_release',
     'run_in_apache',
+    # charmhelpers.contrib.openstack.policyd
+    'maybe_do_policyd_overrides',
+    'maybe_do_policyd_overrides_on_config_changed',
     # charmhelpers.contrib.openstack.openstack.ha.utils
     'generate_ha_relation_data',
     # charmhelpers.contrib.hahelpers.cluster_utils