Juju Charm - Nova Cloud Controller

nova-cloud-controller

Cloud controller node for OpenStack Nova. Contains nova-scheduler, nova-api, nova-network and nova-objectstore.

If console access is required then console-proxy-ip should be set to a client-accessible IP that resolves to the nova-cloud-controller. If running in HA mode then the public VIP is used if console-proxy-ip is set to local. Note: the console access protocol is baked into a guest when it is created; if you change it, console access for existing guests will stop working.
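
For example, the console proxy address can be set post-deployment with juju config (the IP below is a placeholder for your environment):

juju config nova-cloud-controller console-proxy-ip=10.5.0.10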

HA/Clustering

There are two mutually exclusive high availability options: using virtual IP(s) or DNS. In both cases, a relation to the hacluster charm is required, which provides the Corosync back-end HA functionality.

To use virtual IP(s) the clustered nodes must be on the same subnet such that the VIP is a valid IP on the subnet for one of the node's interfaces and each node has an interface in said subnet. The VIP becomes a highly-available API endpoint.

At a minimum, the config option 'vip' must be set in order to use virtual IP HA. If multiple networks are being used, a VIP should be provided for each network, separated by spaces. Optionally, vip_iface or vip_cidr may be specified.
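
A minimal sketch of a virtual IP setup, assuming the VIP is valid on the nodes' subnet (the address and the hacluster application name are placeholders):

juju config nova-cloud-controller vip=10.5.0.100
juju deploy hacluster ncc-hacluster
juju add-relation nova-cloud-controller ncc-hacluster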

To use DNS high availability there are several prerequisites. However, DNS HA does not require the clustered nodes to be on the same subnet. Currently the DNS HA feature is only available for MAAS 2.0 or greater environments. MAAS 2.0 requires Juju 2.0 or greater. The clustered nodes must have static or "reserved" IP addresses registered in MAAS. The DNS hostname(s) must be pre-registered in MAAS before use with DNS HA.

At a minimum, the config option 'dns-ha' must be set to true and at least one of 'os-public-hostname', 'os-admin-hostname' or 'os-internal-hostname' must be set in order to use DNS HA. One or more of the above hostnames may be set.
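
A minimal sketch of a DNS HA setup; the hostname below is a placeholder and must have been pre-registered in MAAS:

juju config nova-cloud-controller dns-ha=true
juju config nova-cloud-controller os-public-hostname=ncc.example.com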

The charm will throw an exception in the following circumstances:

- If neither 'vip' nor 'dns-ha' is set and the charm is related to hacluster
- If both 'vip' and 'dns-ha' are set, as they are mutually exclusive
- If 'dns-ha' is set and none of the os-{admin,internal,public}-hostname(s) are set

Network Space support

This charm supports the use of Juju Network Spaces, allowing the charm to be bound to network space configurations managed directly by Juju. This is only supported with Juju 2.0 and above.

API endpoints can be bound to distinct network spaces supporting the network separation of public, internal and admin endpoints.

Access to the underlying MySQL instance can also be bound to a specific space using the shared-db relation.

To use this feature, use the --bind option when deploying the charm:

juju deploy nova-cloud-controller --bind "public=public-space internal=internal-space admin=admin-space shared-db=internal-space"

Alternatively, these bindings can be provided as part of a Juju native bundle configuration:

nova-cloud-controller:
  charm: cs:xenial/nova-cloud-controller
  num_units: 1
  bindings:
    public: public-space
    admin: admin-space
    internal: internal-space
    shared-db: internal-space

NOTE: Spaces must be configured in the underlying provider prior to attempting to use them.

NOTE: Existing deployments using os-*-network configuration options will continue to function; these options are preferred over any network space binding provided if set.

Default Quota Configuration

This charm supports default quota settings for projects. This feature is only available from OpenStack Icehouse and later releases.

The default quota settings do not overwrite post-deployment CLI quotas set by operators. Existing projects whose quotas were not modified will adopt the new defaults when a config-changed hook occurs. Newly created projects will also adopt the defaults set in the charm's config.

By default, the charm's quota configs are not set and OpenStack projects use the following default values:

- quota-instances - 10
- quota-cores - 20
- quota-ram - 51200
- quota-metadata_items - 128
- quota-injected_files - 5
- quota-injected_file_content_bytes - 10240
- quota-injected_file_path_length - 255
- quota-key_pairs - 100
- quota-server_groups - 10 (only available after Icehouse)
- quota-server_group_members - 10 (only available after Icehouse)
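
For example, to raise the default instance and core quotas for new projects and for existing projects whose quotas were never modified (the values are illustrative):

juju config nova-cloud-controller quota-instances=20 quota-cores=40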

SSH knownhosts caching

This section covers the caching of SSH host lookups (knownhosts) on each nova-compute unit. Caching speeds up deployment of nova-compute units when first deploying a cloud, and when adding a new unit.

The Boolean configuration key cache-known-hosts ensures that any given host lookup is performed just once. The default is true, which means that caching is performed.

Note: A cloud can be deployed with the cache-known-hosts key set to false, and be set to true post-deployment. At that point the hosts will have been cached. The key only controls whether the cache is used or not.
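
For example, to turn caching on after a deployment that started with it disabled:

juju config nova-cloud-controller cache-known-hosts=true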

If the above key is set, a Juju action, clear-unit-knownhost-cache, is provided to clear the cache. It can be applied to a single nova-compute unit, to all units of a nova-compute application, or to everything known to the nova-cloud-controller application. Clearing the cache is needed if DNS resolution changes in an existing cloud or during a cloud deployment; not doing so could result in an inconsistent set of knownhosts files.

This action will cause DNS resolution to be performed (for the targeted unit, service, or application), thus potentially triggering a relation-set on the nova-cloud-controller unit(s) and a subsequent changed hook on the related nova-compute units.

The action is used as follows, based on unit, service, or application, respectively:

juju run-action nova-cloud-controller/0 clear-unit-knownhost-cache target=nova-compute/2
juju run-action nova-cloud-controller/0 clear-unit-knownhost-cache target=nova-compute
juju run-action nova-cloud-controller/0 clear-unit-knownhost-cache
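
On recent Juju 2.x releases the action can also be run synchronously so that its result is printed on completion; this assumes the --wait flag is available in your Juju version:

juju run-action --wait nova-cloud-controller/0 clear-unit-knownhost-cache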

In a high-availability setup, the action must be run on all nova-cloud-controller units.

Policy Overrides

This feature allows for policy overrides using the policy.d directory. This is an advanced feature: the policies that the OpenStack service supports should be clearly and unambiguously understood before trying to override, or add to, the default policies that the service uses. The charm also has some policy defaults, which should likewise be understood before being overridden.

Caution: It is possible to break the system (for tenants and other services) if policies are incorrectly applied to the service.

Policy overrides are YAML files that contain rules that will add to, or override, existing policy rules in the service. The policy.d directory is a place to put the YAML override files. This charm owns the /etc/nova/policy.d directory, and as such, any manual changes to it will be overwritten on charm upgrades.
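
As an illustration only, an override file maps policy rule names to policy strings; the rule below is a hypothetical example and must correspond to a rule the service actually defines:

# policy-override.yaml (hypothetical example rule)
"os_compute_api:os-flavor-manage:create": "rule:admin_api"

The override file(s) are then packed into a ZIP archive, for example:

zip overrides.zip policy-override.yaml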

Overrides are provided to the charm using a Juju resource called policyd-override. The resource is a ZIP file. This file, say overrides.zip, is attached to the charm by:

juju attach-resource nova-cloud-controller policyd-override=overrides.zip

The policy override is enabled in the charm using:

juju config nova-cloud-controller use-policyd-override=true

When use-policyd-override is True, the status line of the charm will be prefixed with PO:, indicating that policies have been overridden. If the installation of the policy override YAML files fails for any reason then the status line will be prefixed with PO (broken):. The log file for the charm will indicate the reason. No policy override files are installed if PO (broken): is shown. The status line indicates that the overrides are broken, not that the policy for the service has failed; in that case the policy in effect will be the defaults for the charm and service.

Policy overrides on one service may affect the functionality of another service. Therefore, it may be necessary to provide policy overrides for multiple service charms to achieve a consistent set of policies across the OpenStack system. The charms for the other services that may need overrides should be checked to ensure that they support overrides before proceeding.