Juju Charm - Ceilometer

Overview

This charm provides the Ceilometer service for OpenStack. It is intended to be used alongside the other OpenStack components, starting with the Folsom release.

Ceilometer is made up of 2 separate services: an API service, and a collector service. This charm allows them to be deployed in different combinations, depending on user preference and requirements.

This charm was developed to support deploying Folsom on both Ubuntu Quantal and Ubuntu Precise. Since Ceilometer is only available for Ubuntu 12.04 via the Ubuntu Cloud Archive, deploying this charm to a Precise machine will by default install Ceilometer and its dependencies from the Cloud Archive.

Usage

In order to deploy the Ceilometer service, the MongoDB service is required:

juju deploy mongodb
juju deploy ceilometer
juju add-relation ceilometer mongodb

Then the Keystone and RabbitMQ relations need to be established:

juju add-relation ceilometer rabbitmq
juju add-relation ceilometer keystone:identity-service
juju add-relation ceilometer keystone:identity-notifications

In order to collect metrics from compute nodes, a Ceilometer compute agent needs to be installed on each nova-compute node and related to the Ceilometer service:

juju deploy ceilometer-agent
juju add-relation ceilometer-agent nova-compute
juju add-relation ceilometer:ceilometer-service ceilometer-agent:ceilometer-service

Ceilometer provides an API service that can be used to retrieve OpenStack metrics.
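As a minimal sketch of querying that API, the ceilometer CLI (from python-ceilometerclient) can be used once the usual OS_* credentials are exported; the meter name below is just an example:

```shell
# List the meters currently known to Ceilometer
ceilometer meter-list

# Show recent samples for one meter, e.g. cpu_util
ceilometer sample-list --meter cpu_util
```

These commands require network access to the deployed API endpoint and valid Keystone credentials in the environment.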

HA/Clustering

There are two mutually exclusive high availability options: using virtual IP(s) or DNS. In both cases, a relationship to hacluster is required which provides the corosync back end HA functionality.

To use virtual IP(s), the clustered nodes must all have an interface on the same subnet, and the VIP must be a valid IP on that subnet. The VIP becomes a highly-available API endpoint.

At a minimum, the config option 'vip' must be set in order to use virtual IP HA. If multiple networks are being used, a VIP should be provided for each network, separated by spaces. Optionally, vip_iface or vip_cidr may be specified.
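As an illustrative sketch of a VIP-based setup (the application alias and the VIP value are placeholders; use an address valid on the nodes' subnet):

```shell
# Deploy the hacluster subordinate, which provides the corosync back end
juju deploy hacluster ceilometer-hacluster

# Set the virtual IP (placeholder address); on Juju 1.x use 'juju set' instead
juju config ceilometer vip=10.0.0.100

# Relate ceilometer to hacluster to enable clustering
juju add-relation ceilometer ceilometer-hacluster
```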

To use DNS high availability there are several prerequisites. However, DNS HA does not require the clustered nodes to be on the same subnet. Currently the DNS HA feature is only available for MAAS 2.0 or greater environments. MAAS 2.0 requires Juju 2.0 or greater. The clustered nodes must have static or "reserved" IP addresses registered in MAAS. The DNS hostname(s) must be pre-registered in MAAS before use with DNS HA.

At a minimum, the config option 'dns-ha' must be set to true and at least one of 'os-admin-hostname', 'os-internal-hostname' or 'os-public-hostname' must be set in order to use DNS HA. One or more of the above hostnames may be set.
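A sketch of a DNS HA setup might look like the following; the hostname is a placeholder and must already be registered in MAAS, and the hacluster relation is still required:

```shell
# Enable DNS HA with a pre-registered public hostname (placeholder)
juju config ceilometer dns-ha=true os-public-hostname=ceilometer.example.com

# hacluster is still needed to provide the clustering back end
juju deploy hacluster ceilometer-hacluster
juju add-relation ceilometer ceilometer-hacluster
```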

The charm will throw an exception in the following circumstances: if neither 'vip' nor 'dns-ha' is set and the charm is related to hacluster; if both 'vip' and 'dns-ha' are set, as they are mutually exclusive; or if 'dns-ha' is set and none of the os-{admin,internal,public}-hostname options are set.

Network Space support

This charm supports the use of Juju Network Spaces, allowing the charm to be bound to network space configurations managed directly by Juju. This is only supported with Juju 2.0 and above.

API endpoints can be bound to distinct network spaces supporting the network separation of public, internal and admin endpoints.

To use this feature, use the --bind option when deploying the charm:

juju deploy ceilometer --bind "public=public-space internal=internal-space admin=admin-space"

Alternatively, these bindings can be provided as part of a Juju native bundle configuration:

ceilometer:
  charm: cs:xenial/ceilometer
  bindings:
    public: public-space
    admin: admin-space
    internal: internal-space

NOTE: Spaces must be configured in the underlying provider prior to attempting to use them.

NOTE: Existing deployments using os-*-network configuration options will continue to function; these options are preferred over any network space binding provided if set.
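For example, a pre-spaces deployment might pin the endpoints with the os-*-network options (the CIDRs below are placeholders); if set, these values take precedence over any space bindings:

```shell
juju config ceilometer os-public-network=10.10.0.0/24 os-internal-network=10.20.0.0/24 os-admin-network=10.30.0.0/24
```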