# Juju Charm - OpenStack dashboard

## Overview

The OpenStack Dashboard provides a Django-based web interface for use by both administrators and users of an OpenStack cloud.

It allows you to manage Nova, Glance, Cinder and Neutron resources within the cloud.

## Usage

The OpenStack Dashboard is deployed and related to keystone:

    juju deploy openstack-dashboard
    juju add-relation openstack-dashboard keystone

The dashboard will use keystone for user authentication and authorization, and to interact with the catalog of services within the cloud.

The dashboard is accessible on:

    http(s)://service_unit_address/horizon
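
The unit address can be read from the output of `juju status`; a minimal example:

    juju status openstack-dashboard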

At a minimum, the cloud must provide Glance and Nova services.

## SSL configuration

To fully secure your dashboard services, you can provide an SSL key and certificate for installation and configuration. These are provided as base64-encoded configuration options:

    juju set openstack-dashboard ssl_key="$(base64 my.key)" \
        ssl_cert="$(base64 my.cert)"

The service will be reconfigured to use the supplied information.
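
For testing, a suitable self-signed key and certificate can be generated with OpenSSL. This is a minimal sketch (the subject CN below is a placeholder; production deployments should use a certificate issued by a trusted CA):

    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -keyout my.key -out my.cert \
        -subj "/CN=dashboard.example.com"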

## HA/Clustering

There are two mutually exclusive high availability options: using virtual IP(s) or DNS. In both cases, a relationship to the hacluster charm is required, which provides the corosync back-end HA functionality.
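
As a sketch, the relationship is established as follows (hacluster is a subordinate charm, so it runs on the existing dashboard units):

    juju deploy hacluster
    juju add-relation openstack-dashboard hacluster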

To use virtual IP(s) the clustered nodes must be on the same subnet such that the VIP is a valid IP on the subnet for one of the node's interfaces and each node has an interface in said subnet. The VIP becomes a highly-available API endpoint.

At a minimum, the config option 'vip' must be set in order to use virtual IP HA. If multiple networks are being used, a VIP should be provided for each network, separated by spaces. Optionally, vip_iface or vip_cidr may be specified.
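
A minimal sketch, using a hypothetical address on the clustered nodes' subnet:

    juju set openstack-dashboard vip="10.0.3.100"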

To use DNS high availability there are several prerequisites, although DNS HA does not require the clustered nodes to be on the same subnet:

- Currently the DNS HA feature is only available for MAAS 2.0 or greater environments, and MAAS 2.0 requires Juju 2.0 or greater.
- The clustered nodes must have static or "reserved" IP addresses registered in MAAS.
- The DNS hostname(s) must be pre-registered in MAAS before use with DNS HA.

At a minimum, the config option 'dns-ha' must be set to true and at least one of 'os-public-hostname', 'os-internal-hostname' or 'os-admin-hostname' must be set in order to use DNS HA; more than one of these hostnames may be set.
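
A minimal sketch, with a hypothetical hostname pre-registered in MAAS:

    juju set openstack-dashboard dns-ha=true \
        os-public-hostname="dashboard.example.com"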

The charm will throw an exception in the following circumstances:

- If neither 'vip' nor 'dns-ha' is set and the charm is related to hacluster
- If both 'vip' and 'dns-ha' are set, as they are mutually exclusive
- If 'dns-ha' is set and none of the os-{admin,internal,public}-hostname(s) are set

Whichever method has been used to cluster the charm, the 'secret' option should be set to ensure that the Django secret is consistent across all units.
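
For example, with a hypothetical value:

    juju set openstack-dashboard secret="encryptcookieswithme"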

## Keystone V3

If the charm is being deployed into a keystone v3-enabled environment, the charm needs to be related to a database in order to store session information. This is only supported for OpenStack Mitaka or later.
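
A minimal sketch, assuming a MySQL database provided by the mysql charm (the 'shared-db' endpoint name is an assumption about this charm's metadata):

    juju deploy mysql
    juju add-relation openstack-dashboard:shared-db mysql:shared-db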

## Use with a Load Balancing Proxy

Instead of deploying with the hacluster charm for load balancing, it is also possible to deploy the dashboard with a load-balancing proxy such as HAProxy:

    juju deploy haproxy
    juju add-relation haproxy openstack-dashboard
    juju add-unit -n 2 openstack-dashboard

This option potentially provides better scale-out than using the charm in conjunction with the hacluster charm.