DNS HA

Implement DNS high availability. Pass the correct information to
hacluster to register a DNS entry with MAAS 2.0 or greater rather
than using a virtual IP.

Charm-helpers sync to bring in DNS HA helpers

Change-Id: Iec7b94b2e97d770dfc4b6f4f0c52983e39078b98

parent 5646a4dd55
commit facd49cb96
@@ -5,3 +5,4 @@ tags
 .tox
 *.sw[nop]
 *.pyc
+.unit-state.db
README.md
@@ -36,23 +36,39 @@ base64 encoded configuration options::
 The service will be reconfigured to use the supplied information.
 
-High Availability
-=================
+HA/Clustering
+=============
 
-The OpenStack Dashboard charm supports HA in-conjunction with the hacluster
-charm:
+There are two mutually exclusive high availability options: using virtual
+IP(s) or DNS. In both cases, a relationship to hacluster is required which
+provides the corosync back end HA functionality.
 
-    juju deploy hacluster dashboard-hacluster
-    juju set openstack-dashboard vip="192.168.1.200"
-    juju add-relation openstack-dashboard dashboard-hacluster
-    juju add-unit -n 2 openstack-dashboard
+To use virtual IP(s) the clustered nodes must be on the same subnet such that
+the VIP is a valid IP on the subnet for one of the node's interfaces and each
+node has an interface in said subnet. The VIP becomes a highly-available API
+endpoint.
 
-After addition of the extra 2 units completes, the dashboard will be
-accessible on 192.168.1.200 with full load-balancing across all three units.
+At a minimum, the config option 'vip' must be set in order to use virtual IP
+HA. If multiple networks are being used, a VIP should be provided for each
+network, separated by spaces. Optionally, vip_iface or vip_cidr may be
+specified.
 
-Please refer to the charm configuration for full details on all HA config
-options.
+To use DNS high availability there are several prerequisites. However, DNS HA
+does not require the clustered nodes to be on the same subnet.
+Currently the DNS HA feature is only available for MAAS 2.0 or greater
+environments. MAAS 2.0 requires Juju 2.0 or greater. The clustered nodes must
+have static or "reserved" IP addresses registered in MAAS. The DNS hostname(s)
+must be pre-registered in MAAS before use with DNS HA.
+
+At a minimum, the config option 'dns-ha' must be set to true and at least one
+of 'os-public-hostname', 'os-internal-hostname' or 'os-admin-hostname' must
+be set in order to use DNS HA. One or more of the above hostnames may be set.
+
+The charm will throw an exception in the following circumstances:
+If neither 'vip' nor 'dns-ha' is set and the charm is related to hacluster
+If both 'vip' and 'dns-ha' are set as they are mutually exclusive
+If 'dns-ha' is set and none of the os-{admin,internal,public}-hostname(s) are
+set
 
 Use with a Load Balancing Proxy
 ===============================
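The exception rules above can be condensed into a small standalone validator. This is an illustrative sketch only: the function name and dict-based config are assumptions, and the charm's real check is `valid_hacluster_config` in charm-helpers.

```python
# Sketch of the HA validation rules described above: 'vip' and 'dns-ha'
# are mutually exclusive, one of them is required, and 'dns-ha' needs at
# least one os-*-hostname. Illustrative only.

class HAIncompleteConfig(Exception):
    pass


def validate_ha_config(config):
    vip = config.get('vip')
    dns = config.get('dns-ha')
    if vip and dns:
        raise HAIncompleteConfig("'vip' and 'dns-ha' are mutually exclusive")
    if not vip and not dns:
        raise HAIncompleteConfig("set either 'vip' or 'dns-ha' when related "
                                 "to hacluster")
    if dns:
        hostnames = ('os-admin-hostname', 'os-internal-hostname',
                     'os-public-hostname')
        if not any(config.get(key) for key in hostnames):
            raise HAIncompleteConfig("'dns-ha' requires at least one "
                                     "os-*-hostname to be set")
    return True
```

For example, `validate_ha_config({'dns-ha': True, 'os-public-hostname': 'horizon.maas'})` passes, while setting both 'vip' and 'dns-ha' raises.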
config.yaml
@@ -76,6 +76,12 @@ options:
     description: |
       Default role for Horizon operations that will be created in
       Keystone upon introduction of an identity-service relation.
+  dns-ha:
+    type: boolean
+    default: False
+    description: |
+      Use DNS HA with MAAS 2.0. Note if this is set do not set vip
+      settings below.
   vip:
     type: string
     default:
@@ -105,6 +111,42 @@ options:
     description: |
       Default multicast port number that will be used to communicate between
       HA Cluster nodes.
+  os-public-hostname:
+    type: string
+    default:
+    description: |
+      The hostname or address of the public endpoints created for
+      openstack-dashboard.
+
+      This value will be used for public endpoints. For example, an
+      os-public-hostname set to 'horizon.example.com' will create
+      the following public endpoint for the openstack-dashboard:
+
+      https://horizon.example.com/horizon
+  os-internal-hostname:
+    type: string
+    default:
+    description: |
+      The hostname or address of the internal endpoints created for
+      openstack-dashboard.
+
+      This value will be used for internal endpoints. For example, an
+      os-internal-hostname set to 'horizon.internal.example.com' will
+      create the following internal endpoint for the openstack-dashboard:
+
+      https://horizon.internal.example.com/horizon
+  os-admin-hostname:
+    type: string
+    default:
+    description: |
+      The hostname or address of the admin endpoints created for
+      openstack-dashboard.
+
+      This value will be used for admin endpoints. For example, an
+      os-admin-hostname set to 'horizon.admin.example.com' will create
+      the following admin endpoint for the openstack-dashboard:
+
+      https://horizon.admin.example.com/horizon
   # User provided SSL cert/key/ca
   ssl_cert:
     type: string
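The three hostname options above all feed the same endpoint pattern shown in their descriptions. A trivial sketch of that mapping (the helper name is hypothetical):

```python
# Sketch: a configured os-*-hostname becomes a dashboard endpoint of the
# form https://<hostname>/horizon, as in the option descriptions above.

def horizon_endpoint(hostname, scheme='https'):
    return '{}://{}/horizon'.format(scheme, hostname)
```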
@@ -280,14 +280,14 @@ def get_hacluster_config(exclude_keys=None):
     for initiating a relation to hacluster:
 
         ha-bindiface, ha-mcastport, vip, os-internal-hostname,
-        os-admin-hostname, os-public-hostname
+        os-admin-hostname, os-public-hostname, os-access-hostname
 
     param: exclude_keys: list of setting key(s) to be excluded.
     returns: dict: A dict containing settings keyed by setting name.
     raises: HAIncompleteConfig if settings are missing or incorrect.
     '''
     settings = ['ha-bindiface', 'ha-mcastport', 'vip', 'os-internal-hostname',
-                'os-admin-hostname', 'os-public-hostname']
+                'os-admin-hostname', 'os-public-hostname', 'os-access-hostname']
     conf = {}
     for setting in settings:
         if exclude_keys and setting in exclude_keys:
@@ -324,7 +324,7 @@ def valid_hacluster_config():
     # If dns-ha then one of os-*-hostname must be set
     if dns:
         dns_settings = ['os-internal-hostname', 'os-admin-hostname',
-                        'os-public-hostname']
+                        'os-public-hostname', 'os-access-hostname']
         # At this point it is unknown if one or all of the possible
         # network spaces are in HA. Validate at least one is set which is
         # the minimum required.
@@ -36,6 +36,10 @@ from charmhelpers.core.hookenv import (
     DEBUG,
 )
 
+from charmhelpers.core.host import (
+    lsb_release
+)
+
 from charmhelpers.contrib.openstack.ip import (
     resolve_address,
 )
@@ -63,8 +67,11 @@ def update_dns_ha_resource_params(resources, resource_params,
     DNS HA
     """
+    # Validate the charm environment for DNS HA
+    assert_charm_supports_dns_ha()
+
     settings = ['os-admin-hostname', 'os-internal-hostname',
-                'os-public-hostname']
+                'os-public-hostname', 'os-access-hostname']
 
     # Check which DNS settings are set and update dictionaries
     hostname_group = []
@@ -109,3 +116,15 @@ def update_dns_ha_resource_params(resources, resource_params,
         msg = 'DNS HA: Hostname group has no members.'
         status_set('blocked', msg)
         raise DNSHAException(msg)
+
+
+def assert_charm_supports_dns_ha():
+    """Validate prerequisites for DNS HA
+    The MAAS client is only available on Xenial or greater
+    """
+    if lsb_release().get('DISTRIB_RELEASE') < '16.04':
+        msg = ('DNS HA is only supported on 16.04 and greater '
+               'versions of Ubuntu.')
+        status_set('blocked', msg)
+        raise DNSHAException(msg)
+    return True
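Note that `lsb_release().get('DISTRIB_RELEASE') < '16.04'` is a lexical string comparison. It happens to hold for the two-digit releases this check targets, but a numeric comparison is safer in general; a minimal sketch (hypothetical helper, not part of the change):

```python
# Sketch: compare Ubuntu release strings numerically. As plain strings,
# '9.04' > '16.04' because '9' > '1', so tuples of ints are safer.

def release_at_least(release, minimum='16.04'):
    def as_tuple(version):
        return tuple(int(part) for part in version.split('.'))
    return as_tuple(release) >= as_tuple(minimum)
```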
@@ -725,15 +725,14 @@ def git_install_requested():
     requirements_dir = None
 
 
-def git_default_repos(projects_yaml):
+def git_default_repos(projects):
     """
     Returns default repos if a default openstack-origin-git value is specified.
     """
     service = service_name()
-    core_project = service
 
     for default, branch in GIT_DEFAULT_BRANCHES.iteritems():
-        if projects_yaml == default:
+        if projects == default:
 
             # add the requirements repo first
             repo = {
@@ -743,41 +742,34 @@ def git_default_repos(projects_yaml):
             }
             repos = [repo]
 
-            # neutron-* and nova-* charms require some additional repos
-            if service in ['neutron-api', 'neutron-gateway',
-                           'neutron-openvswitch']:
-                core_project = 'neutron'
-                for project in ['neutron-fwaas', 'neutron-lbaas',
-                                'neutron-vpnaas']:
+            # neutron and nova charms require some additional repos
+            if service == 'neutron':
+                for svc in ['neutron-fwaas', 'neutron-lbaas', 'neutron-vpnaas']:
                     repo = {
-                        'name': project,
-                        'repository': GIT_DEFAULT_REPOS[project],
+                        'name': svc,
+                        'repository': GIT_DEFAULT_REPOS[svc],
                         'branch': branch,
                     }
                     repos.append(repo)
 
-            elif service in ['nova-cloud-controller', 'nova-compute']:
-                core_project = 'nova'
+            elif service == 'nova':
                 repo = {
                     'name': 'neutron',
                     'repository': GIT_DEFAULT_REPOS['neutron'],
                     'branch': branch,
                 }
                 repos.append(repo)
-            elif service == 'openstack-dashboard':
-                core_project = 'horizon'
 
-            # finally add the current service's core project repo
+            # finally add the current service's repo
             repo = {
-                'name': core_project,
-                'repository': GIT_DEFAULT_REPOS[core_project],
+                'name': service,
+                'repository': GIT_DEFAULT_REPOS[service],
                 'branch': branch,
             }
             repos.append(repo)
 
             return yaml.dump(dict(repositories=repos))
 
-    return projects_yaml
+    return projects
 
 
 def _git_yaml_load(projects_yaml):
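The repo selection after this refactor can be illustrated with a self-contained sketch. The repository URLs and helper name here are assumptions for illustration; the real `git_default_repos` also renders the result to YAML.

```python
# Sketch of the simplified repo selection: requirements first, extra
# repos for the 'neutron' and 'nova' services, then the service's own
# repo. URLs are illustrative placeholders.

GIT_DEFAULT_REPOS = {
    'requirements': 'git://github.com/openstack/requirements',
    'neutron': 'git://github.com/openstack/neutron',
    'neutron-fwaas': 'git://github.com/openstack/neutron-fwaas',
    'neutron-lbaas': 'git://github.com/openstack/neutron-lbaas',
    'neutron-vpnaas': 'git://github.com/openstack/neutron-vpnaas',
    'nova': 'git://github.com/openstack/nova',
    'horizon': 'git://github.com/openstack/horizon',
}


def default_repos(service, branch='stable/mitaka'):
    names = ['requirements']
    if service == 'neutron':
        names += ['neutron-fwaas', 'neutron-lbaas', 'neutron-vpnaas']
    elif service == 'nova':
        names.append('neutron')
    names.append(service)
    return [{'name': name,
             'repository': GIT_DEFAULT_REPOS[name],
             'branch': branch} for name in names]
```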
@@ -29,6 +29,9 @@ from charmhelpers.contrib.openstack.utils import (
     os_release,
     save_script_rc,
 )
+from charmhelpers.contrib.openstack.ha.utils import (
+    update_dns_ha_resource_params,
+)
 from horizon_utils import (
     determine_packages,
     register_configs,
@@ -175,7 +178,7 @@ def cluster_relation():
 
 
 @hooks.hook('ha-relation-joined')
-def ha_relation_joined():
+def ha_relation_joined(relation_id=None):
     cluster_config = get_hacluster_config()
     resources = {
         'res_horizon_haproxy': 'lsb:haproxy'
@@ -185,34 +188,39 @@ def ha_relation_joined():
         'res_horizon_haproxy': 'op monitor interval="5s"'
     }
 
-    vip_group = []
-    for vip in cluster_config['vip'].split():
-        if is_ipv6(vip):
-            res_vip = 'ocf:heartbeat:IPv6addr'
-            vip_params = 'ipv6addr'
-        else:
-            res_vip = 'ocf:heartbeat:IPaddr2'
-            vip_params = 'ip'
+    if config('dns-ha'):
+        update_dns_ha_resource_params(relation_id=relation_id,
+                                      resources=resources,
+                                      resource_params=resource_params)
+    else:
+        vip_group = []
+        for vip in cluster_config['vip'].split():
+            if is_ipv6(vip):
+                res_vip = 'ocf:heartbeat:IPv6addr'
+                vip_params = 'ipv6addr'
+            else:
+                res_vip = 'ocf:heartbeat:IPaddr2'
+                vip_params = 'ip'
 
-        iface = (get_iface_for_address(vip) or
-                 config('vip_iface'))
-        netmask = (get_netmask_for_address(vip) or
-                   config('vip_cidr'))
+            iface = (get_iface_for_address(vip) or
+                     config('vip_iface'))
+            netmask = (get_netmask_for_address(vip) or
+                       config('vip_cidr'))
 
-        if iface is not None:
-            vip_key = 'res_horizon_{}_vip'.format(iface)
-            resources[vip_key] = res_vip
-            resource_params[vip_key] = (
-                'params {ip}="{vip}" cidr_netmask="{netmask}"'
-                ' nic="{iface}"'.format(ip=vip_params,
-                                        vip=vip,
-                                        iface=iface,
-                                        netmask=netmask)
-            )
-            vip_group.append(vip_key)
+            if iface is not None:
+                vip_key = 'res_horizon_{}_vip'.format(iface)
+                resources[vip_key] = res_vip
+                resource_params[vip_key] = (
+                    'params {ip}="{vip}" cidr_netmask="{netmask}"'
+                    ' nic="{iface}"'.format(ip=vip_params,
+                                            vip=vip,
+                                            iface=iface,
+                                            netmask=netmask)
+                )
+                vip_group.append(vip_key)
 
-    if len(vip_group) > 1:
-        relation_set(groups={'grp_horizon_vips': ' '.join(vip_group)})
+        if len(vip_group) > 1:
+            relation_set(groups={'grp_horizon_vips': ' '.join(vip_group)})
 
     init_services = {
         'res_horizon_haproxy': 'haproxy'
@@ -220,7 +228,8 @@ def ha_relation_joined():
     clones = {
         'cl_horizon_haproxy': 'res_horizon_haproxy'
     }
-    relation_set(init_services=init_services,
+    relation_set(relation_id=relation_id,
+                 init_services=init_services,
                  corosync_bindiface=cluster_config['ha-bindiface'],
                  corosync_mcastport=cluster_config['ha-mcastport'],
                  resources=resources,
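The corosync resource strings assembled in the VIP branch of `ha_relation_joined` follow a fixed pattern; this simplified helper (hypothetical, with iface and netmask passed in directly rather than looked up) shows the shape:

```python
# Sketch of the pacemaker VIP resource built by the hook: an IPaddr2 (or
# IPv6addr) resource keyed by interface, with ip/netmask/nic parameters.

def vip_resource(vip, iface, netmask, ipv6=False):
    vip_key = 'res_horizon_{}_vip'.format(iface)
    res_vip = 'ocf:heartbeat:IPv6addr' if ipv6 else 'ocf:heartbeat:IPaddr2'
    ip_param = 'ipv6addr' if ipv6 else 'ip'
    params = ('params {ip}="{vip}" cidr_netmask="{netmask}"'
              ' nic="{iface}"'.format(ip=ip_param, vip=vip,
                                      iface=iface, netmask=netmask))
    return vip_key, res_vip, params
```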
@@ -52,6 +52,7 @@ TO_PATCH = [
     'update_nrpe_config',
     'lsb_release',
     'status_set',
+    'update_dns_ha_resource_params',
 ]
@@ -178,6 +179,7 @@ class TestHorizonHooks(CharmTestCase):
         self.get_hacluster_config.return_value = conf
         self._call_hook('ha-relation-joined')
         ex_args = {
+            'relation_id': None,
             'corosync_mcastport': '37373',
             'init_services': {
                 'res_horizon_haproxy': 'haproxy'},
@@ -208,6 +210,7 @@ class TestHorizonHooks(CharmTestCase):
         self.get_hacluster_config.return_value = conf
         self._call_hook('ha-relation-joined')
         ex_args = {
+            'relation_id': None,
            'corosync_mcastport': '37373',
             'init_services': {
                 'res_horizon_haproxy': 'haproxy'},
@@ -230,6 +233,41 @@ class TestHorizonHooks(CharmTestCase):
         self.assertRaises(HAIncompleteConfig, self._call_hook,
                           'ha-relation-joined')
 
+    def test_ha_joined_dns_ha(self):
+        def _fake_update(resources, resource_params, relation_id=None):
+            resources.update({'res_horizon_public_hostname': 'ocf:maas:dns'})
+            resource_params.update({'res_horizon_public_hostname':
+                                    'params fqdn="keystone.maas" '
+                                    'ip_address="10.0.0.1"'})
+
+        self.test_config.set('dns-ha', True)
+        self.get_hacluster_config.return_value = {
+            'vip': None,
+            'ha-bindiface': 'em0',
+            'ha-mcastport': '8080',
+            'os-admin-hostname': None,
+            'os-internal-hostname': None,
+            'os-public-hostname': 'keystone.maas',
+        }
+        args = {
+            'relation_id': None,
+            'corosync_bindiface': 'em0',
+            'corosync_mcastport': '8080',
+            'init_services': {'res_horizon_haproxy': 'haproxy'},
+            'resources': {'res_horizon_public_hostname': 'ocf:maas:dns',
+                          'res_horizon_haproxy': 'lsb:haproxy'},
+            'resource_params': {
+                'res_horizon_public_hostname': 'params fqdn="keystone.maas" '
+                                               'ip_address="10.0.0.1"',
+                'res_horizon_haproxy': 'op monitor interval="5s"'},
+            'clones': {'cl_horizon_haproxy': 'res_horizon_haproxy'}
+        }
+        self.update_dns_ha_resource_params.side_effect = _fake_update
+
+        hooks.ha_relation_joined()
+        self.assertTrue(self.update_dns_ha_resource_params.called)
+        self.relation_set.assert_called_with(**args)
+
     @patch('horizon_hooks.keystone_joined')
     @patch.object(hooks, 'git_install_requested')
     def test_config_changed_no_upgrade(self, _git_requested, _joined):