Swift Global Cluster
This patchset adds support for the Swift Global Cluster feature as described at:
https://docs.openstack.org/swift/latest/overview_global_cluster.html
It allows specifying affinity settings as part of the deployment. Moreover, a
master/slave relation is introduced for the purpose of distributing rings
across proxy nodes participating in the Swift Global Cluster.

Change-Id: I406445493e2226aa5ae40a09c9053ac8633a46e9
Closes-Bug: 1815879
Depends-On: I11b6c7802e5bfbd61b06e4d11c65804a165781b6
parent dba095bd8b
commit 44df5db97d
README.md | 61

@@ -104,6 +104,67 @@ single service unit. New units will be distributed across the existing zones.
    # swift-storage/4 is assigned to zone 2.
    etc.

**Global Cluster.**

This charm supports the Swift Global Cluster feature as described at
https://docs.openstack.org/swift/latest/overview_global_cluster.html.
In order to enable it the 'enable-multi-region' option has to be set to 'True'.
Additional options ('read-affinity', 'write-affinity' and
'write-affinity-node-count') can be used to influence how objects are
read and written.

In addition, storage nodes have to be configured with the 'region' option and
related to all proxies participating in the global cluster. More than one proxy
can be deployed, but they have to be related using the 'rings-distributor' /
'rings-consumer' endpoints and the 'swift-hash' option has to be identical
across them. Only one proxy can act as a rings-distributor at a time.

    $ cat >swift.cfg <<END
    sp-r1:
      region: RegionOne
      zone-assignment: manual
      replicas: 2
      enable-multi-region: true
      swift-hash: "global-cluster"
      read-affinity: "r1=100, r2=200"
      write-affinity: "r1, r2"
      write-affinity-node-count: "1"
    sp-r2:
      region: RegionTwo
      zone-assignment: manual
      replicas: 2
      enable-multi-region: true
      swift-hash: "global-cluster"
      read-affinity: "r2=100, r1=200"
      write-affinity: "r2, r1"
      write-affinity-node-count: "1"
    ss-r1:
      storage-region: 1
      zone: 1
      block-device: /etc/swift/storage.img|2G
    ss-r2:
      storage-region: 2
      zone: 1
      block-device: /etc/swift/storage.img|2G
    END
    $ juju deploy --config=swift.cfg swift-proxy sp-r1
    $ juju deploy --config=swift.cfg swift-proxy sp-r2
    $ juju deploy --config=swift.cfg swift-storage ss-r1
    $ juju deploy --config=swift.cfg swift-storage ss-r2
    $ juju add-relation sp-r1:swift-storage ss-r1:swift-storage
    $ juju add-relation sp-r1:swift-storage ss-r2:swift-storage
    $ juju add-relation sp-r2:swift-storage ss-r1:swift-storage
    $ juju add-relation sp-r2:swift-storage ss-r2:swift-storage
    $ juju add-relation sp-r1:rings-distributor sp-r2:rings-consumer

If 'sp-r1' fails and cannot be recovered, the relation should be removed:

    $ juju remove-relation sp-r2:rings-consumer sp-r1:rings-distributor

An additional proxy can be deployed later and related to 'sp-r2'
using the 'rings-distributor' / 'rings-consumer' endpoints.

**Installation repository.**

The 'openstack-origin' setting allows Swift to be installed from installation
config.yaml | 53

@@ -385,3 +385,56 @@ options:
      .
      Ex. Setting this to 1000 would allow up to 1000 5GB object segments
      to be uploaded for a maximum large object size of 5TB.
  enable-multi-region:
    type: boolean
    default: False
    description: |
      Enables the Swift Global Cluster feature as described at
      https://docs.openstack.org/swift/latest/overview_global_cluster.html
      Should be used in conjunction with the 'read-affinity',
      'write-affinity' and 'write-affinity-node-count' options.
  read-affinity:
    type: string
    default:
    description: |
      Which backend servers to prefer on reads. Format is r<N> for region N
      or r<N>z<M> for region N, zone M. The value after the equals sign is
      the priority; lower numbers are higher priority.
      .
      For example, to first read from region 1 zone 1, then region 1 zone 2,
      then anything in region 2, then everything else:
      read_affinity = r1z1=100, r1z2=200, r2=300
      .
      Default is empty, meaning no preference.
      .
      NOTE: use only when 'enable-multi-region=True'.
  write-affinity:
    type: string
    default:
    description: |
      This setting lets you trade data distribution for throughput. It makes
      the proxy server prefer local back-end servers for object PUT requests
      over non-local ones. Note that only object PUT requests are affected by
      the write_affinity setting; POST, GET, HEAD, DELETE, OPTIONS, and
      account/container PUT requests are not affected. The format is r<N> for
      region N. If this is set, then when handling an object PUT request,
      some number (see the write_affinity_node_count setting) of local
      backend servers will be tried before any non-local ones.
      .
      For example, to try writing to regions 1 and 2 before any other nodes:
      write_affinity = r1, r2
      .
      NOTE: use only when 'enable-multi-region=True'.
  write-affinity-node-count:
    type: string
    default:
    description: |
      This setting is only useful in conjunction with write_affinity;
      it governs how many local object servers will be tried before falling
      back to non-local ones.
      .
      For example, assuming 3 replicas and 'write-affinity: r1',
      'write-affinity-node-count: 2 * replicas' will make object PUTs try
      storing the object's replicas on up to 6 disks.
      .
      NOTE: use only when 'enable-multi-region=True'.
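The '2 * replicas' form described in the option text above works out as simple arithmetic; a toy sketch of that evaluation (the helper below is illustrative only, it is neither charm nor Swift code):

```python
def write_affinity_node_count(expr, replicas):
    """Evaluate '<N>' or '<N> * replicas', the two shapes accepted by
    the write-affinity-node-count option."""
    parts = expr.split('*')
    count = int(parts[0])  # int() tolerates surrounding whitespace
    if len(parts) == 2 and parts[1].strip() == 'replicas':
        count *= replicas
    return count

# With 3 replicas, '2 * replicas' means up to 6 local disks are tried.
assert write_affinity_node_count('2 * replicas', replicas=3) == 6
assert write_affinity_node_count('1', replicas=3) == 1
```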
@@ -0,0 +1 @@
swift_hooks.py

@@ -0,0 +1 @@
swift_hooks.py

@@ -0,0 +1 @@
swift_hooks.py

@@ -0,0 +1 @@
swift_hooks.py

@@ -0,0 +1 @@
swift_hooks.py

@@ -0,0 +1 @@
swift_hooks.py
@@ -66,8 +66,17 @@ from lib.swift_utils import (
    try_initialize_swauth,
    clear_storage_rings_available,
    determine_replicas,
    fetch_swift_rings_and_builders,
    is_ring_consumer,
    is_ring_distributor,
    set_role_ring_consumer,
    set_role_ring_distributor,
    unset_role_ring_consumer,
    unset_role_ring_distributor,
)

from lib.swift_context import get_swift_hash

import charmhelpers.contrib.openstack.utils as openstack

from charmhelpers.contrib.openstack.ha.utils import (

@@ -94,6 +103,8 @@ from charmhelpers.core.hookenv import (
    Hooks, UnregisteredHookError,
    open_port,
    status_set,
    is_leader,
    leader_get,
)
from charmhelpers.core.host import (
    service_reload,
@@ -183,8 +194,10 @@ def config_changed():
        do_openstack_upgrade(CONFIGS)
        status_set('maintenance', 'Running openstack upgrade')

-    status_set('maintenance', 'Updating and (maybe) balancing rings')
-    update_rings(min_part_hours=config('min-hours'))
+    if not leader_get('swift-proxy-rings-consumer'):
+        status_set('maintenance', 'Updating and (maybe) balancing rings')
+        update_rings(min_part_hours=config('min-hours'),
+                     replicas=config('replicas'))

    if not config('disable-ring-balance') and is_elected_leader(SWIFT_HA_RES):
        # Try ring balance. If rings are balanced, no sync will occur.
@@ -266,7 +279,9 @@ def storage_joined(rid=None):


def get_host_ip(rid=None, unit=None):
-    addr = relation_get('private-address', rid=rid, unit=unit)
+    addr = relation_get(rid=rid, unit=unit).get('ip_cls')
+    if not addr:
+        addr = relation_get('private-address', rid=rid, unit=unit)
    if config('prefer-ipv6'):
        host_ip = format_ipv6_addr(addr)
        if host_ip:
@@ -332,6 +347,15 @@ def storage_changed():
        'object_port': relation_get('object_port'),
        'container_port': relation_get('container_port'),
    }
    node_repl_settings = {
        'ip_rep': relation_get('ip_rep'),
        'region': relation_get('region'),
        'account_port_rep': relation_get('account_port_rep'),
        'object_port_rep': relation_get('object_port_rep'),
        'container_port_rep': relation_get('container_port_rep')}

    if any(node_repl_settings.values()):
        node_settings.update(node_repl_settings)

    if None in node_settings.values():
        missing = [k for k, v in node_settings.items() if v is None]
@@ -339,8 +363,11 @@ def storage_changed():
            "relation (missing={})".format(', '.join(missing)), level=INFO)
        return None

-    for k in ['zone', 'account_port', 'object_port', 'container_port']:
-        node_settings[k] = int(node_settings[k])
+    for k in ['region', 'zone', 'account_port', 'account_port_rep',
+              'object_port', 'object_port_rep', 'container_port',
+              'container_port_rep']:
+        if node_settings.get(k) is not None:
+            node_settings[k] = int(node_settings[k])

    CONFIGS.write_all()
@@ -761,6 +788,70 @@ def post_series_upgrade():
        raise Exception("{} didn't start cleanly.".format(service))


@hooks.hook('rings-distributor-relation-joined')
def rings_distributor_joined():
    if is_ring_consumer():
        msg = ("Swift Proxy cannot act as both rings distributor and rings "
               "consumer")
        status_set('blocked', msg)
        raise SwiftProxyCharmException(msg)
    if is_leader():
        set_role_ring_distributor()


@hooks.hook('rings-distributor-relation-changed')
def rings_distributor_changed():
    broadcast_rings_available()


@hooks.hook('rings-distributor-relation-departed')
def rings_distributor_departed():
    if is_leader():
        unset_role_ring_distributor()


@hooks.hook('rings-consumer-relation-joined')
def rings_consumer_joined():
    if is_ring_distributor():
        msg = ("Swift Proxy cannot act as both rings distributor and rings "
               "consumer")
        status_set('blocked', msg)
    elif is_ring_consumer():
        msg = "Swift Proxy already acting as rings consumer"
        status_set('blocked', msg)
    elif is_leader():
        set_role_ring_consumer()


@hooks.hook('rings-consumer-relation-changed')
@sync_builders_and_rings_if_changed
@restart_on_change(restart_map())
def rings_consumer_changed():
    """Based on the 'swift_storage_relation_changed' function from the
    swift-storage charm."""
    rings_url = relation_get('rings_url')
    swift_hash = relation_get('swift_hash')
    if not all([rings_url, swift_hash]):
        log('rings_consumer_relation_changed: Peer not ready?')
        return
    if swift_hash != get_swift_hash():
        msg = "Swift hash has to be unique in multi-region setup"
        status_set('blocked', msg)
        raise SwiftProxyCharmException(msg)
    try:
        fetch_swift_rings_and_builders(rings_url)
    except CalledProcessError:
        log("Failed to sync rings from {} - no longer available from that "
            "unit?".format(rings_url), level=WARNING)
    broadcast_rings_available()


@hooks.hook('rings-consumer-relation-departed')
def rings_consumer_departed():
    if is_leader():
        unset_role_ring_consumer()


def main():
    try:
        hooks.execute(sys.argv)
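The hooks above are registered by name and dispatched from argv; a minimal standalone sketch of that registry pattern (the `Hooks` class here is a simplified stand-in for `charmhelpers.core.hookenv.Hooks`, not the real implementation):

```python
class Hooks:
    """Simplified stand-in for charmhelpers.core.hookenv.Hooks."""

    def __init__(self):
        self._registry = {}

    def hook(self, *hook_names):
        # Decorator: map one or more hook names onto a handler function.
        def wrapper(func):
            for name in hook_names:
                self._registry[name] = func
            return func
        return wrapper

    def execute(self, args):
        # Juju invokes each hook with the hook name as the program name.
        hook_name = args[0]
        self._registry[hook_name]()


hooks = Hooks()
events = []


@hooks.hook('rings-distributor-relation-joined')
def rings_distributor_joined():
    events.append('distributor-joined')


hooks.execute(['rings-distributor-relation-joined'])
assert events == ['distributor-joined']
```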
@@ -1,4 +1,5 @@
import os
import re
import uuid

from charmhelpers.core.hookenv import (

@@ -11,6 +12,7 @@ from charmhelpers.core.hookenv import (
    service_name,
    leader_get,
    DEBUG,
    status_set,
)
from charmhelpers.contrib.openstack.context import (
    OSContextGenerator,
@@ -36,6 +38,10 @@ SWIFT_HASH_FILE = '/var/lib/juju/swift-hash-path.conf'
WWW_DIR = '/var/www/swift-rings'


class SwiftProxyCharmException(Exception):
    pass


class HAProxyContext(OSContextGenerator):
    interfaces = ['cluster']
@@ -132,7 +138,11 @@ class SwiftIdentityContext(OSContextGenerator):
            'statsd_port': config('statsd-port'),
            'statsd_sample_rate': config('statsd-sample-rate'),
            'static_large_object_segments': config(
-                'static-large-object-segments')
+                'static-large-object-segments'),
+            'enable_multi_region': config('enable-multi-region'),
+            'read_affinity': get_read_affinity(),
+            'write_affinity': get_write_affinity(),
+            'write_affinity_node_count': get_write_affinity_node_count()
        }

        admin_key = leader_get('swauth-admin-key')
@@ -253,6 +263,72 @@ def get_swift_hash():
    return swift_hash


def get_read_affinity():
    """Gets the read-affinity config option (lp1815879)

    Checks whether the read-affinity config option is set correctly and if
    so returns its value.

    :returns: read-affinity config option
    :rtype: str
    :raises: SwiftProxyCharmException
    """
    if config('read-affinity'):
        read_affinity = config('read-affinity')
        pattern = re.compile(r"^r\d+z?(\d+)?=\d+(,\s?r\d+z?(\d+)?=\d+)*$")
        if not pattern.match(read_affinity):
            msg = "'read-affinity' config option is malformed"
            status_set('blocked', msg)
            raise SwiftProxyCharmException(msg)
        return read_affinity
    else:
        return None


def get_write_affinity():
    """Gets the write-affinity config option (lp1815879)

    Checks whether the write-affinity config option is set correctly and if
    so returns its value.

    :returns: write-affinity config option
    :rtype: str
    :raises: SwiftProxyCharmException
    """
    if config('write-affinity'):
        write_affinity = config('write-affinity')
        pattern = re.compile(r"^r\d+(,\s?r\d+)*$")
        if not pattern.match(write_affinity):
            msg = "'write-affinity' config option is malformed"
            status_set('blocked', msg)
            raise SwiftProxyCharmException(msg)
        return write_affinity
    else:
        return None


def get_write_affinity_node_count():
    """Gets the write-affinity-node-count config option (lp1815879)

    Checks whether the write-affinity-node-count config option is set
    correctly and if so returns its value.

    :returns: write-affinity-node-count config option
    :rtype: str
    :raises: SwiftProxyCharmException
    """
    if config('write-affinity-node-count'):
        write_affinity_node_count = config('write-affinity-node-count')
        pattern = re.compile(r"^\d+(\s\*\sreplicas)?$")
        if not pattern.match(write_affinity_node_count):
            msg = "'write-affinity-node-count' config option is malformed"
            status_set('blocked', msg)
            raise SwiftProxyCharmException(msg)
        return write_affinity_node_count
    else:
        return None


class SwiftHashContext(OSContextGenerator):

    def __call__(self):
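The three validation patterns introduced above can be exercised on their own; a standalone sketch (patterns copied from the diff, sample values taken from the README example):

```python
import re

# Validation patterns from lib/swift_context.py in this patch.
READ_AFFINITY = re.compile(r"^r\d+z?(\d+)?=\d+(,\s?r\d+z?(\d+)?=\d+)*$")
WRITE_AFFINITY = re.compile(r"^r\d+(,\s?r\d+)*$")
WRITE_AFFINITY_NODE_COUNT = re.compile(r"^\d+(\s\*\sreplicas)?$")

# Values from the README and config.yaml examples are accepted...
assert READ_AFFINITY.match("r1z1=100, r1=200, r2=300")
assert WRITE_AFFINITY.match("r1, r2")
assert WRITE_AFFINITY_NODE_COUNT.match("2 * replicas")

# ...while malformed values are rejected (the charm then blocks).
assert not READ_AFFINITY.match("XYZ")
assert not WRITE_AFFINITY.match("z1=100")
assert not WRITE_AFFINITY_NODE_COUNT.match("two")
```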
@@ -201,6 +201,10 @@ CONFIG_FILES = OrderedDict([
        }),
])

RING_CONSUMER_ROLE = 'swift-proxy-rings-consumer'
RING_DISTRIBUTOR_ROLE = 'swift-proxy-rings-distributor'


class SwiftProxyCharmException(Exception):
    pass
@@ -540,6 +544,7 @@ def exists_in_ring(ring_path, node):

def add_to_ring(ring_path, node):
    port = _ring_port(ring_path, node)
    port_rep = _ring_port_rep(ring_path, node)

    # Note: this code used to attempt to calculate new dev ids, but made
    # various assumptions (e.g. in order devices, all devices in the ring
@@ -554,6 +559,12 @@ def add_to_ring(ring_path, node):
        'weight': 100,
        'meta': '',
    }
    if port_rep:
        new_dev.update({
            'region': node['region'],
            'replication_ip': node['ip_rep'],
            'replication_port': port_rep,
        })
    get_manager().add_dev(ring_path, new_dev)
    msg = 'Added new device to ring {}: {}'.format(ring_path, new_dev)
    log(msg, level=INFO)
@@ -609,6 +620,27 @@ def _ring_port(ring_path, node):
    return node[('{}_port'.format(name))]


def _ring_port_rep(ring_path, node):
    """Determine the replication port (lp1815879)

    Determine the correct replication port from relation settings for a
    given ring file.

    :param ring_path: path to the ring
    :param node: storage node
    :type ring_path: str
    :type node: dict
    :returns: replication port
    :rtype: int
    """
    for name in ['account', 'object', 'container']:
        if name in ring_path:
            try:
                return node[('{}_port_rep'.format(name))]
            except KeyError:
                pass


def get_zone(assignment_policy):
    """Determine appropriate zone based on configured assignment policy.
@@ -909,7 +941,7 @@ def sync_builders_and_rings_if_changed(f):


@sync_builders_and_rings_if_changed
-def update_rings(nodes=None, min_part_hours=None):
+def update_rings(nodes=None, min_part_hours=None, replicas=None):
    """Update builder with node settings and balance rings if necessary.

    Also update min_part_hours if provided.
@@ -948,10 +980,30 @@ def update_rings(nodes=None, min_part_hours=None):
                add_to_ring(ring, node)
                balance_required = True

    if replicas is not None:
        for ring, path in SWIFT_RINGS.items():
            current_replicas = get_current_replicas(path)
            if replicas != current_replicas:
                update_replicas(path, replicas)
                balance_required = True

    if balance_required:
        balance_rings()


def get_current_replicas(path):
    """Gets the replicas count from the ring (lp1815879)

    Proxy to the 'manager.py:get_current_replicas()' function.

    :param path: path to the ring
    :type path: str
    :returns: replicas
    :rtype: int
    """
    return get_manager().get_current_replicas(path)


def get_min_part_hours(path):
    """Just a proxy to the manager.py:get_min_part_hours() function
@@ -970,6 +1022,25 @@ def set_min_part_hours(path, value):
            "Failed to set min_part_hours={} on {}".format(value, path))


def update_replicas(path, replicas):
    """Updates the replicas count (lp1815879)

    Updates the number of replicas in the ring.

    :param path: path to the ring
    :param replicas: number of replicas
    :type path: str
    :type replicas: int
    :raises: SwiftProxyCharmException
    """
    cmd = ['swift-ring-builder', path, 'set_replicas', str(replicas)]
    try:
        subprocess.check_call(cmd)
    except subprocess.CalledProcessError:
        raise SwiftProxyCharmException(
            "Failed to set replicas={} on {}".format(replicas, path))


@sync_builders_and_rings_if_changed
def balance_rings():
    """Rebalance each ring and notify peers that new rings are available."""
@@ -1061,7 +1132,7 @@ def broadcast_rings_available(storage=True, builders_only=False,
    if storage:
        # TODO: get ack from storage units that they are synced before
        # syncing proxies.
-        notify_storage_rings_available(broker_timestamp)
+        notify_storage_and_consumers_rings_available(broker_timestamp)
    else:
        log("Skipping notify storage relations", level=DEBUG)
@@ -1111,7 +1182,7 @@ def cluster_sync_rings(peers_only=False, builders_only=False, token=None):
            relation_set(relation_id=rid, relation_settings=rq)


-def notify_storage_rings_available(broker_timestamp):
+def notify_storage_and_consumers_rings_available(broker_timestamp):
    """Notify peer swift-storage relations that they should synchronise ring
    and builder files.
@@ -1141,6 +1212,13 @@ def notify_storage_rings_available(broker_timestamp):
        relation_set(relation_id=relid, swift_hash=get_swift_hash(),
                     rings_url=rings_url, broker_timestamp=broker_timestamp,
                     trigger=trigger)
    # Notify consumer proxy nodes that there is a new ring to fetch.
    log("Notifying consumer proxy nodes (if any) that new rings are ready "
        "for sync.", level=INFO)
    for relid in relation_ids('rings-distributor'):
        relation_set(relation_id=relid, swift_hash=get_swift_hash(),
                     rings_url=rings_url, broker_timestamp=broker_timestamp,
                     trigger=trigger)


def clear_storage_rings_available():
@@ -1411,3 +1489,74 @@ def determine_replicas(ring):
        return config('replicas')
    else:
        return config('replicas')


def fetch_swift_rings_and_builders(rings_url):
    """Fetches Swift rings and builders (lp1815879)

    Fetches Swift rings and builders from the distributor Swift proxy. Based
    on the 'fetch_swift_rings' function from the swift-storage charm.

    :param rings_url: URL of the rings store
    :type rings_url: str
    """
    log('Fetching swift rings from proxy @ {}.'.format(rings_url), level=INFO)
    target = SWIFT_CONF_DIR
    tmpdir = tempfile.mkdtemp(prefix='swiftrings')
    try:
        synced = []
        for server in ['account', 'object', 'container']:
            for ext in [SWIFT_RING_EXT, 'builder']:
                url = '{}/{}.{}'.format(rings_url, server, ext)
                log('Fetching {}.'.format(url), level=DEBUG)
                ring = '{}.{}'.format(server, ext)
                cmd = ['wget', url, '--retry-connrefused', '-t', '10', '-O',
                       os.path.join(tmpdir, ring)]
                subprocess.check_call(cmd)
                synced.append(ring)

        # Once all have been successfully downloaded, move them to the
        # actual location.
        for f in synced:
            os.rename(os.path.join(tmpdir, f), os.path.join(target, f))
    finally:
        shutil.rmtree(tmpdir)


def is_role(role_name):
    return leader_get(role_name)


def set_role(role_name, has_role=True):
    if has_role:
        # The value is irrelevant; the presence of the key in the leader db
        # shows that this app has role_name. However, set it to True to make
        # things clear for any casual observer.
        leader_set({role_name: True})
    else:
        # Unset the key to show it does not have the role.
        leader_set({role_name: None})


is_ring_consumer = functools.partial(
    is_role,
    role_name=RING_CONSUMER_ROLE)
is_ring_distributor = functools.partial(
    is_role,
    role_name=RING_DISTRIBUTOR_ROLE)

set_role_ring_consumer = functools.partial(
    set_role,
    role_name=RING_CONSUMER_ROLE,
    has_role=True)
set_role_ring_distributor = functools.partial(
    set_role,
    role_name=RING_DISTRIBUTOR_ROLE,
    has_role=True)
unset_role_ring_consumer = functools.partial(
    set_role,
    role_name=RING_CONSUMER_ROLE,
    has_role=False)
unset_role_ring_distributor = functools.partial(
    set_role,
    role_name=RING_DISTRIBUTOR_ROLE,
    has_role=False)
@@ -33,6 +33,8 @@ provides:
    scope: container
  object-store:
    interface: swift-proxy
  rings-distributor:
    interface: swift-global-cluster
requires:
  swift-storage:
    interface: swift

@@ -45,6 +47,8 @@ requires:
    interface: rabbitmq
  certificates:
    interface: tls-certificates
  rings-consumer:
    interface: swift-global-cluster
peers:
  cluster:
    interface: swift-ha
@@ -94,9 +94,12 @@ def add_dev(ring_path, dev):
    The dev is in the form of:

        new_dev = {
            'region': node['region'],
            'zone': node['zone'],
            'ip': node['ip'],
            'replication_ip': node['ip_rep'],
            'port': port,
            'replication_port': port_rep,
            'device': node['device'],
            'weight': 100,
            'meta': '',
@@ -120,6 +123,18 @@ def get_min_part_hours(ring_path):
    return builder.min_part_hours


def get_current_replicas(ring_path):
    """Gets the replicas count from the ring (lp1815879)

    :param ring_path: The path of the ring
    :type ring_path: str
    :returns: replicas
    :rtype: int
    """
    builder = _load_builder(ring_path)
    return builder.replicas


def get_zone(ring_path):
    """Determine the zone for the ring_path
@@ -184,12 +199,19 @@ def has_minimum_zones(rings):
                "result": False
            }
        builder = _load_builder(ring).to_dict()
+        if not builder['devs']:
+            return {
+                "result": False
+            }
        replicas = builder['replicas']
+        regions = [dev['region'] for dev in builder['devs'] if dev]
        zones = [dev['zone'] for dev in builder['devs'] if dev]
+        num_regions = len(set(regions))
        num_zones = len(set(zones))
-        if num_zones < replicas:
-            log = ("Not enough zones ({:d}) defined to satisfy minimum "
-                   "replicas (need >= {:d})".format(num_zones, replicas))
+        num_zones_in_regions = num_regions * num_zones
+        if num_zones_in_regions < replicas:
+            log = ("Not enough zones ({}) defined to satisfy minimum "
+                   "replicas (need >= {})".format(num_zones, int(replicas)))
            return {
                "result": False,
                "log": log,
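The revised sufficiency check above multiplies the number of distinct regions by the number of distinct zones before comparing against the replica count; the same arithmetic in a minimal standalone sketch (plain Python, no ring builder involved):

```python
def has_minimum_zones(devs, replicas):
    """Return True when the region x zone combinations can hold all
    replicas, mirroring the check in the patched manager.py."""
    if not devs:
        return False
    num_regions = len({dev['region'] for dev in devs if dev})
    num_zones = len({dev['zone'] for dev in devs if dev})
    return num_regions * num_zones >= replicas


# One region with three zones satisfies 3 replicas...
devs = [{'region': 1, 'zone': z} for z in (1, 2, 3)]
assert has_minimum_zones(devs, replicas=3)

# ...but a single zone in a single region does not.
assert not has_minimum_zones([{'region': 1, 'zone': 1}], replicas=3)
```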
@@ -48,7 +48,10 @@ class TestSwiftManager(unittest.TestCase):
        for ring in MOCK_SWIFT_RINGS:
            mock_rings[ring] = {
                'replicas': 3,
-                'devs': [{'zone': 1}, {'zone': 2}, None, {'zone': 3}],
+                'devs': [{'region': 1, 'zone': 1},
+                         {'region': 1, 'zone': 2},
+                         None,
+                         {'region': 1, 'zone': 3}],
            }
        ret = manager.has_minimum_zones(MOCK_SWIFT_RINGS)
        self.assertTrue(ret['result'])
@@ -38,6 +38,18 @@ allow_account_management = true
{% if auth_type == 'keystone' %}account_autocreate = true{% endif %}
node_timeout = {{ node_timeout }}
recoverable_node_timeout = {{ recoverable_node_timeout }}
{% if enable_multi_region %}
sorting_method = affinity
{% if read_affinity %}
read_affinity = {{ read_affinity }}
{% endif %}
{% if write_affinity %}
write_affinity = {{ write_affinity }}
{% endif %}
{% if write_affinity_node_count %}
write_affinity_node_count = {{ write_affinity_node_count }}
{% endif %}
{% endif %}

[filter:tempauth]
use = egg:swift#tempauth
@@ -39,6 +39,18 @@ allow_account_management = true
{% if auth_type == 'keystone' %}account_autocreate = true{% endif %}
node_timeout = {{ node_timeout }}
recoverable_node_timeout = {{ recoverable_node_timeout }}
{% if enable_multi_region %}
sorting_method = affinity
{% if read_affinity %}
read_affinity = {{ read_affinity }}
{% endif %}
{% if write_affinity %}
write_affinity = {{ write_affinity }}
{% endif %}
{% if write_affinity_node_count %}
write_affinity_node_count = {{ write_affinity_node_count }}
{% endif %}
{% endif %}

[filter:tempauth]
use = egg:swift#tempauth
@@ -39,6 +39,18 @@ allow_account_management = true
{% if auth_type == 'keystone' %}account_autocreate = true{% endif %}
node_timeout = {{ node_timeout }}
recoverable_node_timeout = {{ recoverable_node_timeout }}
{% if enable_multi_region %}
sorting_method = affinity
{% if read_affinity %}
read_affinity = {{ read_affinity }}
{% endif %}
{% if write_affinity %}
write_affinity = {{ write_affinity }}
{% endif %}
{% if write_affinity_node_count %}
write_affinity_node_count = {{ write_affinity_node_count }}
{% endif %}
{% endif %}

[filter:tempauth]
use = egg:swift#tempauth
@@ -152,6 +152,75 @@ class SwiftContextTestCase(unittest.TestCase):

        self.assertTrue(mock_config.called)

    @mock.patch('lib.swift_context.config')
    def test_get_read_affinity_no_config(self, mock_config):
        mock_config.return_value = None
        read_affinity = swift_context.get_read_affinity()

        self.assertIsNone(read_affinity)

    @mock.patch('lib.swift_context.config')
    def test_get_read_affinity_config_not_malformed(self, mock_config):
        mock_config.return_value = 'r1z1=100, r1=200, r2=300'
        expected = 'r1z1=100, r1=200, r2=300'
        read_affinity = swift_context.get_read_affinity()

        self.assertEqual(expected, read_affinity)

    @mock.patch('lib.swift_context.config')
    def test_get_read_affinity_config_malformed(self, mock_config):
        mock_config.return_value = 'XYZ'

        with self.assertRaises(Exception):
            swift_context.get_read_affinity()

    @mock.patch('lib.swift_context.config')
    def test_get_write_affinity_no_config(self, mock_config):
        mock_config.return_value = None
        write_affinity = swift_context.get_write_affinity()

        self.assertIsNone(write_affinity)

    @mock.patch('lib.swift_context.config')
    def test_get_write_affinity_config_not_malformed(self, mock_config):
        mock_config.return_value = 'r1, r2, r3'
        expected = 'r1, r2, r3'
        write_affinity = swift_context.get_write_affinity()

        self.assertEqual(expected, write_affinity)

    @mock.patch('lib.swift_context.config')
    def test_get_write_affinity_config_malformed(self, mock_config):
        mock_config.return_value = 'XYZ'

        with self.assertRaises(Exception):
            swift_context.get_write_affinity()

    @mock.patch('lib.swift_context.config')
    def test_get_write_affinity_node_count_no_config(self, mock_config):
        mock_config.return_value = None
        write_affinity_node_count = \
            swift_context.get_write_affinity_node_count()

        self.assertIsNone(write_affinity_node_count)

    @mock.patch('lib.swift_context.config')
    def test_get_write_affinity_node_count_config_not_malformed(self,
                                                                mock_config):
        mock_config.return_value = '2 * replicas'
        expected = '2 * replicas'
        write_affinity_node_count = \
            swift_context.get_write_affinity_node_count()

        self.assertEqual(expected, write_affinity_node_count)

    @mock.patch('lib.swift_context.config')
    def test_get_write_affinity_node_count_config_malformed(self,
                                                            mock_config):
        mock_config.return_value = 'XYZ'

        with self.assertRaises(Exception):
            swift_context.get_write_affinity_node_count()


class SwiftS3ContextTestCase(unittest.TestCase):
@@ -13,6 +13,7 @@
# limitations under the License.

import importlib
import subprocess
import sys
import uuid
@ -23,17 +24,19 @@ from mock import (
|
|||
patch,
|
||||
MagicMock,
|
||||
)
|
||||
|
||||
import lib.swift_utils
|
||||
# python-apt is not installed as part of test-requirements but is imported by
|
||||
# some charmhelpers modules so create a fake import.
|
||||
sys.modules['apt'] = MagicMock()
|
||||
sys.modules['apt_pkg'] = MagicMock()
|
||||
|
||||
with patch('charmhelpers.contrib.hardening.harden.harden') as mock_dec, \
|
||||
patch('lib.swift_utils.sync_builders_and_rings_if_changed') as rdec, \
|
||||
patch('charmhelpers.core.hookenv.log'), \
|
||||
patch('lib.swift_utils.register_configs'):
|
||||
mock_dec.side_effect = (lambda *dargs, **dkwargs: lambda f:
|
||||
lambda *args, **kwargs: f(*args, **kwargs))
|
||||
rdec.side_effect = lambda f: f
|
||||
import hooks.swift_hooks as swift_hooks
|
||||
importlib.reload(swift_hooks)
|
||||
|
||||
|
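Patching `harden` and `sync_builders_and_rings_if_changed` before importing the hooks module is necessary because decorators are applied at import time. A minimal standalone sketch of the same pattern, using a hypothetical `harden` factory rather than the real charmhelpers one:

```python
from unittest import mock


def harden():
    """Stand-in for a decorator factory that needs a real charm
    environment at import time (hypothetical)."""
    def wrapper(f):
        raise RuntimeError('no charm environment available')
    return wrapper


# Replace the factory with one that returns a pass-through wrapper *before*
# the decorated function is defined, just as the hooks import above does.
with mock.patch(__name__ + '.harden') as mock_dec:
    mock_dec.side_effect = (lambda *dargs, **dkwargs: lambda f:
                            lambda *args, **kwargs: f(*args, **kwargs))

    @harden()
    def install():
        return 'installed'
```

Without the patch, merely defining `install` would raise, which is why the real tests wrap the `import hooks.swift_hooks` statement itself.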
@@ -262,3 +265,235 @@ class SwiftHooksTestCase(unittest.TestCase):
        )
        try_initialize_swauth.assert_called_once()
        mock_clear_storage_rings_available.assert_called_once()

    @patch.object(swift_hooks, 'log')
    @patch.object(swift_hooks, 'service_restart')
    @patch.object(swift_hooks.openstack, 'is_unit_paused_set')
    @patch.object(swift_hooks, 'update_rings')
    @patch.object(swift_hooks, 'config')
    @patch.object(swift_hooks, 'get_zone')
    @patch.object(swift_hooks, 'update_rsync_acls')
    @patch.object(swift_hooks, 'get_host_ip')
    @patch.object(swift_hooks, 'is_elected_leader')
    @patch.object(swift_hooks, 'relation_get')
    def test_swift_storage_changed(self, relation_get, is_elected_leader,
                                   get_host_ip, update_rsync_acls, get_zone,
                                   config, update_rings, is_unit_paused_set,
                                   service_restart, log):
        is_elected_leader.return_value = True
        get_host_ip.return_value = '10.0.0.10'
        rel_data = {
            'account_port': '6002',
            'container_port': '6001',
            'device': 'vdc',
            'egress-subnets': '10.5.0.37/32',
            'ingress-address': '10.5.0.37',
            'object_port': '6000',
            'private-address': '10.5.0.37',
            'zone': '1'}
        relation_get.side_effect = lambda x: rel_data.get(x)
        swift_hooks.storage_changed()
        update_rings.assert_called_once_with([{
            'ip': '10.0.0.10',
            'zone': 1,
            'account_port': 6002,
            'object_port': 6000,
            'container_port': 6001,
            'device': 'vdc'}])
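The assertion above implies that `storage_changed` coerces the string-valued relation data into integers before handing nodes to `update_rings`. A minimal sketch of that coercion, with a hypothetical helper name:

```python
def node_from_relation(rel_data, host_ip):
    """Build a ring node dict from string-valued relation data (sketch;
    the helper name is hypothetical, not the charm's actual API)."""
    return {
        'ip': host_ip,
        # Juju relation data arrives as strings; ring code wants ints.
        'zone': int(rel_data['zone']),
        'account_port': int(rel_data['account_port']),
        'object_port': int(rel_data['object_port']),
        'container_port': int(rel_data['container_port']),
        'device': rel_data['device'],
    }


node = node_from_relation(
    {'zone': '1', 'account_port': '6002', 'object_port': '6000',
     'container_port': '6001', 'device': 'vdc'}, '10.0.0.10')
```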
    @patch.object(swift_hooks, 'status_set')
    @patch.object(swift_hooks, 'is_leader')
    @patch.object(lib.swift_utils, 'leader_get')
    @patch.object(lib.swift_utils, 'leader_set')
    def test_rings_distributor_joined(self, leader_set, leader_get, is_leader,
                                      status_set):
        leader_get.return_value = None
        is_leader.return_value = True
        swift_hooks.rings_distributor_joined()
        leader_set.assert_called_once_with(
            {'swift-proxy-rings-distributor': True})
        leader_set.reset_mock()
        is_leader.return_value = False
        swift_hooks.rings_distributor_joined()
        self.assertFalse(leader_set.called)

    @patch.object(swift_hooks, 'status_set')
    @patch.object(lib.swift_utils, 'leader_get')
    def test_rings_distributor_joined_consumer(self, leader_get, status_set):
        leader_get.return_value = True
        with self.assertRaises(lib.swift_utils.SwiftProxyCharmException):
            swift_hooks.rings_distributor_joined()
        status_set.assert_called_once_with(
            'blocked',
            ('Swift Proxy cannot act as both rings distributor and rings '
             'consumer'))

    @patch.object(swift_hooks, 'broadcast_rings_available')
    def test_rings_distributor_changed(self, broadcast_rings_available):
        swift_hooks.rings_distributor_changed()
        broadcast_rings_available.assert_called_once_with()

    @patch.object(swift_hooks, 'is_leader')
    @patch.object(lib.swift_utils, 'leader_set')
    def test_rings_distributor_departed(self, leader_set, is_leader):
        is_leader.return_value = True
        swift_hooks.rings_distributor_departed()
        leader_set.assert_called_once_with(
            {'swift-proxy-rings-distributor': None})
        leader_set.reset_mock()
        is_leader.return_value = False
        swift_hooks.rings_distributor_departed()
        self.assertFalse(leader_set.called)
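The join/depart tests above exercise a common charm pattern: only the elected leader writes leader settings, and departure clears them symmetrically. A self-contained sketch of that guard, reconstructed from the assertions (all names hypothetical stand-ins for the hook environment):

```python
leader_data = {}


def is_leader():
    # Stand-in for Juju's is-leader check.
    return leader_data.get('_leader', False)


def leader_set(settings):
    # Stand-in for Juju's leader-set: merge settings into leader storage.
    leader_data.update(settings)


def rings_distributor_joined():
    # Only the leader records that this application distributes rings.
    if is_leader():
        leader_set({'swift-proxy-rings-distributor': True})


def rings_distributor_departed():
    # Symmetric cleanup: clear the flag when the relation goes away.
    if is_leader():
        leader_set({'swift-proxy-rings-distributor': None})


leader_data['_leader'] = True
rings_distributor_joined()
```

Non-leader units fall through without touching leader storage, which is exactly what `self.assertFalse(leader_set.called)` verifies above.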
    @patch.object(swift_hooks, 'is_leader')
    @patch.object(lib.swift_utils, 'leader_get')
    @patch.object(lib.swift_utils, 'leader_set')
    def test_rings_consumer_joined(self, leader_set, leader_get, is_leader):
        leader_data = {}
        leader_get.side_effect = lambda x: leader_data.get(x)
        is_leader.return_value = True
        swift_hooks.rings_consumer_joined()
        leader_set.assert_called_once_with(
            {'swift-proxy-rings-consumer': True})
        leader_set.reset_mock()
        is_leader.return_value = False
        swift_hooks.rings_consumer_joined()
        self.assertFalse(leader_set.called)

    @patch.object(lib.swift_utils, 'leader_get')
    @patch.object(swift_hooks, 'status_set')
    def test_rings_consumer_joined_distributor(self, status_set, leader_get):
        leader_data = {
            'swift-proxy-rings-distributor': True}
        leader_get.side_effect = lambda x: leader_data.get(x)
        swift_hooks.rings_consumer_joined()
        status_set.assert_called_once_with(
            'blocked',
            ('Swift Proxy cannot act as both rings distributor and rings '
             'consumer'))

    @patch.object(lib.swift_utils, 'leader_get')
    @patch.object(swift_hooks, 'status_set')
    def test_rings_consumer_joined_consumer(self, status_set, leader_get):
        leader_data = {
            'swift-proxy-rings-consumer': True}
        leader_get.side_effect = lambda x: leader_data.get(x)
        swift_hooks.rings_consumer_joined()
        status_set.assert_called_once_with(
            'blocked',
            'Swift Proxy already acting as rings consumer')
    @patch.object(swift_hooks, 'get_swift_hash')
    @patch.object(swift_hooks, 'broadcast_rings_available')
    @patch.object(swift_hooks, 'fetch_swift_rings_and_builders')
    @patch.object(swift_hooks, 'relation_get')
    def test_rings_consumer_changed(self, relation_get,
                                    fetch_swift_rings_and_builders,
                                    broadcast_rings_available,
                                    get_swift_hash):
        rel_data = {
            'rings_url': 'http://some-url:999',
            'swift_hash': 'swhash'}
        relation_get.side_effect = lambda x: rel_data.get(x)
        get_swift_hash.return_value = 'swhash'
        swift_hooks.rings_consumer_changed()
        fetch_swift_rings_and_builders.assert_called_once_with(
            'http://some-url:999')
        broadcast_rings_available.assert_called_once_with()

    @patch.object(swift_hooks, 'log')
    @patch.object(swift_hooks, 'get_swift_hash')
    @patch.object(swift_hooks, 'broadcast_rings_available')
    @patch.object(swift_hooks, 'fetch_swift_rings_and_builders')
    @patch.object(swift_hooks, 'relation_get')
    def test_rings_consumer_changed_no_url(self, relation_get,
                                           fetch_swift_rings_and_builders,
                                           broadcast_rings_available,
                                           get_swift_hash,
                                           log):
        rel_data = {'swift_hash': 'swhash'}
        relation_get.side_effect = lambda x: rel_data.get(x)
        swift_hooks.rings_consumer_changed()
        self.assertFalse(fetch_swift_rings_and_builders.called)
        self.assertFalse(broadcast_rings_available.called)
        log.assert_called_once_with(
            'rings_consumer_relation_changed: Peer not ready?')

    @patch.object(swift_hooks, 'log')
    @patch.object(swift_hooks, 'get_swift_hash')
    @patch.object(swift_hooks, 'broadcast_rings_available')
    @patch.object(swift_hooks, 'fetch_swift_rings_and_builders')
    @patch.object(swift_hooks, 'relation_get')
    def test_rings_consumer_changed_empty_str(self, relation_get,
                                              fetch_swift_rings_and_builders,
                                              broadcast_rings_available,
                                              get_swift_hash,
                                              log):
        rel_data = {
            'rings_url': '',
            'swift_hash': 'swhash'}
        relation_get.side_effect = lambda x: rel_data.get(x)
        swift_hooks.rings_consumer_changed()
        self.assertFalse(fetch_swift_rings_and_builders.called)
        self.assertFalse(broadcast_rings_available.called)
        log.assert_called_once_with(
            'rings_consumer_relation_changed: Peer not ready?')
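The `no_url` and `empty_str` cases above check the same guard: a missing key and an empty string both mean the peer has not published its rings URL yet. A minimal sketch of that check (function name hypothetical):

```python
def rings_ready(rel_data):
    """Return the peer's rings URL if it has been published, else None.

    Sketch of the guard the tests above exercise; `rings_ready` is a
    hypothetical name, not the charm's actual helper.
    """
    rings_url = rel_data.get('rings_url')
    if not rings_url:  # covers both a missing key and an empty string
        return None
    return rings_url
```

Using falsiness rather than a `None` check is what lets one log message ('Peer not ready?') serve both test cases.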
    @patch.object(swift_hooks, 'status_set')
    @patch.object(swift_hooks, 'get_swift_hash')
    @patch.object(swift_hooks, 'broadcast_rings_available')
    @patch.object(swift_hooks, 'fetch_swift_rings_and_builders')
    @patch.object(swift_hooks, 'relation_get')
    def test_rings_consumer_changed_hash_miss(self, relation_get,
                                              fetch_swift_rings_and_builders,
                                              broadcast_rings_available,
                                              get_swift_hash,
                                              status_set):
        rel_data = {
            'rings_url': 'http://some-url:999',
            'swift_hash': 'swhash'}
        relation_get.side_effect = lambda x: rel_data.get(x)
        get_swift_hash.return_value = 'mismatch'
        with self.assertRaises(lib.swift_utils.SwiftProxyCharmException):
            swift_hooks.rings_consumer_changed()
        self.assertFalse(fetch_swift_rings_and_builders.called)
        self.assertFalse(broadcast_rings_available.called)
        status_set.assert_called_once_with(
            'blocked',
            'Swift hash has to be unique in multi-region setup')

    @patch.object(swift_hooks, 'log')
    @patch.object(swift_hooks, 'get_swift_hash')
    @patch.object(swift_hooks, 'broadcast_rings_available')
    @patch.object(swift_hooks, 'fetch_swift_rings_and_builders')
    @patch.object(swift_hooks, 'relation_get')
    def test_rings_consumer_changed_fetch_fail(self, relation_get,
                                               fetch_swift_rings_and_builders,
                                               broadcast_rings_available,
                                               get_swift_hash,
                                               log):
        rel_data = {
            'rings_url': 'http://some-url:999',
            'swift_hash': 'swhash'}
        relation_get.side_effect = lambda x: rel_data.get(x)
        get_swift_hash.return_value = 'swhash'

        def _fetch(url):
            # CalledProcessError's signature is (returncode, cmd).
            raise subprocess.CalledProcessError(1, 'cmd')
        fetch_swift_rings_and_builders.side_effect = _fetch
        swift_hooks.rings_consumer_changed()
        fetch_swift_rings_and_builders.assert_called_once_with(
            'http://some-url:999')
        log.assert_called_once_with(
            ('Failed to sync rings from http://some-url:999 - no longer '
             'available from that unit?'),
            level='WARNING')
        broadcast_rings_available.assert_called_once_with()

    @patch.object(swift_hooks, 'is_leader')
    @patch.object(lib.swift_utils, 'leader_set')
    def test_rings_consumer_departed(self, leader_set, is_leader):
        is_leader.return_value = True
        swift_hooks.rings_consumer_departed()
        leader_set.assert_called_once_with(
            {'swift-proxy-rings-consumer': None})
@@ -12,6 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.

import copy
import mock
import os
import shutil
@@ -226,6 +227,9 @@ class SwiftUtilsTestCase(unittest.TestCase):
                                                 mock_log,
                                                 mock_balance_rings):

        _SWIFT_CONF_DIR = copy.deepcopy(swift_utils.SWIFT_CONF_DIR)
        _SWIFT_RINGS = copy.deepcopy(swift_utils.SWIFT_RINGS)

        @swift_utils.sync_builders_and_rings_if_changed
        def mock_balance():
            for ring, builder in swift_utils.SWIFT_RINGS.items():

@@ -239,14 +243,45 @@ class SwiftUtilsTestCase(unittest.TestCase):

        mock_balance_rings.side_effect = mock_balance

        init_ring_paths(tempfile.mkdtemp())
        tmp_ring_dir = tempfile.mkdtemp()
        init_ring_paths(tmp_ring_dir)
        try:
            swift_utils.balance_rings()
        finally:
            shutil.rmtree(swift_utils.SWIFT_CONF_DIR)
            shutil.rmtree(tmp_ring_dir)

        self.assertTrue(mock_update_www_rings.called)
        self.assertTrue(mock_cluster_sync_rings.called)
        swift_utils.SWIFT_CONF_DIR = _SWIFT_CONF_DIR
        swift_utils.SWIFT_RINGS = _SWIFT_RINGS
    def test__ring_port_rep(self):
        node = {
            'region': 1,
            'zone': 1,
            'ip': '172.16.0.2',
            'ip_rep': '172.16.0.2',
            'account_port': 6000,
            'account_port_rep': 6010,
            'device': '/dev/sdb',
        }
        expected = node['account_port_rep']
        actual = swift_utils._ring_port_rep('/etc/swift/account.builder', node)
        self.assertEqual(actual, expected)

    @mock.patch.object(swift_utils, 'get_manager')
    def test_get_current_replicas(self, mock_get_manager):
        swift_utils.get_current_replicas('/etc/swift/account.builder')
        mock_get_manager().get_current_replicas.assert_called_once_with(
            '/etc/swift/account.builder')

    @mock.patch.object(subprocess, 'check_call')
    def test_update_replicas(self, check_call):
        swift_utils.update_replicas('/etc/swift/account.builder', 3)
        check_call.assert_called_once_with(['swift-ring-builder',
                                            '/etc/swift/account.builder',
                                            'set_replicas',
                                            '3'])
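`test_update_replicas` pins down the exact `swift-ring-builder set_replicas` invocation, including the fact that the replica count must be passed as a string. A sketch of building that command line (function name hypothetical):

```python
def set_replicas_cmd(builder_path, replicas):
    """Build the swift-ring-builder invocation for changing replica count.

    Hypothetical helper illustrating the command the test asserts on;
    swift-ring-builder takes all its arguments, including the count, as
    strings on the command line.
    """
    return ['swift-ring-builder', builder_path, 'set_replicas', str(replicas)]


cmd = set_replicas_cmd('/etc/swift/account.builder', 3)
```

Stringifying the count in one place keeps callers free to pass ints or floats (Swift allows fractional replica counts).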
    @mock.patch('lib.swift_utils.get_www_dir')
    def test_mark_www_rings_deleted(self, mock_get_www_dir):
@@ -413,19 +448,44 @@ class SwiftUtilsTestCase(unittest.TestCase):
    def test_add_to_ring(self, mock_get_manager):
        ring = 'account'
        node = {
            'ip': '172.16.0.2',
            'region': 1,
            'account_port': 6000,
            'zone': 1,
            'ip': '172.16.0.2',
            'account_port': 6000,
            'device': '/dev/sdb',
        }
        swift_utils.add_to_ring(ring, node)
        mock_get_manager().add_dev.assert_called_once_with('account', {
            'meta': '',
            'zone': 1,
            'ip': '172.16.0.2',
            'device': '/dev/sdb',
            'port': 6000,
            'device': '/dev/sdb',
            'meta': '',
            'weight': 100
        })

    @mock.patch.object(swift_utils, 'get_manager')
    def test_add_to_ring_rep(self, mock_get_manager):
        ring = 'account'
        node = {
            'region': 1,
            'zone': 1,
            'ip': '172.16.0.2',
            'ip_rep': '172.16.0.2',
            'account_port': 6000,
            'account_port_rep': 6010,
            'device': '/dev/sdb',
        }
        swift_utils.add_to_ring(ring, node)
        mock_get_manager().add_dev.assert_called_once_with('account', {
            'region': 1,
            'zone': 1,
            'ip': '172.16.0.2',
            'replication_ip': '172.16.0.2',
            'port': 6000,
            'replication_port': 6010,
            'device': '/dev/sdb',
            'meta': '',
            'weight': 100
        })
@@ -537,34 +597,46 @@ class SwiftUtilsTestCase(unittest.TestCase):
    @mock.patch.object(swift_utils, 'format_ipv6_addr')
    @mock.patch.object(swift_utils, 'get_hostaddr')
    @mock.patch.object(swift_utils, 'is_elected_leader')
    def test_notify_storage_rings_available(self, mock_is_leader,
                                            mock_get_hostaddr,
                                            mock_format_ipv6_addr,
                                            mock_get_www_dir,
                                            mock_relation_ids,
                                            mock_log,
                                            mock_get_swift_hash,
                                            mock_relation_set,
                                            mock_uuid):
    def test_notify_storage_and_consumers_rings_available(
            self,
            mock_is_leader,
            mock_get_hostaddr,
            mock_format_ipv6_addr,
            mock_get_www_dir,
            mock_relation_ids,
            mock_log,
            mock_get_swift_hash,
            mock_relation_set,
            mock_uuid):

        mock_is_leader.return_value = True
        mock_get_hostaddr.return_value = '10.0.0.1'
        mock_format_ipv6_addr.return_value = None
        mock_get_www_dir.return_value = 'some/dir'
        mock_relation_ids.return_value = ['storage:0']
        mock_relation_ids.side_effect = [['storage:0'],
                                         ['rings-distributor:0']]
        mock_get_swift_hash.return_value = 'greathash'
        mock_uuid.return_value = 'uuid-1234'
        swift_utils.notify_storage_rings_available('1.234')
        mock_relation_set.assert_called_once_with(
            broker_timestamp='1.234',
            relation_id='storage:0',
            rings_url='http://10.0.0.1/dir',
            swift_hash='greathash',
            trigger='uuid-1234')
        calls = [mock.call(broker_timestamp='1.234',
                           relation_id='storage:0',
                           rings_url='http://10.0.0.1/dir',
                           swift_hash='greathash',
                           trigger='uuid-1234'),
                 mock.call(broker_timestamp='1.234',
                           relation_id='rings-distributor:0',
                           rings_url='http://10.0.0.1/dir',
                           swift_hash='greathash',
                           trigger='uuid-1234')]
        swift_utils.notify_storage_and_consumers_rings_available('1.234')
        mock_relation_set.assert_has_calls(calls)

    @mock.patch.object(swift_utils, 'relation_set')
    @mock.patch.object(swift_utils, 'relation_ids')
    def test_clear_notify_storage_rings_available(self, mock_relation_ids,
                                                  mock_relation_set):
    def test_clear_notify_storage_and_consumers_rings_available(
            self,
            mock_relation_ids,
            mock_relation_set):

        mock_relation_ids.return_value = ['storage:0']
        swift_utils.clear_storage_rings_available()
        mock_relation_set.assert_called_once_with(
@@ -688,3 +760,26 @@ class SwiftUtilsTestCase(unittest.TestCase):
                                       'replicas-container': 2}[key]
        replicas = swift_utils.determine_replicas('object')
        self.assertEqual(replicas, 3)

    def test_fetch_swift_rings_and_builders(self):
        """
        Based on the 'test_fetch_swift_rings' function from the swift-storage
        charm.
        """
        url = 'http://someproxynode/rings'
        _SWIFT_CONF_DIR = copy.deepcopy(swift_utils.SWIFT_CONF_DIR)
        swift_utils.SWIFT_CONF_DIR = tempfile.mkdtemp()
        try:
            swift_utils.fetch_swift_rings_and_builders(url)
            wgets = []
            for s in ['account', 'object', 'container']:
                for ext in ['ring.gz', 'builder']:
                    _c = mock.call(['wget', '%s/%s.%s' % (url, s, ext),
                                    '--retry-connrefused', '-t', '10',
                                    '-O', swift_utils.SWIFT_CONF_DIR +
                                    '/%s.%s' % (s, ext)])
                    wgets.append(_c)
            self.assertEqual(wgets, self.check_call.call_args_list)
        finally:
            shutil.rmtree(swift_utils.SWIFT_CONF_DIR)
            swift_utils.SWIFT_CONF_DIR = _SWIFT_CONF_DIR
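The fetch test expects exactly one wget call per ring artifact: two files (`ring.gz` and `builder`) for each of the three ring types. A sketch of generating that command list (function name hypothetical):

```python
def ring_fetch_cmds(url, conf_dir):
    """Enumerate the wget invocations for syncing rings and builders.

    Hypothetical helper mirroring the nested loop the test asserts on:
    3 ring types x 2 file extensions = 6 downloads, retried on
    connection-refused up to 10 times.
    """
    cmds = []
    for server in ['account', 'object', 'container']:
        for ext in ['ring.gz', 'builder']:
            cmds.append(['wget', '%s/%s.%s' % (url, server, ext),
                         '--retry-connrefused', '-t', '10',
                         '-O', '%s/%s.%s' % (conf_dir, server, ext)])
    return cmds


cmds = ring_fetch_cmds('http://someproxynode/rings', '/tmp/swift')
```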