Commit Graph

31 Commits

Author SHA1 Message Date
Matthew Oliver fce7ad5f18 Ring: Change py2 only tests to py3
We have some purposely brittle ring tests that test rebalances that are
PY2 only. This was because the random seeding between PY2 and PY3 is
different. Now that we're stepping towards py3 only, these tests needed
to be migrated so we don't lose important test coverage.

This patch flips these PY2 only tests to be PY3 only, for the same
reason as before. It took quite a bit of effort to find seeds that
would behave like the ones in the brittle PY2 tests, but we found them.

Change-Id: I0978041830ed3e231476719efed66bf9b337ff8a
2022-03-31 15:10:32 +11:00
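A rough illustration of why the seeds had to be re-found (a toy snippet,
not one of the actual tests): the same seed drives random.shuffle to a
different result on PY2 and PY3, because the two interpreters draw the
underlying integers differently.

    import random

    random.seed(271828)      # an arbitrary seed, purely for illustration
    devs = list(range(8))
    random.shuffle(devs)
    print(devs)              # the resulting order differs between PY2 and
                             # PY3 for the same seed, so a rebalance test
                             # tuned to one interpreter breaks on the other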
Tim Burke 642f79965a py3: port common/ring/ and common/utils.py
I can't imagine us *not* having a py3 proxy server at some point, and
that proxy server is going to need a ring.

While we're at it (and since they were so close anyway), port

* cli/ringbuilder.py and
* common/linkat.py
* common/daemon.py

Change-Id: Iec8d97e0ce925614a86b516c4c6ed82809d0ba9b
2018-02-12 06:42:24 +00:00
Clay Gerrard 68906dac43 Further extract a builder method to a util function
... to make testing more targeted and obvious

Related-Change-Id: I89439286b211f2c5ef19deffa77c202f48f07cf8
Change-Id: I93b99128a4fb35395e8e9caf11649e216f824fdf
2018-01-15 11:45:30 +00:00
Clay Gerrard 7013e70ca6 Represent dispersion worse than one replicanth
With a sufficiently undispersed ring it's possible to move an entire
replica's worth of parts and yet the value of dispersion may not get any
better (even though in reality dispersion has dramatically improved).
The problem is dispersion will currently only represent up to one whole
replica worth of parts being undispersed.

However with EC rings it's possible for more than one whole replica's
worth of partitions to be undispersed; in these cases the builder will
require multiple rebalance operations to fully disperse replicas - but
the dispersion value should improve with every rebalance.

N.B. with this change it's possible for rings with a bad dispersion
value to measure as having a significantly smaller dispersion value
after a rebalance (even though they may not have had their dispersion
change) because the total amount of bad dispersion we can measure has
been increased but we're normalizing within a similar range.

Closes-Bug: #1697543

Change-Id: Ifefff0260deac0c3e8b369a1e158686c89936686
2017-12-28 11:16:17 -08:00
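A toy illustration of the normalization point above (my own numbers and
formulas, not the builder's actual implementation): if the metric can
only count up to one replica's worth of undispersed parts, a doubly
undispersed EC ring reports the same value as a singly undispersed one,
whereas counting every misplaced part-replica lets the value keep
improving with each rebalance.

    parts, replicas = 256, 6                 # hypothetical EC ring
    undispersed_part_replicas = 2 * parts    # two replicas' worth undispersed

    # capped measurement: can never see more than one replica's worth
    capped = 100.0 * min(undispersed_part_replicas, parts) / parts

    # uncapped measurement, normalized over all part-replicas
    uncapped = 100.0 * undispersed_part_replicas / (parts * replicas)

    print(capped, uncapped)   # 100.0 vs ~33.3 - the uncapped value can now
                              # keep shrinking across successive rebalances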
Kota Tsuyuzaki 2321966456 Accept a trade off of dispersion for balance
... but only if we *have* to!

During the initial gather for balance we prefer to avoid replicas on
over-weight devices that are already under-represented in any of its
tiers (i.e. if a zone has to have at least one, but may have as many as
two, don't take the only replica).  Instead we hope that by going for
replicas on over-weight devices that are at the limits of their
dispersion we might have a better than even chance of finding a better
place for them during placement!

This normally works out - and especially so for rings which can
disperse and balance.  But for existing rings where we'd have to
sacrifice dispersion to improve balance, the existing optimistic gather
will end up refusing to trade dispersion for balance - and instead get
stuck without solving either!

You should always be able to solve for *either* dispersion or balance.
But if you can't solve *both* - we bail out on our optimistic gather
much more quickly and instead just focus on improving balance.  With
this change, the ring can get into balanced (and un-dispersed) states
much more quickly!

Change-Id: I17ac627f94f64211afaccad15596a9fcab2fada2
Related-Change-Id: Ie6e2d116b65938edac29efa6171e2470bb3e8e12
Closes-Bug: 1699636
Closes-Bug: 1701472
2017-09-25 12:37:49 -07:00
Clay Gerrard 806b18c6f5 Extend device tier pretty print normalization
* use pretty device tier normalization in additional debug message
* refactor pretty device tier normalization for consistency
* unittest asserting consistency

Change-Id: Ide32e35ff9387f1cc1e4997eb133314d06b794f3
2017-06-28 15:40:34 -07:00
cheng ce26e78902 For any part, only one replica can move in a rebalance
With a min_part_hours of zero, it's possible to move more than one
replica of the same part in a single rebalance.

This change in behavior only affects min_part_hours zero rings, which
are understood to be uncommon in production, mostly because of this
very specific and strange behavior of min_part_hours zero rings.

With this change, no matter how small your min_part_hours, it will
always require at least N rebalances to move N part-replicas of the
same part.

To supplement the existing persisted _last_part_moves structure to
enforce min_part_hours, this change adds a _part_moved_bitmap that
exists only during the life of the rebalance, to track when rebalance
moves a part, in order to prevent other replicas of the same part
from being moved in the same rebalance.

Add authors: Clay Gerrard, clay.gerrard@gmail.com
             Christian Schwede, cschwede@redhat.com

Closes-bug: #1586167

Change-Id: Ia1629abd5ce6e1b3acc2e94f818ed8223eed993a
2016-09-27 15:01:41 +00:00
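A minimal sketch of the bitmap idea described above (class and method
names are assumed here, not necessarily Swift's exact code):

    class PartMovedTracker(object):
        """Track which parts already had a replica moved this rebalance."""

        def __init__(self, part_power):
            # one bit per partition; exists only for the life of a rebalance
            self._part_moved_bitmap = bytearray(max(2 ** (part_power - 3), 1))

        def has_moved(self, part):
            byte, bit = divmod(part, 8)
            return bool(self._part_moved_bitmap[byte] & (128 >> bit))

        def set_moved(self, part):
            byte, bit = divmod(part, 8)
            self._part_moved_bitmap[byte] |= (128 >> bit)

During gather, any part whose bit is already set can be skipped, so at
most one of its replicas moves per rebalance.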
Jenkins e9f5e7966a Merge "Moved ipv4 & ipv6 validations to the common utils" 2016-07-29 21:25:16 +00:00
Nandini Tata 5c9732ac8e Moved ipv4 & ipv6 validations to the common utils
Validating IP addresses for IPv4 and IPv6 formats has more generic
use cases outside of rings. swift-get-nodes and other utilities that
need to handle IPv6 addresses often require importing IP validation
methods from swift/common/ring/utils (see Related-Change). Also, the
expand_ipv6 method already exists in swift/common/utils. Hence, IP
validation also moves into swift/common/utils from
swift/common/ring/utils.

Related-Change: I6551d65241950c65e7160587cc414deb4a2122f5
Change-Id: I720a9586469cf55acab74b4b005907ce106b3da4
2016-07-28 12:08:06 -07:00
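A rough sketch of the kind of helpers being relocated, built on the
stdlib socket module (the bodies are illustrative; Swift's own
implementations may differ in detail):

    import socket

    def is_valid_ipv4(ip):
        try:
            socket.inet_pton(socket.AF_INET, ip)
        except (socket.error, TypeError):
            return False
        return True

    def is_valid_ipv6(ip):
        try:
            socket.inet_pton(socket.AF_INET6, ip)
        except (socket.error, TypeError):
            return False
        return True

    def is_valid_ip(ip):
        return is_valid_ipv4(ip) or is_valid_ipv6(ip)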
Cheng Li 5817f00005 make 0 an available value for options
The value 0 is regarded as not available by swift-ring-builder:
$ swift-ring-builder testring add --region 0 --zone 1
--ip 127.0.0.2 --port 6000 --device sdb --weight 100

Required argument -r/--region not specified.
The on-disk ring builder is unchanged.

This patch makes the value 0 available.

Change-Id: Id941d44d8dbfe438bf921ed905908b838c88a644
Closes-bug: #1547137
2016-07-10 20:33:05 +08:00
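The underlying pitfall, in miniature (illustrative code, not the actual
swift-ring-builder source): a truthiness check treats a legitimate
value of 0 the same as an argument that was never supplied.

    region = 0                 # user explicitly passed --region 0

    if not region:             # wrong: 0 is falsy, so it looks "missing"
        print('Required argument -r/--region not specified.')

    if region is None:         # right: only a truly absent value is rejected
        print('Required argument -r/--region not specified.')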
Shashirekha Gundur cf48e75c25 change default ports for servers
Changing the recommended ports for Swift services
from ports 6000-6002 to unused ports 6200-6202;
so they do not conflict with X-Windows or other services.

Updated SAIO docs.

DocImpact
Closes-Bug: #1521339
Change-Id: Ie1c778b159792c8e259e2a54cb86051686ac9d18
2016-04-29 14:47:38 -04:00
Clay Gerrard 7035639dfd Put part-replicas where they go
It's harder than it sounds.  There were really three challenges.

Challenge #1 Initial Assignment
===============================

Before starting to assign parts on this new shiny ring you've
constructed, maybe we'll pause for a moment up front and consider the
lay of the land.  This process is called the replica_plan.

The replica_plan approach separates part assignment failures into
two modes:

 1) we considered the cluster topology and its weights and came up with
    the wrong plan

 2) we failed to execute on the plan

I failed at both parts plenty of times before I got it this close.  I'm
sure a counter example still exists, but when we find it the new helper
methods will let us reason about where things went wrong.

Challenge #2 Fixing Placement
=============================

With a sound plan in hand, it's much easier to fail to execute on it the
less material you have to execute with - so we gather up as many parts
as we can - as long as we think we can find them a better home.

Picking the right parts to gather is a black art - when you notice a
rebalance is slow it's because it's spending so much time iterating over
replica2part2dev trying to decide just the right parts to gather.

The replica plan can help at least in the gross dispersion collection to
gather up the worst offenders first before considering balance.  I think
trying to avoid picking up parts that are stuck to the tier before
falling into a forced grab on anything over parts_wanted helps with
stability generally - but depending on where the parts_wanted are in
relation to the full devices it's pretty easy to pick up something that'll
end up really close to where it started.

I tried to break the gather methods into smaller pieces so it looked
like I knew what I was doing.

Going with a MAXIMUM gather iteration instead of balance (which doesn't
reflect the replica_plan) doesn't seem to be costing me anything - most
of the time the exit condition is either solved or all the parts overly
aggressively locked up on min_part_hours.  So far, it mostly seems that
if the thing is going to balance this round, it'll get it in the first
couple of shakes.

Challenge #3 Crazy replica2part2dev tables
==========================================

I think there's lots of ways "scars" can build up in a ring, which can
result in very particular replica2part2dev tables that are physically
difficult to dig out of.  It's repairing these scars that will take
multiple rebalances to resolve.

... but at this point ...

... lacking a counter example ...

I've been able to close up all the edge cases I was able to find.  It
may not be quick, but progress will be made.

Basically my strategy just required a better understanding of how
previous algorithms were able to *mostly* keep things moving by brute
forcing the whole mess with a bunch of randomness.  Then when we detect
our "elegant" careful part selection isn't making progress - we can fall
back to the same old tricks.

Validation
==========

We validate against duplicate part replica assignment after rebalance
and raise an ERROR if we detect more than one replica of a part assigned
to the same device.

In order to meet that requirement we have to have as many devices as
replicas, so attempting to rebalance with too few devices w/o changing
your replica_count is also an ERROR, not a warning.

Random Thoughts
===============

As usual with rings, the test diff can be hard to reason about -
hopefully I've added enough comments to assure future me that these
assertions make sense.

Despite being a large rewrite of a lot of important code, the existing
code is known to have failed us.  This change fixes a critical bug that's
trivial to reproduce in a critical component of the system.

There's probably a bunch of error messages and exit status stuff that's
not as helpful as it could be considering the new behaviors.

Change-Id: I1bbe7be38806fc1c8b9181a722933c18a6c76e05
Closes-Bug: #1452431
2015-12-07 16:06:42 -08:00
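A very rough sketch of the replica_plan idea (the dict shape and keys
here are assumed purely for illustration): decide per tier how many
replicas it should hold before touching replica2part2dev, so a bad
outcome can be blamed on either a bad plan or a failure to execute it.

    # hypothetical plan: tier -> bounds on the replicas it should hold
    replica_plan = {
        ('r1',):      {'min': 3, 'max': 3},
        ('r1', 'z1'): {'min': 1, 'max': 2},
        ('r1', 'z2'): {'min': 1, 'max': 2},
    }

    def out_of_plan(replicas_in_tier, plan):
        """Tiers whose assigned replica counts fall outside the plan -
        the first candidates to gather from (or fill) next rebalance."""
        return {tier: count for tier, count in replicas_in_tier.items()
                if count < plan[tier]['min'] or count > plan[tier]['max']}

    print(out_of_plan({('r1',): 3, ('r1', 'z1'): 0, ('r1', 'z2'): 3},
                      replica_plan))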
janonymous f5f9d791b0 pep8 fix: assertEquals -> assertEqual
assertEquals is deprecated in py3, replacing it.

Change-Id: Ida206abbb13c320095bb9e3b25a2b66cc31bfba8
Co-Authored-By: Ondřej Nový <ondrej.novy@firma.seznam.cz>
2015-10-11 12:57:25 +02:00
Lisak, Peter a5d2faab90 swift-ring-builder can't select id=0
Currently, it is not possible to change the weight of the device with
id=0 via the swift-ring-builder CLI. Instead of the change, the help is
shown.
Example:
$ swift-ring-builder object.builder set_weight --id 0 1.00

But id=0 is generated by swift for the first device if not provided.
Also --weight, --zone and --region cause the same bug.

There is a problem detecting the new command format in the
validate_args function if zero is a valid value for some args.

Change-Id: I4ee379c242f090d116cd2504e21d0e1904cdc2fc
2015-10-08 15:57:01 +02:00
Clay Gerrard 5070869ac0 Validate against duplicate device part replica assignment
We should never assign multiple replicas of the same partition to the
same device - our on-disk layout can only support a single replica of a
given part on a single device.  We should not do this, so we validate
against it and raise a loud warning if this terrible state is ever
observed after a rebalance.

Unfortunately there are currently a couple of not necessarily uncommon
scenarios which will trigger this observed state today:

 1. If we have fewer devices than replicas
 2. If a server's or zone's aggregate device weight makes it the most
    appropriate candidate for multiple replicas and you're a bit unlucky

Fixing #1 would be easy, we should just not allow that state anymore.
Really we never did - if you have a 3 replica ring with one device - you
have one replica.  Everything that iter_nodes'd would de-dupe.  We
should just be insisting that you explicitly acknowledge your replica
count with set_replicas.

I have been lost in the abyss for days searching for a general solution
to #2.  I'm sure it exists, but I will not have wrestled it to
submission by RC1.  In the meantime we can eliminate a great deal of the
luck required simply by refusing to place more than one replica of a
part on a device in assign_parts.

The meat of the change is a small update to the .validate method in
RingBuilder.  It basically unrolls a pre-existing (part, replica) loop
so that all the replicas of a part come out in order, letting us build
up, part by part, the set of dev_ids to which all the replicas of a
given part are assigned.

If we observe any duplicates - we raise a warning.

To clean the cobwebs out of the rest of the corner cases we're going to
delay get_required_overload from kicking in until we achieve dispersion,
and a small check was added when selecting a device subtier to validate
if it's already being used - picking any other device in the tier works
out much better.  If no other devices are available in the tier - we
raise a warning.  A more elegant or optimized solution may exist.

Many unittests did not meet criterion #1, but the fix was
straightforward after being identified by the pigeonhole check.

However, many more tests were affected by #2 - but again the fix came to
be simply adding more devices.  The fantasy that all failure domains
contain at least replica count devices is prevalent in both our ring
placement algorithm and its tests.  These tests were trying to
demonstrate some complex characteristics of our ring placement algorithm
and I believe we just got a bit too carried away trying to find the
simplest possible example to demonstrate the desirable trait.  I think
a better example looks more like a real ring - with many devices in each
server and many servers in each zone - I think more devices makes the
tests better.  As much as possible I've tried to maintain the original
intent of the tests - when adding devices I've either spread the weight
out amongst them or added proportional weights to the other tiers.

I added an example straw man test to validate that three devices with
different weights in three different zones won't blow up.  Once we can
do that without raising warnings and assigning duplicate device part
replicas - we can add more.  And more importantly change the warnings to
errors - because we would much prefer to not do that #$%^ anymore.

Co-Authored-By: Kota Tsuyuzaki <tsuyuzaki.kota@lab.ntt.co.jp>
Related-Bug: #1452431
Change-Id: I592d5b611188670ae842fe3d030aa3b340ac36f9
2015-10-02 16:42:25 -07:00
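A minimal sketch of the pigeonhole check described above (names are
assumed; the real check lives in RingBuilder's validate method):

    from collections import Counter

    def duplicate_part_assignments(replica2part2dev, parts):
        """Return (part, [dev_ids]) pairs where a device holds >1 replica."""
        problems = []
        for part in range(parts):
            devs = [part2dev[part] for part2dev in replica2part2dev
                    if part < len(part2dev)]
            dupes = [dev for dev, n in Counter(devs).items() if n > 1]
            if dupes:
                problems.append((part, dupes))
        return problems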
Samuel Merritt ccf0758ef1 Add ring-builder analyzer.
This is a tool to help developers quantify changes to the ring
builder. It takes a scenario (JSON file) describing the builder's
basic parameters (part_power, replicas, etc.) and a number of
"rounds", where each round is a set of operations to perform on the
builder. For each round, the operations are applied, and then the
builder is rebalanced until it reaches a steady state.

The idea is that a developer observes the ring builder behaving
suboptimally, writes a scenario to reproduce the behavior, modifies
the ring builder to fix it, and references the scenario with the
commit so that others can see that things have improved.

I decided to write this after writing my fourth or fifth hacky one-off
script to reproduce some bad behavior in the ring builder.

Change-Id: I114242748368f142304aab90a6d99c1337bced4c
2015-07-02 08:16:03 -07:00
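A hypothetical scenario of the sort the analyzer consumes (field names
and command shapes are illustrative; see the tool's documentation for
the real schema), expressed here as the Python dict the JSON loads into:

    scenario = {
        'part_power': 10,
        'replicas': 3,
        'overload': 0.0,
        'random_seed': 42,
        'rounds': [
            # round 1: build out a small cluster
            [['add', 'r1z1-10.0.0.1:6200/sda', 100],
             ['add', 'r1z2-10.0.0.2:6200/sda', 100],
             ['add', 'r1z3-10.0.0.3:6200/sda', 100]],
            # round 2: change a weight and let it re-converge
            [['set_weight', 0, 150]],
        ],
    }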
Darrell Bishop df134df901 Allow 1+ object-servers-per-disk deployment
Enabled by a new > 0 integer config value, "servers_per_port" in the
[DEFAULT] config section for object-server and/or replication server
configs.  The setting's integer value determines how many different
object-server workers handle requests for any single unique local port
in the ring.  In this mode, the parent swift-object-server process
continues to run as the original user (i.e. root if low-port binding
is required), binds to all ports as defined in the ring, and forks off
the specified number of workers per listen socket.  The child, per-port
servers drop privileges and behave pretty much how object-server workers
always have, except that because the ring has unique ports per disk, the
object-servers will only be handling requests for a single disk.  The
parent process detects dead servers and restarts them (with the correct
listen socket), starts missing servers when an updated ring file is
found with a device on the server with a new port, and kills extraneous
servers when their port is found to no longer be in the ring.  The ring
files are stat'ed at most every "ring_check_interval" seconds, as
configured in the object-server config (same default of 15s).

Immediately stopping all swift-object-worker processes still works by
sending the parent a SIGTERM.  Likewise, a SIGHUP to the parent process
still causes the parent process to close all listen sockets and exit,
allowing existing children to finish serving their existing requests.
The drop_privileges helper function now has an optional param to
suppress the setsid() call, which otherwise screws up the child workers'
process management.

The class method RingData.load() can be told to only load the ring
metadata (i.e. everything except replica2part2dev_id) with the optional
kwarg, header_only=True.  This is used to keep the parent and all
forked off workers from unnecessarily having full copies of all storage
policy rings in memory.

A new helper class, swift.common.storage_policy.BindPortsCache,
provides a method to return a set of all device ports in all rings for
the server on which it is instantiated (identified by its set of IP
addresses).  The BindPortsCache instance will track mtimes of ring
files, so they are not opened more frequently than necessary.

This patch includes enhancements to the probe tests and
object-replicator/object-reconstructor config plumbing to allow the
probe tests to work correctly both in the "normal" config (same IP but
unique ports for each SAIO "server") and a server-per-port setup where
each SAIO "server" must have a unique IP address and unique port per
disk within each "server".  The main probe tests only work with 4
servers and 4 disks, but you can see the difference in the rings for the
EC probe tests where there are 2 disks per server for a total of 8
disks.  Specifically, swift.common.ring.utils.is_local_device() will
ignore the ports when the "my_port" argument is None.  Then,
object-replicator and object-reconstructor both set self.bind_port to
None if server_per_port is enabled.  Bonus improvement for IPv6
addresses in is_local_device().

This PR for vagrant-swift-all-in-one will aid in testing this patch:
https://github.com/swiftstack/vagrant-swift-all-in-one/pull/16/

Also allow SAIO to answer is_local_device() better; common SAIO setups
have multiple "servers" all on the same host with different ports for
the different "servers" (which happen to match the IPs specified in the
rings for the devices on each of those "servers").

However, you can configure the SAIO to have different localhost IP
addresses (e.g. 127.0.0.1, 127.0.0.2, etc.) in the ring and in the
servers' config files' bind_ip setting.

This new whataremyips() implementation combined with a little plumbing
allows is_local_device() to accurately answer, even on an SAIO.

In the default case (an unspecified bind_ip defaults to '0.0.0.0') as
well as an explicit "bind to everything" like '0.0.0.0' or '::',
whataremyips() behaves as it always has, returning all IP addresses for
the server.

Also updated probe tests to handle each "server" in the SAIO having a
unique IP address.

For some (noisy) benchmarks that show servers_per_port=X is at least as
good as the same number of "normal" workers:
https://gist.github.com/dbishop/c214f89ca708a6b1624a#file-summary-md

Benchmarks showing the benefits of I/O isolation with a small number of
slow disks:
https://gist.github.com/dbishop/fd0ab067babdecfb07ca#file-results-md

If you were wondering what the overhead of threads_per_disk looks like:
https://gist.github.com/dbishop/1d14755fedc86a161718#file-tabular_results-md

DocImpact

Change-Id: I2239a4000b41a7e7cc53465ce794af49d44796c6
2015-06-18 12:43:50 -07:00
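A rough sketch of the idea behind BindPortsCache (simplified; the real
class also tracks ring-file mtimes so rings are not re-read more often
than necessary):

    def ports_for_this_server(rings, my_ips):
        """Collect every port any ring assigns to a device on this server."""
        ports = set()
        for ring in rings:
            for dev in ring.devs:
                if dev and dev['ip'] in my_ips:
                    ports.add(dev['port'])
        return ports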
Samuel Merritt af72881d1d Use just IP, not port, when determining partition placement
In the ring builder, we place partitions with maximum possible
dispersion across tiers, where a "tier" is region, then zone, then
IP/port, then device. Now, instead of IP/port, just use IP. The port
wasn't really getting us anything; two different object servers on two
different ports on one machine aren't separate failure
domains. However, if someone has only a few machines and is using one
object server on its own port per disk, then the ring builder would
end up with every disk in its own IP/port tier, resulting in bad (with
respect to durability) partition placement.

For example: assume 1 region, 1 zone, 4 machines, 48 total disks (12
per machine), and one object server (and hence one port) per
disk. With the old behavior, partition replicas will all go in the one
region, then the one zone, then pick one of 48 IP/port pairs, then
pick the one disk therein. This gives the same result as randomly
picking 3 disks (without replacement) to store data on; it completely
ignores machine boundaries.

With the new behavior, the replica placer will pick the one region,
then the one zone, then one of 4 IPs, then one of 12 disks
therein. This gives the optimal placement with respect to durability.

The same applies to Ring.get_more_nodes().

Co-Authored-By: Kota Tsuyuzaki <tsuyuzaki.kota@lab.ntt.co.jp>

Change-Id: Ibbd740c51296b7e360845b5309d276d7383a3742
2015-06-17 11:31:55 -07:00
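Illustratively, the change amounts to dropping the port from the third
level of a device's tier tuple (tuple shapes assumed for this sketch):

    dev = {'region': 1, 'zone': 1, 'ip': '10.0.0.1', 'port': 6200, 'id': 7}

    # before: two ports on one machine looked like separate failure domains
    old_tier = (dev['region'], dev['zone'], (dev['ip'], dev['port']), dev['id'])

    # after: all devices on a machine share the same third-level tier
    new_tier = (dev['region'], dev['zone'], dev['ip'], dev['id'])

With the port removed, the 48 single-disk object servers in the example
above collapse into 4 third-level tiers, one per machine.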
Samuel Merritt a9b5982d52 Fix account-reaper
As part of commit efb39a5, the account reaper grew a bind_port
attribute, but it wasn't being converted to int, so naturally "6002"
!= 6002, and it wouldn't reap anything.

The bind_port was only used for determining the local devices. Rather
than fix the code to call int(), this commit removes the need for
bind_port entirely by skipping the port check. If your rings have IPs,
this is the same behavior as pre-efb39a5, and if your rings have
hostnames, this still works.

Change-Id: I7bd18e9952f7b9e0d7ce2dce230ee54c5e23709a
2015-02-12 13:28:29 -08:00
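The type mismatch at the heart of the bug, in two lines:

    bind_port = "6002"         # read from the config file as a string
    print(bind_port == 6002)   # False, so no device ever looked "local"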
Hisashi Osanai efb39a5665 Allow hostnames for nodes in Rings
This change modifies swift-ring-builder and introduces a new format for
the sub-commands (search, list_parts, set_weight, set_info and remove),
in addition to the add sub-command, so that hostnames can be used in
place of an IP address for the sub-commands.
The account reaper, container synchronizer, and replicators were also
updated so that they still have a way to identify a particular device
as being "local".

Previously this was Change-Id:
Ie471902413002872fc6755bacd36af3b9c613b74

Change-Id: Ieff583ffb932133e3820744a3f8f9f491686b08d
Co-Authored-By: Alex Pecoraro <alex.pecoraro@emc.com>
Implements: blueprint allow-hostnames-for-nodes-in-rings
2015-02-02 05:06:03 +09:00
Clay Gerrard a8bd2f737c Add dispersion command to swift-ring-builder
Output a dispersion report that shows how many parts have each replica count
at each tier along with some additional context.  Also the max_dispersion is a
good canary for what a reasonable overload might be.

Also display a warning on rebalance if the ring's dispersion is sub-optimal.

The primitive form of the dispersion graph is cached on the builder, but the
dispersion command will build it on the fly if you have a ring that was last
rebalanced before the change.

Also add --force option to rebalance to make it write a ring even if less than
1% of parts moved.

Try to clarify dispersion and balance a little bit in the ring section of
the architectural overview.

Co-Authored-By: Christian Schwede <christian.schwede@enovance.com>
Co-Authored-By: Darrell Bishop <darrell@swiftstack.com>

Change-Id: I7696df25d092fac56588080722e0a4167ed2c824
2015-01-08 18:40:27 -08:00
Christian Schwede 0a5268c34c Fix bug in swift-ring-builder list_parts
The number of shown replicas in the partition list might differ from the
actual number of replicas (as shown in the bug report).

This code simply iterates over builder._replica2part2dev and
remembers the number of replicas for each partition.

The code to find the partitions was moved to swift/common/ring/utils.py
to make it easier to test, and a test to ensure the correct number of
replicas is returned was added.

Closes-Bug: 1370070
Change-Id: Id6a3ed437bb86df2f43f8b0b79aa8ccb50bbe13e
2014-09-29 19:38:54 +00:00
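A minimal sketch of the counting approach (attribute access shown for
illustration only): with fractional replica counts the last row of
_replica2part2dev is shorter, so the number of replicas a partition
actually has must be counted rather than assumed.

    def replica_count_for_part(builder, part):
        """Count how many replica rows actually assign this partition."""
        return sum(1 for part2dev in builder._replica2part2dev
                   if part < len(part2dev))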
anc 6aff48c6f1 Fix trivial typos
Fixes a few typos I have stumbled across recently.

Change-Id: Ib232924f6b23c08578c52a8dd63aaaa8789f9da7
2014-07-24 16:23:13 +01:00
Clay Gerrard 37e0654adb in case you lose your builder backups
Change-Id: Ica555be2be492c3ec5fdeab738058ff35989a603
2013-11-20 21:11:45 -08:00
Jenkins 0b594bc3af Merge "Change OpenStack LLC to Foundation" 2013-10-07 16:09:37 +00:00
ZhiQiang Fan f72704fc82 Change OpenStack LLC to Foundation
Change-Id: I7c3df47c31759dbeb3105f8883e2688ada848d58
Closes-bug: #1214176
2013-09-20 01:02:31 +08:00
Clay Gerrard b0aeed1ec7 Fix default replication options for ring-builder add
Change-Id: I957deeb0e711bfe7cd9d852726c77179a4613ee0
2013-09-16 20:02:20 -07:00
Peter Portante be1cff4f1f Pep8 unit test modules w/ <= 10 violations (5 of 12)
Change-Id: I8e82c14ada52d44df5a31e08982ac79cd7e5c969
Signed-off-by: Peter Portante <peter.portante@redhat.com>
2013-09-01 15:12:48 -04:00
Ilya Kharin 43bf568f48 Move parse search logic outside from builder
The dramatic part of RingBuilder.search_devs, which parses the complex
format of a device search string, moved to the swift-ring-builder
script. Instead, search_devs now has a simple interface to search
devices.

blueprint argparse-in-swift-ring-builder

Change-Id: If3dd77b297b474fb9a058e4693fef2dfb11fca3d
2013-05-24 17:12:34 +04:00
Samuel Merritt ebcd60f7d9 Add a region tier to Swift's ring.
The region is one level above the zone; it is intended to represent a
chunk of machines that is distant from others with respect to
bandwidth and latency.

Old rings will default to having all their devices in region 1. Since
everything is in the same region by default, the ring builder will
simply distribute across zones as it did before, so your partition
assignment won't move because of this change. If you start adding
devices in other regions, of course, the assignment will change to
take that into account.

swift-ring-builder still accepts the same syntax as before, but will
default added devices to region 1 if no region is specified.

Examples:

$ swift-ring-builder foo.builder add r2z1-1.2.3.4:555/sda

$ swift-ring-builder foo.builder add r1z3-1.2.3.4:555/sda

$ swift-ring-builder foo.builder add z3-1.2.3.4:555/sda

Also, some updates to ring-overview doc.

Change-Id: Ifefbb839cdcf033e6c9201fadca95224c7303a29
2013-03-13 10:00:58 -07:00
Kun Huang d9130d79e5 Correct docstring for swift.common.ring.utils.build_tier_tree and add
unit test for it.

There were some mistakes in the original docstring of that method, and
there were no unit tests for two methods in swift.common.ring.utils.

Fixes: bug #1070621

Change-Id: I6f4f211ea67d7fb8ccfe659f30bb0f5d394aca6b
2013-02-25 23:08:55 +08:00