Per-aggregate scheduling weight

Allow users to use an ``Aggregate``'s ``metadata`` to override the
global weight multiplier config options, giving more fine-grained
control over resource weights.

blueprint: per-aggregate-scheduling-weight

Change-Id: I6e15c6507d037ffe263a460441858ed454b02504
Yikun Jiang 2019-01-03 20:18:19 +08:00
parent 313becd5ff
commit e66443770d
26 changed files with 799 additions and 57 deletions

View File

@ -856,6 +856,24 @@ Hosts and cells are weighted based on the following options in the
- By default, the scheduler spreads instances across all hosts evenly.
Set the ``ram_weight_multiplier`` option to a negative number if you
prefer stacking instead of spreading. Use a floating-point value.
If the per-aggregate ``ram_weight_multiplier``
metadata is set, this multiplier will override the configuration option
value.
* - [DEFAULT]
- ``disk_weight_multiplier``
- By default, the scheduler spreads instances across all hosts evenly.
Set the ``disk_weight_multiplier`` option to a negative number if you
prefer stacking instead of spreading. Use a floating-point value.
If the per-aggregate ``disk_weight_multiplier``
metadata is set, this multiplier will override the configuration option
value.
* - [DEFAULT]
- ``cpu_weight_multiplier``
- By default, the scheduler spreads instances across all hosts evenly.
Set the ``cpu_weight_multiplier`` option to a negative number if you
prefer stacking instead of spreading. Use a floating-point value.
If the per-aggregate ``cpu_weight_multiplier`` metadata is set, this
multiplier will override the configuration option value.
* - [DEFAULT]
- ``scheduler_host_subset_size``
- New instances are scheduled on a host that is chosen randomly from a
@ -871,22 +889,37 @@ Hosts and cells are weighted based on the following options in the
- ``io_ops_weight_multiplier``
- Multiplier used for weighing host I/O operations. A negative value means
a preference to choose light workload compute hosts.
If the per-aggregate ``io_ops_weight_multiplier``
metadata is set, this multiplier will override the configuration option
value.
* - [filter_scheduler]
- ``soft_affinity_weight_multiplier``
- Multiplier used for weighing hosts for group soft-affinity. Only a
positive value is allowed.
If the per-aggregate ``soft_affinity_weight_multiplier``
metadata is set, this multiplier will override the configuration option
value.
* - [filter_scheduler]
- ``soft_anti_affinity_weight_multiplier``
- Multiplier used for weighing hosts for group soft-anti-affinity. Only a
positive value is allowed.
If the per-aggregate ``soft_anti_affinity_weight_multiplier``
metadata is set, this multiplier will override the configuration option
value.
* - [filter_scheduler]
- ``build_failure_weight_multiplier``
- Multiplier used for weighing hosts which have recent build failures. A
positive value increases the significance of build failures reported by
the host recently, making them less likely to be chosen.
If the per-aggregate ``build_failure_weight_multiplier``
metadata is set, this multiplier will override the configuration option
value.
* - [metrics]
- ``weight_multiplier``
- Multiplier for weighting meters. Use a floating-point value.
If the per-aggregate ``metrics_weight_multiplier``
metadata is set, this multiplier will override the configuration option
value.
* - [metrics]
- ``weight_setting``
- Determines how meters are weighted. Use a comma-separated list of
@ -991,8 +1024,9 @@ map flavors to host aggregates. Administrators do this by setting metadata on
a host aggregate, and matching flavor extra specifications. The scheduler then
endeavors to match user requests for instances of the given flavor to a host
aggregate with the same key-value pair in its metadata. Compute nodes can be
in more than one host aggregate.
in more than one host aggregate. Weight multipliers can be controlled on a
per-aggregate basis by setting the desired ``xxx_weight_multiplier`` aggregate
metadata.
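As a minimal illustration (the Keystone endpoint, credentials, and aggregate ID
below are placeholders, and this snippet is an editor's sketch rather than part
of the change), such metadata could be set with python-novaclient along these
lines::

    from keystoneauth1 import loading, session
    from novaclient import client

    # Authenticate against a hypothetical Keystone endpoint.
    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(
        auth_url='http://controller:5000/v3',
        username='admin', password='secret', project_name='admin',
        user_domain_name='Default', project_domain_name='Default')
    nova = client.Client('2.1', session=session.Session(auth=auth))

    # Override the CPU weight multiplier for every host in aggregate 1.
    # Metadata values are stored as strings and cast to float by the
    # scheduler; invalid values fall back to the config option.
    nova.aggregates.set_metadata(1, {'cpu_weight_multiplier': '2.0'})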
Administrators are able to optionally expose a host aggregate as an
availability zone. Availability zones are different from host aggregates in
that they are explicitly exposed to the user, and hosts can only be in a single

View File

@ -403,24 +403,54 @@ The Filter Scheduler weighs hosts based on the config option
:oslo.config:option:`filter_scheduler.ram_weight_multiplier`, is negative, the
host with least RAM available will win (useful for stacking hosts, instead
of spreading).
Starting with the Stein release, if a per-aggregate value with the key
``ram_weight_multiplier`` is found, this value is used as the RAM
weight multiplier. Otherwise, it falls back to
:oslo.config:option:`filter_scheduler.ram_weight_multiplier`. If more than
one value is found for a host in aggregate metadata, the minimum value is
used.
* |CPUWeigher| Compute weight based on available vCPUs on the compute node.
Sort with the largest weight winning. If the multiplier,
:oslo.config:option:`filter_scheduler.cpu_weight_multiplier`, is negative, the
host with least CPUs available will win (useful for stacking hosts, instead
of spreading).
Starting with the Stein release, if a per-aggregate value with the key
``cpu_weight_multiplier`` is found, this value is used as the CPU
weight multiplier. Otherwise, it falls back to
:oslo.config:option:`filter_scheduler.cpu_weight_multiplier`. If more than
one value is found for a host in aggregate metadata, the minimum value is
used.
* |DiskWeigher| Hosts are weighted and sorted by free disk space with the largest
weight winning. If the multiplier is negative, the host with less disk space available
will win (useful for stacking hosts, instead of spreading).
Starting with the Stein release, if a per-aggregate value with the key
``disk_weight_multiplier`` is found, this value is used as the disk
weight multiplier. Otherwise, it falls back to
:oslo.config:option:`filter_scheduler.disk_weight_multiplier`. If more than
one value is found for a host in aggregate metadata, the minimum value is
used.
* |MetricsWeigher| This weigher can compute the weight based on the compute node
host's various metrics. The to-be weighed metrics and their weighing ratio
are specified in the configuration file as follows::
metrics_weight_setting = name1=1.0, name2=-1.0
Starting with the Stein release, if a per-aggregate value with the key
``metrics_weight_multiplier`` is found, this value is used as the metrics
weight multiplier. Otherwise, it falls back to
:oslo.config:option:`metrics.weight_multiplier`. If more than one value is
found for a host in aggregate metadata, the minimum value is used.
* |IoOpsWeigher| The weigher can compute the weight based on the compute node
host's workload. The default is to preferably choose light workload compute
hosts. If the multiplier is positive, the weigher prefers heavy workload
compute hosts, and the weighing has the opposite effect of the default.
Starting with the Stein release, if a per-aggregate value with the key
``io_ops_weight_multiplier`` is found, this value is used as the I/O ops
weight multiplier. Otherwise, it falls back to
:oslo.config:option:`filter_scheduler.io_ops_weight_multiplier`. If more
than one value is found for a host in aggregate metadata, the minimum value
is used.
* |PCIWeigher| Compute a weighting based on the number of PCI devices on the
host and the number of PCI devices requested by the instance. For example,
@ -440,19 +470,43 @@ The Filter Scheduler weighs hosts based on the config option
force non-PCI instances away from non-PCI hosts, thus, causing future
scheduling issues.
Starting with the Stein release, if a per-aggregate value with the key
``pci_weight_multiplier`` is found, this value is used as the PCI
weight multiplier. Otherwise, it falls back to
:oslo.config:option:`filter_scheduler.pci_weight_multiplier`. If more than
one value is found for a host in aggregate metadata, the minimum value is
used.
* |ServerGroupSoftAffinityWeigher| The weigher can compute the weight based
on the number of instances that run on the same server group. The largest
weight defines the preferred host for the new instance. For the multiplier
only a positive value is allowed for the calculation.
Starting with the Stein release, if a per-aggregate value with the key
``soft_affinity_weight_multiplier`` is found, this value is used as the
soft affinity weight multiplier. Otherwise, it falls back to
:oslo.config:option:`filter_scheduler.soft_affinity_weight_multiplier`.
If more than one value is found for a host in aggregate metadata, the
minimum value is used.
* |ServerGroupSoftAntiAffinityWeigher| The weigher can compute the weight based
on the number of instances that run on the same server group as a negative
value. The largest weight defines the preferred host for the new instance.
For the multiplier only a positive value is allowed for the calculation.
Starting with the Stein release, if a per-aggregate value with the key
``soft_anti_affinity_weight_multiplier`` is found, this value is used as
the soft anti-affinity weight multiplier. Otherwise, it falls back to
:oslo.config:option:`filter_scheduler.soft_anti_affinity_weight_multiplier`.
If more than one value is found for a host in aggregate metadata, the
minimum value is used.
* |BuildFailureWeigher| Weigh hosts by the number of recent failed boot attempts.
It considers the build failure counter and can negatively weigh hosts with
recent failures. This avoids taking computes fully out of rotation.
Starting with the Stein release, if a per-aggregate value with the key
``build_failure_weight_multiplier`` is found, this value is used as the
build failure weight multiplier. Otherwise, it falls back to
:oslo.config:option:`filter_scheduler.build_failure_weight_multiplier`.
If more than one value is found for a host in aggregate metadata, the
minimum value is used (see the sketch after this list).
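A minimal standalone sketch of this shared resolution rule follows; the
function name is illustrative rather than nova API (nova's actual helper is
``get_weight_multiplier()`` in ``nova/scheduler/utils.py``, added by this
change)::

    def resolve_multiplier(aggregate_metadata_values, config_value):
        """Return the effective weight multiplier for one host.

        aggregate_metadata_values: values of e.g. 'ram_weight_multiplier'
        found in the metadata of every aggregate the host belongs to.
        config_value: the corresponding weight multiplier config option.
        """
        try:
            values = [float(v) for v in aggregate_metadata_values]
        except ValueError:
            # Invalid metadata: fall back to the configuration option.
            return config_value
        if not values:
            # No per-aggregate override set: use the configuration option.
            return config_value
        # Conflicting values across aggregates: the minimum wins.
        return min(values)

    assert resolve_multiplier([], 1.0) == 1.0               # config fallback
    assert resolve_multiplier(['2.0'], 1.0) == 2.0          # override
    assert resolve_multiplier(['2.0', '1.5'], 1.0) == 1.5   # minimum wins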
Filter Scheduler makes a local list of acceptable hosts by repeated filtering and
weighing. Each time it chooses a host, it virtually consumes resources on it,

View File

@ -36,7 +36,7 @@ class MuteChildWeigher(weights.BaseCellWeigher):
MUTE_WEIGH_VALUE = 1.0
def weight_multiplier(self):
def weight_multiplier(self, host_state):
# negative multiplier => lower weight
return CONF.cells.mute_weight_multiplier

View File

@ -27,7 +27,7 @@ CONF = nova.conf.CONF
class RamByInstanceTypeWeigher(weights.BaseCellWeigher):
"""Weigh cells by instance_type requested."""
def weight_multiplier(self):
def weight_multiplier(self, host_state):
return CONF.cells.ram_weight_multiplier
def _weigh_object(self, cell, weight_properties):

View File

@ -31,7 +31,7 @@ class WeightOffsetWeigher(weights.BaseCellWeigher):
its weight_offset to 999999999999999 (highest weight wins)
"""
def weight_multiplier(self):
def weight_multiplier(self, host_state):
return CONF.cells.offset_weight_multiplier
def _weigh_object(self, cell, weight_properties):

View File

@ -34,6 +34,7 @@ from nova.objects import base as obj_base
from nova.objects import instance as obj_instance
from nova import rc_fields as fields
from nova import rpc
from nova.scheduler.filters import utils as filters_utils
LOG = logging.getLogger(__name__)
@ -1007,3 +1008,30 @@ def claim_resources(ctx, client, spec_obj, instance_uuid, alloc_req,
return client.claim_resources(ctx, instance_uuid, alloc_req, project_id,
user_id, allocation_request_version=allocation_request_version,
consumer_generation=None)
def get_weight_multiplier(host_state, multiplier_name, multiplier_config):
"""Given a HostState object, multplier_type name and multiplier_config,
returns the weight multiplier.
It reads the "multiplier_name" from "aggregate metadata" in host_state
to override the multiplier_config. If the aggregate metadata doesn't
contain the multiplier_name, the multiplier_config will be returned
directly.
:param host_state: The HostState object, which contains aggregate metadata
:param multiplier_name: The weight multiplier name, like
"cpu_weight_multiplier".
:param multiplier_config: The weight multiplier configuration value
"""
aggregate_vals = filters_utils.aggregate_values_from_key(host_state,
multiplier_name)
try:
value = filters_utils.validate_num_values(
aggregate_vals, multiplier_config, cast_to=float)
except ValueError as e:
LOG.warning("Could not decode '%(name)s' weight multiplier: %(exce)s",
{'exce': e, 'name': multiplier_name})
value = multiplier_config
return value
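A short usage sketch for this helper, assuming a nova development checkout
(it mirrors the unit test added later in this commit; the host and aggregate
values are illustrative)::

    from nova import objects
    from nova.scheduler import utils
    from nova.tests.unit.scheduler import fakes

    objects.register_all()

    host = fakes.FakeHostState('fake-host', 'node', {})
    # No matching aggregate metadata: the config value (1.0) is returned.
    assert utils.get_weight_multiplier(host, 'cpu_weight_multiplier', 1.0) == 1.0

    host.aggregates = [
        objects.Aggregate(id=1, name='agg1', hosts=['fake-host'],
                          metadata={'cpu_weight_multiplier': '2.0'}),
        objects.Aggregate(id=2, name='agg2', hosts=['fake-host'],
                          metadata={'cpu_weight_multiplier': '1.5'}),
    ]
    # Conflicting per-aggregate values: the minimum (1.5) overrides the config.
    assert utils.get_weight_multiplier(host, 'cpu_weight_multiplier', 1.0) == 1.5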

View File

@ -25,6 +25,7 @@ by preferring the hosts that has less instances from the given group.
from oslo_config import cfg
from oslo_log import log as logging
from nova.scheduler import utils
from nova.scheduler import weights
CONF = cfg.CONF
@ -55,15 +56,19 @@ class _SoftAffinityWeigherBase(weights.BaseHostWeigher):
class ServerGroupSoftAffinityWeigher(_SoftAffinityWeigherBase):
policy_name = 'soft-affinity'
def weight_multiplier(self):
return CONF.filter_scheduler.soft_affinity_weight_multiplier
def weight_multiplier(self, host_state):
return utils.get_weight_multiplier(
host_state, 'soft_affinity_weight_multiplier',
CONF.filter_scheduler.soft_affinity_weight_multiplier)
class ServerGroupSoftAntiAffinityWeigher(_SoftAffinityWeigherBase):
policy_name = 'soft-anti-affinity'
def weight_multiplier(self):
return CONF.filter_scheduler.soft_anti_affinity_weight_multiplier
def weight_multiplier(self, host_state):
return utils.get_weight_multiplier(
host_state, 'soft_anti_affinity_weight_multiplier',
CONF.filter_scheduler.soft_anti_affinity_weight_multiplier)
def _weigh_object(self, host_state, request_spec):
weight = super(ServerGroupSoftAntiAffinityWeigher, self)._weigh_object(

View File

@ -16,15 +16,18 @@ BuildFailure Weigher. Weigh hosts by the number of recent failed boot attempts.
"""
import nova.conf
from nova.scheduler import utils
from nova.scheduler import weights
CONF = nova.conf.CONF
class BuildFailureWeigher(weights.BaseHostWeigher):
def weight_multiplier(self):
def weight_multiplier(self, host_state):
"""Override the weight multiplier. Note this is negated."""
return -1 * CONF.filter_scheduler.build_failure_weight_multiplier
return -1 * utils.get_weight_multiplier(
host_state, 'build_failure_weight_multiplier',
CONF.filter_scheduler.build_failure_weight_multiplier)
def _weigh_object(self, host_state, weight_properties):
"""Higher weights win. Our multiplier is negative, so reduce our

View File

@ -17,11 +17,13 @@
CPU Weigher. Weigh hosts by their CPU usage.
The default is to spread instances across all hosts evenly. If you prefer
stacking, you can set the 'cpu_weight_multiplier' option to a negative
number and the weighing has the opposite effect of the default.
stacking, you can set the 'cpu_weight_multiplier' option (by configuration
or aggregate metadata) to a negative number and the weighing has the opposite
effect of the default.
"""
import nova.conf
from nova.scheduler import utils
from nova.scheduler import weights
CONF = nova.conf.CONF
@ -30,9 +32,11 @@ CONF = nova.conf.CONF
class CPUWeigher(weights.BaseHostWeigher):
minval = 0
def weight_multiplier(self):
def weight_multiplier(self, host_state):
"""Override the weight multiplier."""
return CONF.filter_scheduler.cpu_weight_multiplier
return utils.get_weight_multiplier(
host_state, 'cpu_weight_multiplier',
CONF.filter_scheduler.cpu_weight_multiplier)
def _weigh_object(self, host_state, weight_properties):
"""Higher weights win. We want spreading to be the default."""

View File

@ -16,11 +16,13 @@
Disk Weigher. Weigh hosts by their disk usage.
The default is to spread instances across all hosts evenly. If you prefer
stacking, you can set the 'disk_weight_multiplier' option to a negative
number and the weighing has the opposite effect of the default.
stacking, you can set the 'disk_weight_multiplier' option (by configuration
or aggregate metadata) to a negative number and the weighing has the opposite
effect of the default.
"""
import nova.conf
from nova.scheduler import utils
from nova.scheduler import weights
CONF = nova.conf.CONF
@ -29,9 +31,11 @@ CONF = nova.conf.CONF
class DiskWeigher(weights.BaseHostWeigher):
minval = 0
def weight_multiplier(self):
def weight_multiplier(self, host_state):
"""Override the weight multiplier."""
return CONF.filter_scheduler.disk_weight_multiplier
return utils.get_weight_multiplier(
host_state, 'disk_weight_multiplier',
CONF.filter_scheduler.disk_weight_multiplier)
def _weigh_object(self, host_state, weight_properties):
"""Higher weights win. We want spreading to be the default."""

View File

@ -17,11 +17,12 @@ Io Ops Weigher. Weigh hosts by their io ops number.
The default is to preferably choose light workload compute hosts. If you prefer
choosing heavy workload compute hosts, you can set 'io_ops_weight_multiplier'
option to a positive number and the weighing has the opposite effect of the
default.
option (by configuration or aggregate metadata) to a positive number and the
weighing has the opposite effect of the default.
"""
import nova.conf
from nova.scheduler import utils
from nova.scheduler import weights
CONF = nova.conf.CONF
@ -30,9 +31,11 @@ CONF = nova.conf.CONF
class IoOpsWeigher(weights.BaseHostWeigher):
minval = 0
def weight_multiplier(self):
def weight_multiplier(self, host_state):
"""Override the weight multiplier."""
return CONF.filter_scheduler.io_ops_weight_multiplier
return utils.get_weight_multiplier(
host_state, 'io_ops_weight_multiplier',
CONF.filter_scheduler.io_ops_weight_multiplier)
def _weigh_object(self, host_state, weight_properties):
"""Higher weights win. We want to choose light workload host

View File

@ -45,9 +45,11 @@ class MetricsWeigher(weights.BaseHostWeigher):
converter=float,
name="metrics.weight_setting")
def weight_multiplier(self):
def weight_multiplier(self, host_state):
"""Override the weight multiplier."""
return CONF.metrics.weight_multiplier
return utils.get_weight_multiplier(
host_state, 'metrics_weight_multiplier',
CONF.metrics.weight_multiplier)
def _weigh_object(self, host_state, weight_properties):
value = 0.0
@ -69,7 +71,7 @@ class MetricsWeigher(weights.BaseHostWeigher):
# factor, i.e. set the value so that this obj would end up
# at the end of the ordered weighed obj list
# Do nothing if ratio or weight_multiplier is 0.
if ratio * self.weight_multiplier() != 0:
if ratio * self.weight_multiplier(host_state) != 0:
return CONF.metrics.weight_of_unavailable
return value

View File

@ -18,10 +18,11 @@ PCI Affinity Weigher. Weigh hosts by their PCI availability.
Prefer hosts with PCI devices for instances with PCI requirements and vice
versa. Configure the importance of this affinitization using the
'pci_weight_multiplier' option.
'pci_weight_multiplier' option (by configuration or aggregate metadata).
"""
import nova.conf
from nova.scheduler import utils
from nova.scheduler import weights
CONF = nova.conf.CONF
@ -37,9 +38,11 @@ MAX_DEVS = 100
class PCIWeigher(weights.BaseHostWeigher):
def weight_multiplier(self):
def weight_multiplier(self, host_state):
"""Override the weight multiplier."""
return CONF.filter_scheduler.pci_weight_multiplier
return utils.get_weight_multiplier(
host_state, 'pci_weight_multiplier',
CONF.filter_scheduler.pci_weight_multiplier)
def _weigh_object(self, host_state, request_spec):
"""Higher weights win. We want to keep PCI hosts free unless needed.

View File

@ -16,11 +16,13 @@
RAM Weigher. Weigh hosts by their RAM usage.
The default is to spread instances across all hosts evenly. If you prefer
stacking, you can set the 'ram_weight_multiplier' option to a negative
number and the weighing has the opposite effect of the default.
stacking, you can set the 'ram_weight_multiplier' option (by configuration
or aggregate metadata) to a negative number and the weighing has the opposite
effect of the default.
"""
import nova.conf
from nova.scheduler import utils
from nova.scheduler import weights
CONF = nova.conf.CONF
@ -29,9 +31,11 @@ CONF = nova.conf.CONF
class RAMWeigher(weights.BaseHostWeigher):
minval = 0
def weight_multiplier(self):
def weight_multiplier(self, host_state):
"""Override the weight multiplier."""
return CONF.filter_scheduler.ram_weight_multiplier
return utils.get_weight_multiplier(
host_state, 'ram_weight_multiplier',
CONF.filter_scheduler.ram_weight_multiplier)
def _weigh_object(self, host_state, weight_properties):
"""Higher weights win. We want spreading to be the default."""

View File

@ -20,6 +20,7 @@ from nova.scheduler.client import report
from nova.scheduler import utils
from nova import test
from nova.tests.unit import fake_instance
from nova.tests.unit.scheduler import fakes
class TestUtils(test.NoDBTestCase):
@ -888,3 +889,52 @@ class TestUtils(test.NoDBTestCase):
self.assertTrue(res)
mock_is_rebuild.assert_called_once_with(mock.sentinel.spec_obj)
self.assertFalse(mock_client.claim_resources.called)
def test_get_weight_multiplier(self):
host_attr = {'vcpus_total': 4, 'vcpus_used': 6,
'cpu_allocation_ratio': 1.0}
host1 = fakes.FakeHostState('fake-host', 'node', host_attr)
host1.aggregates = [
objects.Aggregate(
id=1,
name='foo',
hosts=['fake-host'],
metadata={'cpu_weight_multiplier': 'invalid'},
)]
# Fall back to the given default value if the aggregate metadata is invalid.
self.assertEqual(
1.0,
utils.get_weight_multiplier(host1, 'cpu_weight_multiplier', 1.0)
)
host1.aggregates = [
objects.Aggregate(
id=1,
name='foo',
hosts=['fake-host'],
metadata={'cpu_weight_multiplier': '1.9'},
)]
# Get value from aggregate metadata
self.assertEqual(
1.9,
utils.get_weight_multiplier(host1, 'cpu_weight_multiplier', 1.0)
)
host1.aggregates = [
objects.Aggregate(
id=1,
name='foo',
hosts=['fake-host'],
metadata={'cpu_weight_multiplier': '1.9'}),
objects.Aggregate(
id=2,
name='foo',
hosts=['fake-host'],
metadata={'cpu_weight_multiplier': '1.8'}),
]
# Get min value from aggregate metadata
self.assertEqual(
1.8,
utils.get_weight_multiplier(host1, 'cpu_weight_multiplier', 1.0)
)

View File

@ -83,6 +83,7 @@ class SoftAffinityWeigherTestCase(SoftWeigherTestBase):
def setUp(self):
super(SoftAffinityWeigherTestCase, self).setUp()
self.weighers = [affinity.ServerGroupSoftAffinityWeigher()]
self.softaffin_weigher = affinity.ServerGroupSoftAffinityWeigher()
def test_soft_affinity_weight_multiplier_by_default(self):
self._do_test(policy='soft-affinity',
@ -104,12 +105,67 @@ class SoftAffinityWeigherTestCase(SoftWeigherTestBase):
expected_weight=2.0,
expected_host='host2')
def test_soft_affinity_weight_multiplier(self):
self.flags(soft_affinity_weight_multiplier=0.0,
group='filter_scheduler')
host_attr = {'instances': {'instance1': mock.sentinel}}
host1 = fakes.FakeHostState('fake-host', 'node', host_attr)
# By default, return the weight_multiplier configuration directly
self.assertEqual(0.0, self.softaffin_weigher.weight_multiplier(host1))
host1.aggregates = [
objects.Aggregate(
id=1,
name='foo',
hosts=['fake-host'],
metadata={'soft_affinity_weight_multiplier': '2'},
)]
# read the weight multiplier from metadata to override the config
self.assertEqual(2.0, self.softaffin_weigher.weight_multiplier(host1))
host1.aggregates = [
objects.Aggregate(
id=1,
name='foo',
hosts=['fake-host'],
metadata={'soft_affinity_weight_multiplier': '2'},
),
objects.Aggregate(
id=2,
name='foo',
hosts=['fake-host'],
metadata={'soft_affinity_weight_multiplier': '1.5'},
)]
# If the host is in multiple aggs and there are conflicting weight values
# in the metadata, we will use the min value among them
self.assertEqual(1.5, self.softaffin_weigher.weight_multiplier(host1))
def test_host_with_agg(self):
self.flags(soft_affinity_weight_multiplier=0.0,
group='filter_scheduler')
hostinfo_list = self._get_all_hosts()
aggs = [
objects.Aggregate(
id=1,
name='foo',
hosts=['fake-host'],
metadata={'soft_affinity_weight_multiplier': '1.5'},
)]
for h in hostinfo_list:
h.aggregates = aggs
weighed_host = self._get_weighed_host(hostinfo_list,
'soft-affinity')
self.assertEqual(1.5, weighed_host.weight)
self.assertEqual('host2', weighed_host.obj.host)
class SoftAntiAffinityWeigherTestCase(SoftWeigherTestBase):
def setUp(self):
super(SoftAntiAffinityWeigherTestCase, self).setUp()
self.weighers = [affinity.ServerGroupSoftAntiAffinityWeigher()]
self.antiaffin_weigher = affinity.ServerGroupSoftAntiAffinityWeigher()
def test_soft_anti_affinity_weight_multiplier_by_default(self):
self._do_test(policy='soft-anti-affinity',
@ -130,3 +186,57 @@ class SoftAntiAffinityWeigherTestCase(SoftWeigherTestBase):
self._do_test(policy='soft-anti-affinity',
expected_weight=2.0,
expected_host='host3')
def test_soft_anti_affinity_weight_multiplier(self):
self.flags(soft_anti_affinity_weight_multiplier=0.0,
group='filter_scheduler')
host_attr = {'instances': {'instance1': mock.sentinel}}
host1 = fakes.FakeHostState('fake-host', 'node', host_attr)
# By default, return the weight_multiplier configuration directly
self.assertEqual(0.0, self.antiaffin_weigher.weight_multiplier(host1))
host1.aggregates = [
objects.Aggregate(
id=1,
name='foo',
hosts=['fake-host'],
metadata={'soft_anti_affinity_weight_multiplier': '2'},
)]
# read the weight multiplier from metadata to override the config
self.assertEqual(2.0, self.antiaffin_weigher.weight_multiplier(host1))
host1.aggregates = [
objects.Aggregate(
id=1,
name='foo',
hosts=['fake-host'],
metadata={'soft_anti_affinity_weight_multiplier': '2'},
),
objects.Aggregate(
id=2,
name='foo',
hosts=['fake-host'],
metadata={'soft_anti_affinity_weight_multiplier': '1.5'},
)]
# If the host is in multiple aggs and there are conflicting weight values
# in the metadata, we will use the min value among them
self.assertEqual(1.5, self.antiaffin_weigher.weight_multiplier(host1))
def test_host_with_agg(self):
self.flags(soft_anti_affinity_weight_multiplier=0.0,
group='filter_scheduler')
hostinfo_list = self._get_all_hosts()
aggs = [
objects.Aggregate(
id=1,
name='foo',
hosts=['host1', 'host2', 'host3', 'host4'],
metadata={'soft_anti_affinity_weight_multiplier': '1.5'},
)]
for h in hostinfo_list:
h.aggregates = aggs
weighed_host = self._get_weighed_host(hostinfo_list,
'soft-anti-affinity')
self.assertEqual(1.5, weighed_host.weight)
self.assertEqual('host3', weighed_host.obj.host)

View File

@ -14,6 +14,7 @@
Tests For Scheduler build failure weights.
"""
from nova import objects
from nova.scheduler import weights
from nova.scheduler.weights import compute
from nova import test
@ -25,6 +26,7 @@ class BuildFailureWeigherTestCase(test.NoDBTestCase):
super(BuildFailureWeigherTestCase, self).setUp()
self.weight_handler = weights.HostWeightHandler()
self.weighers = [compute.BuildFailureWeigher()]
self.buildfailure_weigher = compute.BuildFailureWeigher()
def _get_weighed_host(self, hosts):
return self.weight_handler.get_weighed_objects(self.weighers,
@ -55,3 +57,60 @@ class BuildFailureWeigherTestCase(test.NoDBTestCase):
weighed_hosts = self._get_weighed_host(hosts)
self.assertEqual([0, -10, -100, -1000],
[wh.weight for wh in weighed_hosts])
def test_build_failure_weight_multiplier(self):
self.flags(build_failure_weight_multiplier=0.0,
group='filter_scheduler')
host_attr = {'failed_builds': 1}
host1 = fakes.FakeHostState('fake-host', 'node', host_attr)
# By default, return the weight_multiplier configuration directly
self.assertEqual(0.0,
self.buildfailure_weigher.weight_multiplier(host1))
host1.aggregates = [
objects.Aggregate(
id=1,
name='foo',
hosts=['fake-host'],
metadata={'build_failure_weight_multiplier': '1000.0'},
)]
# read the weight multiplier from metadata to override the config
self.assertEqual(-1000,
self.buildfailure_weigher.weight_multiplier(host1))
host1.aggregates = [
objects.Aggregate(
id=1,
name='foo',
hosts=['fake-host'],
metadata={'build_failure_weight_multiplier': '500'},
),
objects.Aggregate(
id=2,
name='foo',
hosts=['fake-host'],
metadata={'build_failure_weight_multiplier': '1000'},
)]
# If the host is in multiple aggs and there are conflicting weight values
# in the metadata, we will use the min value among them
self.assertEqual(-500,
self.buildfailure_weigher.weight_multiplier(host1))
def test_host_with_agg(self):
self.flags(build_failure_weight_multiplier=0.0,
group='filter_scheduler')
hostinfo_list = self._get_all_hosts()
aggs = [
objects.Aggregate(
id=1,
name='foo',
hosts=['host1', 'host2', 'host3', 'host4'],
metadata={'build_failure_weight_multiplier': '1000'},
)]
for h in hostinfo_list:
h.aggregates = aggs
weights = self.weight_handler.get_weighed_objects(self.weighers,
hostinfo_list, {})
self.assertEqual([0, -10, -100, -1000],
[wh.weight for wh in weights])

View File

@ -16,6 +16,7 @@
Tests For Scheduler CPU weights.
"""
from nova import objects
from nova.scheduler import weights
from nova.scheduler.weights import cpu
from nova import test
@ -27,6 +28,7 @@ class CPUWeigherTestCase(test.NoDBTestCase):
super(CPUWeigherTestCase, self).setUp()
self.weight_handler = weights.HostWeightHandler()
self.weighers = [cpu.CPUWeigher()]
self.cpu_weigher = cpu.CPUWeigher()
def _get_weighed_host(self, hosts, weight_properties=None):
if weight_properties is None:
@ -127,3 +129,62 @@ class CPUWeigherTestCase(test.NoDBTestCase):
weighed_host = weights[-1]
self.assertEqual(0, weighed_host.weight)
self.assertEqual('negative', weighed_host.obj.host)
def test_cpu_weigher_multiplier(self):
self.flags(cpu_weight_multiplier=-1.0, group='filter_scheduler')
host_attr = {'vcpus_total': 4, 'vcpus_used': 6,
'cpu_allocation_ratio': 1.0}
host1 = fakes.FakeHostState('fake-host', 'node', host_attr)
# By default, return the cpu_weight_multiplier configuration directly
self.assertEqual(-1, self.cpu_weigher.weight_multiplier(host1))
host1.aggregates = [
objects.Aggregate(
id=1,
name='foo',
hosts=['fake-host'],
metadata={'cpu_weight_multiplier': '2'},
)]
# read the weight multiplier from metadata to override the config
self.assertEqual(2.0, self.cpu_weigher.weight_multiplier(host1))
host1.aggregates = [
objects.Aggregate(
id=1,
name='foo',
hosts=['fake-host'],
metadata={'cpu_weight_multiplier': '2'},
),
objects.Aggregate(
id=2,
name='foo',
hosts=['fake-host'],
metadata={'cpu_weight_multiplier': '1.5'},
)]
# If the host is in multiple aggs and there are conflicting weight values
# in the metadata, we will use the min value among them
self.assertEqual(1.5, self.cpu_weigher.weight_multiplier(host1))
def test_host_with_agg(self):
self.flags(cpu_weight_multiplier=-1.0, group='filter_scheduler')
hostinfo_list = self._get_all_hosts()
aggs = [
objects.Aggregate(
id=1,
name='foo',
hosts=['host1', 'host2', 'host3', 'host4'],
metadata={'cpu_weight_multiplier': '1.5'},
)]
for h in hostinfo_list:
h.aggregates = aggs
# host1: vcpus_free=0
# host2: vcpus_free=2
# host3: vcpus_free=6
# host4: vcpus_free=8
# so, host4 should win
weights = self.weight_handler.get_weighed_objects(self.weighers,
hostinfo_list, {})
weighed_host = weights[0]
self.assertEqual(1.5, weighed_host.weight)
self.assertEqual('host4', weighed_host.obj.host)

View File

@ -16,6 +16,7 @@
Tests For Scheduler disk weights.
"""
from nova import objects
from nova.scheduler import weights
from nova.scheduler.weights import disk
from nova import test
@ -27,6 +28,7 @@ class DiskWeigherTestCase(test.NoDBTestCase):
super(DiskWeigherTestCase, self).setUp()
self.weight_handler = weights.HostWeightHandler()
self.weighers = [disk.DiskWeigher()]
self.disk_weigher = disk.DiskWeigher()
def _get_weighed_host(self, hosts, weight_properties=None):
if weight_properties is None:
@ -109,3 +111,61 @@ class DiskWeigherTestCase(test.NoDBTestCase):
weighed_host = weights[-1]
self.assertEqual(0, weighed_host.weight)
self.assertEqual('negative', weighed_host.obj.host)
def test_disk_weigher_multiplier(self):
self.flags(disk_weight_multiplier=-1.0, group='filter_scheduler')
host_attr = {'free_disk_mb': 5120}
host1 = fakes.FakeHostState('fake-host', 'node', host_attr)
# By default, return the weight_multiplier configuration directly
self.assertEqual(-1, self.disk_weigher.weight_multiplier(host1))
host1.aggregates = [
objects.Aggregate(
id=1,
name='foo',
hosts=['fake-host'],
metadata={'disk_weight_multiplier': '2'},
)]
# read the weight multiplier from metadata to override the config
self.assertEqual(2.0, self.disk_weigher.weight_multiplier(host1))
host1.aggregates = [
objects.Aggregate(
id=1,
name='foo',
hosts=['fake-host'],
metadata={'disk_weight_multiplier': '2'},
),
objects.Aggregate(
id=2,
name='foo',
hosts=['fake-host'],
metadata={'disk_weight_multiplier': '1.5'},
)]
# If the host is in multiple aggs and there are conflicting weight values
# in the metadata, we will use the min value among them
self.assertEqual(1.5, self.disk_weigher.weight_multiplier(host1))
def test_host_with_agg(self):
self.flags(disk_weight_multiplier=-1.0, group='filter_scheduler')
hostinfo_list = self._get_all_hosts()
aggs = [
objects.Aggregate(
id=1,
name='foo',
hosts=['host1', 'host2', 'host3', 'host4'],
metadata={'disk_weight_multiplier': '1.5'},
)]
for h in hostinfo_list:
h.aggregates = aggs
# host1: free_disk_mb=5120
# host2: free_disk_mb=10240
# host3: free_disk_mb=30720
# host4: free_disk_mb=81920
# so, host4 should win:
weights = self.weight_handler.get_weighed_objects(self.weighers,
hostinfo_list, {})
weighed_host = weights[0]
self.assertEqual(1.0 * 1.5, weighed_host.weight)
self.assertEqual('host4', weighed_host.obj.host)

View File

@ -13,6 +13,7 @@
Tests For Scheduler IoOpsWeigher weights
"""
from nova import objects
from nova.scheduler import weights
from nova.scheduler.weights import io_ops
from nova import test
@ -25,6 +26,7 @@ class IoOpsWeigherTestCase(test.NoDBTestCase):
super(IoOpsWeigherTestCase, self).setUp()
self.weight_handler = weights.HostWeightHandler()
self.weighers = [io_ops.IoOpsWeigher()]
self.ioops_weigher = io_ops.IoOpsWeigher()
def _get_weighed_host(self, hosts, io_ops_weight_multiplier):
if io_ops_weight_multiplier is not None:
@ -67,3 +69,58 @@ class IoOpsWeigherTestCase(test.NoDBTestCase):
self._do_test(io_ops_weight_multiplier=2.0,
expected_weight=2.0,
expected_host='host4')
def test_io_ops_weight_multiplier(self):
self.flags(io_ops_weight_multiplier=0.0,
group='filter_scheduler')
host_attr = {'num_io_ops': 1}
host1 = fakes.FakeHostState('fake-host', 'node', host_attr)
# By default, return the weight_multiplier configuration directly
self.assertEqual(0.0, self.ioops_weigher.weight_multiplier(host1))
host1.aggregates = [
objects.Aggregate(
id=1,
name='foo',
hosts=['fake-host'],
metadata={'io_ops_weight_multiplier': '1'},
)]
# read the weight multiplier from metadata to override the config
self.assertEqual(1.0, self.ioops_weigher.weight_multiplier(host1))
host1.aggregates = [
objects.Aggregate(
id=1,
name='foo',
hosts=['fake-host'],
metadata={'io_ops_weight_multiplier': '1'},
),
objects.Aggregate(
id=2,
name='foo',
hosts=['fake-host'],
metadata={'io_ops_weight_multiplier': '0.5'},
)]
# If the host is in multiple aggs and there are conflicting weight values
# in the metadata, we will use the min value among them
self.assertEqual(0.5, self.ioops_weigher.weight_multiplier(host1))
def test_host_with_agg(self):
self.flags(io_ops_weight_multiplier=0.0,
group='filter_scheduler')
hostinfo_list = self._get_all_hosts()
aggs = [
objects.Aggregate(
id=1,
name='foo',
hosts=['host1', 'host2', 'host3', 'host4'],
metadata={'io_ops_weight_multiplier': '1.5'},
)]
for h in hostinfo_list:
h.aggregates = aggs
weights = self.weight_handler.get_weighed_objects(self.weighers,
hostinfo_list, {})
weighed_host = weights[0]
self.assertEqual(1.0 * 1.5, weighed_host.weight)
self.assertEqual('host4', weighed_host.obj.host)

View File

@ -14,6 +14,7 @@ Tests For Scheduler metrics weights.
"""
from nova import exception
from nova import objects
from nova.objects import fields
from nova.objects import monitor_metric
from nova.scheduler import weights
@ -27,11 +28,21 @@ kernel = fields.MonitorMetricType.CPU_KERNEL_TIME
user = fields.MonitorMetricType.CPU_USER_TIME
def fake_metric(name, value):
return monitor_metric.MonitorMetric(name=name, value=value)
def fake_list(objs):
m_list = [fake_metric(name, val) for name, val in objs]
return monitor_metric.MonitorMetricList(objects=m_list)
class MetricsWeigherTestCase(test.NoDBTestCase):
def setUp(self):
super(MetricsWeigherTestCase, self).setUp()
self.weight_handler = weights.HostWeightHandler()
self.weighers = [metrics.MetricsWeigher()]
self.metrics_weigher = metrics.MetricsWeigher()
def _get_weighed_host(self, hosts, setting, weight_properties=None):
if not weight_properties:
@ -42,13 +53,6 @@ class MetricsWeigherTestCase(test.NoDBTestCase):
hosts, weight_properties)[0]
def _get_all_hosts(self):
def fake_metric(name, value):
return monitor_metric.MonitorMetric(name=name, value=value)
def fake_list(objs):
m_list = [fake_metric(name, val) for name, val in objs]
return monitor_metric.MonitorMetricList(objects=m_list)
host_values = [
('host1', 'node1', {'metrics': fake_list([(idle, 512),
(kernel, 1)])}),
@ -181,3 +185,59 @@ class MetricsWeigherTestCase(test.NoDBTestCase):
self.flags(required=False, group='metrics')
setting = [idle + '=0.0001', user + '=-1']
self._do_test(setting, 1.0, 'host5')
def test_metrics_weigher_multiplier(self):
self.flags(weight_multiplier=-1.0, group='metrics')
host_attr = {'metrics': fake_list([(idle, 512), (kernel, 1)])}
host1 = fakes.FakeHostState('fake-host', 'node', host_attr)
# By default, return the weight_multiplier configuration directly
self.assertEqual(-1, self.metrics_weigher.weight_multiplier(host1))
host1.aggregates = [
objects.Aggregate(
id=1,
name='foo',
hosts=['fake-host'],
metadata={'metrics_weight_multiplier': '2'},
)]
# read the weight multiplier from metadata to override the config
self.assertEqual(2.0, self.metrics_weigher.weight_multiplier(host1))
host1.aggregates = [
objects.Aggregate(
id=1,
name='foo',
hosts=['fake-host'],
metadata={'metrics_weight_multiplier': '2'},
),
objects.Aggregate(
id=2,
name='foo',
hosts=['fake-host'],
metadata={'metrics_weight_multiplier': '1.5'},
)]
# If the host is in multiple aggs and there are conflicting weight values
# in the metadata, we will use the min value among them
self.assertEqual(1.5, self.metrics_weigher.weight_multiplier(host1))
def test_host_with_agg(self):
# host1: idle=512, kernel=1
# host2: idle=1024, kernel=2
# host3: idle=3072, kernel=1
# host4: idle=8192, kernel=0
# so, host4 should win:
setting = [idle + '=0.0001', kernel]
hostinfo_list = self._get_all_hosts()
aggs = [
objects.Aggregate(
id=1,
name='foo',
hosts=['host1', 'host2', 'host3', 'host4', 'host5', 'host6'],
metadata={'metrics_weight_multiplier': '1.5'},
)]
for h in hostinfo_list:
h.aggregates = aggs
weighed_host = self._get_weighed_host(hostinfo_list, setting)
self.assertEqual(1.5, weighed_host.weight)
self.assertEqual('host4', weighed_host.obj.host)

View File

@ -26,30 +26,32 @@ from nova.tests.unit import fake_pci_device_pools as fake_pci
from nova.tests.unit.scheduler import fakes
def _create_pci_pool(count):
test_dict = copy.copy(fake_pci.fake_pool_dict)
test_dict['count'] = count
return objects.PciDevicePool.from_dict(test_dict)
def _create_pci_stats(counts):
if counts is None: # the pci_stats column is nullable
return None
pools = [_create_pci_pool(count) for count in counts]
return stats.PciDeviceStats(pools)
class PCIWeigherTestCase(test.NoDBTestCase):
def setUp(self):
super(PCIWeigherTestCase, self).setUp()
self.weight_handler = weights.HostWeightHandler()
self.weighers = [pci.PCIWeigher()]
self.pci_weigher = pci.PCIWeigher()
def _get_weighed_hosts(self, hosts, request_spec):
return self.weight_handler.get_weighed_objects(self.weighers,
hosts, request_spec)
def _get_all_hosts(self, host_values):
def _create_pci_pool(count):
test_dict = copy.copy(fake_pci.fake_pool_dict)
test_dict['count'] = count
return objects.PciDevicePool.from_dict(test_dict)
def _create_pci_stats(counts):
if counts is None: # the pci_stats column is nullable
return None
pools = [_create_pci_pool(count) for count in counts]
return stats.PciDeviceStats(pools)
return [fakes.FakeHostState(
host, node, {'pci_stats': _create_pci_stats(values)})
for host, node, values in host_values]
@ -169,3 +171,62 @@ class PCIWeigherTestCase(test.NoDBTestCase):
for weighed_host in weighed_hosts:
# the weigher normalizes all weights to 0 if they're all equal
self.assertEqual(0.0, weighed_host.weight)
def test_pci_weigher_multiplier(self):
self.flags(pci_weight_multiplier=0.0, group='filter_scheduler')
hosts = [500]
host1 = fakes.FakeHostState(
'fake-host', 'node', {'pci_stats': _create_pci_stats(hosts)})
# By default, return the weight_multiplier configuration directly
self.assertEqual(0.0, self.pci_weigher.weight_multiplier(host1))
host1.aggregates = [
objects.Aggregate(
id=1,
name='foo',
hosts=['fake-host'],
metadata={'pci_weight_multiplier': '2'},
)]
# read the weight multiplier from metadata to override the config
self.assertEqual(2.0, self.pci_weigher.weight_multiplier(host1))
host1.aggregates = [
objects.Aggregate(
id=1,
name='foo',
hosts=['fake-host'],
metadata={'pci_weight_multiplier': '2'},
),
objects.Aggregate(
id=2,
name='foo',
hosts=['fake-host'],
metadata={'pci_weight_multiplier': '1.5'},
)]
# If the host is in multiple aggs and there are conflicting weight values
# in the metadata, we will use the min value among them
self.assertEqual(1.5, self.pci_weigher.weight_multiplier(host1))
def test_host_with_agg(self):
self.flags(pci_weight_multiplier=0.0, group='filter_scheduler')
hosts = [
('host1', 'node1', [2, 2, 2]), # 6 devs
('host2', 'node2', [3, 1]), # 4 devs
]
hostinfo_list = self._get_all_hosts(hosts)
aggs = [
objects.Aggregate(
id=1,
name='foo',
hosts=['host1', 'host2'],
metadata={'pci_weight_multiplier': '1.5'},
)]
for h in hostinfo_list:
h.aggregates = aggs
spec_obj = objects.RequestSpec(pci_requests=None)
# host2, which has the least free PCI devices, should win
weighed_host = self._get_weighed_hosts(hostinfo_list, spec_obj)[0]
self.assertEqual(1.5, weighed_host.weight)
self.assertEqual('host2', weighed_host.obj.host)

View File

@ -16,6 +16,7 @@
Tests For Scheduler RAM weights.
"""
from nova import objects
from nova.scheduler import weights
from nova.scheduler.weights import ram
from nova import test
@ -27,6 +28,7 @@ class RamWeigherTestCase(test.NoDBTestCase):
super(RamWeigherTestCase, self).setUp()
self.weight_handler = weights.HostWeightHandler()
self.weighers = [ram.RAMWeigher()]
self.ram_weigher = ram.RAMWeigher()
def _get_weighed_host(self, hosts, weight_properties=None):
if weight_properties is None:
@ -109,3 +111,61 @@ class RamWeigherTestCase(test.NoDBTestCase):
weighed_host = weights[-1]
self.assertEqual(0, weighed_host.weight)
self.assertEqual('negative', weighed_host.obj.host)
def test_ram_weigher_multiplier(self):
self.flags(ram_weight_multiplier=-1.0, group='filter_scheduler')
host_attr = {'free_ram_mb': 5120}
host1 = fakes.FakeHostState('fake-host', 'node', host_attr)
# By default, return the weight_multiplier configuration directly
self.assertEqual(-1, self.ram_weigher.weight_multiplier(host1))
host1.aggregates = [
objects.Aggregate(
id=1,
name='foo',
hosts=['fake-host'],
metadata={'ram_weight_multiplier': '2'},
)]
# read the weight multiplier from metadata to override the config
self.assertEqual(2.0, self.ram_weigher.weight_multiplier(host1))
host1.aggregates = [
objects.Aggregate(
id=1,
name='foo',
hosts=['fake-host'],
metadata={'ram_weight_multiplier': '2'},
),
objects.Aggregate(
id=2,
name='foo',
hosts=['fake-host'],
metadata={'ram_weight_multiplier': '1.5'},
)]
# If the host is in multiple aggs and there are conflicting weight values
# in the metadata, we will use the min value among them
self.assertEqual(1.5, self.ram_weigher.weight_multiplier(host1))
def test_host_with_agg(self):
self.flags(ram_weight_multiplier=-1.0, group='filter_scheduler')
hostinfo_list = self._get_all_hosts()
aggs = [
objects.Aggregate(
id=1,
name='foo',
hosts=['host1', 'host2', 'host3', 'host4'],
metadata={'ram_weight_multiplier': '1.5'},
)]
for h in hostinfo_list:
h.aggregates = aggs
# host1: free_ram_mb=512
# host2: free_ram_mb=1024
# host3: free_ram_mb=3072
# host4: free_ram_mb=8192
# so, host4 should win:
weights = self.weight_handler.get_weighed_objects(self.weighers,
hostinfo_list, {})
weighed_host = weights[0]
self.assertEqual(1.0 * 1.5, weighed_host.weight)
self.assertEqual('host4', weighed_host.obj.host)

View File

@ -32,7 +32,7 @@ class TestWeigher(test.NoDBTestCase):
pass
self.assertEqual(1.0,
FakeWeigher().weight_multiplier())
FakeWeigher().weight_multiplier(None))
def test_no_weight_object(self):
class FakeWeigher(weights.BaseWeigher):

View File

@ -76,12 +76,17 @@ class BaseWeigher(object):
minval = None
maxval = None
def weight_multiplier(self):
def weight_multiplier(self, host_state):
"""How weighted this weigher should be.
Override this method in a subclass, so that the returned value is
read from a configuration option to permit operators specify a
multiplier for the weigher.
multiplier for the weigher. If the host is in an aggregate, a subclass's
implementation of this method can read the corresponding weight multiplier
key from the aggregate metadata of ``host_state`` and use it to override
the configured multiplier.
:param host_state: The HostState object.
"""
return 1.0
@ -138,6 +143,6 @@ class BaseWeightHandler(loadables.BaseLoader):
for i, weight in enumerate(weights):
obj = weighed_objs[i]
obj.weight += weigher.weight_multiplier() * weight
obj.weight += weigher.weight_multiplier(obj.obj) * weight
return sorted(weighed_objs, key=lambda x: x.weight, reverse=True)
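For out-of-tree weighers, the new ``host_state`` argument makes the same
override pattern available. A hedged sketch, assuming a nova development
environment (the weigher class and metadata key below are hypothetical)::

    from nova.scheduler import utils
    from nova.scheduler import weights


    class MyCustomWeigher(weights.BaseHostWeigher):
        """Hypothetical out-of-tree weigher using per-aggregate overrides."""

        def weight_multiplier(self, host_state):
            # The per-aggregate 'my_custom_weight_multiplier' metadata
            # overrides the default below; with multiple aggregates the
            # minimum value wins. A real weigher would read its default
            # from a config option instead of hard-coding 1.0.
            return utils.get_weight_multiplier(
                host_state, 'my_custom_weight_multiplier', 1.0)

        def _weigh_object(self, host_state, weight_properties):
            # Higher weights win; weigh on free RAM purely as an example.
            return host_state.free_ram_mb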

View File

@ -0,0 +1,15 @@
---
features:
- |
Added the ability to use an ``Aggregate``'s ``metadata`` to override the
global weight multiplier config options, giving more fine-grained control
over resource weights.
For example, the CPUWeigher weighs hosts based on the available vCPUs on
the compute node and multiplies that by the CPU weight multiplier. If a
per-aggregate value with the key ``cpu_weight_multiplier`` is found, this
value is used as the CPU weight multiplier. Otherwise, it falls back to
``[filter_scheduler]/cpu_weight_multiplier``. If more than one value is
found for a host in aggregate metadata, the minimum value is used.
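To make the arithmetic concrete, here is a rough sketch of how a
per-aggregate CPU weight multiplier combines with the normalized free-vCPU
weight (host names and numbers are illustrative, mirroring the unit tests in
this commit)::

    free_vcpus = {'host1': 0, 'host2': 2, 'host3': 6, 'host4': 8}
    multiplier = 1.5  # from aggregate metadata, overriding the config value

    lo, hi = min(free_vcpus.values()), max(free_vcpus.values())
    weights = {host: multiplier * (free - lo) / (hi - lo)
               for host, free in free_vcpus.items()}

    # host4 has the most free vCPUs, so it gets the highest weight (1.5).
    print(sorted(weights.items(), key=lambda kv: kv[1], reverse=True))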