Commit Graph

13 Commits

Author SHA1 Message Date
Balazs Gibizer 154ab7b2f9 Add debug log for scheduler weight calculation
We have all the weighers enabled by default and each can have its own
multiplier, making the final compute node order calculation pretty
complex. This patch adds some debug logging that helps in understanding
how the final ordering was reached.

Change-Id: I7606d6eb3e08548c1df9dc245ab39cced7de1fb5
2021-11-11 19:10:32 +01:00
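
A minimal sketch of the kind of logging this adds, assuming illustrative
names (weigh_host, normalized_weights, multipliers) rather than nova's
actual interfaces:

    import logging

    LOG = logging.getLogger(__name__)

    def weigh_host(host_state, normalized_weights, multipliers):
        """Sum up per-weigher weights for one host, logging each step."""
        total = 0.0
        for name, weight in normalized_weights.items():
            multiplier = multipliers.get(name, 1.0)
            contribution = multiplier * weight
            LOG.debug("%s: weigher %s: %s (multiplier) * %s (weight) = %s",
                      host_state, name, multiplier, weight, contribution)
            total += contribution
        LOG.debug("%s: final weight: %s", host_state, total)
        return total
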
Sean Mooney f3d48000b1 Add autopep8 to tox and pre-commit
autopep8 is a code formatting tool that makes Python code PEP 8
compliant without changing everything. Unlike black, it will
not radically change all code; the primary change to the
existing codebase is adding a newline after class-level docstrings.

This change adds a new tox autopep8 env to manually run it on your
code before you submit a patch. It also adds autopep8 to pre-commit,
so if you use pre-commit it will run autopep8 for you automatically.

This change runs autopep8 in diff mode with --exit-code in the pep8
tox env, so it will fail if autopep8 would modify your code when run
in in-place mode. This allows us to gate on autopep8 not modifying
patches that are submitted, ensuring the authorship of patches is
maintained.

The intent of this change is to save the large amount of time we spend
on ensuring style guidelines are followed by automating that check,
making it simpler for both new and old contributors to work on nova and
saving time and effort for all involved.

Change-Id: Idd618d634cc70ae8d58fab32f322e75bfabefb9d
2021-11-08 12:37:27 +00:00
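
A rough sketch of the gate check described above, as a standalone script
(assuming autopep8 is installed; the helper name and default path are
illustrative):

    import subprocess
    import sys

    def autopep8_is_clean(paths):
        """Return True if autopep8 would leave the given paths unchanged."""
        # --diff prints what would change; --exit-code makes autopep8 exit
        # non-zero when such a diff exists, which is what we gate on.
        cmd = ["autopep8", "--diff", "--exit-code", "--recursive"] + list(paths)
        return subprocess.run(cmd).returncode == 0

    if __name__ == "__main__":
        if not autopep8_is_clean(sys.argv[1:] or ["nova"]):
            sys.exit("autopep8 would modify these files; please reformat first")
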
Takashi Natsume 5191b4f2f0 Remove six.add_metaclass
Replace six.add_metaclass with Python 3 style code.

Change-Id: Ifc3f2bcb8fcdd2b555864bd4e22a973a7858c272
Implements: blueprint six-removal
Signed-off-by: Takashi Natsume <takanattie@gmail.com>
2020-08-15 07:45:39 +00:00
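
An illustrative before/after of the replacement (the class name here is
made up; abc.ABCMeta stands in for whichever metaclass a module uses):

    import abc

    # Before (Python 2/3 compatible, via six):
    #
    #     @six.add_metaclass(abc.ABCMeta)
    #     class BaseWeigherExample(object):
    #         ...
    #
    # After (native Python 3 syntax):
    class BaseWeigherExample(metaclass=abc.ABCMeta):

        @abc.abstractmethod
        def weigh(self, host):
            """Return a weight for the given host."""
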
Johannes Kulik 5ab9ef11e2 Don't recompute weighers' minval/maxval attributes
Changing the minval/maxval attribute to the minimum/maximum of every
weigher run changes the outcome of future runs. We noticed it in the
SoftAffinityWeigher, where a previous run with a host hosting a lot of
instances for a server-group would make a later run use that maximum.
This resulted in the weight being lower than 1 for a host hosting all
instances of another server-group, if the number of instances of that
server-group on that host is less than a previous server-group's
instances on any host.

Previously, there were two places that computed the maxval/minval - once
in normalize() and once in weigh_objects() - but only the one in
weigh_objects() saved the values to the weigher.

The code now uses the maxval/minval as defined by the weigher and keeps
the weights inside the maxval-minval range. There's also only one place
to compute the minval/maxval now, if the weigher did not set a value:
normalize().

Closes-Bug: 1870096

Change-Id: I60a90dabcd21b4e049e218c7c55fa075bb7ff933
2020-04-01 11:46:48 +02:00
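
A minimal sketch of the normalization described above: weigher-supplied
minval/maxval win when set, otherwise the range is derived from the
current list only and never stored back on the weigher (the function
shape is illustrative, not nova's exact code):

    def normalize(weights, minval=None, maxval=None):
        """Scale raw weights into the 0.0-1.0 range."""
        weights = list(weights)
        if not weights:
            return []
        # Fall back to this run's values only; nothing is persisted, so a
        # previous run cannot skew a later one.
        if minval is None:
            minval = min(weights)
        if maxval is None:
            maxval = max(weights)
        span = maxval - minval
        if span == 0:
            return [0.0 for _ in weights]
        # Clamp to the weigher-defined range before scaling.
        return [(min(max(w, minval), maxval) - minval) / span for w in weights]
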
Yikun Jiang e66443770d Per aggregate scheduling weight
This spec proposes to add the ability for users to use an
``Aggregate``'s ``metadata`` to override the global config options
for weights, achieving more fine-grained control over resource
weights.

blueprint: per-aggregate-scheduling-weight

Change-Id: I6e15c6507d037ffe263a460441858ed454b02504
2019-01-21 11:48:44 +08:00
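
A sketch of how a weigher could prefer a multiplier from aggregate
metadata over the global config option; the helper name, metadata
handling, and the min() tie-break are assumptions made for illustration:

    def get_weight_multiplier(host_state, metadata_key, config_default):
        """Pick the multiplier from aggregate metadata, else the config value."""
        values = []
        for aggregate in getattr(host_state, "aggregates", []):
            raw = aggregate.metadata.get(metadata_key)
            if raw is None:
                continue
            try:
                values.append(float(raw))
            except ValueError:
                # Ignore malformed metadata and fall back to the default.
                continue
        if values:
            # If the host is in several aggregates, assume the smallest
            # override wins (an assumption of this sketch).
            return min(values)
        return config_default
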
yuhui_inspur fdd2c1ed98 Correct some spelling errors
Change-Id: I3c139565fc9300449eb25d87dfcc9d4177bc2085
2017-02-25 02:45:30 +00:00
jichenjc 139835900f Skip only one host weight calculation
If there is only one host available, calculating the weight
makes no sense because whatever the weight is, nova will
use that host.

Closes-Bug: 1448015

Change-Id: I38aed6a6e45d24dc0daf2e96c353f394f3ef5e3f
2015-05-05 11:40:09 +08:00
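
The short-circuit it describes, in sketch form (both names are
illustrative, not nova's actual entry point):

    def get_weighed_hosts(hosts, weigh_all):
        """Rank hosts, skipping the weighers when only one candidate is left.

        weigh_all is a callable running the full weigher pipeline.
        """
        hosts = list(hosts)
        if len(hosts) == 1:
            # Whatever weight the lone host would get, it will be chosen,
            # so the whole calculation can be skipped.
            return hosts
        return weigh_all(hosts)
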
Hans Lindgren c126d36640 Make scheduler filters/weighers only load once
Right now, filters/weighers are instantiated on every invocation of the
scheduler. This is both time-consuming and unnecessary. In cases where
a filter/weigher tries to be smart and store/cache something in between
invocations, this actually prevents that.

This change makes base filter/weigher functions take objects instead of
classes and lets schedulers create objects only once and then reuse
them.

This fixes a known bug in trusted_filter that tries to cache things.

Related to blueprint scheduler-optimization

Change-Id: I3174ab7968b51c43c0711033bac5d4bc30938b95
Closes-Bug: #1223450
2014-12-09 18:58:49 +01:00
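
A sketch of the pattern: instantiate filter/weigher objects once when
the scheduler is built and reuse them on every request, so any state
they cache survives between invocations (class and method names here
are illustrative):

    class SchedulerSketch(object):

        def __init__(self, weigher_classes):
            # Instantiated exactly once; per-weigher caches now persist
            # across scheduling requests instead of being thrown away.
            self.weighers = [cls() for cls in weigher_classes]

        def select_host(self, hosts, request):
            totals = {host: 0.0 for host in hosts}
            for weigher in self.weighers:
                for host in hosts:
                    totals[host] += weigher.weigh(host, request)
            return max(totals, key=totals.get)
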
Lianhao Lu 069ee2aed2 Using six.add_metaclass
Using six.add_metaclass instead of '__metaclass__' for Python 3.x
compatibility.

Change-Id: I04848196c8bc553fec19dd447a8fdd6dacdf64b8
2014-02-11 09:45:20 +08:00
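
An illustrative before/after of that migration (the class name is made
up; abc.ABCMeta stands in for any metaclass):

    import abc

    import six

    # Python 2 only:
    #
    #     class BaseWeigherExample(object):
    #         __metaclass__ = abc.ABCMeta
    #
    # Portable across Python 2 and 3 with six:
    @six.add_metaclass(abc.ABCMeta)
    class BaseWeigherExample(object):

        @abc.abstractmethod
        def weigh(self, host):
            """Return a weight for the given host."""
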
Alvaro Lopez Garcia e5ba849437 Normalize the weights instead of using raw values
The weight system is being used by the scheduler and the cells code.
Currently this system uses the raw values instead of normalizing them.
This makes it difficult to properly use multipliers for establishing the
relative importance between two weighers (one big magnitude could
overshadow a smaller one). This change introduces weight normalization so
that:

- From an operator point of view, we can prioritize the weighers that
  we are applying. The only way to do this is to be sure that all the
  weighers will give a value in a known range, so that there is no need
  to artificially use a huge multiplier to prioritize a weigher.

- From a weigher developer point of view, somebody wanting to implement
  one only has to care about 1) returning a list of values, and 2) setting
  the minimum and maximum values in which the weights can range, if they
  are needed and significant for the weighing. For a weigher developer
  there are two use cases:

    Case 1: Use of a percentage instead of absolute values (for example, %
    of free RAM). If we compare two nodes focusing on the percentage of free
    RAM, the maximum value for the weigher is 100. If we have two nodes, one
    with 2048 total/1024 free and the second with 1024 total/512 free, they
    will both get the same weight, since they have the same % of free RAM
    (that is, 50%).

    Case 2: Use of absolute values. In this case, the maximum of the weigher
    will be the maximum of the values in the list (in the case above, 1024)
    or the maximum value that the magnitude could take (in the case above,
    2048). How this maximum is set is a decision of the developer, who may
    let the operator choose the behaviour of the weigher.

- From the point of view of the scheduler, we ensure that it is using
  normalized values, rather than leaving the normalization to the
  weighers.

Changes introduced by this commit:

1) It introduces weight normalization so that we can apply multipliers
   easily. All the weights for an object will be normalized between 0.0 and
   1.0 before being summed up, so that the final weight for a host will be:

    weight = w1_multiplier * norm(w1) + w2_multiplier * norm(w2) + ...

2) weights.BaseWeigher has been changed into an ABC so that we enforce
   that all weighers have the expected methods.

3) weights.BaseWeigher.weigh_objects() no longer sums up the computed
   weights into the object, but rather returns a list that will then be
   normalized and added to the existing weight by BaseWeightHandler.

4) Adapt the existing weighers to the above changes. Namely
    - New 'offset_weight_multiplier' for the cell weigher
      nova.cells.weights.weight_offset.WeightOffsetWeigher
    - Changed the name of the existing multiplier methods.

5) Unit tests for all of the introduced changes.

Implements blueprint normalize-scheduler-weights

DocImpact: Now weights for an object are normalized before summing them
up. This means that each weigher will take a maximum value of 1. This
may have an impact for operators that are using more than one weigher
(currently there is only one weigher: RamWeigher) and for operators using
cells (where we have several weighers). Operators should then review the
multipliers they use and adjust them properly in case they have been
modified.

DocImpact: There is a new configuration option 'offset_weight_multiplier'
in nova.cells.weights.weight_offset.WeightOffsetWeigher

Change-Id: I81bf90898d3cb81541f4390596823cc00106eb20
2013-12-11 20:24:16 +01:00
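
A compact sketch of the scheme above: each weigher's raw weights are
min-max normalized to 0.0-1.0 and then summed with that weigher's
multiplier (the function and argument names are illustrative):

    def combined_weights(per_weigher_weights, multipliers, num_hosts):
        """per_weigher_weights maps weigher name -> list of raw weights,
        one entry per host; multipliers maps weigher name -> multiplier."""
        totals = [0.0] * num_hosts
        for name, raw in per_weigher_weights.items():
            minval, maxval = min(raw), max(raw)
            span = maxval - minval
            for i, value in enumerate(raw):
                norm = (value - minval) / span if span else 0.0
                totals[i] += multipliers.get(name, 1.0) * norm
        return totals

    # For example, a single free-RAM weigher with multiplier 2.0 over two hosts:
    #     combined_weights({"ram": [1024.0, 512.0]}, {"ram": 2.0}, 2)
    #     -> [2.0, 0.0]
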
Alex Glikson f16f41b1f2 Fixes typos in the files in the nova folder
blueprint fix-nova-typos

Change-Id: I0971b98999381183c0c77fff1d569180606e338b
2013-10-07 23:40:01 +02:00
Kurt Taylor d17f9ab13d Update OpenStack LLC to Foundation
Update all references of "LLC" to "Foundation".

Change-Id: I009e86784ef4dcf38882d64b0eff484576e04efe
2013-02-26 19:15:29 -05:00
Chris Behrens 2c6ab62ae2 Refactor scheduling weights.
This makes scheduling weights more plugin friendly and creates shared
code that can be used by the host scheduler as well as the future cells
scheduler.  Weighing classes can now be specified much like you can
specify scheduling host filters.

The new weights code reverses the old behavior where lower weights win.
Higher weights are now the winners.

The least_cost module and configs have been deprecated, but are still
supported for backwards compatibility.  The code has moved to
nova.scheduler.weights.least_cost and been modified to work with the new
loadable-class code.  If any of the least_cost related config options are
specified, this least_cost weigher will be used.

For those not overriding the default least_cost config values, the new
RamWeigher class will be used.  The default behavior of the RamWeigher
class is the same default behavior as the old least_cost module.

The new weights code introduces a new config option
'scheduler_weight_classes' which is used to specify which weigher classes
to use.  The default is 'all classes', but this is modified if the
deprecated least_cost config options are used, as mentioned above.

The RamWeigher class introduces a new config option
'ram_weight_multiplier'.  The default of 1.0 causes weights equal to the
free memory in MB to be returned, so hosts with more free memory are
preferred (spreading).  Changing this value to a negative number such as
-1.0 will cause the reverse behavior (fill first).

DocImpact

Change-Id: I1e5e5039c299db02f7287f2d33299ebf0b9732ce
2012-11-14 19:04:17 +00:00
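
A sketch of the RamWeigher behaviour described above (the class below is
illustrative, not the actual implementation): the raw weight is the
host's free memory in MB, and the configured multiplier decides between
spreading and stacking:

    class RamWeigherSketch(object):

        def __init__(self, ram_weight_multiplier=1.0):
            # 1.0 (the default) prefers hosts with more free RAM (spreading);
            # a negative value such as -1.0 prefers fuller hosts (fill first).
            self.ram_weight_multiplier = ram_weight_multiplier

        def weight_multiplier(self):
            return self.ram_weight_multiplier

        def weigh_host(self, host_state):
            # Higher weights now win, so the raw weight is simply the free
            # memory in MB.
            return host_state.free_ram_mb

    # Final contribution for a host: weight_multiplier() * weigh_host(host_state)
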