Update to latest status of codes

This patch updates the codes to the latest status at the time of commit.
The updated codes should be compatible with stable icehouse version openstack.

Change-Id: I6d1d3a7da929f0737fbe6279be796e3bb4648991
This commit is contained in:
Xinyuan Huang 2015-05-15 00:45:58 -07:00
parent b92837f2d5
commit d87382e663
113 changed files with 5247 additions and 8545 deletions

README.md
@ -1,398 +1,129 @@
# Openstack Nova Solver Scheduler

## Overview

Solver Scheduler is an OpenStack Nova scheduler driver that provides smart, efficient, optimization-based compute resource scheduling in Nova. It is a pluggable scheduler driver that can leverage existing constraint solvers available in open source, such as PULP, CVXOPT, and Google OR-TOOLS. It can be easily extended with complex constraint models for various use cases, and can solve complex scheduling problems with pluggable solving frameworks.

## From filter-scheduler to solver-scheduler

The nova-solver-scheduler works in a similar (but not identical) way to Nova's default filter-scheduler. It is designed around the following three-layer pluggable architecture, compared with the filter-scheduler's two-layer architecture.

* Filter-scheduler architecture
  - Scheduler driver: FilterScheduler. This driver realizes the filter-based scheduling functionality. It is plugged into the Nova scheduler service and configured by second-layer plug-ins known as weighers and filters.
  - Configurable plug-ins: Weights/Filters. These are the configuration units that define the behaviour of the filter-scheduler: filters decide which hosts are available for the requested instance, and weights then sort the filtered hosts to find the best one to place the instance.
* Solver-scheduler architecture
  - Scheduler driver: SolverScheduler. This driver realizes constraint-optimization-based scheduling functionality. It sits in parallel with FilterScheduler and can be used as an (advanced) alternative. It uses pluggable optimization solver modules to solve scheduling problems.
  - Solver drivers. The solver drivers are pluggable optimization solvers that solve the scheduling problems defined by their lower-layer plug-ins, the costs and constraints. The pluggable solvers provide more flexibility for complex scheduling scenarios, as well as the ability to produce more optimal scheduling solutions.
  - Configurable plug-ins: Costs/Constraints. Similar to weights/filters in the filter-scheduler: constraints define hard restrictions on hosts that cannot be violated, and costs define soft objectives that the scheduler should try to achieve. The scheduler solver gives an overall optimized solution for the placement of all requested instances in each single instance-creation request. Costs and constraints can be plugged in and configured in a similar way to weights/filters in Nova's default filter-scheduler.

Key modules
-----------
* The new scheduler driver module:
nova/scheduler/solver_scheduler.py
* A reference implementation of a solver that models the scheduling problem as a Linear Programming (LP) model, written using the PULP LP modeling language. It uses PULP_CBC_CMD, a packaged constraint solver included in the coinor-pulp python package:
nova/scheduler/solvers/hosts_pulp_solver.py
* The pluggable solver using the coinor-pulp package, into which cost functions and linear constraints can be plugged:
nova/scheduler/solvers/pluggable_hosts_pulp_solver.py
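The LP formulation used by these PULP-based solver modules can be sketched in miniature: binary decision variables pick a host for each instance, linear constraints encode capacity, and a linear cost is minimized. Below is a toy version with hypothetical data, solved by brute-force enumeration rather than an actual LP solver:

```python
from itertools import product

hosts = {"host1": 2048, "host2": 4096}   # free RAM (MB) per host, hypothetical
instances = [1024, 1024]                 # requested RAM (MB) per instance

def feasible(assignment):
    """Constraint: total RAM requested on each host fits its free RAM."""
    used = {h: 0 for h in hosts}
    for ram, h in zip(instances, assignment):
        used[h] += ram
    return all(used[h] <= hosts[h] for h in hosts)

def cost(assignment):
    """Cost: total fraction of each host's RAM consumed (lower = more balanced)."""
    return sum(ram / hosts[h] for ram, h in zip(instances, assignment))

# Enumerate every placement of instances onto hosts; keep the cheapest feasible one.
best = min((a for a in product(hosts, repeat=len(instances)) if feasible(a)),
           key=cost)
```

With this data the cheapest feasible placement puts both instances on the larger host. The real solver explores the same search space implicitly through the LP relaxation instead of enumerating it.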
Additional modules
------------------
* The cost functions pluggable into the solver:
nova/scheduler/solvers/costs/ram_cost.py
nova/scheduler/solvers/costs/volume_affinity_cost.py
* The linear constraints pluggable into the solver:
nova/scheduler/solvers/linearconstraints/active_host_constraint.py
nova/scheduler/solvers/linearconstraints/all_hosts_constraint.py
nova/scheduler/solvers/linearconstraints/availability_zone_constraint.py
nova/scheduler/solvers/linearconstraints/different_host_constraint.py
nova/scheduler/solvers/linearconstraints/same_host_constraint.py
nova/scheduler/solvers/linearconstraints/io_ops_constraint.py
nova/scheduler/solvers/linearconstraints/max_disk_allocation_constraint.py
nova/scheduler/solvers/linearconstraints/max_ram_allocation_constraint.py
nova/scheduler/solvers/linearconstraints/max_vcpu_allocation_constraint.py
nova/scheduler/solvers/linearconstraints/max_instances_per_host_constraint.py
nova/scheduler/solvers/linearconstraints/non_trivial_solution_constraint.py
Requirements
------------
* coinor.pulp>=1.0.4
Installation
------------
We provide two ways to install the solver-scheduler code. This section guides you through installing the solver-scheduler with the minimum configuration. For instructions on configuring a fully functional solver-scheduler, please check the next sections.
* **Note:**
- Make sure you have an existing installation of **Openstack Icehouse**.
- The automatic installation scripts are of **alpha** version, which was tested on **Ubuntu 14.04** and **OpenStack Icehouse** only.
- We recommend that you back up at least the following files before installation, because they will be overwritten or modified:
$NOVA_CONFIG_PARENT_DIR/nova.conf
$NOVA_PARENT_DIR/nova/volume/cinder.py
$NOVA_PARENT_DIR/nova/tests/scheduler/fakes.py
(replace the $... with actual directory names.)
* **Prerequisites**
- Please install the python package: coinor.pulp >= 1.0.4
* **Manual Installation**
- Make sure you have performed backups properly.
- Clone the repository to your local host where nova-scheduler is run.
- Navigate to the local repository and copy the contents in 'nova' sub-directory to the corresponding places in existing nova, e.g.
```cp -r $LOCAL_REPOSITORY_DIR/nova $NOVA_PARENT_DIR```
(replace the $... with actual directory name.)
- Update the nova configuration file (e.g. /etc/nova/nova.conf) with the minimum option below. If the option already exists, modify its value, otherwise add it to the config file. Check the "Configurations" section below for a full configuration guide.
```
[DEFAULT]
...
scheduler_driver=nova.scheduler.solver_scheduler.ConstraintSolverScheduler
```
- Restart the nova scheduler.
```service nova-scheduler restart```
- Done. The nova-solver-scheduler should be working with a demo configuration.
* **Automatic Installation**
- Make sure you have performed backups properly.
- Clone the repository to your local host where nova-scheduler is run.
- Navigate to the installation directory and run installation script.
```
cd $LOCAL_REPOSITORY_DIR/installation
sudo bash ./install.sh
```
(replace the $... with actual directory name.)
- Done. The installation code should setup the solver-scheduler with the minimum configuration below. Check the "Configurations" section for a full configuration guide.
```
[DEFAULT]
...
scheduler_driver=nova.scheduler.solver_scheduler.ConstraintSolverScheduler
```
* **Uninstallation**
- If you need to switch to another scheduler, simply open the nova configuration file and edit the option ```scheduler_driver``` (the default is ```scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler```), then restart nova-scheduler.
- There is no need to remove the solver-scheduler code after installation, since it is designed to be fully compatible with the default nova scheduler. However, if you really want to restore the code, you can either do it manually from your backup, or use the uninstallation script provided in the installation directory.
```
cd $LOCAL_REPOSITORY_DIR/installation
sudo bash ./uninstall.sh
```
(replace the $... with actual directory name.)
- Please remember to restart the nova-scheduler service every time changes are made to the code or config file.
* **Troubleshooting**
In case the automatic installation/uninstallation process does not complete, please check the following:
- Make sure your OpenStack version is Icehouse.
- Check the variables in the beginning of the install.sh/uninstall.sh scripts. Your installation directories may be different from the default values we provide.
- The installation code will automatically backup the related codes to:
$NOVA_PARENT_DIR/nova/.solver-scheduler-installation-backup
Please do not make changes to the backup if you do not have to. If you encounter problems during installation, you can always find the backup files in this directory.
- The automatic uninstallation script only works if you used the automatic installation beforehand. If you installed manually, please also uninstall manually (though there is no need to actually "uninstall").
- In case the automatic installation does not work, try to install manually.
Configurations
--------------
* This is a (default) configuration sample for the solver-scheduler. Please add/modify these options in /etc/nova/nova.conf.
* Note:
- Please carefully make sure that options in the configuration file are not duplicated. If an option name already exists, modify its value instead of adding a new option with the same name.
- The solver class 'nova.scheduler.solvers.hosts_pulp_solver.HostsPulpSolver' is used by default in the installation for demo purposes. It is self-contained, and costs and constraints cannot be plugged into it. Please switch to 'nova.scheduler.solvers.pluggable_hosts_pulp_solver.HostsPulpSolver' for a fully functional (pluggable) solver.
- Please refer to the 'Configuration Details' section below for proper configuration and usage of costs and constraints.
While the solver-scheduler with costs/constraints appears to the user to provide similar functionality to the filter-scheduler with weights/filters, the solver-scheduler can be more efficient for large-scale, high-volume scheduling problems (e.g. placing 500 instances in a 1000-node cluster in a single request). Its scheduling solutions are also closer to optimal than the filter-scheduler's, thanks to its more flexible design.
## Basic configurations
In order to enable nova-solver-scheduler, the following minimal configuration is needed in the "[DEFAULT]" section of nova.conf. Please overwrite the config options' values if the option keys already exist in the configuration file.
```
[DEFAULT]
...
#
# Options defined in nova.scheduler.manager
#
# Default driver to use for the scheduler (string value)
... (other options)
scheduler_driver=nova.scheduler.solver_scheduler.ConstraintSolverScheduler
scheduler_host_manager=nova.scheduler.solver_scheduler_host_manager.SolverSchedulerHostManager
```
The remaining options below are shared with the filter-scheduler's filters and weighers, and are also used by the corresponding solver-scheduler costs and constraints:
```
#
# Options defined in nova.scheduler.filters.core_filter
#
# Virtual CPU to physical CPU allocation ratio which affects
# all CPU filters. This configuration specifies a global ratio
# for CoreFilter. For AggregateCoreFilter, it will fall back
# to this configuration value if no per-aggregate setting
# found. This option is also used in Solver Scheduler for the
# MaxVcpuAllocationPerHostConstraint (floating point value)
cpu_allocation_ratio=16.0
#
# Options defined in nova.scheduler.filters.disk_filter
#
# Virtual disk to physical disk allocation ratio (floating
# point value)
disk_allocation_ratio=1.0
#
# Options defined in nova.scheduler.filters.num_instances_filter
#
# Ignore hosts that have too many instances (integer value)
max_instances_per_host=50
#
# Options defined in nova.scheduler.filters.io_ops_filter
#
# Ignore hosts that have too many
# builds/resizes/snaps/migrations. (integer value)
max_io_ops_per_host=8
#
# Options defined in nova.scheduler.filters.ram_filter
#
# Virtual ram to physical ram allocation ratio which affects
# all ram filters. This configuration specifies a global ratio
# for RamFilter. For AggregateRamFilter, it will fall back to
# this configuration value if no per-aggregate setting found.
# (floating point value)
ram_allocation_ratio=1.5
#
# Options defined in nova.scheduler.weights.ram
#
# Multiplier used for weighing ram. Negative numbers mean to
# stack vs spread. (floating point value)
ram_weight_multiplier=1.0
#
# Options defined in nova.volume.cinder
#
# Keystone Cinder account username (string value)
cinder_admin_user=<None>
# Keystone Cinder account password (string value)
cinder_admin_password=<None>
# Keystone Cinder account tenant name (string value)
cinder_admin_tenant_name=service
# Complete public Identity API endpoint (string value)
cinder_auth_uri=<None>
```
## Solvers
We provide two solvers that can plug in and solve the scheduling problems while satisfying all the configured constraints: FastSolver (the default) and PulpSolver. FastSolver runs a fast algorithm that can solve large-scale scheduling requests efficiently while giving optimal solutions. PulpSolver translates the scheduling problems into standard LP (Linear Programming) problems, and invokes a third-party LP solver interface (coinor.pulp >= 1.0.4) to solve them. While PulpSolver may be more flexible for complex constraints (which might arise in the future), it is slower than FastSolver, especially on large-scale scheduling problems.
We recommend using FastSolver at the current stage, as it covers all currently known constraint requirements and scales better.
The following option in the "[solver_scheduler]" section of the nova config specifies which solver to use. Please add a "[solver_scheduler]" section if it does not already exist in the nova config file.
```
[solver_scheduler]
scheduler_host_solver=nova.scheduler.solvers.fast_solver.FastSolver
```
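FastSolver's actual algorithm lives in the code base, but the general idea of a fast constraint-satisfying heuristic, as opposed to building a full LP model, can be sketched as follows (hypothetical code, not the real FastSolver):

```python
def fast_place(instance_rams, free_ram):
    """Greedily place each requested instance on the feasible host with the
    most free RAM, updating free capacity as we go (a spread/balance policy)."""
    placement = []
    for ram in instance_rams:
        # Constraint check: host must still have enough free RAM.
        candidates = [h for h, free in free_ram.items() if free >= ram]
        if not candidates:
            raise RuntimeError("no feasible host for instance")
        # Cost: prefer the host with the most free RAM (balanced usage).
        best = max(candidates, key=free_ram.get)
        free_ram[best] -= ram
        placement.append(best)
    return placement
```

For example, `fast_place([1024, 1024], {"host1": 2048, "host2": 4096})` places both instances on host2, since it keeps the most free RAM at each step. A greedy pass like this runs in roughly O(instances × hosts), which is why this style of solver scales better than a general LP solve.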
## Costs and Constraints

### Configuring which costs/constraints to use

Solver-scheduler uses "costs" and "constraints" to configure the behaviour of the scheduler. They can be set in a similar way to "weights" and "filters" in the filter-scheduler.

The solver implementation itself is also pluggable. A reference implementation models the problem as a Linear Programming (LP) problem using PULP; to use the fully functional (pluggable) PULP-based solver, set:
```
[solver_scheduler]
# The pluggable solver implementation to use (string value)
scheduler_host_solver=nova.scheduler.solvers.pluggable_hosts_pulp_solver.HostsPulpSolver
```
Here is an example of setting which "costs" and "constraints" to use. Please put these options in the "[solver_scheduler]" section of the nova config; each "cost" has a weight associated with it that specifies its importance in the scheduler's decision:
```
[solver_scheduler]
# Which constraints to use in the scheduler solver (list value)
scheduler_solver_constraints=ActiveHostConstraint, NonTrivialSolutionConstraint
# Assign a weight for each cost (list value)
scheduler_solver_cost_weights=RamCost:1.0
# Which cost matrices to use in the scheduler solver (list value)
scheduler_solver_costs=RamCost
```
Configuration Details
---------------------
* Available costs
- **RamCost**
Helps to balance (or stack) the ram usage of hosts.
The following options should be set in the configuration when using this cost:
```
ram_weight_multiplier = <a real number>
ram_allocation_ratio = <a float value>
```
Set the multiplier to a negative number for balanced ram usage, or to a positive number for stacked ram usage.
- **VolumeAffinityCost**
Helps to place instances on the same host as a specific volume, if possible.
In order to use this cost, you need to pass a hint to the scheduler on booting a server.
```nova boot ... --hint affinity_volume_id=<id of the affinity volume> ...```
You also need to have the following options set in the '[DEFAULT]' section of the configuration.
```
cinder_admin_user=<cinder username>
cinder_admin_password=<cinder password>
cinder_admin_tenant_name=<cinder tenant name>
cinder_auth_uri=<the identity api endpoint>
```
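Under the hood, configured costs combine into a weighted sum per candidate host, which the solver then minimizes. A minimal sketch of that aggregation (hypothetical helper names; the option semantics follow `scheduler_solver_costs` / `scheduler_solver_cost_weights`):

```python
def combined_cost(cost_values, cost_weights):
    """cost_values: {cost_name: {host: value}}; cost_weights: {cost_name: weight}.
    Returns {host: weighted total cost}; lower is better."""
    hosts = next(iter(cost_values.values()))
    return {h: sum(cost_weights[c] * vals[h] for c, vals in cost_values.items())
            for h in hosts}

scores = combined_cost(
    {"RamCost": {"host1": 0.5, "host2": 0.25},
     "VolumeAffinityCost": {"host1": 0.0, "host2": 1.0}},
    {"RamCost": 1.0, "VolumeAffinityCost": 1.0},
)
best_host = min(scores, key=scores.get)
```

In this hypothetical data, host1 wins despite higher RAM usage, because the volume-affinity term dominates; raising the RamCost weight would shift the decision back toward host2.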
* Available linear constraints
- **ActiveHostConstraint**
By enabling this constraint, only enabled and operational hosts are allowed to be selected.
Normally this constraint should always be enabled.
- **NonTrivialSolutionConstraint**
The purpose of this constraint is to avoid trivial solution (i.e. instances placed nowhere).
Normally this constraint should always be enabled.
- **MaxRamAllocationPerHostConstraint**
Cap the virtual ram allocation of hosts.
The following option should be set in configuration when using this constraint:
```ram_allocation_ratio = <a positive real number>``` (virtual-to-physical ram allocation ratio, if >1.0 then over-allocation is allowed.)
- **MaxDiskAllocationPerHostConstraint**
Cap the virtual disk allocation of hosts.
The following option should be set in configuration when using this constraint:
```disk_allocation_ratio = <a positive real number>``` (virtual-to-physical disk allocation ratio, if >1.0 then over-allocation is allowed.)
- **MaxVcpuAllocationPerHostConstraint**
Cap the vcpu allocation of hosts.
The following option should be set in configuration when using this constraint:
```cpu_allocation_ratio = <a positive real number>``` (virtual-to-physical cpu allocation ratio, if >1.0 then over-allocation is allowed.)
- **NumInstancesPerHostConstraint**
Specify the maximum number of instances that can be placed in each host.
The following option is expected in the configuration:
```max_instances_per_host = <a positive integer>```
- **DifferentHostConstraint**
Force instances to be placed on different hosts from the specified instance(s).
The following scheduler hint is expected when using this constraint:
```different_host = <a (list of) instance uuid(s)>```
- **SameHostConstraint**
Force instances to be placed on the same host as the specified instance(s).
The following scheduler hint is expected when using this constraint:
```same_host = <a (list) of instance uuid(s)>```
- **AvailabilityZoneConstraint**
Select hosts belonging to an availability zone.
The following option should be set in configuration when using this constraint:
```default_availability_zone = <availability zone>```
- **IoOpsConstraint**
Ensure the number of concurrent I/O operations on selected hosts is within a threshold.
The following option should be set in configuration when using this constraint:
```max_io_ops_per_host = <a positive number>```
Costs and their multipliers can be configured together; for example:
```
[solver_scheduler]
... (other options)
scheduler_solver_costs=RamCost,AffinityCost,AntiAffinityCost
ram_cost_multiplier=1.0
affinity_cost_multiplier=2.0
anti_affinity_cost_multiplier=0.5
```
**Notes**
Tips about the cost multipliers' values:

Cost class | Multiplier
---------- | ----------
RamCost | \> 0: the scheduler will tend to balance the usage of RAM. The higher the value, the more weight the cost will get in the scheduler's decision.<br>\< 0: the scheduler will tend to stack the RAM usage. The higher the *absolute* value, the more weight the cost will get in the scheduler's decision.<br>= 0: the cost will be ignored.
MetricsCost | \> 0: the higher the value, the more weight the cost will get in the scheduler's decision.<br>\< 0: not recommended; it might make the cost meaningless.<br>= 0: the cost will be ignored.

The following is an example of how to set which "constraints" are used by the solver-scheduler:
```
[solver_scheduler]
... (other options)
scheduler_solver_constraints=ActiveHostsConstraint,RamConstraint,NumInstancesConstraint
```

Examples
--------
This is an example of creating VMs with volume affinity using the solver scheduler.
* Install the solver scheduler.
* Update nova.conf with the following options:
```
[DEFAULT]
...
# Default driver to use for the scheduler
scheduler_driver = nova.scheduler.solver_scheduler.ConstraintSolverScheduler
# Virtual-to-physical disk allocation ratio
disk_allocation_ratio = 1.0
# Virtual-to-physical ram allocation ratio
ram_allocation_ratio = 1.5
# Keystone Cinder account username (string value)
cinder_admin_user=<cinder username>
# Keystone Cinder account password (string value)
cinder_admin_password=<cinder password>
# Keystone Cinder account tenant name (string value)
cinder_admin_tenant_name=<cinder tenant name>
# Complete public Identity API endpoint (string value)
cinder_auth_uri=<the identity api endpoint, e.g. "http://controller:5000/v2.0">

[solver_scheduler]
# Default solver to use for the solver scheduler
scheduler_host_solver = nova.scheduler.solvers.pluggable_hosts_pulp_solver.HostsPulpSolver
# Cost functions to use in the linear solver
scheduler_solver_costs = VolumeAffinityCost
# Weight of each cost (every cost function used should be given a weight.)
scheduler_solver_cost_weights = VolumeAffinityCost:1.0
# Constraints used in the solver
scheduler_solver_constraints = ActiveHostConstraint, NonTrivialSolutionConstraint, MaxDiskAllocationPerHostConstraint, MaxRamAllocationPerHostConstraint
```
* Restart nova-scheduler and then do the following:
* Create multiple volumes at different hosts.
* Run the following command to boot a new instance. (The id of the volume you want to use should be provided as a scheduler hint.)
```
nova boot --image=<image-id> --flavor=<flavor-id> --hint affinity_volume_id=<volume-id> <server-name>
```
* The instance should be created on the same host as the chosen volume, as long as the host is active and has enough resources.

In the following section we discuss the detailed cost and constraint classes.

### Transition from filter-scheduler to solver-scheduler

The tables below list the supported costs and constraints and their counterparts in the filter-scheduler. These costs and constraints can be used in the same way as the weights/filters in the filter-scheduler, apart from the option-setting described above. Please refer to the [OpenStack Configuration Reference](http://docs.openstack.org/icehouse/config-reference/content/section_compute-scheduler.html) for a detailed explanation of the available weights and filters and their usage.
Weight | Cost
------ | ----
MetricsWeigher | MetricsCost
RAMWeigher | RamCost
Filter | Constraint
------ | ----------
AggregateCoreFilter | AggregateVcpuConstraint
AggregateDiskFilter | AggregateDiskConstraint
AggregateImagePropertiesIsolation | AggregateImagePropertiesIsolationConstraint
AggregateInstanceExtraSpecsFilter | AggregateInstanceExtraSpecsConstraint
AggregateMultiTenancyIsolation | AggregateMultiTenancyIsolationConstraint
AggregateRamFilter | AggregateRamConstraint
AggregateTypeAffinityFilter | AggregateTypeAffinityConstraint
AllHostsFilter | NoConstraint
AvailabilityZoneFilter | AvailabilityZoneConstraint
ComputeCapabilitiesFilter | ComputeCapabilitiesConstraint
ComputeFilter | ActiveHostsConstraint
CoreFilter | VcpuConstraint
DifferentHostFilter | DifferentHostConstraint
DiskFilter | DiskConstraint
ImagePropertiesFilter | ImagePropertiesConstraint
IsolatedHostsFilter | IsolatedHostsConstraint
IoOpsFilter | IoOpsConstraint
JsonFilter | JsonConstraint
MetricsFilter | MetricsConstraint
NumInstancesFilter | NumInstancesConstraint
PciPassthroughFilter | PciPassthroughConstraint
RamFilter | RamConstraint
RetryFilter | RetryConstraint
SameHostFilter | SameHostConstraint
ServerGroupAffinityFilter | ServerGroupAffinityConstraint
ServerGroupAntiAffinityFilter | ServerGroupAntiAffinityConstraint
SimpleCIDRAffinityFilter | SimpleCidrAffinityConstraint
TrustedFilter | TrustedHostsConstraint
TypeAffinityFilter | TypeAffinityConstraint
**Notes**
Some of the above constraints directly invoke their filter counterparts to check host availability, others (in the following list) are implemented with improved logic that may result in more optimal placement decisions for multi-instance requests:
- DiskConstraint
- AggregateDiskConstraint (inherited from DiskConstraint)
- RamConstraint
- AggregateRamConstraint (inherited from RamConstraint)
- VcpuConstraint
- AggregateVcpuConstraint (inherited from VcpuConstraint)
- IoOpsConstraint
- NumInstancesConstraint
- PciPassthroughConstraint
- ServerGroupAffinityConstraint
- ServerGroupAntiAffinityConstraint
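The difference the improved logic makes can be seen in a tiny sketch: a per-instance filter looks only at a host's current free capacity, while a request-aware constraint also accounts for instances of the same request already assigned to that host (hypothetical helper, not the actual RamConstraint code):

```python
def ram_feasible(host_free_mb, ram_per_instance, already_assigned_here):
    """True if one more instance of the request still fits on this host,
    counting instances of the same request already assigned to it."""
    remaining = host_free_mb - ram_per_instance * already_assigned_here
    return remaining >= ram_per_instance

# A host with 2048 MB free can take two 1024 MB instances of one request,
# but not a third -- a naive per-instance check would accept all three.
```

This request-wide view is what lets the solver place many instances in one pass without over-committing any single host.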

File diff suppressed because it is too large

@ -1,108 +0,0 @@
[DEFAULT]
#
# Options defined in nova.scheduler.manager
#
# Default driver to use for the scheduler (string value)
scheduler_driver=nova.scheduler.solver_scheduler.ConstraintSolverScheduler
#
# Options defined in nova.scheduler.filters.core_filter
#
# Virtual CPU to physical CPU allocation ratio which affects
# all CPU filters. This configuration specifies a global ratio
# for CoreFilter. For AggregateCoreFilter, it will fall back
# to this configuration value if no per-aggregate setting
# found. This option is also used in Solver Scheduler for the
# MaxVcpuAllocationPerHostConstraint (floating point value)
cpu_allocation_ratio=16.0
#
# Options defined in nova.scheduler.filters.disk_filter
#
# Virtual disk to physical disk allocation ratio (floating
# point value)
disk_allocation_ratio=1.0
#
# Options defined in nova.scheduler.filters.num_instances_filter
#
# Ignore hosts that have too many instances (integer value)
max_instances_per_host=50
#
# Options defined in nova.scheduler.filters.io_ops_filter
#
# Ignore hosts that have too many
# builds/resizes/snaps/migrations. (integer value)
max_io_ops_per_host=8
#
# Options defined in nova.scheduler.filters.ram_filter
#
# Virtual ram to physical ram allocation ratio which affects
# all ram filters. This configuration specifies a global ratio
# for RamFilter. For AggregateRamFilter, it will fall back to
# this configuration value if no per-aggregate setting found.
# (floating point value)
ram_allocation_ratio=1.5
#
# Options defined in nova.scheduler.weights.ram
#
# Multiplier used for weighing ram. Negative numbers mean to
# stack vs spread. (floating point value)
ram_weight_multiplier=1.0
#
# Options defined in nova.volume.cinder
#
# Keystone Cinder account username (string value)
cinder_admin_user=<None>
# Keystone Cinder account password (string value)
cinder_admin_password=<None>
# Keystone Cinder account tenant name (string value)
cinder_admin_tenant_name=service
# Complete public Identity API endpoint (string value)
cinder_auth_uri=<None>
[solver_scheduler]
#
# Options defined in nova.scheduler.solver_scheduler
#
# The pluggable solver implementation to use. By default, a
# reference solver implementation is included that models the
# problem as a Linear Programming (LP) problem using PULP.
# To use a fully functional (pluggable) solver, set the option as
# "nova.scheduler.solvers.pluggable_hosts_pulp_solver.HostsPulpSolver"
# (string value)
scheduler_host_solver=nova.scheduler.solvers.pluggable_hosts_pulp_solver.HostsPulpSolver
#
# Options defined in nova.scheduler.solvers
#
# Which constraints to use in scheduler solver (list value)
scheduler_solver_constraints=ActiveHostConstraint, NonTrivialSolutionConstraint
# Assign weight for each cost (list value)
scheduler_solver_cost_weights=RamCost:1.0
# Which cost matrices to use in the scheduler solver.
# (list value)
scheduler_solver_costs=RamCost


@ -1,117 +0,0 @@
#!/bin/bash
# Copyright (c) 2014 Cisco Systems Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
_NOVA_CONF_DIR="/etc/nova"
_NOVA_CONF_FILE="nova.conf"
_NOVA_DIR="/usr/lib/python2.7/dist-packages/nova"
# if you did not make changes to the installation files,
# please do not edit the following directories.
_CODE_DIR="../nova"
_BACKUP_DIR="${_NOVA_DIR}/.solver-scheduler-installation-backup"
#_SCRIPT_NAME="${0##*/}"
#_SCRIPT_LOGFILE="/var/log/nova-solver-scheduler/installation/${_SCRIPT_NAME}.log"
if [[ ${EUID} -ne 0 ]]; then
echo "Please run as root."
exit 1
fi
##Redirecting output to logfile as well as stdout
#exec > >(tee -a ${_SCRIPT_LOGFILE})
#exec 2> >(tee -a ${_SCRIPT_LOGFILE} >&2)
cd `dirname $0`
echo "checking installation directories..."
if [ ! -d "${_NOVA_DIR}" ] ; then
echo "Could not find the nova installation. Please check the variables in the beginning of the script."
echo "aborted."
exit 1
fi
if [ ! -f "${_NOVA_CONF_DIR}/${_NOVA_CONF_FILE}" ] ; then
echo "Could not find nova config file. Please check the variables in the beginning of the script."
echo "aborted."
exit 1
fi
echo "checking previous installation..."
if [ -d "${_BACKUP_DIR}/nova" ] ; then
echo "It seems nova-solver-scheduler has already been installed!"
echo "Please check README for solution if this is not true."
exit 1
fi
echo "backing up current files that might be overwritten..."
mkdir -p "${_BACKUP_DIR}/nova"
mkdir -p "${_BACKUP_DIR}/etc/nova"
cp -r "${_NOVA_DIR}/scheduler" "${_BACKUP_DIR}/nova/" && cp -r "${_NOVA_DIR}/volume" "${_BACKUP_DIR}/nova/"
if [ $? -ne 0 ] ; then
rm -r "${_BACKUP_DIR}/nova"
echo "Error in code backup, aborted."
exit 1
fi
cp "${_NOVA_CONF_DIR}/${_NOVA_CONF_FILE}" "${_BACKUP_DIR}/etc/nova/"
if [ $? -ne 0 ] ; then
rm -r "${_BACKUP_DIR}/nova"
rm -r "${_BACKUP_DIR}/etc"
echo "Error in config backup, aborted."
exit 1
fi
echo "copying in new files..."
cp -r "${_CODE_DIR}" `dirname ${_NOVA_DIR}`
if [ $? -ne 0 ] ; then
echo "Error in copying, aborted."
echo "Recovering original files..."
cp -r "${_BACKUP_DIR}/nova" `dirname ${_NOVA_DIR}` && rm -r "${_BACKUP_DIR}/nova"
if [ $? -ne 0 ] ; then
echo "Recovering failed! Please install manually."
fi
exit 1
fi
echo "updating config file..."
sed -i.backup -e "/scheduler_driver *=/d" "${_NOVA_CONF_DIR}/${_NOVA_CONF_FILE}"
sed -i -e "/\[DEFAULT\]/a \\
scheduler_driver=nova.scheduler.solver_scheduler.ConstraintSolverScheduler" "${_NOVA_CONF_DIR}/${_NOVA_CONF_FILE}"
if [ $? -ne 0 ] ; then
echo "Error in updating, aborted."
echo "Recovering original files..."
cp -r "${_BACKUP_DIR}/nova" `dirname ${_NOVA_DIR}` && rm -r "${_BACKUP_DIR}/nova"
if [ $? -ne 0 ] ; then
echo "Recovering /nova failed! Please install manually."
fi
cp "${_BACKUP_DIR}/etc/nova/${_NOVA_CONF_FILE}" "${_NOVA_CONF_DIR}" && rm -r "${_BACKUP_DIR}/etc"
if [ $? -ne 0 ] ; then
echo "Recovering config failed! Please install manually."
fi
exit 1
fi
echo "restarting nova scheduler..."
service nova-scheduler restart
if [ $? -ne 0 ] ; then
echo "There was an error in restarting the service, please restart nova scheduler manually."
exit 1
fi
echo "Completed."
echo "See README to get started."
exit 0


@ -1,119 +0,0 @@
#!/bin/bash
# Copyright (c) 2014 Cisco Systems Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
_NOVA_CONF_DIR="/etc/nova"
_NOVA_CONF_FILE="nova.conf"
_NOVA_DIR="/usr/lib/python2.7/dist-packages/nova"
# if you did not make changes to the installation files,
# please do not edit the following directories.
_CODE_DIR="../nova"
_BACKUP_DIR="${_NOVA_DIR}/.solver-scheduler-installation-backup"
#_SCRIPT_NAME="${0##*/}"
#_SCRIPT_LOGFILE="/var/log/nova-solver-scheduler/installation/${_SCRIPT_NAME}.log"
if [[ ${EUID} -ne 0 ]]; then
echo "Please run as root."
exit 1
fi
##Redirecting output to logfile as well as stdout
#exec > >(tee -a ${_SCRIPT_LOGFILE})
#exec 2> >(tee -a ${_SCRIPT_LOGFILE} >&2)
cd `dirname $0`
echo "checking installation directories..."
if [ ! -d "${_NOVA_DIR}" ] ; then
echo "Could not find the nova installation. Please check the variables in the beginning of the script."
echo "aborted."
exit 1
fi
if [ ! -f "${_NOVA_CONF_DIR}/${_NOVA_CONF_FILE}" ] ; then
echo "Could not find nova config file. Please check the variables in the beginning of the script."
echo "aborted."
exit 1
fi
echo "checking backup..."
if [ ! -d "${_BACKUP_DIR}/nova" ] ; then
echo "Could not find backup files. It is possible that the solver-scheduler has been uninstalled."
echo "If this is not the case, then please uninstall manually."
exit 1
fi
echo "backing up current files that might be overwritten..."
if [ -d "${_BACKUP_DIR}/uninstall" ] ; then
rm -r "${_BACKUP_DIR}/uninstall"
fi
mkdir -p "${_BACKUP_DIR}/uninstall/nova"
mkdir -p "${_BACKUP_DIR}/uninstall/etc/nova"
cp -r "${_NOVA_DIR}/scheduler" "${_BACKUP_DIR}/uninstall/nova/" && cp -r "${_NOVA_DIR}/volume" "${_BACKUP_DIR}/uninstall/nova/"
if [ $? -ne 0 ] ; then
rm -r "${_BACKUP_DIR}/uninstall/nova"
echo "Error in code backup, aborted."
exit 1
fi
cp "${_NOVA_CONF_DIR}/${_NOVA_CONF_FILE}" "${_BACKUP_DIR}/uninstall/etc/nova/"
if [ $? -ne 0 ] ; then
rm -r "${_BACKUP_DIR}/uninstall/nova"
rm -r "${_BACKUP_DIR}/uninstall/etc"
echo "Error in config backup, aborted."
exit 1
fi
echo "restoring code to the status before installing solver-scheduler..."
cp -r "${_BACKUP_DIR}/nova" `dirname ${_NOVA_DIR}`
if [ $? -ne 0 ] ; then
echo "Error in copying, aborted."
echo "Recovering current files..."
cp -r "${_BACKUP_DIR}/uninstall/nova" `dirname ${_NOVA_DIR}`
if [ $? -ne 0 ] ; then
echo "Recovering failed! Please uninstall manually."
fi
exit 1
fi
echo "updating config file..."
sed -i.uninstall.backup -e "/scheduler_driver *=/d" "${_NOVA_CONF_DIR}/${_NOVA_CONF_FILE}"
if [ $? -ne 0 ] ; then
echo "Error in updating, aborted."
echo "Recovering current files..."
cp "${_BACKUP_DIR}/uninstall/etc/nova/${_NOVA_CONF_FILE}" "${_NOVA_CONF_DIR}"
if [ $? -ne 0 ] ; then
echo "Recovering failed! Please uninstall manually."
fi
exit 1
fi
echo "cleaning up backup files..."
rm -r "${_BACKUP_DIR}/nova" && rm -r "${_BACKUP_DIR}/etc"
if [ $? -ne 0 ] ; then
echo "There was an error when cleaning up the backup files."
fi
echo "restarting nova scheduler..."
service nova-scheduler restart
if [ $? -ne 0 ] ; then
echo "There was an error in restarting the service, please restart nova scheduler manually."
exit 1
fi
echo "Completed."
exit 0


@ -22,11 +22,9 @@ A default solver implementation that uses PULP is included.
from oslo.config import cfg
from nova import exception
from nova.openstack.common.gettextutils import _
from nova.openstack.common import importutils
from nova.openstack.common import log as logging
from nova.scheduler import driver
from nova.scheduler import filter_scheduler
from nova.scheduler import weights
@ -35,11 +33,9 @@ LOG = logging.getLogger(__name__)
solver_opts = [
cfg.StrOpt('scheduler_host_solver',
default='nova.scheduler.solvers'
'.pluggable_hosts_pulp_solver.HostsPulpSolver',
help='The pluggable solver implementation to use. By default, a '
'reference solver implementation is included that models '
'the problem as a Linear Programming (LP) problem using PULP.'),
default='nova.scheduler.solvers.fast_solver.FastSolver',
help='The pluggable solver implementation to use. By '
'default, use the FastSolver.'),
]
CONF.register_opts(solver_opts, group='solver_scheduler')
@ -55,82 +51,11 @@ class ConstraintSolverScheduler(filter_scheduler.FilterScheduler):
self.hosts_solver = importutils.import_object(
CONF.solver_scheduler.scheduler_host_solver)
def schedule_run_instance(self, context, request_spec,
admin_password, injected_files,
requested_networks, is_first_time,
filter_properties, legacy_bdm_in_spec):
"""This method is called from nova.compute.api to provision
an instance. We first create a build plan (a list of WeightedHosts)
and then provision.
Returns a list of the instances created.
"""
payload = dict(request_spec=request_spec)
self.notifier.info(context, 'scheduler.run_instance.start', payload)
instance_uuids = request_spec.get('instance_uuids')
LOG.info(_("Attempting to build %(num_instances)d instance(s) "
"uuids: %(instance_uuids)s"),
{'num_instances': len(instance_uuids),
'instance_uuids': instance_uuids})
LOG.debug(_("Request Spec: %s") % request_spec)
# Stuff network requests into filter_properties
# NOTE (Xinyuan): currently for POC only.
filter_properties['requested_networks'] = requested_networks
weighed_hosts = self._schedule(context, request_spec,
filter_properties, instance_uuids)
# NOTE: Pop instance_uuids as individual creates do not need the
# set of uuids. Do not pop before here as the upper exception
# handler for NoValidHost needs the uuid to set error state
instance_uuids = request_spec.pop('instance_uuids')
# NOTE(comstud): Make sure we do not pass this through. It
# contains an instance of RpcContext that cannot be serialized.
filter_properties.pop('context', None)
for num, instance_uuid in enumerate(instance_uuids):
request_spec['instance_properties']['launch_index'] = num
try:
try:
weighed_host = weighed_hosts.pop(0)
LOG.info(_("Choosing host %(weighed_host)s "
"for instance %(instance_uuid)s"),
{'weighed_host': weighed_host,
'instance_uuid': instance_uuid})
except IndexError:
raise exception.NoValidHost(reason="")
self._provision_resource(context, weighed_host,
request_spec,
filter_properties,
requested_networks,
injected_files, admin_password,
is_first_time,
instance_uuid=instance_uuid,
legacy_bdm_in_spec=legacy_bdm_in_spec)
except Exception as ex:
# NOTE(vish): we don't reraise the exception here to make sure
# that all instances in the request get set to
# error properly
driver.handle_schedule_error(context, ex, instance_uuid,
request_spec)
# scrub retry host list in case we're scheduling multiple
# instances:
retry = filter_properties.get('retry', {})
retry['hosts'] = []
self.notifier.info(context, 'scheduler.run_instance.end', payload)
def _schedule(self, context, request_spec, filter_properties,
instance_uuids=None):
"""Returns a list of hosts that meet the required specs,
ordered by their fitness.
"""
elevated = context.elevated()
instance_properties = request_spec['instance_properties']
instance_type = request_spec.get("instance_type", None)
@ -147,52 +72,57 @@ class ConstraintSolverScheduler(filter_scheduler.FilterScheduler):
properties['uuid'] = instance_uuids[0]
self._populate_retry(filter_properties, properties)
if instance_uuids:
num_instances = len(instance_uuids)
else:
num_instances = request_spec.get('num_instances', 1)
# initialize an empty key-value cache to be used in the solver for
# internal temporary data storage
solver_cache = {}
filter_properties.update({'context': context,
'request_spec': request_spec,
'config_options': config_options,
'instance_type': instance_type})
'instance_type': instance_type,
'num_instances': num_instances,
'instance_uuids': instance_uuids,
'solver_cache': solver_cache})
self.populate_filter_properties(request_spec,
filter_properties)
self.populate_filter_properties(request_spec, filter_properties)
# Note: Moving the host selection logic to a new method so that
# NOTE(Yathi): Moving the host selection logic to a new method so that
# the subclasses can override the behavior.
return self._get_final_host_list(elevated, request_spec,
filter_properties,
instance_properties,
update_group_hosts,
instance_uuids)
selected_hosts = self._get_selected_hosts(context, filter_properties)
def _get_final_host_list(self, context, request_spec, filter_properties,
instance_properties, update_group_hosts=False,
instance_uuids=None):
"""Returns the final list of hosts that meet the required specs for
# clear solver's memory after scheduling process
filter_properties.pop('solver_cache')
return selected_hosts
def _get_selected_hosts(self, context, filter_properties):
"""Returns the list of hosts that meet the required specs for
each instance in the list of instance_uuids.
Here each instance in instance_uuids has the same requirement
as specified by request_spec.
"""
elevated = context.elevated()
# this returns a host iterator
hosts = self._get_all_host_states(context)
hosts = self._get_all_host_states(elevated)
selected_hosts = []
hosts = self.host_manager.get_hosts_stripping_ignored_and_forced(
hosts, filter_properties)
list_hosts = list(hosts)
host_instance_tuples_list = self.hosts_solver.host_solve(
list_hosts, instance_uuids,
request_spec, filter_properties)
LOG.debug(_("solver results: %(host_instance_list)s") %
{"host_instance_list": host_instance_tuples_list})
host_instance_combinations = self.hosts_solver.solve(
list_hosts, filter_properties)
LOG.debug(_("solver results: %(host_instance_tuples_list)s") %
{"host_instance_tuples_list": host_instance_combinations})
# NOTE(Yathi): Not using weights in solver scheduler,
# but creating a list of WeighedHosts with a default weight of 1
# to match the common method signatures of the
# FilterScheduler class
selected_hosts = [weights.WeighedHost(host, 1)
for (host, instance) in host_instance_tuples_list]
for chosen_host in selected_hosts:
# Update the host state after deducting the
# resource used by the instance
chosen_host.obj.consume_from_instance(instance_properties)
if update_group_hosts is True:
filter_properties['group_hosts'].add(chosen_host.obj.host)
for (host, instance) in host_instance_combinations]
return selected_hosts
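The hosts_solver object used above is only required to expose a solve(hosts, filter_properties) method that returns a list of (host, instance_uuid) pairs. As a self-contained sketch of that contract (RoundRobinSolver is a hypothetical name, not part of the project):

```python
# Hypothetical minimal solver illustrating the solve() contract used by
# _get_selected_hosts: it receives host states plus filter_properties
# (which carry 'instance_uuids') and returns host-instance pairs.
class RoundRobinSolver(object):
    def solve(self, hosts, filter_properties):
        uuids = filter_properties.get('instance_uuids') or []
        # Spread instances across the hosts in round-robin order.
        return [(hosts[i % len(hosts)], uuid)
                for i, uuid in enumerate(uuids)]
```

A real solver would apply the configured constraints and costs instead of round-robin, but it returns its result in this same shape.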


@ -19,256 +19,50 @@ Manage hosts in the current zone.
from oslo.config import cfg
from nova.compute import task_states
from nova.compute import vm_states
from nova import db
from nova.objects import aggregate as aggregate_obj
from nova.objects import instance as instance_obj
from nova.openstack.common.gettextutils import _
from nova.openstack.common import jsonutils
from nova.openstack.common import log as logging
from nova.openstack.common import timeutils
from nova.pci import pci_request
from nova.pci import pci_stats
from nova.scheduler import host_manager
physnet_config_file_opts = [
cfg.StrOpt('physnet_config_file',
default='/etc/neutron/plugins/ml2/ml2_conf_cisco.ini',
help='The config file specifying the physical network topology')
]
CONF = cfg.CONF
CONF.import_opt('scheduler_available_filters', 'nova.scheduler.host_manager')
CONF.import_opt('scheduler_default_filters', 'nova.scheduler.host_manager')
CONF.import_opt('scheduler_weight_classes', 'nova.scheduler.host_manager')
CONF.register_opts(physnet_config_file_opts)
LOG = logging.getLogger(__name__)
class HostState(host_manager.HostState):
class SolverSchedulerHostState(host_manager.HostState):
"""Mutable and immutable information tracked for a host.
This is an attempt to remove the ad-hoc data structures
previously used and lock down access.
"""
def __init__(self, *args, **kwargs):
super(HostState, self).__init__(*args, **kwargs)
super(SolverSchedulerHostState, self).__init__(*args, **kwargs)
self.projects = []
# For network constraints
# NOTE(Xinyuan): currently for POC only, and may require Neutron
self.networks = []
self.physnet_config = []
self.rack_networks = []
# For host aggregate constraints
self.host_aggregates_stats = {}
def update_from_hosted_instances(self, context, compute):
service = compute['service']
if not service:
LOG.warn(_("No service for compute ID %s") % compute['id'])
return
host = service['host']
# retrieve instances for each host to extract needed information
# NOTE: ideally we should use get_by_host_and_node, but there's a bug
# in the Icehouse release that doesn't allow 'expected_attrs' here.
instances = instance_obj.InstanceList.get_by_host(context, host,
expected_attrs=['info_cache'])
# get hosted networks
# NOTE(Xinyuan): POC.
instance_networks = []
for inst in instances:
network_info = inst.get('info_cache', {}).get('network_info', [])
instance_networks.extend([vif['network']['id']
for vif in network_info])
self.networks = list(set(instance_networks))
def update_from_compute_node(self, compute):
"""Update information about a host from its compute_node info."""
if (self.updated and compute['updated_at']
and self.updated > compute['updated_at']):
return
all_ram_mb = compute['memory_mb']
# Assume virtual size is all consumed by instances if use qcow2 disk.
free_gb = compute['free_disk_gb']
least_gb = compute.get('disk_available_least')
if least_gb is not None:
if least_gb > free_gb:
# can occur when an instance in database is not on host
LOG.warn(_("Host has more disk space than database expected"
" (%(physical)sgb > %(database)sgb)") %
{'physical': least_gb, 'database': free_gb})
free_gb = min(least_gb, free_gb)
free_disk_mb = free_gb * 1024
self.disk_mb_used = compute['local_gb_used'] * 1024
#NOTE(jogo) free_ram_mb can be negative
self.free_ram_mb = compute['free_ram_mb']
self.total_usable_ram_mb = all_ram_mb
self.total_usable_disk_gb = compute['local_gb']
self.free_disk_mb = free_disk_mb
self.vcpus_total = compute['vcpus']
self.vcpus_used = compute['vcpus_used']
self.updated = compute['updated_at']
if 'pci_stats' in compute:
self.pci_stats = pci_stats.PciDeviceStats(compute['pci_stats'])
else:
self.pci_stats = None
# All virt drivers report host_ip
self.host_ip = compute['host_ip']
self.hypervisor_type = compute.get('hypervisor_type')
self.hypervisor_version = compute.get('hypervisor_version')
self.hypervisor_hostname = compute.get('hypervisor_hostname')
self.cpu_info = compute.get('cpu_info')
if compute.get('supported_instances'):
self.supported_instances = jsonutils.loads(
compute.get('supported_instances'))
# Don't store stats directly in host_state to make sure these don't
# overwrite any values, or get overwritten themselves. Store in self so
# filters can schedule with them.
stats = compute.get('stats', None) or '{}'
self.stats = jsonutils.loads(stats)
self.hypervisor_version = compute['hypervisor_version']
# Track number of instances on host
self.num_instances = int(self.stats.get('num_instances', 0))
# Track number of instances by project_id
super(SolverSchedulerHostState, self).update_from_compute_node(
compute)
# Track projects in the compute node
project_id_keys = [k for k in self.stats.keys() if
k.startswith("num_proj_")]
for key in project_id_keys:
project_id = key[9:]
self.num_instances_by_project[project_id] = int(self.stats[key])
# Track number of instances in certain vm_states
vm_state_keys = [k for k in self.stats.keys() if
k.startswith("num_vm_")]
for key in vm_state_keys:
vm_state = key[7:]
self.vm_states[vm_state] = int(self.stats[key])
# Track number of instances in certain task_states
task_state_keys = [k for k in self.stats.keys() if
k.startswith("num_task_")]
for key in task_state_keys:
task_state = key[9:]
self.task_states[task_state] = int(self.stats[key])
# Track number of instances by host_type
os_keys = [k for k in self.stats.keys() if
k.startswith("num_os_type_")]
for key in os_keys:
os = key[12:]
self.num_instances_by_os_type[os] = int(self.stats[key])
# Track the number of projects on host
self.projects = [k[9:] for k in self.stats.keys() if
k.startswith("num_proj_") and int(self.stats[k]) > 0]
self.num_io_ops = int(self.stats.get('io_workload', 0))
# update metrics
self._update_metrics_from_compute_node(compute)
if int(self.stats[key]) > 0:
self.projects.append(project_id)
def consume_from_instance(self, instance):
"""Incrementally update host state from an instance."""
disk_mb = (instance['root_gb'] + instance['ephemeral_gb']) * 1024
ram_mb = instance['memory_mb']
vcpus = instance['vcpus']
self.free_ram_mb -= ram_mb
self.free_disk_mb -= disk_mb
self.vcpus_used += vcpus
self.updated = timeutils.utcnow()
# Track number of instances on host
self.num_instances += 1
# Track number of instances by project_id
project_id = instance.get('project_id')
if project_id not in self.num_instances_by_project:
self.num_instances_by_project[project_id] = 0
self.num_instances_by_project[project_id] += 1
# Track number of instances in certain vm_states
vm_state = instance.get('vm_state', vm_states.BUILDING)
if vm_state not in self.vm_states:
self.vm_states[vm_state] = 0
self.vm_states[vm_state] += 1
# Track number of instances in certain task_states
task_state = instance.get('task_state')
if task_state not in self.task_states:
self.task_states[task_state] = 0
self.task_states[task_state] += 1
# Track number of instances by host_type
os_type = instance.get('os_type')
if os_type not in self.num_instances_by_os_type:
self.num_instances_by_os_type[os_type] = 0
self.num_instances_by_os_type[os_type] += 1
pci_requests = pci_request.get_instance_pci_requests(instance)
if pci_requests and self.pci_stats:
self.pci_stats.apply_requests(pci_requests)
vm_state = instance.get('vm_state', vm_states.BUILDING)
task_state = instance.get('task_state')
if vm_state == vm_states.BUILDING or task_state in [
task_states.RESIZE_MIGRATING, task_states.REBUILDING,
task_states.RESIZE_PREP, task_states.IMAGE_SNAPSHOT,
task_states.IMAGE_BACKUP]:
self.num_io_ops += 1
# Track the number of projects
super(SolverSchedulerHostState, self).consume_from_instance(instance)
# Track projects in the compute node
project_id = instance.get('project_id')
if project_id not in self.projects:
self.projects.append(project_id)
# Track aggregate stats
project_id = instance.get('project_id')
for aggr in self.host_aggregates_stats:
aggregate_project_list = self.host_aggregates_stats[aggr].get(
'projects', [])
if project_id not in aggregate_project_list:
self.host_aggregates_stats[aggr]['projects'].append(project_id)
def update_from_networks(self, requested_networks):
for network_id, fixed_ip, port_id in requested_networks:
if network_id:
if network_id not in self.networks:
self.networks.append(network_id)
if network_id not in self.aggregated_networks:
for device in self.aggregated_networks:
self.aggregated_networks[device].append(network_id)
# do this for host aggregates
for aggr in self.host_aggregates_stats:
host_aggr_network_list = self.host_aggregates_stats[
aggr].get('networks', [])
if network_id not in host_aggr_network_list:
self.host_aggregates_stats[aggr][
'networks'].append(network_id)
def __repr__(self):
return ("(%s, %s) ram:%s disk:%s io_ops:%s instances:%s "
"physnet_config:%s networks:%s rack_networks:%s "
"projects:%s aggregate_stats:%s" %
(self.host, self.nodename, self.free_ram_mb, self.free_disk_mb,
self.num_io_ops, self.num_instances, self.physnet_config,
self.networks, self.rack_networks, self.projects,
self.host_aggregates_stats))
class SolverSchedulerHostManager(host_manager.HostManager):
"""HostManager class for solver scheduler."""
# Can be overridden in a subclass
host_state_cls = HostState
host_state_cls = SolverSchedulerHostState
def __init__(self, *args, **kwargs):
super(SolverSchedulerHostManager, self).__init__(*args, **kwargs)
@ -342,194 +136,3 @@ class SolverSchedulerHostManager(host_manager.HostManager):
hosts = name_to_cls_map.itervalues()
return hosts
def get_filtered_hosts(self, hosts, filter_properties,
filter_class_names=None, index=0):
"""Filter hosts and return only ones passing all filters."""
# NOTE(Yathi): Calling the method to apply ignored and forced options
hosts = self.get_hosts_stripping_ignored_and_forced(hosts,
filter_properties)
force_hosts = filter_properties.get('force_hosts', [])
force_nodes = filter_properties.get('force_nodes', [])
if force_hosts or force_nodes:
# NOTE: Skip filters when forcing host or node
return list(hosts)
filter_classes = self._choose_host_filters(filter_class_names)
return self.filter_handler.get_filtered_objects(filter_classes,
hosts, filter_properties, index)
def _get_aggregate_stats(self, context, host_state_map):
"""Update certain stats for the aggregates of the hosts."""
aggregates = aggregate_obj.AggregateList.get_all(context)
host_state_list_map = {}
for (host, node) in host_state_map.keys():
current_list = host_state_list_map.get(host, None)
state = host_state_map[(host, node)]
if not current_list:
host_state_list_map[host] = [state]
else:
host_state_list_map[host] = current_list.append(state)
for aggregate in aggregates:
hosts = aggregate.hosts
projects = set()
networks = set()
# Collect all the projects from all the member hosts
aggr_host_states = []
for host in hosts:
host_state_list = host_state_list_map.get(host, None) or []
aggr_host_states += host_state_list
for host_state in host_state_list:
projects = projects.union(host_state.projects)
networks = networks.union(host_state.networks)
aggregate_stats = {'hosts': hosts,
'projects': list(projects),
'networks': list(networks),
'metadata': aggregate.metadata}
# Now set this value to all the member host_states
for host_state in aggr_host_states:
host_state.host_aggregates_stats[
aggregate.name] = aggregate_stats
def _get_rack_states(self, context, host_state_map):
"""Retrieve the physical and virtual network states of the hosts.
"""
def _get_physnet_mappings():
"""Get physical network topologies from a Neutron config file.
This is a hard-coded function which currently only supports the
Cisco Nexus driver for the Neutron ML2 plugin.
"""
# NOTE(Xinyuan): This feature is for POC only!
# TODO(Xinyuan): further works are required in implementing
# Neutron API extensions to get related information.
host2device_map = {}
device2host_map = {}
sections = {}
state_keys = host_state_map.keys()
hostname_list = [host for (host, node) in state_keys]
try:
physnet_config_parser = cfg.ConfigParser(
CONF.physnet_config_file, sections)
physnet_config_parser.parse()
except Exception:
LOG.warn(_("Physnet config file was not parsed properly."))
# Example section:
# [ml2_mech_cisco_nexus:1.1.1.1]
# compute1=1/1
# compute2=1/2
# ssh_port=22
# username=admin
# password=mySecretPassword
for parsed_item in sections.keys():
dev_id, sep, dev_ip = parsed_item.partition(':')
if dev_id.lower() == 'ml2_mech_cisco_nexus':
for key, value in sections[parsed_item].items():
if key in hostname_list:
hostname = key
portid = value[0]
host2device_map.setdefault(hostname, [])
host2device_map[hostname].append((dev_ip, portid))
device2host_map.setdefault(dev_ip, [])
device2host_map[dev_ip].append((hostname, portid))
return host2device_map, device2host_map
def _get_rack_networks(host_dev_map, dev_host_map, host_state_map):
"""Aggregate the networks associated with a group of hosts in
the same physical group (e.g. under the same ToR switch)
"""
rack_networks = {}
if not dev_host_map or not host_dev_map:
return rack_networks
host_networks = {}
for state_key in host_state_map.keys():
(host, node) = state_key
host_state = host_state_map[state_key]
host_networks.setdefault(host, set())
host_networks[host] = host_networks[host].union(
host_state.networks)
# aggregate hosted networks for each upper level device
dev_networks = {}
for dev_id in dev_host_map.keys():
current_dev_networks = set()
for (host_name, port_id) in dev_host_map[dev_id]:
current_dev_networks = current_dev_networks.union(
host_networks.get(host_name, []))
dev_networks[dev_id] = list(current_dev_networks)
# make aggregated networks list for each hosts
for host_name in host_dev_map.keys():
dev_list = list(set([dev_id for (dev_id, physport)
in host_dev_map.get(host_name, [])]))
host_rack_networks = {}
for dev in dev_list:
host_rack_networks[dev] = dev_networks.get(dev, [])
rack_networks[host_name] = host_rack_networks
return rack_networks
host_dev_map, dev_host_map = _get_physnet_mappings()
rack_networks = _get_rack_networks(
host_dev_map, dev_host_map, host_state_map)
for state_key in host_state_map.keys():
host_state = self.host_state_map[state_key]
(host, node) = state_key
host_state.physnet_config = host_dev_map.get(host, [])
host_state.rack_networks = rack_networks.get(host, [])
def get_all_host_states(self, context):
"""Returns a list of HostStates that represents all the hosts
the HostManager knows about. Also, each of the consumable resources
in HostState are pre-populated and adjusted based on data in the db.
"""
# Get resource usage across the available compute nodes:
compute_nodes = db.compute_node_get_all(context)
seen_nodes = set()
for compute in compute_nodes:
service = compute['service']
if not service:
LOG.warn(_("No service for compute ID %s") % compute['id'])
continue
host = service['host']
node = compute.get('hypervisor_hostname')
state_key = (host, node)
capabilities = self.service_states.get(state_key, None)
host_state = self.host_state_map.get(state_key)
if host_state:
host_state.update_capabilities(capabilities,
dict(service.iteritems()))
else:
host_state = self.host_state_cls(host, node,
capabilities=capabilities,
service=dict(service.iteritems()))
self.host_state_map[state_key] = host_state
host_state.update_from_compute_node(compute)
# update information from hosted instances
host_state.update_from_hosted_instances(context, compute)
seen_nodes.add(state_key)
# remove compute nodes from host_state_map if they are not active
dead_nodes = set(self.host_state_map.keys()) - seen_nodes
for state_key in dead_nodes:
host, node = state_key
LOG.info(_("Removing dead compute node %(host)s:%(node)s "
"from scheduler") % {'host': host, 'node': node})
del self.host_state_map[state_key]
# get information from groups of hosts
# NOTE(Xinyuan): currently for POC only.
self._get_rack_states(context, self.host_state_map)
self._get_aggregate_stats(context, self.host_state_map)
return self.host_state_map.itervalues()


@ -17,73 +17,73 @@
Scheduler host constraint solvers
"""
from nova.scheduler.solvers import costs
from nova.scheduler.solvers import linearconstraints
from oslo.config import cfg
scheduler_solver_costs_opt = cfg.ListOpt(
'scheduler_solver_costs',
default=['RamCost'],
help='Which cost matrices to use in the scheduler solver.')
from nova.scheduler.solvers import constraints
from nova.scheduler.solvers import costs
from nova import solver_scheduler_exception as exception
# (xinyuan) This option should be changed to DictOpt type
# when bug #1276859 is fixed.
scheduler_solver_cost_weights_opt = cfg.ListOpt(
'scheduler_solver_cost_weights',
default=['RamCost:1.0'],
help='Assign weight for each cost')
scheduler_solver_constraints_opt = cfg.ListOpt(
'scheduler_solver_constraints',
default=[],
help='Which constraints to use in scheduler solver')
scheduler_solver_opts = [
cfg.ListOpt('scheduler_solver_costs',
default=['RamCost'],
help='Which cost matrices to use in the '
'scheduler solver.'),
cfg.ListOpt('scheduler_solver_constraints',
default=['ActiveHostsConstraint'],
help='Which constraints to use in scheduler solver'),
]
CONF = cfg.CONF
CONF.register_opt(scheduler_solver_costs_opt, group='solver_scheduler')
CONF.register_opt(scheduler_solver_cost_weights_opt, group='solver_scheduler')
CONF.register_opt(scheduler_solver_constraints_opt, group='solver_scheduler')
SOLVER_CONF = CONF.solver_scheduler
CONF.register_opts(scheduler_solver_opts, group='solver_scheduler')
class BaseHostSolver(object):
"""Base class for host constraint solvers."""
def __init__(self):
super(BaseHostSolver, self).__init__()
def _get_cost_classes(self):
"""Get cost classes from configuration."""
cost_classes = []
bad_cost_names = []
cost_handler = costs.CostHandler()
all_cost_classes = cost_handler.get_all_classes()
expected_costs = SOLVER_CONF.scheduler_solver_costs
for cost in all_cost_classes:
if cost.__name__ in expected_costs:
cost_classes.append(cost)
all_cost_names = [c.__name__ for c in all_cost_classes]
expected_costs = CONF.solver_scheduler.scheduler_solver_costs
for cost in expected_costs:
if cost in all_cost_names:
cost_classes.append(all_cost_classes[
all_cost_names.index(cost)])
else:
bad_cost_names.append(cost)
if bad_cost_names:
msg = ", ".join(bad_cost_names)
raise exception.SchedulerSolverCostNotFound(cost_name=msg)
return cost_classes
def _get_constraint_classes(self):
"""Get constraint classes from configuration."""
constraint_classes = []
constraint_handler = linearconstraints.LinearConstraintHandler()
bad_constraint_names = []
constraint_handler = constraints.ConstraintHandler()
all_constraint_classes = constraint_handler.get_all_classes()
expected_constraints = SOLVER_CONF.scheduler_solver_constraints
for constraint in all_constraint_classes:
if constraint.__name__ in expected_constraints:
constraint_classes.append(constraint)
all_constraint_names = [c.__name__ for c in all_constraint_classes]
expected_constraints = (
CONF.solver_scheduler.scheduler_solver_constraints)
for constraint in expected_constraints:
if constraint in all_constraint_names:
constraint_classes.append(all_constraint_classes[
all_constraint_names.index(constraint)])
else:
bad_constraint_names.append(constraint)
if bad_constraint_names:
msg = ", ".join(bad_constraint_names)
raise exception.SchedulerSolverConstraintNotFound(
constraint_name=msg)
return constraint_classes
def _get_cost_weights(self):
"""Get cost weights from configuration."""
cost_weights = {}
# (xinyuan) This is a temporary workaround for bug #1276859,
# need to wait until DictOpt is supported by config sample generator.
weights_str_list = SOLVER_CONF.scheduler_solver_cost_weights
for weight_str in weights_str_list:
(key, sep, val) = weight_str.partition(':')
cost_weights[str(key)] = float(val)
return cost_weights
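The key:value string parsing done in _get_cost_weights (the workaround for bug #1276859 noted above) can be reproduced standalone; this sketch (parse_cost_weights is an illustrative name, not project code) shows how entries such as 'RamCost:1.0' become a weight dict:

```python
def parse_cost_weights(weight_str_list):
    """Turn ['RamCost:1.0', ...] into {'RamCost': 1.0, ...},
    mirroring _get_cost_weights' use of str.partition(':')."""
    cost_weights = {}
    for weight_str in weight_str_list:
        key, _sep, val = weight_str.partition(':')
        cost_weights[str(key)] = float(val)
    return cost_weights
```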
def host_solve(self, hosts, instance_uuids, request_spec,
filter_properties):
def solve(self, hosts, filter_properties):
"""Return the list of host-instance tuples after
solving the constraints.
Implement this in a subclass.


@ -0,0 +1,97 @@
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Constraints for scheduler constraint solvers
"""
from nova import loadables
from nova.scheduler import filters
class BaseConstraint(object):
"""Base class for constraints."""
precedence = 0
def get_components(self, variables, hosts, filter_properties):
"""Return the components of the constraint."""
raise NotImplementedError()
class BaseLinearConstraint(BaseConstraint):
"""Base class of LP constraint."""
def __init__(self):
self._reset()
def _reset(self):
self.variables = []
self.coefficients = []
self.constants = []
self.operators = []
def _generate_components(self, variables, hosts, filter_properties):
# override in a sub class
pass
def get_components(self, variables, hosts, filter_properties):
# deprecated currently, reserve for future use
self._reset()
self._generate_components(variables, hosts, filter_properties)
return (self.variables, self.coefficients, self.constants,
self.operators)
def get_constraint_matrix(self, hosts, filter_properties):
raise NotImplementedError()
class BaseFilterConstraint(BaseLinearConstraint):
"""Base class for constraints that correspond to 1-time host filters."""
# override this in sub classes
host_filter_cls = filters.BaseHostFilter
def __init__(self):
super(BaseFilterConstraint, self).__init__()
self.host_filter = self.host_filter_cls()
def get_constraint_matrix(self, hosts, filter_properties):
num_hosts = len(hosts)
num_instances = filter_properties.get('num_instances')
constraint_matrix = [[True for j in xrange(num_instances)]
for i in xrange(num_hosts)]
for i in xrange(num_hosts):
host_passes = self.host_filter.host_passes(
hosts[i], filter_properties)
if not host_passes:
constraint_matrix[i] = [False for j in xrange(num_instances)]
return constraint_matrix
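The boolean matrix built above is hosts x instances: a row of False rules a host out for every instance of the request. The shape can be illustrated without Nova; host_passes here is a stand-in for the wrapped host filter:

```python
def build_constraint_matrix(hosts, num_instances, host_passes):
    """Return a hosts x instances boolean matrix in which row i is
    all False when host i fails the filter, mirroring
    BaseFilterConstraint.get_constraint_matrix."""
    return [[host_passes(host)] * num_instances for host in hosts]
```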
class ConstraintHandler(loadables.BaseLoader):
def __init__(self):
super(ConstraintHandler, self).__init__(BaseConstraint)
def all_constraints():
"""Return a list of constraint classes found in this directory.
This method is used as the default for available constraints for
scheduler and returns a list of all constraint classes available.
"""
return ConstraintHandler().get_all_classes()
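The `BaseFilterConstraint` pattern above wraps a boolean host filter and expands it into a hosts-by-instances matrix. A standalone sketch of that expansion, outside Nova (the `filter_to_matrix` helper and toy host dicts are illustrative, not Nova API):

```python
# A boolean host filter expands into a hosts x instances matrix:
# a failing host vetoes every instance slot in its row, mirroring
# BaseFilterConstraint.get_constraint_matrix above.

def filter_to_matrix(hosts, num_instances, host_passes):
    matrix = []
    for host in hosts:
        ok = host_passes(host)          # one filter call per host
        matrix.append([ok] * num_instances)
    return matrix

hosts = [{'name': 'h1', 'active': True}, {'name': 'h2', 'active': False}]
matrix = filter_to_matrix(hosts, 3, lambda h: h['active'])
# matrix == [[True, True, True], [False, False, False]]
```

This is why filter-derived constraints are all-or-nothing per host, unlike the capacity constraints later in this patch, which can accept only some of the requested instances.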

@@ -0,0 +1,22 @@
# Copyright (c) 2014 Cisco Systems Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova.scheduler.filters import compute_filter
from nova.scheduler.solvers import constraints
class ActiveHostsConstraint(constraints.BaseFilterConstraint):
"""Constraint that only allows active hosts to be selected."""
host_filter_cls = compute_filter.ComputeFilter

@@ -0,0 +1,34 @@
# Copyright (c) 2014 Cisco Systems Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova.scheduler.filters import affinity_filter
from nova.scheduler.solvers import constraints
class SameHostConstraint(constraints.BaseFilterConstraint):
"""Schedule the instance on the same host as another instance in a set
of instances.
"""
host_filter_cls = affinity_filter.SameHostFilter
class DifferentHostConstraint(constraints.BaseFilterConstraint):
"""Schedule the instance on a different host from a set of instances."""
host_filter_cls = affinity_filter.DifferentHostFilter
class SimpleCidrAffinityConstraint(constraints.BaseFilterConstraint):
"""Schedule the instance on a host with a particular cidr."""
host_filter_cls = affinity_filter.SimpleCIDRAffinityFilter
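The affinity constraints above consume scheduler hints passed on the boot request through `filter_properties`. An Icehouse-era example with the nova CLI (image, flavor, instance name, and UUID below are illustrative):

```shell
# Place vm-2 on a different host than the given instance; the hint
# reaches DifferentHostConstraint via the request's scheduler hints.
nova boot --image cirros --flavor m1.tiny \
    --hint different_host=8c19174f-4220-44f0-824a-cd1eeef10287 vm-2
```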

@@ -0,0 +1,60 @@
# Copyright (c) 2014 Cisco Systems Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo.config import cfg
from nova import db
from nova.openstack.common.gettextutils import _
from nova.openstack.common import log as logging
from nova.scheduler.solvers.constraints import disk_constraint
CONF = cfg.CONF
CONF.import_opt('disk_allocation_ratio', 'nova.scheduler.filters.disk_filter')
LOG = logging.getLogger(__name__)
class AggregateDiskConstraint(disk_constraint.DiskConstraint):
"""AggregateDiskConstraint with per-aggregate disk subscription flag.
Fall back to global disk_allocation_ratio if no per-aggregate setting
found.
"""
def _get_disk_allocation_ratio(self, host_state, filter_properties):
context = filter_properties['context'].elevated()
# TODO(uni): DB query in filter is a performance hit, especially for
# system with lots of hosts. Will need a general solution here to fix
# all filters with aggregate DB call things.
metadata = db.aggregate_metadata_get_by_host(
context, host_state.host, key='disk_allocation_ratio')
aggregate_vals = metadata.get('disk_allocation_ratio', set())
num_values = len(aggregate_vals)
if num_values == 0:
return CONF.disk_allocation_ratio
if num_values > 1:
LOG.warning(_("%(num_values)d ratio values found, "
"of which the minimum value will be used."),
{'num_values': num_values})
try:
ratio = min(map(float, aggregate_vals))
except ValueError as e:
LOG.warning(_("Could not decode disk_allocation_ratio: '%s'"), e)
ratio = CONF.disk_allocation_ratio
return ratio
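The per-aggregate fallback logic above is shared by all three `Aggregate*Constraint` classes in this patch: take the minimum of the per-aggregate values, and fall back to the global default when the metadata set is empty or unparsable. A minimal sketch (`pick_ratio` and `GLOBAL_RATIO` are illustrative stand-ins, not Nova names):

```python
GLOBAL_RATIO = 1.0  # stand-in for CONF.disk_allocation_ratio

def pick_ratio(aggregate_vals, default=GLOBAL_RATIO):
    # aggregate_vals is a set of strings from aggregate metadata
    if not aggregate_vals:
        return default
    try:
        return min(float(v) for v in aggregate_vals)
    except ValueError:
        # unparsable metadata value: fall back to the global ratio
        return default

pick_ratio({'1.5', '2.0'})  # the minimum value wins
pick_ratio(set())           # no per-aggregate setting: global default
```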

@@ -0,0 +1,24 @@
# Copyright (c) 2014 Cisco Systems Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova.scheduler.filters import aggregate_image_properties_isolation
from nova.scheduler.solvers import constraints
class AggregateImagePropertiesIsolationConstraint(
constraints.BaseFilterConstraint):
"""AggregateImagePropertiesIsolation works with image properties."""
host_filter_cls = aggregate_image_properties_isolation.\
AggregateImagePropertiesIsolation

@@ -0,0 +1,23 @@
# Copyright (c) 2014 Cisco Systems Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova.scheduler.filters import aggregate_instance_extra_specs
from nova.scheduler.solvers import constraints
class AggregateInstanceExtraSpecsConstraint(constraints.BaseFilterConstraint):
"""AggregateInstanceExtraSpecsFilter works with InstanceType records."""
host_filter_cls = aggregate_instance_extra_specs.\
AggregateInstanceExtraSpecsFilter

@@ -0,0 +1,24 @@
# Copyright (c) 2014 Cisco Systems Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova.scheduler.filters import aggregate_multitenancy_isolation
from nova.scheduler.solvers import constraints
class AggregateMultiTenancyIsolationConstraint(
constraints.BaseFilterConstraint):
"""Isolate tenants in specific aggregates."""
host_filter_cls = aggregate_multitenancy_isolation.\
AggregateMultiTenancyIsolation

@@ -0,0 +1,59 @@
# Copyright (c) 2014 Cisco Systems Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo.config import cfg
from nova import db
from nova.openstack.common.gettextutils import _
from nova.openstack.common import log as logging
from nova.scheduler.solvers.constraints import ram_constraint
CONF = cfg.CONF
CONF.import_opt('ram_allocation_ratio', 'nova.scheduler.filters.ram_filter')
LOG = logging.getLogger(__name__)
class AggregateRamConstraint(ram_constraint.RamConstraint):
"""AggregateRamConstraint with per-aggregate ram subscription flag.
Fall back to global ram_allocation_ratio if no per-aggregate setting found.
"""
def _get_ram_allocation_ratio(self, host_state, filter_properties):
context = filter_properties['context'].elevated()
# TODO(uni): DB query in filter is a performance hit, especially for
# system with lots of hosts. Will need a general solution here to fix
# all filters with aggregate DB call things.
metadata = db.aggregate_metadata_get_by_host(
context, host_state.host, key='ram_allocation_ratio')
aggregate_vals = metadata.get('ram_allocation_ratio', set())
num_values = len(aggregate_vals)
if num_values == 0:
return CONF.ram_allocation_ratio
if num_values > 1:
LOG.warning(_("%(num_values)d ratio values found, "
"of which the minimum value will be used."),
{'num_values': num_values})
try:
ratio = min(map(float, aggregate_vals))
except ValueError as e:
LOG.warning(_("Could not decode ram_allocation_ratio: '%s'"), e)
ratio = CONF.ram_allocation_ratio
return ratio

@@ -0,0 +1,26 @@
# Copyright (c) 2014 Cisco Systems Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova.scheduler.filters import type_filter
from nova.scheduler.solvers import constraints
class AggregateTypeAffinityConstraint(constraints.BaseFilterConstraint):
"""AggregateTypeAffinityFilter limits instance_type by aggregate
return True if no instance_type key is set or if the aggregate metadata
key 'instance_type' has the instance_type name as a value
"""
host_filter_cls = type_filter.AggregateTypeAffinityFilter

@@ -0,0 +1,59 @@
# Copyright (c) 2014 Cisco Systems Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo.config import cfg
from nova import db
from nova.openstack.common.gettextutils import _
from nova.openstack.common import log as logging
from nova.scheduler.solvers.constraints import vcpu_constraint
CONF = cfg.CONF
CONF.import_opt('cpu_allocation_ratio', 'nova.scheduler.filters.core_filter')
LOG = logging.getLogger(__name__)
class AggregateVcpuConstraint(vcpu_constraint.VcpuConstraint):
"""AggregateVcpuConstraint with per-aggregate CPU subscription flag.
Fall back to global cpu_allocation_ratio if no per-aggregate setting found.
"""
def _get_cpu_allocation_ratio(self, host_state, filter_properties):
context = filter_properties['context'].elevated()
# TODO(uni): DB query in filter is a performance hit, especially for
# system with lots of hosts. Will need a general solution here to fix
# all filters with aggregate DB call things.
metadata = db.aggregate_metadata_get_by_host(
context, host_state.host, key='cpu_allocation_ratio')
aggregate_vals = metadata.get('cpu_allocation_ratio', set())
num_values = len(aggregate_vals)
if num_values == 0:
return CONF.cpu_allocation_ratio
if num_values > 1:
LOG.warning(_("%(num_values)d ratio values found, "
"of which the minimum value will be used."),
{'num_values': num_values})
try:
ratio = min(map(float, aggregate_vals))
except ValueError as e:
LOG.warning(_("Could not decode cpu_allocation_ratio: '%s'"), e)
ratio = CONF.cpu_allocation_ratio
return ratio

@@ -0,0 +1,27 @@
# Copyright (c) 2014 Cisco Systems Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova.scheduler.filters import availability_zone_filter
from nova.scheduler.solvers import constraints
class AvailabilityZoneConstraint(constraints.BaseFilterConstraint):
"""Selects Hosts by availability zone.
Works with aggregate metadata availability zones, using the key
'availability_zone'
Note: in theory a compute node can be part of multiple availability_zones
"""
host_filter_cls = availability_zone_filter.AvailabilityZoneFilter

@@ -0,0 +1,22 @@
# Copyright (c) 2014 Cisco Systems Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova.scheduler.filters import compute_capabilities_filter
from nova.scheduler.solvers import constraints
class ComputeCapabilitiesConstraint(constraints.BaseFilterConstraint):
"""Hard-coded to work with InstanceType records."""
host_filter_cls = compute_capabilities_filter.ComputeCapabilitiesFilter

@@ -0,0 +1,77 @@
# Copyright (c) 2014 Cisco Systems Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo.config import cfg
from nova.openstack.common.gettextutils import _
from nova.openstack.common import log as logging
from nova.scheduler.solvers import constraints
CONF = cfg.CONF
CONF.import_opt('disk_allocation_ratio', 'nova.scheduler.filters.disk_filter')
LOG = logging.getLogger(__name__)
class DiskConstraint(constraints.BaseLinearConstraint):
"""Constraint of the maximum total disk demand acceptable on each host."""
def get_constraint_matrix(self, hosts, filter_properties):
num_hosts = len(hosts)
num_instances = filter_properties.get('num_instances')
constraint_matrix = [[True for j in xrange(num_instances)]
for i in xrange(num_hosts)]
# get requested disk
instance_type = filter_properties.get('instance_type') or {}
requested_disk = (1024 * (instance_type.get('root_gb', 0) +
instance_type.get('ephemeral_gb', 0)) +
instance_type.get('swap', 0))
for inst_type_key in ['root_gb', 'ephemeral_gb', 'swap']:
if inst_type_key not in instance_type:
LOG.warn(_("Disk information of requested instances\' %s "
"is incomplete, use 0 as the requested size.") %
inst_type_key)
if requested_disk <= 0:
LOG.warn(_("DiskConstraint is skipped because requested "
"instance disk size is 0 or invalid."))
return constraint_matrix
for i in xrange(num_hosts):
# get usable disk
free_disk_mb = hosts[i].free_disk_mb
total_usable_disk_mb = hosts[i].total_usable_disk_gb * 1024
disk_mb_limit = total_usable_disk_mb * CONF.disk_allocation_ratio
used_disk_mb = total_usable_disk_mb - free_disk_mb
usable_disk_mb = disk_mb_limit - used_disk_mb
acceptable_num_instances = int(usable_disk_mb / requested_disk)
if acceptable_num_instances < num_instances:
inacceptable_num = (num_instances - acceptable_num_instances)
constraint_matrix[i] = (
[True for j in xrange(acceptable_num_instances)] +
[False for j in xrange(inacceptable_num)])
LOG.debug(_("%(host)s can accept %(num)s requested instances "
"according to DiskConstraint."),
{'host': hosts[i],
'num': acceptable_num_instances})
disk_gb_limit = disk_mb_limit / 1024
hosts[i].limits['disk_gb'] = disk_gb_limit
return constraint_matrix
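A worked example of the DiskConstraint arithmetic above, with made-up numbers (the helper mirrors the loop body for illustration; it is not part of the scheduler API):

```python
def acceptable_instances(total_gb, free_mb, ratio, root_gb, ephemeral_gb,
                         swap_mb):
    # flavor demand, in MB: root and ephemeral are in GB, swap in MB
    requested_mb = 1024 * (root_gb + ephemeral_gb) + swap_mb
    total_mb = total_gb * 1024
    limit_mb = total_mb * ratio       # oversubscription cap
    used_mb = total_mb - free_mb      # already consumed
    usable_mb = limit_mb - used_mb
    return int(usable_mb / requested_mb)

# 100 GB host with 60 GB free, ratio 1.0, flavor of 10 GB root + 5 GB
# ephemeral and no swap:
n = acceptable_instances(100, 60 * 1024, 1.0, 10, 5, 0)
# n == 4  (61440 usable MB // 15360 requested MB)
```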

@@ -0,0 +1,28 @@
# Copyright (c) 2014 Cisco Systems Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova.scheduler.filters import image_props_filter
from nova.scheduler.solvers import constraints
class ImagePropertiesConstraint(constraints.BaseFilterConstraint):
"""Select compute nodes that satisfy instance image properties.
The ImagePropertiesConstraint selects compute nodes that satisfy
any architecture, hypervisor type, or virtual machine mode properties
specified on the instance's image properties. Image properties are
contained in the image dictionary in the request_spec.
"""
host_filter_cls = image_props_filter.ImagePropertiesFilter

@@ -0,0 +1,59 @@
# Copyright (c) 2014 Cisco Systems Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo.config import cfg
from nova.openstack.common.gettextutils import _
from nova.openstack.common import log as logging
from nova.scheduler.solvers import constraints
LOG = logging.getLogger(__name__)
CONF = cfg.CONF
CONF.import_opt('max_io_ops_per_host', 'nova.scheduler.filters.io_ops_filter')
class IoOpsConstraint(constraints.BaseLinearConstraint):
"""A constraint to ensure only those hosts are selected whose number of
concurrent I/O operations are within a set threshold.
"""
def get_constraint_matrix(self, hosts, filter_properties):
max_io_ops = CONF.max_io_ops_per_host
num_hosts = len(hosts)
num_instances = filter_properties.get('num_instances')
constraint_matrix = [[True for j in xrange(num_instances)]
for i in xrange(num_hosts)]
for i in xrange(num_hosts):
num_io_ops = hosts[i].num_io_ops
acceptable_num_instances = int(max_io_ops - num_io_ops)
if acceptable_num_instances < 0:
acceptable_num_instances = 0
if acceptable_num_instances < num_instances:
inacceptable_num = (num_instances - acceptable_num_instances)
constraint_matrix[i] = (
[True for j in xrange(acceptable_num_instances)] +
[False for j in xrange(inacceptable_num)])
LOG.debug(_("%(host)s can accept %(num)s requested instances "
"according to IoOpsConstraint."),
{'host': hosts[i],
'num': acceptable_num_instances})
return constraint_matrix
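The capacity-style constraints in this patch (disk, I/O ops, instance count, RAM, PCI) all build a partially-acceptable host's row the same way: the first `acceptable` instance slots stay True and the rest turn False. A minimal sketch of that row construction (illustrative only):

```python
def capacity_row(acceptable, num_instances):
    # clamp to [0, num_instances], as the constraints above do implicitly
    acceptable = max(0, min(acceptable, num_instances))
    return [True] * acceptable + [False] * (num_instances - acceptable)

capacity_row(2, 5)   # host can take 2 of the 5 requested instances
capacity_row(-3, 2)  # negative headroom clamps to an all-False row
```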

@@ -0,0 +1,22 @@
# Copyright (c) 2011-2012 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova.scheduler.filters import isolated_hosts_filter
from nova.scheduler.solvers import constraints
class IsolatedHostsConstraint(constraints.BaseFilterConstraint):
"""Keep specified images to selected hosts."""
host_filter_cls = isolated_hosts_filter.IsolatedHostsFilter

@@ -0,0 +1,24 @@
# Copyright (c) 2014 Cisco Systems Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova.scheduler.filters import json_filter
from nova.scheduler.solvers import constraints
class JsonConstraint(constraints.BaseFilterConstraint):
"""Constraint to allow simple JSON-based grammar for
selecting hosts.
"""
host_filter_cls = json_filter.JsonFilter

@@ -0,0 +1,24 @@
# Copyright (c) 2014 Cisco Systems Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova.scheduler.filters import metrics_filter
from nova.scheduler.solvers import constraints
class MetricsConstraint(constraints.BaseFilterConstraint):
"""This constraint is used to filter out those hosts which don't have the
corresponding metrics.
"""
host_filter_cls = metrics_filter.MetricsFilter

@@ -0,0 +1,29 @@
# Copyright (c) 2014 Cisco Systems Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova.scheduler.solvers import constraints
class NoConstraint(constraints.BaseLinearConstraint):
"""No-op constraint that returns empty linear constraint components."""
def get_constraint_matrix(self, hosts, filter_properties):
num_hosts = len(hosts)
num_instances = filter_properties.get('num_instances')
constraint_matrix = [[True for j in xrange(num_instances)]
for i in xrange(num_hosts)]
return constraint_matrix

@@ -0,0 +1,59 @@
# Copyright (c) 2014 Cisco Systems Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo.config import cfg
from nova.openstack.common.gettextutils import _
from nova.openstack.common import log as logging
from nova.scheduler.solvers import constraints
CONF = cfg.CONF
CONF.import_opt("max_instances_per_host",
"nova.scheduler.filters.num_instances_filter")
LOG = logging.getLogger(__name__)
class NumInstancesConstraint(constraints.BaseLinearConstraint):
"""Constraint that specifies the maximum number of instances that
each host can launch.
"""
def get_constraint_matrix(self, hosts, filter_properties):
num_hosts = len(hosts)
num_instances = filter_properties.get('num_instances')
constraint_matrix = [[True for j in xrange(num_instances)]
for i in xrange(num_hosts)]
max_instances = CONF.max_instances_per_host
for i in xrange(num_hosts):
num_host_instances = hosts[i].num_instances
acceptable_num_instances = int(max_instances - num_host_instances)
if acceptable_num_instances < 0:
acceptable_num_instances = 0
if acceptable_num_instances < num_instances:
inacceptable_num = num_instances - acceptable_num_instances
constraint_matrix[i] = (
[True for j in xrange(acceptable_num_instances)] +
[False for j in xrange(inacceptable_num)])
LOG.debug(_("%(host)s can accept %(num)s requested instances "
"according to NumInstancesConstraint."),
{'host': hosts[i],
'num': acceptable_num_instances})
return constraint_matrix

@@ -0,0 +1,80 @@
# Copyright (c) 2011-2012 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import copy
from nova.openstack.common.gettextutils import _
from nova.openstack.common import log as logging
from nova.scheduler.solvers import constraints
LOG = logging.getLogger(__name__)
class PciPassthroughConstraint(constraints.BaseLinearConstraint):
"""Constraint that schedules instances on a host if the host has devices
to meet the device requests in the 'extra_specs' for the flavor.
PCI resource tracker provides updated summary information about the
PCI devices for each host, like:
[{"count": 5, "vendor_id": "8086", "product_id": "1520",
"extra_info":'{}'}],
and VM requests PCI devices via PCI requests, like:
[{"count": 1, "vendor_id": "8086", "product_id": "1520",}].
The constraint checks if the host passes or not based on this information.
"""
def _get_acceptable_pci_requests_times(self, max_times_to_try,
pci_requests, host_pci_stats):
acceptable_times = 0
while acceptable_times < max_times_to_try:
if host_pci_stats.support_requests(pci_requests):
acceptable_times += 1
host_pci_stats.apply_requests(pci_requests)
else:
break
return acceptable_times
def get_constraint_matrix(self, hosts, filter_properties):
num_hosts = len(hosts)
num_instances = filter_properties.get('num_instances')
constraint_matrix = [[True for j in xrange(num_instances)]
for i in xrange(num_hosts)]
pci_requests = filter_properties.get('pci_requests')
if not pci_requests:
LOG.warn(_("PciPassthroughConstraint check is skipped because "
"requested instance PCI requests is unavailable."))
return constraint_matrix
for i in xrange(num_hosts):
host_pci_stats = copy.deepcopy(hosts[i].pci_stats)
acceptable_num_instances = (
self._get_acceptable_pci_requests_times(num_instances,
pci_requests, host_pci_stats))
if acceptable_num_instances < num_instances:
inacceptable_num = num_instances - acceptable_num_instances
constraint_matrix[i] = (
[True for j in xrange(acceptable_num_instances)] +
[False for j in xrange(inacceptable_num)])
LOG.debug(_("%(host)s can accept %(num)s requested instances "
"according to PciPassthroughConstraint."),
{'host': hosts[i],
'num': acceptable_num_instances})
return constraint_matrix
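A toy model of the greedy loop in `_get_acceptable_pci_requests_times` above: repeatedly apply the same PCI request set to a copied device pool until it no longer fits. `ToyPciStats` is a stand-in for illustration; Nova's real PciDeviceStats matches devices by vendor_id/product_id, not by a single counter.

```python
class ToyPciStats(object):
    """Stand-in device pool tracking only a total free-device count."""
    def __init__(self, count):
        self.count = count

    def support_requests(self, requests):
        return self.count >= sum(r['count'] for r in requests)

    def apply_requests(self, requests):
        self.count -= sum(r['count'] for r in requests)


def acceptable_times(max_times, requests, stats):
    times = 0
    while times < max_times and stats.support_requests(requests):
        stats.apply_requests(requests)  # consume devices from the copy
        times += 1
    return times

times = acceptable_times(10, [{'count': 2}], ToyPciStats(5))
# times == 2: the request fits twice; the leftover single device cannot
```

The deep copy in `get_constraint_matrix` matters: the greedy application mutates the pool, and the real host state must stay untouched.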

@@ -0,0 +1,77 @@
# Copyright (c) 2014 Cisco Systems Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo.config import cfg
from nova.openstack.common.gettextutils import _
from nova.openstack.common import log as logging
from nova.scheduler.solvers import constraints
CONF = cfg.CONF
CONF.import_opt('ram_allocation_ratio', 'nova.scheduler.filters.ram_filter')
LOG = logging.getLogger(__name__)
class RamConstraint(constraints.BaseLinearConstraint):
"""Constraint of the total ram demand acceptable on each host."""
def _get_ram_allocation_ratio(self, host_state, filter_properties):
return CONF.ram_allocation_ratio
def get_constraint_matrix(self, hosts, filter_properties):
num_hosts = len(hosts)
num_instances = filter_properties.get('num_instances')
constraint_matrix = [[True for j in xrange(num_instances)]
for i in xrange(num_hosts)]
# get requested ram
instance_type = filter_properties.get('instance_type') or {}
requested_ram = instance_type.get('memory_mb', 0)
if 'memory_mb' not in instance_type:
LOG.warn(_("No information about requested instances\' RAM size "
"was found, default value (0) is used."))
if requested_ram <= 0:
LOG.warn(_("RamConstraint is skipped because requested "
"instance RAM size is 0 or invalid."))
return constraint_matrix
for i in xrange(num_hosts):
ram_allocation_ratio = self._get_ram_allocation_ratio(
hosts[i], filter_properties)
# get available ram
free_ram_mb = hosts[i].free_ram_mb
total_usable_ram_mb = hosts[i].total_usable_ram_mb
memory_mb_limit = total_usable_ram_mb * ram_allocation_ratio
used_ram_mb = total_usable_ram_mb - free_ram_mb
usable_ram = memory_mb_limit - used_ram_mb
acceptable_num_instances = int(usable_ram / requested_ram)
if acceptable_num_instances < num_instances:
inacceptable_num = num_instances - acceptable_num_instances
constraint_matrix[i] = (
[True for j in xrange(acceptable_num_instances)] +
[False for j in xrange(inacceptable_num)])
LOG.debug(_("%(host)s can accept %(num)s requested instances "
"according to RamConstraint."),
{'host': hosts[i],
'num': acceptable_num_instances})
hosts[i].limits['memory_mb'] = memory_mb_limit
return constraint_matrix

@@ -0,0 +1,22 @@
# Copyright (c) 2014 Cisco Systems Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova.scheduler.filters import retry_filter
from nova.scheduler.solvers import constraints
class RetryConstraint(constraints.BaseFilterConstraint):
"""Selects nodes that have not been attempted for scheduling purposes."""
host_filter_cls = retry_filter.RetryFilter

@@ -0,0 +1,80 @@
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova.scheduler.solvers import constraints
class ServerGroupAffinityConstraint(constraints.BaseLinearConstraint):
    """Force selection of hosts that already host the given server group."""
def __init__(self, *args, **kwargs):
super(ServerGroupAffinityConstraint, self).__init__(*args, **kwargs)
self.policy_name = 'affinity'
def get_constraint_matrix(self, hosts, filter_properties):
num_hosts = len(hosts)
num_instances = filter_properties.get('num_instances')
constraint_matrix = [[True for j in xrange(num_instances)]
for i in xrange(num_hosts)]
policies = filter_properties.get('group_policies', [])
if self.policy_name not in policies:
return constraint_matrix
group_hosts = filter_properties.get('group_hosts')
if not group_hosts:
constraint_matrix = [
([False for j in xrange(num_instances - 1)] + [True])
for i in xrange(num_hosts)]
else:
for i in xrange(num_hosts):
if hosts[i].host not in group_hosts:
constraint_matrix[i] = [False for
j in xrange(num_instances)]
return constraint_matrix
class ServerGroupAntiAffinityConstraint(constraints.BaseLinearConstraint):
    """Force selection of hosts that do not host the given server group."""
def __init__(self, *args, **kwargs):
super(ServerGroupAntiAffinityConstraint, self).__init__(
*args, **kwargs)
self.policy_name = 'anti-affinity'
def get_constraint_matrix(self, hosts, filter_properties):
num_hosts = len(hosts)
num_instances = filter_properties.get('num_instances')
constraint_matrix = [[True for j in xrange(num_instances)]
for i in xrange(num_hosts)]
policies = filter_properties.get('group_policies', [])
if self.policy_name not in policies:
return constraint_matrix
group_hosts = filter_properties.get('group_hosts')
for i in xrange(num_hosts):
if hosts[i].host in group_hosts:
constraint_matrix[i] = [False for j in xrange(num_instances)]
else:
constraint_matrix[i] = ([True] + [False for
j in xrange(num_instances - 1)])
return constraint_matrix
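Read row-wise, entry [i][j] of the constraint matrix means "host i may receive at least j+1 of the requested instances". The anti-affinity rule above can be sketched per host with a hypothetical helper (not part of the source):

```python
def anti_affinity_row(host_in_group, num_instances):
    """Constraint row for one host under the 'anti-affinity' policy."""
    if host_in_group:
        # host already runs a member of the group: nothing may land here
        return [False] * num_instances
    # a fresh host may take at most one instance of the group
    return [True] + [False] * (num_instances - 1)

# 'host2' already hosts a group member; three instances are requested.
group_hosts = {'host2'}
matrix = [anti_affinity_row(h in group_hosts, 3)
          for h in ('host1', 'host2', 'host3')]
```

The affinity constraint is the mirror image: when no group host exists yet, only the full placement on a single host is feasible, hence the `[False] * (num_instances - 1) + [True]` rows.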

@@ -0,0 +1,30 @@
# Copyright (c) 2014 Cisco Systems Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova.scheduler.filters import trusted_filter
from nova.scheduler.solvers import constraints
class TrustedHostsConstraint(constraints.BaseFilterConstraint):
"""Constraint to add support for Trusted Computing Pools.
Allows a host to be selected by scheduler only when the integrity (trust)
of that host matches the trust requested in the `extra_specs' for the
flavor. The `extra_specs' will contain a key/value pair where the
key is `trust'. The value of this pair (`trusted'/`untrusted') must
match the integrity of that host (obtained from the Attestation
service) before the task can be scheduled on that host.
"""
host_filter_cls = trusted_filter.TrustedFilter

@@ -0,0 +1,22 @@
# Copyright (c) 2014 Cisco Systems Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova.scheduler.filters import type_filter
from nova.scheduler.solvers import constraints
class TypeAffinityConstraint(constraints.BaseFilterConstraint):
    """TypeAffinityConstraint doesn't allow more than one VM type per host."""
host_filter_cls = type_filter.TypeAffinityFilter

@@ -0,0 +1,79 @@
# Copyright (c) 2014 Cisco Systems Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo.config import cfg
from nova.openstack.common.gettextutils import _
from nova.openstack.common import log as logging
from nova.scheduler.solvers import constraints
CONF = cfg.CONF
CONF.import_opt('cpu_allocation_ratio', 'nova.scheduler.filters.core_filter')
LOG = logging.getLogger(__name__)
class VcpuConstraint(constraints.BaseLinearConstraint):
"""Constraint of the total vcpu demand acceptable on each host."""
def _get_cpu_allocation_ratio(self, host_state, filter_properties):
return CONF.cpu_allocation_ratio
def get_constraint_matrix(self, hosts, filter_properties):
num_hosts = len(hosts)
num_instances = filter_properties.get('num_instances')
constraint_matrix = [[True for j in xrange(num_instances)]
for i in xrange(num_hosts)]
# get requested vcpus
instance_type = filter_properties.get('instance_type') or {}
if not instance_type:
return constraint_matrix
else:
instance_vcpus = instance_type['vcpus']
if instance_vcpus <= 0:
LOG.warn(_("VcpuConstraint is skipped because requested "
"instance vCPU number is 0 or invalid."))
return constraint_matrix
for i in xrange(num_hosts):
cpu_allocation_ratio = self._get_cpu_allocation_ratio(
hosts[i], filter_properties)
# get available vcpus
if not hosts[i].vcpus_total:
LOG.warn(_("vCPUs of %(host)s not set; assuming CPU "
"collection broken."), {'host': hosts[i]})
continue
else:
vcpus_total = hosts[i].vcpus_total * cpu_allocation_ratio
usable_vcpus = vcpus_total - hosts[i].vcpus_used
acceptable_num_instances = int(usable_vcpus / instance_vcpus)
if acceptable_num_instances < num_instances:
                unacceptable_num = num_instances - acceptable_num_instances
                constraint_matrix[i] = (
                        [True for j in xrange(acceptable_num_instances)] +
                        [False for j in xrange(unacceptable_num)])
LOG.debug(_("%(host)s can accept %(num)s requested instances "
"according to VcpuConstraint."),
{'host': hosts[i],
'num': acceptable_num_instances})
if vcpus_total > 0:
hosts[i].limits['vcpu'] = vcpus_total
return constraint_matrix

@@ -23,33 +23,50 @@ from nova import loadables
class BaseCost(object):
    """Base class for cost."""

    precedence = 0

    def cost_multiplier(self):
        """How weighted this cost should be.

        Override this method in a subclass, so that the returned value is
        read from a configuration option to permit operators specify a
        multiplier for the cost.
        """
        return 1.0

    def get_components(self, variables, hosts, filter_properties):
        """Return the components of the cost."""
        raise NotImplementedError()


class BaseLinearCost(BaseCost):
    """Base class of LP cost."""

    def __init__(self):
        self.variables = []
        self.coefficients = []

    def _generate_components(self, variables, hosts, filter_properties):
        # override in a sub class.
        pass

    def get_components(self, variables, hosts, filter_properties):
        # deprecated currently, reserve for future use
        self._generate_components(variables, hosts, filter_properties)
        return (self.variables, self.coefficients)

    def get_extended_cost_matrix(self, hosts, filter_properties):
        raise NotImplementedError()

    def get_init_costs(self, hosts, filter_properties):
        x_cost_mat = self.get_extended_cost_matrix(hosts, filter_properties)
        init_costs = [row[0] for row in x_cost_mat]
        return init_costs

    def get_cost_matrix(self, hosts, filter_properties):
        x_cost_mat = self.get_extended_cost_matrix(hosts, filter_properties)
        cost_matrix = [row[1:] for row in x_cost_mat]
        return cost_matrix

class CostHandler(loadables.BaseLoader):
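BaseLinearCost derives both of its outputs from a single extended matrix whose extra leading column holds each host's cost before any new instance is placed; the slicing is simply column 0 versus the rest. A small illustration with hypothetical values:

```python
# Hypothetical extended cost matrix for two hosts and two instances:
# column 0 is the host's initial cost, column j its cost after j instances.
x_cost_mat = [
    [0.0, 0.2, 0.4],   # host A
    [0.1, 0.3, 0.5],   # host B
]
init_costs = [row[0] for row in x_cost_mat]    # as in get_init_costs()
cost_matrix = [row[1:] for row in x_cost_mat]  # as in get_cost_matrix()
```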

@@ -1,57 +0,0 @@
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Host network cost."""
from nova.openstack.common import log as logging
from nova.scheduler.solvers import costs as solvercosts
from oslo.config import cfg
LOG = logging.getLogger(__name__)
CONF = cfg.CONF
class HostNetworkAffinityCost(solvercosts.BaseCost):
"""The cost is evaluated by the existence of
requested networks in hosts.
"""
def get_cost_matrix(self, hosts, instance_uuids, request_spec,
filter_properties):
"""Calculate the cost matrix."""
num_hosts = len(hosts)
if instance_uuids:
num_instances = len(instance_uuids)
else:
num_instances = request_spec.get('num_instances', 1)
costs = [[0 for j in range(num_instances)]
for i in range(num_hosts)]
requested_networks = filter_properties.get('requested_networks', None)
if requested_networks is None:
return costs
for i in range(num_hosts):
host_cost = 0
for network_id, requested_ip, port_id in requested_networks:
if network_id:
if network_id in hosts[i].networks:
host_cost -= 1
costs[i] = [host_cost for j in range(num_instances)]
return costs

@@ -0,0 +1,105 @@
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Metrics Cost. Calculate hosts' costs by their metrics.
This can compute the costs based on various metrics of the compute node
hosts. The metrics to be computed and their weighting ratios are specified
in the configuration file as follows:
[metrics]
weight_setting = name1=1.0, name2=-1.0
The final weight would be name1.value * 1.0 + name2.value * -1.0.
"""
from oslo.config import cfg
from nova.scheduler.solvers import costs as solver_costs
from nova.scheduler.solvers.costs import utils as cost_utils
from nova.scheduler import utils
metrics_cost_opts = [
cfg.FloatOpt('metrics_cost_multiplier',
default=1.0,
help='Multiplier used for metrics costs.'),
]
metrics_weight_opts = [
cfg.FloatOpt('weight_multiplier_of_unavailable',
default=(-1.0),
help='If any one of the metrics set by weight_setting '
'is unavailable, the metric weight of the host '
'will be set to (minw + (maxw - minw) * m), '
'where maxw and minw are the max and min weights '
'among all hosts, and m is the multiplier.'),
]
CONF = cfg.CONF
CONF.register_opts(metrics_cost_opts, group='solver_scheduler')
CONF.register_opts(metrics_weight_opts, group='metrics')
CONF.import_opt('weight_setting', 'nova.scheduler.weights.metrics',
group='metrics')
class MetricsCost(solver_costs.BaseLinearCost):
def __init__(self):
self._parse_setting()
def _parse_setting(self):
self.setting = utils.parse_options(CONF.metrics.weight_setting,
sep='=',
converter=float,
name="metrics.weight_setting")
def cost_multiplier(self):
return CONF.solver_scheduler.metrics_cost_multiplier
def get_extended_cost_matrix(self, hosts, filter_properties):
num_hosts = len(hosts)
num_instances = filter_properties.get('num_instances')
host_weights = []
numeric_values = []
for host in hosts:
metric_sum = 0.0
for (name, ratio) in self.setting:
metric = host.metrics.get(name, None)
if metric:
metric_sum += metric.value * ratio
else:
metric_sum = None
break
host_weights.append(metric_sum)
if metric_sum:
numeric_values.append(metric_sum)
if numeric_values:
minval = min(numeric_values)
maxval = max(numeric_values)
weight_of_unavailable = (minval + (maxval - minval) *
CONF.metrics.weight_multiplier_of_unavailable)
for i in range(num_hosts):
if host_weights[i] is None:
host_weights[i] = weight_of_unavailable
else:
host_weights = [0 for i in xrange(num_hosts)]
extended_cost_matrix = [[(-host_weights[i])
for j in xrange(num_instances + 1)]
for i in xrange(num_hosts)]
extended_cost_matrix = cost_utils.normalize_cost_matrix(
extended_cost_matrix)
return extended_cost_matrix
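Minus the config plumbing, the weighting logic above can be sketched as follows; metric names and values are hypothetical, and missing metrics fall back to `minw + (maxw - minw) * m` as the option help text describes:

```python
def host_weights(hosts_metrics, setting, m):
    """Weighted metric sums; hosts lacking a metric get the fallback."""
    sums = []
    for metrics in hosts_metrics:
        try:
            sums.append(sum(metrics[name] * ratio for name, ratio in setting))
        except KeyError:          # a configured metric is unavailable
            sums.append(None)
    known = [s for s in sums if s is not None]
    fallback = min(known) + (max(known) - min(known)) * m if known else 0.0
    return [fallback if s is None else s for s in sums]

# weight_setting = cpu=1.0, io=-1.0 ; the third host reports no 'io' metric
weights = host_weights(
    [{'cpu': 4.0, 'io': 2.0}, {'cpu': 8.0, 'io': 1.0}, {'cpu': 6.0}],
    [('cpu', 1.0), ('io', -1.0)], m=-1.0)
```

With `m = -1.0`, a host missing a metric lands at `2.0 + (7.0 - 2.0) * -1.0 = -3.0`, i.e. below the worst fully-reporting host.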

@@ -1,57 +0,0 @@
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Rack network cost."""
from nova.openstack.common import log as logging
from nova.scheduler.solvers import costs as solvercosts
from oslo.config import cfg
LOG = logging.getLogger(__name__)
CONF = cfg.CONF
class RackNetworkAffinityCost(solvercosts.BaseCost):
"""The cost is evaluated by the existence of
requested networks in racks.
"""
def get_cost_matrix(self, hosts, instance_uuids, request_spec,
filter_properties):
"""Calculate the cost matrix."""
num_hosts = len(hosts)
if instance_uuids:
num_instances = len(instance_uuids)
else:
num_instances = request_spec.get('num_instances', 1)
costs = [[0 for j in range(num_instances)]
for i in range(num_hosts)]
requested_networks = filter_properties.get('requested_networks', None)
if requested_networks is None:
return costs
for i in range(num_hosts):
host_cost = 0
for network_id, requested_ip, port_id in requested_networks:
if network_id:
if network_id in sum(hosts[i].rack_networks.values(), []):
host_cost -= 1
costs[i] = [host_cost for j in range(num_instances)]
return costs

@@ -13,33 +13,65 @@
# License for the specific language governing permissions and limitations
# under the License.
"""
RAM Cost. Calculate instance placement costs by hosts' RAM usage.

The default is to spread instances across all hosts evenly. If you prefer
stacking, you can set the 'ram_cost_multiplier' option to a positive
number and the cost has the opposite effect of the default.
"""

from oslo.config import cfg

from nova.openstack.common.gettextutils import _
from nova.openstack.common import log as logging
from nova.scheduler.solvers import costs as solver_costs
from nova.scheduler.solvers.costs import utils

ram_cost_opts = [
        cfg.FloatOpt('ram_cost_multiplier',
                     default=1.0,
                     help='Multiplier used for ram costs. Negative '
                          'numbers mean to stack vs spread.'),
]

CONF = cfg.CONF
CONF.register_opts(ram_cost_opts, group='solver_scheduler')

LOG = logging.getLogger(__name__)


class RamCost(solver_costs.BaseLinearCost):

    def cost_multiplier(self):
        return CONF.solver_scheduler.ram_cost_multiplier

    def get_extended_cost_matrix(self, hosts, filter_properties):
        num_hosts = len(hosts)
        num_instances = filter_properties.get('num_instances')

        instance_type = filter_properties.get('instance_type') or {}
        requested_ram = instance_type.get('memory_mb', 0)
        if 'memory_mb' not in instance_type:
            LOG.warn(_("No information about requested instances' RAM size "
                       "was found, default value (0) is used."))

        extended_cost_matrix = [[0 for j in xrange(num_instances + 1)]
                                for i in xrange(num_hosts)]
        if requested_ram == 0:
            extended_cost_matrix = [
                    [(-hosts[i].free_ram_mb)
                    for j in xrange(num_instances + 1)]
                    for i in xrange(num_hosts)]
        else:
            # we use int approximation here to avoid scaling problems after
            # normalization, in the case that the free ram in all hosts are
            # of very small values
            extended_cost_matrix = [
                    [-int(hosts[i].free_ram_mb / requested_ram) + j
                    for j in xrange(num_instances + 1)]
                    for i in xrange(num_hosts)]
        extended_cost_matrix = utils.normalize_cost_matrix(
                                                        extended_cost_matrix)
        return extended_cost_matrix

@@ -0,0 +1,43 @@
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Utility methods for scheduler solver costs."""
def normalize_cost_matrix(cost_matrix):
"""This is a special normalization method for cost matrix, it
preserves the linear relationships among current host states (first
column of the matrix) while scaling their maximum absolute value to 1.
Notice that by this method, the matrix is not scaled to a fixed range.
"""
normalized_matrix = cost_matrix
if not cost_matrix:
return normalized_matrix
first_column = [row[0] for row in cost_matrix]
maxval = max(first_column)
minval = min(first_column)
maxabs = max(abs(maxval), abs(minval))
if maxabs == 0:
return normalized_matrix
scale_factor = 1.0 / maxabs
for i in xrange(len(cost_matrix)):
for j in xrange(len(cost_matrix[i])):
normalized_matrix[i][j] = cost_matrix[i][j] * scale_factor
return normalized_matrix
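For example, the scaling is driven only by the first column's maximum absolute value, so ratios within that column are preserved but the result is not confined to a fixed range like [0, 1]:

```python
# First column is [2.0, -4.0]; its max absolute value (4.0) sets the scale.
matrix = [[2.0, 3.0],
          [-4.0, 8.0]]
scale = 1.0 / max(abs(row[0]) for row in matrix)
normalized = [[v * scale for v in row] for row in matrix]
# The first column becomes [0.5, -1.0]; other entries may exceed 1.
```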

@@ -1,94 +0,0 @@
# Copyright (c) 2014 Cisco Systems Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Volume affinity cost.
This pluggable cost provides a way to schedule a VM on a host that has
a specified volume. In the cost matrix used for the linear programming
optimization problem, the entries for the host that contains the
specified volume is given as 0, and 1 for other hosts. So all the other
hosts have equal cost and are considered equal. Currently this solution
allows you to provide only one volume_id as a hint, so this solution
works best for scheduling a single VM. Another limitation is that the
user needs to have an admin context to obtain the host information from
the cinderclient. Without the knowledge of the host containing the volume
all hosts will have the same cost of 1.
"""
from cinderclient import exceptions as client_exceptions
from nova.openstack.common.gettextutils import _
from nova.openstack.common import log as logging
from nova.scheduler import driver as scheduler_driver
from nova.scheduler.solvers import costs as solvercosts
import nova.volume.cinder as volumecinder
LOG = logging.getLogger(__name__)
class VolumeAffinityCost(solvercosts.BaseCost):
"""The cost is 0 for same-as-volume host and 1 otherwise."""
hint_name = 'affinity_volume_id'
def get_cost_matrix(self, hosts, instance_uuids, request_spec,
filter_properties):
num_hosts = len(hosts)
if instance_uuids:
num_instances = len(instance_uuids)
else:
num_instances = request_spec.get('num_instances', 1)
context = filter_properties.get('context')
scheduler_hints = filter_properties.get('scheduler_hints', None)
cost_matrix = [[1.0 for j in range(num_instances)]
for i in range(num_hosts)]
if scheduler_hints is not None:
volume_id = scheduler_hints.get(self.hint_name, None)
LOG.debug(_("volume id: %s") % volume_id)
if volume_id:
volume = None
volume_host = None
try:
volume = volumecinder.cinderclient(context).volumes.get(
volume_id)
if volume:
volume = volumecinder.cinderadminclient().volumes.get(
volume_id)
volume_host = getattr(volume, 'os-vol-host-attr:host',
None)
LOG.debug(_("volume host: %s") % volume_host)
except client_exceptions.NotFound:
LOG.warning(
_("volume with provided id ('%s') was not found")
% volume_id)
except client_exceptions.Unauthorized:
LOG.warning(_("Failed to retrieve volume %s: unauthorized")
% volume_id)
except:
LOG.warning(_("Failed to retrieve volume due to an error"))
if volume_host:
for i in range(num_hosts):
host_state = hosts[i]
if host_state.host == volume_host:
cost_matrix[i] = [0.0
for j in range(num_instances)]
LOG.debug(_("this host: %(h1)s volume host: %(h2)s") %
{"h1": host_state.host, "h2": volume_host})
else:
LOG.warning(_("Cannot find volume host."))
return cost_matrix

@@ -0,0 +1,138 @@
# Copyright (c) 2015 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import operator
from nova.scheduler import solvers as scheduler_solver
class FastSolver(scheduler_solver.BaseHostSolver):
def __init__(self):
super(FastSolver, self).__init__()
self.cost_classes = self._get_cost_classes()
self.constraint_classes = self._get_constraint_classes()
def _get_cost_matrix(self, hosts, filter_properties):
num_hosts = len(hosts)
num_instances = filter_properties.get('num_instances', 1)
solver_cache = filter_properties['solver_cache']
# initialize cost matrix
cost_matrix = [[0 for j in xrange(num_instances)]
for i in xrange(num_hosts)]
solver_cache['cost_matrix'] = cost_matrix
cost_objects = [cost() for cost in self.cost_classes]
cost_objects.sort(key=lambda cost: cost.precedence)
precedence_level = 0
for cost_object in cost_objects:
if cost_object.precedence > precedence_level:
# update cost matrix in the solver cache
solver_cache['cost_matrix'] = cost_matrix
precedence_level = cost_object.precedence
cost_multiplier = cost_object.cost_multiplier()
this_cost_mat = cost_object.get_cost_matrix(hosts,
filter_properties)
if not this_cost_mat:
continue
cost_matrix = [[cost_matrix[i][j] +
this_cost_mat[i][j] * cost_multiplier
for j in xrange(num_instances)]
for i in xrange(num_hosts)]
# update cost matrix in the solver cache
solver_cache['cost_matrix'] = cost_matrix
return cost_matrix
def _get_constraint_matrix(self, hosts, filter_properties):
num_hosts = len(hosts)
num_instances = filter_properties.get('num_instances', 1)
solver_cache = filter_properties['solver_cache']
# initialize constraint_matrix
constraint_matrix = [[True for j in xrange(num_instances)]
for i in xrange(num_hosts)]
solver_cache['constraint_matrix'] = constraint_matrix
constraint_objects = [cons() for cons in self.constraint_classes]
constraint_objects.sort(key=lambda cons: cons.precedence)
precedence_level = 0
for constraint_object in constraint_objects:
if constraint_object.precedence > precedence_level:
# update constraint matrix in the solver cache
solver_cache['constraint_matrix'] = constraint_matrix
precedence_level = constraint_object.precedence
this_cons_mat = constraint_object.get_constraint_matrix(hosts,
filter_properties)
if not this_cons_mat:
continue
constraint_matrix = [[constraint_matrix[i][j] &
this_cons_mat[i][j] for j in xrange(num_instances)]
for i in xrange(num_hosts)]
# update constraint matrix in the solver cache
solver_cache['constraint_matrix'] = constraint_matrix
return constraint_matrix
def solve(self, hosts, filter_properties):
host_instance_combinations = []
num_instances = filter_properties['num_instances']
num_hosts = len(hosts)
instance_uuids = filter_properties.get('instance_uuids') or [
'(unknown_uuid)' + str(i) for i in xrange(num_instances)]
filter_properties.setdefault('solver_cache', {})
filter_properties['solver_cache'].update(
{'cost_matrix': [],
'constraint_matrix': []})
cost_matrix = self._get_cost_matrix(hosts, filter_properties)
constraint_matrix = self._get_constraint_matrix(hosts,
filter_properties)
placement_cost_tuples = []
for i in xrange(num_hosts):
for j in xrange(num_instances):
if constraint_matrix[i][j]:
host_idx = i
inst_num = j + 1
cost_val = cost_matrix[i][j]
placement_cost_tuples.append(
(host_idx, inst_num, cost_val))
sorted_placement_costs = sorted(placement_cost_tuples,
key=operator.itemgetter(2))
host_inst_alloc = [0 for i in xrange(num_hosts)]
allocated_inst_num = 0
for (host_idx, inst_num, cost_val) in sorted_placement_costs:
delta = inst_num - host_inst_alloc[host_idx]
if (delta <= 0) or (allocated_inst_num + delta > num_instances):
continue
host_inst_alloc[host_idx] += delta
allocated_inst_num += delta
if allocated_inst_num == num_instances:
break
instances_iter = iter(instance_uuids)
for i in xrange(len(host_inst_alloc)):
num = host_inst_alloc[i]
for n in xrange(num):
host_instance_combinations.append(
(hosts[i], instances_iter.next()))
return host_instance_combinations
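The allocation step in solve() is a greedy pass over the feasible (host index, instance count, cost) tuples sorted by cost: each tuple is taken only for the extra instances it adds beyond what its host already holds, until the request is filled. A condensed sketch of just that loop, with hypothetical inputs:

```python
def greedy_allocate(sorted_placements, num_hosts, num_instances):
    """sorted_placements: (host_idx, inst_num, cost) tuples, cost ascending.
    Returns the number of instances allocated to each host."""
    alloc = [0] * num_hosts
    total = 0
    for host_idx, inst_num, _cost in sorted_placements:
        delta = inst_num - alloc[host_idx]   # extra instances this step adds
        if delta <= 0 or total + delta > num_instances:
            continue
        alloc[host_idx] += delta
        total += delta
        if total == num_instances:
            break
    return alloc

# Host 0 is cheapest for up to two instances, host 1 takes the third.
alloc = greedy_allocate(
    [(0, 1, 0.1), (0, 2, 0.3), (1, 1, 0.4), (1, 2, 0.9)], 2, 3)
```

Because only cheaper-first deltas are taken, the loop never revisits a host to shrink an earlier allocation; this trades LP optimality for speed, which is the point of FastSolver.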

@@ -1,224 +0,0 @@
# Copyright (c) 2014 Cisco Systems Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
A reference solver implementation that models the scheduling problem as a
Linear Programming (LP) problem using the PULP modeling framework. This
implementation includes disk and memory constraints, and uses the free ram as
a cost metric to maximize or minimize for the LP problem.
"""
from oslo.config import cfg
from pulp import constants
from pulp import pulp
from nova.openstack.common.gettextutils import _
from nova.openstack.common import log as logging
from nova.scheduler import solvers as novasolvers
LOG = logging.getLogger(__name__)
CONF = cfg.CONF
CONF.import_opt('disk_allocation_ratio', 'nova.scheduler.filters.disk_filter')
CONF.import_opt('ram_allocation_ratio', 'nova.scheduler.filters.ram_filter')
CONF.import_opt('ram_weight_multiplier', 'nova.scheduler.weights.ram')
class HostsPulpSolver(novasolvers.BaseHostSolver):
"""A LP based constraint solver implemented using PULP modeler."""
def host_solve(self, hosts, instance_uuids, request_spec,
filter_properties):
"""This method returns a list of tuples - (host, instance_uuid)
that are returned by the solver. Here the assumption is that
all instance_uuids have the same requirement as specified in
filter_properties
"""
host_instance_tuples_list = []
if instance_uuids:
num_instances = len(instance_uuids)
else:
num_instances = request_spec.get('num_instances', 1)
instance_uuids = ['unset_uuid%s' % i
for i in xrange(num_instances)]
num_hosts = len(hosts)
host_ids = ['Host%s' % i for i in range(num_hosts)]
LOG.debug(_("All Hosts: %s") % [h.host for h in hosts])
for host in hosts:
LOG.debug(_("Host state: %s") % host)
host_id_dict = dict(zip(host_ids, hosts))
instances = ['Instance%s' % i for i in range(num_instances)]
instance_id_dict = dict(zip(instances, instance_uuids))
# supply is a dictionary for the number of units of
# resource for each Host.
# Currently using only the disk_mb and memory_mb
# as the two resources to satisfy. Need to eventually be able to
# plug-in different resources. An example supply dictionary:
# supply = {"Host1": [1000, 1000],
# "Host2": [4000, 1000]}
supply = dict((host_ids[i],
[self._get_usable_disk_mb(hosts[i]),
self._get_usable_memory_mb(hosts[i]), ])
for i in range(len(host_ids)))
number_of_resource_types_per_host = 2
required_disk_mb = self._get_required_disk_mb(filter_properties)
required_memory_mb = self._get_required_memory_mb(filter_properties)
# demand is a dictionary for the number of
# units of resource required for each Instance.
# An example demand dictionary:
# demand = {"Instance0":[200, 300],
# "Instance1":[900, 100],
# "Instance2":[1800, 200],
# "Instance3":[200, 300],
# "Instance4":[700, 800], }
# However for the current scenario, all instances to be scheduled
# per request have the same requirements. Need to eventually
# to support requests to specify different instance requirements
demand = dict((instances[i],
[required_disk_mb, required_memory_mb, ])
for i in range(num_instances))
# Creates a list of costs of each Host-Instance assignment
# Currently just like the nova.scheduler.weights.ram.RAMWeigher,
# using host_state.free_ram_mb * ram_weight_multiplier
# as the cost. A negative ram_weight_multiplier means to stack,
# vs spread.
# An example costs list:
# costs = [ # Instances
# # 1 2 3 4 5
# [2, 4, 5, 2, 1], # A Hosts
# [3, 1, 3, 2, 3] # B
# ]
# Multiplying -1 as we want to use the same behavior of
# ram_weight_multiplier as used by ram weigher.
costs = [[-1 * host.free_ram_mb *
CONF.ram_weight_multiplier
for i in range(num_instances)]
for host in hosts]
costs = pulp.makeDict([host_ids, instances], costs, 0)
# The PULP LP problem variable used to add all the problem data
prob = pulp.LpProblem("Host Instance Scheduler Problem",
constants.LpMinimize)
all_host_instance_tuples = [(w, b)
for w in host_ids
for b in instances]
vars = pulp.LpVariable.dicts("IA", (host_ids, instances),
0, 1, constants.LpInteger)
# The objective function is added to 'prob' first
prob += (pulp.lpSum([vars[w][b] * costs[w][b]
for (w, b) in all_host_instance_tuples]),
"Sum_of_Host_Instance_Scheduling_Costs")
# The supply maximum constraints are added to
# prob for each supply node (Host)
for w in host_ids:
for i in range(number_of_resource_types_per_host):
prob += (pulp.lpSum([vars[w][b] * demand[b][i]
for b in instances])
<= supply[w][i],
"Sum_of_Resource_%s" % i + "_provided_by_Host_%s" % w)
# The number of Hosts required per Instance, in this case it is only 1
for b in instances:
prob += (pulp.lpSum([vars[w][b] for w in host_ids])
== 1, "Sum_of_Instance_Assignment%s" % b)
# The demand minimum constraints are added to prob for
# each demand node (Instance)
for b in instances:
for j in range(number_of_resource_types_per_host):
prob += (pulp.lpSum([vars[w][b] * demand[b][j]
for w in host_ids])
>= demand[b][j],
"Sum_of_Resource_%s" % j + "_required_by_Instance_%s" % b)
# The problem is solved using PuLP's choice of Solver
prob.solve()
if pulp.LpStatus[prob.status] == 'Optimal':
for v in prob.variables():
if v.name.startswith('IA'):
(host_id, instance_id) = v.name.lstrip('IA').lstrip(
'_').split('_')
if v.varValue == 1.0:
host_instance_tuples_list.append(
(host_id_dict[host_id],
instance_id_dict[instance_id]))
return host_instance_tuples_list
def _get_usable_disk_mb(self, host_state):
"""This method returns the usable disk in mb for the given host.
Takes into account the disk allocation ratio.
(virtual disk to physical disk allocation ratio).
"""
free_disk_mb = host_state.free_disk_mb
total_usable_disk_mb = host_state.total_usable_disk_gb * 1024
disk_allocation_ratio = CONF.disk_allocation_ratio
disk_mb_limit = total_usable_disk_mb * disk_allocation_ratio
used_disk_mb = total_usable_disk_mb - free_disk_mb
usable_disk_mb = disk_mb_limit - used_disk_mb
return usable_disk_mb
def _get_required_disk_mb(self, filter_properties):
"""This method returns the required disk in mb from
the given filter_properties dictionary object.
"""
requested_disk_mb = 0
instance_type = filter_properties.get('instance_type')
if instance_type is not None:
requested_disk_mb = 1024 * (instance_type.get('root_gb', 0) +
instance_type.get('ephemeral_gb', 0))
return requested_disk_mb
def _get_usable_memory_mb(self, host_state):
"""This method returns the usable memory in mb for the given host.
Takes into account the ram allocation ratio.
(Virtual ram to physical ram allocation ratio).
"""
free_ram_mb = host_state.free_ram_mb
total_usable_ram_mb = host_state.total_usable_ram_mb
ram_allocation_ratio = CONF.ram_allocation_ratio
memory_mb_limit = total_usable_ram_mb * ram_allocation_ratio
used_ram_mb = total_usable_ram_mb - free_ram_mb
usable_ram_mb = memory_mb_limit - used_ram_mb
return usable_ram_mb
def _get_required_memory_mb(self, filter_properties):
"""This method returns the required memory in mb from
the given filter_properties dictionary object
"""
required_ram_mb = 0
instance_type = filter_properties.get('instance_type')
if instance_type is not None:
required_ram_mb = instance_type.get('memory_mb', 0)
return required_ram_mb
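For intuition, the supply/demand/cost formulation used by `host_solve` can be reproduced in a tiny stdlib-only sketch that brute-forces the same assignment problem instead of calling an LP solver: binary host/instance choices, per-host capacity constraints on disk and RAM, and a total cost to minimize. The capacities, demands, and costs below are made-up illustrative numbers, not values from the driver.

```python
from itertools import product

# Made-up capacities: each host supplies [disk_mb, memory_mb].
supply = {"Host0": [2000, 1024], "Host1": [4000, 2048]}
# Every instance has the same [disk_mb, memory_mb] demand, as in host_solve().
demand = [1500, 900]
# Cost of placing any instance on a host: -free_ram_mb (spread/stack behavior).
cost = {"Host0": -1024, "Host1": -2048}

instances = ["Instance0", "Instance1"]
hosts = list(supply)

def solve():
    best, best_cost = None, None
    # Enumerate every host choice per instance (each instance lands on
    # exactly one host, mirroring the Sum_of_Instance_Assignment constraint).
    for choice in product(hosts, repeat=len(instances)):
        # Check per-host capacity for each of the two resource types.
        feasible = all(
            sum(demand[r] for c in choice if c == h) <= supply[h][r]
            for h in hosts for r in range(2))
        if not feasible:
            continue
        total = sum(cost[c] for c in choice)
        if best_cost is None or total < best_cost:
            best, best_cost = choice, total
    return best, best_cost

print(solve())
```

With these numbers both instances fit on Host1, which has the most free RAM and hence the lowest (most negative) cost; a real deployment would rely on the CBC solver rather than enumeration.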


@@ -1,91 +0,0 @@
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Linear constraints for scheduler linear constraint solvers
"""
from nova.compute import api as compute
from nova import loadables
class BaseLinearConstraint(object):
"""Base class for linear constraint."""
# The linear constraint should be formed as:
# coeff_vector * var_vector' <operator> <constants>
# where <operator> is ==, >, >=, <, <=, !=, etc.
# For convenience, the <constants> can be merged into left-hand-side,
# thus the right-hand-side is always 0.
def __init__(self, variables, hosts, instance_uuids, request_spec,
filter_properties):
pass
def get_coefficient_vectors(self, variables, hosts, instance_uuids,
request_spec, filter_properties):
"""Retruns a list of coefficient vectors."""
raise NotImplementedError()
def get_variable_vectors(self, variables, hosts, instance_uuids,
request_spec, filter_properties):
"""Returns a list of variable vectors."""
raise NotImplementedError()
def get_operations(self, variables, hosts, instance_uuids, request_spec,
filter_properties):
"""Returns a list of operations."""
raise NotImplementedError()
class AffinityConstraint(BaseLinearConstraint):
def __init__(self, variables, hosts, instance_uuids, request_spec,
filter_properties):
self.compute_api = compute.API()
[self.num_hosts, self.num_instances] = self._get_host_instance_nums(
hosts, instance_uuids, request_spec)
def _get_host_instance_nums(self, hosts, instance_uuids, request_spec):
"""This method calculates number of hosts and instances."""
num_hosts = len(hosts)
if instance_uuids:
num_instances = len(instance_uuids)
else:
num_instances = request_spec.get('num_instances', 1)
return [num_hosts, num_instances]
class ResourceAllocationConstraint(BaseLinearConstraint):
"""Base class of resource allocation constraints."""
def __init__(self, variables, hosts, instance_uuids, request_spec,
filter_properties):
[self.num_hosts, self.num_instances] = self._get_host_instance_nums(
hosts, instance_uuids, request_spec)
def _get_host_instance_nums(self, hosts, instance_uuids, request_spec):
"""This method calculates number of hosts and instances."""
num_hosts = len(hosts)
if instance_uuids:
num_instances = len(instance_uuids)
else:
num_instances = request_spec.get('num_instances', 1)
return [num_hosts, num_instances]
class LinearConstraintHandler(loadables.BaseLoader):
def __init__(self):
super(LinearConstraintHandler, self).__init__(BaseLinearConstraint)
def all_linear_constraints():
"""Return a list of lineear constraint classes found in this directory.
This method is used as the default for available linear constraints for
scheduler and returns a list of all linearconstraint classes available.
"""
return LinearConstraintHandler().get_all_classes()
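The `coeff_vector * var_vector' <operator> 0` convention these classes share can be checked numerically. Below is a hedged, stdlib-only sketch of how a solver might evaluate one constraint's rows against a candidate assignment; the `satisfies` helper and the numbers are illustrative, not part of the driver, and variables are plain `(host, instance)` tuples for simplicity.

```python
def satisfies(coeff_vectors, var_vectors, operations, assignment):
    """Evaluate each row of coeff * vars against its operation.

    assignment maps (host, instance) variable keys to 0/1 values.
    Constants merged into the left-hand side appear as literal ints
    in the variable vector (typically a trailing 1).
    """
    for coeffs, row_vars, op in zip(coeff_vectors, var_vectors, operations):
        lhs = sum(c * (v if isinstance(v, int) else assignment[v])
                  for c, v in zip(coeffs, row_vars))
        if not op(lhs):
            return False
    return True

# One host, two instances; an ActiveHostConstraint-style row has all-zero
# coefficients for an active host, so any assignment passes (0 == 0).
variables = [[("h0", 0), ("h0", 1)]]
coeffs = [[0, 0]]
ops = [lambda x: x == 0]
assignment = {("h0", 0): 1, ("h0", 1): 0}
print(satisfies(coeffs, variables, ops, assignment))
```

Swapping in an all-ones row (the inactive-host case) makes the left-hand side 1 for any non-empty assignment, so the `== 0` operation fails and the host is excluded.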


@@ -1,85 +0,0 @@
# Copyright (c) 2014 Cisco Systems Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo.config import cfg
from nova.openstack.common.gettextutils import _
from nova.openstack.common import log as logging
from nova.scheduler.solvers import linearconstraints
from nova import servicegroup
CONF = cfg.CONF
LOG = logging.getLogger(__name__)
class ActiveHostConstraint(linearconstraints.BaseLinearConstraint):
"""Constraint that only allows active hosts to be selected."""
# The linear constraint should be formed as:
# coeff_matrix * var_matrix' (operator) (constants)
# where (operator) is ==, >, >=, <, <=, !=, etc.
# For convenience, the (constants) is merged into left-hand-side,
# thus the right-hand-side is 0.
def __init__(self, variables, hosts, instance_uuids, request_spec,
filter_properties):
self.servicegroup_api = servicegroup.API()
[self.num_hosts, self.num_instances] = self._get_host_instance_nums(
hosts, instance_uuids, request_spec)
def _get_host_instance_nums(self, hosts, instance_uuids, request_spec):
"""This method calculates number of hosts and instances."""
num_hosts = len(hosts)
if instance_uuids:
num_instances = len(instance_uuids)
else:
num_instances = request_spec.get('num_instances', 1)
return [num_hosts, num_instances]
def get_coefficient_vectors(self, variables, hosts, instance_uuids,
request_spec, filter_properties):
"""Calculate the coeffivient vectors."""
# Coefficients are 0 for active hosts and 1 otherwise
coefficient_matrix = []
for host in hosts:
service = host.service
if service['disabled'] or not self.servicegroup_api.service_is_up(
service):
coefficient_matrix.append([1 for j in range(
self.num_instances)])
LOG.debug(_("%s is not active") % host.host)
else:
coefficient_matrix.append([0 for j in range(
self.num_instances)])
LOG.debug(_("%s is ok") % host.host)
return coefficient_matrix
def get_variable_vectors(self, variables, hosts, instance_uuids,
request_spec, filter_properties):
"""Reorganize the variables."""
# The variable_matrix[i,j] denotes the relationship between
# host[i] and instance[j].
variable_matrix = []
variable_matrix = [[variables[i][j] for j in range(
self.num_instances)] for i in range(self.num_hosts)]
return variable_matrix
def get_operations(self, variables, hosts, instance_uuids, request_spec,
filter_properties):
"""Set operations for each constraint function."""
# Operations are '=='.
operations = [(lambda x: x == 0) for i in range(self.num_hosts)]
return operations


@@ -1,83 +0,0 @@
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova.openstack.common.gettextutils import _
from nova.openstack.common import log as logging
from nova.scheduler.solvers import linearconstraints
from nova import servicegroup
LOG = logging.getLogger(__name__)
class AllHostsConstraint(linearconstraints.BaseLinearConstraint):
"""NoOp constraint. Passes all hosts."""
# The linear constraint should be formed as:
# coeff_vector * var_vector' <operator> <constants>
# where <operator> is ==, >, >=, <, <=, !=, etc.
def __init__(self, variables, hosts, instance_uuids, request_spec,
filter_properties):
self.servicegroup_api = servicegroup.API()
[self.num_hosts, self.num_instances] = self._get_host_instance_nums(
hosts, instance_uuids, request_spec)
self._check_variables_size(variables)
def _get_host_instance_nums(self, hosts, instance_uuids, request_spec):
"""This method calculates number of hosts and instances."""
num_hosts = len(hosts)
if instance_uuids:
num_instances = len(instance_uuids)
else:
num_instances = request_spec.get('num_instances', 1)
return [num_hosts, num_instances]
def _check_variables_size(self, variables):
"""This method checks the size of variable matirx."""
# Supposed to be a <num_hosts> by <num_instances> matrix.
if len(variables) != self.num_hosts:
raise ValueError(_('Variables row length should match '
'number of hosts.'))
for row in variables:
if len(row) != self.num_instances:
raise ValueError(_('Variables column length should '
'match number of instances.'))
return True
def get_coefficient_vectors(self, variables, hosts, instance_uuids,
request_spec, filter_properties):
"""Calculate the coeffivient vectors."""
# Coefficients are 0 for active hosts and 1 otherwise
coefficient_vectors = []
for host in hosts:
coefficient_vectors.append([0 for j in range(self.num_instances)])
return coefficient_vectors
def get_variable_vectors(self, variables, hosts, instance_uuids,
request_spec, filter_properties):
"""Reorganize the variables."""
# The variable_vectors[i][j] denotes the relationship between host[i]
# and instance[j].
variable_vectors = []
variable_vectors = [[variables[i][j] for j in
range(self.num_instances)] for i in range(self.num_hosts)]
return variable_vectors
def get_operations(self, variables, hosts, instance_uuids, request_spec,
filter_properties):
"""Set operations for each constraint function."""
# Operations are '=='.
operations = [(lambda x: x == 0) for i in range(self.num_hosts)]
return operations


@@ -1,95 +0,0 @@
# Copyright (c) 2014 Cisco Systems Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo.config import cfg
from nova.compute import api as compute
from nova import db
from nova.openstack.common import log as logging
from nova.scheduler.solvers import linearconstraints
LOG = logging.getLogger(__name__)
CONF = cfg.CONF
CONF.import_opt('default_availability_zone', 'nova.availability_zones')
class AvailabilityZoneConstraint(linearconstraints.BaseLinearConstraint):
"""To select only the hosts belonging to an availability zone.
"""
def __init__(self, variables, hosts, instance_uuids, request_spec,
filter_properties):
self.compute_api = compute.API()
[self.num_hosts, self.num_instances] = self._get_host_instance_nums(
hosts, instance_uuids, request_spec)
def _get_host_instance_nums(self, hosts, instance_uuids, request_spec):
"""This method calculates number of hosts and instances."""
num_hosts = len(hosts)
if instance_uuids:
num_instances = len(instance_uuids)
else:
num_instances = request_spec.get('num_instances', 1)
return [num_hosts, num_instances]
# The linear constraint should be formed as:
# coeff_matrix * var_matrix' (operator) constant_vector
# where (operator) is ==, >, >=, <, <=, !=, etc.
# For convenience, the constant_vector is merged into left-hand-side,
# thus the right-hand-side is always 0.
def get_coefficient_vectors(self, variables, hosts, instance_uuids,
request_spec, filter_properties):
# Coefficients are 0 for hosts in the availability zone, 1 for others
props = request_spec.get('instance_properties', {})
availability_zone = props.get('availability_zone')
coefficient_vectors = []
for host in hosts:
if availability_zone:
context = filter_properties['context'].elevated()
metadata = db.aggregate_metadata_get_by_host(context,
host.host, key='availability_zone')
if 'availability_zone' in metadata:
if availability_zone in metadata['availability_zone']:
coefficient_vectors.append([0 for j in range(
self.num_instances)])
else:
coefficient_vectors.append([1 for j in range(
self.num_instances)])
elif availability_zone == CONF.default_availability_zone:
coefficient_vectors.append([0 for j in range(
self.num_instances)])
else:
coefficient_vectors.append([1 for j in range(
self.num_instances)])
else:
coefficient_vectors.append([0 for j in range(
self.num_instances)])
return coefficient_vectors
def get_variable_vectors(self, variables, hosts, instance_uuids,
request_spec, filter_properties):
# The variable_vectors[i,j] denotes the relationship between
# host[i] and instance[j].
variable_vectors = [[variables[i][j] for j in range(
self.num_instances)] for i in range(self.num_hosts)]
return variable_vectors
def get_operations(self, variables, hosts, instance_uuids, request_spec,
filter_properties):
# Operations are '=='.
operations = [(lambda x: x == 0) for i in range(self.num_hosts)]
return operations
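The branching in `get_coefficient_vectors` above reduces to a small rule: a host gets a 0-row (stays selectable) when no zone is requested, when its aggregate metadata contains the requested zone, or when it has no zone metadata and the requested zone is the default; otherwise it gets a 1-row, which the `== 0` operation turns into an exclusion. A hedged stdlib-only sketch, with the `db.aggregate_metadata_get_by_host` lookup replaced by an in-memory list of per-host zone sets (`None` meaning no metadata), makes this concrete; the function name and zone values are illustrative.

```python
def az_coefficients(hosts_az, requested_az, default_az, num_instances):
    """Build one coefficient row per host from its availability-zone set."""
    rows = []
    for az in hosts_az:
        # No metadata: fall back to comparing against the default zone.
        match = (requested_az in az) if az else (requested_az == default_az)
        selectable = match or not requested_az
        rows.append([0 if selectable else 1] * num_instances)
    return rows

# Host0 is in "nova", Host1 in "zone2", Host2 has no zone metadata;
# requesting "nova" (also the default) keeps Host0 and Host2 selectable.
print(az_coefficients([{"nova"}, {"zone2"}, None], "nova", "nova", 2))
```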


@@ -1,50 +0,0 @@
from oslo.config import cfg
from nova.compute import api as compute
from nova.openstack.common.gettextutils import _
from nova.openstack.common import log as logging
from nova.scheduler.solvers import linearconstraints
LOG = logging.getLogger(__name__)
class DifferentHostConstraint(linearconstraints.AffinityConstraint):
"""Force to select hosts which are different from a set of given instances."""
# The linear constraint should be formed as:
# coeff_matrix * var_matrix' (operator) constant_vector
# where (operator) is ==, >, >=, <, <=, !=, etc.
# For convenience, the constant_vector is merged into left-hand-side,
# thus the right-hand-side is always 0.
def get_coefficient_vectors(self, variables, hosts, instance_uuids,
request_spec, filter_properties):
# Coefficients are 1 for hosts running any of the given instances
# and 0 for the other hosts.
context = filter_properties['context']
scheduler_hints = filter_properties.get('scheduler_hints', {})
affinity_uuids = scheduler_hints.get('different_host', [])
if isinstance(affinity_uuids, basestring):
affinity_uuids = [affinity_uuids]
coefficient_vectors = []
for host in hosts:
if affinity_uuids:
if self.compute_api.get_all(context,
{'host': host.host,
'uuid': affinity_uuids,
'deleted': False}):
coefficient_vectors.append([1 for j in range(
self.num_instances)])
else:
coefficient_vectors.append([0 for j in range(
self.num_instances)])
else:
coefficient_vectors.append([0 for j in range(
self.num_instances)])
return coefficient_vectors
def get_variable_vectors(self, variables, hosts, instance_uuids,
request_spec, filter_properties):
# The variable_vectors[i,j] denotes the relationship between
# host[i] and instance[j].
variable_vectors = [[variables[i][j] for j in range(
self.num_instances)] for i in range(self.num_hosts)]
return variable_vectors
def get_operations(self, variables, hosts, instance_uuids, request_spec,
filter_properties):
# Operations are '=='.
operations = [(lambda x: x == 0) for i in range(self.num_hosts)]
return operations


@@ -1,89 +0,0 @@
# Copyright (c) 2014 Cisco Systems Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo.config import cfg
from nova.compute import api as compute
from nova.openstack.common.gettextutils import _
from nova.openstack.common import log as logging
from nova.scheduler.solvers import linearconstraints
LOG = logging.getLogger(__name__)
CONF = cfg.CONF
CONF.import_opt('max_io_ops_per_host', 'nova.scheduler.filters.io_ops_filter')
class IoOpsConstraint(linearconstraints.BaseLinearConstraint):
"""A constraint to ensure only those hosts are selected whose number of
concurrent I/O operations are within a set threshold.
"""
def __init__(self, variables, hosts, instance_uuids, request_spec,
filter_properties):
self.compute_api = compute.API()
[self.num_hosts, self.num_instances] = self._get_host_instance_nums(
hosts, instance_uuids, request_spec)
def _get_host_instance_nums(self, hosts, instance_uuids, request_spec):
"""This method calculates number of hosts and instances."""
num_hosts = len(hosts)
if instance_uuids:
num_instances = len(instance_uuids)
else:
num_instances = request_spec.get('num_instances', 1)
return [num_hosts, num_instances]
# The linear constraint should be formed as:
# coeff_matrix * var_matrix' (operator) constant_vector
# where (operator) is ==, >, >=, <, <=, !=, etc.
# For convenience, the constant_vector is merged into left-hand-side,
# thus the right-hand-side is always 0.
def get_coefficient_vectors(self, variables, hosts, instance_uuids,
request_spec, filter_properties):
# Coefficients are 0 for hosts within the limit and 1 for other hosts.
coefficient_vectors = []
for host in hosts:
num_io_ops = host.num_io_ops
max_io_ops = CONF.max_io_ops_per_host
passes = num_io_ops < max_io_ops
if passes:
coefficient_vectors.append([0 for j in range(
self.num_instances)])
else:
coefficient_vectors.append([1 for j in range(
self.num_instances)])
LOG.debug(_("%(host)s fails I/O ops check: Max IOs per host "
"is set to %(max_io_ops)s"),
{'host': host,
'max_io_ops': max_io_ops})
return coefficient_vectors
def get_variable_vectors(self, variables, hosts, instance_uuids,
request_spec, filter_properties):
# The variable_vectors[i,j] denotes the relationship between
# host[i] and instance[j].
variable_vectors = []
variable_vectors = [[variables[i][j] for j in range(
self.num_instances)] for i in range(self.num_hosts)]
return variable_vectors
def get_operations(self, variables, hosts, instance_uuids, request_spec,
filter_properties):
# Operations are '=='.
operations = [(lambda x: x == 0) for i in range(self.num_hosts)]
return operations


@@ -1,85 +0,0 @@
# Copyright (c) 2014 Cisco Systems Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo.config import cfg
from nova.scheduler.solvers import linearconstraints
CONF = cfg.CONF
CONF.import_opt('disk_allocation_ratio', 'nova.scheduler.filters.disk_filter')
class MaxDiskAllocationPerHostConstraint(
linearconstraints.ResourceAllocationConstraint):
"""Constraint of the maximum total disk demand acceptable on each host."""
# The linear constraint should be formed as:
# coeff_vectors * var_vectors' (operator) (constants)
# where (operator) is ==, >, >=, <, <=, !=, etc.
# For convenience, the (constants) is merged into left-hand-side,
# thus the right-hand-side is 0.
def get_coefficient_vectors(self, variables, hosts, instance_uuids,
request_spec, filter_properties):
# Give demand as coefficient for each variable and -supply as
# constant in each constraint.
demand = [self._get_required_disk_mb(filter_properties)
for j in range(self.num_instances)]
supply = [self._get_usable_disk_mb(hosts[i])
for i in range(self.num_hosts)]
coefficient_vectors = [demand + [-supply[i]]
for i in range(self.num_hosts)]
return coefficient_vectors
def get_variable_vectors(self, variables, hosts, instance_uuids,
request_spec, filter_properties):
# The variable_vectors[i,j] denotes the relationship between host[i]
# and instance[j].
variable_vectors = []
variable_vectors = [[variables[i][j]
for j in range(self.num_instances)]
+ [1] for i in range(self.num_hosts)]
return variable_vectors
def get_operations(self, variables, hosts, instance_uuids, request_spec,
filter_properties):
# Operations are '<='.
operations = [(lambda x: x <= 0) for i in range(self.num_hosts)]
return operations
def _get_usable_disk_mb(self, host_state):
"""This method returns the usable disk in mb for the given host.
Takes into account the disk allocation ratio (virtual disk to
physical disk allocation ratio)
"""
free_disk_mb = host_state.free_disk_mb
total_usable_disk_mb = host_state.total_usable_disk_gb * 1024
disk_mb_limit = total_usable_disk_mb * CONF.disk_allocation_ratio
used_disk_mb = total_usable_disk_mb - free_disk_mb
usable_disk_mb = disk_mb_limit - used_disk_mb
return usable_disk_mb
def _get_required_disk_mb(self, filter_properties):
"""This method returns the required disk in mb from
the given filter_properties dictionary object.
"""
requested_disk_mb = 0
instance_type = filter_properties.get('instance_type')
if instance_type is not None:
requested_disk_mb = (1024 * (instance_type.get('root_gb', 0) +
instance_type.get('ephemeral_gb', 0)) +
instance_type.get('swap', 0))
return requested_disk_mb
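The usable-disk arithmetic in `_get_usable_disk_mb` (overcommit raises the limit to `total * ratio`, then already-used disk is subtracted) can be checked with plain numbers. The standalone function and values below are illustrative, not part of the driver.

```python
def usable_disk_mb(total_usable_disk_gb, free_disk_mb, disk_allocation_ratio):
    # Mirrors _get_usable_disk_mb: the allocation ratio scales the limit,
    # and disk already consumed is subtracted from that limit.
    total_mb = total_usable_disk_gb * 1024
    used_mb = total_mb - free_disk_mb
    return total_mb * disk_allocation_ratio - used_mb

# 100 GB physical, 40 GB free, 1.5x overcommit:
# limit = 153600 MB, used = 61440 MB, so 92160 MB remain usable.
print(usable_disk_mb(100, 40 * 1024, 1.5))
```

Note that with a ratio above 1.0 the "usable" figure can exceed the physically free space, which is exactly the overcommit behavior the constraint is meant to bound.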


@@ -1,85 +0,0 @@
# Copyright (c) 2014 Cisco Systems Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo.config import cfg
from nova.scheduler.solvers import linearconstraints
CONF = cfg.CONF
CONF.import_opt('ram_allocation_ratio', 'nova.scheduler.filters.ram_filter')
class MaxRamAllocationPerHostConstraint(
linearconstraints.ResourceAllocationConstraint):
"""Constraint of the total ram demand acceptable on each host."""
# The linear constraint should be formed as:
# coeff_vectors * var_vectors' (operator) (constants)
# where (operator) is ==, >, >=, <, <=, !=, etc.
# For convenience, the (constants) is merged into left-hand-side,
# thus the right-hand-side is 0.
def get_coefficient_vectors(self, variables, hosts, instance_uuids,
request_spec, filter_properties):
# Give demand as coefficient for each variable and -supply as constant
# in each constraint.
demand = [self._get_required_memory_mb(filter_properties)
for j in range(self.num_instances)]
supply = [self._get_usable_memory_mb(hosts[i])
for i in range(self.num_hosts)]
coefficient_vectors = [demand + [-supply[i]]
for i in range(self.num_hosts)]
return coefficient_vectors
def get_variable_vectors(self, variables, hosts, instance_uuids,
request_spec, filter_properties):
# The variable_vectors[i,j] denotes the relationship between host[i]
# and instance[j].
variable_vectors = []
variable_vectors = [[variables[i][j]
for j in range(self.num_instances)]
+ [1] for i in range(self.num_hosts)]
return variable_vectors
def get_operations(self, variables, hosts, instance_uuids, request_spec,
filter_properties):
# Operations are '<='.
operations = [(lambda x: x <= 0) for i in range(self.num_hosts)]
return operations
def _get_usable_memory_mb(self, host_state):
"""This method returns the usable memory in mb for the given host.
Takes into account the ram allocation ratio (Virtual ram to
physical ram allocation ratio)
"""
free_ram_mb = host_state.free_ram_mb
total_usable_ram_mb = host_state.total_usable_ram_mb
memory_mb_limit = total_usable_ram_mb * CONF.ram_allocation_ratio
used_ram_mb = total_usable_ram_mb - free_ram_mb
usable_ram_mb = memory_mb_limit - used_ram_mb
return usable_ram_mb
def _get_required_memory_mb(self, filter_properties):
"""This method returns the required memory in mb from
the given filter_properties dictionary object.
"""
required_ram_mb = 0
instance_type = filter_properties.get('instance_type')
if instance_type is not None:
required_ram_mb = instance_type.get('memory_mb', 0)
return required_ram_mb


@@ -1,79 +0,0 @@
# Copyright (c) 2014 Cisco Systems Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo.config import cfg
from nova.scheduler.solvers import linearconstraints
CONF = cfg.CONF
CONF.import_opt('cpu_allocation_ratio', 'nova.scheduler.filters.core_filter')
class MaxVcpuAllocationPerHostConstraint(
linearconstraints.ResourceAllocationConstraint):
"""Constraint of the total vcpu demand acceptable on each host."""
# The linear constraint should be formed as:
# coeff_vectors * var_vectors' (operator) (constants)
# where (operator) is ==, >, >=, <, <=, !=, etc.
# For convenience, the (constants) is merged into left-hand-side,
# thus the right-hand-side is 0.
def get_coefficient_vectors(self, variables, hosts, instance_uuids,
request_spec, filter_properties):
# Give demand as coefficient for each variable and -supply as constant
# in each constraint.
demand = [self._get_required_vcpus(filter_properties)
for j in range(self.num_instances)]
supply = [self._get_usable_vcpus(hosts[i])
for i in range(self.num_hosts)]
coefficient_vectors = [demand + [-supply[i]]
for i in range(self.num_hosts)]
return coefficient_vectors
def get_variable_vectors(self, variables, hosts, instance_uuids,
request_spec, filter_properties):
# The variable_vectors[i,j] denotes the relationship between host[i]
# and instance[j].
variable_vectors = []
variable_vectors = [[variables[i][j]
for j in range(self.num_instances)]
+ [1] for i in range(self.num_hosts)]
return variable_vectors
def get_operations(self, variables, hosts, instance_uuids, request_spec,
filter_properties):
# Operations are '<='.
operations = [(lambda x: x <= 0) for i in range(self.num_hosts)]
return operations
def _get_usable_vcpus(self, host_state):
"""This method returns the usable vcpus for the given host.
"""
vcpus_total = host_state.vcpus_total * CONF.cpu_allocation_ratio
usable_vcpus = vcpus_total - host_state.vcpus_used
return usable_vcpus
def _get_required_vcpus(self, filter_properties):
"""This method returns the required vcpus from
the given filter_properties dictionary object.
"""
required_vcpus = 1
instance_type = filter_properties.get('instance_type')
if instance_type is not None:
required_vcpus = instance_type.get('vcpus', 1)
return required_vcpus


@@ -1,68 +0,0 @@
# Copyright (c) 2014 Cisco Systems Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova.openstack.common import log as logging
from nova.scheduler.solvers import linearconstraints
LOG = logging.getLogger(__name__)
class NonTrivialSolutionConstraint(linearconstraints.BaseLinearConstraint):
"""Constraint that forces each instance to be placed
at exactly one host, so as to avoid trivial solutions.
"""
# The linear constraint should be formed as:
# coeff_matrix * var_matrix' (operator) (constants)
# where (operator) is ==, >, >=, <, <=, !=, etc.
# For convenience, the (constants) is merged into left-hand-side,
# thus the right-hand-side is 0.
def __init__(self, variables, hosts, instance_uuids, request_spec,
filter_properties):
[self.num_hosts, self.num_instances] = self._get_host_instance_nums(
hosts, instance_uuids, request_spec)
def _get_host_instance_nums(self, hosts, instance_uuids, request_spec):
"""This method calculates number of hosts and instances."""
num_hosts = len(hosts)
if instance_uuids:
num_instances = len(instance_uuids)
else:
num_instances = request_spec.get('num_instances', 1)
return [num_hosts, num_instances]
def get_coefficient_vectors(self, variables, hosts, instance_uuids,
request_spec, filter_properties):
# The coefficient for each variable is 1 and
# constant in each constraint is (-1).
coefficient_vectors = [[1 for i in range(self.num_hosts)] + [-1]
for j in range(self.num_instances)]
return coefficient_vectors
def get_variable_vectors(self, variables, hosts, instance_uuids,
request_spec, filter_properties):
# The variable_matrix[i,j] denotes the relationship between
# instance[i] and host[j]
variable_vectors = [[variables[i][j] for i in range(self.num_hosts)] +
[1] for j in range(self.num_instances)]
return variable_vectors
def get_operations(self, variables, hosts, instance_uuids, request_spec,
filter_properties):
# Operations are '=='.
operations = [(lambda x: x == 0) for j in range(self.num_instances)]
return operations
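The rows built above require each instance j to be placed on exactly one host: sum_i(x_ij) - 1 == 0. A quick standalone check of one such row (plain Python, illustrative values only):

```python
# For instance j the coefficient vector is [1, ..., 1, -1] over
# [x_0j, ..., x_{m-1,j}, 1]: the row equals (number of hosts chosen) - 1
# and must be exactly 0 to be feasible.
def non_trivial_row(host_choices):
    """host_choices[i] == 1 iff instance j is placed on host i."""
    coefficients = [1] * len(host_choices) + [-1]
    variables = host_choices + [1]
    return sum(c * v for c, v in zip(coefficients, variables))

print(non_trivial_row([0, 1, 0]))  # placed on exactly one host -> 0
print(non_trivial_row([1, 1, 0]))  # placed twice -> 1, infeasible
print(non_trivial_row([0, 0, 0]))  # not placed -> -1, infeasible
```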

View File

@ -1,87 +0,0 @@
# Copyright (c) 2014 Cisco Systems Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo.config import cfg
from nova.openstack.common import log as logging
from nova.scheduler.solvers import linearconstraints
CONF = cfg.CONF
CONF.import_opt("max_instances_per_host", "nova.scheduler.filters.num_instances_filter")
LOG = logging.getLogger(__name__)
class NumInstancesPerHostConstraint(linearconstraints.BaseLinearConstraint):
"""Constraint that specifies the maximum number of instances that
each host can launch.
"""
# The linear constraint should be formed as:
# coeff_matrix * var_matrix' (operator) (constants)
# where (operator) is ==, >, >=, <, <=, !=, etc.
# For convenience, the (constants) is merged into left-hand-side,
# thus the right-hand-side is 0.
def __init__(self, variables, hosts, instance_uuids, request_spec,
filter_properties):
[self.num_hosts, self.num_instances] = self._get_host_instance_nums(
hosts, instance_uuids, request_spec)
def _get_host_instance_nums(self, hosts, instance_uuids, request_spec):
"""This method calculates number of hosts and instances."""
num_hosts = len(hosts)
if instance_uuids:
num_instances = len(instance_uuids)
else:
num_instances = request_spec.get('num_instances', 1)
return [num_hosts, num_instances]
def get_coefficient_vectors(self, variables, hosts, instance_uuids,
request_spec, filter_properties):
"""Calculate the coeffivient vectors."""
# The coefficient for each variable is 1 and constant in
# each constraint is -(max_instances_per_host)
supply = [self._get_usable_instance_num(hosts[i])
for i in range(self.num_hosts)]
coefficient_matrix = [[1 for j in range(self.num_instances)] +
[-supply[i]] for i in range(self.num_hosts)]
return coefficient_matrix
def get_variable_vectors(self, variables, hosts, instance_uuids,
request_spec, filter_properties):
"""Reorganize the variables."""
# The variable_matrix[i,j] denotes the relationship between
# host[i] and instance[j].
variable_matrix = [[variables[i][j] for j in range(
self.num_instances)] + [1] for i in range(self.num_hosts)]
return variable_matrix
def get_operations(self, variables, hosts, instance_uuids, request_spec,
filter_properties):
"""Set operations for each constraint function."""
# Operations are '<='.
operations = [(lambda x: x <= 0) for i in range(self.num_hosts)]
return operations
def _get_usable_instance_num(self, host_state):
"""This method returns the usable number of instance
for the given host.
"""
num_instances = host_state.num_instances
max_instances_allowed = CONF.max_instances_per_host
usable_instance_num = max_instances_allowed - num_instances
return usable_instance_num

View File

@ -1,90 +0,0 @@
# Copyright (c) 2014 Cisco Systems Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo.config import cfg
from nova.openstack.common import log as logging
from nova.scheduler.solvers import linearconstraints
max_host_networks_opts = [
cfg.IntOpt('max_networks_per_host',
default=4094,
help='The maximum number of networks allowed in a host')
]
CONF = cfg.CONF
CONF.register_opts(max_host_networks_opts)
LOG = logging.getLogger(__name__)
class NumNetworksPerHostConstraint(
linearconstraints.BaseLinearConstraint):
"""Constraint that specifies the maximum number of networks that
each host can launch.
"""
# The linear constraint should be formed as:
# coeff_matrix * var_matrix' (operator) (constants)
# where (operator) is ==, >, >=, <, <=, !=, etc.
# For convenience, the (constants) is merged into left-hand-side,
# thus the right-hand-side is 0.
def get_coefficient_vectors(self, variables, hosts, instance_uuids,
request_spec, filter_properties):
"""Calculate the coeffivient vectors."""
# The coefficient for each variable is 1 and constant in
# each constraint is -(max_instances_per_host)
usable_network_nums = [self._get_usable_network_num(hosts[i])
for i in range(self.num_hosts)]
requested_networks = filter_properties.get('requested_networks', None)
num_new_networks = [0 for i in range(self.num_hosts)]
for i in range(self.num_hosts):
for network_id, requested_ip, port_id in requested_networks:
if network_id:
if network_id not in hosts[i].networks:
num_new_networks[i] += 1
coefficient_vectors = [
[num_new_networks[i] - usable_network_nums[i]
for j in range(self.num_instances)]
for i in range(self.num_hosts)]
return coefficient_vectors
def get_variable_vectors(self, variables, hosts, instance_uuids,
request_spec, filter_properties):
"""Reorganize the variables."""
# The variable_matrix[i,j] denotes the relationship between
# host[i] and instance[j].
variable_vectors = [[variables[i][j] for j in range(
self.num_instances)] for i in range(self.num_hosts)]
return variable_vectors
def get_operations(self, variables, hosts, instance_uuids, request_spec,
filter_properties):
"""Set operations for each constraint function."""
# Operations are '<='.
operations = [(lambda x: x <= 0) for i in range(self.num_hosts)]
return operations
def _get_usable_network_num(self, host_state):
"""This method returns the usable number of network
for the given host.
"""
num_networks = len(host_state.networks)
max_networks_allowed = CONF.max_networks_per_host
usable_network_num = max_networks_allowed - num_networks
return usable_network_num
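The coefficient computed above for host i is (networks the request would add to the host) minus (networks the host can still accept); with the '<=' operation, a positive coefficient forces every x_ij of the host to 0. A standalone sketch of that coefficient (plain Python; it takes bare network ids instead of the (network_id, ip, port) tuples used above, and all values are made up):

```python
def network_coefficient(host_networks, requested_networks, max_per_host):
    """Coefficient of every variable of one host in this constraint."""
    usable = max_per_host - len(host_networks)
    new = sum(1 for net_id in requested_networks
              if net_id and net_id not in host_networks)
    return new - usable

# A host already at its network limit that still needs one new network
# gets a positive coefficient and is ruled out; a host that already has
# every requested network stays feasible.
print(network_coefficient({'net-a', 'net-b'}, ['net-b', 'net-c'], 2))  # 1
print(network_coefficient({'net-b', 'net-c'}, ['net-b', 'net-c'], 2))  # 0
```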

View File

@ -1,87 +0,0 @@
# Copyright (c) 2014 Cisco Systems Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo.config import cfg
from nova.openstack.common import log as logging
from nova.scheduler.solvers import linearconstraints
max_rack_networks_opts = [
cfg.IntOpt('max_networks_per_rack',
default=4094,
help='The maximum number of networks allowed in a rack')
]
CONF = cfg.CONF
CONF.register_opts(max_rack_networks_opts)
LOG = logging.getLogger(__name__)
class NumNetworksPerRackConstraint(
linearconstraints.BaseLinearConstraint):
"""Constraint that specifies the maximum number of networks that
each rack can launch.
"""
# The linear constraint should be formed as:
# coeff_matrix * var_matrix' (operator) (constants)
# where (operator) is ==, >, >=, <, <=, !=, etc.
# For convenience, the (constants) is merged into left-hand-side,
# thus the right-hand-side is 0.
def get_coefficient_vectors(self, variables, hosts, instance_uuids,
request_spec, filter_properties):
"""Calculate the coeffivient vectors."""
# The coefficient for each variable is 1 and constant in
# each constraint is -(max_instances_per_host)
requested_networks = filter_properties.get('requested_networks', None)
max_networks_allowed = CONF.max_networks_per_rack
host_coeffs = [1 for i in range(self.num_hosts)]
for i in range(self.num_hosts):
rack_networks = hosts[i].rack_networks
for rack in rack_networks.keys():
this_rack_networks = rack_networks[rack]
num_networks = len(this_rack_networks)
num_new_networks = 0
for network_id, requested_ip, port_id in requested_networks:
if network_id:
if network_id not in this_rack_networks:
num_new_networks += 1
if num_networks + num_new_networks > max_networks_allowed:
host_coeffs[i] = -1
break
coefficient_vectors = [
[host_coeffs[i] for j in range(self.num_instances)]
for i in range(self.num_hosts)]
return coefficient_vectors
def get_variable_vectors(self, variables, hosts, instance_uuids,
request_spec, filter_properties):
"""Reorganize the variables."""
# The variable_matrix[i,j] denotes the relationship between
# host[i] and instance[j].
variable_vectors = [[variables[i][j] for j in range(
self.num_instances)] for i in range(self.num_hosts)]
return variable_vectors
def get_operations(self, variables, hosts, instance_uuids, request_spec,
filter_properties):
"""Set operations for each constraint function."""
# Operations are '>='.
operations = [(lambda x: x >= 0) for i in range(self.num_hosts)]
return operations
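Here the per-host coefficient collapses to +/-1: it becomes -1 as soon as any rack the host reports would exceed max_networks_per_rack after the request, and with the '>=' operation a -1 coefficient forbids placing any instance on the host. A standalone sketch (plain Python; bare network ids and made-up data):

```python
def rack_coefficient(rack_networks, requested_networks, max_per_rack):
    """Return 1 if every rack the host belongs to can absorb the
    requested networks, -1 otherwise (which blocks the host, since the
    constraint requires coeff * num_placed_instances >= 0)."""
    for networks in rack_networks.values():
        new = sum(1 for net_id in requested_networks
                  if net_id and net_id not in networks)
        if len(networks) + new > max_per_rack:
            return -1
    return 1

racks_ok = {'rack1': {'net-a'}}
racks_full = {'rack1': {'net-a', 'net-b'}}
print(rack_coefficient(racks_ok, ['net-c'], 2))    # 1  -> host usable
print(rack_coefficient(racks_full, ['net-c'], 2))  # -1 -> host blocked
```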

View File

@ -1,49 +0,0 @@
from oslo.config import cfg
from nova.compute import api as compute
from nova.openstack.common.gettextutils import _
from nova.openstack.common import log as logging
from nova.scheduler.solvers import linearconstraints
LOG = logging.getLogger(__name__)
class SameHostConstraint(linearconstraints.AffinityConstraint):
"""Force to select hosts which are same as a set of given instances'."""
# The linear constraint should be formed as:
# coeff_matrix * var_matrix' (operator) constant_vector
# where (operator) is ==, >, >=, <, <=, !=, etc.
# For convenience, the constant_vector is merged into left-hand-side,
# thus the right-hand-side is always 0.
    def get_coefficient_vectors(self, variables, hosts, instance_uuids,
                                request_spec, filter_properties):
# Coefficients are 0 for same hosts and 1 for different hosts.
context = filter_properties['context']
scheduler_hints = filter_properties.get('scheduler_hints', {})
affinity_uuids = scheduler_hints.get('same_host', [])
if isinstance(affinity_uuids, basestring):
affinity_uuids = [affinity_uuids]
coefficient_vectors = []
for host in hosts:
if affinity_uuids:
                if self.compute_api.get_all(context,
                                            {'host': host.host,
                                             'uuid': affinity_uuids,
                                             'deleted': False}):
coefficient_vectors.append([0 for j in range(self.num_instances)])
else:
coefficient_vectors.append([1 for j in range(self.num_instances)])
else:
coefficient_vectors.append([0 for j in range(self.num_instances)])
return coefficient_vectors
    def get_variable_vectors(self, variables, hosts, instance_uuids,
                             request_spec, filter_properties):
        # The variable_vectors[i][j] denotes the relationship between
        # host[i] and instance[j].
        variable_vectors = [[variables[i][j]
                             for j in range(self.num_instances)]
                            for i in range(self.num_hosts)]
        return variable_vectors
    def get_operations(self, variables, hosts, instance_uuids, request_spec,
                       filter_properties):
        # Operations are '=='.
        operations = [(lambda x: x == 0) for i in range(self.num_hosts)]
        return operations
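The same_host row for host i is [c_i, ..., c_i] over [x_i0, ..., x_{i,n-1}] with '== 0', where c_i is 0 for hosts that run one of the referenced instances and 1 otherwise: any placement on a c_i == 1 host makes the row nonzero, so instances can only land on the "same" hosts. A standalone check (plain Python, illustrative only):

```python
def same_host_allowed(runs_affinity_instance, placement):
    """True iff the constraint row for this host evaluates to 0."""
    coefficient = 0 if runs_affinity_instance else 1
    return sum(coefficient * x for x in placement) == 0

print(same_host_allowed(True, [1, 1]))   # allowed host: any placement ok
print(same_host_allowed(False, [1, 0]))  # disallowed host used -> False
print(same_host_allowed(False, [0, 0]))  # disallowed host unused -> True
```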

View File

@ -1,70 +0,0 @@
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova.openstack.common import log as logging
from nova.scheduler.solvers import linearconstraints
LOG = logging.getLogger(__name__)
class ServerGroupAffinityConstraint(linearconstraints.AffinityConstraint):
"""Force to select hosts which host given server group."""
# The linear constraint should be formed as:
# coeff_matrix * var_matrix' (operator) constant_vector
# where (operator) is ==, >, >=, <, <=, !=, etc.
# For convenience, the constant_vector is merged into left-hand-side,
# thus the right-hand-side is always 0.
def __init__(self, *args, **kwargs):
super(ServerGroupAffinityConstraint, self).__init__(*args, **kwargs)
self.policy_name = 'affinity'
def get_coefficient_vectors(self, variables, hosts, instance_uuids,
request_spec, filter_properties):
coefficient_vectors = []
policies = filter_properties.get('group_policies', [])
if self.policy_name not in policies:
coefficient_vectors = [[0 for j in range(self.num_instances)]
for i in range(self.num_hosts)]
return coefficient_vectors
group_hosts = filter_properties.get('group_hosts')
        if not group_hosts:
            # When the group is empty, all instances must be placed on the
            # same host; the row [1 - n, 1, ..., 1] with '== 0' enforces an
            # all-or-nothing placement per host.
            coefficient_vectors = [[1 - self.num_instances] +
                                   [1 for j in range(self.num_instances - 1)]
                                   for i in range(self.num_hosts)]
            return coefficient_vectors
for host in hosts:
if host.host in group_hosts:
coefficient_vectors.append([0 for j
in range(self.num_instances)])
else:
coefficient_vectors.append([1 for j
in range(self.num_instances)])
return coefficient_vectors
def get_variable_vectors(self, variables, hosts, instance_uuids,
request_spec, filter_properties):
variable_vectors = [[variables[i][j] for j in range(
self.num_instances)] for i in range(self.num_hosts)]
return variable_vectors
def get_operations(self, variables, hosts, instance_uuids,
request_spec, filter_properties):
operations = [(lambda x: x == 0) for i in range(self.num_hosts)]
return operations
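For an empty group, the row [1 - n, 1, ..., 1] over [x_i0, ..., x_{i,n-1}] with '== 0' is satisfiable only when host i receives all n instances or none; together with the non-trivial-solution constraint, this lands the whole group on a single host. A standalone check (plain Python):

```python
def affinity_row(placement):
    """Row value for one host; feasible iff it equals 0."""
    n = len(placement)
    coefficients = [1 - n] + [1] * (n - 1)
    return sum(c * x for c, x in zip(coefficients, placement))

print(affinity_row([1, 1, 1]))  # all instances together -> 0, feasible
print(affinity_row([0, 0, 0]))  # none on this host     -> 0, feasible
print(affinity_row([1, 0, 0]))  # group split           -> -2, infeasible
```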

View File

@ -1,65 +0,0 @@
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova.openstack.common import log as logging
from nova.scheduler.solvers import linearconstraints
LOG = logging.getLogger(__name__)
class ServerGroupAntiAffinityConstraint(linearconstraints.AffinityConstraint):
"""Force to select hosts which host given server group."""
# The linear constraint should be formed as:
# coeff_matrix * var_matrix' (operator) constant_vector
# where (operator) is ==, >, >=, <, <=, !=, etc.
# For convenience, the constant_vector is merged into left-hand-side,
# thus the right-hand-side is always 0.
def __init__(self, *args, **kwargs):
super(ServerGroupAntiAffinityConstraint, self).__init__(
*args, **kwargs)
self.policy_name = 'anti-affinity'
def get_coefficient_vectors(self, variables, hosts, instance_uuids,
request_spec, filter_properties):
coefficient_vectors = []
policies = filter_properties.get('group_policies', [])
if self.policy_name not in policies:
coefficient_vectors = [[0 for j in range(self.num_instances)] + [0]
for i in range(self.num_hosts)]
return coefficient_vectors
group_hosts = filter_properties.get('group_hosts')
for host in hosts:
if host.host in group_hosts:
coefficient_vectors.append([1 for j
in range(self.num_instances)] + [1])
else:
coefficient_vectors.append([1 for j
in range(self.num_instances)] + [0])
return coefficient_vectors
def get_variable_vectors(self, variables, hosts, instance_uuids,
request_spec, filter_properties):
variable_vectors = [[variables[i][j] for j in range(
self.num_instances)] + [1] for i in range(self.num_hosts)]
return variable_vectors
def get_operations(self, variables, hosts, instance_uuids,
request_spec, filter_properties):
operations = [(lambda x: x <= 1) for i in range(self.num_hosts)]
return operations
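Each anti-affinity row is [1, ..., 1, m_i] over [x_i0, ..., x_{i,n-1}, 1] with '<= 1', where m_i is 1 iff host i already runs a group member: at most one new group instance per host, and none at all on a host that already hosts one. A standalone check (plain Python):

```python
def anti_affinity_row(placement, already_hosts_member):
    """Row value for one host; feasible iff it is <= 1."""
    coefficients = [1] * len(placement) + [1 if already_hosts_member else 0]
    variables = placement + [1]
    return sum(c * v for c, v in zip(coefficients, variables))

print(anti_affinity_row([1, 0, 0], False))  # 1 -> allowed (<= 1)
print(anti_affinity_row([1, 1, 0], False))  # 2 -> infeasible
print(anti_affinity_row([1, 0, 0], True))   # 2 -> infeasible
```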

View File

@ -1,122 +0,0 @@
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from pulp import constants
from pulp import pulp
from nova.openstack.common.gettextutils import _
from nova.openstack.common import log as logging
from nova.scheduler import solvers as scheduler_solver
LOG = logging.getLogger(__name__)
class HostsPulpSolver(scheduler_solver.BaseHostSolver):
"""A LP based pluggable LP solver implemented using PULP modeler."""
def __init__(self):
self.cost_classes = self._get_cost_classes()
self.constraint_classes = self._get_constraint_classes()
self.cost_weights = self._get_cost_weights()
def host_solve(self, hosts, instance_uuids, request_spec,
filter_properties):
"""This method returns a list of tuples - (host, instance_uuid)
that are returned by the solver. Here the assumption is that
all instance_uuids have the same requirement as specified in
filter_properties.
"""
host_instance_tuples_list = []
if instance_uuids:
num_instances = len(instance_uuids)
else:
num_instances = request_spec.get('num_instances', 1)
            # Set an unset uuid string for each instance.
instance_uuids = ['unset_uuid' + str(i)
for i in xrange(num_instances)]
num_hosts = len(hosts)
LOG.debug(_("All Hosts: %s") % [h.host for h in hosts])
for host in hosts:
LOG.debug(_("Host state: %s") % host)
# Create dictionaries mapping host/instance IDs to hosts/instances.
host_ids = ['Host' + str(i) for i in range(num_hosts)]
host_id_dict = dict(zip(host_ids, hosts))
instance_ids = ['Instance' + str(i) for i in range(num_instances)]
instance_id_dict = dict(zip(instance_ids, instance_uuids))
# Create the 'prob' variable to contain the problem data.
prob = pulp.LpProblem("Host Instance Scheduler Problem",
constants.LpMinimize)
# Create the 'variables' matrix to contain the referenced variables.
variables = [[pulp.LpVariable("IA" + "_Host" + str(i) + "_Instance" +
str(j), 0, 1, constants.LpInteger) for j in
range(num_instances)] for i in range(num_hosts)]
# Get costs and constraints and formulate the linear problem.
self.cost_objects = [cost() for cost in self.cost_classes]
self.constraint_objects = [constraint(variables, hosts,
instance_uuids, request_spec, filter_properties)
for constraint in self.constraint_classes]
costs = [[0 for j in range(num_instances)] for i in range(num_hosts)]
for cost_object in self.cost_objects:
cost = cost_object.get_cost_matrix(hosts, instance_uuids,
request_spec, filter_properties)
cost = cost_object.normalize_cost_matrix(cost, 0.0, 1.0)
weight = float(self.cost_weights[cost_object.__class__.__name__])
costs = [[costs[i][j] + weight * cost[i][j]
for j in range(num_instances)] for i in range(num_hosts)]
prob += (pulp.lpSum([costs[i][j] * variables[i][j]
for i in range(num_hosts) for j in range(num_instances)]),
"Sum_of_Host_Instance_Scheduling_Costs")
for constraint_object in self.constraint_objects:
coefficient_vectors = constraint_object.get_coefficient_vectors(
variables, hosts, instance_uuids,
request_spec, filter_properties)
variable_vectors = constraint_object.get_variable_vectors(
variables, hosts, instance_uuids,
request_spec, filter_properties)
operations = constraint_object.get_operations(
variables, hosts, instance_uuids,
request_spec, filter_properties)
for i in range(len(operations)):
operation = operations[i]
len_vector = len(variable_vectors[i])
prob += (operation(pulp.lpSum([coefficient_vectors[i][j]
* variable_vectors[i][j] for j in range(len_vector)])),
"Costraint_Name_%s" % constraint_object.__class__.__name__
+ "_No._%s" % i)
# The problem is solved using PULP's choice of Solver.
prob.solve()
# Create host-instance tuples from the solutions.
if pulp.LpStatus[prob.status] == 'Optimal':
for v in prob.variables():
if v.name.startswith('IA'):
(host_id, instance_id) = v.name.lstrip('IA').lstrip(
'_').split('_')
if v.varValue == 1.0:
host_instance_tuples_list.append(
(host_id_dict[host_id],
instance_id_dict[instance_id]))
return host_instance_tuples_list
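Before entering the objective, each cost class's matrix is normalized to [0.0, 1.0] and scaled by its configured weight, then summed element-wise. A standalone sketch of that aggregation step (plain Python; the `normalize` helper and the weight values are illustrative assumptions, not the actual cost-class API):

```python
def normalize(matrix, lo=0.0, hi=1.0):
    """Linearly rescale all matrix entries into [lo, hi]."""
    flat = [v for row in matrix for v in row]
    vmin, vmax = min(flat), max(flat)
    span = (vmax - vmin) or 1.0
    return [[lo + (hi - lo) * (v - vmin) / span for v in row]
            for row in matrix]

num_hosts, num_instances = 2, 2
ram_cost = [[512.0, 512.0], [2048.0, 2048.0]]  # e.g. a RAM-based cost
weights = {'ram': 1.0}                          # hypothetical config value

costs = [[0.0] * num_instances for _ in range(num_hosts)]
norm = normalize(ram_cost)
costs = [[costs[i][j] + weights['ram'] * norm[i][j]
          for j in range(num_instances)] for i in range(num_hosts)]
print(costs)  # host0 rows normalize to 0.0, host1 rows to 1.0
```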

View File

@ -0,0 +1,220 @@
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from pulp import constants
from pulp import pulp
from pulp import solvers as pulp_solver_classes
from oslo.config import cfg
from nova.openstack.common.gettextutils import _
from nova.openstack.common import log as logging
from nova.scheduler import solvers as scheduler_solver
pulp_solver_opts = [
cfg.IntOpt('pulp_solver_timeout_seconds',
default=20,
help='How much time in seconds is allowed for solvers to '
'solve the scheduling problem. If this time limit '
'is exceeded the solver will be stopped.'),
]
CONF = cfg.CONF
CONF.register_opts(pulp_solver_opts, group='solver_scheduler')
LOG = logging.getLogger(__name__)
class PulpSolver(scheduler_solver.BaseHostSolver):
"""A LP based pluggable LP solver implemented using PULP modeler."""
def __init__(self):
super(PulpSolver, self).__init__()
self.cost_classes = self._get_cost_classes()
self.constraint_classes = self._get_constraint_classes()
def _get_cost_matrix(self, hosts, filter_properties):
num_hosts = len(hosts)
num_instances = filter_properties.get('num_instances', 1)
solver_cache = filter_properties['solver_cache']
# initialize cost matrix
cost_matrix = [[0 for j in xrange(num_instances + 1)]
for i in xrange(num_hosts)]
solver_cache['cost_matrix'] = cost_matrix
cost_objects = [cost() for cost in self.cost_classes]
cost_objects.sort(key=lambda cost: cost.precedence)
precedence_level = 0
for cost_object in cost_objects:
if cost_object.precedence > precedence_level:
# update cost matrix in the solver cache
solver_cache['cost_matrix'] = cost_matrix
precedence_level = cost_object.precedence
cost_multiplier = cost_object.cost_multiplier()
this_cost_mat = cost_object.get_extended_cost_matrix(hosts,
filter_properties)
if not this_cost_mat:
continue
cost_matrix = [[cost_matrix[i][j] +
this_cost_mat[i][j] * cost_multiplier
for j in xrange(num_instances + 1)]
for i in xrange(num_hosts)]
# update cost matrix in the solver cache
solver_cache['cost_matrix'] = cost_matrix
return cost_matrix
def _get_constraint_matrix(self, hosts, filter_properties):
num_hosts = len(hosts)
num_instances = filter_properties.get('num_instances', 1)
solver_cache = filter_properties['solver_cache']
# initialize constraint_matrix
constraint_matrix = [[True for j in xrange(num_instances + 1)]
for i in xrange(num_hosts)]
solver_cache['constraint_matrix'] = constraint_matrix
constraint_objects = [cons() for cons in self.constraint_classes]
constraint_objects.sort(key=lambda cons: cons.precedence)
precedence_level = 0
for constraint_object in constraint_objects:
if constraint_object.precedence > precedence_level:
# update constraint matrix in the solver cache
solver_cache['constraint_matrix'] = constraint_matrix
precedence_level = constraint_object.precedence
this_cons_mat = constraint_object.get_constraint_matrix(hosts,
filter_properties)
if not this_cons_mat:
continue
for i in xrange(num_hosts):
constraint_matrix[i][1:] = [constraint_matrix[i][j + 1] &
this_cons_mat[i][j] for j in xrange(num_instances)]
# update constraint matrix in the solver cache
solver_cache['constraint_matrix'] = constraint_matrix
return constraint_matrix
def _adjust_cost_matrix(self, cost_matrix):
"""Modify cost matrix to fit the optimization problem."""
new_cost_matrix = cost_matrix
if not cost_matrix:
return new_cost_matrix
first_column = [row[0] for row in cost_matrix]
last_column = [row[-1] for row in cost_matrix]
if sum(first_column) < sum(last_column):
offset = min(first_column)
sign = 1
else:
offset = max(first_column)
sign = -1
for i in xrange(len(cost_matrix)):
for j in xrange(len(cost_matrix[i])):
new_cost_matrix[i][j] = sign * (
(cost_matrix[i][j] - offset) ** 2)
return new_cost_matrix
def solve(self, hosts, filter_properties):
"""This method returns a list of tuples - (host, instance_uuid)
that are returned by the solver. Here the assumption is that
all instance_uuids have the same requirement as specified in
filter_properties.
"""
host_instance_combinations = []
num_instances = filter_properties['num_instances']
num_hosts = len(hosts)
instance_uuids = filter_properties.get('instance_uuids') or [
'(unknown_uuid)' + str(i) for i in xrange(num_instances)]
filter_properties.setdefault('solver_cache', {})
filter_properties['solver_cache'].update(
{'cost_matrix': [],
'constraint_matrix': []})
cost_matrix = self._get_cost_matrix(hosts, filter_properties)
cost_matrix = self._adjust_cost_matrix(cost_matrix)
constraint_matrix = self._get_constraint_matrix(hosts,
filter_properties)
# Create dictionaries mapping temporary host/instance keys to
        # hosts/instance_uuids. These temporary keys are to be used in the
# solving process since we need a convention of lp variable names.
host_keys = ['Host' + str(i) for i in xrange(num_hosts)]
host_key_map = dict(zip(host_keys, hosts))
instance_num_keys = ['InstanceNum' + str(i) for
i in xrange(num_instances + 1)]
instance_num_key_map = dict(zip(instance_num_keys,
xrange(num_instances + 1)))
# create the pulp variables
variable_matrix = [
[pulp.LpVariable('HI_' + host_key + '_' + instance_num_key,
0, 1, constants.LpInteger)
for instance_num_key in instance_num_keys]
for host_key in host_keys]
# create the 'prob' variable to contain the problem data.
prob = pulp.LpProblem("Host Instance Scheduler Problem",
constants.LpMinimize)
# add cost function to pulp solver
cost_variables = [variable_matrix[i][j] for i in xrange(num_hosts)
for j in xrange(num_instances + 1)]
cost_coefficients = [cost_matrix[i][j] for i in xrange(num_hosts)
for j in xrange(num_instances + 1)]
prob += (pulp.lpSum([cost_coefficients[i] * cost_variables[i]
for i in xrange(len(cost_variables))]), "Sum_Costs")
# add constraints to pulp solver
for i in xrange(num_hosts):
for j in xrange(num_instances + 1):
if constraint_matrix[i][j] is False:
prob += (variable_matrix[i][j] == 0,
"Cons_Host_%s" % i + "_NumInst_%s" % j)
# add additional constraints to ensure the problem is valid
# (1) non-trivial solution: number of all instances == that requested
prob += (pulp.lpSum([variable_matrix[i][j] * j for i in
xrange(num_hosts) for j in xrange(num_instances + 1)]) ==
num_instances, "NonTrivialCons")
# (2) valid solution: each host is assigned 1 num-instances value
for i in xrange(num_hosts):
prob += (pulp.lpSum([variable_matrix[i][j] for j in
xrange(num_instances + 1)]) == 1, "ValidCons_Host_%s" % i)
# The problem is solved using PULP's choice of Solver.
prob.solve(pulp_solver_classes.PULP_CBC_CMD(
maxSeconds=CONF.solver_scheduler.pulp_solver_timeout_seconds))
# Create host-instance tuples from the solutions.
if pulp.LpStatus[prob.status] == 'Optimal':
num_insts_on_host = {}
for v in prob.variables():
if v.name.startswith('HI'):
(host_key, instance_num_key) = v.name.lstrip('HI').lstrip(
'_').split('_')
if v.varValue == 1:
num_insts_on_host[host_key] = (
instance_num_key_map[instance_num_key])
instances_iter = iter(instance_uuids)
for host_key in host_keys:
num_insts_on_this_host = num_insts_on_host.get(host_key, 0)
for i in xrange(num_insts_on_this_host):
host_instance_combinations.append(
(host_key_map[host_key], instances_iter.next()))
else:
LOG.warn(_("Pulp solver didnot find optimal solution! reason: %s")
% pulp.LpStatus[prob.status])
host_instance_combinations = []
return host_instance_combinations
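Unlike the per-instance variables of the old solver, the new solver uses a one-hot row per host: x_ij == 1 means host i receives exactly j instances. The two extra constraints added above ("ValidCons" and "NonTrivialCons") can be checked standalone (plain Python, made-up solution):

```python
def is_valid_solution(variable_matrix, num_instances):
    """Check the two validity constraints of the new LP formulation."""
    for row in variable_matrix:
        if sum(row) != 1:           # ValidCons: one chosen value per host
            return False
    total = sum(j * x for row in variable_matrix
                for j, x in enumerate(row))
    return total == num_instances   # NonTrivialCons: all instances placed

# Two hosts, two instances: host0 takes both, host1 takes none.
solution = [[0, 0, 1],
            [1, 0, 0]]
print(is_valid_solution(solution, 2))  # True
print(is_valid_solution(solution, 1))  # False
```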

View File

@ -0,0 +1,30 @@
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova import exception
from nova.openstack.common.gettextutils import _
class SolverFailed(exception.NovaException):
msg_fmt = _("Scheduler solver failed to find a solution. %(reason)s")
class SchedulerSolverCostNotFound(exception.NovaException):
msg_fmt = _("Scheduler solver cost cannot be found: %(cost_name)s")
class SchedulerSolverConstraintNotFound(exception.NovaException):
msg_fmt = _("Scheduler solver constraint cannot be found: "
"%(constraint_name)s")

View File

@ -21,9 +21,8 @@ import mox
from nova.compute import vm_states
from nova import db
from nova.openstack.common import jsonutils
from nova.scheduler import filter_scheduler
from nova.scheduler import host_manager
from nova.scheduler import solver_scheduler
from nova.scheduler import solver_scheduler_host_manager
COMPUTE_NODES = [
@ -188,19 +187,15 @@ INSTANCES = [
]
class FakeFilterScheduler(filter_scheduler.FilterScheduler):
def __init__(self, *args, **kwargs):
super(FakeFilterScheduler, self).__init__(*args, **kwargs)
self.host_manager = host_manager.HostManager()
class FakeSolverScheduler(solver_scheduler.ConstraintSolverScheduler):
def __init__(self, *args, **kwargs):
super(FakeSolverScheduler, self).__init__(*args, **kwargs)
self.host_manager = host_manager.HostManager()
self.host_manager = (
solver_scheduler_host_manager.SolverSchedulerHostManager())
class FakeHostManager(host_manager.HostManager):
class FakeSolverSchedulerHostManager(
solver_scheduler_host_manager.SolverSchedulerHostManager):
"""host1: free_ram_mb=1024-512-512=0, free_disk_gb=1024-512-512=0
host2: free_ram_mb=2048-512=1536 free_disk_gb=2048-512=1536
host3: free_ram_mb=4096-1024=3072 free_disk_gb=4096-1024=3072
@@ -208,7 +203,7 @@ class FakeHostManager(host_manager.HostManager):
"""
def __init__(self):
super(FakeHostManager, self).__init__()
super(FakeSolverSchedulerHostManager, self).__init__()
self.service_states = {
'host1': {
@@ -226,9 +221,10 @@ class FakeHostManager(host_manager.HostManager):
}
class FakeHostState(host_manager.HostState):
class FakeSolverSchedulerHostState(
solver_scheduler_host_manager.SolverSchedulerHostState):
def __init__(self, host, node, attribute_dict):
super(FakeHostState, self).__init__(host, node)
super(FakeSolverSchedulerHostState, self).__init__(host, node)
for (key, val) in attribute_dict.iteritems():
setattr(self, key, val)


@@ -0,0 +1,49 @@
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from nova.scheduler.solvers.constraints import active_hosts_constraint
from nova import test
from nova.tests.scheduler import solver_scheduler_fakes as fakes
class TestActiveHostsConstraint(test.NoDBTestCase):
def setUp(self):
super(TestActiveHostsConstraint, self).setUp()
self.constraint_cls = active_hosts_constraint.ActiveHostsConstraint
self._generate_fake_constraint_input()
def _generate_fake_constraint_input(self):
self.fake_filter_properties = {
'instance_uuids': ['fake_uuid_%s' % x for x in range(3)],
'num_instances': 3}
host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', {})
host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', {})
self.fake_hosts = [host1, host2]
@mock.patch('nova.scheduler.solvers.constraints.'
'active_hosts_constraint.ActiveHostsConstraint.'
'host_filter_cls')
def test_get_constraint_matrix(self, mock_filter_cls):
expected_cons_mat = [
[True, True, True],
[False, False, False]]
mock_filter = mock_filter_cls.return_value
mock_filter.host_passes.side_effect = [True, False]
cons_mat = self.constraint_cls().get_constraint_matrix(
self.fake_hosts, self.fake_filter_properties)
self.assertEqual(expected_cons_mat, cons_mat)


@@ -0,0 +1,107 @@
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from nova.scheduler.solvers.constraints import affinity_constraint
from nova import test
from nova.tests.scheduler import solver_scheduler_fakes as fakes
class TestSameHostConstraint(test.NoDBTestCase):
def setUp(self):
super(TestSameHostConstraint, self).setUp()
self.constraint_cls = affinity_constraint.SameHostConstraint
self._generate_fake_constraint_input()
def _generate_fake_constraint_input(self):
self.fake_filter_properties = {
'instance_uuids': ['fake_uuid_%s' % x for x in range(3)],
'num_instances': 3}
host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', {})
host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', {})
self.fake_hosts = [host1, host2]
@mock.patch('nova.scheduler.solvers.constraints.'
'affinity_constraint.SameHostConstraint.'
'host_filter_cls')
def test_get_constraint_matrix(self, mock_filter_cls):
expected_cons_mat = [
[True, True, True],
[False, False, False]]
mock_filter = mock_filter_cls.return_value
mock_filter.host_passes.side_effect = [True, False]
cons_mat = self.constraint_cls().get_constraint_matrix(
self.fake_hosts, self.fake_filter_properties)
self.assertEqual(expected_cons_mat, cons_mat)
class TestDifferentHostConstraint(test.NoDBTestCase):
def setUp(self):
super(TestDifferentHostConstraint, self).setUp()
self.constraint_cls = affinity_constraint.DifferentHostConstraint
self._generate_fake_constraint_input()
def _generate_fake_constraint_input(self):
self.fake_filter_properties = {
'instance_uuids': ['fake_uuid_%s' % x for x in range(3)],
'num_instances': 3}
host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', {})
host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', {})
self.fake_hosts = [host1, host2]
@mock.patch('nova.scheduler.solvers.constraints.'
'affinity_constraint.DifferentHostConstraint.'
'host_filter_cls')
def test_get_constraint_matrix(self, mock_filter_cls):
expected_cons_mat = [
[True, True, True],
[False, False, False]]
mock_filter = mock_filter_cls.return_value
mock_filter.host_passes.side_effect = [True, False]
cons_mat = self.constraint_cls().get_constraint_matrix(
self.fake_hosts, self.fake_filter_properties)
self.assertEqual(expected_cons_mat, cons_mat)
class TestSimpleCidrAffinityConstraint(test.NoDBTestCase):
def setUp(self):
super(TestSimpleCidrAffinityConstraint, self).setUp()
self.constraint_cls = affinity_constraint.SimpleCidrAffinityConstraint
self._generate_fake_constraint_input()
def _generate_fake_constraint_input(self):
self.fake_filter_properties = {
'instance_uuids': ['fake_uuid_%s' % x for x in range(3)],
'num_instances': 3}
host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', {})
host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', {})
self.fake_hosts = [host1, host2]
@mock.patch('nova.scheduler.solvers.constraints.'
'affinity_constraint.SimpleCidrAffinityConstraint.'
'host_filter_cls')
def test_get_constraint_matrix(self, mock_filter_cls):
expected_cons_mat = [
[True, True, True],
[False, False, False]]
mock_filter = mock_filter_cls.return_value
mock_filter.host_passes.side_effect = [True, False]
cons_mat = self.constraint_cls().get_constraint_matrix(
self.fake_hosts, self.fake_filter_properties)
self.assertEqual(expected_cons_mat, cons_mat)


@@ -0,0 +1,63 @@
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from nova import context
from nova.scheduler.solvers.constraints import aggregate_disk
from nova import test
from nova.tests.scheduler import solver_scheduler_fakes as fakes
class TestAggregateDiskConstraint(test.NoDBTestCase):
def setUp(self):
super(TestAggregateDiskConstraint, self).setUp()
self.constraint_cls = aggregate_disk.AggregateDiskConstraint
self.context = context.RequestContext('fake', 'fake')
self._generate_fake_constraint_input()
def _generate_fake_constraint_input(self):
self.fake_filter_properties = {
'context': self.context,
'instance_type': {'root_gb': 1, 'ephemeral_gb': 1, 'swap': 0},
'instance_uuids': ['fake_uuid_%s' % x for x in range(2)],
'num_instances': 2}
host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1',
{'free_disk_mb': 1024, 'total_usable_disk_gb': 2})
host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1',
{'free_disk_mb': 1024, 'total_usable_disk_gb': 2})
host3 = fakes.FakeSolverSchedulerHostState('host3', 'node1',
{'free_disk_mb': 1024, 'total_usable_disk_gb': 2})
self.fake_hosts = [host1, host2, host3]
@mock.patch('nova.db.aggregate_metadata_get_by_host')
def test_get_constraint_matrix(self, agg_mock):
self.flags(disk_allocation_ratio=1.0)
def _agg_mock_side_effect(*args, **kwargs):
if args[1] == 'host1':
return {'disk_allocation_ratio': set(['2.0', '3.0'])}
if args[1] == 'host2':
return {'disk_allocation_ratio': set(['3.0'])}
if args[1] == 'host3':
return {'disk_allocation_ratio': set()}
agg_mock.side_effect = _agg_mock_side_effect
expected_cons_mat = [
[True, False],
[True, True],
[False, False]]
cons_mat = self.constraint_cls().get_constraint_matrix(
self.fake_hosts, self.fake_filter_properties)
self.assertEqual(expected_cons_mat, cons_mat)


@@ -0,0 +1,51 @@
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from nova.scheduler.solvers.constraints import \
aggregate_image_properties_isolation
from nova import test
from nova.tests.scheduler import solver_scheduler_fakes as fakes
class TestAggregateImagePropertiesIsolationConstraint(test.NoDBTestCase):
def setUp(self):
super(TestAggregateImagePropertiesIsolationConstraint, self).setUp()
self.constraint_cls = aggregate_image_properties_isolation.\
AggregateImagePropertiesIsolationConstraint
self._generate_fake_constraint_input()
def _generate_fake_constraint_input(self):
self.fake_filter_properties = {
'instance_uuids': ['fake_uuid_%s' % x for x in range(3)],
'num_instances': 3}
host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', {})
host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', {})
self.fake_hosts = [host1, host2]
@mock.patch('nova.scheduler.solvers.constraints.'
'aggregate_image_properties_isolation.'
'AggregateImagePropertiesIsolationConstraint.host_filter_cls')
def test_get_constraint_matrix(self, mock_filter_cls):
expected_cons_mat = [
[True, True, True],
[False, False, False]]
mock_filter = mock_filter_cls.return_value
mock_filter.host_passes.side_effect = [True, False]
cons_mat = self.constraint_cls().get_constraint_matrix(
self.fake_hosts, self.fake_filter_properties)
self.assertEqual(expected_cons_mat, cons_mat)


@@ -0,0 +1,50 @@
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from nova.scheduler.solvers.constraints import aggregate_instance_extra_specs
from nova import test
from nova.tests.scheduler import solver_scheduler_fakes as fakes
class TestAggregateInstanceExtraSpecsConstraint(test.NoDBTestCase):
def setUp(self):
super(TestAggregateInstanceExtraSpecsConstraint, self).setUp()
self.constraint_cls = aggregate_instance_extra_specs.\
AggregateInstanceExtraSpecsConstraint
self._generate_fake_constraint_input()
def _generate_fake_constraint_input(self):
self.fake_filter_properties = {
'instance_uuids': ['fake_uuid_%s' % x for x in range(3)],
'num_instances': 3}
host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', {})
host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', {})
self.fake_hosts = [host1, host2]
@mock.patch('nova.scheduler.solvers.constraints.'
'aggregate_instance_extra_specs.'
'AggregateInstanceExtraSpecsConstraint.host_filter_cls')
def test_get_constraint_matrix(self, mock_filter_cls):
expected_cons_mat = [
[True, True, True],
[False, False, False]]
mock_filter = mock_filter_cls.return_value
mock_filter.host_passes.side_effect = [True, False]
cons_mat = self.constraint_cls().get_constraint_matrix(
self.fake_hosts, self.fake_filter_properties)
self.assertEqual(expected_cons_mat, cons_mat)


@@ -0,0 +1,51 @@
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from nova.scheduler.solvers.constraints import \
aggregate_multitenancy_isolation
from nova import test
from nova.tests.scheduler import solver_scheduler_fakes as fakes
class TestAggregateMultiTenancyIsolationConstraint(test.NoDBTestCase):
def setUp(self):
super(TestAggregateMultiTenancyIsolationConstraint, self).setUp()
self.constraint_cls = aggregate_multitenancy_isolation.\
AggregateMultiTenancyIsolationConstraint
self._generate_fake_constraint_input()
def _generate_fake_constraint_input(self):
self.fake_filter_properties = {
'instance_uuids': ['fake_uuid_%s' % x for x in range(3)],
'num_instances': 3}
host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', {})
host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', {})
self.fake_hosts = [host1, host2]
@mock.patch('nova.scheduler.solvers.constraints.'
'aggregate_multitenancy_isolation.'
'AggregateMultiTenancyIsolationConstraint.host_filter_cls')
def test_get_constraint_matrix(self, mock_filter_cls):
expected_cons_mat = [
[True, True, True],
[False, False, False]]
mock_filter = mock_filter_cls.return_value
mock_filter.host_passes.side_effect = [True, False]
cons_mat = self.constraint_cls().get_constraint_matrix(
self.fake_hosts, self.fake_filter_properties)
self.assertEqual(expected_cons_mat, cons_mat)


@@ -0,0 +1,63 @@
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from nova import context
from nova.scheduler.solvers.constraints import aggregate_ram
from nova import test
from nova.tests.scheduler import solver_scheduler_fakes as fakes
class TestAggregateRamConstraint(test.NoDBTestCase):
def setUp(self):
super(TestAggregateRamConstraint, self).setUp()
self.constraint_cls = aggregate_ram.AggregateRamConstraint
self.context = context.RequestContext('fake', 'fake')
self._generate_fake_constraint_input()
def _generate_fake_constraint_input(self):
self.fake_filter_properties = {
'context': self.context,
'instance_type': {'memory_mb': 1024},
'instance_uuids': ['fake_uuid_%s' % x for x in range(2)],
'num_instances': 2}
host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1',
{'free_ram_mb': 512, 'total_usable_ram_mb': 1024})
host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1',
{'free_ram_mb': 512, 'total_usable_ram_mb': 1024})
host3 = fakes.FakeSolverSchedulerHostState('host3', 'node1',
{'free_ram_mb': 512, 'total_usable_ram_mb': 1024})
self.fake_hosts = [host1, host2, host3]
@mock.patch('nova.db.aggregate_metadata_get_by_host')
def test_get_constraint_matrix(self, agg_mock):
self.flags(ram_allocation_ratio=1.0)
def _agg_mock_side_effect(*args, **kwargs):
if args[1] == 'host1':
return {'ram_allocation_ratio': set(['1.0', '2.0'])}
if args[1] == 'host2':
return {'ram_allocation_ratio': set(['3.0'])}
if args[1] == 'host3':
return {'ram_allocation_ratio': set()}
agg_mock.side_effect = _agg_mock_side_effect
expected_cons_mat = [
[False, False],
[True, True],
[False, False]]
cons_mat = self.constraint_cls().get_constraint_matrix(
self.fake_hosts, self.fake_filter_properties)
self.assertEqual(expected_cons_mat, cons_mat)


@@ -0,0 +1,50 @@
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from nova.scheduler.solvers.constraints import aggregate_type_affinity
from nova import test
from nova.tests.scheduler import solver_scheduler_fakes as fakes
class TestAggregateTypeAffinityConstraint(test.NoDBTestCase):
def setUp(self):
super(TestAggregateTypeAffinityConstraint, self).setUp()
self.constraint_cls = aggregate_type_affinity.\
AggregateTypeAffinityConstraint
self._generate_fake_constraint_input()
def _generate_fake_constraint_input(self):
self.fake_filter_properties = {
'instance_uuids': ['fake_uuid_%s' % x for x in range(3)],
'num_instances': 3}
host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', {})
host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', {})
self.fake_hosts = [host1, host2]
@mock.patch('nova.scheduler.solvers.constraints.'
'aggregate_type_affinity.AggregateTypeAffinityConstraint.'
'host_filter_cls')
def test_get_constraint_matrix(self, mock_filter_cls):
expected_cons_mat = [
[True, True, True],
[False, False, False]]
mock_filter = mock_filter_cls.return_value
mock_filter.host_passes.side_effect = [True, False]
cons_mat = self.constraint_cls().get_constraint_matrix(
self.fake_hosts, self.fake_filter_properties)
self.assertEqual(expected_cons_mat, cons_mat)


@@ -0,0 +1,64 @@
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from nova import context
from nova.scheduler.solvers.constraints import aggregate_vcpu
from nova import test
from nova.tests.scheduler import solver_scheduler_fakes as fakes
class TestAggregateVcpuConstraint(test.NoDBTestCase):
def setUp(self):
super(TestAggregateVcpuConstraint, self).setUp()
self.constraint_cls = aggregate_vcpu.AggregateVcpuConstraint
self.context = context.RequestContext('fake', 'fake')
self._generate_fake_constraint_input()
def _generate_fake_constraint_input(self):
self.fake_filter_properties = {
'context': self.context,
'instance_type': {'vcpus': 2},
'instance_uuids': ['fake_uuid_%s' % x for x in range(2)],
'num_instances': 2}
host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1',
{'vcpus_total': 16, 'vcpus_used': 16})
host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1',
{'vcpus_total': 16, 'vcpus_used': 16})
host3 = fakes.FakeSolverSchedulerHostState('host3', 'node1',
{'vcpus_total': 16, 'vcpus_used': 16})
self.fake_hosts = [host1, host2, host3]
@mock.patch('nova.db.aggregate_metadata_get_by_host')
def test_get_constraint_matrix(self, agg_mock):
self.flags(cpu_allocation_ratio=1.0)
def _agg_mock_side_effect(*args, **kwargs):
if args[1] == 'host1':
return {'cpu_allocation_ratio': set(['1.0', '2.0'])}
if args[1] == 'host2':
return {'cpu_allocation_ratio': set(['3.0'])}
if args[1] == 'host3':
return {'cpu_allocation_ratio': set()}
agg_mock.side_effect = _agg_mock_side_effect
expected_cons_mat = [
[False, False],
[True, True],
[False, False]]
cons_mat = self.constraint_cls().get_constraint_matrix(
self.fake_hosts, self.fake_filter_properties)
self.assertEqual(expected_cons_mat, cons_mat)


@@ -0,0 +1,50 @@
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from nova.scheduler.solvers.constraints import availability_zone_constraint
from nova import test
from nova.tests.scheduler import solver_scheduler_fakes as fakes
class TestAvailabilityZoneConstraint(test.NoDBTestCase):
def setUp(self):
super(TestAvailabilityZoneConstraint, self).setUp()
self.constraint_cls = availability_zone_constraint.\
AvailabilityZoneConstraint
self._generate_fake_constraint_input()
def _generate_fake_constraint_input(self):
self.fake_filter_properties = {
'instance_uuids': ['fake_uuid_%s' % x for x in range(3)],
'num_instances': 3}
host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', {})
host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', {})
self.fake_hosts = [host1, host2]
@mock.patch('nova.scheduler.solvers.constraints.'
'availability_zone_constraint.'
'AvailabilityZoneConstraint.host_filter_cls')
def test_get_constraint_matrix(self, mock_filter_cls):
expected_cons_mat = [
[True, True, True],
[False, False, False]]
mock_filter = mock_filter_cls.return_value
mock_filter.host_passes.side_effect = [True, False]
cons_mat = self.constraint_cls().get_constraint_matrix(
self.fake_hosts, self.fake_filter_properties)
self.assertEqual(expected_cons_mat, cons_mat)


@@ -0,0 +1,50 @@
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from nova.scheduler.solvers.constraints import compute_capabilities_constraint
from nova import test
from nova.tests.scheduler import solver_scheduler_fakes as fakes
class TestComputeCapabilitiesConstraint(test.NoDBTestCase):
def setUp(self):
super(TestComputeCapabilitiesConstraint, self).setUp()
self.constraint_cls = compute_capabilities_constraint.\
ComputeCapabilitiesConstraint
self._generate_fake_constraint_input()
def _generate_fake_constraint_input(self):
self.fake_filter_properties = {
'instance_uuids': ['fake_uuid_%s' % x for x in range(3)],
'num_instances': 3}
host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', {})
host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', {})
self.fake_hosts = [host1, host2]
@mock.patch('nova.scheduler.solvers.constraints.'
'compute_capabilities_constraint.'
'ComputeCapabilitiesConstraint.host_filter_cls')
def test_get_constraint_matrix(self, mock_filter_cls):
expected_cons_mat = [
[True, True, True],
[False, False, False]]
mock_filter = mock_filter_cls.return_value
mock_filter.host_passes.side_effect = [True, False]
cons_mat = self.constraint_cls().get_constraint_matrix(
self.fake_hosts, self.fake_filter_properties)
self.assertEqual(expected_cons_mat, cons_mat)


@@ -0,0 +1,82 @@
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Tests for solver scheduler constraints.
"""
import mock
from nova import context
from nova.scheduler.solvers import constraints
from nova import test
from nova.tests.scheduler import solver_scheduler_fakes as fakes
class ConstraintTestBase(test.NoDBTestCase):
"""Base test case for constraints."""
def setUp(self):
super(ConstraintTestBase, self).setUp()
self.context = context.RequestContext('fake', 'fake')
constraint_handler = constraints.ConstraintHandler()
classes = constraint_handler.get_matching_classes(
['nova.scheduler.solvers.constraints.all_constraints'])
self.class_map = {}
for c in classes:
self.class_map[c.__name__] = c
class ConstraintsTestCase(ConstraintTestBase):
def test_all_constraints(self):
"""Test the existence of all constraint classes."""
self.assertIn('NoConstraint', self.class_map)
self.assertIn('ActiveHostsConstraint', self.class_map)
self.assertIn('DifferentHostConstraint', self.class_map)
def test_base_linear_constraints(self):
blc = constraints.BaseLinearConstraint()
variables, coefficients, constants, operators = (
blc.get_components(None, None, None))
self.assertEqual([], variables)
self.assertEqual([], coefficients)
self.assertEqual([], constants)
self.assertEqual([], operators)
class TestBaseFilterConstraint(ConstraintTestBase):
def setUp(self):
super(TestBaseFilterConstraint, self).setUp()
self.constraint_cls = constraints.BaseFilterConstraint
self._generate_fake_constraint_input()
def _generate_fake_constraint_input(self):
self.fake_filter_properties = {
'instance_uuids': ['fake_uuid_%s' % x for x in range(3)],
'num_instances': 3}
host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', {})
host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', {})
self.fake_hosts = [host1, host2]
@mock.patch('nova.scheduler.solvers.constraints.'
'BaseFilterConstraint.host_filter_cls')
def test_get_constraint_matrix(self, mock_filter_cls):
expected_cons_mat = [
[True, True, True],
[False, False, False]]
mock_filter = mock_filter_cls.return_value
mock_filter.host_passes.side_effect = [True, False]
cons_mat = self.constraint_cls().get_constraint_matrix(
self.fake_hosts, self.fake_filter_properties)
self.assertEqual(expected_cons_mat, cons_mat)
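The expectation in these filter-based constraint tests follows one pattern: a single `host_passes()` decision per host is replicated across all requested instances, yielding a hosts × num_instances boolean matrix. A stdlib-only sketch of that expansion (`filter_constraint_matrix` and the passed-in predicate are hypothetical stand-ins for the Nova filter machinery):

```python
# Sketch: expand one per-host filter decision into the hosts x
# num_instances constraint matrix the tests above assert on.
def filter_constraint_matrix(hosts, filter_properties, host_passes):
    num_instances = filter_properties['num_instances']
    return [[host_passes(host, filter_properties)] * num_instances
            for host in hosts]

props = {'num_instances': 3}
matrix = filter_constraint_matrix(
    ['host1', 'host2'], props,
    lambda host, props: host == 'host1')
print(matrix)  # → [[True, True, True], [False, False, False]]
```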


@@ -0,0 +1,63 @@
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova.scheduler.solvers.constraints import disk_constraint
from nova import test
from nova.tests.scheduler import solver_scheduler_fakes as fakes
class TestDiskConstraint(test.NoDBTestCase):
def setUp(self):
super(TestDiskConstraint, self).setUp()
self.constraint_cls = disk_constraint.DiskConstraint
self._generate_fake_constraint_input()
def _generate_fake_constraint_input(self):
self.fake_filter_properties = {
'instance_type': {'root_gb': 1, 'ephemeral_gb': 1,
'swap': 512},
'instance_uuids': ['fake_uuid_%s' % x for x in range(2)],
'num_instances': 2}
host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1',
{'free_disk_mb': 1 * 1024, 'total_usable_disk_gb': 2})
host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1',
{'free_disk_mb': 10 * 1024, 'total_usable_disk_gb': 12})
host3 = fakes.FakeSolverSchedulerHostState('host3', 'node1',
{'free_disk_mb': 1 * 1024, 'total_usable_disk_gb': 6})
self.fake_hosts = [host1, host2, host3]
def test_get_constraint_matrix(self):
self.flags(disk_allocation_ratio=1.0)
expected_cons_mat = [
[False, False],
[True, True],
[False, False]]
cons_mat = self.constraint_cls().get_constraint_matrix(
self.fake_hosts, self.fake_filter_properties)
self.assertEqual(expected_cons_mat, cons_mat)
def test_get_constraint_matrix_oversubscribe(self):
self.flags(disk_allocation_ratio=2.0)
expected_cons_mat = [
[True, False],
[True, True],
[True, True]]
cons_mat = self.constraint_cls().get_constraint_matrix(
self.fake_hosts, self.fake_filter_properties)
self.assertEqual(expected_cons_mat, cons_mat)
self.assertEqual(2 * 2.0, self.fake_hosts[0].limits['disk_gb'])
self.assertEqual(12 * 2.0, self.fake_hosts[1].limits['disk_gb'])
self.assertEqual(6 * 2.0, self.fake_hosts[2].limits['disk_gb'])
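The oversubscription matrix above can be checked by hand. A sketch reproducing the arithmetic implied by the test's numbers (the exact formula is inferred from the fixtures, not quoted from Nova: the limit is taken as total_gb × ratio, and usable space as that limit minus space already consumed):

```python
# Hand-check of the disk oversubscription matrix asserted above.
def disk_fits(free_disk_mb, total_gb, ratio, requested_gb, num_instances):
    used_gb = total_gb - free_disk_mb / 1024.0   # space already consumed
    usable_gb = total_gb * ratio - used_gb       # oversubscribed headroom
    return usable_gb >= requested_gb * num_instances

requested = 1 + 1 + 512 / 1024.0                 # root + ephemeral + swap (GB)
hosts = [(1024, 2), (10240, 12), (1024, 6)]      # (free_disk_mb, total_gb)
matrix = [[disk_fits(free, total, 2.0, requested, n + 1) for n in range(2)]
          for free, total in hosts]
print(matrix)  # → [[True, False], [True, True], [True, True]]
```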


@@ -0,0 +1,49 @@
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from nova.scheduler.solvers.constraints import image_props_constraint
from nova import test
from nova.tests.scheduler import solver_scheduler_fakes as fakes
class TestImagePropsConstraint(test.NoDBTestCase):
def setUp(self):
super(TestImagePropsConstraint, self).setUp()
self.constraint_cls = image_props_constraint.ImagePropertiesConstraint
self._generate_fake_constraint_input()
def _generate_fake_constraint_input(self):
self.fake_filter_properties = {
'instance_uuids': ['fake_uuid_%s' % x for x in range(3)],
'num_instances': 3}
host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', {})
host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', {})
self.fake_hosts = [host1, host2]
@mock.patch('nova.scheduler.solvers.constraints.'
'image_props_constraint.ImagePropertiesConstraint.'
'host_filter_cls')
def test_get_constraint_matrix(self, mock_filter_cls):
expected_cons_mat = [
[True, True, True],
[False, False, False]]
mock_filter = mock_filter_cls.return_value
mock_filter.host_passes.side_effect = [True, False]
cons_mat = self.constraint_cls().get_constraint_matrix(
self.fake_hosts, self.fake_filter_properties)
self.assertEqual(expected_cons_mat, cons_mat)


@@ -0,0 +1,58 @@
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova.scheduler.solvers.constraints import io_ops_constraint
from nova import test
from nova.tests.scheduler import solver_scheduler_fakes as fakes
class TestIoOpsConstraint(test.NoDBTestCase):
def setUp(self):
super(TestIoOpsConstraint, self).setUp()
self.constraint_cls = io_ops_constraint.IoOpsConstraint
self._generate_fake_constraint_input()
def _generate_fake_constraint_input(self):
self.fake_filter_properties = {
'instance_uuids': ['fake_uuid_%s' % x for x in range(2)],
'num_instances': 2}
host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1',
{'num_io_ops': 6})
host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1',
{'num_io_ops': 10})
host3 = fakes.FakeSolverSchedulerHostState('host3', 'node1',
{'num_io_ops': 15})
self.fake_hosts = [host1, host2, host3]
def test_get_constraint_matrix(self):
self.flags(max_io_ops_per_host=7)
expected_cons_mat = [
[True, False],
[False, False],
[False, False]]
cons_mat = self.constraint_cls().get_constraint_matrix(
self.fake_hosts, self.fake_filter_properties)
self.assertEqual(expected_cons_mat, cons_mat)
def test_get_constraint_matrix2(self):
self.flags(max_io_ops_per_host=15)
expected_cons_mat = [
[True, True],
[True, True],
[False, False]]
cons_mat = self.constraint_cls().get_constraint_matrix(
self.fake_hosts, self.fake_filter_properties)
self.assertEqual(expected_cons_mat, cons_mat)


@@ -0,0 +1,50 @@
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from nova.scheduler.solvers.constraints import isolated_hosts_constraint
from nova import test
from nova.tests.scheduler import solver_scheduler_fakes as fakes
class TestIsolatedHostsConstraint(test.NoDBTestCase):
def setUp(self):
super(TestIsolatedHostsConstraint, self).setUp()
self.constraint_cls = isolated_hosts_constraint.\
IsolatedHostsConstraint
self._generate_fake_constraint_input()
def _generate_fake_constraint_input(self):
self.fake_filter_properties = {
'instance_uuids': ['fake_uuid_%s' % x for x in range(3)],
'num_instances': 3}
host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', {})
host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', {})
self.fake_hosts = [host1, host2]
@mock.patch('nova.scheduler.solvers.constraints.'
'isolated_hosts_constraint.IsolatedHostsConstraint.'
'host_filter_cls')
def test_get_constraint_matrix(self, mock_filter_cls):
expected_cons_mat = [
[True, True, True],
[False, False, False]]
mock_filter = mock_filter_cls.return_value
mock_filter.host_passes.side_effect = [True, False]
cons_mat = self.constraint_cls().get_constraint_matrix(
self.fake_hosts, self.fake_filter_properties)
self.assertEqual(expected_cons_mat, cons_mat)


@@ -0,0 +1,49 @@
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from nova.scheduler.solvers.constraints import json_constraint
from nova import test
from nova.tests.scheduler import solver_scheduler_fakes as fakes
class TestJsonConstraint(test.NoDBTestCase):
def setUp(self):
super(TestJsonConstraint, self).setUp()
self.constraint_cls = json_constraint.JsonConstraint
self._generate_fake_constraint_input()
def _generate_fake_constraint_input(self):
self.fake_filter_properties = {
'instance_uuids': ['fake_uuid_%s' % x for x in range(3)],
'num_instances': 3}
host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', {})
host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', {})
self.fake_hosts = [host1, host2]
@mock.patch('nova.scheduler.solvers.constraints.'
'json_constraint.JsonConstraint.'
'host_filter_cls')
def test_get_constraint_matrix(self, mock_filter_cls):
expected_cons_mat = [
[True, True, True],
[False, False, False]]
mock_filter = mock_filter_cls.return_value
mock_filter.host_passes.side_effect = [True, False]
cons_mat = self.constraint_cls().get_constraint_matrix(
self.fake_hosts, self.fake_filter_properties)
self.assertEqual(expected_cons_mat, cons_mat)


@@ -0,0 +1,49 @@
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from nova.scheduler.solvers.constraints import metrics_constraint
from nova import test
from nova.tests.scheduler import solver_scheduler_fakes as fakes
class TestMetricsConstraint(test.NoDBTestCase):
def setUp(self):
super(TestMetricsConstraint, self).setUp()
self.constraint_cls = metrics_constraint.MetricsConstraint
self._generate_fake_constraint_input()
def _generate_fake_constraint_input(self):
self.fake_filter_properties = {
'instance_uuids': ['fake_uuid_%s' % x for x in range(3)],
'num_instances': 3}
host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', {})
host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', {})
self.fake_hosts = [host1, host2]
@mock.patch('nova.scheduler.solvers.constraints.'
'metrics_constraint.MetricsConstraint.'
'host_filter_cls')
def test_get_constraint_matrix(self, mock_filter_cls):
expected_cons_mat = [
[True, True, True],
[False, False, False]]
mock_filter = mock_filter_cls.return_value
mock_filter.host_passes.side_effect = [True, False]
cons_mat = self.constraint_cls().get_constraint_matrix(
self.fake_hosts, self.fake_filter_properties)
self.assertEqual(expected_cons_mat, cons_mat)


@@ -0,0 +1,42 @@
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova.scheduler.solvers.constraints import no_constraint
from nova import test
from nova.tests.scheduler import solver_scheduler_fakes as fakes
class TestNoConstraint(test.NoDBTestCase):
def setUp(self):
super(TestNoConstraint, self).setUp()
self.constraint_cls = no_constraint.NoConstraint
self._generate_fake_constraint_input()
def _generate_fake_constraint_input(self):
self.fake_filter_properties = {
'instance_uuids': ['fake_uuid_%s' % x for x in range(3)],
'num_instances': 3}
host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', {})
host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', {})
self.fake_hosts = [host1, host2]
def test_get_constraint_matrix(self):
expected_cons_mat = [
[True, True, True],
[True, True, True]]
cons_mat = self.constraint_cls().get_constraint_matrix(
self.fake_hosts, self.fake_filter_properties)
self.assertEqual(expected_cons_mat, cons_mat)


@@ -0,0 +1,48 @@
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova.scheduler.solvers.constraints import num_instances_constraint
from nova import test
from nova.tests.scheduler import solver_scheduler_fakes as fakes
class TestNumInstancesConstraint(test.NoDBTestCase):
def setUp(self):
super(TestNumInstancesConstraint, self).setUp()
self.constraint_cls = num_instances_constraint.NumInstancesConstraint
self._generate_fake_constraint_input()
def _generate_fake_constraint_input(self):
self.fake_filter_properties = {
'instance_uuids': ['fake_uuid_%s' % x for x in range(2)],
'num_instances': 2}
host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1',
{'num_instances': 1})
host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1',
{'num_instances': 4})
host3 = fakes.FakeSolverSchedulerHostState('host3', 'node1',
{'num_instances': 5})
self.fake_hosts = [host1, host2, host3]
def test_get_constraint_matrix(self):
self.flags(max_instances_per_host=5)
expected_cons_mat = [
[True, True],
[True, False],
[False, False]]
cons_mat = self.constraint_cls().get_constraint_matrix(
self.fake_hosts, self.fake_filter_properties)
self.assertEqual(expected_cons_mat, cons_mat)


@@ -0,0 +1,56 @@
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from nova.pci import pci_stats
from nova.scheduler.solvers.constraints import pci_passthrough_constraint
from nova import test
from nova.tests.scheduler import solver_scheduler_fakes as fakes
class TestPciPassthroughConstraint(test.NoDBTestCase):
def setUp(self):
super(TestPciPassthroughConstraint, self).setUp()
self.constraint_cls = \
pci_passthrough_constraint.PciPassthroughConstraint
self._generate_fake_constraint_input()
def _generate_fake_constraint_input(self):
requests = [{'count': 1, 'spec': [{'vendor_id': '8086'}]}]
self.fake_filter_properties = {
'pci_requests': requests,
'instance_uuids': ['fake_uuid_%s' % x for x in range(2)],
'num_instances': 2}
host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1',
{'pci_stats': pci_stats.PciDeviceStats()})
host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1',
{'pci_stats': pci_stats.PciDeviceStats()})
host3 = fakes.FakeSolverSchedulerHostState('host3', 'node1',
{'pci_stats': pci_stats.PciDeviceStats()})
self.fake_hosts = [host1, host2, host3]
@mock.patch('nova.pci.pci_stats.PciDeviceStats.support_requests')
@mock.patch('nova.pci.pci_stats.PciDeviceStats.apply_requests')
def test_get_constraint_matrix(self, apl_reqs, spt_reqs):
spt_reqs.side_effect = [True, False] + [False] + [True, True, False]
expected_cons_mat = [
[True, False],
[False, False],
[True, True]]
cons_mat = self.constraint_cls().get_constraint_matrix(
self.fake_hosts, self.fake_filter_properties)
self.assertEqual(expected_cons_mat, cons_mat)


@@ -0,0 +1,62 @@
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova.scheduler.solvers.constraints import ram_constraint
from nova import test
from nova.tests.scheduler import solver_scheduler_fakes as fakes
class TestRamConstraint(test.NoDBTestCase):
def setUp(self):
super(TestRamConstraint, self).setUp()
self.constraint_cls = ram_constraint.RamConstraint
self._generate_fake_constraint_input()
def _generate_fake_constraint_input(self):
self.fake_filter_properties = {
'instance_type': {'memory_mb': 1024},
'instance_uuids': ['fake_uuid_%s' % x for x in range(2)],
'num_instances': 2}
host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1',
{'free_ram_mb': 512, 'total_usable_ram_mb': 1024})
host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1',
{'free_ram_mb': 2048, 'total_usable_ram_mb': 2048})
host3 = fakes.FakeSolverSchedulerHostState('host3', 'node1',
{'free_ram_mb': -256, 'total_usable_ram_mb': 512})
self.fake_hosts = [host1, host2, host3]
def test_get_constraint_matrix(self):
self.flags(ram_allocation_ratio=1.0)
expected_cons_mat = [
[False, False],
[True, True],
[False, False]]
cons_mat = self.constraint_cls().get_constraint_matrix(
self.fake_hosts, self.fake_filter_properties)
self.assertEqual(expected_cons_mat, cons_mat)
def test_get_constraint_matrix_oversubscribe(self):
self.flags(ram_allocation_ratio=2.0)
expected_cons_mat = [
[True, False],
[True, True],
[False, False]]
cons_mat = self.constraint_cls().get_constraint_matrix(
self.fake_hosts, self.fake_filter_properties)
self.assertEqual(expected_cons_mat, cons_mat)
self.assertEqual(1024 * 2.0, self.fake_hosts[0].limits['memory_mb'])
self.assertEqual(2048 * 2.0, self.fake_hosts[1].limits['memory_mb'])
self.assertEqual(512 * 2.0, self.fake_hosts[2].limits['memory_mb'])


@@ -0,0 +1,49 @@
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from nova.scheduler.solvers.constraints import retry_constraint
from nova import test
from nova.tests.scheduler import solver_scheduler_fakes as fakes
class TestRetryConstraint(test.NoDBTestCase):
def setUp(self):
super(TestRetryConstraint, self).setUp()
self.constraint_cls = retry_constraint.RetryConstraint
self._generate_fake_constraint_input()
def _generate_fake_constraint_input(self):
self.fake_filter_properties = {
'instance_uuids': ['fake_uuid_%s' % x for x in range(3)],
'num_instances': 3}
host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', {})
host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', {})
self.fake_hosts = [host1, host2]
@mock.patch('nova.scheduler.solvers.constraints.'
'retry_constraint.RetryConstraint.'
'host_filter_cls')
def test_get_constraint_matrix(self, mock_filter_cls):
expected_cons_mat = [
[True, True, True],
[False, False, False]]
mock_filter = mock_filter_cls.return_value
mock_filter.host_passes.side_effect = [True, False]
cons_mat = self.constraint_cls().get_constraint_matrix(
self.fake_hosts, self.fake_filter_properties)
self.assertEqual(expected_cons_mat, cons_mat)


@@ -0,0 +1,131 @@
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova.scheduler.solvers.constraints import \
server_group_affinity_constraint
from nova import test
from nova.tests.scheduler import solver_scheduler_fakes as fakes
class TestServerGroupAffinityConstraint(test.NoDBTestCase):
def setUp(self):
super(TestServerGroupAffinityConstraint, self).setUp()
self.constraint_cls = \
server_group_affinity_constraint.ServerGroupAffinityConstraint
self._generate_fake_constraint_input()
def _generate_fake_constraint_input(self):
host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', {})
host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', {})
host3 = fakes.FakeSolverSchedulerHostState('host3', 'node1', {})
self.fake_hosts = [host1, host2, host3]
def test_get_constraint_matrix(self):
fake_filter_properties = {
'group_policies': ['affinity'],
'group_hosts': ['host2'],
'instance_uuids': ['fake_uuid_%s' % x for x in range(2)],
'num_instances': 2}
expected_cons_mat = [
[False, False],
[True, True],
[False, False]]
cons_mat = self.constraint_cls().get_constraint_matrix(
self.fake_hosts, fake_filter_properties)
self.assertEqual(expected_cons_mat, cons_mat)
def test_get_constraint_matrix_empty_group_hosts_list(self):
fake_filter_properties = {
'group_policies': ['affinity'],
'group_hosts': [],
'instance_uuids': ['fake_uuid_%s' % x for x in range(2)],
'num_instances': 2}
expected_cons_mat = [
[False, True],
[False, True],
[False, True]]
cons_mat = self.constraint_cls().get_constraint_matrix(
self.fake_hosts, fake_filter_properties)
self.assertEqual(expected_cons_mat, cons_mat)
def test_get_constraint_matrix_wrong_policy(self):
fake_filter_properties = {
'group_policies': ['other'],
'instance_uuids': ['fake_uuid_%s' % x for x in range(2)],
'num_instances': 2}
expected_cons_mat = [
[True, True],
[True, True],
[True, True]]
cons_mat = self.constraint_cls().get_constraint_matrix(
self.fake_hosts, fake_filter_properties)
self.assertEqual(expected_cons_mat, cons_mat)
class TestServerGroupAntiAffinityConstraint(test.NoDBTestCase):
def setUp(self):
super(TestServerGroupAntiAffinityConstraint, self).setUp()
self.constraint_cls = server_group_affinity_constraint.\
ServerGroupAntiAffinityConstraint
self._generate_fake_constraint_input()
def _generate_fake_constraint_input(self):
host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', {})
host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', {})
host3 = fakes.FakeSolverSchedulerHostState('host3', 'node1', {})
self.fake_hosts = [host1, host2, host3]
def test_get_constraint_matrix(self):
fake_filter_properties = {
'group_policies': ['anti-affinity'],
'group_hosts': ['host1', 'host3'],
'instance_uuids': ['fake_uuid_%s' % x for x in range(2)],
'num_instances': 2}
expected_cons_mat = [
[False, False],
[True, False],
[False, False]]
cons_mat = self.constraint_cls().get_constraint_matrix(
self.fake_hosts, fake_filter_properties)
self.assertEqual(expected_cons_mat, cons_mat)
def test_get_constraint_matrix_empty_group_hosts_list(self):
fake_filter_properties = {
'group_policies': ['anti-affinity'],
'group_hosts': [],
'instance_uuids': ['fake_uuid_%s' % x for x in range(2)],
'num_instances': 2}
expected_cons_mat = [
[True, False],
[True, False],
[True, False]]
cons_mat = self.constraint_cls().get_constraint_matrix(
self.fake_hosts, fake_filter_properties)
self.assertEqual(expected_cons_mat, cons_mat)
def test_get_constraint_matrix_wrong_policy(self):
fake_filter_properties = {
'group_policies': ['other'],
'instance_uuids': ['fake_uuid_%s' % x for x in range(2)],
'num_instances': 2}
expected_cons_mat = [
[True, True],
[True, True],
[True, True]]
cons_mat = self.constraint_cls().get_constraint_matrix(
self.fake_hosts, fake_filter_properties)
self.assertEqual(expected_cons_mat, cons_mat)


@@ -0,0 +1,49 @@
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from nova.scheduler.solvers.constraints import trusted_hosts_constraint
from nova import test
from nova.tests.scheduler import solver_scheduler_fakes as fakes
class TestTrustedHostsConstraint(test.NoDBTestCase):
def setUp(self):
super(TestTrustedHostsConstraint, self).setUp()
self.constraint_cls = trusted_hosts_constraint.TrustedHostsConstraint
self._generate_fake_constraint_input()
def _generate_fake_constraint_input(self):
self.fake_filter_properties = {
'instance_uuids': ['fake_uuid_%s' % x for x in range(3)],
'num_instances': 3}
host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', {})
host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', {})
self.fake_hosts = [host1, host2]
@mock.patch('nova.scheduler.solvers.constraints.'
'trusted_hosts_constraint.TrustedHostsConstraint.'
'host_filter_cls')
def test_get_constraint_matrix(self, mock_filter_cls):
expected_cons_mat = [
[True, True, True],
[False, False, False]]
mock_filter = mock_filter_cls.return_value
mock_filter.host_passes.side_effect = [True, False]
cons_mat = self.constraint_cls().get_constraint_matrix(
self.fake_hosts, self.fake_filter_properties)
self.assertEqual(expected_cons_mat, cons_mat)


@@ -0,0 +1,49 @@
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from nova.scheduler.solvers.constraints import type_affinity_constraint
from nova import test
from nova.tests.scheduler import solver_scheduler_fakes as fakes
class TestTypeAffinityConstraint(test.NoDBTestCase):
def setUp(self):
super(TestTypeAffinityConstraint, self).setUp()
self.constraint_cls = type_affinity_constraint.TypeAffinityConstraint
self._generate_fake_constraint_input()
def _generate_fake_constraint_input(self):
self.fake_filter_properties = {
'instance_uuids': ['fake_uuid_%s' % x for x in range(3)],
'num_instances': 3}
host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', {})
host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', {})
self.fake_hosts = [host1, host2]
@mock.patch('nova.scheduler.solvers.constraints.'
'type_affinity_constraint.TypeAffinityConstraint.'
'host_filter_cls')
def test_get_constraint_matrix(self, mock_filter_cls):
expected_cons_mat = [
[True, True, True],
[False, False, False]]
mock_filter = mock_filter_cls.return_value
mock_filter.host_passes.side_effect = [True, False]
cons_mat = self.constraint_cls().get_constraint_matrix(
self.fake_hosts, self.fake_filter_properties)
self.assertEqual(expected_cons_mat, cons_mat)


@@ -0,0 +1,60 @@
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova.scheduler.solvers.constraints import vcpu_constraint
from nova import test
from nova.tests.scheduler import solver_scheduler_fakes as fakes
class TestVcpuConstraint(test.NoDBTestCase):
def setUp(self):
super(TestVcpuConstraint, self).setUp()
self.constraint_cls = vcpu_constraint.VcpuConstraint
self._generate_fake_constraint_input()
def _generate_fake_constraint_input(self):
self.fake_filter_properties = {
'instance_type': {'vcpus': 2},
'instance_uuids': ['fake_uuid_%s' % x for x in range(2)],
'num_instances': 2}
host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1',
{'vcpus_total': 4, 'vcpus_used': 4})
host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1',
{'vcpus_total': 8, 'vcpus_used': 2})
host3 = fakes.FakeSolverSchedulerHostState('host3', 'node1', {})
self.fake_hosts = [host1, host2, host3]
def test_get_constraint_matrix(self):
self.flags(cpu_allocation_ratio=1.0)
expected_cons_mat = [
[False, False],
[True, True],
[True, True]]
cons_mat = self.constraint_cls().get_constraint_matrix(
self.fake_hosts, self.fake_filter_properties)
self.assertEqual(expected_cons_mat, cons_mat)
def test_get_constraint_matrix_oversubscribe(self):
self.flags(cpu_allocation_ratio=2.0)
expected_cons_mat = [
[True, True],
[True, True],
[True, True]]
cons_mat = self.constraint_cls().get_constraint_matrix(
self.fake_hosts, self.fake_filter_properties)
self.assertEqual(expected_cons_mat, cons_mat)
self.assertEqual(4 * 2.0, self.fake_hosts[0].limits['vcpu'])
self.assertEqual(8 * 2.0, self.fake_hosts[1].limits['vcpu'])


@@ -0,0 +1,48 @@
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Tests for solver scheduler costs.
"""
from nova import context
from nova.scheduler.solvers import costs
from nova import test
class CostTestBase(test.NoDBTestCase):
"""Base test case for costs."""
def setUp(self):
super(CostTestBase, self).setUp()
self.context = context.RequestContext('fake', 'fake')
cost_handler = costs.CostHandler()
classes = cost_handler.get_matching_classes(
['nova.scheduler.solvers.costs.all_costs'])
self.class_map = {}
for c in classes:
self.class_map[c.__name__] = c
class CostsTestCase(CostTestBase):
def test_all_costs(self):
"""Test the existence of all cost classes."""
self.assertIn('RamCost', self.class_map)
self.assertIn('MetricsCost', self.class_map)
def test_base_linear_costs(self):
blc = costs.BaseLinearCost()
variables, coefficients = blc.get_components(None, None, None)
self.assertEqual([], variables)
self.assertEqual([], coefficients)


@@ -0,0 +1,272 @@
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Test case for solver scheduler metrics cost."""
from nova import context
from nova.openstack.common.fixture import mockpatch
from nova.scheduler.solvers import costs
from nova.scheduler.solvers.costs import ram_cost
from nova import test
from nova.tests.scheduler import solver_scheduler_fakes as fakes
class TestMetricsCost(test.NoDBTestCase):
def setUp(self):
super(TestMetricsCost, self).setUp()
self.context = context.RequestContext('fake_usr', 'fake_proj')
self.useFixture(mockpatch.Patch('nova.db.compute_node_get_all',
return_value=fakes.COMPUTE_NODES_METRICS))
self.host_manager = fakes.FakeSolverSchedulerHostManager()
self.cost_handler = costs.CostHandler()
self.cost_classes = self.cost_handler.get_matching_classes(
['nova.scheduler.solvers.costs.metrics_cost.MetricsCost'])
def _get_all_hosts(self):
ctxt = context.get_admin_context()
return self.host_manager.get_all_host_states(ctxt)
def _get_fake_cost_inputs(self):
fake_hosts = self._get_all_hosts()
# FIXME: ideally should mock get_hosts_stripping_forced_and_ignored()
fake_hosts = list(fake_hosts)
# the hosts may come back in arbitrary order, so sort them
# manually here for convenience of testing
tmp = []
for i in range(len(fake_hosts)):
for fh in fake_hosts:
if fh.host == 'host%s' % (i + 1):
tmp.append(fh)
fake_hosts = tmp
fake_filter_properties = {
'context': self.context.elevated(),
'num_instances': 3,
'instance_uuids': ['fake_uuid_%s' % x for x in range(3)]}
return (fake_hosts, fake_filter_properties)
def test_ram_cost_multiplier_1(self):
self.flags(ram_cost_multiplier=0.5, group='solver_scheduler')
self.assertEqual(0.5, ram_cost.RamCost().cost_multiplier())
def test_ram_cost_multiplier_2(self):
self.flags(ram_cost_multiplier=(-2), group='solver_scheduler')
self.assertEqual((-2), ram_cost.RamCost().cost_multiplier())
def test_get_extended_cost_matrix_single_resource(self):
# host1: foo=512
# host2: foo=1024
# host3: foo=3072
# host4: foo=8192
setting = ['foo=1']
self.flags(weight_setting=setting, group='metrics')
fake_hosts, fake_filter_properties = self._get_fake_cost_inputs()
fake_hosts = fake_hosts[0:4]
fake_cost = self.cost_classes[0]()
expected_x_cost_mat = [
[-0.0625, -0.0625, -0.0625, -0.0625],
[-0.125, -0.125, -0.125, -0.125],
[-0.375, -0.375, -0.375, -0.375],
[-1.0, -1.0, -1.0, -1.0]]
x_cost_mat = fake_cost.get_extended_cost_matrix(fake_hosts,
fake_filter_properties)
expected_x_cost_mat = [[round(val, 4) for val in row]
for row in expected_x_cost_mat]
x_cost_mat = [[round(val, 4) for val in row] for row in x_cost_mat]
self.assertEqual(expected_x_cost_mat, x_cost_mat)
def test_get_extended_cost_matrix_multiple_resource(self):
# host1: foo=512, bar=1
# host2: foo=1024, bar=2
# host3: foo=3072, bar=1
# host4: foo=8192, bar=0
setting = ['foo=0.0001', 'bar=1']
self.flags(weight_setting=setting, group='metrics')
fake_hosts, fake_filter_properties = self._get_fake_cost_inputs()
fake_hosts = fake_hosts[0:4]
fake_cost = self.cost_classes[0]()
expected_x_cost_mat = [
[-0.5, -0.5, -0.5, -0.5],
[-1.0, -1.0, -1.0, -1.0],
[-0.6218, -0.6218, -0.6218, -0.6218],
[-0.3896, -0.3896, -0.3896, -0.3896]]
x_cost_mat = fake_cost.get_extended_cost_matrix(fake_hosts,
fake_filter_properties)
expected_x_cost_mat = [[round(val, 4) for val in row]
for row in expected_x_cost_mat]
x_cost_mat = [[round(val, 4) for val in row] for row in x_cost_mat]
self.assertEqual(expected_x_cost_mat, x_cost_mat)
def test_get_extended_cost_matrix_single_resource_negative_ratio(self):
# host1: foo=512
# host2: foo=1024
# host3: foo=3072
# host4: foo=8192
setting = ['foo=-1']
self.flags(weight_setting=setting, group='metrics')
fake_hosts, fake_filter_properties = self._get_fake_cost_inputs()
fake_hosts = fake_hosts[0:4]
fake_cost = self.cost_classes[0]()
expected_x_cost_mat = [
[0.0625, 0.0625, 0.0625, 0.0625],
[0.125, 0.125, 0.125, 0.125],
[0.375, 0.375, 0.375, 0.375],
[1.0, 1.0, 1.0, 1.0]]
x_cost_mat = fake_cost.get_extended_cost_matrix(fake_hosts,
fake_filter_properties)
expected_x_cost_mat = [[round(val, 4) for val in row]
for row in expected_x_cost_mat]
x_cost_mat = [[round(val, 4) for val in row] for row in x_cost_mat]
self.assertEqual(expected_x_cost_mat, x_cost_mat)
def test_get_extended_cost_matrix_multiple_resource_missing_ratio(self):
# host1: foo=512, bar=1
# host2: foo=1024, bar=2
# host3: foo=3072, bar=1
# host4: foo=8192, bar=0
setting = ['foo=0.0001', 'bar']
self.flags(weight_setting=setting, group='metrics')
fake_hosts, fake_filter_properties = self._get_fake_cost_inputs()
fake_hosts = fake_hosts[0:4]
fake_cost = self.cost_classes[0]()
expected_x_cost_mat = [
[-0.0625, -0.0625, -0.0625, -0.0625],
[-0.125, -0.125, -0.125, -0.125],
[-0.375, -0.375, -0.375, -0.375],
[-1.0, -1.0, -1.0, -1.0]]
x_cost_mat = fake_cost.get_extended_cost_matrix(fake_hosts,
fake_filter_properties)
expected_x_cost_mat = [[round(val, 4) for val in row]
for row in expected_x_cost_mat]
x_cost_mat = [[round(val, 4) for val in row] for row in x_cost_mat]
self.assertEqual(expected_x_cost_mat, x_cost_mat)
def test_get_extended_cost_matrix_multiple_resource_wrong_ratio(self):
# host1: foo=512, bar=1
# host2: foo=1024, bar=2
# host3: foo=3072, bar=1
# host4: foo=8192, bar=0
setting = ['foo=0.0001', 'bar = 2.0t']
self.flags(weight_setting=setting, group='metrics')
fake_hosts, fake_filter_properties = self._get_fake_cost_inputs()
fake_hosts = fake_hosts[0:4]
fake_cost = self.cost_classes[0]()
expected_x_cost_mat = [
[-0.0625, -0.0625, -0.0625, -0.0625],
[-0.125, -0.125, -0.125, -0.125],
[-0.375, -0.375, -0.375, -0.375],
[-1.0, -1.0, -1.0, -1.0]]
x_cost_mat = fake_cost.get_extended_cost_matrix(fake_hosts,
fake_filter_properties)
expected_x_cost_mat = [[round(val, 4) for val in row]
for row in expected_x_cost_mat]
x_cost_mat = [[round(val, 4) for val in row] for row in x_cost_mat]
self.assertEqual(expected_x_cost_mat, x_cost_mat)
def test_get_extended_cost_matrix_metric_not_found(self):
# host3: foo=3072, bar=1
# host4: foo=8192, bar=0
# host5: foo=768, bar=0, zot=1
# host6: foo=2048, bar=0, zot=2
setting = ['foo=0.0001', 'zot=-2']
self.flags(weight_setting=setting, group='metrics')
fake_hosts, fake_filter_properties = self._get_fake_cost_inputs()
fake_hosts = fake_hosts[2:6]
fake_cost = self.cost_classes[0]()
expected_x_cost_mat = [
[1.0, 1.0, 1.0, 1.0],
[1.0, 1.0, 1.0, 1.0],
[0.3394, 0.3394, 0.3394, 0.3394],
[0.6697, 0.6697, 0.6697, 0.6697]]
x_cost_mat = fake_cost.get_extended_cost_matrix(fake_hosts,
fake_filter_properties)
expected_x_cost_mat = [[round(val, 4) for val in row]
for row in expected_x_cost_mat]
x_cost_mat = [[round(val, 4) for val in row] for row in x_cost_mat]
self.assertEqual(expected_x_cost_mat, x_cost_mat)
def test_get_extended_cost_matrix_metric_not_found_in_any(self):
# host3: foo=3072, bar=1
# host4: foo=8192, bar=0
# host5: foo=768, bar=0, zot=1
# host6: foo=2048, bar=0, zot=2
setting = ['foo=0.0001', 'non_exist_met=2']
self.flags(weight_setting=setting, group='metrics')
fake_hosts, fake_filter_properties = self._get_fake_cost_inputs()
fake_hosts = fake_hosts[2:6]
fake_cost = self.cost_classes[0]()
expected_x_cost_mat = [
[0, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0]]
x_cost_mat = fake_cost.get_extended_cost_matrix(fake_hosts,
fake_filter_properties)
expected_x_cost_mat = [[round(val, 4) for val in row]
for row in expected_x_cost_mat]
x_cost_mat = [[round(val, 4) for val in row] for row in x_cost_mat]
self.assertEqual(expected_x_cost_mat, x_cost_mat)
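The basic expected matrices above (where every host reports each configured metric) all follow one rule: a host's cost is the weighted sum of its metric values, negated and normalized by the largest absolute sum across hosts, then repeated once per instance column. A minimal sketch of that rule (the standalone helper name `metrics_cost_row` is hypothetical, not part of the module under test):

```python
def metrics_cost_row(hosts_metrics, weight_setting):
    # hosts_metrics: one dict of metric values per host,
    #                e.g. [{'foo': 512, 'bar': 1}, ...]
    # weight_setting: parsed (name, ratio) pairs
    sums = [sum(metrics[name] * ratio for name, ratio in weight_setting)
            for metrics in hosts_metrics]
    norm = max(abs(s) for s in sums) or 1.0  # guard against all-zero sums
    return [-s / norm for s in sums]
```

With `foo=1` and hosts at 512/1024/3072/8192 this yields the -0.0625/-0.125/-0.375/-1.0 column used above; a negative ratio simply flips the signs, as in the `foo=-1` test.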
def _check_parsing_result(self, cost, setting, results):
self.flags(weight_setting=setting, group='metrics')
cost._parse_setting()
self.assertTrue(len(results) == len(cost.setting))
for item in results:
self.assertTrue(item in cost.setting)
def test_metrics_cost_parse_setting(self):
fake_cost = self.cost_classes[0]()
self._check_parsing_result(fake_cost,
['foo=1'],
[('foo', 1.0)])
self._check_parsing_result(fake_cost,
['foo=1', 'bar=-2.1'],
[('foo', 1.0), ('bar', -2.1)])
self._check_parsing_result(fake_cost,
['foo=a1', 'bar=-2.1'],
[('bar', -2.1)])
self._check_parsing_result(fake_cost,
['foo', 'bar=-2.1'],
[('bar', -2.1)])
self._check_parsing_result(fake_cost,
['=5', 'bar=-2.1'],
[('bar', -2.1)])
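The assertions above pin down the parsing rule: an entry must look like `name=ratio` with a non-empty name and a ratio that parses as a float, and anything else is silently dropped. A sketch consistent with those expectations (the standalone `parse_weight_setting` helper is hypothetical):

```python
def parse_weight_setting(setting):
    # keep only well-formed 'name=ratio' entries
    parsed = []
    for item in setting:
        name, sep, ratio = item.partition('=')
        if not sep or not name:
            continue  # no '=' at all, or empty metric name
        try:
            parsed.append((name, float(ratio)))
        except ValueError:
            continue  # ratio is not a number, e.g. 'a1'
    return parsed
```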

@@ -0,0 +1,82 @@
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Test case for solver scheduler RAM cost."""
from nova import context
from nova.openstack.common.fixture import mockpatch
from nova.scheduler.solvers import costs
from nova.scheduler.solvers.costs import ram_cost
from nova import test
from nova.tests.scheduler import solver_scheduler_fakes as fakes
class TestRamCost(test.NoDBTestCase):
def setUp(self):
super(TestRamCost, self).setUp()
self.context = context.RequestContext('fake_usr', 'fake_proj')
self.useFixture(mockpatch.Patch('nova.db.compute_node_get_all',
return_value=fakes.COMPUTE_NODES[0:5]))
self.host_manager = fakes.FakeSolverSchedulerHostManager()
self.cost_handler = costs.CostHandler()
self.cost_classes = self.cost_handler.get_matching_classes(
['nova.scheduler.solvers.costs.ram_cost.RamCost'])
def _get_all_hosts(self):
ctxt = context.get_admin_context()
return self.host_manager.get_all_host_states(ctxt)
def test_ram_cost_multiplier_1(self):
self.flags(ram_cost_multiplier=0.5, group='solver_scheduler')
self.assertEqual(0.5, ram_cost.RamCost().cost_multiplier())
def test_ram_cost_multiplier_2(self):
self.flags(ram_cost_multiplier=(-2), group='solver_scheduler')
self.assertEqual((-2), ram_cost.RamCost().cost_multiplier())
def test_get_extended_cost_matrix(self):
# the host.free_ram_mb values of these fake hosts are supposed to be:
# 512, 1024, 3072, 8192
fake_hosts = self._get_all_hosts()
# FIXME: ideally should mock get_hosts_stripping_forced_and_ignored()
fake_hosts = list(fake_hosts)
# the host order may be arbitrary; order them manually here
# for testing convenience
tmp = []
for i in range(len(fake_hosts)):
for fh in fake_hosts:
if fh.host == 'host%s' % (i + 1):
tmp.append(fh)
fake_hosts = tmp
fake_filter_properties = {
'context': self.context.elevated(),
'num_instances': 3,
'instance_type': {'memory_mb': 1024},
'instance_uuids': ['fake_uuid_%s' % x for x in range(3)]}
fake_cost = self.cost_classes[0]()
expected_x_cost_mat = [
[0.0, 0.125, 0.25, 0.375],
[-0.125, 0.0, 0.125, 0.25],
[-0.375, -0.25, -0.125, -0.0],
[-1.0, -0.875, -0.75, -0.625]]
x_cost_mat = fake_cost.get_extended_cost_matrix(fake_hosts,
fake_filter_properties)
expected_x_cost_mat = [[round(val, 4) for val in row]
for row in expected_x_cost_mat]
x_cost_mat = [[round(val, 4) for val in row] for row in x_cost_mat]
self.assertEqual(expected_x_cost_mat, x_cost_mat)

@@ -0,0 +1,64 @@
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova.scheduler.solvers.costs import utils
from nova import test
class CostUtilsTestCase(test.NoDBTestCase):
def test_normalize_cost_matrix(self):
test_matrix = [
[1, 2, 3, 4],
[5, 7, 9, 10],
[-2, -1, 0, 2]]
expected_result = [
[0.2, 0.4, 0.6, 0.8],
[1.0, 1.4, 1.8, 2.0],
[-0.4, -0.2, 0.0, 0.4]]
result = utils.normalize_cost_matrix(test_matrix)
round_values = lambda x: [round(v, 4) for v in x]
expected_result = map(round_values, expected_result)
result = map(round_values, result)
self.assertEqual(expected_result, result)
def test_normalize_cost_matrix_empty_input(self):
test_matrix = []
expected_result = []
result = utils.normalize_cost_matrix(test_matrix)
self.assertEqual(expected_result, result)
def test_normalize_cost_matrix_first_column_all_zero(self):
test_matrix = [
[0, 1, 2, 3],
[0, -1, -2, -3],
[0, 0.2, 0.4, 0.6]]
expected_result = [
[0, 1, 2, 3],
[0, -1, -2, -3],
[0, 0.2, 0.4, 0.6]]
result = utils.normalize_cost_matrix(test_matrix)
round_values = lambda x: [round(v, 4) for v in x]
expected_result = map(round_values, expected_result)
result = map(round_values, result)
self.assertEqual(expected_result, result)
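The three tests above imply how `normalize_cost_matrix` behaves: every entry is divided by the maximum value of the first column, and the matrix passes through unchanged when it is empty or that maximum is zero. A sketch of that behavior (inferred from the expected results, not copied from the module under test):

```python
def normalize_cost_matrix(cost_matrix):
    # divide everything by the max of the first column;
    # degenerate inputs pass through unchanged
    if not cost_matrix:
        return cost_matrix
    maxval = max(row[0] for row in cost_matrix)
    if maxval == 0:
        return cost_matrix
    return [[v / float(maxval) for v in row] for row in cost_matrix]
```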

@@ -1,117 +0,0 @@
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Tests for solver scheduler Active Host linearconstraint.
"""
import mock
from nova import servicegroup
from nova.tests.scheduler import fakes
from nova.tests.scheduler.solvers import test_linearconstraints as lctest
class ActiveHostConstraintTestCase(lctest.LinearConstraintsTestBase):
"""Test case for ActiveHostConstraint."""
def setUp(self):
super(ActiveHostConstraintTestCase, self).setUp()
def test_get_coefficient_vectors(self):
variables = [[1, 2],
[3, 4]]
host1 = fakes.FakeHostState('host1', 'node1',
{'service': {'disabled': False}})
host2 = fakes.FakeHostState('host2', 'node2',
{'service': {'disabled': True}})
hosts = [host1, host2]
fake_instance1_uuid = 'fake-instance1-id'
fake_instance2_uuid = 'fake-instance2-id'
instance_uuids = [fake_instance1_uuid, fake_instance2_uuid]
request_spec = {'instance_type': 'fake_type',
'instance_uuids': instance_uuids,
'num_instances': 2}
filter_properties = {'context': self.context.elevated()}
constraint_cls = self.class_map['ActiveHostConstraint'](
variables, hosts, instance_uuids, request_spec, filter_properties)
with mock.patch.object(servicegroup.API, 'service_is_up') as srv_isup:
srv_isup.return_value = True
coeff_vectors = constraint_cls.get_coefficient_vectors(variables,
hosts, instance_uuids, request_spec, filter_properties)
ref_coeff_vectors = [[0, 0],
[1, 1]]
self.assertEqual(coeff_vectors, ref_coeff_vectors)
with mock.patch.object(servicegroup.API, 'service_is_up') as srv_isup:
srv_isup.return_value = False
coeff_vectors = constraint_cls.get_coefficient_vectors(variables,
hosts, instance_uuids, request_spec, filter_properties)
ref_coeff_vectors = [[1, 1],
[1, 1]]
self.assertEqual(coeff_vectors, ref_coeff_vectors)
def test_get_variable_vectors(self):
variables = [[1, 2],
[3, 4]]
host1 = fakes.FakeHostState('host1', 'node1',
{'service': {'disabled': False}})
host2 = fakes.FakeHostState('host2', 'node2',
{'service': {'disabled': True}})
hosts = [host1, host2]
fake_instance1_uuid = 'fake-instance1-id'
fake_instance2_uuid = 'fake-instance2-id'
instance_uuids = [fake_instance1_uuid, fake_instance2_uuid]
request_spec = {'instance_type': 'fake_type',
'instance_uuids': instance_uuids,
'num_instances': 2}
filter_properties = {'context': self.context.elevated()}
constraint_cls = self.class_map['ActiveHostConstraint'](
variables, hosts, instance_uuids, request_spec, filter_properties)
variable_vectors = constraint_cls.get_variable_vectors(variables,
hosts, instance_uuids, request_spec, filter_properties)
ref_variable_vectors = [[1, 2],
[3, 4]]
self.assertEqual(variable_vectors, ref_variable_vectors)
def test_get_operations(self):
variables = [[1, 2],
[3, 4]]
host1 = fakes.FakeHostState('host1', 'node1',
{'service': {'disabled': False}})
host2 = fakes.FakeHostState('host2', 'node2',
{'service': {'disabled': True}})
hosts = [host1, host2]
fake_instance1_uuid = 'fake-instance1-id'
fake_instance2_uuid = 'fake-instance2-id'
instance_uuids = [fake_instance1_uuid, fake_instance2_uuid]
request_spec = {'instance_type': 'fake_type',
'instance_uuids': instance_uuids,
'num_instances': 2}
filter_properties = {'context': self.context.elevated()}
constraint_cls = self.class_map['ActiveHostConstraint'](
variables, hosts, instance_uuids, request_spec, filter_properties)
operations = constraint_cls.get_operations(variables,
hosts, instance_uuids, request_spec, filter_properties)
ref_operations = [(lambda x: x == 0), (lambda x: x == 0)]
self.assertEqual(len(operations), len(ref_operations))
for idx in range(len(operations)):
for n in range(4):
self.assertEqual(operations[idx](n), ref_operations[idx](n))
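Both expected coefficient matrices above come from one rule: a host contributes coefficient 0 for every instance column when it is active (service enabled and reported up) and 1 otherwise, so the `x == 0` operation rules inactive hosts out. A sketch of that rule (the helper name and the `service_is_up` callable are illustrative, standing in for `servicegroup.API.service_is_up`):

```python
def active_host_coefficients(hosts, num_instances, service_is_up):
    # 0 -> host may receive instances;
    # 1 -> the (x == 0) constraint forbids any placement on it
    coeffs = []
    for host in hosts:
        active = (not host['service']['disabled']) and service_is_up(host)
        coeffs.append([0 if active else 1] * num_instances)
    return coeffs
```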

@@ -1,210 +0,0 @@
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Tests for solver scheduler linearconstraints.
"""
from nova.tests.scheduler import fakes
from nova.tests.scheduler.solvers import test_linearconstraints as lctests
HOSTS = [fakes.FakeHostState('host1', 'node1',
{'free_disk_mb': 11 * 1024, 'total_usable_disk_gb': 13,
'free_ram_mb': 1024, 'total_usable_ram_mb': 1024,
'vcpus_total': 4, 'vcpus_used': 7,
'service': {'disabled': False}}),
fakes.FakeHostState('host2', 'node2',
{'free_disk_mb': 1024, 'total_usable_disk_gb': 13,
'free_ram_mb': 1023, 'total_usable_ram_mb': 1024,
'vcpus_total': 4, 'vcpus_used': 8,
'service': {'disabled': False}}),
]
INSTANCE_UUIDS = ['fake-instance1-uuid', 'fake-instance2-uuid']
INSTANCE_TYPES = [{'root_gb': 1, 'ephemeral_gb': 1, 'swap': 512,
'memory_mb': 1024, 'vcpus': 1},
]
REQUEST_SPECS = [{'instance_type': INSTANCE_TYPES[0],
'instance_uuids': INSTANCE_UUIDS,
'num_instances': 2}, ]
class MaxDiskAllocationConstraintTestCase(lctests.LinearConstraintsTestBase):
"""Test case for MaxDiskAllocationPerHostConstraint."""
def setUp(self):
super(MaxDiskAllocationConstraintTestCase, self).setUp()
self.variables = [[1, 2],
[3, 4]]
self.hosts = HOSTS
self.instance_uuids = INSTANCE_UUIDS
self.instance_type = INSTANCE_TYPES[0].copy()
self.request_spec = REQUEST_SPECS[0].copy()
self.filter_properties = {'context': self.context.elevated(),
'instance_type': self.instance_type}
def test_get_coefficient_vectors(self):
constraint_cls = self.class_map['MaxDiskAllocationPerHostConstraint'](
self.variables, self.hosts, self.instance_uuids,
self.request_spec, self.filter_properties)
coeff_vectors = constraint_cls.get_coefficient_vectors(self.variables,
self.hosts, self.instance_uuids, self.request_spec,
self.filter_properties)
ref_coeff_vectors = [[2560, 2560, -11264],
[2560, 2560, -1024]]
self.assertEqual(ref_coeff_vectors, coeff_vectors)
def test_get_variable_vectors(self):
constraint_cls = self.class_map['MaxDiskAllocationPerHostConstraint'](
self.variables, self.hosts, self.instance_uuids,
self.request_spec, self.filter_properties)
variable_vectors = constraint_cls.get_variable_vectors(self.variables,
self.hosts, self.instance_uuids, self.request_spec,
self.filter_properties)
ref_variable_vectors = [[1, 2, 1],
[3, 4, 1]]
self.assertEqual(variable_vectors, ref_variable_vectors)
def test_get_operations(self):
constraint_cls = self.class_map['MaxDiskAllocationPerHostConstraint'](
self.variables, self.hosts, self.instance_uuids,
self.request_spec, self.filter_properties)
operations = constraint_cls.get_operations(self.variables, self.hosts,
self.instance_uuids, self.request_spec, self.filter_properties)
ref_operations = [(lambda x: x <= 0), (lambda x: x <= 0)]
self.assertEqual(len(operations), len(ref_operations))
for idx in range(len(operations)):
for n in range(4):
self.assertEqual(operations[idx](n), ref_operations[idx](n))
class MaxRamAllocationConstraintTestCase(lctests.LinearConstraintsTestBase):
"""Test case for MaxRamAllocationPerHostConstraint."""
def setUp(self):
super(MaxRamAllocationConstraintTestCase, self).setUp()
self.flags(ram_allocation_ratio=1.0)
self.variables = [[1, 2],
[3, 4]]
self.hosts = HOSTS
self.instance_uuids = INSTANCE_UUIDS
self.instance_type = INSTANCE_TYPES[0].copy()
self.request_spec = REQUEST_SPECS[0].copy()
self.filter_properties = {'context': self.context.elevated(),
'instance_type': self.instance_type}
def test_get_coefficient_vectors(self):
constraint_cls = self.class_map['MaxRamAllocationPerHostConstraint'](
self.variables, self.hosts, self.instance_uuids,
self.request_spec, self.filter_properties)
coeff_vectors = constraint_cls.get_coefficient_vectors(self.variables,
self.hosts, self.instance_uuids, self.request_spec,
self.filter_properties)
ref_coeff_vectors = [[1024, 1024, -1024],
[1024, 1024, -1023]]
self.assertEqual(ref_coeff_vectors, coeff_vectors)
def test_get_variable_vectors(self):
constraint_cls = self.class_map['MaxRamAllocationPerHostConstraint'](
self.variables, self.hosts, self.instance_uuids,
self.request_spec, self.filter_properties)
variable_vectors = constraint_cls.get_variable_vectors(self.variables,
self.hosts, self.instance_uuids, self.request_spec,
self.filter_properties)
ref_variable_vectors = [[1, 2, 1],
[3, 4, 1]]
self.assertEqual(variable_vectors, ref_variable_vectors)
def test_get_operations(self):
constraint_cls = self.class_map['MaxRamAllocationPerHostConstraint'](
self.variables, self.hosts, self.instance_uuids,
self.request_spec, self.filter_properties)
operations = constraint_cls.get_operations(self.variables, self.hosts,
self.instance_uuids, self.request_spec, self.filter_properties)
ref_operations = [(lambda x: x <= 0), (lambda x: x <= 0)]
self.assertEqual(len(operations), len(ref_operations))
for idx in range(len(operations)):
for n in range(4):
self.assertEqual(operations[idx](n), ref_operations[idx](n))
class MaxVcpuAllocationConstraintTestCase(lctests.LinearConstraintsTestBase):
"""Test case for MaxVcpuAllocationPerHostConstraint."""
def setUp(self):
super(MaxVcpuAllocationConstraintTestCase, self).setUp()
self.flags(cpu_allocation_ratio=2)
self.variables = [[1, 2],
[3, 4]]
self.hosts = HOSTS
self.instance_uuids = INSTANCE_UUIDS
self.instance_type = INSTANCE_TYPES[0].copy()
self.request_spec = REQUEST_SPECS[0].copy()
self.filter_properties = {'context': self.context.elevated(),
'instance_type': self.instance_type}
def test_get_coefficient_vectors(self):
constraint_cls = self.class_map['MaxVcpuAllocationPerHostConstraint'](
self.variables, self.hosts, self.instance_uuids,
self.request_spec, self.filter_properties)
coeff_vectors = constraint_cls.get_coefficient_vectors(self.variables,
self.hosts, self.instance_uuids, self.request_spec,
self.filter_properties)
ref_coeff_vectors = [[1, 1, -1],
[1, 1, 0]]
self.assertEqual(ref_coeff_vectors, coeff_vectors)
def test_get_variable_vectors(self):
constraint_cls = self.class_map['MaxVcpuAllocationPerHostConstraint'](
self.variables, self.hosts, self.instance_uuids,
self.request_spec, self.filter_properties)
variable_vectors = constraint_cls.get_variable_vectors(self.variables,
self.hosts, self.instance_uuids, self.request_spec,
self.filter_properties)
ref_variable_vectors = [[1, 2, 1],
[3, 4, 1]]
self.assertEqual(variable_vectors, ref_variable_vectors)
def test_get_operations(self):
constraint_cls = self.class_map['MaxVcpuAllocationPerHostConstraint'](
self.variables, self.hosts, self.instance_uuids,
self.request_spec, self.filter_properties)
operations = constraint_cls.get_operations(self.variables, self.hosts,
self.instance_uuids, self.request_spec, self.filter_properties)
ref_operations = [(lambda x: x <= 0), (lambda x: x <= 0)]
self.assertEqual(len(operations), len(ref_operations))
for idx in range(len(operations)):
for n in range(4):
self.assertEqual(operations[idx](n), ref_operations[idx](n))
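For the disk constraint, the expected coefficients decompose cleanly: each instance column carries the per-instance requested disk in MB, `(root_gb + ephemeral_gb) * 1024 + swap = 2560`, while an extra trailing column carries `-free_disk_mb`, so the `<= 0` operation encodes `num_placed * requested <= free_disk`. A sketch (hypothetical helper; the RAM and vCPU variants above follow the same shape with their own resource values):

```python
def max_disk_coefficients(hosts, instance_type, num_instances):
    # per-instance requested disk in MB
    requested = ((instance_type['root_gb'] + instance_type['ephemeral_gb'])
                 * 1024 + instance_type['swap'])
    # the trailing -free_disk_mb column turns the (<= 0) operation into
    # num_placed * requested <= free_disk_mb
    return [[requested] * num_instances + [-host['free_disk_mb']]
            for host in hosts]
```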

@@ -1,171 +0,0 @@
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Tests for solver scheduler Availability Zone linearconstraint.
"""
import mock
from nova import db
from nova.tests.scheduler import fakes
from nova.tests.scheduler.solvers import test_linearconstraints as lctests
HOSTS = [fakes.FakeHostState('host1', 'node1',
{'free_disk_mb': 11 * 1024, 'total_usable_disk_gb': 13,
'free_ram_mb': 1024, 'total_usable_ram_mb': 1024,
'vcpus_total': 4, 'vcpus_used': 7,
'service': {'disabled': False}}),
fakes.FakeHostState('host2', 'node2',
{'free_disk_mb': 1024, 'total_usable_disk_gb': 13,
'free_ram_mb': 1023, 'total_usable_ram_mb': 1024,
'vcpus_total': 4, 'vcpus_used': 8,
'service': {'disabled': False}}),
]
INSTANCE_UUIDS = ['fake-instance1-uuid', 'fake-instance2-uuid']
INSTANCE_TYPES = [{'root_gb': 1, 'ephemeral_gb': 1, 'swap': 512,
'memory_mb': 1024, 'vcpus': 1},
]
REQUEST_SPECS = [{'instance_type': INSTANCE_TYPES[0],
'instance_uuids': INSTANCE_UUIDS,
'instance_properties': {'availability_zone': 'zone1'},
'num_instances': 2},
{'instance_type': INSTANCE_TYPES[0],
'instance_uuids': INSTANCE_UUIDS,
'instance_properties': {'availability_zone': 'default'},
'num_instances': 2},
{'instance_type': INSTANCE_TYPES[0],
'instance_uuids': INSTANCE_UUIDS,
'instance_properties': {'availability_zone': 'unknown'},
'num_instances': 2},
{'instance_type': INSTANCE_TYPES[0],
'instance_uuids': INSTANCE_UUIDS,
'num_instances': 2},
]
class AvailabilityZoneConstraintTestCase(lctests.LinearConstraintsTestBase):
"""Test case for AvailabilityZoneConstraint."""
def setUp(self):
super(AvailabilityZoneConstraintTestCase, self).setUp()
self.variables = [[1, 2],
[3, 4]]
self.hosts = HOSTS
self.instance_uuids = INSTANCE_UUIDS
self.instance_type = INSTANCE_TYPES[0].copy()
self.request_spec = REQUEST_SPECS[0].copy()
self.filter_properties = {'context': self.context.elevated(),
'instance_type': self.instance_type}
def test_get_coefficient_vectors_in_metadata(self):
def dbget_side_effect(*args, **kwargs):
if args[1] == 'host1':
return {'availability_zone': 'zone1'}
else:
return {}
constraint_cls = self.class_map['AvailabilityZoneConstraint'](
self.variables, self.hosts, self.instance_uuids,
self.request_spec, self.filter_properties)
with mock.patch.object(db, 'aggregate_metadata_get_by_host') as dbget:
dbget.side_effect = dbget_side_effect
coeff_vectors = constraint_cls.get_coefficient_vectors(
self.variables, self.hosts,
self.instance_uuids, self.request_spec,
self.filter_properties)
ref_coeff_vectors = [[0, 0],
[1, 1]]
self.assertEqual(ref_coeff_vectors, coeff_vectors)
def test_get_coefficient_vectors_in_default_zone(self):
self.request_spec = REQUEST_SPECS[1].copy()
self.flags(default_availability_zone='default')
constraint_cls = self.class_map['AvailabilityZoneConstraint'](
self.variables, self.hosts, self.instance_uuids,
self.request_spec, self.filter_properties)
with mock.patch.object(db, 'aggregate_metadata_get_by_host') as dbget:
dbget.return_value = {}
coeff_vectors = constraint_cls.get_coefficient_vectors(
self.variables, self.hosts,
self.instance_uuids, self.request_spec,
self.filter_properties)
ref_coeff_vectors = [[0, 0],
[0, 0]]
self.assertEqual(ref_coeff_vectors, coeff_vectors)
def test_get_coefficient_vectors_in_non_default_zone(self):
self.request_spec = REQUEST_SPECS[2].copy()
self.flags(default_availability_zone='default')
constraint_cls = self.class_map['AvailabilityZoneConstraint'](
self.variables, self.hosts, self.instance_uuids,
self.request_spec, self.filter_properties)
with mock.patch.object(db, 'aggregate_metadata_get_by_host') as dbget:
dbget.return_value = {}
coeff_vectors = constraint_cls.get_coefficient_vectors(
self.variables, self.hosts,
self.instance_uuids, self.request_spec,
self.filter_properties)
ref_coeff_vectors = [[1, 1],
[1, 1]]
self.assertEqual(ref_coeff_vectors, coeff_vectors)
def test_get_coefficient_vectors_missing_zone(self):
self.request_spec = REQUEST_SPECS[3].copy()
constraint_cls = self.class_map['AvailabilityZoneConstraint'](
self.variables, self.hosts, self.instance_uuids,
self.request_spec, self.filter_properties)
coeff_vectors = constraint_cls.get_coefficient_vectors(self.variables,
self.hosts, self.instance_uuids,
self.request_spec,
self.filter_properties)
ref_coeff_vectors = [[0, 0],
[0, 0]]
self.assertEqual(ref_coeff_vectors, coeff_vectors)
def test_get_variable_vectors(self):
constraint_cls = self.class_map['AvailabilityZoneConstraint'](
self.variables, self.hosts, self.instance_uuids,
self.request_spec, self.filter_properties)
variable_vectors = constraint_cls.get_variable_vectors(self.variables,
self.hosts, self.instance_uuids, self.request_spec,
self.filter_properties)
ref_variable_vectors = [[1, 2],
[3, 4]]
self.assertEqual(variable_vectors, ref_variable_vectors)
def test_get_operations(self):
constraint_cls = self.class_map['AvailabilityZoneConstraint'](
self.variables, self.hosts, self.instance_uuids,
self.request_spec, self.filter_properties)
operations = constraint_cls.get_operations(self.variables, self.hosts,
self.instance_uuids, self.request_spec, self.filter_properties)
ref_operations = [(lambda x: x == 0), (lambda x: x == 0)]
self.assertEqual(len(operations), len(ref_operations))
for idx in range(len(operations)):
for n in range(4):
self.assertEqual(operations[idx](n), ref_operations[idx](n))
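The four coefficient cases above reduce to: with no availability zone in the request every host is allowed (coefficient 0); otherwise a host is allowed only when its aggregate metadata zone, falling back to the configured default zone when the metadata is missing, matches the requested zone. A sketch of that decision (hypothetical helper; the real constraint looks zones up via `db.aggregate_metadata_get_by_host`):

```python
def availability_zone_coefficients(hosts_metadata, request_spec,
                                   num_instances, default_az='default'):
    # 0 -> host is in the requested zone (or no zone was requested);
    # 1 -> the (x == 0) constraint excludes the host
    props = request_spec.get('instance_properties', {})
    requested = props.get('availability_zone')
    coeffs = []
    for metadata in hosts_metadata:
        allowed = (requested is None or
                   metadata.get('availability_zone', default_az) == requested)
        coeffs.append([0 if allowed else 1] * num_instances)
    return coeffs
```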