diff --git a/README.md b/README.md index 8bda905..03cf691 100644 --- a/README.md +++ b/README.md @@ -1,398 +1,129 @@ -Openstack Nova Solver Scheduler =============================== +# Openstack Nova Solver Scheduler -Solver Scheduler is an Openstack Nova Scheduler driver that provides a smarter, complex constraints optimization based resource scheduling in Nova. It is a pluggable scheduler driver, that can leverage existing complex constraint solvers available in open source such as PULP, CVXOPT, Google OR-TOOLS, etc. It can be easily extended to add complex constraint models for various use cases, written using any of the available open source constraint solving frameworks. +## Overview -Key modules ------------ +Solver Scheduler is an OpenStack Nova scheduler driver that provides smart, efficient, optimization-based compute resource scheduling in Nova. It is a pluggable scheduler driver that can leverage existing open-source constraint solvers such as PULP, CVXOPT, and Google OR-TOOLS. It can be easily extended to add complex constraint models for various use cases, and to solve complex scheduling problems with pluggable solving frameworks. -* The new scheduler driver module: +## From filter-scheduler to solver-scheduler - nova/scheduler/solver_scheduler.py +The nova-solver-scheduler works in a way similar to Nova's default filter-scheduler, but is designed around the following 3-layer pluggable architecture, compared with the filter-scheduler's 2-layer architecture. -* The code includes a reference implementation of a solver that models the scheduling problem as a Linear Programming model, written using the PULP LP modeling language. It uses a PULP_CBC_CMD, which is a packaged constraint solver, included in the coinor-pulp python package. +* Filter-scheduler architecture + - Scheduler driver: FilterScheduler. This is a driver that implements the filter-based scheduling functionality.
It is plugged into the Nova scheduler service and configured by the second-layer plug-ins, known as weighers and filters. + - Configurable plug-ins: Weights/Filters. These are the configuration units that define the behaviour of the filter-scheduler: filters decide which hosts are eligible for a user-requested instance, and weights are then used to sort the filtered hosts to find the best one on which to place the instance. - nova/scheduler/solvers/hosts_pulp_solver.py +* Solver-scheduler architecture + - Scheduler driver: SolverScheduler. This is a driver that implements constraint-optimization-based scheduling. It sits in parallel with FilterScheduler and can be used as an (advanced) alternative. It uses pluggable optimization solver modules to solve scheduling problems. + - Solver drivers. The solver drivers are pluggable optimization solvers that solve scheduling problems defined by the lower-layer plug-ins, which are costs/constraints. The pluggable solvers provide more flexibility for complex scheduling scenarios, as well as the ability to produce more optimal scheduling solutions. + - Configurable plug-ins: Costs/Constraints. These are similar to weights/filters in the filter-scheduler. The constraints define hard restrictions on hosts that cannot be violated, and the costs define soft objectives that the scheduler should try to achieve. The scheduler solver gives an overall optimized solution for the placement of all requested instances in each single instance-creation request. The costs/constraints can be plugged in and configured in a similar way to weights/filters in Nova's default filter scheduler. -* The pluggable solvers using coinor-pulp package, where costs functions and linear constraints can be plugged into the solver.
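The hard-constraint/soft-cost split described above can be illustrated with a small, self-contained sketch. All names here are hypothetical and purely illustrative; this is not the actual nova-solver-scheduler API:

```python
# Illustrative sketch of the costs/constraints idea: constraints are hard
# restrictions that filter hosts, costs are weighted soft objectives.
# Names are hypothetical; this is NOT the nova-solver-scheduler API.

def feasible(host, constraints):
    # A host must satisfy every hard constraint to be a candidate.
    return all(check(host) for check in constraints)

def place(hosts, constraints, costs):
    # Among feasible hosts, pick the one with the lowest weighted cost sum
    # (each cost has a multiplier, as in the scheduler configuration).
    candidates = [h for h in hosts if feasible(h, constraints)]
    return min(candidates, key=lambda h: sum(m * c(h) for m, c in costs))

hosts = [
    {"name": "node1", "active": True, "free_ram_mb": 2048},
    {"name": "node2", "active": True, "free_ram_mb": 8192},
    {"name": "node3", "active": False, "free_ram_mb": 16384},
]
# An ActiveHostsConstraint-like hard restriction:
constraints = [lambda h: h["active"]]
# A RamCost-like soft objective; the negative sign spreads RAM usage:
costs = [(1.0, lambda h: -h["free_ram_mb"])]

print(place(hosts, constraints, costs)["name"])  # node2: active host with most free RAM
```

Note that the real solver optimizes the placement of all instances in a request jointly rather than picking hosts one by one; this sketch only conveys how constraints and weighted costs divide the work.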
- - nova/scheduler/solvers/pluggable_hosts_pulp_solver.py - -Additional modules ------------------- - -* The cost functions pluggable to solver: - - nova/scheduler/solvers/costs/ram_cost.py - nova/scheduler/solvers/costs/volume_affinity_cost.py - -* The linear constraints that are pluggable to solver: - - nova/scheduler/solvers/linearconstraints/active_host_constraint.py - nova/scheduler/solvers/linearconstraints/all_hosts_constraint.py - nova/scheduler/solvers/linearconstraints/availability_zone_constraint.py - nova/scheduler/solvers/linearconstraints/different_host_constraint.py - nova/scheduler/solvers/linearconstraints/same_host_constraint.py - nova/scheduler/solvers/linearconstraints/io_ops_constraint.py - nova/scheduler/solvers/linearconstraints/max_disk_allocation_constraint.py - nova/scheduler/solvers/linearconstraints/max_ram_allocation_constraint.py - nova/scheduler/solvers/linearconstraints/max_vcpu_allocation_constraint.py - nova/scheduler/solvers/linearconstraints/max_instances_per_host_constraint.py - nova/scheduler/solvers/linearconstraints/non_trivial_solution_constraint.py - -Requirements ------------- - -* coinor.pulp>=1.0.4 - -Installation ------------- - -We provide 2 ways to install the solver-scheduler code. In this section, we will guide you through installing the solver scheduler with the minimum configuration. For instructions of configuring a fully functional solver-scheduler, please check out the next sections. - -* **Note:** - - - Make sure you have an existing installation of **Openstack Icehouse**. - - - The automatic installation scripts are of **alpha** version, which was tested on **Ubuntu 14.04** and **OpenStack Icehouse** only. - - - We recommend that you Do backup at least the following files before installation, because they are to be overwritten or modified: - $NOVA_CONFIG_PARENT_DIR/nova.conf - $NOVA_PARENT_DIR/nova/volume/cinder.py - $NOVA_PARENT_DIR/nova/tests/scheduler/fakes.py - (replace the $... 
with actual directory names.) - -* **Prerequisites** - - Please install the python package: coinor.pulp >= 1.0.4 - -* **Manual Installation** - - - Make sure you have performed backups properly. - - - Clone the repository to your local host where nova-scheduler is run. - - - Navigate to the local repository and copy the contents in 'nova' sub-directory to the corresponding places in existing nova, e.g. - ```cp -r $LOCAL_REPOSITORY_DIR/nova $NOVA_PARENT_DIR``` - (replace the $... with actual directory name.) - - - Update the nova configuration file (e.g. /etc/nova/nova.conf) with the minimum option below. If the option already exists, modify its value, otherwise add it to the config file. Check the "Configurations" section below for a full configuration guide. - ``` - [DEFAULT] - ... - scheduler_driver=nova.scheduler.solver_scheduler.ConstraintSolverScheduler - ``` - - - Restart the nova scheduler. - ```service nova-scheduler restart``` - - - Done. The nova-solver-scheduler should be working with a demo configuration. - -* **Automatic Installation** - - - Make sure you have performed backups properly. - - - Clone the repository to your local host where nova-scheduler is run. - - - Navigate to the installation directory and run installation script. - ``` - cd $LOCAL_REPOSITORY_DIR/installation - sudo bash ./install.sh - ``` - (replace the $... with actual directory name.) - - - Done. The installation code should setup the solver-scheduler with the minimum configuration below. Check the "Configurations" section for a full configuration guide. - ``` - [DEFAULT] - ... - scheduler_driver=nova.scheduler.solver_scheduler.ConstraintSolverScheduler - ``` - -* **Uninstallation** - - - If you need to switch to other scheduler, simply open the nova configuration file and edit the option ```scheduler_driver``` (the default: ```scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler```), then restart nova-scheduler. 
- - - There is no need to remove the solver-scheduler code after installation since it is supposed to be fully compatible with the default nova scheduler. However, in case you really want to restore the code, you can either do it manually with your backup, or use the uninstallation script provided in the installation directory. - ``` - cd $LOCAL_REPOSITORY_DIR/installation - sudo bash ./uninstall.sh - ``` - (replace the $... with actual directory name.) - - - Please remember to restart the nova-scheduler service everytime when changes are made in the code or config file. - -* **Troubleshooting** - - In case the automatic installation/uninstallation process is not complete, please check the followings: - - - Make sure your OpenStack version is Icehouse. - - - Check the variables in the beginning of the install.sh/uninstall.sh scripts. Your installation directories may be different from the default values we provide. - - - The installation code will automatically backup the related codes to: - $NOVA_PARENT_DIR/nova/.solver-scheduler-installation-backup - Please do not make changes to the backup if you do not have to. If you encounter problems during installation, you can always find the backup files in this directory. - - - The automatic uninstallation script can only work when you used automatic installation beforehand. If you installed manually, please also uninstall manually (though there is no need to actually "uninstall"). - - - In case the automatic installation does not work, try to install manually. - -Configurations --------------- - -* This is a (default) configuration sample for the solver-scheduler. Please add/modify these options in /etc/nova/nova.conf. -* Note: - - Please carefully make sure that options in the configuration file are not duplicated. If an option name already exists, modify its value instead of adding a new one of the same name. 
- - The solver class 'nova.scheduler.solvers.hosts_pulp_solver.HostsPulpSolver' is used by default in installation for demo purpose. It is self-inclusive and non-pluggable for costs and constraints. Please switch to 'nova.scheduler.solvers.pluggable_hosts_pulp_solver.HostsPulpSolver' for a fully functional (pluggable) solver. - - Please refer to the 'Configuration Details' section below for proper configuration and usage of costs and constraints. +While the solver scheduler with costs/constraints appears to the user to provide similar functionality to the filter scheduler with weights/filters, the solver scheduler can be more efficient for large-scale, high-volume scheduling problems (e.g. placing 500 instances on a 1000-node cluster in a single request), and its scheduling solutions are also closer to optimal than those of the filter scheduler, thanks to its more flexible design. + +## Basic configurations +In order to enable nova-solver-scheduler, add the following minimal configuration to the "[DEFAULT]" section of nova.conf. Please overwrite the config options' values if the option keys already exist in the configuration file. ``` [DEFAULT] - -... -# -# Options defined in nova.scheduler.manager -# - -# Default driver to use for the scheduler (string value) +... (other options) scheduler_driver=nova.scheduler.solver_scheduler.ConstraintSolverScheduler +scheduler_host_manager=nova.scheduler.solver_scheduler_host_manager.SolverSchedulerHostManager +``` -# -# Options defined in nova.scheduler.filters.core_filter -# - -# Virtual CPU to physical CPU allocation ratio which affects -# all CPU filters. This configuration specifies a global ratio -# for CoreFilter. For AggregateCoreFilter, it will fall back -# to this configuration value if no per-aggregate setting -# found.
This option is also used in Solver Scheduler for the -# MaxVcpuAllocationPerHostConstraint (floating point value) -cpu_allocation_ratio=16.0 - -# -# Options defined in nova.scheduler.filters.disk_filter -# - -# Virtual disk to physical disk allocation ratio (floating -# point value) -disk_allocation_ratio=1.0 - -# -# Options defined in nova.scheduler.filters.num_instances_filter -# - -# Ignore hosts that have too many instances (integer value) -max_instances_per_host=50 - -# -# Options defined in nova.scheduler.filters.io_ops_filter -# - -# Ignore hosts that have too many -# builds/resizes/snaps/migrations. (integer value) -max_io_ops_per_host=8 - -# -# Options defined in nova.scheduler.filters.ram_filter -# - -# Virtual ram to physical ram allocation ratio which affects -# all ram filters. This configuration specifies a global ratio -# for RamFilter. For AggregateRamFilter, it will fall back to -# this configuration value if no per-aggregate setting found. -# (floating point value) -ram_allocation_ratio=1.5 - -# -# Options defined in nova.scheduler.weights.ram -# - -# Multiplier used for weighing ram. Negative numbers mean to -# stack vs spread. (floating point value) -ram_weight_multiplier=1.0 - -# -# Options defined in nova.volume.cinder -# - -# Keystone Cinder account username (string value) -cinder_admin_user= - -# Keystone Cinder account password (string value) -cinder_admin_password= - -# Keystone Cinder account tenant name (string value) -cinder_admin_tenant_name=service - -# Complete public Identity API endpoint (string value) -cinder_auth_uri= +## Solvers +We provide two solvers that can be plugged in to solve scheduling problems while satisfying all of the configured constraints: FastSolver (the default) and PulpSolver. The FastSolver runs a fast algorithm that can solve large-scale scheduling requests efficiently while giving optimal solutions.
The PulpSolver translates the scheduling problem into a standard LP (Linear Programming) problem and invokes a third-party LP solver interface (coinor.pulp >= 1.0.4) to solve it. While PulpSolver may be more flexible for complex constraints that could arise in the future, it is slower than FastSolver, especially on large-scale scheduling problems. +We recommend using FastSolver at the current stage, as it covers all currently known constraint requirements and scales better. +Specify which solver to use with the following option in the "[solver_scheduler]" section of the nova config. Please add a "[solver_scheduler]" section header if it does not already exist in the nova config file. +``` [solver_scheduler] +scheduler_host_solver=nova.scheduler.solvers.fast_solver.FastSolver +``` -# -# Options defined in nova.scheduler.solver_scheduler -# + +## Costs and Constraints -# The pluggable solver implementation to use. By default, a -# reference solver implementation is included that models the -# problem as a Linear Programming (LP) problem using PULP. -# To use a fully functional (pluggable) solver, set the option as -# "nova.scheduler.solvers.pluggable_hosts_pulp_solver.HostsPulpSolver" -# (string value) -scheduler_host_solver=nova.scheduler.solvers.pluggable_hosts_pulp_solver.HostsPulpSolver +### Configuring which costs/constraints to use +The solver-scheduler uses "costs" and "constraints" to configure the behaviour of the scheduler. They can be set in a similar way to "weights" and "filters" in the filter-scheduler. -# -# Options defined in nova.scheduler.solvers -# - -# Which constraints to use in scheduler solver (list value) -scheduler_solver_constraints=ActiveHostConstraint, NonTrivialSolutionConstraint - -# Assign weight for each cost (list value) -scheduler_solver_cost_weights=RamCost:1.0 - -# Which cost matrices to use in the scheduler solver.
-# (list value) -scheduler_solver_costs=RamCost - +Here is an example of setting which "costs" to use. Please put these options in the "[solver_scheduler]" section of the nova config; each "cost" has an associated multiplier that specifies its weight in the scheduler's decision: ``` - -Configuration Details --------------------- - -* Available costs - - - **RamCost** - Help to balance (or stack) ram usage of hosts. - The following option should be set in configuration when using this cost: - ``` - ram_weight_multiplier = - ram_allocation_ratio = - ``` - set the multiplier to negative number for balanced ram usage, - set the multiplier to positive number for stacked ram usage. - - - **VolumeAffinityCost** - Help to place instances at the same host as a specific volume, if possible. - In order to use this cost, you need to pass a hint to the scheduler on booting a server. - ```nova boot ... --hint affinity_volume_id= ...``` - You also need to have the following options set in the '[DEFAULT]' section of the configuration. - ``` - cinder_admin_user= - cinder_admin_password= - cinder_admin_tenant_name= - cinder_auth_uri= - ``` - -* Available linear constraints - - - **ActiveHostConstraint** - By enabling this constraint, only enabled and operational hosts are allowed to be selected. - Normally this constraint should always be enabled. - - - **NonTrivialSolutionConstraint** - The purpose of this constraint is to avoid trivial solution (i.e. instances placed nowhere). - Normally this constraint should always be enabled. - - - **MaxRamAllocationPerHostConstraint** - Cap the virtual ram allocation of hosts. - The following option should be set in configuration when using this constraint: - ```ram_allocation_ratio = ``` (virtual-to-physical ram allocation ratio, if >1.0 then over-allocation is allowed.) - - - **MaxDiskAllocationPerHostConstraint** - Cap the virtual disk allocation of hosts.
- The following option should be set in configuration when using this constraint: - ```disk_allocation_ratio = ``` (virtual-to-physical disk allocation ratio, if >1.0 then over-allocation is allowed.) - - - **MaxVcpuAllocationPerHostConstraint** - Cap the vcpu allocation of hosts. - The following option should be set in configuration when using this constraint: - ```cpu_allocation_ratio = ``` (virtual-to-physical cpu allocation ratio, if >1.0 then over-allocation is allowed.) - - - **NumInstancesPerHostConstraint** - Specify the maximum number of instances that can be placed in each host. - The following option is expected in the configuration: - ```max_instances_per_host = ``` - - - **DifferentHostConstraint** - Force instances to be placed at different hosts as specified instance(s). - The following scheduler hint is expected when using this constraint: - ```different_host = ``` - - - **SameHostConstraint** - Force instances to be placed at same hosts as specified instance(s). - The following scheduler hint is expected when using this constraint: - ```same_host = ``` - - - **AvailablilityZoneConstraint** - Select hosts belongong to an availability zone. - The following option should be set in configuration when using this constraint: - ```default_availability_zone = ``` - - - **IoOpsConstraint** - Ensure the concurrent I/O operations number of selected hosts are within a threshold. - The following option should be set in configuration when using this constraint: - ```max_io_ops_per_host = ``` - -Examples --------- - -This is an example usage for creating VMs with volume affinity using the solver scheduler. - -* Install the solver scheduler. - -* Update the nova.conf with following options: - -``` -[DEFAULT] -... 
-# Default driver to use for the scheduler -scheduler_driver = nova.scheduler.solver_scheduler.ConstraintSolverScheduler - -# Virtual-to-physical disk allocation ratio -disk_allocation_ratio = 1.0 - -# Virtual-to-physical ram allocation ratio -ram_allocation_ratio = 1.5 - -# Keystone Cinder account username (string value) -cinder_admin_user= - -# Keystone Cinder account password (string value) -cinder_admin_password= - -# Keystone Cinder account tenant name (string value) -cinder_admin_tenant_name= - -# Complete public Identity API endpoint (string value) -cinder_auth_uri= - - [solver_scheduler] +... (other options) +scheduler_solver_costs=RamCost,AffinityCost,AntiAffinityCost +ram_cost_multiplier=1.0 +affinity_cost_multiplier=2.0 +anti_affinity_cost_multiplier=0.5 +``` -# Default solver to use for the solver scheduler -scheduler_host_solver = nova.scheduler.solvers.pluggable_hosts_pulp_solver.HostsPulpSolver +**Notes** +Tips about the cost multipliers' values: -# Cost functions to use in the linear solver -scheduler_solver_costs = VolumeAffinityCost - -# Weight of each cost (every cost function used should be given a weight.) -scheduler_solver_cost_weights = VolumeAffinityCost:1.0 - -# Constraints used in the solver -scheduler_solver_constraints = ActiveHostConstraint, NonTrivialSolutionConstraint, MaxDiskAllocationPerHostConstraint, MaxRamAllocationPerHostConstraint +Cost class | Multiplier +---------- | ---------- +RamCost | \> 0: the scheduler will tend to balance the usage of RAM. The higher the value, the more weight the cost will get in the scheduler's decision.
\< 0: the scheduler will tend to stack the RAM usage. The higher the *absolute* value, the more weight the cost will get in the scheduler's decision.
= 0: the cost will be ignored. +MetricsCost | \> 0: The higher the value, the more weight the cost will get in the scheduler's decision.
\< 0: Not recommended; the cost might not be meaningful.
= 0: the cost will be ignored. +The following is an example of how to set which "constraints" are used by the solver-scheduler. ``` +[solver_scheduler] +... (other options) +scheduler_solver_constraints=ActiveHostsConstraint,RamConstraint,NumInstancesConstraint +``` -* Restart nova-scheduler and then do the followings: +In the following section we will discuss the cost and constraint classes in detail. -* Create multiple volumes at different hosts +### Transition from filter-scheduler to solver-scheduler -* Run the following command to boot a new instance. (The id of a volume you want to use should be provided as scheduler hint.) -``` -nova boot --image= --flavor= --hint affinity_volume_id= -``` +The table below lists supported constraints and their counterparts in the filter scheduler. Apart from the option names shown above, these costs and constraints are used in the same way as the weights/filters in the filter scheduler. Please refer to [OpenStack Configuration Reference](http://docs.openstack.org/icehouse/config-reference/content/section_compute-scheduler.html) for a detailed explanation of the available weights and filters and their usage instructions. -* The instance should be created at the same host as the chosen volume as long as the host is active and has enough resources. 
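Putting the configuration options above together, the minimal nova.conf described in this README can be generated (or sanity-checked) with a short stdlib script. The target filename below is illustrative; a real deployment edits /etc/nova/nova.conf directly:

```python
# Write the minimal solver-scheduler configuration shown in this README.
# The target filename is illustrative; real deployments edit /etc/nova/nova.conf.
import configparser

config = configparser.ConfigParser()
config["DEFAULT"] = {
    "scheduler_driver":
        "nova.scheduler.solver_scheduler.ConstraintSolverScheduler",
    "scheduler_host_manager":
        "nova.scheduler.solver_scheduler_host_manager.SolverSchedulerHostManager",
}
config["solver_scheduler"] = {
    "scheduler_host_solver": "nova.scheduler.solvers.fast_solver.FastSolver",
    "scheduler_solver_costs": "RamCost",
    "ram_cost_multiplier": "1.0",
    "scheduler_solver_constraints":
        "ActiveHostsConstraint,RamConstraint,NumInstancesConstraint",
}

with open("nova.conf.demo", "w") as f:
    config.write(f)
```

Remember to restart the nova-scheduler service after any configuration change.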
+Weight | Cost +------ | ---- +MetricsWeigher | MetricsCost +RAMWeigher | RamCost + +Filter | Constraint +------ | ---------- +AggregateCoreFilter | AggregateVcpuConstraint +AggregateDiskFilter | AggregateDiskConstraint +AggregateImagePropertiesIsolation | AggregateImagePropertiesIsolationConstraint +AggregateInstanceExtraSpecsFilter | AggregateInstanceExtraSpecsConstraint +AggregateMultiTenancyIsolation | AggregateMultiTenancyIsolationConstraint +AggregateRamFilter | AggregateRamConstraint +AggregateTypeAffinityFilter | AggregateTypeAffinityConstraint +AllHostsFilter | NoConstraint +AvailabilityZoneFilter | AvailabilityZoneConstraint +ComputeCapabilitiesFilter | ComputeCapabilitiesConstraint +ComputeFilter | ActiveHostsConstraint +CoreFilter | VcpuConstraint +DifferentHostFilter | DifferentHostConstraint +DiskFilter | DiskConstraint +ImagePropertiesFilter | ImagePropertiesConstraint +IsolatedHostsFilter | IsolatedHostsConstraint +IoOpsFilter | IoOpsConstraint +JsonFilter | JsonConstraint +MetricsFilter | MetricsConstraint +NumInstancesFilter | NumInstancesConstraint +PciPassthroughFilter | PciPassthroughConstraint +RamFilter | RamConstraint +RetryFilter | RetryConstraint +SameHostFilter | SameHostConstraint +ServerGroupAffinityFilter | ServerGroupAffinityConstraint +ServerGroupAntiAffinityFilter | ServerGroupAntiAffinityConstraint +SimpleCIDRAffinityFilter | SimpleCidrAffinityConstraint +TrustedFilter | TrustedHostsConstraint +TypeAffinityFilter | TypeAffinityConstraint + +**Notes** +Some of the above constraints directly invoke their filter counterparts to check host availability, while others (in the following list) are implemented with improved logic that may result in more optimal placement decisions for multi-instance requests: +- 
IoOpsConstraint +- NumInstancesConstraint +- PciPassthroughConstraint +- ServerGroupAffinityConstraint +- ServerGroupAntiAffinityConstraint diff --git a/etc/nova/nova.conf.sample b/etc/nova/nova.conf.sample deleted file mode 100644 index 0cdf071..0000000 --- a/etc/nova/nova.conf.sample +++ /dev/null @@ -1,3762 +0,0 @@ -[DEFAULT] - -# -# Options defined in oslo.messaging -# - -# Use durable queues in amqp. (boolean value) -# Deprecated group/name - [DEFAULT]/rabbit_durable_queues -#amqp_durable_queues=false - -# Auto-delete queues in amqp. (boolean value) -#amqp_auto_delete=false - -# Size of RPC connection pool. (integer value) -#rpc_conn_pool_size=30 - -# Modules of exceptions that are permitted to be recreated -# upon receiving exception data from an rpc call. (list value) -#allowed_rpc_exception_modules=oslo.messaging.exceptions,nova.exception,cinder.exception,exceptions - -# Qpid broker hostname. (string value) -#qpid_hostname=localhost - -# Qpid broker port. (integer value) -#qpid_port=5672 - -# Qpid HA cluster host:port pairs. (list value) -#qpid_hosts=$qpid_hostname:$qpid_port - -# Username for Qpid connection. (string value) -#qpid_username= - -# Password for Qpid connection. (string value) -#qpid_password= - -# Space separated list of SASL mechanisms to use for auth. -# (string value) -#qpid_sasl_mechanisms= - -# Seconds between connection keepalive heartbeats. (integer -# value) -#qpid_heartbeat=60 - -# Transport to use, either 'tcp' or 'ssl'. (string value) -#qpid_protocol=tcp - -# Whether to disable the Nagle algorithm. (boolean value) -#qpid_tcp_nodelay=true - -# The qpid topology version to use. Version 1 is what was -# originally used by impl_qpid. Version 2 includes some -# backwards-incompatible changes that allow broker federation -# to work. Users should update to version 2 when they are -# able to take everything down, as it requires a clean break. -# (integer value) -#qpid_topology_version=1 - -# SSL version to use (valid only if SSL enabled). 
valid values -# are TLSv1, SSLv23 and SSLv3. SSLv2 may be available on some -# distributions. (string value) -#kombu_ssl_version= - -# SSL key file (valid only if SSL enabled). (string value) -#kombu_ssl_keyfile= - -# SSL cert file (valid only if SSL enabled). (string value) -#kombu_ssl_certfile= - -# SSL certification authority file (valid only if SSL -# enabled). (string value) -#kombu_ssl_ca_certs= - -# How long to wait before reconnecting in response to an AMQP -# consumer cancel notification. (floating point value) -#kombu_reconnect_delay=1.0 - -# The RabbitMQ broker address where a single node is used. -# (string value) -#rabbit_host=localhost - -# The RabbitMQ broker port where a single node is used. -# (integer value) -#rabbit_port=5672 - -# RabbitMQ HA cluster host:port pairs. (list value) -#rabbit_hosts=$rabbit_host:$rabbit_port - -# Connect over SSL for RabbitMQ. (boolean value) -#rabbit_use_ssl=false - -# The RabbitMQ userid. (string value) -#rabbit_userid=guest - -# The RabbitMQ password. (string value) -#rabbit_password=guest - -# the RabbitMQ login method (string value) -#rabbit_login_method=AMQPLAIN - -# The RabbitMQ virtual host. (string value) -#rabbit_virtual_host=/ - -# How frequently to retry connecting with RabbitMQ. (integer -# value) -#rabbit_retry_interval=1 - -# How long to backoff for between retries when connecting to -# RabbitMQ. (integer value) -#rabbit_retry_backoff=2 - -# Maximum number of RabbitMQ connection retries. Default is 0 -# (infinite retry count). (integer value) -#rabbit_max_retries=0 - -# Use HA queues in RabbitMQ (x-ha-policy: all). If you change -# this option, you must wipe the RabbitMQ database. (boolean -# value) -#rabbit_ha_queues=false - -# If passed, use a fake RabbitMQ provider. (boolean value) -#fake_rabbit=false - -# ZeroMQ bind address. Should be a wildcard (*), an ethernet -# interface, or IP. The "host" option should point or resolve -# to this address. 
(string value) -#rpc_zmq_bind_address=* - -# MatchMaker driver. (string value) -#rpc_zmq_matchmaker=oslo.messaging._drivers.matchmaker.MatchMakerLocalhost - -# ZeroMQ receiver listening port. (integer value) -#rpc_zmq_port=9501 - -# Number of ZeroMQ contexts, defaults to 1. (integer value) -#rpc_zmq_contexts=1 - -# Maximum number of ingress messages to locally buffer per -# topic. Default is unlimited. (integer value) -#rpc_zmq_topic_backlog= - -# Directory for holding IPC sockets. (string value) -#rpc_zmq_ipc_dir=/var/run/openstack - -# Name of this node. Must be a valid hostname, FQDN, or IP -# address. Must match "host" option, if running Nova. (string -# value) -#rpc_zmq_host=nova - -# Seconds to wait before a cast expires (TTL). Only supported -# by impl_zmq. (integer value) -#rpc_cast_timeout=30 - -# Heartbeat frequency. (integer value) -#matchmaker_heartbeat_freq=300 - -# Heartbeat time-to-live. (integer value) -#matchmaker_heartbeat_ttl=600 - -# Host to locate redis. (string value) -#host=127.0.0.1 - -# Use this port to connect to redis host. (integer value) -#port=6379 - -# Password for Redis server (optional). (string value) -#password= - -# Size of RPC greenthread pool. (integer value) -#rpc_thread_pool_size=64 - -# Driver or drivers to handle sending notifications. (multi -# valued) -#notification_driver= - -# AMQP topic used for OpenStack notifications. (list value) -# Deprecated group/name - [rpc_notifier2]/topics -#notification_topics=notifications - -# Seconds to wait for a response from a call. (integer value) -#rpc_response_timeout=60 - -# A URL representing the messaging driver to use and its full -# configuration. If not set, we fall back to the rpc_backend -# option and driver specific configuration. (string value) -#transport_url= - -# The messaging driver to use, defaults to rabbit. Other -# drivers include qpid and zmq. (string value) -#rpc_backend=rabbit - -# The default exchange under which topics are scoped. 
May be -# overridden by an exchange name specified in the -# transport_url option. (string value) -#control_exchange=openstack - - -# -# Options defined in nova.availability_zones -# - -# The availability_zone to show internal services under -# (string value) -#internal_service_availability_zone=internal - -# Default compute node availability_zone (string value) -#default_availability_zone=nova - - -# -# Options defined in nova.crypto -# - -# Filename of root CA (string value) -#ca_file=cacert.pem - -# Filename of private key (string value) -#key_file=private/cakey.pem - -# Filename of root Certificate Revocation List (string value) -#crl_file=crl.pem - -# Where we keep our keys (string value) -#keys_path=$state_path/keys - -# Where we keep our root CA (string value) -#ca_path=$state_path/CA - -# Should we use a CA for each project? (boolean value) -#use_project_ca=false - -# Subject for certificate for users, %s for project, user, -# timestamp (string value) -#user_cert_subject=/C=US/ST=California/O=OpenStack/OU=NovaDev/CN=%.16s-%.16s-%s - -# Subject for certificate for projects, %s for project, -# timestamp (string value) -#project_cert_subject=/C=US/ST=California/O=OpenStack/OU=NovaDev/CN=project-ca-%.16s-%s - - -# -# Options defined in nova.exception -# - -# Make exception message format errors fatal (boolean value) -#fatal_exception_format_errors=false - - -# -# Options defined in nova.netconf -# - -# IP address of this host (string value) -#my_ip=10.0.0.1 - -# Name of this node. This can be an opaque identifier. It is -# not necessarily a hostname, FQDN, or IP address. However, -# the node name must be valid within an AMQP key, and if using -# ZeroMQ, a valid hostname, FQDN, or IP address (string value) -#host=nova - -# Use IPv6 (boolean value) -#use_ipv6=false - - -# -# Options defined in nova.notifications -# - -# If set, send compute.instance.update notifications on -# instance state changes. 
Valid values are None for no -# notifications, "vm_state" for notifications on VM state -# changes, or "vm_and_task_state" for notifications on VM and -# task state changes. (string value) -#notify_on_state_change= - -# If set, send api.fault notifications on caught exceptions in -# the API service. (boolean value) -#notify_api_faults=false - -# Default notification level for outgoing notifications -# (string value) -#default_notification_level=INFO - -# Default publisher_id for outgoing notifications (string -# value) -#default_publisher_id= - - -# -# Options defined in nova.paths -# - -# Directory where the nova python module is installed (string -# value) -#pybasedir=/usr/lib/python/site-packages - -# Directory where nova binaries are installed (string value) -#bindir=/usr/local/bin - -# Top-level directory for maintaining nova's state (string -# value) -#state_path=$pybasedir - - -# -# Options defined in nova.policy -# - -# JSON file representing policy (string value) -#policy_file=policy.json - -# Rule checked when requested rule is not found (string value) -#policy_default_rule=default - - -# -# Options defined in nova.quota -# - -# Number of instances allowed per project (integer value) -#quota_instances=10 - -# Number of instance cores allowed per project (integer value) -#quota_cores=20 - -# Megabytes of instance RAM allowed per project (integer -# value) -#quota_ram=51200 - -# Number of floating IPs allowed per project (integer value) -#quota_floating_ips=10 - -# Number of fixed IPs allowed per project (this should be at -# least the number of instances allowed) (integer value) -#quota_fixed_ips=-1 - -# Number of metadata items allowed per instance (integer -# value) -#quota_metadata_items=128 - -# Number of injected files allowed (integer value) -#quota_injected_files=5 - -# Number of bytes allowed per injected file (integer value) -#quota_injected_file_content_bytes=10240 - -# Number of bytes allowed per injected file path (integer -# value) 
-#quota_injected_file_path_bytes=255
-
-# Number of security groups per project (integer value)
-#quota_security_groups=10
-
-# Number of security rules per security group (integer value)
-#quota_security_group_rules=20
-
-# Number of key pairs per user (integer value)
-#quota_key_pairs=100
-
-# Number of seconds until a reservation expires (integer
-# value)
-#reservation_expire=86400
-
-# Count of reservations until usage is refreshed (integer
-# value)
-#until_refresh=0
-
-# Number of seconds between subsequent usage refreshes
-# (integer value)
-#max_age=0
-
-# Default driver to use for quota checks (string value)
-#quota_driver=nova.quota.DbQuotaDriver
-
-
-#
-# Options defined in nova.service
-#
-
-# Seconds between nodes reporting state to datastore (integer
-# value)
-#report_interval=10
-
-# Enable periodic tasks (boolean value)
-#periodic_enable=true
-
-# Range of seconds to randomly delay when starting the
-# periodic task scheduler to reduce stampeding. (Disable by
-# setting to 0) (integer value)
-#periodic_fuzzy_delay=60
-
-# A list of APIs to enable by default (list value)
-#enabled_apis=ec2,osapi_compute,metadata
-
-# A list of APIs with enabled SSL (list value)
-#enabled_ssl_apis=
-
-# The IP address on which the EC2 API will listen. (string
-# value)
-#ec2_listen=0.0.0.0
-
-# The port on which the EC2 API will listen. (integer value)
-#ec2_listen_port=8773
-
-# Number of workers for EC2 API service. The default will be
-# equal to the number of CPUs available. (integer value)
-#ec2_workers=
-
-# The IP address on which the OpenStack API will listen.
-# (string value)
-#osapi_compute_listen=0.0.0.0
-
-# The port on which the OpenStack API will listen. (integer
-# value)
-#osapi_compute_listen_port=8774
-
-# Number of workers for OpenStack API service. The default
-# will be the number of CPUs available. (integer value)
-#osapi_compute_workers=
-
-# OpenStack metadata service manager (string value)
-#metadata_manager=nova.api.manager.MetadataManager
-
-# The IP address on which the metadata API will listen.
-# (string value)
-#metadata_listen=0.0.0.0
-
-# The port on which the metadata API will listen. (integer
-# value)
-#metadata_listen_port=8775
-
-# Number of workers for metadata service. The default will be
-# the number of CPUs available. (integer value)
-#metadata_workers=
-
-# Full class name for the Manager for compute (string value)
-#compute_manager=nova.compute.manager.ComputeManager
-
-# Full class name for the Manager for console proxy (string
-# value)
-#console_manager=nova.console.manager.ConsoleProxyManager
-
-# Manager for console auth (string value)
-#consoleauth_manager=nova.consoleauth.manager.ConsoleAuthManager
-
-# Full class name for the Manager for cert (string value)
-#cert_manager=nova.cert.manager.CertManager
-
-# Full class name for the Manager for network (string value)
-#network_manager=nova.network.manager.VlanManager
-
-# Full class name for the Manager for scheduler (string value)
-#scheduler_manager=nova.scheduler.manager.SchedulerManager
-
-# Maximum time since last check-in for up service (integer
-# value)
-#service_down_time=60
-
-
-#
-# Options defined in nova.test
-#
-
-# File name of clean sqlite db (string value)
-#sqlite_clean_db=clean.sqlite
-
-
-#
-# Options defined in nova.utils
-#
-
-# Whether to log monkey patching (boolean value)
-#monkey_patch=false
-
-# List of modules/decorators to monkey patch (list value)
-#monkey_patch_modules=nova.api.ec2.cloud:nova.notifications.notify_decorator,nova.compute.api:nova.notifications.notify_decorator
-
-# Length of generated instance admin passwords (integer value)
-#password_length=12
-
-# Time period to generate instance usages for. Time period
-# must be hour, day, month or year (string value)
-#instance_usage_audit_period=month
-
-# Path to the rootwrap configuration file to use for running
-# commands as root (string value)
-#rootwrap_config=/etc/nova/rootwrap.conf
-
-# Explicitly specify the temporary working directory (string
-# value)
-#tempdir=
-
-
-#
-# Options defined in nova.wsgi
-#
-
-# File name for the paste.deploy config for nova-api (string
-# value)
-#api_paste_config=api-paste.ini
-
-# A python format string that is used as the template to
-# generate log lines. The following values can be formatted
-# into it: client_ip, date_time, request_line, status_code,
-# body_length, wall_seconds. (string value)
-#wsgi_log_format=%(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f
-
-# CA certificate file to use to verify connecting clients
-# (string value)
-#ssl_ca_file=
-
-# SSL certificate of API server (string value)
-#ssl_cert_file=
-
-# SSL private key of API server (string value)
-#ssl_key_file=
-
-# Sets the value of TCP_KEEPIDLE in seconds for each server
-# socket. Not supported on OS X. (integer value)
-#tcp_keepidle=600
-
-# Size of the pool of greenthreads used by wsgi (integer
-# value)
-#wsgi_default_pool_size=1000
-
-# Maximum line size of message headers to be accepted.
-# max_header_line may need to be increased when using large
-# tokens (typically those generated by the Keystone v3 API
-# with big service catalogs). (integer value)
-#max_header_line=16384
-
-
-#
-# Options defined in nova.api.auth
-#
-
-# Whether to use per-user rate limiting for the api. This
-# option is only used by v2 api. Rate limiting is removed from
-# v3 api. (boolean value)
-#api_rate_limit=false
-
-# The strategy to use for auth: noauth or keystone. (string
-# value)
-#auth_strategy=noauth
-
-# Treat X-Forwarded-For as the canonical remote address. Only
-# enable this if you have a sanitizing proxy. (boolean value)
-#use_forwarded_for=false
-
-
-#
-# Options defined in nova.api.ec2
-#
-
-# Number of failed auths before lockout. (integer value)
-#lockout_attempts=5
-
-# Number of minutes to lockout if triggered. (integer value)
-#lockout_minutes=15
-
-# Number of minutes for lockout window. (integer value)
-#lockout_window=15
-
-# URL to get token from ec2 request. (string value)
-#keystone_ec2_url=http://localhost:5000/v2.0/ec2tokens
-
-# Return the IP address as private dns hostname in describe
-# instances (boolean value)
-#ec2_private_dns_show_ip=false
-
-# Validate security group names according to EC2 specification
-# (boolean value)
-#ec2_strict_validation=true
-
-# Time in seconds before ec2 timestamp expires (integer value)
-#ec2_timestamp_expiry=300
-
-
-#
-# Options defined in nova.api.ec2.cloud
-#
-
-# The IP address of the EC2 API server (string value)
-#ec2_host=$my_ip
-
-# The internal IP address of the EC2 API server (string value)
-#ec2_dmz_host=$my_ip
-
-# The port of the EC2 API server (integer value)
-#ec2_port=8773
-
-# The protocol to use when connecting to the EC2 API server
-# (http, https) (string value)
-#ec2_scheme=http
-
-# The path prefix used to call the ec2 API server (string
-# value)
-#ec2_path=/services/Cloud
-
-# List of region=fqdn pairs separated by commas (list value)
-#region_list=
-
-
-#
-# Options defined in nova.api.metadata.base
-#
-
-# List of metadata versions to skip placing into the config
-# drive (string value)
-#config_drive_skip_versions=1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01
-
-# Driver to use for vendor data (string value)
-#vendordata_driver=nova.api.metadata.vendordata_json.JsonFileVendorData
-
-
-#
-# Options defined in nova.api.metadata.handler
-#
-
-# Set flag to indicate Neutron will proxy metadata requests
-# and resolve instance ids. (boolean value)
-#service_neutron_metadata_proxy=false
-
-# Shared secret to validate proxies Neutron metadata requests
-# (string value)
-#neutron_metadata_proxy_shared_secret=
-
-
-#
-# Options defined in nova.api.metadata.vendordata_json
-#
-
-# File to load json formatted vendor data from (string value)
-#vendordata_jsonfile_path=
-
-
-#
-# Options defined in nova.api.openstack.common
-#
-
-# The maximum number of items returned in a single response
-# from a collection resource (integer value)
-#osapi_max_limit=1000
-
-# Base URL that will be presented to users in links to the
-# OpenStack Compute API (string value)
-#osapi_compute_link_prefix=
-
-# Base URL that will be presented to users in links to glance
-# resources (string value)
-#osapi_glance_link_prefix=
-
-
-#
-# Options defined in nova.api.openstack.compute
-#
-
-# Permit instance snapshot operations. (boolean value)
-#allow_instance_snapshots=true
-
-
-#
-# Options defined in nova.api.openstack.compute.contrib
-#
-
-# Specify list of extensions to load when using
-# osapi_compute_extension option with
-# nova.api.openstack.compute.contrib.select_extensions (list
-# value)
-#osapi_compute_ext_list=
-
-
-#
-# Options defined in nova.api.openstack.compute.contrib.fping
-#
-
-# Full path to fping. (string value)
-#fping_path=/usr/sbin/fping
-
-
-#
-# Options defined in nova.api.openstack.compute.contrib.os_tenant_networks
-#
-
-# Enables or disables quota checking for tenant networks
-# (boolean value)
-#enable_network_quota=false
-
-# Control for checking for default networks (string value)
-#use_neutron_default_nets=False
-
-# Default tenant id when creating neutron networks (string
-# value)
-#neutron_default_tenant_id=default
-
-
-#
-# Options defined in nova.api.openstack.compute.extensions
-#
-
-# osapi compute extension to load (multi valued)
-#osapi_compute_extension=nova.api.openstack.compute.contrib.standard_extensions
-
-
-#
-# Options defined in nova.api.openstack.compute.plugins.v3.hide_server_addresses
-#
-
-# List of instance states that should hide network info (list
-# value)
-#osapi_hide_server_address_states=building
-
-
-#
-# Options defined in nova.api.openstack.compute.servers
-#
-
-# Enables returning of the instance password by the relevant
-# server API calls such as create, rebuild or rescue, If the
-# hypervisor does not support password injection then the
-# password returned will not be correct (boolean value)
-#enable_instance_password=true
-
-
-#
-# Options defined in nova.api.sizelimit
-#
-
-# The maximum body size per each osapi request(bytes) (integer
-# value)
-#osapi_max_request_body_size=114688
-
-
-#
-# Options defined in nova.cert.rpcapi
-#
-
-# The topic cert nodes listen on (string value)
-#cert_topic=cert
-
-
-#
-# Options defined in nova.cloudpipe.pipelib
-#
-
-# Image ID used when starting up a cloudpipe vpn server
-# (string value)
-#vpn_image_id=0
-
-# Flavor for vpn instances (string value)
-#vpn_flavor=m1.tiny
-
-# Template for cloudpipe instance boot script (string value)
-#boot_script_template=$pybasedir/nova/cloudpipe/bootscript.template
-
-# Network to push into openvpn config (string value)
-#dmz_net=10.0.0.0
-
-# Netmask to push into openvpn config (string value)
-#dmz_mask=255.255.255.0
-
-# Suffix to add to project name for vpn key and secgroups
-# (string value)
-#vpn_key_suffix=-vpn
-
-
-#
-# Options defined in nova.cmd.novnc
-#
-
-# Record sessions to FILE.[session_number] (boolean value)
-#record=false
-
-# Become a daemon (background process) (boolean value)
-#daemon=false
-
-# Disallow non-encrypted connections (boolean value)
-#ssl_only=false
-
-# Source is ipv6 (boolean value)
-#source_is_ipv6=false
-
-# SSL certificate file (string value)
-#cert=self.pem
-
-# SSL key file (if separate from cert) (string value)
-#key=
-
-# Run webserver on same port. Serve files from DIR. (string
-# value)
-#web=/usr/share/spice-html5
-
-
-#
-# Options defined in nova.cmd.novncproxy
-#
-
-# Host on which to listen for incoming requests (string value)
-#novncproxy_host=0.0.0.0
-
-# Port on which to listen for incoming requests (integer
-# value)
-#novncproxy_port=6080
-
-
-#
-# Options defined in nova.cmd.spicehtml5proxy
-#
-
-# Host on which to listen for incoming requests (string value)
-#spicehtml5proxy_host=0.0.0.0
-
-# Port on which to listen for incoming requests (integer
-# value)
-#spicehtml5proxy_port=6082
-
-
-#
-# Options defined in nova.compute.api
-#
-
-# Allow destination machine to match source for resize. Useful
-# when testing in single-host environments. (boolean value)
-#allow_resize_to_same_host=false
-
-# Allow migrate machine to the same host. Useful when testing
-# in single-host environments. (boolean value)
-#allow_migrate_to_same_host=false
-
-# Availability zone to use when user doesn't specify one
-# (string value)
-#default_schedule_zone=
-
-# These are image properties which a snapshot should not
-# inherit from an instance (list value)
-#non_inheritable_image_properties=cache_in_nova,bittorrent
-
-# Kernel image that indicates not to use a kernel, but to use
-# a raw disk image instead (string value)
-#null_kernel=nokernel
-
-# When creating multiple instances with a single request using
-# the os-multiple-create API extension, this template will be
-# used to build the display name for each instance. The
-# benefit is that the instances end up with different
-# hostnames. To restore legacy behavior of every instance
-# having the same name, set this option to "%(name)s". Valid
-# keys for the template are: name, uuid, count. (string value)
-#multi_instance_display_name_template=%(name)s-%(uuid)s
-
-# Maximum number of devices that will result in a local image
-# being created on the hypervisor node. Setting this to 0
-# means nova will allow only boot from volume. A negative
-# number means unlimited. (integer value)
-#max_local_block_devices=3
-
-
-#
-# Options defined in nova.compute.flavors
-#
-
-# Default flavor to use for the EC2 API only. The Nova API
-# does not support a default flavor. (string value)
-#default_flavor=m1.small
-
-
-#
-# Options defined in nova.compute.manager
-#
-
-# Console proxy host to use to connect to instances on this
-# host. (string value)
-#console_host=nova
-
-# Name of network to use to set access IPs for instances
-# (string value)
-#default_access_ip_network_name=
-
-# Whether to batch up the application of IPTables rules during
-# a host restart and apply all at the end of the init phase
-# (boolean value)
-#defer_iptables_apply=false
-
-# Where instances are stored on disk (string value)
-#instances_path=$state_path/instances
-
-# Generate periodic compute.instance.exists notifications
-# (boolean value)
-#instance_usage_audit=false
-
-# Number of 1 second retries needed in live_migration (integer
-# value)
-#live_migration_retry_count=30
-
-# Whether to start guests that were running before the host
-# rebooted (boolean value)
-#resume_guests_state_on_host_boot=false
-
-# Number of times to retry network allocation on failures
-# (integer value)
-#network_allocate_retries=0
-
-# The number of times to attempt to reap an instance's files.
-# (integer value)
-#maximum_instance_delete_attempts=5
-
-# Interval to pull network bandwidth usage info. Not supported
-# on all hypervisors. Set to 0 to disable. (integer value)
-#bandwidth_poll_interval=600
-
-# Interval to sync power states between the database and the
-# hypervisor (integer value)
-#sync_power_state_interval=600
-
-# Number of seconds between instance info_cache self healing
-# updates (integer value)
-#heal_instance_info_cache_interval=60
-
-# Interval in seconds for reclaiming deleted instances
-# (integer value)
-#reclaim_instance_interval=0
-
-# Interval in seconds for gathering volume usages (integer
-# value)
-#volume_usage_poll_interval=0
-
-# Interval in seconds for polling shelved instances to offload
-# (integer value)
-#shelved_poll_interval=3600
-
-# Time in seconds before a shelved instance is eligible for
-# removing from a host. -1 never offload, 0 offload when
-# shelved (integer value)
-#shelved_offload_time=0
-
-# Interval in seconds for retrying failed instance file
-# deletes (integer value)
-#instance_delete_interval=300
-
-# Action to take if a running deleted instance is
-# detected.Valid options are 'noop', 'log', 'shutdown', or
-# 'reap'. Set to 'noop' to take no action. (string value)
-#running_deleted_instance_action=reap
-
-# Number of seconds to wait between runs of the cleanup task.
-# (integer value)
-#running_deleted_instance_poll_interval=1800
-
-# Number of seconds after being deleted when a running
-# instance should be considered eligible for cleanup. (integer
-# value)
-#running_deleted_instance_timeout=0
-
-# Automatically hard reboot an instance if it has been stuck
-# in a rebooting state longer than N seconds. Set to 0 to
-# disable. (integer value)
-#reboot_timeout=0
-
-# Amount of time in seconds an instance can be in BUILD before
-# going into ERROR status.Set to 0 to disable. (integer value)
-#instance_build_timeout=0
-
-# Automatically unrescue an instance after N seconds. Set to 0
-# to disable. (integer value)
-#rescue_timeout=0
-
-# Automatically confirm resizes after N seconds. Set to 0 to
-# disable. (integer value)
-#resize_confirm_window=0
-
-
-#
-# Options defined in nova.compute.monitors
-#
-
-# Monitor classes available to the compute which may be
-# specified more than once. (multi valued)
-#compute_available_monitors=nova.compute.monitors.all_monitors
-
-# A list of monitors that can be used for getting compute
-# metrics. (list value)
-#compute_monitors=
-
-
-#
-# Options defined in nova.compute.resource_tracker
-#
-
-# Amount of disk in MB to reserve for the host (integer value)
-#reserved_host_disk_mb=0
-
-# Amount of memory in MB to reserve for the host (integer
-# value)
-#reserved_host_memory_mb=512
-
-# Class that will manage stats for the local compute host
-# (string value)
-#compute_stats_class=nova.compute.stats.Stats
-
-
-#
-# Options defined in nova.compute.rpcapi
-#
-
-# The topic compute nodes listen on (string value)
-#compute_topic=compute
-
-
-#
-# Options defined in nova.conductor.tasks.live_migrate
-#
-
-# Number of times to retry live-migration before failing. If
-# == -1, try until out of hosts. If == 0, only try once, no
-# retries. (integer value)
-#migrate_max_retries=-1
-
-
-#
-# Options defined in nova.console.manager
-#
-
-# Driver to use for the console proxy (string value)
-#console_driver=nova.console.xvp.XVPConsoleProxy
-
-# Stub calls to compute worker for tests (boolean value)
-#stub_compute=false
-
-# Publicly visible name for this console host (string value)
-#console_public_hostname=nova
-
-
-#
-# Options defined in nova.console.rpcapi
-#
-
-# The topic console proxy nodes listen on (string value)
-#console_topic=console
-
-
-#
-# Options defined in nova.console.vmrc
-#
-
-# Port for VMware VMRC connections (integer value)
-#console_vmrc_port=443
-
-# Number of retries for retrieving VMRC information (integer
-# value)
-#console_vmrc_error_retries=10
-
-
-#
-# Options defined in nova.console.xvp
-#
-
-# XVP conf template (string value)
-#console_xvp_conf_template=$pybasedir/nova/console/xvp.conf.template
-
-# Generated XVP conf file (string value)
-#console_xvp_conf=/etc/xvp.conf
-
-# XVP master process pid file (string value)
-#console_xvp_pid=/var/run/xvp.pid
-
-# XVP log file (string value)
-#console_xvp_log=/var/log/xvp.log
-
-# Port for XVP to multiplex VNC connections on (integer value)
-#console_xvp_multiplex_port=5900
-
-
-#
-# Options defined in nova.consoleauth
-#
-
-# The topic console auth proxy nodes listen on (string value)
-#consoleauth_topic=consoleauth
-
-
-#
-# Options defined in nova.consoleauth.manager
-#
-
-# How many seconds before deleting tokens (integer value)
-#console_token_ttl=600
-
-
-#
-# Options defined in nova.db.api
-#
-
-# Services to be added to the available pool on create
-# (boolean value)
-#enable_new_services=true
-
-# Template string to be used to generate instance names
-# (string value)
-#instance_name_template=instance-%08x
-
-# Template string to be used to generate snapshot names
-# (string value)
-#snapshot_name_template=snapshot-%s
-
-
-#
-# Options defined in nova.db.base
-#
-
-# The driver to use for database access (string value)
-#db_driver=nova.db
-
-
-#
-# Options defined in nova.db.sqlalchemy.api
-#
-
-# When set, compute API will consider duplicate hostnames
-# invalid within the specified scope, regardless of case.
-# Should be empty, "project" or "global". (string value)
-#osapi_compute_unique_server_name_scope=
-
-
-#
-# Options defined in nova.image.glance
-#
-
-# Default glance hostname or IP address (string value)
-#glance_host=$my_ip
-
-# Default glance port (integer value)
-#glance_port=9292
-
-# Default protocol to use when connecting to glance. Set to
-# https for SSL. (string value)
-#glance_protocol=http
-
-# A list of the glance api servers available to nova. Prefix
-# with https:// for ssl-based glance api servers.
-# ([hostname|ip]:port) (list value)
-#glance_api_servers=$glance_host:$glance_port
-
-# Allow to perform insecure SSL (https) requests to glance
-# (boolean value)
-#glance_api_insecure=false
-
-# Number of retries when downloading an image from glance
-# (integer value)
-#glance_num_retries=0
-
-# A list of url scheme that can be downloaded directly via the
-# direct_url. Currently supported schemes: [file]. (list
-# value)
-#allowed_direct_url_schemes=
-
-
-#
-# Options defined in nova.image.s3
-#
-
-# Parent directory for tempdir used for image decryption
-# (string value)
-#image_decryption_dir=/tmp
-
-# Hostname or IP for OpenStack to use when accessing the S3
-# api (string value)
-#s3_host=$my_ip
-
-# Port used when accessing the S3 api (integer value)
-#s3_port=3333
-
-# Access key to use for S3 server for images (string value)
-#s3_access_key=notchecked
-
-# Secret key to use for S3 server for images (string value)
-#s3_secret_key=notchecked
-
-# Whether to use SSL when talking to S3 (boolean value)
-#s3_use_ssl=false
-
-# Whether to affix the tenant id to the access key when
-# downloading from S3 (boolean value)
-#s3_affix_tenant=false
-
-
-#
-# Options defined in nova.ipv6.api
-#
-
-# Backend to use for IPv6 generation (string value)
-#ipv6_backend=rfc2462
-
-
-#
-# Options defined in nova.network
-#
-
-# The full class name of the network API class to use (string
-# value)
-#network_api_class=nova.network.api.API
-
-
-#
-# Options defined in nova.network.driver
-#
-
-# Driver to use for network creation (string value)
-#network_driver=nova.network.linux_net
-
-
-#
-# Options defined in nova.network.floating_ips
-#
-
-# Default pool for floating IPs (string value)
-#default_floating_pool=nova
-
-# Autoassigning floating IP to VM (boolean value)
-#auto_assign_floating_ip=false
-
-# Full class name for the DNS Manager for floating IPs (string
-# value)
-#floating_ip_dns_manager=nova.network.noop_dns_driver.NoopDNSDriver
-
-# Full class name for the DNS Manager for instance IPs (string
-# value)
-#instance_dns_manager=nova.network.noop_dns_driver.NoopDNSDriver
-
-# Full class name for the DNS Zone for instance IPs (string
-# value)
-#instance_dns_domain=
-
-
-#
-# Options defined in nova.network.ldapdns
-#
-
-# URL for LDAP server which will store DNS entries (string
-# value)
-#ldap_dns_url=ldap://ldap.example.com:389
-
-# User for LDAP DNS (string value)
-#ldap_dns_user=uid=admin,ou=people,dc=example,dc=org
-
-# Password for LDAP DNS (string value)
-#ldap_dns_password=password
-
-# Hostmaster for LDAP DNS driver Statement of Authority
-# (string value)
-#ldap_dns_soa_hostmaster=hostmaster@example.org
-
-# DNS Servers for LDAP DNS driver (multi valued)
-#ldap_dns_servers=dns.example.org
-
-# Base DN for DNS entries in LDAP (string value)
-#ldap_dns_base_dn=ou=hosts,dc=example,dc=org
-
-# Refresh interval (in seconds) for LDAP DNS driver Statement
-# of Authority (string value)
-#ldap_dns_soa_refresh=1800
-
-# Retry interval (in seconds) for LDAP DNS driver Statement of
-# Authority (string value)
-#ldap_dns_soa_retry=3600
-
-# Expiry interval (in seconds) for LDAP DNS driver Statement
-# of Authority (string value)
-#ldap_dns_soa_expiry=86400
-
-# Minimum interval (in seconds) for LDAP DNS driver Statement
-# of Authority (string value)
-#ldap_dns_soa_minimum=7200
-
-
-#
-# Options defined in nova.network.linux_net
-#
-
-# Location of flagfiles for dhcpbridge (multi valued)
-#dhcpbridge_flagfile=/etc/nova/nova-dhcpbridge.conf
-
-# Location to keep network config files (string value)
-#networks_path=$state_path/networks
-
-# Interface for public IP addresses (string value)
-#public_interface=eth0
-
-# MTU setting for network interface (integer value)
-#network_device_mtu=
-
-# Location of nova-dhcpbridge (string value)
-#dhcpbridge=$bindir/nova-dhcpbridge
-
-# Public IP of network host (string value)
-#routing_source_ip=$my_ip
-
-# Lifetime of a DHCP lease in seconds (integer value)
-#dhcp_lease_time=120
-
-# If set, uses specific DNS server for dnsmasq. Can be
-# specified multiple times. (multi valued)
-#dns_server=
-
-# If set, uses the dns1 and dns2 from the network ref. as dns
-# servers. (boolean value)
-#use_network_dns_servers=false
-
-# A list of dmz range that should be accepted (list value)
-#dmz_cidr=
-
-# Traffic to this range will always be snatted to the fallback
-# ip, even if it would normally be bridged out of the node.
-# Can be specified multiple times. (multi valued)
-#force_snat_range=
-
-# Override the default dnsmasq settings with this file (string
-# value)
-#dnsmasq_config_file=
-
-# Driver used to create ethernet devices. (string value)
-#linuxnet_interface_driver=nova.network.linux_net.LinuxBridgeInterfaceDriver
-
-# Name of Open vSwitch bridge used with linuxnet (string
-# value)
-#linuxnet_ovs_integration_bridge=br-int
-
-# Send gratuitous ARPs for HA setup (boolean value)
-#send_arp_for_ha=false
-
-# Send this many gratuitous ARPs for HA setup (integer value)
-#send_arp_for_ha_count=3
-
-# Use single default gateway. Only first nic of vm will get
-# default gateway from dhcp server (boolean value)
-#use_single_default_gateway=false
-
-# An interface that bridges can forward to. If this is set to
-# all then all traffic will be forwarded. Can be specified
-# multiple times. (multi valued)
-#forward_bridge_interface=all
-
-# The IP address for the metadata API server (string value)
-#metadata_host=$my_ip
-
-# The port for the metadata API port (integer value)
-#metadata_port=8775
-
-# Regular expression to match iptables rule that should always
-# be on the top. (string value)
-#iptables_top_regex=
-
-# Regular expression to match iptables rule that should always
-# be on the bottom. (string value)
-#iptables_bottom_regex=
-
-# The table that iptables to jump to when a packet is to be
-# dropped. (string value)
-#iptables_drop_action=DROP
-
-# Amount of time, in seconds, that ovs_vsctl should wait for a
-# response from the database. 0 is to wait forever. (integer
-# value)
-#ovs_vsctl_timeout=120
-
-# If passed, use fake network devices and addresses (boolean
-# value)
-#fake_network=false
-
-
-#
-# Options defined in nova.network.manager
-#
-
-# Bridge for simple network instances (string value)
-#flat_network_bridge=
-
-# DNS server for simple network (string value)
-#flat_network_dns=8.8.4.4
-
-# Whether to attempt to inject network setup into guest
-# (boolean value)
-#flat_injected=false
-
-# FlatDhcp will bridge into this interface if set (string
-# value)
-#flat_interface=
-
-# First VLAN for private networks (integer value)
-#vlan_start=100
-
-# VLANs will bridge into this interface if set (string value)
-#vlan_interface=
-
-# Number of networks to support (integer value)
-#num_networks=1
-
-# Public IP for the cloudpipe VPN servers (string value)
-#vpn_ip=$my_ip
-
-# First Vpn port for private networks (integer value)
-#vpn_start=1000
-
-# Number of addresses in each private subnet (integer value)
-#network_size=256
-
-# Fixed IPv6 address block (string value)
-#fixed_range_v6=fd00::/48
-
-# Default IPv4 gateway (string value)
-#gateway=
-
-# Default IPv6 gateway (string value)
-#gateway_v6=
-
-# Number of addresses reserved for vpn clients (integer value)
-#cnt_vpn_clients=0
-
-# Seconds after which a deallocated IP is disassociated
-# (integer value)
-#fixed_ip_disassociate_timeout=600
-
-# Number of attempts to create unique mac address (integer
-# value)
-#create_unique_mac_address_attempts=5
-
-# If True, skip using the queue and make local calls (boolean
-# value)
-#fake_call=false
-
-# If True, unused gateway devices (VLAN and bridge) are
-# deleted in VLAN network mode with multi hosted networks
-# (boolean value)
-#teardown_unused_network_gateway=false
-
-# If True, send a dhcp release on instance termination
-# (boolean value)
-#force_dhcp_release=true
-
-# If True in multi_host mode, all compute hosts share the same
-# dhcp address. The same IP address used for DHCP will be
-# added on each nova-network node which is only visible to the
-# vms on the same host. (boolean value)
-#share_dhcp_address=false
-
-# If True, when a DNS entry must be updated, it sends a fanout
-# cast to all network hosts to update their DNS entries in
-# multi host mode (boolean value)
-#update_dns_entries=false
-
-# Number of seconds to wait between runs of updates to DNS
-# entries. (integer value)
-#dns_update_periodic_interval=-1
-
-# Domain to use for building the hostnames (string value)
-#dhcp_domain=novalocal
-
-# Indicates underlying L3 management library (string value)
-#l3_lib=nova.network.l3.LinuxNetL3
-
-
-#
-# Options defined in nova.network.neutronv2.api
-#
-
-# URL for connecting to neutron (string value)
-#neutron_url=http://127.0.0.1:9696
-
-# Timeout value for connecting to neutron in seconds (integer
-# value)
-#neutron_url_timeout=30
-
-# Username for connecting to neutron in admin context (string
-# value)
-#neutron_admin_username=
-
-# Password for connecting to neutron in admin context (string
-# value)
-#neutron_admin_password=
-
-# Tenant id for connecting to neutron in admin context (string
-# value)
-#neutron_admin_tenant_id=
-
-# Tenant name for connecting to neutron in admin context. This
-# option is mutually exclusive with neutron_admin_tenant_id.
-# Note that with Keystone V3 tenant names are only unique
-# within a domain. (string value)
-#neutron_admin_tenant_name=
-
-# Region name for connecting to neutron in admin context
-# (string value)
-#neutron_region_name=
-
-# Authorization URL for connecting to neutron in admin context
-# (string value)
-#neutron_admin_auth_url=http://localhost:5000/v2.0
-
-# If set, ignore any SSL validation issues (boolean value)
-#neutron_api_insecure=false
-
-# Authorization strategy for connecting to neutron in admin
-# context (string value)
-#neutron_auth_strategy=keystone
-
-# Name of Integration Bridge used by Open vSwitch (string
-# value)
-#neutron_ovs_bridge=br-int
-
-# Number of seconds before querying neutron for extensions
-# (integer value)
-#neutron_extension_sync_interval=600
-
-# Location of CA certificates file to use for neutron client
-# requests. (string value)
-#neutron_ca_certificates_file=
-
-
-#
-# Options defined in nova.network.rpcapi
-#
-
-# The topic network nodes listen on (string value)
-#network_topic=network
-
-# Default value for multi_host in networks. Also, if set, some
-# rpc network calls will be sent directly to host. (boolean
-# value)
-#multi_host=false
-
-
-#
-# Options defined in nova.network.security_group.openstack_driver
-#
-
-# The full class name of the security API class (string value)
-#security_group_api=nova
-
-
-#
-# Options defined in nova.objectstore.s3server
-#
-
-# Path to S3 buckets (string value)
-#buckets_path=$state_path/buckets
-
-# IP address for S3 API to listen (string value)
-#s3_listen=0.0.0.0
-
-# Port for S3 API to listen (integer value)
-#s3_listen_port=3333
-
-
-#
-# Options defined in nova.openstack.common.eventlet_backdoor
-#
-
-# Enable eventlet backdoor. Acceptable values are 0,
-# and :, where 0 results in listening on a random
-# tcp port number, results in listening on the
-# specified port number and not enabling backdoorif it is in
-# use and : results in listening on the smallest
-# unused port number within the specified range of port
-# numbers. The chosen port is displayed in the service's log
-# file. (string value)
-#backdoor_port=
-
-
-#
-# Options defined in nova.openstack.common.lockutils
-#
-
-# Whether to disable inter-process locks (boolean value)
-#disable_process_locking=false
-
-# Directory to use for lock files. (string value)
-#lock_path=
-
-
-#
-# Options defined in nova.openstack.common.log
-#
-
-# Print debugging output (set logging level to DEBUG instead
-# of default WARNING level). (boolean value)
-#debug=false
-
-# Print more verbose output (set logging level to INFO instead
-# of default WARNING level). (boolean value)
-#verbose=false
-
-# Log output to standard error (boolean value)
-#use_stderr=true
-
-# format string to use for log messages with context (string
-# value)
-#logging_context_format_string=%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user)s %(tenant)s] %(instance)s%(message)s
-
-# format string to use for log messages without context
-# (string value)
-#logging_default_format_string=%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
-
-# data to append to log format when level is DEBUG (string
-# value)
-#logging_debug_format_suffix=%(funcName)s %(pathname)s:%(lineno)d
-
-# prefix each line of exception output with this format
-# (string value)
-#logging_exception_prefix=%(asctime)s.%(msecs)03d %(process)d TRACE %(name)s %(instance)s
-
-# list of logger=LEVEL pairs (list value)
-#default_log_levels=amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN
-
-# publish error events (boolean value)
-#publish_errors=false
-
-# make deprecations fatal (boolean value)
-#fatal_deprecations=false
-
-# If an instance is passed with the log message, format it
-# like this (string value)
-#instance_format="[instance: %(uuid)s] "
-
-# If an instance UUID is passed with the log message, format
-# it like this (string value)
-#instance_uuid_format="[instance: %(uuid)s] "
- -# The name of logging configuration file. It does not disable -# existing loggers, but just appends specified logging -# configuration to any other existing logging options. Please -# see the Python logging module documentation for details on -# logging configuration files. (string value) -# Deprecated group/name - [DEFAULT]/log_config -#log_config_append= - -# DEPRECATED. A logging.Formatter log message format string -# which may use any of the available logging.LogRecord -# attributes. This option is deprecated. Please use -# logging_context_format_string and -# logging_default_format_string instead. (string value) -#log_format= - -# Format string for %%(asctime)s in log records. Default: -# %(default)s (string value) -#log_date_format=%Y-%m-%d %H:%M:%S - -# (Optional) Name of log file to output to. If no default is -# set, logging will go to stdout. (string value) -# Deprecated group/name - [DEFAULT]/logfile -#log_file= - -# (Optional) The base directory used for relative --log-file -# paths (string value) -# Deprecated group/name - [DEFAULT]/logdir -#log_dir= - -# Use syslog for logging. Existing syslog format is DEPRECATED -# during I, and then will be changed in J to honor RFC5424 -# (boolean value) -#use_syslog=false - -# (Optional) Use syslog rfc5424 format for logging. If -# enabled, will add APP-NAME (RFC5424) before the MSG part of -# the syslog message. The old format without APP-NAME is -# deprecated in I, and will be removed in J. (boolean value) -#use_syslog_rfc_format=false - -# syslog facility to receive log lines (string value) -#syslog_log_facility=LOG_USER - - -# -# Options defined in nova.openstack.common.memorycache -# - -# Memcached servers or None for in process cache. (list value) -#memcached_servers= - - -# -# Options defined in nova.openstack.common.periodic_task -# - -# Some periodic tasks can be run in a separate process. Should -# we run them here? 
(boolean value) -#run_external_periodic_tasks=true - - -# -# Options defined in nova.pci.pci_request -# - -# An alias for a PCI passthrough device requirement. This -# allows users to specify the alias in the extra_spec for a -# flavor, without needing to repeat all the PCI property -# requirements. For example: pci_alias = { "name": -# "QuickAssist", "product_id": "0443", "vendor_id": "8086", -# "device_type": "ACCEL" } defines an alias for the Intel -# QuickAssist card. (multi valued) -#pci_alias= - - -# -# Options defined in nova.pci.pci_whitelist -# - -# White list of PCI devices available to VMs. For example: -# pci_passthrough_whitelist = [{"vendor_id": "8086", -# "product_id": "0443"}] (multi valued) -#pci_passthrough_whitelist= - - -# -# Options defined in nova.scheduler.driver -# - -# The scheduler host manager class to use (string value) -#scheduler_host_manager=nova.scheduler.host_manager.HostManager - -# Maximum number of attempts to schedule an instance (integer -# value) -#scheduler_max_attempts=3 - - -# -# Options defined in nova.scheduler.filter_scheduler -# - -# New instances will be scheduled on a host chosen randomly -# from a subset of the N best hosts. This property defines the -# subset size that a host is chosen from. A value of 1 chooses -# the first host returned by the weighing functions. This -# value must be at least 1. Any value less than 1 will be -# ignored, and 1 will be used instead (integer value) -#scheduler_host_subset_size=1 - - -# -# Options defined in nova.scheduler.filters.aggregate_image_properties_isolation -# - -# Force the filter to consider only keys matching the given -# namespace. (string value) -#aggregate_image_properties_isolation_namespace= - -# The separator used between the namespace and keys (string -# value) -#aggregate_image_properties_isolation_separator=. 
- - -# -# Options defined in nova.scheduler.filters.core_filter -# - -# Virtual CPU to physical CPU allocation ratio which affects -# all CPU filters. This configuration specifies a global ratio -# for CoreFilter. For AggregateCoreFilter, it will fall back -# to this configuration value if no per-aggregate setting -# found. This option is also used in Solver Scheduler for the -# MaxVcpuAllocationPerHostConstraint (floating point value) -#cpu_allocation_ratio=16.0 - - -# -# Options defined in nova.scheduler.filters.disk_filter -# - -# Virtual disk to physical disk allocation ratio (floating -# point value) -#disk_allocation_ratio=1.0 - - -# -# Options defined in nova.scheduler.filters.io_ops_filter -# - -# Ignore hosts that have too many -# builds/resizes/snaps/migrations. (integer value) -#max_io_ops_per_host=8 - - -# -# Options defined in nova.scheduler.filters.isolated_hosts_filter -# - -# Images to run on isolated host (list value) -#isolated_images= - -# Host reserved for specific images (list value) -#isolated_hosts= - -# Whether to force isolated hosts to run only isolated images -# (boolean value) -#restrict_isolated_hosts_to_isolated_images=true - - -# -# Options defined in nova.scheduler.filters.num_instances_filter -# - -# Ignore hosts that have too many instances (integer value) -#max_instances_per_host=50 - - -# -# Options defined in nova.scheduler.filters.ram_filter -# - -# Virtual ram to physical ram allocation ratio which affects -# all ram filters. This configuration specifies a global ratio -# for RamFilter. For AggregateRamFilter, it will fall back to -# this configuration value if no per-aggregate setting found. -# (floating point value) -#ram_allocation_ratio=1.5 - - -# -# Options defined in nova.scheduler.host_manager -# - -# Filter classes available to the scheduler which may be -# specified more than once. An entry of -# "nova.scheduler.filters.standard_filters" maps to all -# filters included with nova. 
(multi valued) -#scheduler_available_filters=nova.scheduler.filters.all_filters - -# Which filter class names to use for filtering hosts when not -# specified in the request. (list value) -#scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter - -# Which weight class names to use for weighing hosts (list -# value) -#scheduler_weight_classes=nova.scheduler.weights.all_weighers - - -# -# Options defined in nova.scheduler.manager -# - -# Default driver to use for the scheduler (string value) -#scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler - -# How often (in seconds) to run periodic tasks in the -# scheduler driver of your choice. Please note this is likely -# to interact with the value of service_down_time, but exactly -# how they interact will depend on your choice of scheduler -# driver. (integer value) -#scheduler_driver_task_period=60 - - -# -# Options defined in nova.scheduler.rpcapi -# - -# The topic scheduler nodes listen on (string value) -#scheduler_topic=scheduler - - -# -# Options defined in nova.scheduler.scheduler_options -# - -# Absolute path to scheduler configuration JSON file. (string -# value) -#scheduler_json_config_location= - - -# -# Options defined in nova.scheduler.weights.ram -# - -# Multiplier used for weighing ram. Negative numbers mean to -# stack vs spread. (floating point value) -#ram_weight_multiplier=1.0 - - -# -# Options defined in nova.servicegroup.api -# - -# The driver for servicegroup service (valid options are: db, -# zk, mc) (string value) -#servicegroup_driver=db - - -# -# Options defined in nova.virt.configdrive -# - -# Config drive format. 
One of iso9660 (default) or vfat -# (string value) -#config_drive_format=iso9660 - -# Where to put temporary files associated with config drive -# creation (string value) -#config_drive_tempdir= - -# Set to force injection to take place on a config drive (if -# set, valid options are: always) (string value) -#force_config_drive= - -# Name and optionally path of the tool used for ISO image -# creation (string value) -#mkisofs_cmd=genisoimage - - -# -# Options defined in nova.virt.cpu -# - -# Defines which pcpus that instance vcpus can use. For -# example, "4-12,^8,15" (string value) -#vcpu_pin_set= - - -# -# Options defined in nova.virt.disk.api -# - -# Template file for injected network (string value) -#injected_network_template=$pybasedir/nova/virt/interfaces.template - -# Name of the mkfs commands for ephemeral device. The format -# is <os_type>=<mkfs command> (multi valued) -#virt_mkfs= - -# Attempt to resize the filesystem by accessing the image over -# a block device. This is done by the host and may not be -# necessary if the image contains a recent version of cloud- -# init. Possible mechanisms require the nbd driver (for qcow -# and raw), or loop (for raw). (boolean value) -#resize_fs_using_block_device=false - - -# -# Options defined in nova.virt.disk.mount.nbd -# - -# Amount of time, in seconds, to wait for NBD device start up. -# (integer value) -#timeout_nbd=10 - - -# -# Options defined in nova.virt.driver -# - -# Driver to use for controlling virtualization. Options -# include: libvirt.LibvirtDriver, xenapi.XenAPIDriver, -# fake.FakeDriver, baremetal.BareMetalDriver, -# vmwareapi.VMwareESXDriver, vmwareapi.VMwareVCDriver (string -# value) -#compute_driver= - -# The default format an ephemeral_volume will be formatted -# with on creation. 
(string value) -#default_ephemeral_format= - -# VM image preallocation mode: "none" => no storage -# provisioning is done up front, "space" => storage is fully -# allocated at instance start (string value) -#preallocate_images=none - -# Whether to use cow images (boolean value) -#use_cow_images=true - -# Fail instance boot if vif plugging fails (boolean value) -#vif_plugging_is_fatal=true - -# Number of seconds to wait for neutron vif plugging events to -# arrive before continuing or failing (see -# vif_plugging_is_fatal). If this is set to zero and -# vif_plugging_is_fatal is False, events should not be -# expected to arrive at all. (integer value) -#vif_plugging_timeout=300 - - -# -# Options defined in nova.virt.firewall -# - -# Firewall driver (defaults to hypervisor specific iptables -# driver) (string value) -#firewall_driver= - -# Whether to allow network traffic from same network (boolean -# value) -#allow_same_net_traffic=true - - -# -# Options defined in nova.virt.imagecache -# - -# Number of seconds to wait between runs of the image cache -# manager (integer value) -#image_cache_manager_interval=2400 - -# Where cached images are stored under $instances_path. This -# is NOT the full path - just a folder name. For per-compute- -# host cached images, set to _base_$my_ip (string value) -# Deprecated group/name - [DEFAULT]/base_dir_name -#image_cache_subdirectory_name=_base - -# Should unused base images be removed? (boolean value) -#remove_unused_base_images=true - -# Unused unresized base images younger than this will not be -# removed (integer value) -#remove_unused_original_minimum_age_seconds=86400 - - -# -# Options defined in nova.virt.imagehandler -# - -# Specifies which image handler extension names to use for -# handling images. The first extension in the list which can -# handle the image with a suitable location will be used. 
-# (list value) -#image_handlers=download - - -# -# Options defined in nova.virt.images -# - -# Force backing images to raw format (boolean value) -#force_raw_images=true - - -# -# Options defined in nova.vnc -# - -# Location of VNC console proxy, in the form -# "http://127.0.0.1:6080/vnc_auto.html" (string value) -#novncproxy_base_url=http://127.0.0.1:6080/vnc_auto.html - -# Location of nova xvp VNC console proxy, in the form -# "http://127.0.0.1:6081/console" (string value) -#xvpvncproxy_base_url=http://127.0.0.1:6081/console - -# IP address on which instance vncservers should listen -# (string value) -#vncserver_listen=127.0.0.1 - -# The address to which proxy clients (like nova-xvpvncproxy) -# should connect (string value) -#vncserver_proxyclient_address=127.0.0.1 - -# Enable VNC related features (boolean value) -#vnc_enabled=true - -# Keymap for VNC (string value) -#vnc_keymap=en-us - - -# -# Options defined in nova.vnc.xvp_proxy -# - -# Port that the XCP VNC proxy should bind to (integer value) -#xvpvncproxy_port=6081 - -# Address that the XCP VNC proxy should bind to (string value) -#xvpvncproxy_host=0.0.0.0 - - -# -# Options defined in nova.volume -# - -# The full class name of the volume API class to use (string -# value) -#volume_api_class=nova.volume.cinder.API - - -# -# Options defined in nova.volume.cinder -# - -# Info to match when looking for cinder in the service -# catalog. Format is: separated values of the form: -# <service_type>:<service_name>:<endpoint_type> (string value) -#cinder_catalog_info=volume:cinder:publicURL - -# Override service catalog lookup with template for cinder -# endpoint e.g. http://localhost:8776/v1/%(project_id)s -# (string value) -#cinder_endpoint_template= - -# Region name of this node (string value) -#os_region_name= - -# Location of ca certificates file to use for cinder client -# requests. 
(string value) -#cinder_ca_certificates_file= - -# Number of cinderclient retries on failed http calls (integer -# value) -#cinder_http_retries=3 - -# Allow to perform insecure SSL requests to cinder (boolean -# value) -#cinder_api_insecure=false - -# Allow attach between instance and volume in different -# availability zones. (boolean value) -#cinder_cross_az_attach=true - -# Keystone Cinder account username (string value) -#cinder_admin_user= - -# Keystone Cinder account password (string value) -#cinder_admin_password= - -# Keystone Cinder account tenant name (string value) -#cinder_admin_tenant_name=service - -# Complete public Identity API endpoint (string value) -#cinder_auth_uri= - - -[baremetal] - -# -# Options defined in nova.virt.baremetal.db.api -# - -# The backend to use for bare-metal database (string value) -#db_backend=sqlalchemy - - -# -# Options defined in nova.virt.baremetal.db.sqlalchemy.session -# - -# The SQLAlchemy connection string used to connect to the -# bare-metal database (string value) -#sql_connection=sqlite:///$state_path/baremetal_nova.sqlite - - -# -# Options defined in nova.virt.baremetal.driver -# - -# Baremetal VIF driver. (string value) -#vif_driver=nova.virt.baremetal.vif_driver.BareMetalVIFDriver - -# Baremetal volume driver. (string value) -#volume_driver=nova.virt.baremetal.volume_driver.LibvirtVolumeDriver - -# A list of additional capabilities corresponding to -# flavor_extra_specs for this compute host to advertise. 
Valid -# entries are name=value pairs. For example: "key1:val1, -# key2:val2" (list value) -# Deprecated group/name - [DEFAULT]/instance_type_extra_specs -#flavor_extra_specs= - -# Baremetal driver back-end (pxe or tilera) (string value) -#driver=nova.virt.baremetal.pxe.PXE - -# Baremetal power management method (string value) -#power_manager=nova.virt.baremetal.ipmi.IPMI - -# Baremetal compute node's tftp root path (string value) -#tftp_root=/tftpboot - - -# -# Options defined in nova.virt.baremetal.ipmi -# - -# Path to baremetal terminal program (string value) -#terminal=shellinaboxd - -# Path to baremetal terminal SSL cert(PEM) (string value) -#terminal_cert_dir= - -# Path to directory that stores pidfiles of baremetal_terminal -# (string value) -#terminal_pid_dir=$state_path/baremetal/console - -# Maximum number of retries for IPMI operations (integer -# value) -#ipmi_power_retry=10 - - -# -# Options defined in nova.virt.baremetal.pxe -# - -# Default kernel image ID used in deployment phase (string -# value) -#deploy_kernel= - -# Default ramdisk image ID used in deployment phase (string -# value) -#deploy_ramdisk= - -# Template file for injected network config (string value) -#net_config_template=$pybasedir/nova/virt/baremetal/net-dhcp.ubuntu.template - -# Additional append parameters for baremetal PXE boot (string -# value) -#pxe_append_params=nofb nomodeset vga=normal - -# Template file for PXE configuration (string value) -#pxe_config_template=$pybasedir/nova/virt/baremetal/pxe_config.template - -# If True, enable file injection for network info, files and -# admin password (boolean value) -#use_file_injection=false - -# Timeout for PXE deployments. Default: 0 (unlimited) (integer -# value) -#pxe_deploy_timeout=0 - -# If set, pass the network configuration details to the -# initramfs via cmdline. (boolean value) -#pxe_network_config=false - -# This gets passed to Neutron as the bootfile dhcp parameter. 
-# (string value) -#pxe_bootfile_name=pxelinux.0 - - -# -# Options defined in nova.virt.baremetal.tilera_pdu -# - -# IP address of tilera pdu (string value) -#tile_pdu_ip=10.0.100.1 - -# Management script for tilera pdu (string value) -#tile_pdu_mgr=/tftpboot/pdu_mgr - -# Power status of tilera PDU is OFF (integer value) -#tile_pdu_off=2 - -# Power status of tilera PDU is ON (integer value) -#tile_pdu_on=1 - -# Power status of tilera PDU (integer value) -#tile_pdu_status=9 - -# Wait time in seconds until check the result after tilera -# power operations (integer value) -#tile_power_wait=9 - - -# -# Options defined in nova.virt.baremetal.virtual_power_driver -# - -# IP or name to virtual power host (string value) -#virtual_power_ssh_host= - -# Port to use for ssh to virtual power host (integer value) -#virtual_power_ssh_port=22 - -# Base command to use for virtual power(vbox, virsh) (string -# value) -#virtual_power_type=virsh - -# User to execute virtual power commands as (string value) -#virtual_power_host_user= - -# Password for virtual power host_user (string value) -#virtual_power_host_pass= - -# The ssh key for virtual power host_user (string value) -#virtual_power_host_key= - - -# -# Options defined in nova.virt.baremetal.volume_driver -# - -# Do not set this out of dev/test environments. If a node does -# not have a fixed PXE IP address, volumes are exported with -# globally opened ACL (boolean value) -#use_unsafe_iscsi=false - -# The iSCSI IQN prefix used in baremetal volume connections. 
-# (string value) -#iscsi_iqn_prefix=iqn.2010-10.org.openstack.baremetal - - -[cells] - -# -# Options defined in nova.cells.manager -# - -# Cells communication driver to use (string value) -#driver=nova.cells.rpc_driver.CellsRPCDriver - -# Number of seconds after an instance was updated or deleted -# to continue to update cells (integer value) -#instance_updated_at_threshold=3600 - -# Number of instances to update per periodic task run (integer -# value) -#instance_update_num_instances=1 - - -# -# Options defined in nova.cells.messaging -# - -# Maximum number of hops for cells routing. (integer value) -#max_hop_count=10 - -# Cells scheduler to use (string value) -#scheduler=nova.cells.scheduler.CellsScheduler - - -# -# Options defined in nova.cells.opts -# - -# Enable cell functionality (boolean value) -#enable=false - -# The topic cells nodes listen on (string value) -#topic=cells - -# Manager for cells (string value) -#manager=nova.cells.manager.CellsManager - -# Name of this cell (string value) -#name=nova - -# Key/Multi-value list with the capabilities of the cell (list -# value) -#capabilities=hypervisor=xenserver;kvm,os=linux;windows - -# Seconds to wait for response from a call to a cell. (integer -# value) -#call_timeout=60 - -# Percentage of cell capacity to hold in reserve. Affects both -# memory and disk utilization (floating point value) -#reserve_percent=10.0 - -# Type of cell: api or compute (string value) -#cell_type=compute - -# Number of seconds after which a lack of capability and -# capacity updates signals the child cell is to be treated as -# a mute. (integer value) -#mute_child_interval=300 - -# Seconds between bandwidth updates for cells. (integer value) -#bandwidth_update_interval=600 - - -# -# Options defined in nova.cells.rpc_driver -# - -# Base queue name to use when communicating between cells. -# Various topics by message type will be appended to this. 
-# (string value) -#rpc_driver_queue_base=cells.intercell - - -# -# Options defined in nova.cells.scheduler -# - -# Filter classes the cells scheduler should use. An entry of -# "nova.cells.filters.all_filters" maps to all cells filters -# included with nova. (list value) -#scheduler_filter_classes=nova.cells.filters.all_filters - -# Weigher classes the cells scheduler should use. An entry of -# "nova.cells.weights.all_weighers" maps to all cell weighers -# included with nova. (list value) -#scheduler_weight_classes=nova.cells.weights.all_weighers - -# How many retries when no cells are available. (integer -# value) -#scheduler_retries=10 - -# How often to retry in seconds when no cells are available. -# (integer value) -#scheduler_retry_delay=2 - - -# -# Options defined in nova.cells.state -# - -# Interval, in seconds, for getting fresh cell information -# from the database. (integer value) -#db_check_interval=60 - -# Configuration file from which to read cells configuration. -# If given, overrides reading cells from the database. (string -# value) -#cells_config= - - -# -# Options defined in nova.cells.weights.mute_child -# - -# Multiplier used to weigh mute children. (The value should be -# negative.) (floating point value) -#mute_weight_multiplier=-10.0 - -# Weight value assigned to mute children. (The value should be -# positive.) (floating point value) -#mute_weight_value=1000.0 - - -# -# Options defined in nova.cells.weights.ram_by_instance_type -# - -# Multiplier used for weighing ram. Negative numbers mean to -# stack vs spread. (floating point value) -#ram_weight_multiplier=10.0 - - -# -# Options defined in nova.cells.weights.weight_offset -# - -# Multiplier used to weigh offset weigher. 
(floating point -# value) -#offset_weight_multiplier=1.0 - - -[conductor] - -# -# Options defined in nova.conductor.api -# - -# Perform nova-conductor operations locally (boolean value) -#use_local=false - -# The topic on which conductor nodes listen (string value) -#topic=conductor - -# Full class name for the Manager for conductor (string value) -#manager=nova.conductor.manager.ConductorManager - -# Number of workers for OpenStack Conductor service. The -# default will be the number of CPUs available. (integer -# value) -#workers= - - -[database] - -# -# Options defined in nova.db.sqlalchemy.api -# - -# The SQLAlchemy connection string used to connect to the -# slave database (string value) -#slave_connection= - - -# -# Options defined in nova.openstack.common.db.options -# - -# The file name to use with SQLite (string value) -#sqlite_db=nova.sqlite - -# If True, SQLite uses synchronous mode (boolean value) -#sqlite_synchronous=true - -# The backend to use for db (string value) -# Deprecated group/name - [DEFAULT]/db_backend -#backend=sqlalchemy - -# The SQLAlchemy connection string used to connect to the -# database (string value) -# Deprecated group/name - [DEFAULT]/sql_connection -# Deprecated group/name - [DATABASE]/sql_connection -# Deprecated group/name - [sql]/connection -#connection= - -# The SQL mode to be used for MySQL sessions (default is -# empty, meaning do not override any server-side SQL mode -# setting) (string value) -#mysql_sql_mode= - -# Timeout before idle sql connections are reaped (integer -# value) -# Deprecated group/name - [DEFAULT]/sql_idle_timeout -# Deprecated group/name - [DATABASE]/sql_idle_timeout -# Deprecated group/name - [sql]/idle_timeout -#idle_timeout=3600 - -# Minimum number of SQL connections to keep open in a pool -# (integer value) -# Deprecated group/name - [DEFAULT]/sql_min_pool_size -# Deprecated group/name - [DATABASE]/sql_min_pool_size -#min_pool_size=1 - -# Maximum number of SQL connections to keep open in a pool -# 
(integer value) -# Deprecated group/name - [DEFAULT]/sql_max_pool_size -# Deprecated group/name - [DATABASE]/sql_max_pool_size -#max_pool_size= - -# Maximum db connection retries during startup. (setting -1 -# implies an infinite retry count) (integer value) -# Deprecated group/name - [DEFAULT]/sql_max_retries -# Deprecated group/name - [DATABASE]/sql_max_retries -#max_retries=10 - -# Interval between retries of opening a sql connection -# (integer value) -# Deprecated group/name - [DEFAULT]/sql_retry_interval -# Deprecated group/name - [DATABASE]/reconnect_interval -#retry_interval=10 - -# If set, use this value for max_overflow with sqlalchemy -# (integer value) -# Deprecated group/name - [DEFAULT]/sql_max_overflow -# Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow -#max_overflow= - -# Verbosity of SQL debugging information. 0=None, -# 100=Everything (integer value) -# Deprecated group/name - [DEFAULT]/sql_connection_debug -#connection_debug=0 - -# Add python stack traces to SQL as comment strings (boolean -# value) -# Deprecated group/name - [DEFAULT]/sql_connection_trace -#connection_trace=false - -# If set, use this value for pool_timeout with sqlalchemy -# (integer value) -# Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout -#pool_timeout= - -# Enable the experimental use of database reconnect on -# connection lost (boolean value) -#use_db_reconnect=false - -# seconds between db connection retries (integer value) -#db_retry_interval=1 - -# Whether to increase interval between db connection retries, -# up to db_max_retry_interval (boolean value) -#db_inc_retry_interval=true - -# max seconds between db connection retries, if -# db_inc_retry_interval is enabled (integer value) -#db_max_retry_interval=10 - -# maximum db connection retries before error is raised. 
-# (setting -1 implies an infinite retry count) (integer value) -#db_max_retries=20 - - -[hyperv] - -# -# Options defined in nova.virt.hyperv.pathutils -# - -# The name of a Windows share name mapped to the -# "instances_path" dir and used by the resize feature to copy -# files to the target host. If left blank, an administrative -# share will be used, looking for the same "instances_path" -# used locally (string value) -#instances_path_share= - - -# -# Options defined in nova.virt.hyperv.utilsfactory -# - -# Force V1 WMI utility classes (boolean value) -#force_hyperv_utils_v1=false - -# Force V1 volume utility class (boolean value) -#force_volumeutils_v1=false - - -# -# Options defined in nova.virt.hyperv.vif -# - -# External virtual switch Name, if not provided, the first -# external virtual switch is used (string value) -#vswitch_name= - - -# -# Options defined in nova.virt.hyperv.vmops -# - -# Required for live migration among hosts with different CPU -# features (boolean value) -#limit_cpu_features=false - -# Sets the admin password in the config drive image (boolean -# value) -#config_drive_inject_password=false - -# Path of qemu-img command which is used to convert between -# different image types (string value) -#qemu_img_cmd=qemu-img.exe - -# Attaches the Config Drive image as a cdrom drive instead of -# a disk drive (boolean value) -#config_drive_cdrom=false - -# Enables metrics collections for an instance by using -# Hyper-V's metric APIs. Collected data can by retrieved by -# other apps and services, e.g.: Ceilometer. Requires Hyper-V -# / Windows Server 2012 and above (boolean value) -#enable_instance_metrics_collection=false - -# Enables dynamic memory allocation (ballooning) when set to a -# value greater than 1. The value expresses the ratio between -# the total RAM assigned to an instance and its startup RAM -# amount. 
For example a ratio of 2.0 for an instance with -# 1024MB of RAM implies 512MB of RAM allocated at startup -# (floating point value) -#dynamic_memory_ratio=1.0 - - -# -# Options defined in nova.virt.hyperv.volumeops -# - -# The number of times to retry to attach a volume (integer -# value) -#volume_attach_retry_count=10 - -# Interval between volume attachment attempts, in seconds -# (integer value) -#volume_attach_retry_interval=5 - -# The number of times to retry checking for a disk mounted via -# iSCSI. (integer value) -#mounted_disk_query_retry_count=10 - -# Interval between checks for a mounted iSCSI disk, in -# seconds. (integer value) -#mounted_disk_query_retry_interval=5 - - -[image_file_url] - -# -# Options defined in nova.image.download.file -# - -# List of file systems that are configured in this file in the -# image_file_url: sections (list value) -#filesystems= - - -[keymgr] - -# -# Options defined in nova.keymgr -# - -# The full class name of the key manager API class (string -# value) -#api_class=nova.keymgr.conf_key_mgr.ConfKeyManager - - -# -# Options defined in nova.keymgr.conf_key_mgr -# - -# Fixed key returned by key manager, specified in hex (string -# value) -#fixed_key= - - -[keystone_authtoken] - -# -# Options defined in keystoneclient.middleware.auth_token -# - -# Prefix to prepend at the beginning of the path. Deprecated, -# use identity_uri. (string value) -#auth_admin_prefix= - -# Host providing the admin Identity API endpoint. Deprecated, -# use identity_uri. (string value) -#auth_host=127.0.0.1 - -# Port of the admin Identity API endpoint. Deprecated, use -# identity_uri. (integer value) -#auth_port=35357 - -# Protocol of the admin Identity API endpoint (http or https). -# Deprecated, use identity_uri. (string value) -#auth_protocol=https - -# Complete public Identity API endpoint (string value) -#auth_uri= - -# Complete admin Identity API endpoint. This should specify -# the unversioned root endpoint e.g. 
https://localhost:35357/ -# (string value) -#identity_uri= - -# API version of the admin Identity API endpoint (string -# value) -#auth_version= - -# Do not handle authorization requests within the middleware, -# but delegate the authorization decision to downstream WSGI -# components (boolean value) -#delay_auth_decision=false - -# Request timeout value for communicating with Identity API -# server. (integer value) -#http_connect_timeout= - -# Number of times to retry when communicating with the -# Identity API Server. (integer value) -#http_request_max_retries=3 - -# This option is deprecated and may be removed in a future -# release. Single shared secret with the Keystone -# configuration used for bootstrapping a Keystone -# installation, or otherwise bypassing the normal -# authentication process. This option should not be used, use -# `admin_user` and `admin_password` instead. (string value) -#admin_token= - -# Keystone account username (string value) -#admin_user= - -# Keystone account password (string value) -#admin_password= - -# Keystone service account tenant name to validate user tokens -# (string value) -#admin_tenant_name=admin - -# Env key for the swift cache (string value) -#cache= - -# Required if Keystone server requires client certificate -# (string value) -#certfile= - -# Required if Keystone server requires client certificate -# (string value) -#keyfile= - -# A PEM encoded Certificate Authority to use when verifying -# HTTPs connections. Defaults to system CAs. (string value) -#cafile= - -# Verify HTTPS connections. (boolean value) -#insecure=false - -# Directory used to cache files related to PKI tokens (string -# value) -#signing_dir= - -# Optionally specify a list of memcached server(s) to use for -# caching. If left undefined, tokens will instead be cached -# in-process. 
(list value) -# Deprecated group/name - [DEFAULT]/memcache_servers -#memcached_servers= - -# In order to prevent excessive effort spent validating -# tokens, the middleware caches previously-seen tokens for a -# configurable duration (in seconds). Set to -1 to disable -# caching completely. (integer value) -#token_cache_time=300 - -# Determines the frequency at which the list of revoked tokens -# is retrieved from the Identity service (in seconds). A high -# number of revocation events combined with a low cache -# duration may significantly reduce performance. (integer -# value) -#revocation_cache_time=10 - -# (optional) if defined, indicate whether token data should be -# authenticated or authenticated and encrypted. Acceptable -# values are MAC or ENCRYPT. If MAC, token data is -# authenticated (with HMAC) in the cache. If ENCRYPT, token -# data is encrypted and authenticated in the cache. If the -# value is not one of these options or empty, auth_token will -# raise an exception on initialization. (string value) -#memcache_security_strategy= - -# (optional, mandatory if memcache_security_strategy is -# defined) this string is used for key derivation. (string -# value) -#memcache_secret_key= - -# (optional) indicate whether to set the X-Service-Catalog -# header. If False, middleware will not ask for service -# catalog on token validation and will not set the X-Service- -# Catalog header. (boolean value) -#include_service_catalog=true - -# Used to control the use and type of token binding. Can be -# set to: "disabled" to not check token binding. "permissive" -# (default) to validate binding information if the bind type -# is of a form known to the server and ignore it if not. -# "strict" like "permissive" but if the bind type is unknown -# the token will be rejected. "required" any form of token -# binding is needed to be allowed. Finally the name of a -# binding method that must be present in tokens. 
(string -# value) -#enforce_token_bind=permissive - -# If true, the revocation list will be checked for cached -# tokens. This requires that PKI tokens are configured on the -# Keystone server. (boolean value) -#check_revocations_for_cached=false - -# Hash algorithms to use for hashing PKI tokens. This may be a -# single algorithm or multiple. The algorithms are those -# supported by Python standard hashlib.new(). The hashes will -# be tried in the order given, so put the preferred one first -# for performance. The result of the first hash will be stored -# in the cache. This will typically be set to multiple values -# only while migrating from a less secure algorithm to a more -# secure one. Once all the old tokens are expired this option -# should be set to a single value for better performance. -# (list value) -#hash_algorithms=md5 - - -[libvirt] - -# -# Options defined in nova.virt.libvirt.driver -# - -# Rescue ami image (string value) -#rescue_image_id= - -# Rescue aki image (string value) -#rescue_kernel_id= - -# Rescue ari image (string value) -#rescue_ramdisk_id= - -# Libvirt domain type (valid options are: kvm, lxc, qemu, uml, -# xen) (string value) -# Deprecated group/name - [DEFAULT]/libvirt_type -#virt_type=kvm - -# Override the default libvirt URI (which is dependent on -# virt_type) (string value) -# Deprecated group/name - [DEFAULT]/libvirt_uri -#connection_uri= - -# Inject the admin password at boot time, without an agent. 
-# (boolean value) -# Deprecated group/name - [DEFAULT]/libvirt_inject_password -#inject_password=false - -# Inject the ssh public key at boot time (boolean value) -# Deprecated group/name - [DEFAULT]/libvirt_inject_key -#inject_key=false - -# The partition to inject to : -2 => disable, -1 => inspect -# (libguestfs only), 0 => not partitioned, >0 => partition -# number (integer value) -# Deprecated group/name - [DEFAULT]/libvirt_inject_partition -#inject_partition=-2 - -# Sync virtual and real mouse cursors in Windows VMs (boolean -# value) -#use_usb_tablet=true - -# Migration target URI (any included "%s" is replaced with the -# migration target hostname) (string value) -#live_migration_uri=qemu+tcp://%s/system - -# Migration flags to be set for live migration (string value) -#live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER - -# Migration flags to be set for block migration (string value) -#block_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER, VIR_MIGRATE_NON_SHARED_INC - -# Maximum bandwidth to be used during migration, in Mbps -# (integer value) -#live_migration_bandwidth=0 - -# Snapshot image format (valid options are : raw, qcow2, vmdk, -# vdi). Defaults to same as source image (string value) -#snapshot_image_format= - -# DEPRECATED. The libvirt VIF driver to configure the -# VIFs.This option is deprecated and will be removed in the -# Juno release. (string value) -# Deprecated group/name - [DEFAULT]/libvirt_vif_driver -#vif_driver=nova.virt.libvirt.vif.LibvirtGenericVIFDriver - -# Libvirt handlers for remote volumes. 
(list value) -# Deprecated group/name - [DEFAULT]/libvirt_volume_drivers -#volume_drivers=iscsi=nova.virt.libvirt.volume.LibvirtISCSIVolumeDriver,iser=nova.virt.libvirt.volume.LibvirtISERVolumeDriver,local=nova.virt.libvirt.volume.LibvirtVolumeDriver,fake=nova.virt.libvirt.volume.LibvirtFakeVolumeDriver,rbd=nova.virt.libvirt.volume.LibvirtNetVolumeDriver,sheepdog=nova.virt.libvirt.volume.LibvirtNetVolumeDriver,nfs=nova.virt.libvirt.volume.LibvirtNFSVolumeDriver,aoe=nova.virt.libvirt.volume.LibvirtAOEVolumeDriver,glusterfs=nova.virt.libvirt.volume.LibvirtGlusterfsVolumeDriver,fibre_channel=nova.virt.libvirt.volume.LibvirtFibreChannelVolumeDriver,scality=nova.virt.libvirt.volume.LibvirtScalityVolumeDriver - -# Override the default disk prefix for the devices attached to -# a server, which is dependent on virt_type. (valid options -# are: sd, xvd, uvd, vd) (string value) -# Deprecated group/name - [DEFAULT]/libvirt_disk_prefix -#disk_prefix= - -# Number of seconds to wait for instance to shut down after -# soft reboot request is made. We fall back to hard reboot if -# instance does not shutdown within this window. (integer -# value) -# Deprecated group/name - [DEFAULT]/libvirt_wait_soft_reboot_seconds -#wait_soft_reboot_seconds=120 - -# Set to "host-model" to clone the host CPU feature flags; to -# "host-passthrough" to use the host CPU model exactly; to -# "custom" to use a named CPU model; to "none" to not set any -# CPU model. If virt_type="kvm|qemu", it will default to -# "host-model", otherwise it will default to "none" (string -# value) -# Deprecated group/name - [DEFAULT]/libvirt_cpu_mode -#cpu_mode= - -# Set to a named libvirt CPU model (see names listed in -# /usr/share/libvirt/cpu_map.xml). 
Only has effect if -# cpu_mode="custom" and virt_type="kvm|qemu" (string value) -# Deprecated group/name - [DEFAULT]/libvirt_cpu_model -#cpu_model= - -# Location where libvirt driver will store snapshots before -# uploading them to image service (string value) -# Deprecated group/name - [DEFAULT]/libvirt_snapshots_directory -#snapshots_directory=$instances_path/snapshots - -# Location where the Xen hvmloader is kept (string value) -#xen_hvmloader_path=/usr/lib/xen/boot/hvmloader - -# Specific cachemodes to use for different disk types e.g: -# file=directsync,block=none (list value) -#disk_cachemodes= - -# A path to a device that will be used as source of entropy on -# the host. Permitted options are: /dev/random or /dev/hwrng -# (string value) -#rng_dev_path= - - -# -# Options defined in nova.virt.libvirt.imagebackend -# - -# VM Images format. Acceptable values are: raw, qcow2, lvm, -# rbd, default. If default is specified, then use_cow_images -# flag is used instead of this one. (string value) -# Deprecated group/name - [DEFAULT]/libvirt_images_type -#images_type=default - -# LVM Volume Group that is used for VM images, when you -# specify images_type=lvm. (string value) -# Deprecated group/name - [DEFAULT]/libvirt_images_volume_group -#images_volume_group= - -# Create sparse logical volumes (with virtualsize) if this -# flag is set to True. (boolean value) -# Deprecated group/name - [DEFAULT]/libvirt_sparse_logical_volumes -#sparse_logical_volumes=false - -# Method used to wipe old volumes (valid options are: none, -# zero, shred) (string value) -#volume_clear=zero - -# Size in MiB to wipe at start of old volumes. 
0 => all -# (integer value) -#volume_clear_size=0 - -# The RADOS pool in which rbd volumes are stored (string -# value) -# Deprecated group/name - [DEFAULT]/libvirt_images_rbd_pool -#images_rbd_pool=rbd - -# Path to the ceph configuration file to use (string value) -# Deprecated group/name - [DEFAULT]/libvirt_images_rbd_ceph_conf -#images_rbd_ceph_conf= - - -# -# Options defined in nova.virt.libvirt.imagecache -# - -# Allows image information files to be stored in non-standard -# locations (string value) -#image_info_filename_pattern=$instances_path/$image_cache_subdirectory_name/%(image)s.info - -# Should unused kernel images be removed? This is only safe to -# enable if all compute nodes have been updated to support -# this option. This will be enabled by default in future. -# (boolean value) -#remove_unused_kernels=false - -# Unused resized base images younger than this will not be -# removed (integer value) -#remove_unused_resized_minimum_age_seconds=3600 - -# Write a checksum for files in _base to disk (boolean value) -#checksum_base_images=false - -# How frequently to checksum base images (integer value) -#checksum_interval_seconds=3600 - - -# -# Options defined in nova.virt.libvirt.utils -# - -# Compress snapshot images when possible. 
This currently -# applies exclusively to qcow2 images (boolean value) -# Deprecated group/name - [DEFAULT]/libvirt_snapshot_compression -#snapshot_compression=false - - -# -# Options defined in nova.virt.libvirt.vif -# - -# Use virtio for bridge interfaces with KVM/QEMU (boolean -# value) -# Deprecated group/name - [DEFAULT]/libvirt_use_virtio_for_bridges -#use_virtio_for_bridges=true - - -# -# Options defined in nova.virt.libvirt.volume -# - -# Number of times to rescan iSCSI target to find volume -# (integer value) -#num_iscsi_scan_tries=5 - -# Number of times to rescan iSER target to find volume -# (integer value) -#num_iser_scan_tries=5 - -# The RADOS client name for accessing rbd volumes (string -# value) -#rbd_user= - -# The libvirt UUID of the secret for the rbd_user volumes -# (string value) -#rbd_secret_uuid= - -# Directory where the NFS volume is mounted on the compute -# node (string value) -#nfs_mount_point_base=$state_path/mnt - -# Mount options passed to the NFS client. See the -# nfs man page for details (string value) -#nfs_mount_options= - -# Number of times to rediscover AoE target to find volume -# (integer value) -#num_aoe_discover_tries=3 - -# Directory where the glusterfs volume is mounted on the -# compute node (string value) -#glusterfs_mount_point_base=$state_path/mnt - -# Use multipath connection of the iSCSI volume (boolean value) -# Deprecated group/name - [DEFAULT]/libvirt_iscsi_use_multipath -#iscsi_use_multipath=false - -# Use multipath connection of the iSER volume (boolean value) -# Deprecated group/name - [DEFAULT]/libvirt_iser_use_multipath -#iser_use_multipath=false - -# Path or URL to Scality SOFS configuration file (string -# value) -#scality_sofs_config= - -# Base dir where Scality SOFS shall be mounted (string value) -#scality_sofs_mount_point=$state_path/scality - -# Protocols listed here will be accessed directly from QEMU. 
-# Currently supported protocols: [gluster] (list value) -#qemu_allowed_storage_drivers= - - -[matchmaker_ring] - -# -# Options defined in oslo.messaging -# - -# Matchmaker ring file (JSON). (string value) -# Deprecated group/name - [DEFAULT]/matchmaker_ringfile -#ringfile=/etc/oslo/matchmaker_ring.json - - -[metrics] - -# -# Options defined in nova.scheduler.weights.metrics -# - -# Multiplier used for weighing metrics. (floating point value) -#weight_multiplier=1.0 - -# How the metrics are going to be weighed. This should be in -# the form of "<name1>=<ratio1>, <name2>=<ratio2>, ...", where -# <name1> is one of the metrics to be weighed, and <ratio1> is -# the corresponding ratio. So for "name1=1.0, name2=-1.0" The -# final weight would be name1.value * 1.0 + name2.value * -# -1.0. (list value) -#weight_setting= - -# How to treat the unavailable metrics. When a metric is NOT -# available for a host, if it is set to be True, it would -# raise an exception, so it is recommended to use the -# scheduler filter MetricFilter to filter out those hosts. If -# it is set to be False, the unavailable metric would be -# treated as a negative factor in weighing process, the -# returned value would be set by the option -# weight_of_unavailable. (boolean value) -#required=true - -# The final weight value to be returned if required is set to -# False and any one of the metrics set by weight_setting is -# unavailable. (floating point value) -#weight_of_unavailable=-10000.0 - - -[osapi_v3] - -# -# Options defined in nova.api.openstack -# - -# Whether the V3 API is enabled or not (boolean value) -#enabled=false - -# A list of v3 API extensions to never load. Specify the -# extension aliases here. (list value) -#extensions_blacklist= - -# If the list is not empty then a v3 API extension will only -# be loaded if it exists in this list. Specify the extension -# aliases here. 
(list value) -#extensions_whitelist= - - -[rdp] - -# -# Options defined in nova.rdp -# - -# Location of RDP html5 console proxy, in the form -# "http://127.0.0.1:6083/" (string value) -#html5_proxy_base_url=http://127.0.0.1:6083/ - -# Enable RDP related features (boolean value) -#enabled=false - - -[solver_scheduler] - -# -# Options defined in nova.scheduler.solver_scheduler -# - -# The pluggable solver implementation to use. By default, a -# reference solver implementation is included that models the -# problem as a Linear Programming (LP) problem using PULP. -# (string value) -#scheduler_host_solver=nova.scheduler.solvers.hosts_pulp_solver.HostsPulpSolver - - -# -# Options defined in nova.scheduler.solvers -# - -# Which constraints to use in scheduler solver (list value) -#scheduler_solver_constraints= - -# Assign weight for each cost (list value) -#scheduler_solver_cost_weights=RamCost:1.0 - -# Which cost matrices to use in the scheduler solver. (list -# value) -#scheduler_solver_costs=RamCost - - -[spice] - -# -# Options defined in nova.spice -# - -# Location of spice HTML5 console proxy, in the form -# "http://127.0.0.1:6082/spice_auto.html" (string value) -#html5proxy_base_url=http://127.0.0.1:6082/spice_auto.html - -# IP address on which instance spice server should listen -# (string value) -#server_listen=127.0.0.1 - -# The address to which proxy clients (like nova- -# spicehtml5proxy) should connect (string value) -#server_proxyclient_address=127.0.0.1 - -# Enable spice related features (boolean value) -#enabled=false - -# Enable spice guest agent support (boolean value) -#agent_enabled=true - -# Keymap for spice (string value) -#keymap=en-us - - -[ssl] - -# -# Options defined in nova.openstack.common.sslutils -# - -# CA certificate file to use to verify connecting clients. -# (string value) -#ca_file= - -# Certificate file to use when starting the server securely. 
-# (string value) -#cert_file= - -# Private key file to use when starting the server securely. -# (string value) -#key_file= - - -[trusted_computing] - -# -# Options defined in nova.scheduler.filters.trusted_filter -# - -# Attestation server HTTP (string value) -#attestation_server= - -# Attestation server Cert file for Identity verification -# (string value) -#attestation_server_ca_file= - -# Attestation server port (string value) -#attestation_port=8443 - -# Attestation web API URL (string value) -#attestation_api_url=/OpenAttestationWebServices/V1.0 - -# Attestation authorization blob - must change (string value) -#attestation_auth_blob= - -# Attestation status cache valid period length (integer value) -#attestation_auth_timeout=60 - - -[upgrade_levels] - -# -# Options defined in nova.baserpc -# - -# Set a version cap for messages sent to the base api in any -# service (string value) -#baseapi= - - -# -# Options defined in nova.cells.rpc_driver -# - -# Set a version cap for messages sent between cells services -# (string value) -#intercell= - - -# -# Options defined in nova.cells.rpcapi -# - -# Set a version cap for messages sent to local cells services -# (string value) -#cells= - - -# -# Options defined in nova.cert.rpcapi -# - -# Set a version cap for messages sent to cert services (string -# value) -#cert= - - -# -# Options defined in nova.compute.rpcapi -# - -# Set a version cap for messages sent to compute services. If -# you plan to do a live upgrade from havana to icehouse, you -# should set this option to "icehouse-compat" before beginning -# the live upgrade procedure. 
(string value) -#compute= - - -# -# Options defined in nova.conductor.rpcapi -# - -# Set a version cap for messages sent to conductor services -# (string value) -#conductor= - - -# -# Options defined in nova.console.rpcapi -# - -# Set a version cap for messages sent to console services -# (string value) -#console= - - -# -# Options defined in nova.consoleauth.rpcapi -# - -# Set a version cap for messages sent to consoleauth services -# (string value) -#consoleauth= - - -# -# Options defined in nova.network.rpcapi -# - -# Set a version cap for messages sent to network services -# (string value) -#network= - - -# -# Options defined in nova.scheduler.rpcapi -# - -# Set a version cap for messages sent to scheduler services -# (string value) -#scheduler= - - -[vmware] - -# -# Options defined in nova.virt.vmwareapi.driver -# - -# Hostname or IP address for connection to VMware ESX/VC host. -# (string value) -#host_ip= - -# Username for connection to VMware ESX/VC host. (string -# value) -#host_username= - -# Password for connection to VMware ESX/VC host. (string -# value) -#host_password= - -# Name of a VMware Cluster ComputeResource. Used only if -# compute_driver is vmwareapi.VMwareVCDriver. (multi valued) -#cluster_name= - -# Regex to match the name of a datastore. (string value) -#datastore_regex= - -# The interval used for polling of remote tasks. (floating -# point value) -#task_poll_interval=0.5 - -# The number of times we retry on failures, e.g., socket -# error, etc. 
(integer value) -#api_retry_count=10 - -# VNC starting port (integer value) -#vnc_port=5900 - -# Total number of VNC ports (integer value) -#vnc_port_total=10000 - -# Whether to use linked clone (boolean value) -#use_linked_clone=true - - -# -# Options defined in nova.virt.vmwareapi.vif -# - -# Physical ethernet adapter name for vlan networking (string -# value) -#vlan_interface=vmnic0 - - -# -# Options defined in nova.virt.vmwareapi.vim -# - -# Optional VIM Service WSDL Location e.g -# http://<server>/vimService.wsdl. Optional over-ride to -# default location for bug work-arounds (string value) -#wsdl_location= - - -# -# Options defined in nova.virt.vmwareapi.vim_util -# - -# The maximum number of ObjectContent data objects that should -# be returned in a single result. A positive value will cause -# the operation to suspend the retrieval when the count of -# objects reaches the specified maximum. The server may still -# limit the count to something less than the configured value. -# Any remaining objects may be retrieved with additional -# requests. (integer value) -#maximum_objects=100 - - -# -# Options defined in nova.virt.vmwareapi.vmops -# - -# Name of Integration Bridge (string value) -#integration_bridge=br-int - - -[xenserver] - -# -# Options defined in nova.virt.xenapi.agent -# - -# Number of seconds to wait for agent reply (integer value) -# Deprecated group/name - [DEFAULT]/agent_timeout -#agent_timeout=30 - -# Number of seconds to wait for agent to be fully operational -# (integer value) -# Deprecated group/name - [DEFAULT]/agent_version_timeout -#agent_version_timeout=300 - -# Number of seconds to wait for agent reply to resetnetwork -# request (integer value) -# Deprecated group/name - [DEFAULT]/agent_resetnetwork_timeout -#agent_resetnetwork_timeout=60 - -# Specifies the path in which the XenAPI guest agent should be -# located. If the agent is present, network configuration is -# not injected into the image. 
Used if -# compute_driver=xenapi.XenAPIDriver and flat_injected=True -# (string value) -# Deprecated group/name - [DEFAULT]/xenapi_agent_path -#agent_path=usr/sbin/xe-update-networking - -# Disables the use of the XenAPI agent in any image regardless -# of what image properties are present. (boolean value) -# Deprecated group/name - [DEFAULT]/xenapi_disable_agent -#disable_agent=false - -# Determines if the XenAPI agent should be used when the image -# used does not contain a hint to declare if the agent is -# present or not. The hint is a glance property -# "xenapi_use_agent" that has the value "True" or "False". -# Note that waiting for the agent when it is not present will -# significantly increase server boot times. (boolean value) -# Deprecated group/name - [DEFAULT]/xenapi_use_agent_default -#use_agent_default=false - - -# -# Options defined in nova.virt.xenapi.client.session -# - -# Timeout in seconds for XenAPI login. (integer value) -# Deprecated group/name - [DEFAULT]/xenapi_login_timeout -#login_timeout=10 - -# Maximum number of concurrent XenAPI connections. Used only -# if compute_driver=xenapi.XenAPIDriver (integer value) -# Deprecated group/name - [DEFAULT]/xenapi_connection_concurrent -#connection_concurrent=5 - - -# -# Options defined in nova.virt.xenapi.driver -# - -# URL for connection to XenServer/Xen Cloud Platform. A -# special value of unix://local can be used to connect to the -# local unix socket. Required if -# compute_driver=xenapi.XenAPIDriver (string value) -# Deprecated group/name - [DEFAULT]/xenapi_connection_url -#connection_url= - -# Username for connection to XenServer/Xen Cloud Platform. -# Used only if compute_driver=xenapi.XenAPIDriver (string -# value) -# Deprecated group/name - [DEFAULT]/xenapi_connection_username -#connection_username=root - -# Password for connection to XenServer/Xen Cloud Platform. 
-# Used only if compute_driver=xenapi.XenAPIDriver (string -# value) -# Deprecated group/name - [DEFAULT]/xenapi_connection_password -#connection_password= - -# The interval used for polling of coalescing vhds. Used only -# if compute_driver=xenapi.XenAPIDriver (floating point value) -# Deprecated group/name - [DEFAULT]/xenapi_vhd_coalesce_poll_interval -#vhd_coalesce_poll_interval=5.0 - -# Ensure compute service is running on host XenAPI connects -# to. (boolean value) -# Deprecated group/name - [DEFAULT]/xenapi_check_host -#check_host=true - -# Max number of times to poll for VHD to coalesce. Used only -# if compute_driver=xenapi.XenAPIDriver (integer value) -# Deprecated group/name - [DEFAULT]/xenapi_vhd_coalesce_max_attempts -#vhd_coalesce_max_attempts=20 - -# Base path to the storage repository (string value) -# Deprecated group/name - [DEFAULT]/xenapi_sr_base_path -#sr_base_path=/var/run/sr-mount - -# The iSCSI Target Host (string value) -# Deprecated group/name - [DEFAULT]/target_host -#target_host= - -# The iSCSI Target Port, default is port 3260 (string value) -# Deprecated group/name - [DEFAULT]/target_port -#target_port=3260 - -# IQN Prefix (string value) -# Deprecated group/name - [DEFAULT]/iqn_prefix -#iqn_prefix=iqn.2010-10.org.openstack - -# Used to enable the remapping of VBD dev (Works around an -# issue in Ubuntu Maverick) (boolean value) -# Deprecated group/name - [DEFAULT]/xenapi_remap_vbd_dev -#remap_vbd_dev=false - -# Specify prefix to remap VBD dev to (ex. /dev/xvdb -> -# /dev/sdb) (string value) -# Deprecated group/name - [DEFAULT]/xenapi_remap_vbd_dev_prefix -#remap_vbd_dev_prefix=sd - - -# -# Options defined in nova.virt.xenapi.image.bittorrent -# - -# Base URL for torrent files. (string value) -# Deprecated group/name - [DEFAULT]/xenapi_torrent_base_url -#torrent_base_url= - -# Probability that peer will become a seeder. 
(1.0 = 100%) -# (floating point value) -# Deprecated group/name - [DEFAULT]/xenapi_torrent_seed_chance -#torrent_seed_chance=1.0 - -# Number of seconds after downloading an image via BitTorrent -# that it should be seeded for other peers. (integer value) -# Deprecated group/name - [DEFAULT]/xenapi_torrent_seed_duration -#torrent_seed_duration=3600 - -# Cached torrent files not accessed within this number of -# seconds can be reaped (integer value) -# Deprecated group/name - [DEFAULT]/xenapi_torrent_max_last_accessed -#torrent_max_last_accessed=86400 - -# Beginning of port range to listen on (integer value) -# Deprecated group/name - [DEFAULT]/xenapi_torrent_listen_port_start -#torrent_listen_port_start=6881 - -# End of port range to listen on (integer value) -# Deprecated group/name - [DEFAULT]/xenapi_torrent_listen_port_end -#torrent_listen_port_end=6891 - -# Number of seconds a download can remain at the same progress -# percentage w/o being considered a stall (integer value) -# Deprecated group/name - [DEFAULT]/xenapi_torrent_download_stall_cutoff -#torrent_download_stall_cutoff=600 - -# Maximum number of seeder processes to run concurrently -# within a given dom0. (-1 = no limit) (integer value) -# Deprecated group/name - [DEFAULT]/xenapi_torrent_max_seeder_processes_per_host -#torrent_max_seeder_processes_per_host=1 - - -# -# Options defined in nova.virt.xenapi.pool -# - -# To use for hosts with different CPUs (boolean value) -# Deprecated group/name - [DEFAULT]/use_join_force -#use_join_force=true - - -# -# Options defined in nova.virt.xenapi.vif -# - -# Name of Integration Bridge used by Open vSwitch (string -# value) -# Deprecated group/name - [DEFAULT]/xenapi_ovs_integration_bridge -#ovs_integration_bridge=xapi1 - - -# -# Options defined in nova.virt.xenapi.vm_utils -# - -# Cache glance images locally. 
`all` will cache all images, -# `some` will only cache images that have the image_property -# `cache_in_nova=True`, and `none` turns off caching entirely -# (string value) -# Deprecated group/name - [DEFAULT]/cache_images -#cache_images=all - -# Compression level for images, e.g., 9 for gzip -9. Range is -# 1-9, 9 being most compressed but most CPU intensive on dom0. -# (integer value) -# Deprecated group/name - [DEFAULT]/xenapi_image_compression_level -#image_compression_level= - -# Default OS type (string value) -# Deprecated group/name - [DEFAULT]/default_os_type -#default_os_type=linux - -# Time to wait for a block device to be created (integer -# value) -# Deprecated group/name - [DEFAULT]/block_device_creation_timeout -#block_device_creation_timeout=10 - -# Maximum size in bytes of kernel or ramdisk images (integer -# value) -# Deprecated group/name - [DEFAULT]/max_kernel_ramdisk_size -#max_kernel_ramdisk_size=16777216 - -# Filter for finding the SR to be used to install guest -# instances on. To use the Local Storage in default -# XenServer/XCP installations set this flag to other-config -# :i18n-key=local-storage. To select an SR with a different -# matching criteria, you could set it to other- -# config:my_favorite_sr=true. On the other hand, to fall back -# on the Default SR, as displayed by XenCenter, set this flag -# to: default-sr:true (string value) -# Deprecated group/name - [DEFAULT]/sr_matching_filter -#sr_matching_filter=default-sr:true - -# Whether to use sparse_copy for copying data on a resize down -# (False will use standard dd). This speeds up resizes down -# considerably since large runs of zeros won't have to be -# rsynced (boolean value) -# Deprecated group/name - [DEFAULT]/xenapi_sparse_copy -#sparse_copy=true - -# Maximum number of retries to unplug VBD (integer value) -# Deprecated group/name - [DEFAULT]/xenapi_num_vbd_unplug_retries -#num_vbd_unplug_retries=10 - -# Whether or not to download images via Bit Torrent -# (all|some|none). 
(string value) -# Deprecated group/name - [DEFAULT]/xenapi_torrent_images -#torrent_images=none - -# Name of network to use for booting iPXE ISOs (string value) -# Deprecated group/name - [DEFAULT]/xenapi_ipxe_network_name -#ipxe_network_name= - -# URL to the iPXE boot menu (string value) -# Deprecated group/name - [DEFAULT]/xenapi_ipxe_boot_menu_url -#ipxe_boot_menu_url= - -# Name and optionally path of the tool used for ISO image -# creation (string value) -# Deprecated group/name - [DEFAULT]/xenapi_ipxe_mkisofs_cmd -#ipxe_mkisofs_cmd=mkisofs - - -# -# Options defined in nova.virt.xenapi.vmops -# - -# Number of seconds to wait for instance to go to running -# state (integer value) -# Deprecated group/name - [DEFAULT]/xenapi_running_timeout -#running_timeout=60 - -# The XenAPI VIF driver using XenServer Network APIs. (string -# value) -# Deprecated group/name - [DEFAULT]/xenapi_vif_driver -#vif_driver=nova.virt.xenapi.vif.XenAPIBridgeDriver - -# Dom0 plugin driver used to handle image uploads. 
(string -# value) -# Deprecated group/name - [DEFAULT]/xenapi_image_upload_handler -#image_upload_handler=nova.virt.xenapi.image.glance.GlanceStore - - -# -# Options defined in nova.virt.xenapi.volume_utils -# - -# Number of seconds to wait for an SR to settle if the VDI -# does not exist when first introduced (integer value) -#introduce_vdi_retry_wait=20 - - -[zookeeper] - -# -# Options defined in nova.servicegroup.drivers.zk -# - -# The ZooKeeper addresses for servicegroup service in the -# format of host1:port,host2:port,host3:port (string value) -#address= - -# The recv_timeout parameter for the zk session (integer -# value) -#recv_timeout=4000 - -# The prefix used in ZooKeeper to store ephemeral nodes -# (string value) -#sg_prefix=/servicegroups - -# Number of seconds to wait until retrying to join the session -# (integer value) -#sg_retry_interval=5 - - diff --git a/etc/nova/solver_scheduler_conf.sample b/etc/nova/solver_scheduler_conf.sample deleted file mode 100644 index 613d6c0..0000000 --- a/etc/nova/solver_scheduler_conf.sample +++ /dev/null @@ -1,108 +0,0 @@ -[DEFAULT] - -# -# Options defined in nova.scheduler.manager -# - -# Default driver to use for the scheduler (string value) -scheduler_driver=nova.scheduler.solver_scheduler.ConstraintSolverScheduler - -# -# Options defined in nova.scheduler.filters.core_filter -# - -# Virtual CPU to physical CPU allocation ratio which affects -# all CPU filters. This configuration specifies a global ratio -# for CoreFilter. For AggregateCoreFilter, it will fall back -# to this configuration value if no per-aggregate setting -# found. 
This option is also used in Solver Scheduler for the -# MaxVcpuAllocationPerHostConstraint (floating point value) -cpu_allocation_ratio=16.0 - -# -# Options defined in nova.scheduler.filters.disk_filter -# - -# Virtual disk to physical disk allocation ratio (floating -# point value) -disk_allocation_ratio=1.0 - -# -# Options defined in nova.scheduler.filters.num_instances_filter -# - -# Ignore hosts that have too many instances (integer value) -max_instances_per_host=50 - -# -# Options defined in nova.scheduler.filters.io_ops_filter -# - -# Ignore hosts that have too many -# builds/resizes/snaps/migrations. (integer value) -max_io_ops_per_host=8 - -# -# Options defined in nova.scheduler.filters.ram_filter -# - -# Virtual ram to physical ram allocation ratio which affects -# all ram filters. This configuration specifies a global ratio -# for RamFilter. For AggregateRamFilter, it will fall back to -# this configuration value if no per-aggregate setting found. -# (floating point value) -ram_allocation_ratio=1.5 - -# -# Options defined in nova.scheduler.weights.ram -# - -# Multiplier used for weighing ram. Negative numbers mean to -# stack vs spread. (floating point value) -ram_weight_multiplier=1.0 - -# -# Options defined in nova.volume.cinder -# - -# Keystone Cinder account username (string value) -cinder_admin_user= - -# Keystone Cinder account password (string value) -cinder_admin_password= - -# Keystone Cinder account tenant name (string value) -cinder_admin_tenant_name=service - -# Complete public Identity API endpoint (string value) -cinder_auth_uri= - - -[solver_scheduler] - -# -# Options defined in nova.scheduler.solver_scheduler -# - -# The pluggable solver implementation to use. By default, a -# reference solver implementation is included that models the -# problem as a Linear Programming (LP) problem using PULP. 
-# To use a fully functional (pluggable) solver, set the option as -# "nova.scheduler.solvers.pluggable_hosts_pulp_solver.HostsPulpSolver" -# (string value) -scheduler_host_solver=nova.scheduler.solvers.pluggable_hosts_pulp_solver.HostsPulpSolver - - -# -# Options defined in nova.scheduler.solvers -# - -# Which constraints to use in scheduler solver (list value) -scheduler_solver_constraints=ActiveHostConstraint, NonTrivialSolutionConstraint - -# Assign weight for each cost (list value) -scheduler_solver_cost_weights=RamCost:1.0 - -# Which cost matrices to use in the scheduler solver. -# (list value) -scheduler_solver_costs=RamCost diff --git a/installation/install.sh b/installation/install.sh deleted file mode 100755 index e0f1044..0000000 --- a/installation/install.sh +++ /dev/null @@ -1,117 +0,0 @@ -#!/bin/bash - -# Copyright (c) 2014 Cisco Systems Inc. -# All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. - -_NOVA_CONF_DIR="/etc/nova" -_NOVA_CONF_FILE="nova.conf" -_NOVA_DIR="/usr/lib/python2.7/dist-packages/nova" - -# if you did not make changes to the installation files, -# please do not edit the following directories. -_CODE_DIR="../nova" -_BACKUP_DIR="${_NOVA_DIR}/.solver-scheduler-installation-backup" - -#_SCRIPT_NAME="${0##*/}" -#_SCRIPT_LOGFILE="/var/log/nova-solver-scheduler/installation/${_SCRIPT_NAME}.log" - -if [[ ${EUID} -ne 0 ]]; then - echo "Please run as root." 
-    exit 1
-fi
-
-##Redirecting output to logfile as well as stdout
-#exec > >(tee -a ${_SCRIPT_LOGFILE})
-#exec 2> >(tee -a ${_SCRIPT_LOGFILE} >&2)
-
-cd `dirname $0`
-
-echo "checking installation directories..."
-if [ ! -d "${_NOVA_DIR}" ] ; then
-    echo "Could not find the nova installation. Please check the variables in the beginning of the script."
-    echo "aborted."
-    exit 1
-fi
-if [ ! -f "${_NOVA_CONF_DIR}/${_NOVA_CONF_FILE}" ] ; then
-    echo "Could not find nova config file. Please check the variables in the beginning of the script."
-    echo "aborted."
-    exit 1
-fi
-
-echo "checking previous installation..."
-if [ -d "${_BACKUP_DIR}/nova" ] ; then
-    echo "It seems nova-solver-scheduler has already been installed!"
-    echo "Please check README for solution if this is not true."
-    exit 1
-fi
-
-echo "backing up current files that might be overwritten..."
-mkdir -p "${_BACKUP_DIR}/nova"
-mkdir -p "${_BACKUP_DIR}/etc/nova"
-cp -r "${_NOVA_DIR}/scheduler" "${_BACKUP_DIR}/nova/" && cp -r "${_NOVA_DIR}/volume" "${_BACKUP_DIR}/nova/"
-if [ $? -ne 0 ] ; then
-    rm -r "${_BACKUP_DIR}/nova"
-    echo "Error in code backup, aborted."
-    exit 1
-fi
-cp "${_NOVA_CONF_DIR}/${_NOVA_CONF_FILE}" "${_BACKUP_DIR}/etc/nova/"
-if [ $? -ne 0 ] ; then
-    rm -r "${_BACKUP_DIR}/nova"
-    rm -r "${_BACKUP_DIR}/etc"
-    echo "Error in config backup, aborted."
-    exit 1
-fi
-
-echo "copying in new files..."
-cp -r "${_CODE_DIR}" `dirname ${_NOVA_DIR}`
-if [ $? -ne 0 ] ; then
-    echo "Error in copying, aborted."
-    echo "Recovering original files..."
-    cp -r "${_BACKUP_DIR}/nova" `dirname ${_NOVA_DIR}` && rm -r "${_BACKUP_DIR}/nova"
-    if [ $? -ne 0 ] ; then
-        echo "Recovering failed! Please install manually."
-    fi
-    exit 1
-fi
-
-echo "updating config file..."
-sed -i.backup -e "/scheduler_driver *=/d" "${_NOVA_CONF_DIR}/${_NOVA_CONF_FILE}"
-sed -i -e "/\[DEFAULT\]/a \\
-scheduler_driver=nova.scheduler.solver_scheduler.ConstraintSolverScheduler" "${_NOVA_CONF_DIR}/${_NOVA_CONF_FILE}"
-if [ $? -ne 0 ] ; then
-    echo "Error in updating, aborted."
-    echo "Recovering original files..."
-    cp -r "${_BACKUP_DIR}/nova" `dirname ${_NOVA_DIR}` && rm -r "${_BACKUP_DIR}/nova"
-    if [ $? -ne 0 ] ; then
-        echo "Recovering /nova failed! Please install manually."
-    fi
-    cp "${_BACKUP_DIR}/etc/nova/${_NOVA_CONF_FILE}" "${_NOVA_CONF_DIR}" && rm -r "${_BACKUP_DIR}/etc"
-    if [ $? -ne 0 ] ; then
-        echo "Recovering config failed! Please install manually."
-    fi
-    exit 1
-fi
-
-echo "restarting nova scheduler..."
-service nova-scheduler restart
-if [ $? -ne 0 ] ; then
-    echo "There was an error in restarting the service, please restart nova scheduler manually."
-    exit 1
-fi
-
-echo "Completed."
-echo "See README to get started."
-
-exit 0
diff --git a/installation/uninstall.sh b/installation/uninstall.sh
deleted file mode 100755
index c885a20..0000000
--- a/installation/uninstall.sh
+++ /dev/null
@@ -1,119 +0,0 @@
-#!/bin/bash
-
-# Copyright (c) 2014 Cisco Systems Inc.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-_NOVA_CONF_DIR="/etc/nova"
-_NOVA_CONF_FILE="nova.conf"
-_NOVA_DIR="/usr/lib/python2.7/dist-packages/nova"
-
-# if you did not make changes to the installation files,
-# please do not edit the following directories.
-_CODE_DIR="../nova"
-_BACKUP_DIR="${_NOVA_DIR}/.solver-scheduler-installation-backup"
-
-#_SCRIPT_NAME="${0##*/}"
-#_SCRIPT_LOGFILE="/var/log/nova-solver-scheduler/installation/${_SCRIPT_NAME}.log"
-
-if [[ ${EUID} -ne 0 ]]; then
-    echo "Please run as root."
-    exit 1
-fi
-
-##Redirecting output to logfile as well as stdout
-#exec > >(tee -a ${_SCRIPT_LOGFILE})
-#exec 2> >(tee -a ${_SCRIPT_LOGFILE} >&2)
-
-cd `dirname $0`
-
-echo "checking installation directories..."
-if [ ! -d "${_NOVA_DIR}" ] ; then
-    echo "Could not find the nova installation. Please check the variables in the beginning of the script."
-    echo "aborted."
-    exit 1
-fi
-if [ ! -f "${_NOVA_CONF_DIR}/${_NOVA_CONF_FILE}" ] ; then
-    echo "Could not find nova config file. Please check the variables in the beginning of the script."
-    echo "aborted."
-    exit 1
-fi
-
-echo "checking backup..."
-if [ ! -d "${_BACKUP_DIR}/nova" ] ; then
-    echo "Could not find backup files. It is possible that the solver-scheduler has been uninstalled."
-    echo "If this is not the case, then please uninstall manually."
-    exit 1
-fi
-
-echo "backing up current files that might be overwritten..."
-if [ -d "${_BACKUP_DIR}/uninstall" ] ; then
-    rm -r "${_BACKUP_DIR}/uninstall"
-fi
-mkdir -p "${_BACKUP_DIR}/uninstall/nova"
-mkdir -p "${_BACKUP_DIR}/uninstall/etc/nova"
-cp -r "${_NOVA_DIR}/scheduler" "${_BACKUP_DIR}/uninstall/nova/" && cp -r "${_NOVA_DIR}/volume" "${_BACKUP_DIR}/uninstall/nova/"
-if [ $? -ne 0 ] ; then
-    rm -r "${_BACKUP_DIR}/uninstall/nova"
-    echo "Error in code backup, aborted."
-    exit 1
-fi
-cp "${_NOVA_CONF_DIR}/${_NOVA_CONF_FILE}" "${_BACKUP_DIR}/uninstall/etc/nova/"
-if [ $? -ne 0 ] ; then
-    rm -r "${_BACKUP_DIR}/uninstall/nova"
-    rm -r "${_BACKUP_DIR}/uninstall/etc"
-    echo "Error in config backup, aborted."
-    exit 1
-fi
-
-echo "restoring code to the status before installing solver-scheduler..."
-cp -r "${_BACKUP_DIR}/nova" `dirname ${_NOVA_DIR}`
-if [ $? -ne 0 ] ; then
-    echo "Error in copying, aborted."
-    echo "Recovering current files..."
-    cp -r "${_BACKUP_DIR}/uninstall/nova" `dirname ${_NOVA_DIR}`
-    if [ $? -ne 0 ] ; then
-        echo "Recovering failed! Please uninstall manually."
-    fi
-    exit 1
-fi
-
-echo "updating config file..."
-sed -i.uninstall.backup -e "/scheduler_driver *=/d" "${_NOVA_CONF_DIR}/${_NOVA_CONF_FILE}"
-if [ $? -ne 0 ] ; then
-    echo "Error in updating, aborted."
-    echo "Recovering current files..."
-    cp "${_BACKUP_DIR}/uninstall/etc/nova/${_NOVA_CONF_FILE}" "${_NOVA_CONF_DIR}"
-    if [ $? -ne 0 ] ; then
-        echo "Recovering failed! Please uninstall manually."
-    fi
-    exit 1
-fi
-
-echo "cleaning up backup files..."
-rm -r "${_BACKUP_DIR}/nova" && rm -r "${_BACKUP_DIR}/etc"
-if [ $? -ne 0 ] ; then
-    echo "There was an error when cleaning up the backup files."
-fi
-
-echo "restarting nova scheduler..."
-service nova-scheduler restart
-if [ $? -ne 0 ] ; then
-    echo "There was an error in restarting the service, please restart nova scheduler manually."
-    exit 1
-fi
-
-echo "Completed."
-
-exit 0
diff --git a/nova/scheduler/solver_scheduler.py b/nova/scheduler/solver_scheduler.py
index b4d49f8..ca62708 100644
--- a/nova/scheduler/solver_scheduler.py
+++ b/nova/scheduler/solver_scheduler.py
@@ -22,11 +22,9 @@ A default solver implementation that uses PULP is included.
 
 from oslo.config import cfg
 
-from nova import exception
 from nova.openstack.common.gettextutils import _
 from nova.openstack.common import importutils
 from nova.openstack.common import log as logging
-from nova.scheduler import driver
 from nova.scheduler import filter_scheduler
 from nova.scheduler import weights
 
@@ -35,11 +33,9 @@ LOG = logging.getLogger(__name__)
 
 solver_opts = [
     cfg.StrOpt('scheduler_host_solver',
-               default='nova.scheduler.solvers'
-                       '.pluggable_hosts_pulp_solver.HostsPulpSolver',
-               help='The pluggable solver implementation to use. By default, a '
-                    'reference solver implementation is included that models '
-                    'the problem as a Linear Programming (LP) problem using PULP.'),
+               default='nova.scheduler.solvers.fast_solver.FastSolver',
+               help='The pluggable solver implementation to use. By '
+                    'default, use the FastSolver.'),
 ]
 
 CONF.register_opts(solver_opts, group='solver_scheduler')
@@ -55,82 +51,11 @@ class ConstraintSolverScheduler(filter_scheduler.FilterScheduler):
         self.hosts_solver = importutils.import_object(
             CONF.solver_scheduler.scheduler_host_solver)
 
-    def schedule_run_instance(self, context, request_spec,
-                              admin_password, injected_files,
-                              requested_networks, is_first_time,
-                              filter_properties, legacy_bdm_in_spec):
-        """This method is called from nova.compute.api to provision
-        an instance. We first create a build plan (a list of WeightedHosts)
-        and then provision.
-
-        Returns a list of the instances created.
-        """
-        payload = dict(request_spec=request_spec)
-        self.notifier.info(context, 'scheduler.run_instance.start', payload)
-
-        instance_uuids = request_spec.get('instance_uuids')
-        LOG.info(_("Attempting to build %(num_instances)d instance(s) "
-                   "uuids: %(instance_uuids)s"),
-                 {'num_instances': len(instance_uuids),
-                  'instance_uuids': instance_uuids})
-        LOG.debug(_("Request Spec: %s") % request_spec)
-
-        # Stuff network requests into filter_properties
-        # NOTE (Xinyuan): currently for POC only.
-        filter_properties['requested_networks'] = requested_networks
-
-        weighed_hosts = self._schedule(context, request_spec,
-                                       filter_properties, instance_uuids)
-
-        # NOTE: Pop instance_uuids as individual creates do not need the
-        # set of uuids. Do not pop before here as the upper exception
-        # handler fo NoValidHost needs the uuid to set error state
-        instance_uuids = request_spec.pop('instance_uuids')
-
-        # NOTE(comstud): Make sure we do not pass this through. It
-        # contains an instance of RpcContext that cannot be serialized.
-        filter_properties.pop('context', None)
-
-        for num, instance_uuid in enumerate(instance_uuids):
-            request_spec['instance_properties']['launch_index'] = num
-
-            try:
-                try:
-                    weighed_host = weighed_hosts.pop(0)
-                    LOG.info(_("Choosing host %(weighed_host)s "
-                               "for instance %(instance_uuid)s"),
-                             {'weighed_host': weighed_host,
-                              'instance_uuid': instance_uuid})
-                except IndexError:
-                    raise exception.NoValidHost(reason="")
-
-                self._provision_resource(context, weighed_host,
-                                         request_spec,
-                                         filter_properties,
-                                         requested_networks,
-                                         injected_files, admin_password,
-                                         is_first_time,
-                                         instance_uuid=instance_uuid,
-                                         legacy_bdm_in_spec=legacy_bdm_in_spec)
-            except Exception as ex:
-                # NOTE(vish): we don't reraise the exception here to make sure
-                #             that all instances in the request get set to
-                #             error properly
-                driver.handle_schedule_error(context, ex, instance_uuid,
-                                             request_spec)
-                # scrub retry host list in case we're scheduling multiple
-                # instances:
-                retry = filter_properties.get('retry', {})
-                retry['hosts'] = []
-
-        self.notifier.info(context, 'scheduler.run_instance.end', payload)
-
     def _schedule(self, context, request_spec, filter_properties,
                   instance_uuids=None):
         """Returns a list of hosts that meet the required specs,
         ordered by their fitness.
        """
-        elevated = context.elevated()
         instance_properties = request_spec['instance_properties']
         instance_type = request_spec.get("instance_type", None)
@@ -147,52 +72,57 @@ class ConstraintSolverScheduler(filter_scheduler.FilterScheduler):
             properties['uuid'] = instance_uuids[0]
         self._populate_retry(filter_properties, properties)
 
+        if instance_uuids:
+            num_instances = len(instance_uuids)
+        else:
+            num_instances = request_spec.get('num_instances', 1)
+
+        # initialize an empty key-value cache to be used in solver for internal
+        # temporary data storage
+        solver_cache = {}
+
         filter_properties.update({'context': context,
                                   'request_spec': request_spec,
                                   'config_options': config_options,
-                                  'instance_type': instance_type})
+                                  'instance_type': instance_type,
+                                  'num_instances': num_instances,
+                                  'instance_uuids': instance_uuids,
+                                  'solver_cache': solver_cache})
 
-        self.populate_filter_properties(request_spec,
-                                        filter_properties)
+        self.populate_filter_properties(request_spec, filter_properties)
 
-        # Note: Moving the host selection logic to a new method so that
+        # NOTE(Yathi): Moving the host selection logic to a new method so that
         # the subclasses can override the behavior.
-        return self._get_final_host_list(elevated, request_spec,
-                                         filter_properties,
-                                         instance_properties,
-                                         update_group_hosts,
-                                         instance_uuids)
+        selected_hosts = self._get_selected_hosts(context, filter_properties)
 
-    def _get_final_host_list(self, context, request_spec, filter_properties,
-                             instance_properties, update_group_hosts=False,
-                             instance_uuids=None):
-        """Returns the final list of hosts that meet the required specs for
+        # clear solver's memory after scheduling process
+        filter_properties.pop('solver_cache')
+
+        return selected_hosts
+
+    def _get_selected_hosts(self, context, filter_properties):
+        """Returns the list of hosts that meet the required specs for
         each instance in the list of instance_uuids.
         Here each instance in instance_uuids have the same requirement
         as specified by request_spec.
        """
+        elevated = context.elevated()
         # this returns a host iterator
-        hosts = self._get_all_host_states(context)
+        hosts = self._get_all_host_states(elevated)
         selected_hosts = []
         hosts = self.host_manager.get_hosts_stripping_ignored_and_forced(
             hosts, filter_properties)
 
         list_hosts = list(hosts)
-        host_instance_tuples_list = self.hosts_solver.host_solve(
-            list_hosts, instance_uuids,
-            request_spec, filter_properties)
-        LOG.debug(_("solver results: %(host_instance_list)s") %
-                  {"host_instance_list": host_instance_tuples_list})
+        host_instance_combinations = self.hosts_solver.solve(
+            list_hosts, filter_properties)
+        LOG.debug(_("solver results: %(host_instance_tuples_list)s") %
                  {"host_instance_tuples_list": host_instance_combinations})
         # NOTE(Yathi): Not using weights in solver scheduler,
         # but creating a list of WeighedHosts with a default weight of 1
         # to match the common method signatures of the
         # FilterScheduler class
         selected_hosts = [weights.WeighedHost(host, 1)
-                          for (host, instance) in host_instance_tuples_list]
-        for chosen_host in selected_hosts:
-            # Update the host state after deducting the
-            # resource used by the instance
-            chosen_host.obj.consume_from_instance(instance_properties)
-            if update_group_hosts is True:
-                filter_properties['group_hosts'].add(chosen_host.obj.host)
+                          for (host, instance) in host_instance_combinations]
+        return selected_hosts
diff --git a/nova/scheduler/solver_scheduler_host_manager.py b/nova/scheduler/solver_scheduler_host_manager.py
index ef89f96..2764e06 100644
--- a/nova/scheduler/solver_scheduler_host_manager.py
+++ b/nova/scheduler/solver_scheduler_host_manager.py
@@ -19,256 +19,50 @@ Manage hosts in the current zone.
 from oslo.config import cfg
 
-from nova.compute import task_states
-from nova.compute import vm_states
-from nova import db
-from nova.objects import aggregate as aggregate_obj
-from nova.objects import instance as instance_obj
 from nova.openstack.common.gettextutils import _
-from nova.openstack.common import jsonutils
 from nova.openstack.common import log as logging
-from nova.openstack.common import timeutils
-from nova.pci import pci_request
-from nova.pci import pci_stats
 from nova.scheduler import host_manager
 
-physnet_config_file_opts = [
-    cfg.StrOpt('physnet_config_file',
-               default='/etc/neutron/plugins/ml2/ml2_conf_cisco.ini',
-               help='The config file specifying the physical network topology')
-    ]
 
 CONF = cfg.CONF
-CONF.import_opt('scheduler_available_filters', 'nova.scheduler.host_manager')
-CONF.import_opt('scheduler_default_filters', 'nova.scheduler.host_manager')
-CONF.import_opt('scheduler_weight_classes', 'nova.scheduler.host_manager')
-CONF.register_opts(physnet_config_file_opts)
 
 LOG = logging.getLogger(__name__)
 
 
-class HostState(host_manager.HostState):
+class SolverSchedulerHostState(host_manager.HostState):
     """Mutable and immutable information tracked for a host.
     This is an attempt to remove the ad-hoc data structures
     previously used and lock down access.
    """
 
     def __init__(self, *args, **kwargs):
-        super(HostState, self).__init__(*args, **kwargs)
+        super(SolverSchedulerHostState, self).__init__(*args, **kwargs)
         self.projects = []
-        # For network constraints
-        # NOTE(Xinyuan): currently for POC only, and may require Neurtron
-        self.networks = []
-        self.physnet_config = []
-        self.rack_networks = []
-        # For host aggregate constraints
-        self.host_aggregates_stats = {}
-
-    def update_from_hosted_instances(self, context, compute):
-        service = compute['service']
-        if not service:
-            LOG.warn(_("No service for compute ID %s") % compute['id'])
-            return
-        host = service['host']
-        # retrieve instances for each hosts to extract needed infomation
-        # NOTE: ideally we should use get_by_host_and_node, but there's a bug
-        # in the Icehouse release, that doesn't allow 'expected_attrs' here.
-        instances = instance_obj.InstanceList.get_by_host(context, host,
-            expected_attrs=['info_cache'])
-        # get hosted networks
-        # NOTE(Xinyuan): POC.
-        instance_networks = []
-        for inst in instances:
-            network_info = inst.get('info_cache', {}).get('network_info', [])
-            instance_networks.extend([vif['network']['id']
-                                      for vif in network_info])
-        self.networks = list(set(instance_networks))
 
     def update_from_compute_node(self, compute):
-        """Update information about a host from its compute_node info."""
-        if (self.updated and compute['updated_at']
-                and self.updated > compute['updated_at']):
-            return
-        all_ram_mb = compute['memory_mb']
-
-        # Assume virtual size is all consumed by instances if use qcow2 disk.
-        free_gb = compute['free_disk_gb']
-        least_gb = compute.get('disk_available_least')
-        if least_gb is not None:
-            if least_gb > free_gb:
-                # can occur when an instance in database is not on host
-                LOG.warn(_("Host has more disk space than database expected"
-                           " (%(physical)sgb > %(database)sgb)") %
-                         {'physical': least_gb, 'database': free_gb})
-            free_gb = min(least_gb, free_gb)
-        free_disk_mb = free_gb * 1024
-
-        self.disk_mb_used = compute['local_gb_used'] * 1024
-
-        #NOTE(jogo) free_ram_mb can be negative
-        self.free_ram_mb = compute['free_ram_mb']
-        self.total_usable_ram_mb = all_ram_mb
-        self.total_usable_disk_gb = compute['local_gb']
-        self.free_disk_mb = free_disk_mb
-        self.vcpus_total = compute['vcpus']
-        self.vcpus_used = compute['vcpus_used']
-        self.updated = compute['updated_at']
-        if 'pci_stats' in compute:
-            self.pci_stats = pci_stats.PciDeviceStats(compute['pci_stats'])
-        else:
-            self.pci_stats = None
-
-        # All virt drivers report host_ip
-        self.host_ip = compute['host_ip']
-        self.hypervisor_type = compute.get('hypervisor_type')
-        self.hypervisor_version = compute.get('hypervisor_version')
-        self.hypervisor_hostname = compute.get('hypervisor_hostname')
-        self.cpu_info = compute.get('cpu_info')
-        if compute.get('supported_instances'):
-            self.supported_instances = jsonutils.loads(
-                compute.get('supported_instances'))
-
-        # Don't store stats directly in host_state to make sure these don't
-        # overwrite any values, or get overwritten themselves. Store in self so
-        # filters can schedule with them.
-        stats = compute.get('stats', None) or '{}'
-        self.stats = jsonutils.loads(stats)
-
-        self.hypervisor_version = compute['hypervisor_version']
-
-        # Track number of instances on host
-        self.num_instances = int(self.stats.get('num_instances', 0))
-
-        # Track number of instances by project_id
+        super(SolverSchedulerHostState, self).update_from_compute_node(
+            compute)
+        # Track projects in the compute node
         project_id_keys = [k for k in self.stats.keys() if
                            k.startswith("num_proj_")]
         for key in project_id_keys:
             project_id = key[9:]
-            self.num_instances_by_project[project_id] = int(self.stats[key])
-
-        # Track number of instances in certain vm_states
-        vm_state_keys = [k for k in self.stats.keys() if
-                         k.startswith("num_vm_")]
-        for key in vm_state_keys:
-            vm_state = key[7:]
-            self.vm_states[vm_state] = int(self.stats[key])
-
-        # Track number of instances in certain task_states
-        task_state_keys = [k for k in self.stats.keys() if
-                           k.startswith("num_task_")]
-        for key in task_state_keys:
-            task_state = key[9:]
-            self.task_states[task_state] = int(self.stats[key])
-
-        # Track number of instances by host_type
-        os_keys = [k for k in self.stats.keys() if
-                   k.startswith("num_os_type_")]
-        for key in os_keys:
-            os = key[12:]
-            self.num_instances_by_os_type[os] = int(self.stats[key])
-
-        # Track the number of projects on host
-        self.projects = [k[9:] for k in self.stats.keys() if
-                         k.startswith("num_proj_") and int(self.stats[k]) > 0]
-
-        self.num_io_ops = int(self.stats.get('io_workload', 0))
-
-        # update metrics
-        self._update_metrics_from_compute_node(compute)
+            if int(self.stats[key]) > 0:
+                self.projects.append(project_id)
 
     def consume_from_instance(self, instance):
-        """Incrementally update host state from an instance."""
-        disk_mb = (instance['root_gb'] + instance['ephemeral_gb']) * 1024
-        ram_mb = instance['memory_mb']
-        vcpus = instance['vcpus']
-        self.free_ram_mb -= ram_mb
-        self.free_disk_mb -= disk_mb
-        self.vcpus_used += vcpus
-        self.updated = timeutils.utcnow()
-
-        # Track number of instances on host
-        self.num_instances += 1
-
-        # Track number of instances by project_id
-        project_id = instance.get('project_id')
-        if project_id not in self.num_instances_by_project:
-            self.num_instances_by_project[project_id] = 0
-        self.num_instances_by_project[project_id] += 1
-
-        # Track number of instances in certain vm_states
-        vm_state = instance.get('vm_state', vm_states.BUILDING)
-        if vm_state not in self.vm_states:
-            self.vm_states[vm_state] = 0
-        self.vm_states[vm_state] += 1
-
-        # Track number of instances in certain task_states
-        task_state = instance.get('task_state')
-        if task_state not in self.task_states:
-            self.task_states[task_state] = 0
-        self.task_states[task_state] += 1
-
-        # Track number of instances by host_type
-        os_type = instance.get('os_type')
-        if os_type not in self.num_instances_by_os_type:
-            self.num_instances_by_os_type[os_type] = 0
-        self.num_instances_by_os_type[os_type] += 1
-
-        pci_requests = pci_request.get_instance_pci_requests(instance)
-        if pci_requests and self.pci_stats:
-            self.pci_stats.apply_requests(pci_requests)
-
-        vm_state = instance.get('vm_state', vm_states.BUILDING)
-        task_state = instance.get('task_state')
-        if vm_state == vm_states.BUILDING or task_state in [
-                task_states.RESIZE_MIGRATING, task_states.REBUILDING,
-                task_states.RESIZE_PREP, task_states.IMAGE_SNAPSHOT,
-                task_states.IMAGE_BACKUP]:
-            self.num_io_ops += 1
-
-        # Track the number of projects
+        super(SolverSchedulerHostState, self).consume_from_instance(instance)
+        # Track projects in the compute node
         project_id = instance.get('project_id')
         if project_id not in self.projects:
             self.projects.append(project_id)
 
-        # Track aggregate stats
-        project_id = instance.get('project_id')
-        for aggr in self.host_aggregates_stats:
-            aggregate_project_list = self.host_aggregates_stats[aggr].get(
-                'projects', [])
-            if project_id not in aggregate_project_list:
-                self.host_aggregates_stats[aggr]['projects'].append(project_id)
-
-    def update_from_networks(self, requested_networks):
-        for network_id, fixed_ip, port_id in requested_networks:
-            if network_id:
-                if network_id not in self.networks:
-                    self.networks.append(network_id)
-                if not network_id not in self.aggregated_networks:
-                    for device in self.aggregated_networks:
-                        self.aggregated_networks[device].append(network_id)
-                # do this for host aggregates
-                for aggr in self.host_aggregates_stats:
-                    host_aggr_network_list = self.host_aggregates_stats[
-                        aggr].get('networks', [])
-                    if network_id not in host_aggr_network_list:
-                        self.host_aggregates_stats[aggr][
-                            'networks'].append(network_id)
-
-    def __repr__(self):
-        return ("(%s, %s) ram:%s disk:%s io_ops:%s instances:%s "
-                "physnet_config:%s networks:%s rack_networks:%s "
-                "projects:%s aggregate_stats:%s" %
-                (self.host, self.nodename, self.free_ram_mb, self.free_disk_mb,
-                 self.num_io_ops, self.num_instances, self.physnet_config,
-                 self.networks, self.rack_networks, self.projects,
-                 self.host_aggregates_stats))
-
 
 class SolverSchedulerHostManager(host_manager.HostManager):
     """HostManager class for solver scheduler."""
 
     # Can be overridden in a subclass
-    host_state_cls = HostState
+    host_state_cls = SolverSchedulerHostState
 
     def __init__(self, *args, **kwargs):
         super(SolverSchedulerHostManager, self).__init__(*args, **kwargs)
@@ -342,194 +136,3 @@ class SolverSchedulerHostManager(host_manager.HostManager):
         hosts = name_to_cls_map.itervalues()
 
         return hosts
-
-    def get_filtered_hosts(self, hosts, filter_properties,
-                           filter_class_names=None, index=0):
-        """Filter hosts and return only ones passing all filters."""
-        # NOTE(Yathi): Calling the method to apply ignored and forced options
-        hosts = self.get_hosts_stripping_ignored_and_forced(hosts,
-                                                            filter_properties)
-
-        force_hosts = filter_properties.get('force_hosts', [])
-        force_nodes = filter_properties.get('force_nodes', [])
-
-        if force_hosts or force_nodes:
-            # NOTE: Skip filters when forcing host or node
-            return list(hosts)
-
-        filter_classes = self._choose_host_filters(filter_class_names)
-
-        return self.filter_handler.get_filtered_objects(filter_classes,
-                hosts, filter_properties, index)
-
-    def _get_aggregate_stats(self, context, host_state_map):
-        """Update certain stats for the aggregates of the hosts."""
-        aggregates = aggregate_obj.AggregateList.get_all(context)
-        host_state_list_map = {}
-
-        for (host, node) in host_state_map.keys():
-            current_list = host_state_list_map.get(host, None)
-            state = host_state_map[(host, node)]
-            if not current_list:
-                host_state_list_map[host] = [state]
-            else:
-                host_state_list_map[host] = current_list.append(state)
-
-        for aggregate in aggregates:
-            hosts = aggregate.hosts
-            projects = set()
-            networks = set()
-            # Collect all the projects from all the member hosts
-            aggr_host_states = []
-            for host in hosts:
-                host_state_list = host_state_list_map.get(host, None) or []
-                aggr_host_states += host_state_list
-                for host_state in host_state_list:
-                    projects = projects.union(host_state.projects)
-                    networks = networks.union(host_state.networks)
-            aggregate_stats = {'hosts': hosts,
-                               'projects': list(projects),
-                               'networks': list(networks),
-                               'metadata': aggregate.metadata}
-            # Now set this value to all the member host_states
-            for host_state in aggr_host_states:
-                host_state.host_aggregates_stats[
-                    aggregate.name] = aggregate_stats
-
-    def _get_rack_states(self, context, host_state_map):
-        """Retrieve the physical and virtual network states of the hosts.
-        """
-        def _get_physnet_mappings():
-            """Get physical network topologies from a Neutron config file.
-            This is a hard-coded function which only supports Cisco Nexus
-            driver for Neutron ML2 plugin currently.
-            """
-            # NOTE(Xinyuan): This feature is for POC only!
-            # TODO(Xinyuan): further works are required in implementing
-            #                Neutron API extensions to get related information.
-            host2device_map = {}
-            device2host_map = {}
-            sections = {}
-
-            state_keys = host_state_map.keys()
-            hostname_list = [host for (host, node) in state_keys]
-
-            try:
-                physnet_config_parser = cfg.ConfigParser(
-                    CONF.physnet_config_file, sections)
-                physnet_config_parser.parse()
-            except Exception:
-                LOG.warn(_("Physnet config file was not parsed properly."))
-            # Example section:
-            # [ml2_mech_cisco_nexus:1.1.1.1]
-            # compute1=1/1
-            # compute2=1/2
-            # ssh_port=22
-            # username=admin
-            # password=mySecretPassword
-            for parsed_item in sections.keys():
-                dev_id, sep, dev_ip = parsed_item.partition(':')
-                if dev_id.lower() == 'ml2_mech_cisco_nexus':
-                    for key, value in sections[parsed_item].items():
-                        if key in hostname_list:
-                            hostname = key
-                            portid = value[0]
-                            host2device_map.setdefault(hostname, [])
-                            host2device_map[hostname].append((dev_ip, portid))
-                            device2host_map.setdefault(dev_ip, [])
-                            device2host_map[dev_ip].append((hostname, portid))
-            return host2device_map, device2host_map
-
-        def _get_rack_networks(host_dev_map, dev_host_map, host_state_map):
-            """Aggregate the networks associated with a group of hosts in
-            same physical groups (e.g. under same ToR switches...)
-            """
-            rack_networks = {}
-
-            if not dev_host_map or not host_dev_map:
-                return rack_networks
-
-            host_networks = {}
-            for state_key in host_state_map.keys():
-                (host, node) = state_key
-                host_state = host_state_map[state_key]
-                host_networks.setdefault(host, set())
-                host_networks[host] = host_networks[host].union(
-                    host_state.networks)
-
-            # aggregate hosted networks for each upper level device
-            dev_networks = {}
-            for dev_id in dev_host_map.keys():
-                current_dev_networks = set()
-                for (host_name, port_id) in dev_host_map[dev_id]:
-                    current_dev_networks = current_dev_networks.union(
-                        host_networks.get(host_name, []))
-                dev_networks[dev_id] = list(current_dev_networks)
-
-            # make aggregated networks list for each hosts
-            for host_name in host_dev_map.keys():
-                dev_list = list(set([dev_id for (dev_id, physport)
-                                     in host_dev_map.get(host_name, [])]))
-                host_rack_networks = {}
-                for dev in dev_list:
-                    host_rack_networks[dev] = dev_networks.get(dev, [])
-                rack_networks[host_name] = host_rack_networks
-
-            return rack_networks
-
-        host_dev_map, dev_host_map = _get_physnet_mappings()
-        rack_networks = _get_rack_networks(
-            host_dev_map, dev_host_map, host_state_map)
-
-        for state_key in host_state_map.keys():
-            host_state = self.host_state_map[state_key]
-            (host, node) = state_key
-            host_state.physnet_config = host_dev_map.get(host, [])
-            host_state.rack_networks = rack_networks.get(host, [])
-
-    def get_all_host_states(self, context):
-        """Returns a list of HostStates that represents all the hosts
-        the HostManager knows about. Also, each of the consumable resources
-        in HostState are pre-populated and adjusted based on data in the db.
-        """
-
-        # Get resource usage across the available compute nodes:
-        compute_nodes = db.compute_node_get_all(context)
-        seen_nodes = set()
-        for compute in compute_nodes:
-            service = compute['service']
-            if not service:
-                LOG.warn(_("No service for compute ID %s") % compute['id'])
-                continue
-            host = service['host']
-            node = compute.get('hypervisor_hostname')
-            state_key = (host, node)
-            capabilities = self.service_states.get(state_key, None)
-            host_state = self.host_state_map.get(state_key)
-            if host_state:
-                host_state.update_capabilities(capabilities,
                                               dict(service.iteritems()))
-            else:
-                host_state = self.host_state_cls(host, node,
-                    capabilities=capabilities,
-                    service=dict(service.iteritems()))
-                self.host_state_map[state_key] = host_state
-            host_state.update_from_compute_node(compute)
-            # update information from hosted instances
-            host_state.update_from_hosted_instances(context, compute)
-            seen_nodes.add(state_key)
-
-        # remove compute nodes from host_state_map if they are not active
-        dead_nodes = set(self.host_state_map.keys()) - seen_nodes
-        for state_key in dead_nodes:
-            host, node = state_key
-            LOG.info(_("Removing dead compute node %(host)s:%(node)s "
-                       "from scheduler") % {'host': host, 'node': node})
-            del self.host_state_map[state_key]
-
-        # get information from groups of hosts
-        # NOTE(Xinyaun): currently for POC only.
-        self._get_rack_states(context, self.host_state_map)
-        self._get_aggregate_stats(context, self.host_state_map)
-
-        return self.host_state_map.itervalues()
diff --git a/nova/scheduler/solvers/__init__.py b/nova/scheduler/solvers/__init__.py
index f8a3dee..3431c92 100644
--- a/nova/scheduler/solvers/__init__.py
+++ b/nova/scheduler/solvers/__init__.py
@@ -17,73 +17,73 @@ Scheduler host constraint solvers
 """
 
-from nova.scheduler.solvers import costs
-from nova.scheduler.solvers import linearconstraints
-
 from oslo.config import cfg
 
-scheduler_solver_costs_opt = cfg.ListOpt(
-    'scheduler_solver_costs',
-    default=['RamCost'],
-    help='Which cost matrices to use in the scheduler solver.')
+from nova.scheduler.solvers import constraints
+from nova.scheduler.solvers import costs
+from nova import solver_scheduler_exception as exception
 
-# (xinyuan) This option should be changed to DictOpt type
-# when bug #1276859 is fixed.
-scheduler_solver_cost_weights_opt = cfg.ListOpt(
-    'scheduler_solver_cost_weights',
-    default=['RamCost:1.0'],
-    help='Assign weight for each cost')
-
-scheduler_solver_constraints_opt = cfg.ListOpt(
-    'scheduler_solver_constraints',
-    default=[],
-    help='Which constraints to use in scheduler solver')
+scheduler_solver_opts = [
+    cfg.ListOpt('scheduler_solver_costs',
+                default=['RamCost'],
+                help='Which cost matrices to use in the '
+                     'scheduler solver.'),
+    cfg.ListOpt('scheduler_solver_constraints',
+                default=['ActiveHostsConstraint'],
+                help='Which constraints to use in scheduler solver'),
+]
 
 CONF = cfg.CONF
-CONF.register_opt(scheduler_solver_costs_opt, group='solver_scheduler')
-CONF.register_opt(scheduler_solver_cost_weights_opt, group='solver_scheduler')
-CONF.register_opt(scheduler_solver_constraints_opt, group='solver_scheduler')
-SOLVER_CONF = CONF.solver_scheduler
+CONF.register_opts(scheduler_solver_opts, group='solver_scheduler')
 
 
 class BaseHostSolver(object):
     """Base class for host constraint solvers."""
 
+    def __init__(self):
+        super(BaseHostSolver, self).__init__()
+
     def _get_cost_classes(self):
         """Get cost classes from configuration."""
         cost_classes = []
+        bad_cost_names = []
         cost_handler = costs.CostHandler()
         all_cost_classes = cost_handler.get_all_classes()
-        expected_costs = SOLVER_CONF.scheduler_solver_costs
-        for cost in all_cost_classes:
-            if cost.__name__ in expected_costs:
-                cost_classes.append(cost)
+        all_cost_names = [c.__name__ for c in all_cost_classes]
+        expected_costs = CONF.solver_scheduler.scheduler_solver_costs
+        for cost in expected_costs:
+            if cost in all_cost_names:
+                cost_classes.append(all_cost_classes[
+                    all_cost_names.index(cost)])
+            else:
+                bad_cost_names.append(cost)
+        if bad_cost_names:
+            msg = ", ".join(bad_cost_names)
+            raise exception.SchedulerSolverCostNotFound(cost_name=msg)
         return cost_classes
 
     def _get_constraint_classes(self):
         """Get constraint classes from configuration."""
         constraint_classes = []
-        constraint_handler = linearconstraints.LinearConstraintHandler()
+        bad_constraint_names = []
+        constraint_handler = constraints.ConstraintHandler()
         all_constraint_classes = constraint_handler.get_all_classes()
-        expected_constraints = SOLVER_CONF.scheduler_solver_constraints
-        for constraint in all_constraint_classes:
-            if constraint.__name__ in expected_constraints:
-                constraint_classes.append(constraint)
+        all_constraint_names = [c.__name__ for c in all_constraint_classes]
+        expected_constraints = (
+            CONF.solver_scheduler.scheduler_solver_constraints)
+        for constraint in expected_constraints:
+            if constraint in all_constraint_names:
+                constraint_classes.append(all_constraint_classes[
+                    all_constraint_names.index(constraint)])
+            else:
+                bad_constraint_names.append(constraint)
+        if bad_constraint_names:
+            msg = ", ".join(bad_constraint_names)
+            raise exception.SchedulerSolverConstraintNotFound(
+                constraint_name=msg)
        return constraint_classes
 
-    def _get_cost_weights(self):
-        """Get cost weights from configuration."""
-        cost_weights = {}
-        # (xinyuan) This
is a temporary workaround for bug #1276859, - # need to wait until DictOpt is supported by config sample generator. - weights_str_list = SOLVER_CONF.scheduler_solver_cost_weights - for weight_str in weights_str_list: - (key, sep, val) = weight_str.partition(':') - cost_weights[str(key)] = float(val) - return cost_weights - - def host_solve(self, hosts, instance_uuids, request_spec, - filter_properties): + def solve(self, hosts, filter_properties): """Return the list of host-instance tuples after solving the constraints. Implement this in a subclass. diff --git a/nova/scheduler/solvers/constraints/__init__.py b/nova/scheduler/solvers/constraints/__init__.py new file mode 100644 index 0000000..a672331 --- /dev/null +++ b/nova/scheduler/solvers/constraints/__init__.py @@ -0,0 +1,97 @@ +# Copyright (c) 2014 Cisco Systems, Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
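
The refactored `_get_cost_classes`/`_get_constraint_classes` above resolve each configured name against the discovered plug-in classes and raise on unknown names (where the old loop silently ignored them). A minimal, self-contained sketch of that lookup pattern — the class names and the exception here are simplified stand-ins, not the actual Nova classes:

```python
# Illustrative sketch (not Nova code): resolve configured cost/constraint
# names against discovered plug-in classes, failing fast on unknown names,
# mirroring _get_cost_classes()/_get_constraint_classes() above.


class SolverPluginNotFound(Exception):
    """Stand-in for SchedulerSolverCostNotFound/-ConstraintNotFound."""


def resolve_plugin_classes(expected_names, all_classes):
    by_name = dict((cls.__name__, cls) for cls in all_classes)
    bad_names = [n for n in expected_names if n not in by_name]
    if bad_names:
        # Report every unknown name at once, as the driver does.
        raise SolverPluginNotFound(", ".join(bad_names))
    return [by_name[n] for n in expected_names]


class RamCost(object):
    pass


class ActiveHostsConstraint(object):
    pass
```

With `scheduler_solver_costs = ['RamCost']` this returns `[RamCost]`, while a misspelled option value raises immediately instead of being dropped.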
+ +""" +Constraints for scheduler constraint solvers +""" + +from nova import loadables +from nova.scheduler import filters + + +class BaseConstraint(object): + """Base class for constraints.""" + + precedence = 0 + + def get_components(self, variables, hosts, filter_properties): + """Return the components of the constraint.""" + raise NotImplementedError() + + +class BaseLinearConstraint(BaseConstraint): + """Base class of LP constraint.""" + + def __init__(self): + self._reset() + + def _reset(self): + self.variables = [] + self.coefficients = [] + self.constants = [] + self.operators = [] + + def _generate_components(self, variables, hosts, filter_properties): + # override in a sub class + pass + + def get_components(self, variables, hosts, filter_properties): + # deprecated currently, reserve for future use + self._reset() + self._generate_components(variables, hosts, filter_properties) + return (self.variables, self.coefficients, self.constants, + self.operators) + + def get_constraint_matrix(self, hosts, filter_properties): + raise NotImplementedError() + + +class BaseFilterConstraint(BaseLinearConstraint): + """Base class for constraints that correspond to 1-time host filters.""" + + # override this in sub classes + host_filter_cls = filters.BaseHostFilter + + def __init__(self): + super(BaseFilterConstraint, self).__init__() + self.host_filter = self.host_filter_cls() + + def get_constraint_matrix(self, hosts, filter_properties): + num_hosts = len(hosts) + num_instances = filter_properties.get('num_instances') + + constraint_matrix = [[True for j in xrange(num_instances)] + for i in xrange(num_hosts)] + + for i in xrange(num_hosts): + host_passes = self.host_filter.host_passes( + hosts[i], filter_properties) + if not host_passes: + constraint_matrix[i] = [False for j in xrange(num_instances)] + + return constraint_matrix + + +class ConstraintHandler(loadables.BaseLoader): + def __init__(self): + super(ConstraintHandler, self).__init__(BaseConstraint) + + 
+def all_constraints(): + """Return a list of constraint classes found in this directory. + This method is used as the default for available constraints for + scheduler and returns a list of all constraint classes available. + """ + return ConstraintHandler().get_all_classes() diff --git a/nova/scheduler/solvers/constraints/active_hosts_constraint.py b/nova/scheduler/solvers/constraints/active_hosts_constraint.py new file mode 100644 index 0000000..741455d --- /dev/null +++ b/nova/scheduler/solvers/constraints/active_hosts_constraint.py @@ -0,0 +1,22 @@ +# Copyright (c) 2014 Cisco Systems Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +from nova.scheduler.filters import compute_filter +from nova.scheduler.solvers import constraints + + +class ActiveHostsConstraint(constraints.BaseFilterConstraint): + """Constraint that only allows active hosts to be selected.""" + host_filter_cls = compute_filter.ComputeFilter diff --git a/nova/scheduler/solvers/constraints/affinity_constraint.py b/nova/scheduler/solvers/constraints/affinity_constraint.py new file mode 100644 index 0000000..c68964d --- /dev/null +++ b/nova/scheduler/solvers/constraints/affinity_constraint.py @@ -0,0 +1,34 @@ +# Copyright (c) 2014 Cisco Systems Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. 
You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +from nova.scheduler.filters import affinity_filter +from nova.scheduler.solvers import constraints + + +class SameHostConstraint(constraints.BaseFilterConstraint): + """Schedule the instance on the same host as another instance in a set + of instances. + """ + host_filter_cls = affinity_filter.SameHostFilter + + +class DifferentHostConstraint(constraints.BaseFilterConstraint): + """Schedule the instance on a different host from a set of instances.""" + host_filter_cls = affinity_filter.DifferentHostFilter + + +class SimpleCidrAffinityConstraint(constraints.BaseFilterConstraint): + """Schedule the instance on a host with a particular cidr.""" + host_filter_cls = affinity_filter.SimpleCIDRAffinityFilter diff --git a/nova/scheduler/solvers/constraints/aggregate_disk.py b/nova/scheduler/solvers/constraints/aggregate_disk.py new file mode 100644 index 0000000..e998ce8 --- /dev/null +++ b/nova/scheduler/solvers/constraints/aggregate_disk.py @@ -0,0 +1,60 @@ +# Copyright (c) 2014 Cisco Systems Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the +# License for the specific language governing permissions and limitations +# under the License. + +from oslo.config import cfg + +from nova import db +from nova.openstack.common.gettextutils import _ +from nova.openstack.common import log as logging +from nova.scheduler.solvers.constraints import disk_constraint + +CONF = cfg.CONF +CONF.import_opt('disk_allocation_ratio', 'nova.scheduler.filters.disk_filter') + +LOG = logging.getLogger(__name__) + + +class AggregateDiskConstraint(disk_constraint.DiskConstraint): + """AggregateDiskConstraint with per-aggregate disk subscription flag. + + Fall back to global disk_allocation_ratio if no per-aggregate setting + found. + """ + + def _get_disk_allocation_ratio(self, host_state, filter_properties): + context = filter_properties['context'].elevated() + # TODO(uni): DB query in filter is a performance hit, especially for + # system with lots of hosts. Will need a general solution here to fix + # all filters with aggregate DB call things. + metadata = db.aggregate_metadata_get_by_host( + context, host_state.host, key='disk_allocation_ratio') + aggregate_vals = metadata.get('disk_allocation_ratio', set()) + num_values = len(aggregate_vals) + + if num_values == 0: + return CONF.disk_allocation_ratio + + if num_values > 1: + LOG.warn(_("%(num_values)d ratio values found, " + "of which the minimum value will be used."), + {'num_values': num_values}) + + try: + ratio = min(map(float, aggregate_vals)) + except ValueError as e: + LOG.warning(_("Could not decode disk_allocation_ratio: '%s'"), e) + ratio = CONF.disk_allocation_ratio + + return ratio diff --git a/nova/scheduler/solvers/constraints/aggregate_image_properties_isolation.py b/nova/scheduler/solvers/constraints/aggregate_image_properties_isolation.py new file mode 100644 index 0000000..489f9c7 --- /dev/null +++ b/nova/scheduler/solvers/constraints/aggregate_image_properties_isolation.py @@ -0,0 +1,24 @@ +# Copyright (c) 2014 Cisco Systems Inc. 
+# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +from nova.scheduler.filters import aggregate_image_properties_isolation +from nova.scheduler.solvers import constraints + + +class AggregateImagePropertiesIsolationConstraint( + constraints.BaseFilterConstraint): + """AggregateImagePropertiesIsolation works with image properties.""" + host_filter_cls = aggregate_image_properties_isolation.\ + AggregateImagePropertiesIsolation diff --git a/nova/scheduler/solvers/constraints/aggregate_instance_extra_specs.py b/nova/scheduler/solvers/constraints/aggregate_instance_extra_specs.py new file mode 100644 index 0000000..1988fe5 --- /dev/null +++ b/nova/scheduler/solvers/constraints/aggregate_instance_extra_specs.py @@ -0,0 +1,23 @@ +# Copyright (c) 2014 Cisco Systems Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
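
`AggregateDiskConstraint` above (like its RAM and vCPU counterparts later in this patch) resolves the allocation ratio per aggregate, taking the minimum when several aggregates disagree and falling back to the global ratio when the metadata is missing or undecodable. A hedged sketch of just that selection logic, with `aggregate_vals` standing in for the values returned by `db.aggregate_metadata_get_by_host()`:

```python
# Illustrative sketch (not Nova code) of the per-aggregate ratio fallback
# used by AggregateDiskConstraint._get_disk_allocation_ratio() above.


def pick_allocation_ratio(aggregate_vals, global_default):
    if not aggregate_vals:
        # No per-aggregate setting: use the global *_allocation_ratio.
        return global_default
    try:
        # Several aggregates may define the key; the minimum (most
        # conservative) value wins.
        return min(float(v) for v in aggregate_vals)
    except ValueError:
        # Undecodable metadata value: fall back to the global setting.
        return global_default
```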
+ +from nova.scheduler.filters import aggregate_instance_extra_specs +from nova.scheduler.solvers import constraints + + +class AggregateInstanceExtraSpecsConstraint(constraints.BaseFilterConstraint): + """AggregateInstanceExtraSpecsFilter works with InstanceType records.""" + host_filter_cls = aggregate_instance_extra_specs.\ + AggregateInstanceExtraSpecsFilter diff --git a/nova/scheduler/solvers/constraints/aggregate_multitenancy_isolation.py b/nova/scheduler/solvers/constraints/aggregate_multitenancy_isolation.py new file mode 100644 index 0000000..43c26e1 --- /dev/null +++ b/nova/scheduler/solvers/constraints/aggregate_multitenancy_isolation.py @@ -0,0 +1,24 @@ +# Copyright (c) 2014 Cisco Systems Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +from nova.scheduler.filters import aggregate_multitenancy_isolation +from nova.scheduler.solvers import constraints + + +class AggregateMultiTenancyIsolationConstraint( + constraints.BaseFilterConstraint): + """Isolate tenants in specific aggregates.""" + host_filter_cls = aggregate_multitenancy_isolation.\ + AggregateMultiTenancyIsolation diff --git a/nova/scheduler/solvers/constraints/aggregate_ram.py b/nova/scheduler/solvers/constraints/aggregate_ram.py new file mode 100644 index 0000000..c8b3cbe --- /dev/null +++ b/nova/scheduler/solvers/constraints/aggregate_ram.py @@ -0,0 +1,59 @@ +# Copyright (c) 2014 Cisco Systems Inc. +# All Rights Reserved. 
+# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +from oslo.config import cfg + +from nova import db +from nova.openstack.common.gettextutils import _ +from nova.openstack.common import log as logging +from nova.scheduler.solvers.constraints import ram_constraint + +CONF = cfg.CONF +CONF.import_opt('ram_allocation_ratio', 'nova.scheduler.filters.ram_filter') + +LOG = logging.getLogger(__name__) + + +class AggregateRamConstraint(ram_constraint.RamConstraint): + """AggregateRamConstraint with per-aggregate ram subscription flag. + + Fall back to global ram_allocation_ratio if no per-aggregate setting found. + """ + + def _get_ram_allocation_ratio(self, host_state, filter_properties): + context = filter_properties['context'].elevated() + # TODO(uni): DB query in filter is a performance hit, especially for + # system with lots of hosts. Will need a general solution here to fix + # all filters with aggregate DB call things. 
+ metadata = db.aggregate_metadata_get_by_host( + context, host_state.host, key='ram_allocation_ratio') + aggregate_vals = metadata.get('ram_allocation_ratio', set()) + num_values = len(aggregate_vals) + + if num_values == 0: + return CONF.ram_allocation_ratio + + if num_values > 1: + LOG.warn(_("%(num_values)d ratio values found, " + "of which the minimum value will be used."), + {'num_values': num_values}) + + try: + ratio = min(map(float, aggregate_vals)) + except ValueError as e: + LOG.warning(_("Could not decode ram_allocation_ratio: '%s'"), e) + ratio = CONF.ram_allocation_ratio + + return ratio diff --git a/nova/scheduler/solvers/constraints/aggregate_type_affinity.py b/nova/scheduler/solvers/constraints/aggregate_type_affinity.py new file mode 100644 index 0000000..3f05c16 --- /dev/null +++ b/nova/scheduler/solvers/constraints/aggregate_type_affinity.py @@ -0,0 +1,26 @@ +# Copyright (c) 2014 Cisco Systems Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+ +from nova.scheduler.filters import type_filter +from nova.scheduler.solvers import constraints + + +class AggregateTypeAffinityConstraint(constraints.BaseFilterConstraint): + """AggregateTypeAffinityFilter limits instance_type by aggregate. + + Returns True if no instance_type key is set, or if the aggregate metadata + key 'instance_type' has the instance_type name as a value. + """ + host_filter_cls = type_filter.AggregateTypeAffinityFilter diff --git a/nova/scheduler/solvers/constraints/aggregate_vcpu.py b/nova/scheduler/solvers/constraints/aggregate_vcpu.py new file mode 100644 index 0000000..cc6a96b --- /dev/null +++ b/nova/scheduler/solvers/constraints/aggregate_vcpu.py @@ -0,0 +1,59 @@ +# Copyright (c) 2014 Cisco Systems Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +from oslo.config import cfg + +from nova import db +from nova.openstack.common.gettextutils import _ +from nova.openstack.common import log as logging +from nova.scheduler.solvers.constraints import vcpu_constraint + +CONF = cfg.CONF +CONF.import_opt('cpu_allocation_ratio', 'nova.scheduler.filters.core_filter') + +LOG = logging.getLogger(__name__) + + +class AggregateVcpuConstraint(vcpu_constraint.VcpuConstraint): + """AggregateVcpuConstraint with per-aggregate CPU subscription flag. + + Fall back to global cpu_allocation_ratio if no per-aggregate setting found.
+ """ + + def _get_cpu_allocation_ratio(self, host_state, filter_properties): + context = filter_properties['context'].elevated() + # TODO(uni): DB query in filter is a performance hit, especially for + # system with lots of hosts. Will need a general solution here to fix + # all filters with aggregate DB call things. + metadata = db.aggregate_metadata_get_by_host( + context, host_state.host, key='cpu_allocation_ratio') + aggregate_vals = metadata.get('cpu_allocation_ratio', set()) + num_values = len(aggregate_vals) + + if num_values == 0: + return CONF.cpu_allocation_ratio + + if num_values > 1: + LOG.warning(_("%(num_values)d ratio values found, " + "of which the minimum value will be used."), + {'num_values': num_values}) + + try: + ratio = min(map(float, aggregate_vals)) + except ValueError as e: + LOG.warning(_("Could not decode cpu_allocation_ratio: '%s'"), e) + ratio = CONF.cpu_allocation_ratio + + return ratio diff --git a/nova/scheduler/solvers/constraints/availability_zone_constraint.py b/nova/scheduler/solvers/constraints/availability_zone_constraint.py new file mode 100644 index 0000000..8efd521 --- /dev/null +++ b/nova/scheduler/solvers/constraints/availability_zone_constraint.py @@ -0,0 +1,27 @@ +# Copyright (c) 2014 Cisco Systems Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+ +from nova.scheduler.filters import availability_zone_filter +from nova.scheduler.solvers import constraints + + +class AvailabilityZoneConstraint(constraints.BaseFilterConstraint): + """Selects Hosts by availability zone. + + Works with aggregate metadata availability zones, using the key + 'availability_zone' + Note: in theory a compute node can be part of multiple availability_zones + """ + host_filter_cls = availability_zone_filter.AvailabilityZoneFilter diff --git a/nova/scheduler/solvers/constraints/compute_capabilities_constraint.py b/nova/scheduler/solvers/constraints/compute_capabilities_constraint.py new file mode 100644 index 0000000..d72765c --- /dev/null +++ b/nova/scheduler/solvers/constraints/compute_capabilities_constraint.py @@ -0,0 +1,22 @@ +# Copyright (c) 2014 Cisco Systems Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+ +from nova.scheduler.filters import compute_capabilities_filter +from nova.scheduler.solvers import constraints + + +class ComputeCapabilitiesConstraint(constraints.BaseFilterConstraint): + """Hard-coded to work with InstanceType records.""" + host_filter_cls = compute_capabilities_filter.ComputeCapabilitiesFilter diff --git a/nova/scheduler/solvers/constraints/disk_constraint.py b/nova/scheduler/solvers/constraints/disk_constraint.py new file mode 100644 index 0000000..5f56c52 --- /dev/null +++ b/nova/scheduler/solvers/constraints/disk_constraint.py @@ -0,0 +1,77 @@ +# Copyright (c) 2014 Cisco Systems Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+ + +from oslo.config import cfg + +from nova.openstack.common.gettextutils import _ +from nova.openstack.common import log as logging +from nova.scheduler.solvers import constraints + +CONF = cfg.CONF +CONF.import_opt('disk_allocation_ratio', 'nova.scheduler.filters.disk_filter') + +LOG = logging.getLogger(__name__) + + +class DiskConstraint(constraints.BaseLinearConstraint): + """Constraint of the maximum total disk demand acceptable on each host.""" + + def get_constraint_matrix(self, hosts, filter_properties): + num_hosts = len(hosts) + num_instances = filter_properties.get('num_instances') + + constraint_matrix = [[True for j in xrange(num_instances)] + for i in xrange(num_hosts)] + + # get requested disk + instance_type = filter_properties.get('instance_type') or {} + requested_disk = (1024 * (instance_type.get('root_gb', 0) + + instance_type.get('ephemeral_gb', 0)) + + instance_type.get('swap', 0)) + for inst_type_key in ['root_gb', 'ephemeral_gb', 'swap']: + if inst_type_key not in instance_type: + LOG.warn(_("Disk information of requested instance's %s " + "is incomplete; using 0 as the requested size.") % + inst_type_key) + if requested_disk <= 0: + LOG.warn(_("DiskConstraint is skipped because requested " + "instance disk size is 0 or invalid.")) + return constraint_matrix + + for i in xrange(num_hosts): + # get usable disk + free_disk_mb = hosts[i].free_disk_mb + total_usable_disk_mb = hosts[i].total_usable_disk_gb * 1024 + disk_mb_limit = total_usable_disk_mb * CONF.disk_allocation_ratio + used_disk_mb = total_usable_disk_mb - free_disk_mb + usable_disk_mb = disk_mb_limit - used_disk_mb + + acceptable_num_instances = int(usable_disk_mb / requested_disk) + if acceptable_num_instances < num_instances: + inacceptable_num = (num_instances - acceptable_num_instances) + constraint_matrix[i] = ( + [True for j in xrange(acceptable_num_instances)] + + [False for j in xrange(inacceptable_num)]) + + LOG.debug(_("%(host)s can accept %(num)s requested instances " +
"according to DiskConstraint."), + {'host': hosts[i], + 'num': acceptable_num_instances}) + + disk_gb_limit = disk_mb_limit / 1024 + hosts[i].limits['disk_gb'] = disk_gb_limit + + return constraint_matrix diff --git a/nova/scheduler/solvers/constraints/image_props_constraint.py b/nova/scheduler/solvers/constraints/image_props_constraint.py new file mode 100644 index 0000000..31e6599 --- /dev/null +++ b/nova/scheduler/solvers/constraints/image_props_constraint.py @@ -0,0 +1,28 @@ +# Copyright (c) 2014 Cisco Systems Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +from nova.scheduler.filters import image_props_filter +from nova.scheduler.solvers import constraints + + +class ImagePropertiesConstraint(constraints.BaseFilterConstraint): + """Select compute nodes that satisfy instance image properties. + + The ImagePropertiesConstraint selects compute nodes that satisfy + any architecture, hypervisor type, or virtual machine mode properties + specified on the instance's image properties. Image properties are + contained in the image dictionary in the request_spec. + """ + host_filter_cls = image_props_filter.ImagePropertiesFilter diff --git a/nova/scheduler/solvers/constraints/io_ops_constraint.py b/nova/scheduler/solvers/constraints/io_ops_constraint.py new file mode 100644 index 0000000..e93ca6e --- /dev/null +++ b/nova/scheduler/solvers/constraints/io_ops_constraint.py @@ -0,0 +1,59 @@ +# Copyright (c) 2014 Cisco Systems Inc. 
+# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +from oslo.config import cfg + +from nova.openstack.common.gettextutils import _ +from nova.openstack.common import log as logging +from nova.scheduler.solvers import constraints + +LOG = logging.getLogger(__name__) + +CONF = cfg.CONF +CONF.import_opt('max_io_ops_per_host', 'nova.scheduler.filters.io_ops_filter') + + +class IoOpsConstraint(constraints.BaseLinearConstraint): + """A constraint to ensure only those hosts are selected whose number of + concurrent I/O operations is within a set threshold.
+ """ + + def get_constraint_matrix(self, hosts, filter_properties): + max_io_ops = CONF.max_io_ops_per_host + + num_hosts = len(hosts) + num_instances = filter_properties.get('num_instances') + + constraint_matrix = [[True for j in xrange(num_instances)] + for i in xrange(num_hosts)] + + for i in xrange(num_hosts): + num_io_ops = hosts[i].num_io_ops + + acceptable_num_instances = int(max_io_ops - num_io_ops) + if acceptable_num_instances < 0: + acceptable_num_instances = 0 + if acceptable_num_instances < num_instances: + inacceptable_num = (num_instances - acceptable_num_instances) + constraint_matrix[i] = ( + [True for j in xrange(acceptable_num_instances)] + + [False for j in xrange(inacceptable_num)]) + + LOG.debug(_("%(host)s can accept %(num)s requested instances " + "according to IoOpsConstraint."), + {'host': hosts[i], + 'num': acceptable_num_instances}) + + return constraint_matrix diff --git a/nova/scheduler/solvers/constraints/isolated_hosts_constraint.py b/nova/scheduler/solvers/constraints/isolated_hosts_constraint.py new file mode 100644 index 0000000..af0526d --- /dev/null +++ b/nova/scheduler/solvers/constraints/isolated_hosts_constraint.py @@ -0,0 +1,22 @@ +# Copyright (c) 2011-2012 OpenStack Foundation +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+ +from nova.scheduler.filters import isolated_hosts_filter +from nova.scheduler.solvers import constraints + + +class IsolatedHostsConstraint(constraints.BaseFilterConstraint): + """Keep specified images to selected hosts.""" + host_filter_cls = isolated_hosts_filter.IsolatedHostsFilter diff --git a/nova/scheduler/solvers/constraints/json_constraint.py b/nova/scheduler/solvers/constraints/json_constraint.py new file mode 100644 index 0000000..98f37ab --- /dev/null +++ b/nova/scheduler/solvers/constraints/json_constraint.py @@ -0,0 +1,24 @@ +# Copyright (c) 2014 Cisco Systems Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +from nova.scheduler.filters import json_filter +from nova.scheduler.solvers import constraints + + +class JsonConstraint(constraints.BaseFilterConstraint): + """Constraint to allow simple JSON-based grammar for + selecting hosts. + """ + host_filter_cls = json_filter.JsonFilter diff --git a/nova/scheduler/solvers/constraints/metrics_constraint.py b/nova/scheduler/solvers/constraints/metrics_constraint.py new file mode 100644 index 0000000..f86ca96 --- /dev/null +++ b/nova/scheduler/solvers/constraints/metrics_constraint.py @@ -0,0 +1,24 @@ +# Copyright (c) 2014 Cisco Systems Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. 
You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +from nova.scheduler.filters import metrics_filter +from nova.scheduler.solvers import constraints + + +class MetricsConstraint(constraints.BaseFilterConstraint): + """This constraint is used to filter out those hosts which don't have the + corresponding metrics. + """ + host_filter_cls = metrics_filter.MetricsFilter diff --git a/nova/scheduler/solvers/constraints/no_constraint.py b/nova/scheduler/solvers/constraints/no_constraint.py new file mode 100644 index 0000000..ae335c3 --- /dev/null +++ b/nova/scheduler/solvers/constraints/no_constraint.py @@ -0,0 +1,29 @@ +# Copyright (c) 2014 Cisco Systems Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
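Several constraints in this patch (IsolatedHostsConstraint, JsonConstraint, MetricsConstraint above, and RetryConstraint, TrustedHostsConstraint, TypeAffinityConstraint below) are thin `BaseFilterConstraint` wrappers that reuse an existing Nova host filter. `BaseFilterConstraint` itself is not part of this diff; the following is only a plausible sketch of what such a wrapper does, with a stand-in filter class rather than the real base-class code:

```python
class FakeHostFilter:
    """Stand-in for a Nova host filter exposing host_passes()."""

    def host_passes(self, host_state, filter_properties):
        return host_state.get('has_metric', False)


def filter_constraint_matrix(host_filter, hosts, filter_properties,
                             num_instances):
    # A host that fails the wrapped filter gets an all-False row,
    # i.e. it may accept none of the requested instances; a passing
    # host gets an all-True row.
    return [[host_filter.host_passes(host, filter_properties)] * num_instances
            for host in hosts]


hosts = [{'has_metric': True}, {'has_metric': False}]
print(filter_constraint_matrix(FakeHostFilter(), hosts, {}, 2))
# -> [[True, True], [False, False]]
```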
+ +from nova.scheduler.solvers import constraints + + +class NoConstraint(constraints.BaseLinearConstraint): + """No-op constraint that returns empty linear constraint components.""" + + def get_constraint_matrix(self, hosts, filter_properties): + num_hosts = len(hosts) + num_instances = filter_properties.get('num_instances') + + constraint_matrix = [[True for j in xrange(num_instances)] + for i in xrange(num_hosts)] + + return constraint_matrix diff --git a/nova/scheduler/solvers/constraints/num_instances_constraint.py b/nova/scheduler/solvers/constraints/num_instances_constraint.py new file mode 100644 index 0000000..45d1db3 --- /dev/null +++ b/nova/scheduler/solvers/constraints/num_instances_constraint.py @@ -0,0 +1,59 @@ +# Copyright (c) 2014 Cisco Systems Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +from oslo.config import cfg + +from nova.openstack.common.gettextutils import _ +from nova.openstack.common import log as logging +from nova.scheduler.solvers import constraints + +CONF = cfg.CONF +CONF.import_opt("max_instances_per_host", + "nova.scheduler.filters.num_instances_filter") + +LOG = logging.getLogger(__name__) + + +class NumInstancesConstraint(constraints.BaseLinearConstraint): + """Constraint that specifies the maximum number of instances that + each host can launch. 
+ """ + + def get_constraint_matrix(self, hosts, filter_properties): + num_hosts = len(hosts) + num_instances = filter_properties.get('num_instances') + + constraint_matrix = [[True for j in xrange(num_instances)] + for i in xrange(num_hosts)] + + max_instances = CONF.max_instances_per_host + + for i in xrange(num_hosts): + num_host_instances = hosts[i].num_instances + acceptable_num_instances = int(max_instances - num_host_instances) + if acceptable_num_instances < 0: + acceptable_num_instances = 0 + if acceptable_num_instances < num_instances: + inacceptable_num = num_instances - acceptable_num_instances + constraint_matrix[i] = ( + [True for j in xrange(acceptable_num_instances)] + + [False for j in xrange(inacceptable_num)]) + + LOG.debug(_("%(host)s can accept %(num)s requested instances " + "according to NumInstancesConstraint."), + {'host': hosts[i], + 'num': acceptable_num_instances}) + + return constraint_matrix diff --git a/nova/scheduler/solvers/constraints/pci_passthrough_constraint.py b/nova/scheduler/solvers/constraints/pci_passthrough_constraint.py new file mode 100644 index 0000000..886078f --- /dev/null +++ b/nova/scheduler/solvers/constraints/pci_passthrough_constraint.py @@ -0,0 +1,80 @@ +# Copyright (c) 2011-2012 OpenStack Foundation +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+ +import copy + +from nova.openstack.common.gettextutils import _ +from nova.openstack.common import log as logging +from nova.scheduler.solvers import constraints + +LOG = logging.getLogger(__name__) + + +class PciPassthroughConstraint(constraints.BaseLinearConstraint): + """Constraint that schedules instances on a host if the host has devices + to meet the device requests in the 'extra_specs' for the flavor. + + PCI resource tracker provides updated summary information about the + PCI devices for each host, like: + [{"count": 5, "vendor_id": "8086", "product_id": "1520", + "extra_info":'{}'}], + and VM requests PCI devices via PCI requests, like: + [{"count": 1, "vendor_id": "8086", "product_id": "1520",}]. + + The constraint checks if the host passes or not based on this information. + """ + + def _get_acceptable_pci_requests_times(self, max_times_to_try, + pci_requests, host_pci_stats): + acceptable_times = 0 + while acceptable_times < max_times_to_try: + if host_pci_stats.support_requests(pci_requests): + acceptable_times += 1 + host_pci_stats.apply_requests(pci_requests) + else: + break + return acceptable_times + + def get_constraint_matrix(self, hosts, filter_properties): + num_hosts = len(hosts) + num_instances = filter_properties.get('num_instances') + + constraint_matrix = [[True for j in xrange(num_instances)] + for i in xrange(num_hosts)] + + pci_requests = filter_properties.get('pci_requests') + if not pci_requests: + LOG.warn(_("PciPassthroughConstraint check is skipped because " + "requested instance PCI requests is unavailable.")) + return constraint_matrix + + for i in xrange(num_hosts): + host_pci_stats = copy.deepcopy(hosts[i].pci_stats) + acceptable_num_instances = ( + self._get_acceptable_pci_requests_times(num_instances, + pci_requests, host_pci_stats)) + + if acceptable_num_instances < num_instances: + inacceptable_num = num_instances - acceptable_num_instances + constraint_matrix[i] = ( + [True for j in xrange(acceptable_num_instances)] + 
+ [False for j in xrange(inacceptable_num)]) + + LOG.debug(_("%(host)s can accept %(num)s requested instances " + "according to PciPassthroughConstraint."), + {'host': hosts[i], + 'num': acceptable_num_instances}) + + return constraint_matrix diff --git a/nova/scheduler/solvers/constraints/ram_constraint.py b/nova/scheduler/solvers/constraints/ram_constraint.py new file mode 100644 index 0000000..83c42cb --- /dev/null +++ b/nova/scheduler/solvers/constraints/ram_constraint.py @@ -0,0 +1,77 @@ +# Copyright (c) 2014 Cisco Systems Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+ + +from oslo.config import cfg + +from nova.openstack.common.gettextutils import _ +from nova.openstack.common import log as logging +from nova.scheduler.solvers import constraints + +CONF = cfg.CONF +CONF.import_opt('ram_allocation_ratio', 'nova.scheduler.filters.ram_filter') + +LOG = logging.getLogger(__name__) + + +class RamConstraint(constraints.BaseLinearConstraint): + """Constraint of the total ram demand acceptable on each host.""" + + def _get_ram_allocation_ratio(self, host_state, filter_properties): + return CONF.ram_allocation_ratio + + def get_constraint_matrix(self, hosts, filter_properties): + num_hosts = len(hosts) + num_instances = filter_properties.get('num_instances') + + constraint_matrix = [[True for j in xrange(num_instances)] + for i in xrange(num_hosts)] + + # get requested ram + instance_type = filter_properties.get('instance_type') or {} + requested_ram = instance_type.get('memory_mb', 0) + if 'memory_mb' not in instance_type: + LOG.warn(_("No information about requested instances\' RAM size " + "was found, default value (0) is used.")) + if requested_ram <= 0: + LOG.warn(_("RamConstraint is skipped because requested " + "instance RAM size is 0 or invalid.")) + return constraint_matrix + + for i in xrange(num_hosts): + ram_allocation_ratio = self._get_ram_allocation_ratio( + hosts[i], filter_properties) + # get available ram + free_ram_mb = hosts[i].free_ram_mb + total_usable_ram_mb = hosts[i].total_usable_ram_mb + memory_mb_limit = total_usable_ram_mb * ram_allocation_ratio + used_ram_mb = total_usable_ram_mb - free_ram_mb + usable_ram = memory_mb_limit - used_ram_mb + + acceptable_num_instances = int(usable_ram / requested_ram) + if acceptable_num_instances < num_instances: + inacceptable_num = num_instances - acceptable_num_instances + constraint_matrix[i] = ( + [True for j in xrange(acceptable_num_instances)] + + [False for j in xrange(inacceptable_num)]) + + LOG.debug(_("%(host)s can accept %(num)s requested instances " + "according 
to RamConstraint."), + {'host': hosts[i], + 'num': acceptable_num_instances}) + + hosts[i].limits['memory_mb'] = memory_mb_limit + + return constraint_matrix diff --git a/nova/scheduler/solvers/constraints/retry_constraint.py b/nova/scheduler/solvers/constraints/retry_constraint.py new file mode 100644 index 0000000..89cd484 --- /dev/null +++ b/nova/scheduler/solvers/constraints/retry_constraint.py @@ -0,0 +1,22 @@ +# Copyright (c) 2014 Cisco Systems Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +from nova.scheduler.filters import retry_filter +from nova.scheduler.solvers import constraints + + +class RetryConstraint(constraints.BaseFilterConstraint): + """Selects nodes that have not been attempted for scheduling purposes.""" + host_filter_cls = retry_filter.RetryFilter diff --git a/nova/scheduler/solvers/constraints/server_group_affinity_constraint.py b/nova/scheduler/solvers/constraints/server_group_affinity_constraint.py new file mode 100644 index 0000000..85d89c6 --- /dev/null +++ b/nova/scheduler/solvers/constraints/server_group_affinity_constraint.py @@ -0,0 +1,80 @@ +# Copyright (c) 2014 Cisco Systems, Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. 
You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +from nova.scheduler.solvers import constraints + + +class ServerGroupAffinityConstraint(constraints.BaseLinearConstraint): + """Force to select hosts which host given server group.""" + + def __init__(self, *args, **kwargs): + super(ServerGroupAffinityConstraint, self).__init__(*args, **kwargs) + self.policy_name = 'affinity' + + def get_constraint_matrix(self, hosts, filter_properties): + num_hosts = len(hosts) + num_instances = filter_properties.get('num_instances') + + constraint_matrix = [[True for j in xrange(num_instances)] + for i in xrange(num_hosts)] + + policies = filter_properties.get('group_policies', []) + if self.policy_name not in policies: + return constraint_matrix + + group_hosts = filter_properties.get('group_hosts') + + if not group_hosts: + constraint_matrix = [ + ([False for j in xrange(num_instances - 1)] + [True]) + for i in xrange(num_hosts)] + else: + for i in xrange(num_hosts): + if hosts[i].host not in group_hosts: + constraint_matrix[i] = [False for + j in xrange(num_instances)] + + return constraint_matrix + + +class ServerGroupAntiAffinityConstraint(constraints.BaseLinearConstraint): + """Force to select hosts which do not host given server group.""" + + def __init__(self, *args, **kwargs): + super(ServerGroupAntiAffinityConstraint, self).__init__( + *args, **kwargs) + self.policy_name = 'anti-affinity' + + def get_constraint_matrix(self, hosts, filter_properties): + num_hosts = len(hosts) + num_instances = filter_properties.get('num_instances') + + constraint_matrix = [[True for j in xrange(num_instances)] + for i in 
xrange(num_hosts)] + + policies = filter_properties.get('group_policies', []) + if self.policy_name not in policies: + return constraint_matrix + + group_hosts = filter_properties.get('group_hosts') + + for i in xrange(num_hosts): + if hosts[i].host in group_hosts: + constraint_matrix[i] = [False for j in xrange(num_instances)] + else: + constraint_matrix[i] = ([True] + [False for + j in xrange(num_instances - 1)]) + + return constraint_matrix diff --git a/nova/scheduler/solvers/constraints/trusted_hosts_constraint.py b/nova/scheduler/solvers/constraints/trusted_hosts_constraint.py new file mode 100644 index 0000000..1fcd349 --- /dev/null +++ b/nova/scheduler/solvers/constraints/trusted_hosts_constraint.py @@ -0,0 +1,30 @@ +# Copyright (c) 2014 Cisco Systems Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +from nova.scheduler.filters import trusted_filter +from nova.scheduler.solvers import constraints + + +class TrustedHostsConstraint(constraints.BaseFilterConstraint): + """Constraint to add support for Trusted Computing Pools. + + Allows a host to be selected by scheduler only when the integrity (trust) + of that host matches the trust requested in the `extra_specs' for the + flavor. The `extra_specs' will contain a key/value pair where the + key is `trust'. The value of this pair (`trusted'/`untrusted') must + match the integrity of that host (obtained from the Attestation + service) before the task can be scheduled on that host. 
+ """ + host_filter_cls = trusted_filter.TrustedFilter diff --git a/nova/scheduler/solvers/constraints/type_affinity_constraint.py b/nova/scheduler/solvers/constraints/type_affinity_constraint.py new file mode 100644 index 0000000..38c3d72 --- /dev/null +++ b/nova/scheduler/solvers/constraints/type_affinity_constraint.py @@ -0,0 +1,22 @@ +# Copyright (c) 2014 Cisco Systems Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +from nova.scheduler.filters import type_filter +from nova.scheduler.solvers import constraints + + +class TypeAffinityConstraint(constraints.BaseFilterConstraint): + """TypeAffinityConstraint doesn't allow more than one VM type per host.""" + host_filter_cls = type_filter.TypeAffinityFilter diff --git a/nova/scheduler/solvers/constraints/vcpu_constraint.py b/nova/scheduler/solvers/constraints/vcpu_constraint.py new file mode 100644 index 0000000..27f810d --- /dev/null +++ b/nova/scheduler/solvers/constraints/vcpu_constraint.py @@ -0,0 +1,79 @@ +# Copyright (c) 2014 Cisco Systems Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. 
You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +from oslo.config import cfg + +from nova.openstack.common.gettextutils import _ +from nova.openstack.common import log as logging +from nova.scheduler.solvers import constraints + +CONF = cfg.CONF +CONF.import_opt('cpu_allocation_ratio', 'nova.scheduler.filters.core_filter') + +LOG = logging.getLogger(__name__) + + +class VcpuConstraint(constraints.BaseLinearConstraint): + """Constraint of the total vcpu demand acceptable on each host.""" + + def _get_cpu_allocation_ratio(self, host_state, filter_properties): + return CONF.cpu_allocation_ratio + + def get_constraint_matrix(self, hosts, filter_properties): + num_hosts = len(hosts) + num_instances = filter_properties.get('num_instances') + + constraint_matrix = [[True for j in xrange(num_instances)] + for i in xrange(num_hosts)] + + # get requested vcpus + instance_type = filter_properties.get('instance_type') or {} + if not instance_type: + return constraint_matrix + else: + instance_vcpus = instance_type['vcpus'] + if instance_vcpus <= 0: + LOG.warn(_("VcpuConstraint is skipped because requested " + "instance vCPU number is 0 or invalid.")) + return constraint_matrix + + for i in xrange(num_hosts): + cpu_allocation_ratio = self._get_cpu_allocation_ratio( + hosts[i], filter_properties) + # get available vcpus + if not hosts[i].vcpus_total: + LOG.warn(_("vCPUs of %(host)s not set; assuming CPU " + "collection broken."), {'host': hosts[i]}) + continue + else: + vcpus_total = hosts[i].vcpus_total * cpu_allocation_ratio + usable_vcpus = vcpus_total - hosts[i].vcpus_used + + acceptable_num_instances = 
int(usable_vcpus / instance_vcpus) + if acceptable_num_instances < num_instances: + inacceptable_num = num_instances - acceptable_num_instances + constraint_matrix[i] = ( + [True for j in xrange(acceptable_num_instances)] + + [False for j in xrange(inacceptable_num)]) + + LOG.debug(_("%(host)s can accept %(num)s requested instances " + "according to VcpuConstraint."), + {'host': hosts[i], + 'num': acceptable_num_instances}) + + if vcpus_total > 0: + hosts[i].limits['vcpu'] = vcpus_total + + return constraint_matrix diff --git a/nova/scheduler/solvers/costs/__init__.py b/nova/scheduler/solvers/costs/__init__.py index 6b56748..5661ebe 100644 --- a/nova/scheduler/solvers/costs/__init__.py +++ b/nova/scheduler/solvers/costs/__init__.py @@ -23,33 +23,50 @@ from nova import loadables class BaseCost(object): """Base class for cost.""" - def get_cost_matrix(self, hosts, instance_uuids, request_spec, - filter_properties): - """Return the cost matrix. Implement this in a subclass.""" + precedence = 0 + + def cost_multiplier(self): + """How weighted this cost should be. + + Override this method in a subclass, so that the returned value is + read from a configuration option to permit operators specify a + multiplier for the cost. + """ + return 1.0 + + def get_components(self, variables, hosts, filter_properties): + """Return the components of the cost.""" raise NotImplementedError() - def normalize_cost_matrix(self, cost_matrix, lower_bound=0.0, - upper_bound=1.0): - """Normalize the cost matrix. 
By default scale to [0,1].""" - if (lower_bound > upper_bound): - return None - cost_array = [] - normalized_cost_matrix = list(cost_matrix) - for i in range(len(cost_matrix)): - for j in range(len(cost_matrix[i])): - cost_array.append(cost_matrix[i][j]) - max_cost = max(cost_array) - min_cost = min(cost_array) - for i in range(len(normalized_cost_matrix)): - for j in range(len(normalized_cost_matrix[i])): - if max_cost == min_cost: - normalized_cost_matrix[i][j] = (upper_bound - + lower_bound) / 2 - else: - normalized_cost_matrix[i][j] = (lower_bound + - (cost_matrix[i][j] - min_cost) * (upper_bound - - lower_bound) / (max_cost - min_cost)) - return normalized_cost_matrix + +class BaseLinearCost(BaseCost): + """Base class of LP cost.""" + + def __init__(self): + self.variables = [] + self.coefficients = [] + + def _generate_components(self, variables, hosts, filter_properties): + # override in a sub class. + pass + + def get_components(self, variables, hosts, filter_properties): + # deprecated currently, reserve for future use + self._generate_components(variables, hosts, filter_properties) + return (self.variables, self.coefficients) + + def get_extended_cost_matrix(self, hosts, filter_properties): + raise NotImplementedError() + + def get_init_costs(self, hosts, filter_properties): + x_cost_mat = self.get_extended_cost_matrix(hosts, filter_properties) + init_costs = [row[0] for row in x_cost_mat] + return init_costs + + def get_cost_matrix(self, hosts, filter_properties): + x_cost_mat = self.get_extended_cost_matrix(hosts, filter_properties) + cost_matrix = [row[1:] for row in x_cost_mat] + return cost_matrix class CostHandler(loadables.BaseLoader): diff --git a/nova/scheduler/solvers/costs/host_network_affinity_cost.py b/nova/scheduler/solvers/costs/host_network_affinity_cost.py deleted file mode 100644 index 20b18f1..0000000 --- a/nova/scheduler/solvers/costs/host_network_affinity_cost.py +++ /dev/null @@ -1,57 +0,0 @@ -# Copyright (c) 2014 Cisco Systems, Inc. 
-# All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. - -"""Host network cost.""" - -from nova.openstack.common import log as logging -from nova.scheduler.solvers import costs as solvercosts - -from oslo.config import cfg - -LOG = logging.getLogger(__name__) - -CONF = cfg.CONF - - -class HostNetworkAffinityCost(solvercosts.BaseCost): - """The cost is evaluated by the existence of - requested networks in hosts. - """ - - def get_cost_matrix(self, hosts, instance_uuids, request_spec, - filter_properties): - """Calculate the cost matrix.""" - num_hosts = len(hosts) - if instance_uuids: - num_instances = len(instance_uuids) - else: - num_instances = request_spec.get('num_instances', 1) - - costs = [[0 for j in range(num_instances)] - for i in range(num_hosts)] - - requested_networks = filter_properties.get('requested_networks', None) - if requested_networks is None: - return costs - - for i in range(num_hosts): - host_cost = 0 - for network_id, requested_ip, port_id in requested_networks: - if network_id: - if network_id in hosts[i].networks: - host_cost -= 1 - costs[i] = [host_cost for j in range(num_instances)] - - return costs diff --git a/nova/scheduler/solvers/costs/metrics_cost.py b/nova/scheduler/solvers/costs/metrics_cost.py new file mode 100644 index 0000000..a181b00 --- /dev/null +++ b/nova/scheduler/solvers/costs/metrics_cost.py @@ -0,0 +1,105 @@ +# Copyright (c) 2014 Cisco Systems, Inc. +# All Rights Reserved. 
+# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +""" +Metrics Cost. Calculate hosts' costs by their metrics. + +This can compute the costs based on the compute node hosts' various +metrics. The to-be computed metrics and their weighing ratio are specified +in the configuration file as the followings: + + [metrics] + weight_setting = name1=1.0, name2=-1.0 + + The final weight would be name1.value * 1.0 + name2.value * -1.0. +""" + +from oslo.config import cfg + +from nova.scheduler.solvers import costs as solver_costs +from nova.scheduler.solvers.costs import utils as cost_utils +from nova.scheduler import utils + +metrics_cost_opts = [ + cfg.FloatOpt('metrics_cost_multiplier', + default=1.0, + help='Multiplier used for metrics costs.'), +] + +metrics_weight_opts = [ + cfg.FloatOpt('weight_multiplier_of_unavailable', + default=(-1.0), + help='If any one of the metrics set by weight_setting ' + 'is unavailable, the metric weight of the host ' + 'will be set to (minw + (maxw - minw) * m), ' + 'where maxw and minw are the max and min weights ' + 'among all hosts, and m is the multiplier.'), +] + +CONF = cfg.CONF +CONF.register_opts(metrics_cost_opts, group='solver_scheduler') +CONF.register_opts(metrics_weight_opts, group='metrics') +CONF.import_opt('weight_setting', 'nova.scheduler.weights.metrics', + group='metrics') + + +class MetricsCost(solver_costs.BaseLinearCost): + def __init__(self): + self._parse_setting() + + def _parse_setting(self): + self.setting = 
utils.parse_options(CONF.metrics.weight_setting, + sep='=', + converter=float, + name="metrics.weight_setting") + + def cost_multiplier(self): + return CONF.solver_scheduler.metrics_cost_multiplier + + def get_extended_cost_matrix(self, hosts, filter_properties): + num_hosts = len(hosts) + num_instances = filter_properties.get('num_instances') + + host_weights = [] + numeric_values = [] + for host in hosts: + metric_sum = 0.0 + for (name, ratio) in self.setting: + metric = host.metrics.get(name, None) + if metric: + metric_sum += metric.value * ratio + else: + metric_sum = None + break + host_weights.append(metric_sum) + if metric_sum: + numeric_values.append(metric_sum) + if numeric_values: + minval = min(numeric_values) + maxval = max(numeric_values) + weight_of_unavailable = (minval + (maxval - minval) * + CONF.metrics.weight_multiplier_of_unavailable) + for i in range(num_hosts): + if host_weights[i] is None: + host_weights[i] = weight_of_unavailable + else: + host_weights = [0 for i in xrange(num_hosts)] + + extended_cost_matrix = [[(-host_weights[i]) + for j in xrange(num_instances + 1)] + for i in xrange(num_hosts)] + extended_cost_matrix = cost_utils.normalize_cost_matrix( + extended_cost_matrix) + return extended_cost_matrix diff --git a/nova/scheduler/solvers/costs/rack_network_affinity_cost.py b/nova/scheduler/solvers/costs/rack_network_affinity_cost.py deleted file mode 100644 index c8e95ee..0000000 --- a/nova/scheduler/solvers/costs/rack_network_affinity_cost.py +++ /dev/null @@ -1,57 +0,0 @@ -# Copyright (c) 2014 Cisco Systems, Inc. -# All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. 
You may obtain
-# a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""Rack network cost."""
-
-from nova.openstack.common import log as logging
-from nova.scheduler.solvers import costs as solvercosts
-
-from oslo.config import cfg
-
-LOG = logging.getLogger(__name__)
-
-CONF = cfg.CONF
-
-
-class RackNetworkAffinityCost(solvercosts.BaseCost):
-    """The cost is evaluated by the existence of
-    requested networks in racks.
-    """
-
-    def get_cost_matrix(self, hosts, instance_uuids, request_spec,
-                        filter_properties):
-        """Calculate the cost matrix."""
-        num_hosts = len(hosts)
-        if instance_uuids:
-            num_instances = len(instance_uuids)
-        else:
-            num_instances = request_spec.get('num_instances', 1)
-
-        costs = [[0 for j in range(num_instances)]
-                 for i in range(num_hosts)]
-
-        requested_networks = filter_properties.get('requested_networks', None)
-        if requested_networks is None:
-            return costs
-
-        for i in range(num_hosts):
-            host_cost = 0
-            for network_id, requested_ip, port_id in requested_networks:
-                if network_id:
-                    if network_id in sum(hosts[i].rack_networks.values(), []):
-                        host_cost -= 1
-            costs[i] = [host_cost for j in range(num_instances)]
-
-        return costs
diff --git a/nova/scheduler/solvers/costs/ram_cost.py b/nova/scheduler/solvers/costs/ram_cost.py
index 7415a25..5b15329 100644
--- a/nova/scheduler/solvers/costs/ram_cost.py
+++ b/nova/scheduler/solvers/costs/ram_cost.py
@@ -13,33 +13,65 @@
 # License for the specific language governing permissions and limitations
 # under the License.
 
-"""Ram cost."""
+"""
+RAM Cost. Calculate instance placement costs by hosts' RAM usage.
 
-from nova.openstack.common import log as logging
-from nova.scheduler.solvers import costs as solvercosts
+The default is to spread instances across all hosts evenly. If you prefer
+stacking, you can set the 'ram_cost_multiplier' option to a negative
+number and the cost has the opposite effect of the default.
+"""
 
 from oslo.config import cfg
 
-LOG = logging.getLogger(__name__)
+from nova.openstack.common.gettextutils import _
+from nova.openstack.common import log as logging
+from nova.scheduler.solvers import costs as solver_costs
+from nova.scheduler.solvers.costs import utils
+
+ram_cost_opts = [
+    cfg.FloatOpt('ram_cost_multiplier',
+                 default=1.0,
+                 help='Multiplier used for ram costs. Negative '
+                      'numbers mean to stack vs spread.'),
+]
 
 CONF = cfg.CONF
+CONF.register_opts(ram_cost_opts, group='solver_scheduler')
+
+LOG = logging.getLogger(__name__)
 
 
-class RamCost(solvercosts.BaseCost):
-    """The cost is evaluated by the production of hosts' free memory
-    and a pre-defined multiplier.
-    """
+class RamCost(solver_costs.BaseLinearCost):
 
-    def get_cost_matrix(self, hosts, instance_uuids, request_spec,
-                        filter_properties):
-        """Calculate the cost matrix."""
+    def cost_multiplier(self):
+        return CONF.solver_scheduler.ram_cost_multiplier
+
+    def get_extended_cost_matrix(self, hosts, filter_properties):
         num_hosts = len(hosts)
-        if instance_uuids:
-            num_instances = len(instance_uuids)
+        num_instances = filter_properties.get('num_instances')
+
+        instance_type = filter_properties.get('instance_type') or {}
+        requested_ram = instance_type.get('memory_mb', 0)
+        if 'memory_mb' not in instance_type:
+            LOG.warn(_("No information about the requested instances' RAM "
+                       "size was found; the default value (0) is used."))
+
+        extended_cost_matrix = [[0 for j in xrange(num_instances + 1)]
+                                for i in xrange(num_hosts)]
+
+        if requested_ram == 0:
+            extended_cost_matrix = [
+                [(-hosts[i].free_ram_mb)
+                 for j in xrange(num_instances + 1)]
+                for i in xrange(num_hosts)]
         else:
-            num_instances = request_spec.get('num_instances', 1)
-
-        costs = [[hosts[i].free_ram_mb * CONF.ram_weight_multiplier
-                  for j in range(num_instances)] for i in range(num_hosts)]
-
-        return costs
+            # Use an integer approximation here to avoid scaling problems
+            # after normalization, in case the free RAM values of all hosts
+            # are very small.
+            extended_cost_matrix = [
+                [-int(hosts[i].free_ram_mb / requested_ram) + j
+                 for j in xrange(num_instances + 1)]
+                for i in xrange(num_hosts)]
+        extended_cost_matrix = utils.normalize_cost_matrix(
+            extended_cost_matrix)
+        return extended_cost_matrix
diff --git a/nova/scheduler/solvers/costs/utils.py b/nova/scheduler/solvers/costs/utils.py
new file mode 100644
index 0000000..abe17e2
--- /dev/null
+++ b/nova/scheduler/solvers/costs/utils.py
@@ -0,0 +1,43 @@
+# Copyright (c) 2014 Cisco Systems, Inc.
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+"""Utility methods for scheduler solver costs."""
+
+
+def normalize_cost_matrix(cost_matrix):
+    """A special normalization method for cost matrices: it preserves
+    the linear relationships among the current host states (the first
+    column of the matrix) while scaling their maximum absolute value to 1.
+    Note that, as a result, the matrix is not scaled to a fixed range.
+    """
+    normalized_matrix = cost_matrix
+
+    if not cost_matrix:
+        return normalized_matrix
+
+    first_column = [row[0] for row in cost_matrix]
+    maxval = max(first_column)
+    minval = min(first_column)
+    maxabs = max(abs(maxval), abs(minval))
+
+    if maxabs == 0:
+        return normalized_matrix
+
+    scale_factor = 1.0 / maxabs
+    for i in xrange(len(cost_matrix)):
+        for j in xrange(len(cost_matrix[i])):
+            normalized_matrix[i][j] = cost_matrix[i][j] * scale_factor
+
+    return normalized_matrix
diff --git a/nova/scheduler/solvers/costs/volume_affinity_cost.py b/nova/scheduler/solvers/costs/volume_affinity_cost.py
deleted file mode 100644
index 70fb6ff..0000000
--- a/nova/scheduler/solvers/costs/volume_affinity_cost.py
+++ /dev/null
@@ -1,94 +0,0 @@
-# Copyright (c) 2014 Cisco Systems Inc.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License.
You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. - -"""Volume affinity cost. - This pluggable cost provides a way to schedule a VM on a host that has - a specified volume. In the cost matrix used for the linear programming - optimization problem, the entries for the host that contains the - specified volume is given as 0, and 1 for other hosts. So all the other - hosts have equal cost and are considered equal. Currently this solution - allows you to provide only one volume_id as a hint, so this solution - works best for scheduling a single VM. Another limitation is that the - user needs to have an admin context to obtain the host information from - the cinderclient. Without the knowledge of the host containing the volume - all hosts will have the same cost of 1. 
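As a reviewer's note: the behavior of the `normalize_cost_matrix` helper added in `nova/scheduler/solvers/costs/utils.py` above can be illustrated with a small standalone reimplementation. Python 3 syntax is used here for the sketch (the tree itself targets Python 2's `xrange`), and the input numbers are illustrative only:

```python
def normalize_cost_matrix(cost_matrix):
    # Scale the whole matrix so that the maximum absolute value of the
    # first column (the current host states) becomes 1.0, preserving the
    # linear relationships between rows. The matrix is therefore NOT
    # mapped onto a fixed range such as [0, 1].
    if not cost_matrix:
        return cost_matrix
    first_column = [row[0] for row in cost_matrix]
    maxabs = max(abs(max(first_column)), abs(min(first_column)))
    if maxabs == 0:
        return cost_matrix
    scale = 1.0 / maxabs
    return [[value * scale for value in row] for row in cost_matrix]


# Three hosts, e.g. negated free-RAM costs from RamCost above.
matrix = [[-4096, -4095], [-1024, -1023], [2048, 2049]]
normalized = normalize_cost_matrix(matrix)
print(normalized[0][0])  # -1.0
print(normalized[1][0])  # -0.25
print(normalized[2][0])  # 0.5
```

Because only the first column's extremes drive the scale factor, later columns (the "extended" instance-count offsets) keep their relative ordering against the host-state column, which is what the solver relies on.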
-""" - -from cinderclient import exceptions as client_exceptions - -from nova.openstack.common.gettextutils import _ -from nova.openstack.common import log as logging -from nova.scheduler import driver as scheduler_driver -from nova.scheduler.solvers import costs as solvercosts -import nova.volume.cinder as volumecinder - -LOG = logging.getLogger(__name__) - - -class VolumeAffinityCost(solvercosts.BaseCost): - """The cost is 0 for same-as-volume host and 1 otherwise.""" - - hint_name = 'affinity_volume_id' - - def get_cost_matrix(self, hosts, instance_uuids, request_spec, - filter_properties): - num_hosts = len(hosts) - if instance_uuids: - num_instances = len(instance_uuids) - else: - num_instances = request_spec.get('num_instances', 1) - - context = filter_properties.get('context') - scheduler_hints = filter_properties.get('scheduler_hints', None) - - cost_matrix = [[1.0 for j in range(num_instances)] - for i in range(num_hosts)] - - if scheduler_hints is not None: - volume_id = scheduler_hints.get(self.hint_name, None) - LOG.debug(_("volume id: %s") % volume_id) - if volume_id: - volume = None - volume_host = None - try: - volume = volumecinder.cinderclient(context).volumes.get( - volume_id) - if volume: - volume = volumecinder.cinderadminclient().volumes.get( - volume_id) - volume_host = getattr(volume, 'os-vol-host-attr:host', - None) - LOG.debug(_("volume host: %s") % volume_host) - except client_exceptions.NotFound: - LOG.warning( - _("volume with provided id ('%s') was not found") - % volume_id) - except client_exceptions.Unauthorized: - LOG.warning(_("Failed to retrieve volume %s: unauthorized") - % volume_id) - except: - LOG.warning(_("Failed to retrieve volume due to an error")) - - if volume_host: - for i in range(num_hosts): - host_state = hosts[i] - if host_state.host == volume_host: - cost_matrix[i] = [0.0 - for j in range(num_instances)] - LOG.debug(_("this host: %(h1)s volume host: %(h2)s") % - {"h1": host_state.host, "h2": volume_host}) - else: - 
LOG.warning(_("Cannot find volume host."))
-        return cost_matrix
diff --git a/nova/scheduler/solvers/fast_solver.py b/nova/scheduler/solvers/fast_solver.py
new file mode 100644
index 0000000..f2fb11f
--- /dev/null
+++ b/nova/scheduler/solvers/fast_solver.py
@@ -0,0 +1,138 @@
+# Copyright (c) 2015 Cisco Systems, Inc.
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import operator
+
+from nova.scheduler import solvers as scheduler_solver
+
+
+class FastSolver(scheduler_solver.BaseHostSolver):
+
+    def __init__(self):
+        super(FastSolver, self).__init__()
+        self.cost_classes = self._get_cost_classes()
+        self.constraint_classes = self._get_constraint_classes()
+
+    def _get_cost_matrix(self, hosts, filter_properties):
+        num_hosts = len(hosts)
+        num_instances = filter_properties.get('num_instances', 1)
+        solver_cache = filter_properties['solver_cache']
+        # initialize cost matrix
+        cost_matrix = [[0 for j in xrange(num_instances)]
+                       for i in xrange(num_hosts)]
+        solver_cache['cost_matrix'] = cost_matrix
+        cost_objects = [cost() for cost in self.cost_classes]
+        cost_objects.sort(key=lambda cost: cost.precedence)
+        precedence_level = 0
+        for cost_object in cost_objects:
+            if cost_object.precedence > precedence_level:
+                # update cost matrix in the solver cache
+                solver_cache['cost_matrix'] = cost_matrix
+                precedence_level = cost_object.precedence
+            cost_multiplier = cost_object.cost_multiplier()
+            this_cost_mat = cost_object.get_cost_matrix(hosts,
+                                                        filter_properties)
+            if not this_cost_mat:
+                continue
+            cost_matrix = [[cost_matrix[i][j] +
+                            this_cost_mat[i][j] * cost_multiplier
+                            for j in xrange(num_instances)]
+                           for i in xrange(num_hosts)]
+        # update cost matrix in the solver cache
+        solver_cache['cost_matrix'] = cost_matrix
+
+        return cost_matrix
+
+    def _get_constraint_matrix(self, hosts, filter_properties):
+        num_hosts = len(hosts)
+        num_instances = filter_properties.get('num_instances', 1)
+        solver_cache = filter_properties['solver_cache']
+        # initialize constraint_matrix
+        constraint_matrix = [[True for j in xrange(num_instances)]
+                             for i in xrange(num_hosts)]
+        solver_cache['constraint_matrix'] = constraint_matrix
+        constraint_objects = [cons() for cons in self.constraint_classes]
+        constraint_objects.sort(key=lambda cons: cons.precedence)
+        precedence_level = 0
+        for constraint_object in constraint_objects:
+            if constraint_object.precedence > precedence_level:
+                # update constraint matrix in the solver cache
+                solver_cache['constraint_matrix'] = constraint_matrix
+                precedence_level = constraint_object.precedence
+            this_cons_mat = constraint_object.get_constraint_matrix(
+                hosts, filter_properties)
+            if not this_cons_mat:
+                continue
+            constraint_matrix = [[constraint_matrix[i][j] &
+                                  this_cons_mat[i][j]
+                                  for j in xrange(num_instances)]
+                                 for i in xrange(num_hosts)]
+        # update constraint matrix in the solver cache
+        solver_cache['constraint_matrix'] = constraint_matrix
+
+        return constraint_matrix
+
+    def solve(self, hosts, filter_properties):
+        host_instance_combinations = []
+
+        num_instances = filter_properties['num_instances']
+        num_hosts = len(hosts)
+
+        instance_uuids = filter_properties.get('instance_uuids') or [
+            '(unknown_uuid)' + str(i) for i in xrange(num_instances)]
+
+        filter_properties.setdefault('solver_cache', {})
+        filter_properties['solver_cache'].update(
+            {'cost_matrix': [],
+             'constraint_matrix': []})
+
+        cost_matrix = self._get_cost_matrix(hosts, filter_properties)
+        constraint_matrix = self._get_constraint_matrix(hosts,
+                                                        filter_properties)
+
+        placement_cost_tuples = []
+        for i in xrange(num_hosts):
+            for j in xrange(num_instances):
+                if constraint_matrix[i][j]:
+                    host_idx = i
+                    inst_num = j + 1
+                    cost_val = cost_matrix[i][j]
+                    placement_cost_tuples.append(
+                        (host_idx, inst_num, cost_val))
+
+        sorted_placement_costs = sorted(placement_cost_tuples,
                                        key=operator.itemgetter(2))
+
+        host_inst_alloc = [0 for i in xrange(num_hosts)]
+        allocated_inst_num = 0
+        for (host_idx, inst_num, cost_val) in sorted_placement_costs:
+            delta = inst_num - host_inst_alloc[host_idx]
+
+            if (delta <= 0) or (allocated_inst_num + delta > num_instances):
+                continue
+
+            host_inst_alloc[host_idx] += delta
+            allocated_inst_num += delta
+
+            if allocated_inst_num == num_instances:
+                break
+
+        instances_iter = iter(instance_uuids)
+        for i in xrange(len(host_inst_alloc)):
+            num = host_inst_alloc[i]
+            for n in xrange(num):
+                host_instance_combinations.append(
+                    (hosts[i], instances_iter.next()))
+
+        return host_instance_combinations
diff --git a/nova/scheduler/solvers/hosts_pulp_solver.py b/nova/scheduler/solvers/hosts_pulp_solver.py
deleted file mode 100644
index ab6c135..0000000
--- a/nova/scheduler/solvers/hosts_pulp_solver.py
+++ /dev/null
@@ -1,224 +0,0 @@
-# Copyright (c) 2014 Cisco Systems Inc.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
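The core of the new FastSolver above, which collects feasible `(host, count, cost)` placements, sorts them by cost, then greedily raises per-host allocations, can be condensed into a standalone Python 3 sketch. Plain lists stand in for host states, the solver cache and cost/constraint plug-ins are omitted, and entry `[i][j]` means "cost of giving host `i` a total of `j + 1` instances":

```python
import operator


def fast_solve(cost_matrix, constraint_matrix, num_instances):
    # Collect every feasible (host, instance-count, cost) placement.
    num_hosts = len(cost_matrix)
    placements = [(i, j + 1, cost_matrix[i][j])
                  for i in range(num_hosts)
                  for j in range(num_instances)
                  if constraint_matrix[i][j]]
    # Cheapest placements first.
    placements.sort(key=operator.itemgetter(2))

    alloc = [0] * num_hosts
    total = 0
    for host, count, _cost in placements:
        # Each tuple offers to raise this host's allocation to `count`.
        delta = count - alloc[host]
        if delta <= 0 or total + delta > num_instances:
            continue
        alloc[host] += delta
        total += delta
        if total == num_instances:
            break
    return alloc


# Two hosts, three instances; host 0 is cheaper, but the constraint
# matrix only allows it up to two instances.
cost = [[1, 2, 3], [4, 5, 6]]
feasible = [[True, True, False], [True, True, True]]
print(fast_solve(cost, feasible, 3))  # [2, 1]
```

This illustrates the trade-off the driver makes: a single sort plus one greedy pass instead of an LP solve, at the price of not guaranteeing a globally optimal placement.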
- -""" -A reference solver implementation that models the scheduling problem as a -Linear Programming (LP) problem using the PULP modeling framework. This -implementation includes disk and memory constraints, and uses the free ram as -a cost metric to maximize or minimize for the LP problem. -""" - -from oslo.config import cfg -from pulp import constants -from pulp import pulp - -from nova.openstack.common.gettextutils import _ -from nova.openstack.common import log as logging -from nova.scheduler import solvers as novasolvers - -LOG = logging.getLogger(__name__) - -CONF = cfg.CONF -CONF.import_opt('disk_allocation_ratio', 'nova.scheduler.filters.disk_filter') -CONF.import_opt('ram_allocation_ratio', 'nova.scheduler.filters.ram_filter') -CONF.import_opt('ram_weight_multiplier', 'nova.scheduler.weights.ram') - - -class HostsPulpSolver(novasolvers.BaseHostSolver): - """A LP based constraint solver implemented using PULP modeler.""" - - def host_solve(self, hosts, instance_uuids, request_spec, - filter_properties): - """This method returns a list of tuples - (host, instance_uuid) - that are returned by the solver. Here the assumption is that - all instance_uuids have the same requirement as specified in - filter_properties - """ - host_instance_tuples_list = [] - if instance_uuids: - num_instances = len(instance_uuids) - else: - num_instances = request_spec.get('num_instances', 1) - instance_uuids = ['unset_uuid%s' % i - for i in xrange(num_instances)] - - num_hosts = len(hosts) - - host_ids = ['Host%s' % i for i in range(num_hosts)] - LOG.debug(_("All Hosts: %s") % [h.host for h in hosts]) - - for host in hosts: - LOG.debug(_("Host state: %s") % host) - - host_id_dict = dict(zip(host_ids, hosts)) - - instances = ['Instance%s' % i for i in range(num_instances)] - - instance_id_dict = dict(zip(instances, instance_uuids)) - - # supply is a dictionary for the number of units of - # resource for each Host. 
- # Currently using only the disk_mb and memory_mb - # as the two resources to satisfy. Need to eventually be able to - # plug-in different resources. An example supply dictionary: - # supply = {"Host1": [1000, 1000], - # "Host2": [4000, 1000]} - - supply = dict((host_ids[i], - [self._get_usable_disk_mb(hosts[i]), - self._get_usable_memory_mb(hosts[i]), ]) - for i in range(len(host_ids))) - - number_of_resource_types_per_host = 2 - - required_disk_mb = self._get_required_disk_mb(filter_properties) - required_memory_mb = self._get_required_memory_mb(filter_properties) - - # demand is a dictionary for the number of - # units of resource required for each Instance. - # An example demand dictionary: - # demand = {"Instance0":[200, 300], - # "Instance1":[900, 100], - # "Instance2":[1800, 200], - # "Instance3":[200, 300], - # "Instance4":[700, 800], } - # However for the current scenario, all instances to be scheduled - # per request have the same requirements. Need to eventually - # to support requests to specify different instance requirements - - demand = dict((instances[i], - [required_disk_mb, required_memory_mb, ]) - for i in range(num_instances)) - - # Creates a list of costs of each Host-Instance assignment - # Currently just like the nova.scheduler.weights.ram.RAMWeigher, - # using host_state.free_ram_mb * ram_weight_multiplier - # as the cost. A negative ram_weight_multiplier means to stack, - # vs spread. - # An example costs list: - # costs = [ # Instances - # # 1 2 3 4 5 - # [2, 4, 5, 2, 1], # A Hosts - # [3, 1, 3, 2, 3] # B - # ] - # Multiplying -1 as we want to use the same behavior of - # ram_weight_multiplier as used by ram weigher. 
- costs = [[-1 * host.free_ram_mb * - CONF.ram_weight_multiplier - for i in range(num_instances)] - for host in hosts] - - costs = pulp.makeDict([host_ids, instances], costs, 0) - - # The PULP LP problem variable used to add all the problem data - prob = pulp.LpProblem("Host Instance Scheduler Problem", - constants.LpMinimize) - - all_host_instance_tuples = [(w, b) - for w in host_ids - for b in instances] - - vars = pulp.LpVariable.dicts("IA", (host_ids, instances), - 0, 1, constants.LpInteger) - - # The objective function is added to 'prob' first - prob += (pulp.lpSum([vars[w][b] * costs[w][b] - for (w, b) in all_host_instance_tuples]), - "Sum_of_Host_Instance_Scheduling_Costs") - - # The supply maximum constraints are added to - # prob for each supply node (Host) - for w in host_ids: - for i in range(number_of_resource_types_per_host): - prob += (pulp.lpSum([vars[w][b] * demand[b][i] - for b in instances]) - <= supply[w][i], - "Sum_of_Resource_%s" % i + "_provided_by_Host_%s" % w) - - # The number of Hosts required per Instance, in this case it is only 1 - for b in instances: - prob += (pulp.lpSum([vars[w][b] for w in host_ids]) - == 1, "Sum_of_Instance_Assignment%s" % b) - - # The demand minimum constraints are added to prob for - # each demand node (Instance) - for b in instances: - for j in range(number_of_resource_types_per_host): - prob += (pulp.lpSum([vars[w][b] * demand[b][j] - for w in host_ids]) - >= demand[b][j], - "Sum_of_Resource_%s" % j + "_required_by_Instance_%s" % b) - - # The problem is solved using PuLP's choice of Solver - prob.solve() - - if pulp.LpStatus[prob.status] == 'Optimal': - for v in prob.variables(): - if v.name.startswith('IA'): - (host_id, instance_id) = v.name.lstrip('IA').lstrip( - '_').split('_') - if v.varValue == 1.0: - host_instance_tuples_list.append( - (host_id_dict[host_id], - instance_id_dict[instance_id])) - - return host_instance_tuples_list - - def _get_usable_disk_mb(self, host_state): - """This method returns the 
usable disk in mb for the given host. - Takes into account the disk allocation ratio. - (virtual disk to physical disk allocation ratio). - """ - free_disk_mb = host_state.free_disk_mb - total_usable_disk_mb = host_state.total_usable_disk_gb * 1024 - disk_allocation_ratio = CONF.disk_allocation_ratio - disk_mb_limit = total_usable_disk_mb * disk_allocation_ratio - used_disk_mb = total_usable_disk_mb - free_disk_mb - usable_disk_mb = disk_mb_limit - used_disk_mb - return usable_disk_mb - - def _get_required_disk_mb(self, filter_properties): - """This method returns the required disk in mb from - the given filter_properties dictionary object. - """ - requested_disk_mb = 0 - instance_type = filter_properties.get('instance_type') - if instance_type is not None: - requested_disk_mb = 1024 * (instance_type.get('root_gb', 0) + - instance_type.get('ephemeral_gb', 0)) - return requested_disk_mb - - def _get_usable_memory_mb(self, host_state): - """This method returns the usable memory in mb for the given host. - Takes into account the ram allocation ratio. - (Virtual ram to physical ram allocation ratio). 
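The over-subscription arithmetic used by the removed `_get_usable_memory_mb` helper above (and mirrored by its disk counterpart) is worth spelling out: the schedulable limit is the physical total multiplied by the allocation ratio, and the usable amount is that limit minus what is already consumed. A minimal sketch with illustrative numbers (1.5 is Nova's historical default for `ram_allocation_ratio`):

```python
def usable_memory_mb(total_usable_ram_mb, free_ram_mb, ram_allocation_ratio):
    # The schedulable limit is physical RAM scaled by the
    # over-subscription ratio.
    memory_mb_limit = total_usable_ram_mb * ram_allocation_ratio
    # RAM already consumed counts against that limit.
    used_ram_mb = total_usable_ram_mb - free_ram_mb
    return memory_mb_limit - used_ram_mb


# A 32 GiB host with 8 GiB free under a 1.5 ratio can still accept
# 24 GiB of instance RAM: the 8 GiB physically free plus 16 GiB of
# over-subscription headroom.
print(usable_memory_mb(32768, 8192, 1.5))  # 24576.0
```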
- """ - free_ram_mb = host_state.free_ram_mb - total_usable_ram_mb = host_state.total_usable_ram_mb - ram_allocation_ratio = CONF.ram_allocation_ratio - memory_mb_limit = total_usable_ram_mb * ram_allocation_ratio - used_ram_mb = total_usable_ram_mb - free_ram_mb - usable_ram_mb = memory_mb_limit - used_ram_mb - return usable_ram_mb - - def _get_required_memory_mb(self, filter_properties): - """This method returns the required memory in mb from - the given filter_properties dictionary object - """ - required_ram_mb = 0 - instance_type = filter_properties.get('instance_type') - if instance_type is not None: - required_ram_mb = instance_type.get('memory_mb', 0) - return required_ram_mb diff --git a/nova/scheduler/solvers/linearconstraints/__init__.py b/nova/scheduler/solvers/linearconstraints/__init__.py deleted file mode 100644 index 0642379..0000000 --- a/nova/scheduler/solvers/linearconstraints/__init__.py +++ /dev/null @@ -1,91 +0,0 @@ -# Copyright (c) 2014 Cisco Systems, Inc. -# All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. - -""" -Linear constraints for scheduler linear constraint solvers -""" - -from nova.compute import api as compute -from nova import loadables - - -class BaseLinearConstraint(object): - """Base class for linear constraint.""" - # The linear constraint should be formed as: - # coeff_vector * var_vector' - # where is ==, >, >=, <, <=, !=, etc. 
- # For convenience, the can be merged into left-hand-side, - # thus the right-hand-side is always 0. - def __init__(self, variables, hosts, instance_uuids, request_spec, - filter_properties): - pass - - def get_coefficient_vectors(self, variables, hosts, instance_uuids, - request_spec, filter_properties): - """Retruns a list of coefficient vectors.""" - raise NotImplementedError() - - def get_variable_vectors(self, variables, hosts, instance_uuids, - request_spec, filter_properties): - """Returns a list of variable vectors.""" - raise NotImplementedError() - - def get_operations(self, variables, hosts, instance_uuids, request_spec, - filter_properties): - """Returns a list of operations.""" - raise NotImplementedError() - -class AffinityConstraint(BaseLinearConstraint): - def __init__(self, variables, hosts, instance_uuids, request_spec, filter_properties): - self.compute_api = compute.API() - [self.num_hosts, self.num_instances] = self._get_host_instance_nums(hosts,instance_uuids,request_spec) - def _get_host_instance_nums(self,hosts,instance_uuids,request_spec): - """This method calculates number of hosts and instances""" - num_hosts = len(hosts) - if instance_uuids: - num_instances = len(instance_uuids) - else: - num_instances = request_spec.get('num_instances', 1) - return [num_hosts,num_instances] - -class ResourceAllocationConstraint(BaseLinearConstraint): - """Base class of resource allocation constraints.""" - - def __init__(self, variables, hosts, instance_uuids, request_spec, - filter_properties): - [self.num_hosts, self.num_instances] = self._get_host_instance_nums( - hosts, instance_uuids, request_spec) - - def _get_host_instance_nums(self, hosts, instance_uuids, request_spec): - """This method calculates number of hosts and instances.""" - num_hosts = len(hosts) - if instance_uuids: - num_instances = len(instance_uuids) - else: - num_instances = request_spec.get('num_instances', 1) - return [num_hosts, num_instances] - - -class 
LinearConstraintHandler(loadables.BaseLoader): - def __init__(self): - super(LinearConstraintHandler, self).__init__(BaseLinearConstraint) - - -def all_linear_constraints(): - """Return a list of lineear constraint classes found in this directory. - This method is used as the default for available linear constraints for - scheduler and returns a list of all linearconstraint classes available. - """ - return LinearConstraintHandler().get_all_classes() diff --git a/nova/scheduler/solvers/linearconstraints/active_host_constraint.py b/nova/scheduler/solvers/linearconstraints/active_host_constraint.py deleted file mode 100644 index 21feb68..0000000 --- a/nova/scheduler/solvers/linearconstraints/active_host_constraint.py +++ /dev/null @@ -1,85 +0,0 @@ -# Copyright (c) 2014 Cisco Systems Inc. -# All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. - -from oslo.config import cfg - -from nova.openstack.common.gettextutils import _ -from nova.openstack.common import log as logging -from nova.scheduler.solvers import linearconstraints -from nova import servicegroup - -CONF = cfg.CONF - -LOG = logging.getLogger(__name__) - - -class ActiveHostConstraint(linearconstraints.BaseLinearConstraint): - """Constraint that only allows active hosts to be selected.""" - - # The linear constraint should be formed as: - # coeff_matrix * var_matrix' (operator) (constants) - # where (operator) is ==, >, >=, <, <=, !=, etc. 
- # For convenience, the (constants) is merged into left-hand-side, - # thus the right-hand-side is 0. - - def __init__(self, variables, hosts, instance_uuids, request_spec, - filter_properties): - self.servicegroup_api = servicegroup.API() - [self.num_hosts, self.num_instances] = self._get_host_instance_nums( - hosts, instance_uuids, request_spec) - - def _get_host_instance_nums(self, hosts, instance_uuids, request_spec): - """This method calculates number of hosts and instances.""" - num_hosts = len(hosts) - if instance_uuids: - num_instances = len(instance_uuids) - else: - num_instances = request_spec.get('num_instances', 1) - return [num_hosts, num_instances] - - def get_coefficient_vectors(self, variables, hosts, instance_uuids, - request_spec, filter_properties): - """Calculate the coeffivient vectors.""" - # Coefficients are 0 for active hosts and 1 otherwise - coefficient_matrix = [] - for host in hosts: - service = host.service - if service['disabled'] or not self.servicegroup_api.service_is_up( - service): - coefficient_matrix.append([1 for j in range( - self.num_instances)]) - LOG.debug(_("%s is not active") % host.host) - else: - coefficient_matrix.append([0 for j in range( - self.num_instances)]) - LOG.debug(_("%s is ok") % host.host) - return coefficient_matrix - - def get_variable_vectors(self, variables, hosts, instance_uuids, - request_spec, filter_properties): - """Reorganize the variables.""" - # The variable_matrix[i,j] denotes the relationship between - # host[i] and instance[j]. - variable_matrix = [] - variable_matrix = [[variables[i][j] for j in range( - self.num_instances)] for i in range(self.num_hosts)] - return variable_matrix - - def get_operations(self, variables, hosts, instance_uuids, request_spec, - filter_properties): - """Set operations for each constraint function.""" - # Operations are '=='. 
- operations = [(lambda x: x == 0) for i in range(self.num_hosts)] - return operations diff --git a/nova/scheduler/solvers/linearconstraints/all_hosts_constraint.py b/nova/scheduler/solvers/linearconstraints/all_hosts_constraint.py deleted file mode 100644 index 31f101b..0000000 --- a/nova/scheduler/solvers/linearconstraints/all_hosts_constraint.py +++ /dev/null @@ -1,83 +0,0 @@ -# Copyright (c) 2014 Cisco Systems, Inc. -# All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. - -from nova.openstack.common.gettextutils import _ -from nova.openstack.common import log as logging -from nova.scheduler.solvers import linearconstraints -from nova import servicegroup - -LOG = logging.getLogger(__name__) - - -class AllHostsConstraint(linearconstraints.BaseLinearConstraint): - """NoOp constraint. Passes all hosts.""" - - # The linear constraint should be formed as: - # coeff_vector * var_vector' - # where is ==, >, >=, <, <=, !=, etc. 
- - def __init__(self, variables, hosts, instance_uuids, request_spec, - filter_properties): - self.servicegroup_api = servicegroup.API() - [self.num_hosts, self.num_instances] = self._get_host_instance_nums( - hosts, instance_uuids, request_spec) - self._check_variables_size(variables) - - def _get_host_instance_nums(self, hosts, instance_uuids, request_spec): - """This method calculates number of hosts and instances.""" - num_hosts = len(hosts) - if instance_uuids: - num_instances = len(instance_uuids) - else: - num_instances = request_spec.get('num_instances', 1) - return [num_hosts, num_instances] - - def _check_variables_size(self, variables): - """This method checks the size of variable matirx.""" - # Supposed to be a by matrix. - if len(variables) != self.num_hosts: - raise ValueError(_('Variables row length should match' - 'number of hosts.')) - for row in variables: - if len(row) != self.num_instances: - raise ValueError(_('Variables column length should' - 'match number of instances.')) - return True - - def get_coefficient_vectors(self, variables, hosts, instance_uuids, - request_spec, filter_properties): - """Calculate the coeffivient vectors.""" - # Coefficients are 0 for active hosts and 1 otherwise - coefficient_vectors = [] - for host in hosts: - coefficient_vectors.append([0 for j in range(self.num_instances)]) - return coefficient_vectors - - def get_variable_vectors(self, variables, hosts, instance_uuids, - request_spec, filter_properties): - """Reorganize the variables.""" - # The variable_vectors[i][j] denotes the relationship between host[i] - # and instance[j]. - variable_vectors = [] - variable_vectors = [[variables[i][j] for j in - range(self.num_instances)] for i in range(self.num_hosts)] - return variable_vectors - - def get_operations(self, variables, hosts, instance_uuids, request_spec, - filter_properties): - """Set operations for each constraint function.""" - # Operations are '=='. 
- operations = [(lambda x: x == 0) for i in range(self.num_hosts)] - return operations diff --git a/nova/scheduler/solvers/linearconstraints/availability_zone_constraint.py b/nova/scheduler/solvers/linearconstraints/availability_zone_constraint.py deleted file mode 100644 index 3f1590d..0000000 --- a/nova/scheduler/solvers/linearconstraints/availability_zone_constraint.py +++ /dev/null @@ -1,95 +0,0 @@ -# Copyright (c) 2014 Cisco Systems Inc. -# All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. - -from oslo.config import cfg - -from nova.compute import api as compute -from nova import db -from nova.openstack.common import log as logging -from nova.scheduler.solvers import linearconstraints - -LOG = logging.getLogger(__name__) -CONF = cfg.CONF -CONF.import_opt('default_availability_zone', 'nova.availability_zones') - - -class AvailabilityZoneConstraint(linearconstraints.BaseLinearConstraint): - """To select only the hosts belonging to an availability zone. 
- """ - - def __init__(self, variables, hosts, instance_uuids, request_spec, - filter_properties): - self.compute_api = compute.API() - [self.num_hosts, self.num_instances] = self._get_host_instance_nums( - hosts, instance_uuids, request_spec) - - def _get_host_instance_nums(self, hosts, instance_uuids, request_spec): - """This method calculates number of hosts and instances.""" - num_hosts = len(hosts) - if instance_uuids: - num_instances = len(instance_uuids) - else: - num_instances = request_spec.get('num_instances', 1) - return [num_hosts, num_instances] - - # The linear constraint should be formed as: - # coeff_matrix * var_matrix' (operator) constant_vector - # where (operator) is ==, >, >=, <, <=, !=, etc. - # For convenience, the constant_vector is merged into left-hand-side, - # thus the right-hand-side is always 0. - - def get_coefficient_vectors(self, variables, hosts, instance_uuids, - request_spec, filter_properties): - # Coefficients are 0 for hosts in the availability zone, 1 for others - props = request_spec.get('instance_properties', {}) - availability_zone = props.get('availability_zone') - - coefficient_vectors = [] - for host in hosts: - if availability_zone: - context = filter_properties['context'].elevated() - metadata = db.aggregate_metadata_get_by_host(context, - host.host, key='availability_zone') - if 'availability_zone' in metadata: - if availability_zone in metadata['availability_zone']: - coefficient_vectors.append([0 for j in range( - self.num_instances)]) - else: - coefficient_vectors.append([1 for j in range( - self.num_instances)]) - elif availability_zone == CONF.default_availability_zone: - coefficient_vectors.append([0 for j in range( - self.num_instances)]) - else: - coefficient_vectors.append([1 for j in range( - self.num_instances)]) - else: - coefficient_vectors.append([0 for j in range( - self.num_instances)]) - return coefficient_vectors - - def get_variable_vectors(self, variables, hosts, instance_uuids, - request_spec, 
filter_properties):
-        # The variable_vectors[i,j] denotes the relationship between
-        # host[i] and instance[j].
-        variable_vectors = [[variables[i][j] for j in range(
-            self.num_instances)] for i in range(self.num_hosts)]
-        return variable_vectors
-
-    def get_operations(self, variables, hosts, instance_uuids, request_spec,
-                       filter_properties):
-        # Operations are '=='.
-        operations = [(lambda x: x == 0) for i in range(self.num_hosts)]
-        return operations
diff --git a/nova/scheduler/solvers/linearconstraints/different_host_constraint.py b/nova/scheduler/solvers/linearconstraints/different_host_constraint.py
deleted file mode 100644
index f0f0b56..0000000
--- a/nova/scheduler/solvers/linearconstraints/different_host_constraint.py
+++ /dev/null
@@ -1,50 +0,0 @@
-from oslo.config import cfg
-
-from nova.compute import api as compute
-from nova.openstack.common.gettextutils import _
-from nova.openstack.common import log as logging
-from nova.scheduler.solvers import linearconstraints
-
-LOG = logging.getLogger(__name__)
-
-class DifferentHostConstraint(linearconstraints.AffinityConstraint):
-    """Force selection of hosts different from those running a given set of instances."""
-
-    # The linear constraint should be formed as:
-    # coeff_matrix * var_matrix' (operator) constant_vector
-    # where (operator) is ==, >, >=, <, <=, !=, etc.
-    # For convenience, the constant_vector is merged into left-hand-side,
-    # thus the right-hand-side is always 0.
-
-    def get_coefficient_vectors(self, variables, hosts, instance_uuids, request_spec, filter_properties):
-        # Coefficients are 1 for same hosts and 0 for different hosts.
-        context = filter_properties['context']
-        scheduler_hints = filter_properties.get('scheduler_hints', {})
-        affinity_uuids = scheduler_hints.get('different_host', [])
-        if isinstance(affinity_uuids, basestring):
-            affinity_uuids = [affinity_uuids]
-        coefficient_vectors = []
-        for host in hosts:
-            if affinity_uuids:
-                if self.compute_api.get_all(context,
-                                            {'host': host.host,
-                                             'uuid': affinity_uuids,
-                                             'deleted': False}):
-                    coefficient_vectors.append([1 for j in range(self.num_instances)])
-                else:
-                    coefficient_vectors.append([0 for j in range(self.num_instances)])
-            else:
-                coefficient_vectors.append([0 for j in range(self.num_instances)])
-        return coefficient_vectors
-
-    def get_variable_vectors(self, variables, hosts, instance_uuids, request_spec, filter_properties):
-        # The variable_vectors[i,j] denotes the relationship between host[i] and instance[j].
-        variable_vectors = []
-        variable_vectors = [[variables[i][j] for j in range(self.num_instances)] for i in range(self.num_hosts)]
-        return variable_vectors
-
-    def get_operations(self, variables, hosts, instance_uuids, request_spec, filter_properties):
-        # Operations are '=='.
-        operations = [(lambda x: x == 0) for i in range(self.num_hosts)]
-        return operations
-
diff --git a/nova/scheduler/solvers/linearconstraints/io_ops_constraint.py b/nova/scheduler/solvers/linearconstraints/io_ops_constraint.py
deleted file mode 100644
index d32204b..0000000
--- a/nova/scheduler/solvers/linearconstraints/io_ops_constraint.py
+++ /dev/null
@@ -1,89 +0,0 @@
-# Copyright (c) 2014 Cisco Systems Inc.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from oslo.config import cfg
-
-from nova.compute import api as compute
-from nova.openstack.common.gettextutils import _
-from nova.openstack.common import log as logging
-from nova.scheduler.solvers import linearconstraints
-
-LOG = logging.getLogger(__name__)
-
-CONF = cfg.CONF
-CONF.import_opt('max_io_ops_per_host', 'nova.scheduler.filters.io_ops_filter')
-
-
-class IoOpsConstraint(linearconstraints.BaseLinearConstraint):
-    """A constraint to ensure only those hosts are selected whose number of
-    concurrent I/O operations is within a set threshold.
-    """
-
-    def __init__(self, variables, hosts, instance_uuids, request_spec,
-                 filter_properties):
-        self.compute_api = compute.API()
-        [self.num_hosts, self.num_instances] = self._get_host_instance_nums(
-            hosts, instance_uuids, request_spec)
-
-    def _get_host_instance_nums(self, hosts, instance_uuids, request_spec):
-        """This method calculates number of hosts and instances."""
-        num_hosts = len(hosts)
-        if instance_uuids:
-            num_instances = len(instance_uuids)
-        else:
-            num_instances = request_spec.get('num_instances', 1)
-        return [num_hosts, num_instances]
-
-    # The linear constraint should be formed as:
-    # coeff_matrix * var_matrix' (operator) constant_vector
-    # where (operator) is ==, >, >=, <, <=, !=, etc.
-    # For convenience, the constant_vector is merged into left-hand-side,
-    # thus the right-hand-side is always 0.
-
-    def get_coefficient_vectors(self, variables, hosts, instance_uuids,
-                                request_spec, filter_properties):
-        # Coefficients are 0 for hosts within the limit and 1 for other hosts.
- coefficient_vectors = [] - for host in hosts: - num_io_ops = host.num_io_ops - max_io_ops = CONF.max_io_ops_per_host - passes = num_io_ops < max_io_ops - if passes: - coefficient_vectors.append([0 for j in range( - self.num_instances)]) - else: - coefficient_vectors.append([1 for j in range( - self.num_instances)]) - LOG.debug(_("%(host)s fails I/O ops check: Max IOs per host " - "is set to %(max_io_ops)s"), - {'host': host, - 'max_io_ops': max_io_ops}) - - return coefficient_vectors - - def get_variable_vectors(self, variables, hosts, instance_uuids, - request_spec, filter_properties): - # The variable_vectors[i,j] denotes the relationship between - # host[i] and instance[j]. - variable_vectors = [] - variable_vectors = [[variables[i][j] for j in range( - self.num_instances)] for i in range(self.num_hosts)] - return variable_vectors - - def get_operations(self, variables, hosts, instance_uuids, request_spec, - filter_properties): - # Operations are '=='. - operations = [(lambda x: x == 0) for i in range(self.num_hosts)] - return operations diff --git a/nova/scheduler/solvers/linearconstraints/max_disk_allocation_constraint.py b/nova/scheduler/solvers/linearconstraints/max_disk_allocation_constraint.py deleted file mode 100644 index ea818fb..0000000 --- a/nova/scheduler/solvers/linearconstraints/max_disk_allocation_constraint.py +++ /dev/null @@ -1,85 +0,0 @@ -# Copyright (c) 2014 Cisco Systems Inc. -# All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the -# License for the specific language governing permissions and limitations -# under the License. - - -from oslo.config import cfg - -from nova.scheduler.solvers import linearconstraints - -CONF = cfg.CONF -CONF.import_opt('disk_allocation_ratio', 'nova.scheduler.filters.disk_filter') - - -class MaxDiskAllocationPerHostConstraint( - linearconstraints.ResourceAllocationConstraint): - """Constraint of the maximum total disk demand acceptable on each host.""" - - # The linear constraint should be formed as: - # coeff_vectors * var_vectors' (operator) (constants) - # where (operator) is ==, >, >=, <, <=, !=, etc. - # For convenience, the (constants) is merged into left-hand-side, - # thus the right-hand-side is 0. - - def get_coefficient_vectors(self, variables, hosts, instance_uuids, - request_spec, filter_properties): - # Give demand as coefficient for each variable and -supply as - # constant in each constraint. - demand = [self._get_required_disk_mb(filter_properties) - for j in range(self.num_instances)] - supply = [self._get_usable_disk_mb(hosts[i]) - for i in range(self.num_hosts)] - coefficient_vectors = [demand + [-supply[i]] - for i in range(self.num_hosts)] - return coefficient_vectors - - def get_variable_vectors(self, variables, hosts, instance_uuids, - request_spec, filter_properties): - # The variable_vectors[i,j] denotes the relationship between host[i] - # and instance[j]. - variable_vectors = [] - variable_vectors = [[variables[i][j] - for j in range(self.num_instances)] - + [1] for i in range(self.num_hosts)] - return variable_vectors - - def get_operations(self, variables, hosts, instance_uuids, request_spec, - filter_properties): - # Operations are '<='. - operations = [(lambda x: x <= 0) for i in range(self.num_hosts)] - return operations - - def _get_usable_disk_mb(self, host_state): - """This method returns the usable disk in mb for the given host. 
- Takes into account the disk allocation ratio (virtual disk to - physical disk allocation ratio) - """ - free_disk_mb = host_state.free_disk_mb - total_usable_disk_mb = host_state.total_usable_disk_gb * 1024 - disk_mb_limit = total_usable_disk_mb * CONF.disk_allocation_ratio - used_disk_mb = total_usable_disk_mb - free_disk_mb - usable_disk_mb = disk_mb_limit - used_disk_mb - return usable_disk_mb - - def _get_required_disk_mb(self, filter_properties): - """This method returns the required disk in mb from - the given filter_properties dictionary object. - """ - requested_disk_mb = 0 - instance_type = filter_properties.get('instance_type') - if instance_type is not None: - requested_disk_mb = (1024 * (instance_type.get('root_gb', 0) + - instance_type.get('ephemeral_gb', 0)) + - instance_type.get('swap', 0)) - return requested_disk_mb diff --git a/nova/scheduler/solvers/linearconstraints/max_ram_allocation_constraint.py b/nova/scheduler/solvers/linearconstraints/max_ram_allocation_constraint.py deleted file mode 100644 index 705bdf4..0000000 --- a/nova/scheduler/solvers/linearconstraints/max_ram_allocation_constraint.py +++ /dev/null @@ -1,85 +0,0 @@ -# Copyright (c) 2014 Cisco Systems Inc. -# All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. 
- - -from oslo.config import cfg - -from nova.scheduler.solvers import linearconstraints - -CONF = cfg.CONF -CONF.import_opt('ram_allocation_ratio', 'nova.scheduler.filters.ram_filter') - - -class MaxRamAllocationPerHostConstraint( - linearconstraints.ResourceAllocationConstraint): - """Constraint of the total ram demand acceptable on each host.""" - - # The linear constraint should be formed as: - # coeff_vectors * var_vectors' (operator) (constants) - # where (operator) is ==, >, >=, <, <=, !=, etc. - # For convenience, the (constants) is merged into left-hand-side, - # thus the right-hand-side is 0. - - def get_coefficient_vectors(self, variables, hosts, instance_uuids, - request_spec, filter_properties): - # Give demand as coefficient for each variable and -supply as constant - # in each constraint. - [num_hosts, num_instances] = self._get_host_instance_nums(hosts, - instance_uuids, request_spec) - demand = [self._get_required_memory_mb(filter_properties) - for j in range(self.num_instances)] - supply = [self._get_usable_memory_mb(hosts[i]) - for i in range(self.num_hosts)] - coefficient_vectors = [demand + [-supply[i]] - for i in range(self.num_hosts)] - return coefficient_vectors - - def get_variable_vectors(self, variables, hosts, instance_uuids, - request_spec, filter_properties): - # The variable_vectors[i,j] denotes the relationship between host[i] - # and instance[j]. - variable_vectors = [] - variable_vectors = [[variables[i][j] - for j in range(self.num_instances)] - + [1] for i in range(self.num_hosts)] - return variable_vectors - - def get_operations(self, variables, hosts, instance_uuids, request_spec, - filter_properties): - # Operations are '<='. - operations = [(lambda x: x <= 0) for i in range(self.num_hosts)] - return operations - - def _get_usable_memory_mb(self, host_state): - """This method returns the usable memory in mb for the given host. 
- Takes into account the ram allocation ratio (Virtual ram to - physical ram allocation ratio) - """ - free_ram_mb = host_state.free_ram_mb - total_usable_ram_mb = host_state.total_usable_ram_mb - memory_mb_limit = total_usable_ram_mb * CONF.ram_allocation_ratio - used_ram_mb = total_usable_ram_mb - free_ram_mb - usable_ram_mb = memory_mb_limit - used_ram_mb - return usable_ram_mb - - def _get_required_memory_mb(self, filter_properties): - """This method returns the required memory in mb from - the given filter_properties dictionary object. - """ - required_ram_mb = 0 - instance_type = filter_properties.get('instance_type') - if instance_type is not None: - required_ram_mb = instance_type.get('memory_mb', 0) - return required_ram_mb diff --git a/nova/scheduler/solvers/linearconstraints/max_vcpu_allocation_constraint.py b/nova/scheduler/solvers/linearconstraints/max_vcpu_allocation_constraint.py deleted file mode 100644 index fb901d4..0000000 --- a/nova/scheduler/solvers/linearconstraints/max_vcpu_allocation_constraint.py +++ /dev/null @@ -1,79 +0,0 @@ -# Copyright (c) 2014 Cisco Systems Inc. -# All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. 
- -from oslo.config import cfg - -from nova.scheduler.solvers import linearconstraints - -CONF = cfg.CONF -CONF.import_opt('cpu_allocation_ratio', 'nova.scheduler.filters.core_filter') - - -class MaxVcpuAllocationPerHostConstraint( - linearconstraints.ResourceAllocationConstraint): - """Constraint of the total vcpu demand acceptable on each host.""" - - # The linear constraint should be formed as: - # coeff_vectors * var_vectors' (operator) (constants) - # where (operator) is ==, >, >=, <, <=, !=, etc. - # For convenience, the (constants) is merged into left-hand-side, - # thus the right-hand-side is 0. - - def get_coefficient_vectors(self, variables, hosts, instance_uuids, - request_spec, filter_properties): - # Give demand as coefficient for each variable and -supply as constant - # in each constraint. - [num_hosts, num_instances] = self._get_host_instance_nums(hosts, - instance_uuids, request_spec) - demand = [self._get_required_vcpus(filter_properties) - for j in range(self.num_instances)] - supply = [self._get_usable_vcpus(hosts[i]) - for i in range(self.num_hosts)] - coefficient_vectors = [demand + [-supply[i]] - for i in range(self.num_hosts)] - return coefficient_vectors - - def get_variable_vectors(self, variables, hosts, instance_uuids, - request_spec, filter_properties): - # The variable_vectors[i,j] denotes the relationship between host[i] - # and instance[j]. - variable_vectors = [] - variable_vectors = [[variables[i][j] - for j in range(self.num_instances)] - + [1] for i in range(self.num_hosts)] - return variable_vectors - - def get_operations(self, variables, hosts, instance_uuids, request_spec, - filter_properties): - # Operations are '<='. - operations = [(lambda x: x <= 0) for i in range(self.num_hosts)] - return operations - - def _get_usable_vcpus(self, host_state): - """This method returns the usable vcpus for the given host. 
- """ - vcpus_total = host_state.vcpus_total * CONF.cpu_allocation_ratio - usable_vcpus = vcpus_total - host_state.vcpus_used - return usable_vcpus - - def _get_required_vcpus(self, filter_properties): - """This method returns the required vcpus from - the given filter_properties dictionary object. - """ - required_vcpus = 1 - instance_type = filter_properties.get('instance_type') - if instance_type is not None: - required_vcpus = instance_type.get('vcpus', 1) - return required_vcpus diff --git a/nova/scheduler/solvers/linearconstraints/non_trivial_solution_constraint.py b/nova/scheduler/solvers/linearconstraints/non_trivial_solution_constraint.py deleted file mode 100644 index b426e62..0000000 --- a/nova/scheduler/solvers/linearconstraints/non_trivial_solution_constraint.py +++ /dev/null @@ -1,68 +0,0 @@ -# Copyright (c) 2014 Cisco Systems Inc. -# All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. - -from nova.openstack.common import log as logging -from nova.scheduler.solvers import linearconstraints - -LOG = logging.getLogger(__name__) - - -class NonTrivialSolutionConstraint(linearconstraints.BaseLinearConstraint): - """Constraint that forces each instance to be placed - at exactly one host, so as to avoid trivial solutions. - """ - - # The linear constraint should be formed as: - # coeff_matrix * var_matrix' (operator) (constants) - # where (operator) is ==, >, >=, <, <=, !=, etc. 
- # For convenience, the (constants) is merged into left-hand-side, - # thus the right-hand-side is 0. - - def __init__(self, variables, hosts, instance_uuids, request_spec, - filter_properties): - [self.num_hosts, self.num_instances] = self._get_host_instance_nums( - hosts, instance_uuids, request_spec) - - def _get_host_instance_nums(self, hosts, instance_uuids, request_spec): - """This method calculates number of hosts and instances.""" - num_hosts = len(hosts) - if instance_uuids: - num_instances = len(instance_uuids) - else: - num_instances = request_spec.get('num_instances', 1) - return [num_hosts, num_instances] - - def get_coefficient_vectors(self, variables, hosts, instance_uuids, - request_spec, filter_properties): - # The coefficient for each variable is 1 and - # constant in each constraint is (-1). - coefficient_vectors = [[1 for i in range(self.num_hosts)] + [-1] - for j in range(self.num_instances)] - return coefficient_vectors - - def get_variable_vectors(self, variables, hosts, instance_uuids, - request_spec, filter_properties): - # The variable_matrix[i,j] denotes the relationship between - # instance[i] and host[j] - variable_vectors = [] - variable_vectors = [[variables[i][j] for i in range(self.num_hosts)] + - [1] for j in range(self.num_instances)] - return variable_vectors - - def get_operations(self, variables, hosts, instance_uuids, request_spec, - filter_properties): - # Operations are '=='. - operations = [(lambda x: x == 0) for j in range(self.num_instances)] - return operations diff --git a/nova/scheduler/solvers/linearconstraints/num_instances_per_host_constraint.py b/nova/scheduler/solvers/linearconstraints/num_instances_per_host_constraint.py deleted file mode 100644 index c3d14b9..0000000 --- a/nova/scheduler/solvers/linearconstraints/num_instances_per_host_constraint.py +++ /dev/null @@ -1,87 +0,0 @@ -# Copyright (c) 2014 Cisco Systems Inc. -# All Rights Reserved. 
-# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. - -from oslo.config import cfg - -from nova.openstack.common import log as logging -from nova.scheduler.solvers import linearconstraints - -CONF = cfg.CONF -CONF.import_opt("max_instances_per_host", "nova.scheduler.filters.num_instances_filter") - -LOG = logging.getLogger(__name__) - - -class NumInstancesPerHostConstraint(linearconstraints.BaseLinearConstraint): - """Constraint that specifies the maximum number of instances that - each host can launch. - """ - - # The linear constraint should be formed as: - # coeff_matrix * var_matrix' (operator) (constants) - # where (operator) is ==, >, >=, <, <=, !=, etc. - # For convenience, the (constants) is merged into left-hand-side, - # thus the right-hand-side is 0. 
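The comment block above describes the general form these constraints share: the constant is folded into the left-hand side, so each row reads `coeff_vector * var_vector' (operator) 0`. A minimal pure-Python sketch (the numbers and helper names are illustrative, not part of the deleted code) of how this form encodes a per-host instance cap:

```python
# Sketch of one constraint row for a host: coefficients are 1 per new
# instance, with the remaining capacity merged in as a negative constant.

def build_instance_cap_row(num_instances, num_running, max_per_host):
    """Return the coefficient row for one host, constant folded into LHS."""
    usable = max_per_host - num_running          # remaining capacity
    return [1] * num_instances + [-usable]

def evaluate(coeffs, placement):
    """Dot the row with the 0/1 placement decisions padded with a fixed 1."""
    vector = placement + [1]                     # the 1 multiplies -usable
    return sum(c * v for c, v in zip(coeffs, vector))

# Host running 8 of an allowed 10 instances, request for 3 new ones.
coeffs = build_instance_cap_row(num_instances=3, num_running=8, max_per_host=10)
assert coeffs == [1, 1, 1, -2]
# Placing 2 instances keeps the host within its cap: LHS <= 0 holds.
assert evaluate(coeffs, [1, 1, 0]) <= 0
# Placing all 3 exceeds the remaining capacity of 2: LHS > 0, infeasible.
assert evaluate(coeffs, [1, 1, 1]) > 0
```

With the `<=` operation returned by `get_operations`, the solver can only choose placements for which this left-hand side is non-positive.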
-
-    def __init__(self, variables, hosts, instance_uuids, request_spec,
-                 filter_properties):
-        [self.num_hosts, self.num_instances] = self._get_host_instance_nums(
-            hosts, instance_uuids, request_spec)
-
-    def _get_host_instance_nums(self, hosts, instance_uuids, request_spec):
-        """This method calculates number of hosts and instances."""
-        num_hosts = len(hosts)
-        if instance_uuids:
-            num_instances = len(instance_uuids)
-        else:
-            num_instances = request_spec.get('num_instances', 1)
-        return [num_hosts, num_instances]
-
-    def get_coefficient_vectors(self, variables, hosts, instance_uuids,
-                                request_spec, filter_properties):
-        """Calculate the coefficient vectors."""
-        # The coefficient for each variable is 1 and the constant in
-        # each constraint is -(usable instance count of the host)
-        supply = [self._get_usable_instance_num(hosts[i])
-                  for i in range(self.num_hosts)]
-        coefficient_matrix = [[1 for j in range(self.num_instances)] +
-                              [-supply[i]] for i in range(self.num_hosts)]
-        return coefficient_matrix
-
-    def get_variable_vectors(self, variables, hosts, instance_uuids,
-                             request_spec, filter_properties):
-        """Reorganize the variables."""
-        # The variable_matrix[i,j] denotes the relationship between
-        # host[i] and instance[j].
-        variable_matrix = []
-        variable_matrix = [[variables[i][j] for j in range(
-            self.num_instances)] + [1] for i in range(self.num_hosts)]
-        return variable_matrix
-
-    def get_operations(self, variables, hosts, instance_uuids, request_spec,
-                       filter_properties):
-        """Set operations for each constraint function."""
-        # Operations are '<='.
-        operations = [(lambda x: x <= 0) for i in range(self.num_hosts)]
-        return operations
-
-    def _get_usable_instance_num(self, host_state):
-        """This method returns the usable number of instances
-        for the given host.
- """ - num_instances = host_state.num_instances - max_instances_allowed = CONF.max_instances_per_host - usable_instance_num = max_instances_allowed - num_instances - return usable_instance_num diff --git a/nova/scheduler/solvers/linearconstraints/num_networks_per_host_constraint.py b/nova/scheduler/solvers/linearconstraints/num_networks_per_host_constraint.py deleted file mode 100644 index fe5a6ae..0000000 --- a/nova/scheduler/solvers/linearconstraints/num_networks_per_host_constraint.py +++ /dev/null @@ -1,90 +0,0 @@ -# Copyright (c) 2014 Cisco Systems Inc. -# All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. - -from oslo.config import cfg - -from nova.openstack.common import log as logging -from nova.scheduler.solvers import linearconstraints - -max_host_networks_opts = [ - cfg.IntOpt('max_networks_per_host', - default=4094, - help='The maximum number of networks allowed in a host') - ] - -CONF = cfg.CONF -CONF.register_opts(max_host_networks_opts) - -LOG = logging.getLogger(__name__) - - -class NumNetworksPerHostConstraint( - linearconstraints.BaseLinearConstraint): - """Constraint that specifies the maximum number of networks that - each host can launch. - """ - - # The linear constraint should be formed as: - # coeff_matrix * var_matrix' (operator) (constants) - # where (operator) is ==, >, >=, <, <=, !=, etc. - # For convenience, the (constants) is merged into left-hand-side, - # thus the right-hand-side is 0. 
-
-    def get_coefficient_vectors(self, variables, hosts, instance_uuids,
-                                request_spec, filter_properties):
-        """Calculate the coefficient vectors."""
-        # The coefficient for each variable is the number of networks the
-        # request would add to the host minus the host's usable network count
-        usable_network_nums = [self._get_usable_network_num(hosts[i])
-                               for i in range(self.num_hosts)]
-        requested_networks = filter_properties.get('requested_networks', None)
-        num_new_networks = [0 for i in range(self.num_hosts)]
-        for i in range(self.num_hosts):
-            for network_id, requested_ip, port_id in requested_networks:
-                if network_id:
-                    if network_id not in hosts[i].networks:
-                        num_new_networks[i] += 1
-
-        coefficient_vectors = [
-            [num_new_networks[i] - usable_network_nums[i]
-             for j in range(self.num_instances)]
-            for i in range(self.num_hosts)]
-        return coefficient_vectors
-
-    def get_variable_vectors(self, variables, hosts, instance_uuids,
-                             request_spec, filter_properties):
-        """Reorganize the variables."""
-        # The variable_matrix[i,j] denotes the relationship between
-        # host[i] and instance[j].
-        variable_vectors = []
-        variable_vectors = [[variables[i][j] for j in range(
-            self.num_instances)] for i in range(self.num_hosts)]
-        return variable_vectors
-
-    def get_operations(self, variables, hosts, instance_uuids, request_spec,
-                       filter_properties):
-        """Set operations for each constraint function."""
-        # Operations are '<='.
-        operations = [(lambda x: x <= 0) for i in range(self.num_hosts)]
-        return operations
-
-    def _get_usable_network_num(self, host_state):
-        """This method returns the usable number of networks
-        for the given host.
- """ - num_networks = len(host_state.networks) - max_networks_allowed = CONF.max_networks_per_host - usable_network_num = max_networks_allowed - num_networks - return usable_network_num diff --git a/nova/scheduler/solvers/linearconstraints/num_networks_per_rack_constraint.py b/nova/scheduler/solvers/linearconstraints/num_networks_per_rack_constraint.py deleted file mode 100644 index 9881c59..0000000 --- a/nova/scheduler/solvers/linearconstraints/num_networks_per_rack_constraint.py +++ /dev/null @@ -1,87 +0,0 @@ -# Copyright (c) 2014 Cisco Systems Inc. -# All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. - -from oslo.config import cfg - -from nova.openstack.common import log as logging -from nova.scheduler.solvers import linearconstraints - -max_rack_networks_opts = [ - cfg.IntOpt('max_networks_per_rack', - default=4094, - help='The maximum number of networks allowed in a rack') - ] - -CONF = cfg.CONF -CONF.register_opts(max_rack_networks_opts) - -LOG = logging.getLogger(__name__) - - -class NumNetworksPerRackConstraint( - linearconstraints.BaseLinearConstraint): - """Constraint that specifies the maximum number of networks that - each rack can launch. - """ - - # The linear constraint should be formed as: - # coeff_matrix * var_matrix' (operator) (constants) - # where (operator) is ==, >, >=, <, <=, !=, etc. - # For convenience, the (constants) is merged into left-hand-side, - # thus the right-hand-side is 0. 
-
-    def get_coefficient_vectors(self, variables, hosts, instance_uuids,
-                                request_spec, filter_properties):
-        """Calculate the coefficient vectors."""
-        # Hosts whose rack would exceed the maximum number of networks
-        # get coefficient -1, all other hosts get coefficient 1
-        requested_networks = filter_properties.get('requested_networks', None)
-        max_networks_allowed = CONF.max_networks_per_rack
-        host_coeffs = [1 for i in range(self.num_hosts)]
-        for i in range(self.num_hosts):
-            rack_networks = hosts[i].rack_networks
-            for rack in rack_networks.keys():
-                this_rack_networks = rack_networks[rack]
-                num_networks = len(this_rack_networks)
-                num_new_networks = 0
-                for network_id, requested_ip, port_id in requested_networks:
-                    if network_id:
-                        if network_id not in this_rack_networks:
-                            num_new_networks += 1
-                if num_networks + num_new_networks > max_networks_allowed:
-                    host_coeffs[i] = -1
-                    break
-
-        coefficient_vectors = [
-            [host_coeffs[i] for j in range(self.num_instances)]
-            for i in range(self.num_hosts)]
-        return coefficient_vectors
-
-    def get_variable_vectors(self, variables, hosts, instance_uuids,
-                             request_spec, filter_properties):
-        """Reorganize the variables."""
-        # The variable_matrix[i,j] denotes the relationship between
-        # host[i] and instance[j].
-        variable_vectors = []
-        variable_vectors = [[variables[i][j] for j in range(
-            self.num_instances)] for i in range(self.num_hosts)]
-        return variable_vectors
-
-    def get_operations(self, variables, hosts, instance_uuids, request_spec,
-                       filter_properties):
-        """Set operations for each constraint function."""
-        # Operations are '>='.
- operations = [(lambda x: x >= 0) for i in range(self.num_hosts)] - return operations diff --git a/nova/scheduler/solvers/linearconstraints/same_host_constraint.py b/nova/scheduler/solvers/linearconstraints/same_host_constraint.py deleted file mode 100644 index 426479e..0000000 --- a/nova/scheduler/solvers/linearconstraints/same_host_constraint.py +++ /dev/null @@ -1,49 +0,0 @@ -from oslo.config import cfg - -from nova.compute import api as compute -from nova.openstack.common.gettextutils import _ -from nova.openstack.common import log as logging -from nova.scheduler.solvers import linearconstraints - -LOG = logging.getLogger(__name__) - -class SameHostConstraint(linearconstraints.AffinityConstraint): - """Force to select hosts which are same as a set of given instances'.""" - - # The linear constraint should be formed as: - # coeff_matrix * var_matrix' (operator) constant_vector - # where (operator) is ==, >, >=, <, <=, !=, etc. - # For convenience, the constant_vector is merged into left-hand-side, - # thus the right-hand-side is always 0. - - def get_coefficient_vectors(self,variables,hosts,instance_uuids,request_spec,filter_properties): - # Coefficients are 0 for same hosts and 1 for different hosts. 
- context = filter_properties['context'] - scheduler_hints = filter_properties.get('scheduler_hints', {}) - affinity_uuids = scheduler_hints.get('same_host', []) - if isinstance(affinity_uuids, basestring): - affinity_uuids = [affinity_uuids] - coefficient_vectors = [] - for host in hosts: - if affinity_uuids: - if self.compute_api.get_all(context, - {'host':host.host, - 'uuid':affinity_uuids, - 'deleted':False}): - coefficient_vectors.append([0 for j in range(self.num_instances)]) - else: - coefficient_vectors.append([1 for j in range(self.num_instances)]) - else: - coefficient_vectors.append([0 for j in range(self.num_instances)]) - return coefficient_vectors - - def get_variable_vectors(self,variables,hosts,instance_uuids,request_spec,filter_properties): - # The variable_vectors[i,j] denotes the relationship between host[i] and instance[j]. - variable_vectors = [] - variable_vectors = [[variables[i][j] for j in range(self.num_instances)] for i in range(self.num_hosts)] - return variable_vectors - - def get_operations(self,variables,hosts,instance_uuids,request_spec,filter_properties): - # Operations are '=='. - operations = [(lambda x: x==0) for i in range(self.num_hosts)] - return operations diff --git a/nova/scheduler/solvers/linearconstraints/server_group_affinity_constraint.py b/nova/scheduler/solvers/linearconstraints/server_group_affinity_constraint.py deleted file mode 100644 index bdf730f..0000000 --- a/nova/scheduler/solvers/linearconstraints/server_group_affinity_constraint.py +++ /dev/null @@ -1,70 +0,0 @@ -# Copyright (c) 2014 Cisco Systems, Inc. -# All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. 
You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. - -from nova.openstack.common import log as logging -from nova.scheduler.solvers import linearconstraints - -LOG = logging.getLogger(__name__) - - -class ServerGroupAffinityConstraint(linearconstraints.AffinityConstraint): - """Force to select hosts which host given server group.""" - - # The linear constraint should be formed as: - # coeff_matrix * var_matrix' (operator) constant_vector - # where (operator) is ==, >, >=, <, <=, !=, etc. - # For convenience, the constant_vector is merged into left-hand-side, - # thus the right-hand-side is always 0. - - def __init__(self, *args, **kwargs): - super(ServerGroupAffinityConstraint, self).__init__(*args, **kwargs) - self.policy_name = 'affinity' - - def get_coefficient_vectors(self, variables, hosts, instance_uuids, - request_spec, filter_properties): - coefficient_vectors = [] - policies = filter_properties.get('group_policies', []) - if self.policy_name not in policies: - coefficient_vectors = [[0 for j in range(self.num_instances)] - for i in range(self.num_hosts)] - return coefficient_vectors - - group_hosts = filter_properties.get('group_hosts') - if not group_hosts: - # when the group is empty, we need to place all the instances in - # a same host. 
- coefficient_vectors = [[1 - self.num_instances] + - [1 for j in range(self.num_instances - 1)] for - i in range(self.num_hosts)] - for host in hosts: - if host.host in group_hosts: - coefficient_vectors.append([0 for j - in range(self.num_instances)]) - else: - coefficient_vectors.append([1 for j - in range(self.num_instances)]) - return coefficient_vectors - - def get_variable_vectors(self, variables, hosts, instance_uuids, - request_spec, filter_properties): - variable_vectors = [] - variable_vectors = [[variables[i][j] for j in range( - self.num_instances)] for i in range(self.num_hosts)] - return variable_vectors - - def get_operations(self, variables, hosts, instance_uuids, - request_spec, filter_properties): - operations = [(lambda x: x == 0) for i in range(self.num_hosts)] - return operations diff --git a/nova/scheduler/solvers/linearconstraints/server_group_anti_affinity_constraint.py b/nova/scheduler/solvers/linearconstraints/server_group_anti_affinity_constraint.py deleted file mode 100644 index fb62c6c..0000000 --- a/nova/scheduler/solvers/linearconstraints/server_group_anti_affinity_constraint.py +++ /dev/null @@ -1,65 +0,0 @@ -# Copyright (c) 2014 Cisco Systems, Inc. -# All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. 
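The empty-group affinity row built above, `[1 - n, 1, ..., 1]`, encodes an all-or-nothing placement: the constraint `sum(coeff * x) == 0` holds only when a host receives either all of the requested instances or none of them. A small sketch of that arithmetic (function names here are illustrative):

```python
# Sketch of the empty-group affinity row: for n instances the row is
# [1 - n, 1, ..., 1], and sum(coeff * x) == 0 is satisfiable only by the
# all-zeros or all-ones placement vectors.
def affinity_row(num_instances):
    return [1 - num_instances] + [1] * (num_instances - 1)

def satisfied(coeffs, placement):
    return sum(c * x for c, x in zip(coeffs, placement)) == 0

row = affinity_row(3)                     # [-2, 1, 1]
all_on_host = satisfied(row, [1, 1, 1])   # -2 + 1 + 1 == 0 -> True
none_on_host = satisfied(row, [0, 0, 0])  # 0 == 0 -> True
split = satisfied(row, [1, 1, 0])         # -2 + 1 == -1 -> False
```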
- -from nova.openstack.common import log as logging -from nova.scheduler.solvers import linearconstraints - -LOG = logging.getLogger(__name__) - - -class ServerGroupAntiAffinityConstraint(linearconstraints.AffinityConstraint): - """Force to select hosts which host given server group.""" - - # The linear constraint should be formed as: - # coeff_matrix * var_matrix' (operator) constant_vector - # where (operator) is ==, >, >=, <, <=, !=, etc. - # For convenience, the constant_vector is merged into left-hand-side, - # thus the right-hand-side is always 0. - - def __init__(self, *args, **kwargs): - super(ServerGroupAntiAffinityConstraint, self).__init__( - *args, **kwargs) - self.policy_name = 'anti-affinity' - - def get_coefficient_vectors(self, variables, hosts, instance_uuids, - request_spec, filter_properties): - coefficient_vectors = [] - policies = filter_properties.get('group_policies', []) - if self.policy_name not in policies: - coefficient_vectors = [[0 for j in range(self.num_instances)] + [0] - for i in range(self.num_hosts)] - return coefficient_vectors - - group_hosts = filter_properties.get('group_hosts') - for host in hosts: - if host.host in group_hosts: - coefficient_vectors.append([1 for j - in range(self.num_instances)] + [1]) - else: - coefficient_vectors.append([1 for j - in range(self.num_instances)] + [0]) - return coefficient_vectors - - def get_variable_vectors(self, variables, hosts, instance_uuids, - request_spec, filter_properties): - variable_vectors = [] - variable_vectors = [[variables[i][j] for j in range( - self.num_instances)] + [1] for i in range(self.num_hosts)] - return variable_vectors - - def get_operations(self, variables, hosts, instance_uuids, - request_spec, filter_properties): - operations = [(lambda x: x <= 1) for i in range(self.num_hosts)] - return operations diff --git a/nova/scheduler/solvers/pluggable_hosts_pulp_solver.py b/nova/scheduler/solvers/pluggable_hosts_pulp_solver.py deleted file mode 100644 index 
637d644..0000000 --- a/nova/scheduler/solvers/pluggable_hosts_pulp_solver.py +++ /dev/null @@ -1,122 +0,0 @@ -# Copyright (c) 2014 Cisco Systems, Inc. -# All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. - -from pulp import constants -from pulp import pulp - -from nova.openstack.common.gettextutils import _ -from nova.openstack.common import log as logging -from nova.scheduler import solvers as scheduler_solver - -LOG = logging.getLogger(__name__) - - -class HostsPulpSolver(scheduler_solver.BaseHostSolver): - """A LP based pluggable LP solver implemented using PULP modeler.""" - - def __init__(self): - self.cost_classes = self._get_cost_classes() - self.constraint_classes = self._get_constraint_classes() - self.cost_weights = self._get_cost_weights() - - def host_solve(self, hosts, instance_uuids, request_spec, - filter_properties): - """This method returns a list of tuples - (host, instance_uuid) - that are returned by the solver. Here the assumption is that - all instance_uuids have the same requirement as specified in - filter_properties. - """ - host_instance_tuples_list = [] - - if instance_uuids: - num_instances = len(instance_uuids) - else: - num_instances = request_spec.get('num_instances', 1) - #Setting a unset uuid string for each instance. 
- instance_uuids = ['unset_uuid' + str(i) - for i in xrange(num_instances)] - - num_hosts = len(hosts) - - LOG.debug(_("All Hosts: %s") % [h.host for h in hosts]) - for host in hosts: - LOG.debug(_("Host state: %s") % host) - - # Create dictionaries mapping host/instance IDs to hosts/instances. - host_ids = ['Host' + str(i) for i in range(num_hosts)] - host_id_dict = dict(zip(host_ids, hosts)) - instance_ids = ['Instance' + str(i) for i in range(num_instances)] - instance_id_dict = dict(zip(instance_ids, instance_uuids)) - - # Create the 'prob' variable to contain the problem data. - prob = pulp.LpProblem("Host Instance Scheduler Problem", - constants.LpMinimize) - - # Create the 'variables' matrix to contain the referenced variables. - variables = [[pulp.LpVariable("IA" + "_Host" + str(i) + "_Instance" + - str(j), 0, 1, constants.LpInteger) for j in - range(num_instances)] for i in range(num_hosts)] - - # Get costs and constraints and formulate the linear problem. - self.cost_objects = [cost() for cost in self.cost_classes] - self.constraint_objects = [constraint(variables, hosts, - instance_uuids, request_spec, filter_properties) - for constraint in self.constraint_classes] - - costs = [[0 for j in range(num_instances)] for i in range(num_hosts)] - for cost_object in self.cost_objects: - cost = cost_object.get_cost_matrix(hosts, instance_uuids, - request_spec, filter_properties) - cost = cost_object.normalize_cost_matrix(cost, 0.0, 1.0) - weight = float(self.cost_weights[cost_object.__class__.__name__]) - costs = [[costs[i][j] + weight * cost[i][j] - for j in range(num_instances)] for i in range(num_hosts)] - prob += (pulp.lpSum([costs[i][j] * variables[i][j] - for i in range(num_hosts) for j in range(num_instances)]), - "Sum_of_Host_Instance_Scheduling_Costs") - - for constraint_object in self.constraint_objects: - coefficient_vectors = constraint_object.get_coefficient_vectors( - variables, hosts, instance_uuids, - request_spec, filter_properties) - 
variable_vectors = constraint_object.get_variable_vectors( - variables, hosts, instance_uuids, - request_spec, filter_properties) - operations = constraint_object.get_operations( - variables, hosts, instance_uuids, - request_spec, filter_properties) - for i in range(len(operations)): - operation = operations[i] - len_vector = len(variable_vectors[i]) - prob += (operation(pulp.lpSum([coefficient_vectors[i][j] - * variable_vectors[i][j] for j in range(len_vector)])), - "Costraint_Name_%s" % constraint_object.__class__.__name__ - + "_No._%s" % i) - - # The problem is solved using PULP's choice of Solver. - prob.solve() - - # Create host-instance tuples from the solutions. - if pulp.LpStatus[prob.status] == 'Optimal': - for v in prob.variables(): - if v.name.startswith('IA'): - (host_id, instance_id) = v.name.lstrip('IA').lstrip( - '_').split('_') - if v.varValue == 1.0: - host_instance_tuples_list.append( - (host_id_dict[host_id], - instance_id_dict[instance_id])) - - return host_instance_tuples_list diff --git a/nova/scheduler/solvers/pulp_solver.py b/nova/scheduler/solvers/pulp_solver.py new file mode 100644 index 0000000..7333243 --- /dev/null +++ b/nova/scheduler/solvers/pulp_solver.py @@ -0,0 +1,220 @@ +# Copyright (c) 2014 Cisco Systems, Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
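The removed `HostsPulpSolver` above normalized each plugged-in cost matrix into [0, 1] and summed the matrices element-wise under configured weights. A stdlib sketch of that aggregation step, with hypothetical cost values and weights:

```python
# Sketch of the cost aggregation in the removed HostsPulpSolver: each
# cost matrix is normalized to [0, 1], scaled by its configured weight,
# and summed element-wise. All numbers below are made up.
def normalize(matrix, lo=0.0, hi=1.0):
    flat = [v for row in matrix for v in row]
    mn, mx = min(flat), max(flat)
    span = (mx - mn) or 1.0  # avoid division by zero for flat matrices
    return [[lo + (hi - lo) * (v - mn) / span for v in row]
            for row in matrix]

ram_cost = [[512.0, 1024.0], [2048.0, 4096.0]]   # hosts x instances
disk_cost = [[10.0, 20.0], [40.0, 80.0]]
weights = {'ram': 1.0, 'disk': 0.5}

ram_n, disk_n = normalize(ram_cost), normalize(disk_cost)
total_cost = [[weights['ram'] * r + weights['disk'] * d
               for r, d in zip(rrow, drow)]
              for rrow, drow in zip(ram_n, disk_n)]
# total_cost[0][0] == 0.0, total_cost[1][1] == 1.5
```

The weighted sum then becomes the LP objective that PULP minimizes over the placement variables.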
+ +from pulp import constants +from pulp import pulp +from pulp import solvers as pulp_solver_classes + +from oslo.config import cfg + +from nova.openstack.common.gettextutils import _ +from nova.openstack.common import log as logging +from nova.scheduler import solvers as scheduler_solver + +pulp_solver_opts = [ + cfg.IntOpt('pulp_solver_timeout_seconds', + default=20, + help='How much time in seconds is allowed for solvers to ' + 'solve the scheduling problem. If this time limit ' + 'is exceeded the solver will be stopped.'), +] + +CONF = cfg.CONF +CONF.register_opts(pulp_solver_opts, group='solver_scheduler') + +LOG = logging.getLogger(__name__) + + +class PulpSolver(scheduler_solver.BaseHostSolver): + """A LP based pluggable LP solver implemented using PULP modeler.""" + + def __init__(self): + super(PulpSolver, self).__init__() + self.cost_classes = self._get_cost_classes() + self.constraint_classes = self._get_constraint_classes() + + def _get_cost_matrix(self, hosts, filter_properties): + num_hosts = len(hosts) + num_instances = filter_properties.get('num_instances', 1) + solver_cache = filter_properties['solver_cache'] + # initialize cost matrix + cost_matrix = [[0 for j in xrange(num_instances + 1)] + for i in xrange(num_hosts)] + solver_cache['cost_matrix'] = cost_matrix + cost_objects = [cost() for cost in self.cost_classes] + cost_objects.sort(key=lambda cost: cost.precedence) + precedence_level = 0 + for cost_object in cost_objects: + if cost_object.precedence > precedence_level: + # update cost matrix in the solver cache + solver_cache['cost_matrix'] = cost_matrix + precedence_level = cost_object.precedence + cost_multiplier = cost_object.cost_multiplier() + this_cost_mat = cost_object.get_extended_cost_matrix(hosts, + filter_properties) + if not this_cost_mat: + continue + cost_matrix = [[cost_matrix[i][j] + + this_cost_mat[i][j] * cost_multiplier + for j in xrange(num_instances + 1)] + for i in xrange(num_hosts)] + # update cost matrix in the solver 
cache + solver_cache['cost_matrix'] = cost_matrix + + return cost_matrix + + def _get_constraint_matrix(self, hosts, filter_properties): + num_hosts = len(hosts) + num_instances = filter_properties.get('num_instances', 1) + solver_cache = filter_properties['solver_cache'] + # initialize constraint_matrix + constraint_matrix = [[True for j in xrange(num_instances + 1)] + for i in xrange(num_hosts)] + solver_cache['constraint_matrix'] = constraint_matrix + constraint_objects = [cons() for cons in self.constraint_classes] + constraint_objects.sort(key=lambda cons: cons.precedence) + precedence_level = 0 + for constraint_object in constraint_objects: + if constraint_object.precedence > precedence_level: + # update constraint matrix in the solver cache + solver_cache['constraint_matrix'] = constraint_matrix + precedence_level = constraint_object.precedence + this_cons_mat = constraint_object.get_constraint_matrix(hosts, + filter_properties) + if not this_cons_mat: + continue + for i in xrange(num_hosts): + constraint_matrix[i][1:] = [constraint_matrix[i][j + 1] & + this_cons_mat[i][j] for j in xrange(num_instances)] + # update constraint matrix in the solver cache + solver_cache['constraint_matrix'] = constraint_matrix + + return constraint_matrix + + def _adjust_cost_matrix(self, cost_matrix): + """Modify cost matrix to fit the optimization problem.""" + new_cost_matrix = cost_matrix + if not cost_matrix: + return new_cost_matrix + first_column = [row[0] for row in cost_matrix] + last_column = [row[-1] for row in cost_matrix] + if sum(first_column) < sum(last_column): + offset = min(first_column) + sign = 1 + else: + offset = max(first_column) + sign = -1 + for i in xrange(len(cost_matrix)): + for j in xrange(len(cost_matrix[i])): + new_cost_matrix[i][j] = sign * ( + (cost_matrix[i][j] - offset) ** 2) + return new_cost_matrix + + def solve(self, hosts, filter_properties): + """This method returns a list of tuples - (host, instance_uuid) + that are returned by the 
solver. Here the assumption is that + all instance_uuids have the same requirement as specified in + filter_properties. + """ + host_instance_combinations = [] + + num_instances = filter_properties['num_instances'] + num_hosts = len(hosts) + + instance_uuids = filter_properties.get('instance_uuids') or [ + '(unknown_uuid)' + str(i) for i in xrange(num_instances)] + + filter_properties.setdefault('solver_cache', {}) + filter_properties['solver_cache'].update( + {'cost_matrix': [], + 'constraint_matrix': []}) + + cost_matrix = self._get_cost_matrix(hosts, filter_properties) + cost_matrix = self._adjust_cost_matrix(cost_matrix) + constraint_matrix = self._get_constraint_matrix(hosts, + filter_properties) + + # Create dictionaries mapping temporary host/instance keys to + # hosts/instance_uuids. These temporary keys are to be used in the + # solving process since we need a convention of lp variable names. + host_keys = ['Host' + str(i) for i in xrange(num_hosts)] + host_key_map = dict(zip(host_keys, hosts)) + instance_num_keys = ['InstanceNum' + str(i) for + i in xrange(num_instances + 1)] + instance_num_key_map = dict(zip(instance_num_keys, + xrange(num_instances + 1))) + + # create the pulp variables + variable_matrix = [ + [pulp.LpVariable('HI_' + host_key + '_' + instance_num_key, + 0, 1, constants.LpInteger) + for instance_num_key in instance_num_keys] + for host_key in host_keys] + + # create the 'prob' variable to contain the problem data.
+ prob = pulp.LpProblem("Host Instance Scheduler Problem", + constants.LpMinimize) + + # add cost function to pulp solver + cost_variables = [variable_matrix[i][j] for i in xrange(num_hosts) + for j in xrange(num_instances + 1)] + cost_coefficients = [cost_matrix[i][j] for i in xrange(num_hosts) + for j in xrange(num_instances + 1)] + prob += (pulp.lpSum([cost_coefficients[i] * cost_variables[i] + for i in xrange(len(cost_variables))]), "Sum_Costs") + + # add constraints to pulp solver + for i in xrange(num_hosts): + for j in xrange(num_instances + 1): + if constraint_matrix[i][j] is False: + prob += (variable_matrix[i][j] == 0, + "Cons_Host_%s" % i + "_NumInst_%s" % j) + + # add additional constraints to ensure the problem is valid + # (1) non-trivial solution: number of all instances == that requested + prob += (pulp.lpSum([variable_matrix[i][j] * j for i in + xrange(num_hosts) for j in xrange(num_instances + 1)]) == + num_instances, "NonTrivialCons") + # (2) valid solution: each host is assigned 1 num-instances value + for i in xrange(num_hosts): + prob += (pulp.lpSum([variable_matrix[i][j] for j in + xrange(num_instances + 1)]) == 1, "ValidCons_Host_%s" % i) + + # The problem is solved using PULP's choice of Solver. + prob.solve(pulp_solver_classes.PULP_CBC_CMD( + maxSeconds=CONF.solver_scheduler.pulp_solver_timeout_seconds)) + + # Create host-instance tuples from the solutions. 
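The two validity constraints added above — (1) the chosen per-host instance counts must sum to the requested total, and (2) each host is assigned exactly one count — shrink the search space considerably. Under this encoding (one decision value `j` per host, which makes constraint (2) implicit), the feasible region can be checked by brute force:

```python
from itertools import product

# Brute-force check of the validity constraints above (assumed encoding):
# choice_per_host[i] = j means "host i runs j instances". One value per
# host makes constraint (2) implicit; constraint (1) requires the chosen
# counts to sum to the requested total.
def is_valid(choice_per_host, num_instances):
    return sum(choice_per_host) == num_instances

num_hosts, num_instances = 2, 2
feasible = [c for c in product(range(num_instances + 1), repeat=num_hosts)
            if is_valid(c, num_instances)]
# feasible -> [(0, 2), (1, 1), (2, 0)]
```

The CBC solver explores this space guided by the cost objective instead of enumerating it, but the feasible set it searches is the same.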
+ if pulp.LpStatus[prob.status] == 'Optimal': + num_insts_on_host = {} + for v in prob.variables(): + if v.name.startswith('HI'): + (host_key, instance_num_key) = v.name.lstrip('HI').lstrip( + '_').split('_') + if v.varValue == 1: + num_insts_on_host[host_key] = ( + instance_num_key_map[instance_num_key]) + instances_iter = iter(instance_uuids) + for host_key in host_keys: + num_insts_on_this_host = num_insts_on_host.get(host_key, 0) + for i in xrange(num_insts_on_this_host): + host_instance_combinations.append( + (host_key_map[host_key], instances_iter.next())) + else: + LOG.warn(_("Pulp solver did not find an optimal solution! reason: %s") + % pulp.LpStatus[prob.status]) + host_instance_combinations = [] + + return host_instance_combinations diff --git a/nova/solver_scheduler_exception.py b/nova/solver_scheduler_exception.py new file mode 100644 index 0000000..016be7a --- /dev/null +++ b/nova/solver_scheduler_exception.py @@ -0,0 +1,30 @@ +# Copyright (c) 2014 Cisco Systems, Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +from nova import exception +from nova.openstack.common.gettextutils import _ + + +class SolverFailed(exception.NovaException): + msg_fmt = _("Scheduler solver failed to find a solution.
%(reason)s") + + +class SchedulerSolverCostNotFound(exception.NovaException): + msg_fmt = _("Scheduler solver cost cannot be found: %(cost_name)s") + + +class SchedulerSolverConstraintNotFound(exception.NovaException): + msg_fmt = _("Scheduler solver constraint cannot be found: " + "%(constraint_name)s") diff --git a/nova/tests/scheduler/fakes.py b/nova/tests/scheduler/solver_scheduler_fakes.py similarity index 95% rename from nova/tests/scheduler/fakes.py rename to nova/tests/scheduler/solver_scheduler_fakes.py index 895ce22..9675dfe 100644 --- a/nova/tests/scheduler/fakes.py +++ b/nova/tests/scheduler/solver_scheduler_fakes.py @@ -21,9 +21,8 @@ import mox from nova.compute import vm_states from nova import db from nova.openstack.common import jsonutils -from nova.scheduler import filter_scheduler -from nova.scheduler import host_manager from nova.scheduler import solver_scheduler +from nova.scheduler import solver_scheduler_host_manager COMPUTE_NODES = [ @@ -188,19 +187,15 @@ INSTANCES = [ ] -class FakeFilterScheduler(filter_scheduler.FilterScheduler): - def __init__(self, *args, **kwargs): - super(FakeFilterScheduler, self).__init__(*args, **kwargs) - self.host_manager = host_manager.HostManager() - - class FakeSolverScheduler(solver_scheduler.ConstraintSolverScheduler): def __init__(self, *args, **kwargs): super(FakeSolverScheduler, self).__init__(*args, **kwargs) - self.host_manager = host_manager.HostManager() + self.host_manager = ( + solver_scheduler_host_manager.SolverSchedulerHostManager()) -class FakeHostManager(host_manager.HostManager): +class FakeSolverSchedulerHostManager( + solver_scheduler_host_manager.SolverSchedulerHostManager): """host1: free_ram_mb=1024-512-512=0, free_disk_gb=1024-512-512=0 host2: free_ram_mb=2048-512=1536 free_disk_gb=2048-512=1536 host3: free_ram_mb=4096-1024=3072 free_disk_gb=4096-1024=3072 @@ -208,7 +203,7 @@ class FakeHostManager(host_manager.HostManager): """ def __init__(self): - super(FakeHostManager, self).__init__() 
+ super(FakeSolverSchedulerHostManager, self).__init__() self.service_states = { 'host1': { @@ -226,9 +221,10 @@ class FakeHostManager(host_manager.HostManager): } -class FakeHostState(host_manager.HostState): +class FakeSolverSchedulerHostState( + solver_scheduler_host_manager.SolverSchedulerHostState): def __init__(self, host, node, attribute_dict): - super(FakeHostState, self).__init__(host, node) + super(FakeSolverSchedulerHostState, self).__init__(host, node) for (key, val) in attribute_dict.iteritems(): setattr(self, key, val) diff --git a/nova/tests/scheduler/solvers/constraints/__init__.py b/nova/tests/scheduler/solvers/constraints/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/nova/tests/scheduler/solvers/constraints/test_active_hosts_constraint.py b/nova/tests/scheduler/solvers/constraints/test_active_hosts_constraint.py new file mode 100644 index 0000000..6c17716 --- /dev/null +++ b/nova/tests/scheduler/solvers/constraints/test_active_hosts_constraint.py @@ -0,0 +1,49 @@ +# Copyright (c) 2014 Cisco Systems, Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
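The `_get_constraint_matrix` method earlier in this diff folds each constraint's boolean matrix into a running result with an element-wise logical AND, so a host/instance-count cell survives only if every plugged-in constraint accepts it. A toy sketch of that combination (names and matrices here are illustrative):

```python
# Sketch of the element-wise AND used by _get_constraint_matrix to fold
# each constraint's boolean matrix into the running result.
def combine(constraint_matrix, new_matrix):
    return [[a & b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(constraint_matrix, new_matrix)]

# Two hosts; columns represent candidate instance counts.
running = [[True, True], [True, True]]
from_disk = [[True, False], [True, True]]   # disk admits 1 inst on host 0
from_ram = [[True, True], [False, False]]   # ram rejects host 1 entirely

for mat in (from_disk, from_ram):
    running = combine(running, mat)
# running -> [[True, False], [False, False]]
```

Cells that end up `False` are later pinned to zero in the LP via equality constraints, which is how hard restrictions reach the solver.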
+ +import mock + +from nova.scheduler.solvers.constraints import active_hosts_constraint +from nova import test +from nova.tests.scheduler import solver_scheduler_fakes as fakes + + +class TestActiveHostsConstraint(test.NoDBTestCase): + + def setUp(self): + super(TestActiveHostsConstraint, self).setUp() + self.constraint_cls = active_hosts_constraint.ActiveHostsConstraint + self._generate_fake_constraint_input() + + def _generate_fake_constraint_input(self): + self.fake_filter_properties = { + 'instance_uuids': ['fake_uuid_%s' % x for x in range(3)], + 'num_instances': 3} + host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', {}) + host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', {}) + self.fake_hosts = [host1, host2] + + @mock.patch('nova.scheduler.solvers.constraints.' + 'active_hosts_constraint.ActiveHostsConstraint.' + 'host_filter_cls') + def test_get_constraint_matrix(self, mock_filter_cls): + expected_cons_mat = [ + [True, True, True], + [False, False, False]] + mock_filter = mock_filter_cls.return_value + mock_filter.host_passes.side_effect = [True, False] + cons_mat = self.constraint_cls().get_constraint_matrix( + self.fake_hosts, self.fake_filter_properties) + self.assertEqual(expected_cons_mat, cons_mat) diff --git a/nova/tests/scheduler/solvers/constraints/test_affinity_constraint.py b/nova/tests/scheduler/solvers/constraints/test_affinity_constraint.py new file mode 100644 index 0000000..2cd172a --- /dev/null +++ b/nova/tests/scheduler/solvers/constraints/test_affinity_constraint.py @@ -0,0 +1,107 @@ +# Copyright (c) 2014 Cisco Systems, Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. 
You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +import mock + +from nova.scheduler.solvers.constraints import affinity_constraint +from nova import test +from nova.tests.scheduler import solver_scheduler_fakes as fakes + + +class TestSameHostConstraint(test.NoDBTestCase): + + def setUp(self): + super(TestSameHostConstraint, self).setUp() + self.constraint_cls = affinity_constraint.SameHostConstraint + self._generate_fake_constraint_input() + + def _generate_fake_constraint_input(self): + self.fake_filter_properties = { + 'instance_uuids': ['fake_uuid_%s' % x for x in range(3)], + 'num_instances': 3} + host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', {}) + host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', {}) + self.fake_hosts = [host1, host2] + + @mock.patch('nova.scheduler.solvers.constraints.' + 'affinity_constraint.SameHostConstraint.' 
+ 'host_filter_cls') + def test_get_constraint_matrix(self, mock_filter_cls): + expected_cons_mat = [ + [True, True, True], + [False, False, False]] + mock_filter = mock_filter_cls.return_value + mock_filter.host_passes.side_effect = [True, False] + cons_mat = self.constraint_cls().get_constraint_matrix( + self.fake_hosts, self.fake_filter_properties) + self.assertEqual(expected_cons_mat, cons_mat) + + +class TestDifferentHostConstraint(test.NoDBTestCase): + + def setUp(self): + super(TestDifferentHostConstraint, self).setUp() + self.constraint_cls = affinity_constraint.DifferentHostConstraint + self._generate_fake_constraint_input() + + def _generate_fake_constraint_input(self): + self.fake_filter_properties = { + 'instance_uuids': ['fake_uuid_%s' % x for x in range(3)], + 'num_instances': 3} + host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', {}) + host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', {}) + self.fake_hosts = [host1, host2] + + @mock.patch('nova.scheduler.solvers.constraints.' + 'affinity_constraint.DifferentHostConstraint.' 
+ 'host_filter_cls') + def test_get_constraint_matrix(self, mock_filter_cls): + expected_cons_mat = [ + [True, True, True], + [False, False, False]] + mock_filter = mock_filter_cls.return_value + mock_filter.host_passes.side_effect = [True, False] + cons_mat = self.constraint_cls().get_constraint_matrix( + self.fake_hosts, self.fake_filter_properties) + self.assertEqual(expected_cons_mat, cons_mat) + + +class TestSimpleCidrAffinityConstraint(test.NoDBTestCase): + + def setUp(self): + super(TestSimpleCidrAffinityConstraint, self).setUp() + self.constraint_cls = affinity_constraint.SimpleCidrAffinityConstraint + self._generate_fake_constraint_input() + + def _generate_fake_constraint_input(self): + self.fake_filter_properties = { + 'instance_uuids': ['fake_uuid_%s' % x for x in range(3)], + 'num_instances': 3} + host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', {}) + host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', {}) + self.fake_hosts = [host1, host2] + + @mock.patch('nova.scheduler.solvers.constraints.' + 'affinity_constraint.SimpleCidrAffinityConstraint.' + 'host_filter_cls') + def test_get_constraint_matrix(self, mock_filter_cls): + expected_cons_mat = [ + [True, True, True], + [False, False, False]] + mock_filter = mock_filter_cls.return_value + mock_filter.host_passes.side_effect = [True, False] + cons_mat = self.constraint_cls().get_constraint_matrix( + self.fake_hosts, self.fake_filter_properties) + self.assertEqual(expected_cons_mat, cons_mat) diff --git a/nova/tests/scheduler/solvers/constraints/test_aggregate_disk.py b/nova/tests/scheduler/solvers/constraints/test_aggregate_disk.py new file mode 100644 index 0000000..8b9c1f8 --- /dev/null +++ b/nova/tests/scheduler/solvers/constraints/test_aggregate_disk.py @@ -0,0 +1,63 @@ +# Copyright (c) 2014 Cisco Systems, Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. 
You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +import mock + +from nova import context +from nova.scheduler.solvers.constraints import aggregate_disk +from nova import test +from nova.tests.scheduler import solver_scheduler_fakes as fakes + + +class TestAggregateDiskConstraint(test.NoDBTestCase): + def setUp(self): + super(TestAggregateDiskConstraint, self).setUp() + self.constraint_cls = aggregate_disk.AggregateDiskConstraint + self.context = context.RequestContext('fake', 'fake') + self._generate_fake_constraint_input() + + def _generate_fake_constraint_input(self): + self.fake_filter_properties = { + 'context': self.context, + 'instance_type': {'root_gb': 1, 'ephemeral_gb': 1, 'swap': 0}, + 'instance_uuids': ['fake_uuid_%s' % x for x in range(2)], + 'num_instances': 2} + host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', + {'free_disk_mb': 1024, 'total_usable_disk_gb': 2}) + host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', + {'free_disk_mb': 1024, 'total_usable_disk_gb': 2}) + host3 = fakes.FakeSolverSchedulerHostState('host3', 'node1', + {'free_disk_mb': 1024, 'total_usable_disk_gb': 2}) + self.fake_hosts = [host1, host2, host3] + + @mock.patch('nova.db.aggregate_metadata_get_by_host') + def test_get_constraint_matrix(self, agg_mock): + self.flags(disk_allocation_ratio=1.0) + + def _agg_mock_side_effect(*args, **kwargs): + if args[1] == 'host1': + return {'disk_allocation_ratio': set(['2.0', '3.0'])} + if args[1] == 'host2': + return {'disk_allocation_ratio': set(['3.0'])} + if args[1] == 'host3': + return {'disk_allocation_ratio': set()} + agg_mock.side_effect = 
_agg_mock_side_effect
+
+        expected_cons_mat = [
+            [True, False],
+            [True, True],
+            [False, False]]
+        cons_mat = self.constraint_cls().get_constraint_matrix(
+            self.fake_hosts, self.fake_filter_properties)
+        self.assertEqual(expected_cons_mat, cons_mat)
diff --git a/nova/tests/scheduler/solvers/constraints/test_aggregate_image_properties_isolation.py b/nova/tests/scheduler/solvers/constraints/test_aggregate_image_properties_isolation.py
new file mode 100644
index 0000000..65d61bf
--- /dev/null
+++ b/nova/tests/scheduler/solvers/constraints/test_aggregate_image_properties_isolation.py
@@ -0,0 +1,51 @@
+# Copyright (c) 2014 Cisco Systems, Inc.
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+ +import mock + +from nova.scheduler.solvers.constraints import \ + aggregate_image_properties_isolation +from nova import test +from nova.tests.scheduler import solver_scheduler_fakes as fakes + + +class TestAggregateImagePropertiesIsolationConstraint(test.NoDBTestCase): + + def setUp(self): + super(TestAggregateImagePropertiesIsolationConstraint, self).setUp() + self.constraint_cls = aggregate_image_properties_isolation.\ + AggregateImagePropertiesIsolationConstraint + self._generate_fake_constraint_input() + + def _generate_fake_constraint_input(self): + self.fake_filter_properties = { + 'instance_uuids': ['fake_uuid_%s' % x for x in range(3)], + 'num_instances': 3} + host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', {}) + host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', {}) + self.fake_hosts = [host1, host2] + + @mock.patch('nova.scheduler.solvers.constraints.' + 'aggregate_image_properties_isolation.' + 'AggregateImagePropertiesIsolationConstraint.host_filter_cls') + def test_get_constraint_matrix(self, mock_filter_cls): + expected_cons_mat = [ + [True, True, True], + [False, False, False]] + mock_filter = mock_filter_cls.return_value + mock_filter.host_passes.side_effect = [True, False] + cons_mat = self.constraint_cls().get_constraint_matrix( + self.fake_hosts, self.fake_filter_properties) + self.assertEqual(expected_cons_mat, cons_mat) diff --git a/nova/tests/scheduler/solvers/constraints/test_aggregate_instance_extra_specs.py b/nova/tests/scheduler/solvers/constraints/test_aggregate_instance_extra_specs.py new file mode 100644 index 0000000..1a76f9a --- /dev/null +++ b/nova/tests/scheduler/solvers/constraints/test_aggregate_instance_extra_specs.py @@ -0,0 +1,50 @@ +# Copyright (c) 2014 Cisco Systems, Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. 
You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +import mock + +from nova.scheduler.solvers.constraints import aggregate_instance_extra_specs +from nova import test +from nova.tests.scheduler import solver_scheduler_fakes as fakes + + +class TestAggregateInstanceExtraSpecsConstraint(test.NoDBTestCase): + + def setUp(self): + super(TestAggregateInstanceExtraSpecsConstraint, self).setUp() + self.constraint_cls = aggregate_instance_extra_specs.\ + AggregateInstanceExtraSpecsConstraint + self._generate_fake_constraint_input() + + def _generate_fake_constraint_input(self): + self.fake_filter_properties = { + 'instance_uuids': ['fake_uuid_%s' % x for x in range(3)], + 'num_instances': 3} + host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', {}) + host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', {}) + self.fake_hosts = [host1, host2] + + @mock.patch('nova.scheduler.solvers.constraints.' + 'aggregate_instance_extra_specs.' 
+ 'AggregateInstanceExtraSpecsConstraint.host_filter_cls') + def test_get_constraint_matrix(self, mock_filter_cls): + expected_cons_mat = [ + [True, True, True], + [False, False, False]] + mock_filter = mock_filter_cls.return_value + mock_filter.host_passes.side_effect = [True, False] + cons_mat = self.constraint_cls().get_constraint_matrix( + self.fake_hosts, self.fake_filter_properties) + self.assertEqual(expected_cons_mat, cons_mat) diff --git a/nova/tests/scheduler/solvers/constraints/test_aggregate_multitenancy_isolation.py b/nova/tests/scheduler/solvers/constraints/test_aggregate_multitenancy_isolation.py new file mode 100644 index 0000000..aa93d61 --- /dev/null +++ b/nova/tests/scheduler/solvers/constraints/test_aggregate_multitenancy_isolation.py @@ -0,0 +1,51 @@ +# Copyright (c) 2014 Cisco Systems, Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+ +import mock + +from nova.scheduler.solvers.constraints import \ + aggregate_multitenancy_isolation +from nova import test +from nova.tests.scheduler import solver_scheduler_fakes as fakes + + +class TestAggregateMultiTenancyIsolationConstraint(test.NoDBTestCase): + + def setUp(self): + super(TestAggregateMultiTenancyIsolationConstraint, self).setUp() + self.constraint_cls = aggregate_multitenancy_isolation.\ + AggregateMultiTenancyIsolationConstraint + self._generate_fake_constraint_input() + + def _generate_fake_constraint_input(self): + self.fake_filter_properties = { + 'instance_uuids': ['fake_uuid_%s' % x for x in range(3)], + 'num_instances': 3} + host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', {}) + host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', {}) + self.fake_hosts = [host1, host2] + + @mock.patch('nova.scheduler.solvers.constraints.' + 'aggregate_multitenancy_isolation.' + 'AggregateMultiTenancyIsolationConstraint.host_filter_cls') + def test_get_constraint_matrix(self, mock_filter_cls): + expected_cons_mat = [ + [True, True, True], + [False, False, False]] + mock_filter = mock_filter_cls.return_value + mock_filter.host_passes.side_effect = [True, False] + cons_mat = self.constraint_cls().get_constraint_matrix( + self.fake_hosts, self.fake_filter_properties) + self.assertEqual(expected_cons_mat, cons_mat) diff --git a/nova/tests/scheduler/solvers/constraints/test_aggregate_ram.py b/nova/tests/scheduler/solvers/constraints/test_aggregate_ram.py new file mode 100644 index 0000000..363c6cf --- /dev/null +++ b/nova/tests/scheduler/solvers/constraints/test_aggregate_ram.py @@ -0,0 +1,63 @@ +# Copyright (c) 2014 Cisco Systems, Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. 
You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +import mock + +from nova import context +from nova.scheduler.solvers.constraints import aggregate_ram +from nova import test +from nova.tests.scheduler import solver_scheduler_fakes as fakes + + +class TestAggregateRamConstraint(test.NoDBTestCase): + def setUp(self): + super(TestAggregateRamConstraint, self).setUp() + self.constraint_cls = aggregate_ram.AggregateRamConstraint + self.context = context.RequestContext('fake', 'fake') + self._generate_fake_constraint_input() + + def _generate_fake_constraint_input(self): + self.fake_filter_properties = { + 'context': self.context, + 'instance_type': {'memory_mb': 1024}, + 'instance_uuids': ['fake_uuid_%s' % x for x in range(2)], + 'num_instances': 2} + host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', + {'free_ram_mb': 512, 'total_usable_ram_mb': 1024}) + host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', + {'free_ram_mb': 512, 'total_usable_ram_mb': 1024}) + host3 = fakes.FakeSolverSchedulerHostState('host3', 'node1', + {'free_ram_mb': 512, 'total_usable_ram_mb': 1024}) + self.fake_hosts = [host1, host2, host3] + + @mock.patch('nova.db.aggregate_metadata_get_by_host') + def test_get_constraint_matrix(self, agg_mock): + self.flags(ram_allocation_ratio=1.0) + + def _agg_mock_side_effect(*args, **kwargs): + if args[1] == 'host1': + return {'ram_allocation_ratio': set(['1.0', '2.0'])} + if args[1] == 'host2': + return {'ram_allocation_ratio': set(['3.0'])} + if args[1] == 'host3': + return {'ram_allocation_ratio': set()} + agg_mock.side_effect = _agg_mock_side_effect + + expected_cons_mat = 
[
+            [False, False],
+            [True, True],
+            [False, False]]
+        cons_mat = self.constraint_cls().get_constraint_matrix(
+            self.fake_hosts, self.fake_filter_properties)
+        self.assertEqual(expected_cons_mat, cons_mat)
diff --git a/nova/tests/scheduler/solvers/constraints/test_aggregate_type_affinity.py b/nova/tests/scheduler/solvers/constraints/test_aggregate_type_affinity.py
new file mode 100644
index 0000000..b5d2e36
--- /dev/null
+++ b/nova/tests/scheduler/solvers/constraints/test_aggregate_type_affinity.py
@@ -0,0 +1,50 @@
+# Copyright (c) 2014 Cisco Systems, Inc.
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import mock
+
+from nova.scheduler.solvers.constraints import aggregate_type_affinity
+from nova import test
+from nova.tests.scheduler import solver_scheduler_fakes as fakes
+
+
+class TestAggregateTypeAffinityConstraint(test.NoDBTestCase):
+
+    def setUp(self):
+        super(TestAggregateTypeAffinityConstraint, self).setUp()
+        self.constraint_cls = aggregate_type_affinity.\
+            AggregateTypeAffinityConstraint
+        self._generate_fake_constraint_input()
+
+    def _generate_fake_constraint_input(self):
+        self.fake_filter_properties = {
+            'instance_uuids': ['fake_uuid_%s' % x for x in range(3)],
+            'num_instances': 3}
+        host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', {})
+        host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', {})
+        self.fake_hosts = [host1, host2]
+
+    @mock.patch('nova.scheduler.solvers.constraints.'
+                'aggregate_type_affinity.AggregateTypeAffinityConstraint.'
+ 'host_filter_cls') + def test_get_constraint_matrix(self, mock_filter_cls): + expected_cons_mat = [ + [True, True, True], + [False, False, False]] + mock_filter = mock_filter_cls.return_value + mock_filter.host_passes.side_effect = [True, False] + cons_mat = self.constraint_cls().get_constraint_matrix( + self.fake_hosts, self.fake_filter_properties) + self.assertEqual(expected_cons_mat, cons_mat) diff --git a/nova/tests/scheduler/solvers/constraints/test_aggregate_vcpu.py b/nova/tests/scheduler/solvers/constraints/test_aggregate_vcpu.py new file mode 100644 index 0000000..9a050db --- /dev/null +++ b/nova/tests/scheduler/solvers/constraints/test_aggregate_vcpu.py @@ -0,0 +1,64 @@ +# Copyright (c) 2014 Cisco Systems, Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+ +import mock + +from nova import context +from nova.scheduler.solvers.constraints import aggregate_vcpu +from nova import test +from nova.tests.scheduler import solver_scheduler_fakes as fakes + + +class TestAggregateVcpuConstraint(test.NoDBTestCase): + def setUp(self): + super(TestAggregateVcpuConstraint, self).setUp() + self.constraint_cls = aggregate_vcpu.AggregateVcpuConstraint + self.context = context.RequestContext('fake', 'fake') + self._generate_fake_constraint_input() + + def _generate_fake_constraint_input(self): + self.fake_filter_properties = { + 'context': self.context, + 'instance_type': {'vcpus': 2}, + 'instance_uuids': ['fake_uuid_%s' % x for x in range(2)], + 'num_instances': 2} + host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', + {'vcpus_total': 16, 'vcpus_used': 16}) + host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', + {'vcpus_total': 16, 'vcpus_used': 16}) + host3 = fakes.FakeSolverSchedulerHostState('host3', 'node1', + {'vcpus_total': 16, 'vcpus_used': 16}) + self.fake_hosts = [host1, host2, host3] + + @mock.patch('nova.db.aggregate_metadata_get_by_host') + def test_get_constraint_matrix(self, agg_mock): + self.flags(cpu_allocation_ratio=1.0) + + def _agg_mock_side_effect(*args, **kwargs): + if args[1] == 'host1': + return {'cpu_allocation_ratio': set(['1.0', '2.0'])} + if args[1] == 'host2': + return {'cpu_allocation_ratio': set(['3.0'])} + if args[1] == 'host3': + return {'cpu_allocation_ratio': set()} + agg_mock.side_effect = _agg_mock_side_effect + + expected_cons_mat = [ + [False, False], + [True, True], + [False, False]] + cons_mat = self.constraint_cls().get_constraint_matrix( + self.fake_hosts, self.fake_filter_properties) + self.assertEqual(expected_cons_mat, cons_mat) diff --git a/nova/tests/scheduler/solvers/constraints/test_availability_zone_constraint.py b/nova/tests/scheduler/solvers/constraints/test_availability_zone_constraint.py new file mode 100644 index 0000000..bcd8b47 --- /dev/null +++ 
b/nova/tests/scheduler/solvers/constraints/test_availability_zone_constraint.py @@ -0,0 +1,50 @@ +# Copyright (c) 2014 Cisco Systems, Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +import mock + +from nova.scheduler.solvers.constraints import availability_zone_constraint +from nova import test +from nova.tests.scheduler import solver_scheduler_fakes as fakes + + +class TestAvailabilityZoneConstraint(test.NoDBTestCase): + + def setUp(self): + super(TestAvailabilityZoneConstraint, self).setUp() + self.constraint_cls = availability_zone_constraint.\ + AvailabilityZoneConstraint + self._generate_fake_constraint_input() + + def _generate_fake_constraint_input(self): + self.fake_filter_properties = { + 'instance_uuids': ['fake_uuid_%s' % x for x in range(3)], + 'num_instances': 3} + host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', {}) + host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', {}) + self.fake_hosts = [host1, host2] + + @mock.patch('nova.scheduler.solvers.constraints.' + 'availability_zone_constraint.' 
+ 'AvailabilityZoneConstraint.host_filter_cls') + def test_get_constraint_matrix(self, mock_filter_cls): + expected_cons_mat = [ + [True, True, True], + [False, False, False]] + mock_filter = mock_filter_cls.return_value + mock_filter.host_passes.side_effect = [True, False] + cons_mat = self.constraint_cls().get_constraint_matrix( + self.fake_hosts, self.fake_filter_properties) + self.assertEqual(expected_cons_mat, cons_mat) diff --git a/nova/tests/scheduler/solvers/constraints/test_compute_capabilities_constraint.py b/nova/tests/scheduler/solvers/constraints/test_compute_capabilities_constraint.py new file mode 100644 index 0000000..3831939 --- /dev/null +++ b/nova/tests/scheduler/solvers/constraints/test_compute_capabilities_constraint.py @@ -0,0 +1,50 @@ +# Copyright (c) 2014 Cisco Systems, Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+ +import mock + +from nova.scheduler.solvers.constraints import compute_capabilities_constraint +from nova import test +from nova.tests.scheduler import solver_scheduler_fakes as fakes + + +class TestComputeCapabilitiesConstraint(test.NoDBTestCase): + + def setUp(self): + super(TestComputeCapabilitiesConstraint, self).setUp() + self.constraint_cls = compute_capabilities_constraint.\ + ComputeCapabilitiesConstraint + self._generate_fake_constraint_input() + + def _generate_fake_constraint_input(self): + self.fake_filter_properties = { + 'instance_uuids': ['fake_uuid_%s' % x for x in range(3)], + 'num_instances': 3} + host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', {}) + host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', {}) + self.fake_hosts = [host1, host2] + + @mock.patch('nova.scheduler.solvers.constraints.' + 'compute_capabilities_constraint.' + 'ComputeCapabilitiesConstraint.host_filter_cls') + def test_get_constraint_matrix(self, mock_filter_cls): + expected_cons_mat = [ + [True, True, True], + [False, False, False]] + mock_filter = mock_filter_cls.return_value + mock_filter.host_passes.side_effect = [True, False] + cons_mat = self.constraint_cls().get_constraint_matrix( + self.fake_hosts, self.fake_filter_properties) + self.assertEqual(expected_cons_mat, cons_mat) diff --git a/nova/tests/scheduler/solvers/constraints/test_constraints.py b/nova/tests/scheduler/solvers/constraints/test_constraints.py new file mode 100644 index 0000000..7d42518 --- /dev/null +++ b/nova/tests/scheduler/solvers/constraints/test_constraints.py @@ -0,0 +1,82 @@ +# Copyright (c) 2014 Cisco Systems, Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. 
You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +""" +Tests for solver scheduler constraints. +""" + +import mock + +from nova import context +from nova.scheduler.solvers import constraints +from nova import test +from nova.tests.scheduler import solver_scheduler_fakes as fakes + + +class ConstraintTestBase(test.NoDBTestCase): + """Base test case for constraints.""" + def setUp(self): + super(ConstraintTestBase, self).setUp() + self.context = context.RequestContext('fake', 'fake') + constraint_handler = constraints.ConstraintHandler() + classes = constraint_handler.get_matching_classes( + ['nova.scheduler.solvers.constraints.all_constraints']) + self.class_map = {} + for c in classes: + self.class_map[c.__name__] = c + + +class ConstraintsTestCase(ConstraintTestBase): + def test_all_constraints(self): + """Test the existence of all constraint classes.""" + self.assertIn('NoConstraint', self.class_map) + self.assertIn('ActiveHostsConstraint', self.class_map) + self.assertIn('DifferentHostConstraint', self.class_map) + + def test_base_linear_constraints(self): + blc = constraints.BaseLinearConstraint() + variables, coefficients, constants, operators = ( + blc.get_components(None, None, None)) + self.assertEqual([], variables) + self.assertEqual([], coefficients) + self.assertEqual([], constants) + self.assertEqual([], operators) + + +class TestBaseFilterConstraint(ConstraintTestBase): + def setUp(self): + super(TestBaseFilterConstraint, self).setUp() + self.constraint_cls = constraints.BaseFilterConstraint + self._generate_fake_constraint_input() + + def _generate_fake_constraint_input(self): + 
self.fake_filter_properties = { + 'instance_uuids': ['fake_uuid_%s' % x for x in range(3)], + 'num_instances': 3} + host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', {}) + host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', {}) + self.fake_hosts = [host1, host2] + + @mock.patch('nova.scheduler.solvers.constraints.' + 'BaseFilterConstraint.host_filter_cls') + def test_get_constraint_matrix(self, mock_filter_cls): + expected_cons_mat = [ + [True, True, True], + [False, False, False]] + mock_filter = mock_filter_cls.return_value + mock_filter.host_passes.side_effect = [True, False] + cons_mat = self.constraint_cls().get_constraint_matrix( + self.fake_hosts, self.fake_filter_properties) + self.assertEqual(expected_cons_mat, cons_mat) diff --git a/nova/tests/scheduler/solvers/constraints/test_disk_constraint.py b/nova/tests/scheduler/solvers/constraints/test_disk_constraint.py new file mode 100644 index 0000000..57b6d61 --- /dev/null +++ b/nova/tests/scheduler/solvers/constraints/test_disk_constraint.py @@ -0,0 +1,63 @@ +# Copyright (c) 2014 Cisco Systems, Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+ +from nova.scheduler.solvers.constraints import disk_constraint +from nova import test +from nova.tests.scheduler import solver_scheduler_fakes as fakes + + +class TestDiskConstraint(test.NoDBTestCase): + + def setUp(self): + super(TestDiskConstraint, self).setUp() + self.constraint_cls = disk_constraint.DiskConstraint + self._generate_fake_constraint_input() + + def _generate_fake_constraint_input(self): + self.fake_filter_properties = { + 'instance_type': {'root_gb': 1, 'ephemeral_gb': 1, + 'swap': 512}, + 'instance_uuids': ['fake_uuid_%s' % x for x in range(2)], + 'num_instances': 2} + host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', + {'free_disk_mb': 1 * 1024, 'total_usable_disk_gb': 2}) + host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', + {'free_disk_mb': 10 * 1024, 'total_usable_disk_gb': 12}) + host3 = fakes.FakeSolverSchedulerHostState('host3', 'node1', + {'free_disk_mb': 1 * 1024, 'total_usable_disk_gb': 6}) + self.fake_hosts = [host1, host2, host3] + + def test_get_constraint_matrix(self): + self.flags(disk_allocation_ratio=1.0) + expected_cons_mat = [ + [False, False], + [True, True], + [False, False]] + cons_mat = self.constraint_cls().get_constraint_matrix( + self.fake_hosts, self.fake_filter_properties) + self.assertEqual(expected_cons_mat, cons_mat) + + def test_get_constraint_matrix_oversubscribe(self): + self.flags(disk_allocation_ratio=2.0) + expected_cons_mat = [ + [True, False], + [True, True], + [True, True]] + cons_mat = self.constraint_cls().get_constraint_matrix( + self.fake_hosts, self.fake_filter_properties) + self.assertEqual(expected_cons_mat, cons_mat) + self.assertEqual(2 * 2.0, self.fake_hosts[0].limits['disk_gb']) + self.assertEqual(12 * 2.0, self.fake_hosts[1].limits['disk_gb']) + self.assertEqual(6 * 2.0, self.fake_hosts[2].limits['disk_gb']) diff --git a/nova/tests/scheduler/solvers/constraints/test_image_props_constraint.py b/nova/tests/scheduler/solvers/constraints/test_image_props_constraint.py new 
file mode 100644 index 0000000..8701bb1 --- /dev/null +++ b/nova/tests/scheduler/solvers/constraints/test_image_props_constraint.py @@ -0,0 +1,49 @@ +# Copyright (c) 2014 Cisco Systems, Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +import mock + +from nova.scheduler.solvers.constraints import image_props_constraint +from nova import test +from nova.tests.scheduler import solver_scheduler_fakes as fakes + + +class TestImagePropsConstraint(test.NoDBTestCase): + + def setUp(self): + super(TestImagePropsConstraint, self).setUp() + self.constraint_cls = image_props_constraint.ImagePropertiesConstraint + self._generate_fake_constraint_input() + + def _generate_fake_constraint_input(self): + self.fake_filter_properties = { + 'instance_uuids': ['fake_uuid_%s' % x for x in range(3)], + 'num_instances': 3} + host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', {}) + host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', {}) + self.fake_hosts = [host1, host2] + + @mock.patch('nova.scheduler.solvers.constraints.' + 'image_props_constraint.ImagePropertiesConstraint.' 
+ 'host_filter_cls') + def test_get_constraint_matrix(self, mock_filter_cls): + expected_cons_mat = [ + [True, True, True], + [False, False, False]] + mock_filter = mock_filter_cls.return_value + mock_filter.host_passes.side_effect = [True, False] + cons_mat = self.constraint_cls().get_constraint_matrix( + self.fake_hosts, self.fake_filter_properties) + self.assertEqual(expected_cons_mat, cons_mat) diff --git a/nova/tests/scheduler/solvers/constraints/test_io_ops_constraint.py b/nova/tests/scheduler/solvers/constraints/test_io_ops_constraint.py new file mode 100644 index 0000000..1b20ef9 --- /dev/null +++ b/nova/tests/scheduler/solvers/constraints/test_io_ops_constraint.py @@ -0,0 +1,58 @@ +# Copyright (c) 2014 Cisco Systems, Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+ +from nova.scheduler.solvers.constraints import io_ops_constraint +from nova import test +from nova.tests.scheduler import solver_scheduler_fakes as fakes + + +class TestIoOpsConstraint(test.NoDBTestCase): + + def setUp(self): + super(TestIoOpsConstraint, self).setUp() + self.constraint_cls = io_ops_constraint.IoOpsConstraint + self._generate_fake_constraint_input() + + def _generate_fake_constraint_input(self): + self.fake_filter_properties = { + 'instance_uuids': ['fake_uuid_%s' % x for x in range(2)], + 'num_instances': 2} + host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', + {'num_io_ops': 6}) + host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', + {'num_io_ops': 10}) + host3 = fakes.FakeSolverSchedulerHostState('host3', 'node1', + {'num_io_ops': 15}) + self.fake_hosts = [host1, host2, host3] + + def test_get_constraint_matrix(self): + self.flags(max_io_ops_per_host=7) + expected_cons_mat = [ + [True, False], + [False, False], + [False, False]] + cons_mat = self.constraint_cls().get_constraint_matrix( + self.fake_hosts, self.fake_filter_properties) + self.assertEqual(expected_cons_mat, cons_mat) + + def test_get_constraint_matrix2(self): + self.flags(max_io_ops_per_host=15) + expected_cons_mat = [ + [True, True], + [True, True], + [False, False]] + cons_mat = self.constraint_cls().get_constraint_matrix( + self.fake_hosts, self.fake_filter_properties) + self.assertEqual(expected_cons_mat, cons_mat) diff --git a/nova/tests/scheduler/solvers/constraints/test_isolated_hosts_constraint.py b/nova/tests/scheduler/solvers/constraints/test_isolated_hosts_constraint.py new file mode 100644 index 0000000..29e28e1 --- /dev/null +++ b/nova/tests/scheduler/solvers/constraints/test_isolated_hosts_constraint.py @@ -0,0 +1,50 @@ +# Copyright (c) 2014 Cisco Systems, Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. 
You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +import mock + +from nova.scheduler.solvers.constraints import isolated_hosts_constraint +from nova import test +from nova.tests.scheduler import solver_scheduler_fakes as fakes + + +class TestIsolatedHostsConstraint(test.NoDBTestCase): + + def setUp(self): + super(TestIsolatedHostsConstraint, self).setUp() + self.constraint_cls = isolated_hosts_constraint.\ + IsolatedHostsConstraint + self._generate_fake_constraint_input() + + def _generate_fake_constraint_input(self): + self.fake_filter_properties = { + 'instance_uuids': ['fake_uuid_%s' % x for x in range(3)], + 'num_instances': 3} + host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', {}) + host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', {}) + self.fake_hosts = [host1, host2] + + @mock.patch('nova.scheduler.solvers.constraints.' + 'isolated_hosts_constraint.IsolatedHostsConstraint.' 
+ 'host_filter_cls') + def test_get_constraint_matrix(self, mock_filter_cls): + expected_cons_mat = [ + [True, True, True], + [False, False, False]] + mock_filter = mock_filter_cls.return_value + mock_filter.host_passes.side_effect = [True, False] + cons_mat = self.constraint_cls().get_constraint_matrix( + self.fake_hosts, self.fake_filter_properties) + self.assertEqual(expected_cons_mat, cons_mat) diff --git a/nova/tests/scheduler/solvers/constraints/test_json_constraint.py b/nova/tests/scheduler/solvers/constraints/test_json_constraint.py new file mode 100644 index 0000000..d3de253 --- /dev/null +++ b/nova/tests/scheduler/solvers/constraints/test_json_constraint.py @@ -0,0 +1,49 @@ +# Copyright (c) 2014 Cisco Systems, Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+ +import mock + +from nova.scheduler.solvers.constraints import json_constraint +from nova import test +from nova.tests.scheduler import solver_scheduler_fakes as fakes + + +class TestJsonConstraint(test.NoDBTestCase): + + def setUp(self): + super(TestJsonConstraint, self).setUp() + self.constraint_cls = json_constraint.JsonConstraint + self._generate_fake_constraint_input() + + def _generate_fake_constraint_input(self): + self.fake_filter_properties = { + 'instance_uuids': ['fake_uuid_%s' % x for x in range(3)], + 'num_instances': 3} + host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', {}) + host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', {}) + self.fake_hosts = [host1, host2] + + @mock.patch('nova.scheduler.solvers.constraints.' + 'json_constraint.JsonConstraint.' + 'host_filter_cls') + def test_get_constraint_matrix(self, mock_filter_cls): + expected_cons_mat = [ + [True, True, True], + [False, False, False]] + mock_filter = mock_filter_cls.return_value + mock_filter.host_passes.side_effect = [True, False] + cons_mat = self.constraint_cls().get_constraint_matrix( + self.fake_hosts, self.fake_filter_properties) + self.assertEqual(expected_cons_mat, cons_mat) diff --git a/nova/tests/scheduler/solvers/constraints/test_metrics_constraint.py b/nova/tests/scheduler/solvers/constraints/test_metrics_constraint.py new file mode 100644 index 0000000..4e42baa --- /dev/null +++ b/nova/tests/scheduler/solvers/constraints/test_metrics_constraint.py @@ -0,0 +1,49 @@ +# Copyright (c) 2014 Cisco Systems, Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. 
You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +import mock + +from nova.scheduler.solvers.constraints import metrics_constraint +from nova import test +from nova.tests.scheduler import solver_scheduler_fakes as fakes + + +class TestMetricsConstraint(test.NoDBTestCase): + + def setUp(self): + super(TestMetricsConstraint, self).setUp() + self.constraint_cls = metrics_constraint.MetricsConstraint + self._generate_fake_constraint_input() + + def _generate_fake_constraint_input(self): + self.fake_filter_properties = { + 'instance_uuids': ['fake_uuid_%s' % x for x in range(3)], + 'num_instances': 3} + host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', {}) + host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', {}) + self.fake_hosts = [host1, host2] + + @mock.patch('nova.scheduler.solvers.constraints.' + 'metrics_constraint.MetricsConstraint.' + 'host_filter_cls') + def test_get_constraint_matrix(self, mock_filter_cls): + expected_cons_mat = [ + [True, True, True], + [False, False, False]] + mock_filter = mock_filter_cls.return_value + mock_filter.host_passes.side_effect = [True, False] + cons_mat = self.constraint_cls().get_constraint_matrix( + self.fake_hosts, self.fake_filter_properties) + self.assertEqual(expected_cons_mat, cons_mat) diff --git a/nova/tests/scheduler/solvers/constraints/test_no_constraint.py b/nova/tests/scheduler/solvers/constraints/test_no_constraint.py new file mode 100644 index 0000000..34a1276 --- /dev/null +++ b/nova/tests/scheduler/solvers/constraints/test_no_constraint.py @@ -0,0 +1,42 @@ +# Copyright (c) 2014 Cisco Systems, Inc. +# All Rights Reserved. 
+# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +from nova.scheduler.solvers.constraints import no_constraint +from nova import test +from nova.tests.scheduler import solver_scheduler_fakes as fakes + + +class TestNoConstraint(test.NoDBTestCase): + + def setUp(self): + super(TestNoConstraint, self).setUp() + self.constraint_cls = no_constraint.NoConstraint + self._generate_fake_constraint_input() + + def _generate_fake_constraint_input(self): + self.fake_filter_properties = { + 'instance_uuids': ['fake_uuid_%s' % x for x in range(3)], + 'num_instances': 3} + host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', {}) + host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', {}) + self.fake_hosts = [host1, host2] + + def test_get_constraint_matrix(self): + expected_cons_mat = [ + [True, True, True], + [True, True, True]] + cons_mat = self.constraint_cls().get_constraint_matrix( + self.fake_hosts, self.fake_filter_properties) + self.assertEqual(expected_cons_mat, cons_mat) diff --git a/nova/tests/scheduler/solvers/constraints/test_num_instances_constraint.py b/nova/tests/scheduler/solvers/constraints/test_num_instances_constraint.py new file mode 100644 index 0000000..7b8255f --- /dev/null +++ b/nova/tests/scheduler/solvers/constraints/test_num_instances_constraint.py @@ -0,0 +1,48 @@ +# Copyright (c) 2014 Cisco Systems, Inc. +# All Rights Reserved. 
+# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +from nova.scheduler.solvers.constraints import num_instances_constraint +from nova import test +from nova.tests.scheduler import solver_scheduler_fakes as fakes + + +class TestNumInstancesConstraint(test.NoDBTestCase): + + def setUp(self): + super(TestNumInstancesConstraint, self).setUp() + self.constraint_cls = num_instances_constraint.NumInstancesConstraint + self._generate_fake_constraint_input() + + def _generate_fake_constraint_input(self): + self.fake_filter_properties = { + 'instance_uuids': ['fake_uuid_%s' % x for x in range(2)], + 'num_instances': 2} + host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', + {'num_instances': 1}) + host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', + {'num_instances': 4}) + host3 = fakes.FakeSolverSchedulerHostState('host3', 'node1', + {'num_instances': 5}) + self.fake_hosts = [host1, host2, host3] + + def test_get_constraint_matrix(self): + self.flags(max_instances_per_host=5) + expected_cons_mat = [ + [True, True], + [True, False], + [False, False]] + cons_mat = self.constraint_cls().get_constraint_matrix( + self.fake_hosts, self.fake_filter_properties) + self.assertEqual(expected_cons_mat, cons_mat) diff --git a/nova/tests/scheduler/solvers/constraints/test_pci_passthrough_constraint.py b/nova/tests/scheduler/solvers/constraints/test_pci_passthrough_constraint.py new file mode 100644 index 0000000..c8fdf47 --- /dev/null +++ 
b/nova/tests/scheduler/solvers/constraints/test_pci_passthrough_constraint.py @@ -0,0 +1,56 @@ +# Copyright (c) 2014 Cisco Systems, Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +import mock + +from nova.pci import pci_stats +from nova.scheduler.solvers.constraints import pci_passthrough_constraint +from nova import test +from nova.tests.scheduler import solver_scheduler_fakes as fakes + + +class TestPciPassthroughConstraint(test.NoDBTestCase): + + def setUp(self): + super(TestPciPassthroughConstraint, self).setUp() + self.constraint_cls = \ + pci_passthrough_constraint.PciPassthroughConstraint + self._generate_fake_constraint_input() + + def _generate_fake_constraint_input(self): + requests = [{'count': 1, 'spec': [{'vendor_id': '8086'}]}] + self.fake_filter_properties = { + 'pci_requests': requests, + 'instance_uuids': ['fake_uuid_%s' % x for x in range(2)], + 'num_instances': 2} + host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', + {'pci_stats': pci_stats.PciDeviceStats()}) + host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', + {'pci_stats': pci_stats.PciDeviceStats()}) + host3 = fakes.FakeSolverSchedulerHostState('host3', 'node1', + {'pci_stats': pci_stats.PciDeviceStats()}) + self.fake_hosts = [host1, host2, host3] + + @mock.patch('nova.pci.pci_stats.PciDeviceStats.support_requests') + @mock.patch('nova.pci.pci_stats.PciDeviceStats.apply_requests') + def test_get_constraint_matrix(self, apl_reqs, spt_reqs): + 
spt_reqs.side_effect = [True, False] + [False] + [True, True, False] + expected_cons_mat = [ + [True, False], + [False, False], + [True, True]] + cons_mat = self.constraint_cls().get_constraint_matrix( + self.fake_hosts, self.fake_filter_properties) + self.assertEqual(expected_cons_mat, cons_mat) diff --git a/nova/tests/scheduler/solvers/constraints/test_ram_constraint.py b/nova/tests/scheduler/solvers/constraints/test_ram_constraint.py new file mode 100644 index 0000000..08b7f1e --- /dev/null +++ b/nova/tests/scheduler/solvers/constraints/test_ram_constraint.py @@ -0,0 +1,62 @@ +# Copyright (c) 2014 Cisco Systems, Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+ +from nova.scheduler.solvers.constraints import ram_constraint +from nova import test +from nova.tests.scheduler import solver_scheduler_fakes as fakes + + +class TestRamConstraint(test.NoDBTestCase): + + def setUp(self): + super(TestRamConstraint, self).setUp() + self.constraint_cls = ram_constraint.RamConstraint + self._generate_fake_constraint_input() + + def _generate_fake_constraint_input(self): + self.fake_filter_properties = { + 'instance_type': {'memory_mb': 1024}, + 'instance_uuids': ['fake_uuid_%s' % x for x in range(2)], + 'num_instances': 2} + host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', + {'free_ram_mb': 512, 'total_usable_ram_mb': 1024}) + host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', + {'free_ram_mb': 2048, 'total_usable_ram_mb': 2048}) + host3 = fakes.FakeSolverSchedulerHostState('host3', 'node1', + {'free_ram_mb': -256, 'total_usable_ram_mb': 512}) + self.fake_hosts = [host1, host2, host3] + + def test_get_constraint_matrix(self): + self.flags(ram_allocation_ratio=1.0) + expected_cons_mat = [ + [False, False], + [True, True], + [False, False]] + cons_mat = self.constraint_cls().get_constraint_matrix( + self.fake_hosts, self.fake_filter_properties) + self.assertEqual(expected_cons_mat, cons_mat) + + def test_get_constraint_matrix_oversubscribe(self): + self.flags(ram_allocation_ratio=2.0) + expected_cons_mat = [ + [True, False], + [True, True], + [False, False]] + cons_mat = self.constraint_cls().get_constraint_matrix( + self.fake_hosts, self.fake_filter_properties) + self.assertEqual(expected_cons_mat, cons_mat) + self.assertEqual(1024 * 2.0, self.fake_hosts[0].limits['memory_mb']) + self.assertEqual(2048 * 2.0, self.fake_hosts[1].limits['memory_mb']) + self.assertEqual(512 * 2.0, self.fake_hosts[2].limits['memory_mb']) diff --git a/nova/tests/scheduler/solvers/constraints/test_retry_constraint.py b/nova/tests/scheduler/solvers/constraints/test_retry_constraint.py new file mode 100644 index 0000000..ed4d981 --- 
/dev/null +++ b/nova/tests/scheduler/solvers/constraints/test_retry_constraint.py @@ -0,0 +1,49 @@ +# Copyright (c) 2014 Cisco Systems, Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +import mock + +from nova.scheduler.solvers.constraints import retry_constraint +from nova import test +from nova.tests.scheduler import solver_scheduler_fakes as fakes + + +class TestRetryConstraint(test.NoDBTestCase): + + def setUp(self): + super(TestRetryConstraint, self).setUp() + self.constraint_cls = retry_constraint.RetryConstraint + self._generate_fake_constraint_input() + + def _generate_fake_constraint_input(self): + self.fake_filter_properties = { + 'instance_uuids': ['fake_uuid_%s' % x for x in range(3)], + 'num_instances': 3} + host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', {}) + host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', {}) + self.fake_hosts = [host1, host2] + + @mock.patch('nova.scheduler.solvers.constraints.' + 'retry_constraint.RetryConstraint.' 
+ 'host_filter_cls') + def test_get_constraint_matrix(self, mock_filter_cls): + expected_cons_mat = [ + [True, True, True], + [False, False, False]] + mock_filter = mock_filter_cls.return_value + mock_filter.host_passes.side_effect = [True, False] + cons_mat = self.constraint_cls().get_constraint_matrix( + self.fake_hosts, self.fake_filter_properties) + self.assertEqual(expected_cons_mat, cons_mat) diff --git a/nova/tests/scheduler/solvers/constraints/test_server_group_affinity_constraint.py b/nova/tests/scheduler/solvers/constraints/test_server_group_affinity_constraint.py new file mode 100644 index 0000000..c59d983 --- /dev/null +++ b/nova/tests/scheduler/solvers/constraints/test_server_group_affinity_constraint.py @@ -0,0 +1,131 @@ +# Copyright (c) 2014 Cisco Systems, Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+ +from nova.scheduler.solvers.constraints import \ + server_group_affinity_constraint +from nova import test +from nova.tests.scheduler import solver_scheduler_fakes as fakes + + +class TestServerGroupAffinityConstraint(test.NoDBTestCase): + + def setUp(self): + super(TestServerGroupAffinityConstraint, self).setUp() + self.constraint_cls = \ + server_group_affinity_constraint.ServerGroupAffinityConstraint + self._generate_fake_constraint_input() + + def _generate_fake_constraint_input(self): + host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', {}) + host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', {}) + host3 = fakes.FakeSolverSchedulerHostState('host3', 'node1', {}) + self.fake_hosts = [host1, host2, host3] + + def test_get_constraint_matrix(self): + fake_filter_properties = { + 'group_policies': ['affinity'], + 'group_hosts': ['host2'], + 'instance_uuids': ['fake_uuid_%s' % x for x in range(2)], + 'num_instances': 2} + expected_cons_mat = [ + [False, False], + [True, True], + [False, False]] + cons_mat = self.constraint_cls().get_constraint_matrix( + self.fake_hosts, fake_filter_properties) + self.assertEqual(expected_cons_mat, cons_mat) + + def test_get_constraint_matrix_empty_group_hosts_list(self): + fake_filter_properties = { + 'group_policies': ['affinity'], + 'group_hosts': [], + 'instance_uuids': ['fake_uuid_%s' % x for x in range(2)], + 'num_instances': 2} + expected_cons_mat = [ + [False, True], + [False, True], + [False, True]] + cons_mat = self.constraint_cls().get_constraint_matrix( + self.fake_hosts, fake_filter_properties) + self.assertEqual(expected_cons_mat, cons_mat) + + def test_get_constraint_matrix_wrong_policy(self): + fake_filter_properties = { + 'group_policies': ['other'], + 'instance_uuids': ['fake_uuid_%s' % x for x in range(2)], + 'num_instances': 2} + expected_cons_mat = [ + [True, True], + [True, True], + [True, True]] + cons_mat = self.constraint_cls().get_constraint_matrix( + self.fake_hosts, 
fake_filter_properties) + self.assertEqual(expected_cons_mat, cons_mat) + + +class TestServerGroupAntiAffinityConstraint(test.NoDBTestCase): + + def setUp(self): + super(TestServerGroupAntiAffinityConstraint, self).setUp() + self.constraint_cls = server_group_affinity_constraint.\ + ServerGroupAntiAffinityConstraint + self._generate_fake_constraint_input() + + def _generate_fake_constraint_input(self): + host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', {}) + host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', {}) + host3 = fakes.FakeSolverSchedulerHostState('host3', 'node1', {}) + self.fake_hosts = [host1, host2, host3] + + def test_get_constraint_matrix(self): + fake_filter_properties = { + 'group_policies': ['anti-affinity'], + 'group_hosts': ['host1', 'host3'], + 'instance_uuids': ['fake_uuid_%s' % x for x in range(2)], + 'num_instances': 2} + expected_cons_mat = [ + [False, False], + [True, False], + [False, False]] + cons_mat = self.constraint_cls().get_constraint_matrix( + self.fake_hosts, fake_filter_properties) + self.assertEqual(expected_cons_mat, cons_mat) + + def test_get_constraint_matrix_empty_group_hosts_list(self): + fake_filter_properties = { + 'group_policies': ['anti-affinity'], + 'group_hosts': [], + 'instance_uuids': ['fake_uuid_%s' % x for x in range(2)], + 'num_instances': 2} + expected_cons_mat = [ + [True, False], + [True, False], + [True, False]] + cons_mat = self.constraint_cls().get_constraint_matrix( + self.fake_hosts, fake_filter_properties) + self.assertEqual(expected_cons_mat, cons_mat) + + def test_get_constraint_matrix_wrong_policy(self): + fake_filter_properties = { + 'group_policies': ['other'], + 'instance_uuids': ['fake_uuid_%s' % x for x in range(2)], + 'num_instances': 2} + expected_cons_mat = [ + [True, True], + [True, True], + [True, True]] + cons_mat = self.constraint_cls().get_constraint_matrix( + self.fake_hosts, fake_filter_properties) + self.assertEqual(expected_cons_mat, cons_mat) diff --git 
a/nova/tests/scheduler/solvers/constraints/test_trusted_hosts_constraint.py b/nova/tests/scheduler/solvers/constraints/test_trusted_hosts_constraint.py new file mode 100644 index 0000000..409cad0 --- /dev/null +++ b/nova/tests/scheduler/solvers/constraints/test_trusted_hosts_constraint.py @@ -0,0 +1,49 @@ +# Copyright (c) 2014 Cisco Systems, Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +import mock + +from nova.scheduler.solvers.constraints import trusted_hosts_constraint +from nova import test +from nova.tests.scheduler import solver_scheduler_fakes as fakes + + +class TestTrustedHostsConstraint(test.NoDBTestCase): + + def setUp(self): + super(TestTrustedHostsConstraint, self).setUp() + self.constraint_cls = trusted_hosts_constraint.TrustedHostsConstraint + self._generate_fake_constraint_input() + + def _generate_fake_constraint_input(self): + self.fake_filter_properties = { + 'instance_uuids': ['fake_uuid_%s' % x for x in range(3)], + 'num_instances': 3} + host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', {}) + host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', {}) + self.fake_hosts = [host1, host2] + + @mock.patch('nova.scheduler.solvers.constraints.' + 'trusted_hosts_constraint.TrustedHostsConstraint.' 
+ 'host_filter_cls') + def test_get_constraint_matrix(self, mock_filter_cls): + expected_cons_mat = [ + [True, True, True], + [False, False, False]] + mock_filter = mock_filter_cls.return_value + mock_filter.host_passes.side_effect = [True, False] + cons_mat = self.constraint_cls().get_constraint_matrix( + self.fake_hosts, self.fake_filter_properties) + self.assertEqual(expected_cons_mat, cons_mat) diff --git a/nova/tests/scheduler/solvers/constraints/test_type_affinity_constraint.py b/nova/tests/scheduler/solvers/constraints/test_type_affinity_constraint.py new file mode 100644 index 0000000..35b564f --- /dev/null +++ b/nova/tests/scheduler/solvers/constraints/test_type_affinity_constraint.py @@ -0,0 +1,49 @@ +# Copyright (c) 2014 Cisco Systems, Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+ +import mock + +from nova.scheduler.solvers.constraints import type_affinity_constraint +from nova import test +from nova.tests.scheduler import solver_scheduler_fakes as fakes + + +class TestTypeAffinityConstraint(test.NoDBTestCase): + + def setUp(self): + super(TestTypeAffinityConstraint, self).setUp() + self.constraint_cls = type_affinity_constraint.TypeAffinityConstraint + self._generate_fake_constraint_input() + + def _generate_fake_constraint_input(self): + self.fake_filter_properties = { + 'instance_uuids': ['fake_uuid_%s' % x for x in range(3)], + 'num_instances': 3} + host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1', {}) + host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1', {}) + self.fake_hosts = [host1, host2] + + @mock.patch('nova.scheduler.solvers.constraints.' + 'type_affinity_constraint.TypeAffinityConstraint.' + 'host_filter_cls') + def test_get_constraint_matrix(self, mock_filter_cls): + expected_cons_mat = [ + [True, True, True], + [False, False, False]] + mock_filter = mock_filter_cls.return_value + mock_filter.host_passes.side_effect = [True, False] + cons_mat = self.constraint_cls().get_constraint_matrix( + self.fake_hosts, self.fake_filter_properties) + self.assertEqual(expected_cons_mat, cons_mat) diff --git a/nova/tests/scheduler/solvers/constraints/test_vcpu_constraint.py b/nova/tests/scheduler/solvers/constraints/test_vcpu_constraint.py new file mode 100644 index 0000000..939a0a5 --- /dev/null +++ b/nova/tests/scheduler/solvers/constraints/test_vcpu_constraint.py @@ -0,0 +1,60 @@ +# Copyright (c) 2014 Cisco Systems, Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. 
You may obtain
+# a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from nova.scheduler.solvers.constraints import vcpu_constraint
+from nova import test
+from nova.tests.scheduler import solver_scheduler_fakes as fakes
+
+
+class TestVcpuConstraint(test.NoDBTestCase):
+
+    def setUp(self):
+        super(TestVcpuConstraint, self).setUp()
+        self.constraint_cls = vcpu_constraint.VcpuConstraint
+        self._generate_fake_constraint_input()
+
+    def _generate_fake_constraint_input(self):
+        self.fake_filter_properties = {
+                'instance_type': {'vcpus': 2},
+                'instance_uuids': ['fake_uuid_%s' % x for x in range(2)],
+                'num_instances': 2}
+        host1 = fakes.FakeSolverSchedulerHostState('host1', 'node1',
+                {'vcpus_total': 4, 'vcpus_used': 4})
+        host2 = fakes.FakeSolverSchedulerHostState('host2', 'node1',
+                {'vcpus_total': 8, 'vcpus_used': 2})
+        host3 = fakes.FakeSolverSchedulerHostState('host3', 'node1', {})
+        self.fake_hosts = [host1, host2, host3]
+
+    def test_get_constraint_matrix(self):
+        self.flags(cpu_allocation_ratio=1.0)
+        expected_cons_mat = [
+                [False, False],
+                [True, True],
+                [True, True]]
+        cons_mat = self.constraint_cls().get_constraint_matrix(
+                self.fake_hosts, self.fake_filter_properties)
+        self.assertEqual(expected_cons_mat, cons_mat)
+
+    def test_get_constraint_matrix_oversubscribe(self):
+        self.flags(cpu_allocation_ratio=2.0)
+        expected_cons_mat = [
+                [True, True],
+                [True, True],
+                [True, True]]
+        cons_mat = self.constraint_cls().get_constraint_matrix(
+                self.fake_hosts, self.fake_filter_properties)
+        self.assertEqual(expected_cons_mat, cons_mat)
+        self.assertEqual(4 * 2.0, self.fake_hosts[0].limits['vcpu'])
+        self.assertEqual(8 * 2.0,
self.fake_hosts[1].limits['vcpu']) diff --git a/nova/tests/scheduler/solvers/costs/__init__.py b/nova/tests/scheduler/solvers/costs/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/nova/tests/scheduler/solvers/costs/test_costs.py b/nova/tests/scheduler/solvers/costs/test_costs.py new file mode 100644 index 0000000..28dd175 --- /dev/null +++ b/nova/tests/scheduler/solvers/costs/test_costs.py @@ -0,0 +1,48 @@ +# Copyright (c) 2014 Cisco Systems, Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +""" +Tests for solver scheduler costs. 
+""" + +from nova import context +from nova.scheduler.solvers import costs +from nova import test + + +class CostTestBase(test.NoDBTestCase): + """Base test case for costs.""" + def setUp(self): + super(CostTestBase, self).setUp() + self.context = context.RequestContext('fake', 'fake') + cost_handler = costs.CostHandler() + classes = cost_handler.get_matching_classes( + ['nova.scheduler.solvers.costs.all_costs']) + self.class_map = {} + for c in classes: + self.class_map[c.__name__] = c + + +class CostsTestCase(CostTestBase): + def test_all_costs(self): + """Test the existence of all cost classes.""" + self.assertIn('RamCost', self.class_map) + self.assertIn('MetricsCost', self.class_map) + + def test_base_linear_costs(self): + blc = costs.BaseLinearCost() + variables, coefficients = blc.get_components(None, None, None) + self.assertEqual([], variables) + self.assertEqual([], coefficients) diff --git a/nova/tests/scheduler/solvers/costs/test_metrics_cost.py b/nova/tests/scheduler/solvers/costs/test_metrics_cost.py new file mode 100644 index 0000000..fb91265 --- /dev/null +++ b/nova/tests/scheduler/solvers/costs/test_metrics_cost.py @@ -0,0 +1,272 @@ +# Copyright (c) 2014 Cisco Systems, Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+ +"""Test case for solver scheduler RAM cost.""" + +from nova import context +from nova.openstack.common.fixture import mockpatch +from nova.scheduler.solvers import costs +from nova.scheduler.solvers.costs import ram_cost +from nova import test +from nova.tests.scheduler import solver_scheduler_fakes as fakes + + +class TestMetricsCost(test.NoDBTestCase): + def setUp(self): + super(TestMetricsCost, self).setUp() + self.context = context.RequestContext('fake_usr', 'fake_proj') + self.useFixture(mockpatch.Patch('nova.db.compute_node_get_all', + return_value=fakes.COMPUTE_NODES_METRICS)) + self.host_manager = fakes.FakeSolverSchedulerHostManager() + self.cost_handler = costs.CostHandler() + self.cost_classes = self.cost_handler.get_matching_classes( + ['nova.scheduler.solvers.costs.metrics_cost.MetricsCost']) + + def _get_all_hosts(self): + ctxt = context.get_admin_context() + return self.host_manager.get_all_host_states(ctxt) + + def _get_fake_cost_inputs(self): + fake_hosts = self._get_all_hosts() + # FIXME: ideally should mock get_hosts_stripping_forced_and_ignored() + fake_hosts = list(fake_hosts) + # the hosts order may be arbitrary, here we manually order them + # which is for convenience of testings + tmp = [] + for i in range(len(fake_hosts)): + for fh in fake_hosts: + if fh.host == 'host%s' % (i + 1): + tmp.append(fh) + fake_hosts = tmp + fake_filter_properties = { + 'context': self.context.elevated(), + 'num_instances': 3, + 'instance_uuids': ['fake_uuid_%s' % x for x in range(3)]} + return (fake_hosts, fake_filter_properties) + + def test_metrics_cost_multiplier_1(self): + self.flags(ram_cost_multiplier=0.5, group='solver_scheduler') + self.assertEqual(0.5, ram_cost.RamCost().cost_multiplier()) + + def test_metrics_cost_multiplier_2(self): + self.flags(ram_cost_multiplier=(-2), group='solver_scheduler') + self.assertEqual((-2), ram_cost.RamCost().cost_multiplier()) + + def test_get_extended_cost_matrix_single_resource(self): + # host1: foo=512 + # host2: 
foo=1024 + # host3: foo=3072 + # host4: foo=8192 + setting = ['foo=1'] + self.flags(weight_setting=setting, group='metrics') + + fake_hosts, fake_filter_properties = self._get_fake_cost_inputs() + fake_hosts = fake_hosts[0:4] + + fake_cost = self.cost_classes[0]() + + expected_x_cost_mat = [ + [-0.0625, -0.0625, -0.0625, -0.0625], + [-0.125, -0.125, -0.125, -0.125], + [-0.375, -0.375, -0.375, -0.375], + [-1.0, -1.0, -1.0, -1.0]] + + x_cost_mat = fake_cost.get_extended_cost_matrix(fake_hosts, + fake_filter_properties) + expected_x_cost_mat = [[round(val, 4) for val in row] + for row in expected_x_cost_mat] + x_cost_mat = [[round(val, 4) for val in row] for row in x_cost_mat] + self.assertEqual(expected_x_cost_mat, x_cost_mat) + + def test_get_extended_cost_matrix_multiple_resource(self): + # host1: foo=512, bar=1 + # host2: foo=1024, bar=2 + # host3: foo=3072, bar=1 + # host4: foo=8192, bar=0 + setting = ['foo=0.0001', 'bar=1'] + self.flags(weight_setting=setting, group='metrics') + + fake_hosts, fake_filter_properties = self._get_fake_cost_inputs() + fake_hosts = fake_hosts[0:4] + + fake_cost = self.cost_classes[0]() + + expected_x_cost_mat = [ + [-0.5, -0.5, -0.5, -0.5], + [-1.0, -1.0, -1.0, -1.0], + [-0.6218, -0.6218, -0.6218, -0.6218], + [-0.3896, -0.3896, -0.3896, -0.3896]] + + x_cost_mat = fake_cost.get_extended_cost_matrix(fake_hosts, + fake_filter_properties) + expected_x_cost_mat = [[round(val, 4) for val in row] + for row in expected_x_cost_mat] + x_cost_mat = [[round(val, 4) for val in row] for row in x_cost_mat] + self.assertEqual(expected_x_cost_mat, x_cost_mat) + + def test_get_extended_cost_matrix_single_resource_negative_ratio(self): + # host1: foo=512 + # host2: foo=1024 + # host3: foo=3072 + # host4: foo=8192 + setting = ['foo=-1'] + self.flags(weight_setting=setting, group='metrics') + + fake_hosts, fake_filter_properties = self._get_fake_cost_inputs() + fake_hosts = fake_hosts[0:4] + + fake_cost = self.cost_classes[0]() + + expected_x_cost_mat = 
[ + [0.0625, 0.0625, 0.0625, 0.0625], + [0.125, 0.125, 0.125, 0.125], + [0.375, 0.375, 0.375, 0.375], + [1.0, 1.0, 1.0, 1.0]] + + x_cost_mat = fake_cost.get_extended_cost_matrix(fake_hosts, + fake_filter_properties) + expected_x_cost_mat = [[round(val, 4) for val in row] + for row in expected_x_cost_mat] + x_cost_mat = [[round(val, 4) for val in row] for row in x_cost_mat] + self.assertEqual(expected_x_cost_mat, x_cost_mat) + + def test_get_extended_cost_matrix_multiple_resource_missing_ratio(self): + # host1: foo=512, bar=1 + # host2: foo=1024, bar=2 + # host3: foo=3072, bar=1 + # host4: foo=8192, bar=0 + setting = ['foo=0.0001', 'bar'] + self.flags(weight_setting=setting, group='metrics') + + fake_hosts, fake_filter_properties = self._get_fake_cost_inputs() + fake_hosts = fake_hosts[0:4] + + fake_cost = self.cost_classes[0]() + + expected_x_cost_mat = [ + [-0.0625, -0.0625, -0.0625, -0.0625], + [-0.125, -0.125, -0.125, -0.125], + [-0.375, -0.375, -0.375, -0.375], + [-1.0, -1.0, -1.0, -1.0]] + + x_cost_mat = fake_cost.get_extended_cost_matrix(fake_hosts, + fake_filter_properties) + expected_x_cost_mat = [[round(val, 4) for val in row] + for row in expected_x_cost_mat] + x_cost_mat = [[round(val, 4) for val in row] for row in x_cost_mat] + self.assertEqual(expected_x_cost_mat, x_cost_mat) + + def test_get_extended_cost_matrix_multiple_resource_wrong_ratio(self): + # host1: foo=512, bar=1 + # host2: foo=1024, bar=2 + # host3: foo=3072, bar=1 + # host4: foo=8192, bar=0 + setting = ['foo=0.0001', 'bar = 2.0t'] + self.flags(weight_setting=setting, group='metrics') + + fake_hosts, fake_filter_properties = self._get_fake_cost_inputs() + fake_hosts = fake_hosts[0:4] + + fake_cost = self.cost_classes[0]() + + expected_x_cost_mat = [ + [-0.0625, -0.0625, -0.0625, -0.0625], + [-0.125, -0.125, -0.125, -0.125], + [-0.375, -0.375, -0.375, -0.375], + [-1.0, -1.0, -1.0, -1.0]] + + x_cost_mat = fake_cost.get_extended_cost_matrix(fake_hosts, + fake_filter_properties) + 
expected_x_cost_mat = [[round(val, 4) for val in row] + for row in expected_x_cost_mat] + x_cost_mat = [[round(val, 4) for val in row] for row in x_cost_mat] + self.assertEqual(expected_x_cost_mat, x_cost_mat) + + def test_get_extended_cost_matrix_metric_not_found(self): + # host3: foo=3072, bar=1 + # host4: foo=8192, bar=0 + # host5: foo=768, bar=0, zot=1 + # host6: foo=2048, bar=0, zot=2 + setting = ['foo=0.0001', 'zot=-2'] + self.flags(weight_setting=setting, group='metrics') + + fake_hosts, fake_filter_properties = self._get_fake_cost_inputs() + fake_hosts = fake_hosts[2:6] + + fake_cost = self.cost_classes[0]() + + expected_x_cost_mat = [ + [1.0, 1.0, 1.0, 1.0], + [1.0, 1.0, 1.0, 1.0], + [0.3394, 0.3394, 0.3394, 0.3394], + [0.6697, 0.6697, 0.6697, 0.6697]] + + x_cost_mat = fake_cost.get_extended_cost_matrix(fake_hosts, + fake_filter_properties) + expected_x_cost_mat = [[round(val, 4) for val in row] + for row in expected_x_cost_mat] + x_cost_mat = [[round(val, 4) for val in row] for row in x_cost_mat] + self.assertEqual(expected_x_cost_mat, x_cost_mat) + + def test_get_extended_cost_matrix_metric_not_found_in_any(self): + # host3: foo=3072, bar=1 + # host4: foo=8192, bar=0 + # host5: foo=768, bar=0, zot=1 + # host6: foo=2048, bar=0, zot=2 + setting = ['foo=0.0001', 'non_exist_met=2'] + self.flags(weight_setting=setting, group='metrics') + + fake_hosts, fake_filter_properties = self._get_fake_cost_inputs() + fake_hosts = fake_hosts[2:6] + + fake_cost = self.cost_classes[0]() + + expected_x_cost_mat = [ + [0, 0, 0, 0], + [0, 0, 0, 0], + [0, 0, 0, 0], + [0, 0, 0, 0]] + + x_cost_mat = fake_cost.get_extended_cost_matrix(fake_hosts, + fake_filter_properties) + expected_x_cost_mat = [[round(val, 4) for val in row] + for row in expected_x_cost_mat] + x_cost_mat = [[round(val, 4) for val in row] for row in x_cost_mat] + self.assertEqual(expected_x_cost_mat, x_cost_mat) + + def _check_parsing_result(self, cost, setting, results): + self.flags(weight_setting=setting, 
group='metrics') + cost._parse_setting() + self.assertTrue(len(results) == len(cost.setting)) + for item in results: + self.assertTrue(item in cost.setting) + + def test_metrics_cost_parse_setting(self): + fake_cost = self.cost_classes[0]() + self._check_parsing_result(fake_cost, + ['foo=1'], + [('foo', 1.0)]) + self._check_parsing_result(fake_cost, + ['foo=1', 'bar=-2.1'], + [('foo', 1.0), ('bar', -2.1)]) + self._check_parsing_result(fake_cost, + ['foo=a1', 'bar=-2.1'], + [('bar', -2.1)]) + self._check_parsing_result(fake_cost, + ['foo', 'bar=-2.1'], + [('bar', -2.1)]) + self._check_parsing_result(fake_cost, + ['=5', 'bar=-2.1'], + [('bar', -2.1)]) diff --git a/nova/tests/scheduler/solvers/costs/test_ram_cost.py b/nova/tests/scheduler/solvers/costs/test_ram_cost.py new file mode 100644 index 0000000..75ca8bb --- /dev/null +++ b/nova/tests/scheduler/solvers/costs/test_ram_cost.py @@ -0,0 +1,82 @@ +# Copyright (c) 2014 Cisco Systems, Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+ +"""Test case for solver scheduler RAM cost.""" + +from nova import context +from nova.openstack.common.fixture import mockpatch +from nova.scheduler.solvers import costs +from nova.scheduler.solvers.costs import ram_cost +from nova import test +from nova.tests.scheduler import solver_scheduler_fakes as fakes + + +class TestRamCost(test.NoDBTestCase): + def setUp(self): + super(TestRamCost, self).setUp() + self.context = context.RequestContext('fake_usr', 'fake_proj') + self.useFixture(mockpatch.Patch('nova.db.compute_node_get_all', + return_value=fakes.COMPUTE_NODES[0:5])) + self.host_manager = fakes.FakeSolverSchedulerHostManager() + self.cost_handler = costs.CostHandler() + self.cost_classes = self.cost_handler.get_matching_classes( + ['nova.scheduler.solvers.costs.ram_cost.RamCost']) + + def _get_all_hosts(self): + ctxt = context.get_admin_context() + return self.host_manager.get_all_host_states(ctxt) + + def test_ram_cost_multiplier_1(self): + self.flags(ram_cost_multiplier=0.5, group='solver_scheduler') + self.assertEqual(0.5, ram_cost.RamCost().cost_multiplier()) + + def test_ram_cost_multiplier_2(self): + self.flags(ram_cost_multiplier=(-2), group='solver_scheduler') + self.assertEqual((-2), ram_cost.RamCost().cost_multiplier()) + + def test_get_extended_cost_matrix(self): + # the host.free_ram_mb values of these fake hosts are supposed to be: + # 512, 1024, 3072, 8192 + fake_hosts = self._get_all_hosts() + # FIXME: ideally should mock get_hosts_stripping_forced_and_ignored() + fake_hosts = list(fake_hosts) + # the hosts order may be arbitrary, here we manually order them + # which is for convenience of testings + tmp = [] + for i in range(len(fake_hosts)): + for fh in fake_hosts: + if fh.host == 'host%s' % (i + 1): + tmp.append(fh) + fake_hosts = tmp + fake_filter_properties = { + 'context': self.context.elevated(), + 'num_instances': 3, + 'instance_type': {'memory_mb': 1024}, + 'instance_uuids': ['fake_uuid_%s' % x for x in range(3)]} + + fake_cost = 
self.cost_classes[0]() + + expected_x_cost_mat = [ + [0.0, 0.125, 0.25, 0.375], + [-0.125, 0.0, 0.125, 0.25], + [-0.375, -0.25, -0.125, -0.0], + [-1.0, -0.875, -0.75, -0.625]] + + x_cost_mat = fake_cost.get_extended_cost_matrix(fake_hosts, + fake_filter_properties) + expected_x_cost_mat = [[round(val, 4) for val in row] + for row in expected_x_cost_mat] + x_cost_mat = [[round(val, 4) for val in row] for row in x_cost_mat] + self.assertEqual(expected_x_cost_mat, x_cost_mat) diff --git a/nova/tests/scheduler/solvers/costs/test_utils.py b/nova/tests/scheduler/solvers/costs/test_utils.py new file mode 100644 index 0000000..b163723 --- /dev/null +++ b/nova/tests/scheduler/solvers/costs/test_utils.py @@ -0,0 +1,64 @@ +# Copyright (c) 2014 Cisco Systems, Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+ +from nova.scheduler.solvers.costs import utils +from nova import test + + +class CostUtilsTestCase(test.NoDBTestCase): + + def test_normalize_cost_matrix(self): + test_matrix = [ + [1, 2, 3, 4], + [5, 7, 9, 10], + [-2, -1, 0, 2]] + + expected_result = [ + [0.2, 0.4, 0.6, 0.8], + [1.0, 1.4, 1.8, 2.0], + [-0.4, -0.2, 0.0, 0.4]] + + result = utils.normalize_cost_matrix(test_matrix) + + round_values = lambda x: [round(v, 4) for v in x] + expected_result = map(round_values, expected_result) + result = map(round_values, result) + + self.assertEqual(expected_result, result) + + def test_normalize_cost_matrix_empty_input(self): + test_matrix = [] + expected_result = [] + result = utils.normalize_cost_matrix(test_matrix) + self.assertEqual(expected_result, result) + + def test_normalize_cost_matrix_first_column_all_zero(self): + test_matrix = [ + [0, 1, 2, 3], + [0, -1, -2, -3], + [0, 0.2, 0.4, 0.6]] + + expected_result = [ + [0, 1, 2, 3], + [0, -1, -2, -3], + [0, 0.2, 0.4, 0.6]] + + result = utils.normalize_cost_matrix(test_matrix) + + round_values = lambda x: [round(v, 4) for v in x] + expected_result = map(round_values, expected_result) + result = map(round_values, result) + + self.assertEqual(expected_result, result) diff --git a/nova/tests/scheduler/solvers/test_active_host_constraint.py b/nova/tests/scheduler/solvers/test_active_host_constraint.py deleted file mode 100644 index cee282a..0000000 --- a/nova/tests/scheduler/solvers/test_active_host_constraint.py +++ /dev/null @@ -1,117 +0,0 @@ -# Copyright (c) 2014 Cisco Systems, Inc. -# All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. 
You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. - -""" -Tests for solver scheduler Active Host linearconstraint. -""" - -import mock - -from nova import servicegroup -from nova.tests.scheduler import fakes -from nova.tests.scheduler.solvers import test_linearconstraints as lctest - - -class ActiveHostConstraintTestCase(lctest.LinearConstraintsTestBase): - """Test case for ActiveHostConstraint.""" - - def setUp(self): - super(ActiveHostConstraintTestCase, self).setUp() - - def test_get_coefficient_vectors(self): - variables = [[1, 2], - [3, 4]] - host1 = fakes.FakeHostState('host1', 'node1', - {'service': {'disabled': False}}) - host2 = fakes.FakeHostState('host2', 'node2', - {'service': {'disabled': True}}) - hosts = [host1, host2] - fake_instance1_uuid = 'fake-instance1-id' - fake_instance2_uuid = 'fake-instance2-id' - instance_uuids = [fake_instance1_uuid, fake_instance2_uuid] - request_spec = {'instance_type': 'fake_type', - 'instance_uuids': instance_uuids, - 'num_instances': 2} - filter_properties = {'context': self.context.elevated()} - constraint_cls = self.class_map['ActiveHostConstraint']( - variables, hosts, instance_uuids, request_spec, filter_properties) - - with mock.patch.object(servicegroup.API, 'service_is_up') as srv_isup: - srv_isup.return_value = True - coeff_vectors = constraint_cls.get_coefficient_vectors(variables, - hosts, instance_uuids, request_spec, filter_properties) - ref_coeff_vectors = [[0, 0], - [1, 1]] - self.assertEqual(coeff_vectors, ref_coeff_vectors) - - with mock.patch.object(servicegroup.API, 'service_is_up') as srv_isup: - srv_isup.return_value = False - 
coeff_vectors = constraint_cls.get_coefficient_vectors(variables, - hosts, instance_uuids, request_spec, filter_properties) - ref_coeff_vectors = [[1, 1], - [1, 1]] - self.assertEqual(coeff_vectors, ref_coeff_vectors) - - def test_get_variable_vectors(self): - variables = [[1, 2], - [3, 4]] - host1 = fakes.FakeHostState('host1', 'node1', - {'service': {'disabled': False}}) - host2 = fakes.FakeHostState('host2', 'node2', - {'service': {'disabled': True}}) - hosts = [host1, host2] - fake_instance1_uuid = 'fake-instance1-id' - fake_instance2_uuid = 'fake-instance2-id' - instance_uuids = [fake_instance1_uuid, fake_instance2_uuid] - request_spec = {'instance_type': 'fake_type', - 'instance_uuids': instance_uuids, - 'num_instances': 2} - filter_properties = {'context': self.context.elevated()} - constraint_cls = self.class_map['ActiveHostConstraint']( - variables, hosts, instance_uuids, request_spec, filter_properties) - - variable_vectors = constraint_cls.get_variable_vectors(variables, - hosts, instance_uuids, request_spec, filter_properties) - - ref_variable_vectors = [[1, 2], - [3, 4]] - self.assertEqual(variable_vectors, ref_variable_vectors) - - def test_get_operations(self): - variables = [[1, 2], - [3, 4]] - host1 = fakes.FakeHostState('host1', 'node1', - {'service': {'disabled': False}}) - host2 = fakes.FakeHostState('host2', 'node2', - {'service': {'disabled': True}}) - hosts = [host1, host2] - fake_instance1_uuid = 'fake-instance1-id' - fake_instance2_uuid = 'fake-instance2-id' - instance_uuids = [fake_instance1_uuid, fake_instance2_uuid] - request_spec = {'instance_type': 'fake_type', - 'instance_uuids': instance_uuids, - 'num_instances': 2} - filter_properties = {'context': self.context.elevated()} - constraint_cls = self.class_map['ActiveHostConstraint']( - variables, hosts, instance_uuids, request_spec, filter_properties) - - operations = constraint_cls.get_operations(variables, - hosts, instance_uuids, request_spec, filter_properties) - - ref_operations = 
[(lambda x: x == 0), (lambda x: x == 0)] - self.assertEqual(len(operations), len(ref_operations)) - for idx in range(len(operations)): - for n in range(4): - self.assertEqual(operations[idx](n), ref_operations[idx](n)) diff --git a/nova/tests/scheduler/solvers/test_allocation_constraints.py b/nova/tests/scheduler/solvers/test_allocation_constraints.py deleted file mode 100644 index 2699f50..0000000 --- a/nova/tests/scheduler/solvers/test_allocation_constraints.py +++ /dev/null @@ -1,210 +0,0 @@ -# Copyright (c) 2014 Cisco Systems, Inc. -# All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. - -""" -Tests for solver scheduler linearconstraints. 
-""" - -from nova.tests.scheduler import fakes -from nova.tests.scheduler.solvers import test_linearconstraints as lctests - -HOSTS = [fakes.FakeHostState('host1', 'node1', - {'free_disk_mb': 11 * 1024, 'total_usable_disk_gb': 13, - 'free_ram_mb': 1024, 'total_usable_ram_mb': 1024, - 'vcpus_total': 4, 'vcpus_used': 7, - 'service': {'disabled': False}}), - fakes.FakeHostState('host2', 'node2', - {'free_disk_mb': 1024, 'total_usable_disk_gb': 13, - 'free_ram_mb': 1023, 'total_usable_ram_mb': 1024, - 'vcpus_total': 4, 'vcpus_used': 8, - 'service': {'disabled': False}}), - ] - -INSTANCE_UUIDS = ['fake-instance1-uuid', 'fake-instance2-uuid'] - -INSTANCE_TYPES = [{'root_gb': 1, 'ephemeral_gb': 1, 'swap': 512, - 'memory_mb': 1024, 'vcpus': 1}, - ] - -REQUEST_SPECS = [{'instance_type': INSTANCE_TYPES[0], - 'instance_uuids': INSTANCE_UUIDS, - 'num_instances': 2}, ] - - -class MaxDiskAllocationConstraintTestCase(lctests.LinearConstraintsTestBase): - """Test case for MaxDiskAllocationPerHostConstraint.""" - - def setUp(self): - super(MaxDiskAllocationConstraintTestCase, self).setUp() - self.variables = [[1, 2], - [3, 4]] - self.hosts = HOSTS - self.instance_uuids = INSTANCE_UUIDS - self.instance_type = INSTANCE_TYPES[0].copy() - self.request_spec = REQUEST_SPECS[0].copy() - self.filter_properties = {'context': self.context.elevated(), - 'instance_type': self.instance_type} - - def test_get_coefficient_vectors(self): - constraint_cls = self.class_map['MaxDiskAllocationPerHostConstraint']( - self.variables, self.hosts, self.instance_uuids, - self.request_spec, self.filter_properties) - - coeff_vectors = constraint_cls.get_coefficient_vectors(self.variables, - self.hosts, self.instance_uuids, self.request_spec, - self.filter_properties) - - ref_coeff_vectors = [[2560, 2560, -11264], - [2560, 2560, -1024]] - self.assertEqual(ref_coeff_vectors, coeff_vectors) - - def test_get_variable_vectors(self): - constraint_cls = self.class_map['MaxDiskAllocationPerHostConstraint']( - 
self.variables, self.hosts, self.instance_uuids, - self.request_spec, self.filter_properties) - - variable_vectors = constraint_cls.get_variable_vectors(self.variables, - self.hosts, self.instance_uuids, self.request_spec, - self.filter_properties) - - ref_variable_vectors = [[1, 2, 1], - [3, 4, 1]] - self.assertEqual(variable_vectors, ref_variable_vectors) - - def test_get_operations(self): - constraint_cls = self.class_map['MaxDiskAllocationPerHostConstraint']( - self.variables, self.hosts, self.instance_uuids, - self.request_spec, self.filter_properties) - - operations = constraint_cls.get_operations(self.variables, self.hosts, - self.instance_uuids, self.request_spec, self.filter_properties) - - ref_operations = [(lambda x: x <= 0), (lambda x: x <= 0)] - self.assertEqual(len(operations), len(ref_operations)) - for idx in range(len(operations)): - for n in range(4): - self.assertEqual(operations[idx](n), ref_operations[idx](n)) - - -class MaxRamAllocationConstraintTestCase(lctests.LinearConstraintsTestBase): - """Test case for MaxRamAllocationPerHostConstraint.""" - - def setUp(self): - super(MaxRamAllocationConstraintTestCase, self).setUp() - self.flags(ram_allocation_ratio=1.0) - self.variables = [[1, 2], - [3, 4]] - self.hosts = HOSTS - self.instance_uuids = INSTANCE_UUIDS - self.instance_type = INSTANCE_TYPES[0].copy() - self.request_spec = REQUEST_SPECS[0].copy() - self.filter_properties = {'context': self.context.elevated(), - 'instance_type': self.instance_type} - - def test_get_coefficient_vectors(self): - constraint_cls = self.class_map['MaxRamAllocationPerHostConstraint']( - self.variables, self.hosts, self.instance_uuids, - self.request_spec, self.filter_properties) - - coeff_vectors = constraint_cls.get_coefficient_vectors(self.variables, - self.hosts, self.instance_uuids, self.request_spec, - self.filter_properties) - - ref_coeff_vectors = [[1024, 1024, -1024], - [1024, 1024, -1023]] - self.assertEqual(ref_coeff_vectors, coeff_vectors) - - def 
test_get_variable_vectors(self): - constraint_cls = self.class_map['MaxRamAllocationPerHostConstraint']( - self.variables, self.hosts, self.instance_uuids, - self.request_spec, self.filter_properties) - - variable_vectors = constraint_cls.get_variable_vectors(self.variables, - self.hosts, self.instance_uuids, self.request_spec, - self.filter_properties) - - ref_variable_vectors = [[1, 2, 1], - [3, 4, 1]] - self.assertEqual(variable_vectors, ref_variable_vectors) - - def test_get_operations(self): - constraint_cls = self.class_map['MaxRamAllocationPerHostConstraint']( - self.variables, self.hosts, self.instance_uuids, - self.request_spec, self.filter_properties) - - operations = constraint_cls.get_operations(self.variables, self.hosts, - self.instance_uuids, self.request_spec, self.filter_properties) - - ref_operations = [(lambda x: x <= 0), (lambda x: x <= 0)] - self.assertEqual(len(operations), len(ref_operations)) - for idx in range(len(operations)): - for n in range(4): - self.assertEqual(operations[idx](n), ref_operations[idx](n)) - - -class MaxVcpuAllocationConstraintTestCase(lctests.LinearConstraintsTestBase): - """Test case for MaxVcpuAllocationPerHostConstraint.""" - - def setUp(self): - super(MaxVcpuAllocationConstraintTestCase, self).setUp() - self.flags(cpu_allocation_ratio=2) - self.variables = [[1, 2], - [3, 4]] - self.hosts = HOSTS - self.instance_uuids = INSTANCE_UUIDS - self.instance_type = INSTANCE_TYPES[0].copy() - self.request_spec = REQUEST_SPECS[0].copy() - self.filter_properties = {'context': self.context.elevated(), - 'instance_type': self.instance_type} - - def test_get_coefficient_vectors(self): - constraint_cls = self.class_map['MaxVcpuAllocationPerHostConstraint']( - self.variables, self.hosts, self.instance_uuids, - self.request_spec, self.filter_properties) - - coeff_vectors = constraint_cls.get_coefficient_vectors(self.variables, - self.hosts, self.instance_uuids, self.request_spec, - self.filter_properties) - - ref_coeff_vectors = 
[[1, 1, -1], - [1, 1, 0]] - self.assertEqual(ref_coeff_vectors, coeff_vectors) - - def test_get_variable_vectors(self): - constraint_cls = self.class_map['MaxVcpuAllocationPerHostConstraint']( - self.variables, self.hosts, self.instance_uuids, - self.request_spec, self.filter_properties) - - variable_vectors = constraint_cls.get_variable_vectors(self.variables, - self.hosts, self.instance_uuids, self.request_spec, - self.filter_properties) - - ref_variable_vectors = [[1, 2, 1], - [3, 4, 1]] - self.assertEqual(variable_vectors, ref_variable_vectors) - - def test_get_operations(self): - constraint_cls = self.class_map['MaxVcpuAllocationPerHostConstraint']( - self.variables, self.hosts, self.instance_uuids, - self.request_spec, self.filter_properties) - - operations = constraint_cls.get_operations(self.variables, self.hosts, - self.instance_uuids, self.request_spec, self.filter_properties) - - ref_operations = [(lambda x: x <= 0), (lambda x: x <= 0)] - self.assertEqual(len(operations), len(ref_operations)) - for idx in range(len(operations)): - for n in range(4): - self.assertEqual(operations[idx](n), ref_operations[idx](n)) diff --git a/nova/tests/scheduler/solvers/test_availability_zone_constraint.py b/nova/tests/scheduler/solvers/test_availability_zone_constraint.py deleted file mode 100644 index 754c36c..0000000 --- a/nova/tests/scheduler/solvers/test_availability_zone_constraint.py +++ /dev/null @@ -1,171 +0,0 @@ -# Copyright (c) 2014 Cisco Systems, Inc. -# All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the -# License for the specific language governing permissions and limitations -# under the License. - -""" -Tests for solver scheduler Availability Zone linearconstraint. -""" - -import mock - -from nova import db -from nova.tests.scheduler import fakes -from nova.tests.scheduler.solvers import test_linearconstraints as lctests - -HOSTS = [fakes.FakeHostState('host1', 'node1', - {'free_disk_mb': 11 * 1024, 'total_usable_disk_gb': 13, - 'free_ram_mb': 1024, 'total_usable_ram_mb': 1024, - 'vcpus_total': 4, 'vcpus_used': 7, - 'service': {'disabled': False}}), - fakes.FakeHostState('host2', 'node2', - {'free_disk_mb': 1024, 'total_usable_disk_gb': 13, - 'free_ram_mb': 1023, 'total_usable_ram_mb': 1024, - 'vcpus_total': 4, 'vcpus_used': 8, - 'service': {'disabled': False}}), - ] - -INSTANCE_UUIDS = ['fake-instance1-uuid', 'fake-instance2-uuid'] - -INSTANCE_TYPES = [{'root_gb': 1, 'ephemeral_gb': 1, 'swap': 512, - 'memory_mb': 1024, 'vcpus': 1}, - ] - -REQUEST_SPECS = [{'instance_type': INSTANCE_TYPES[0], - 'instance_uuids': INSTANCE_UUIDS, - 'instance_properties': {'availability_zone': 'zone1'}, - 'num_instances': 2}, - {'instance_type': INSTANCE_TYPES[0], - 'instance_uuids': INSTANCE_UUIDS, - 'instance_properties': {'availability_zone': 'default'}, - 'num_instances': 2}, - {'instance_type': INSTANCE_TYPES[0], - 'instance_uuids': INSTANCE_UUIDS, - 'instance_properties': {'availability_zone': 'unknown'}, - 'num_instances': 2}, - {'instance_type': INSTANCE_TYPES[0], - 'instance_uuids': INSTANCE_UUIDS, - 'num_instances': 2}, - ] - - -class AvailabilityZoneConstraintTestCase(lctests.LinearConstraintsTestBase): - """Test case for AvailabilityZoneConstraint.""" - - def setUp(self): - super(AvailabilityZoneConstraintTestCase, self).setUp() - self.variables = [[1, 2], - [3, 4]] - self.hosts = HOSTS - self.instance_uuids = INSTANCE_UUIDS - self.instance_type = INSTANCE_TYPES[0].copy() - self.request_spec = REQUEST_SPECS[0].copy() - self.filter_properties = {'context': 
self.context.elevated(), - 'instance_type': self.instance_type} - - def test_get_coefficient_vectors_in_metadata(self): - - def dbget_side_effect(*args, **kwargs): - if args[1] == 'host1': - return {'availability_zone': 'zone1'} - else: - return {} - - constraint_cls = self.class_map['AvailabilityZoneConstraint']( - self.variables, self.hosts, self.instance_uuids, - self.request_spec, self.filter_properties) - - with mock.patch.object(db, 'aggregate_metadata_get_by_host') as dbget: - dbget.side_effect = dbget_side_effect - coeff_vectors = constraint_cls.get_coefficient_vectors( - self.variables, self.hosts, - self.instance_uuids, self.request_spec, - self.filter_properties) - ref_coeff_vectors = [[0, 0], - [1, 1]] - self.assertEqual(ref_coeff_vectors, coeff_vectors) - - def test_get_coefficient_vectors_in_default_zone(self): - self.request_spec = REQUEST_SPECS[1].copy() - self.flags(default_availability_zone='default') - constraint_cls = self.class_map['AvailabilityZoneConstraint']( - self.variables, self.hosts, self.instance_uuids, - self.request_spec, self.filter_properties) - - with mock.patch.object(db, 'aggregate_metadata_get_by_host') as dbget: - dbget.return_value = {} - coeff_vectors = constraint_cls.get_coefficient_vectors( - self.variables, self.hosts, - self.instance_uuids, self.request_spec, - self.filter_properties) - ref_coeff_vectors = [[0, 0], - [0, 0]] - self.assertEqual(ref_coeff_vectors, coeff_vectors) - - def test_get_coefficient_vectors_in_non_default_zone(self): - self.request_spec = REQUEST_SPECS[2].copy() - self.flags(default_availability_zone='default') - constraint_cls = self.class_map['AvailabilityZoneConstraint']( - self.variables, self.hosts, self.instance_uuids, - self.request_spec, self.filter_properties) - - with mock.patch.object(db, 'aggregate_metadata_get_by_host') as dbget: - dbget.return_value = {} - coeff_vectors = constraint_cls.get_coefficient_vectors( - self.variables, self.hosts, - self.instance_uuids, self.request_spec, - 
self.filter_properties) - ref_coeff_vectors = [[1, 1], - [1, 1]] - self.assertEqual(ref_coeff_vectors, coeff_vectors) - - def test_get_coefficient_vectors_missing_zone(self): - self.request_spec = REQUEST_SPECS[3].copy() - constraint_cls = self.class_map['AvailabilityZoneConstraint']( - self.variables, self.hosts, self.instance_uuids, - self.request_spec, self.filter_properties) - - coeff_vectors = constraint_cls.get_coefficient_vectors(self.variables, - self.hosts, self.instance_uuids, - self.request_spec, - self.filter_properties) - ref_coeff_vectors = [[0, 0], - [0, 0]] - self.assertEqual(ref_coeff_vectors, coeff_vectors) - - def test_get_variable_vectors(self): - constraint_cls = self.class_map['AvailabilityZoneConstraint']( - self.variables, self.hosts, self.instance_uuids, - self.request_spec, self.filter_properties) - - variable_vectors = constraint_cls.get_variable_vectors(self.variables, - self.hosts, self.instance_uuids, self.request_spec, - self.filter_properties) - - ref_variable_vectors = [[1, 2], - [3, 4]] - self.assertEqual(variable_vectors, ref_variable_vectors) - - def test_get_operations(self): - constraint_cls = self.class_map['AvailabilityZoneConstraint']( - self.variables, self.hosts, self.instance_uuids, - self.request_spec, self.filter_properties) - - operations = constraint_cls.get_operations(self.variables, self.hosts, - self.instance_uuids, self.request_spec, self.filter_properties) - - ref_operations = [(lambda x: x == 0), (lambda x: x == 0)] - self.assertEqual(len(operations), len(ref_operations)) - for idx in range(len(operations)): - for n in range(4): - self.assertEqual(operations[idx](n), ref_operations[idx](n)) diff --git a/nova/tests/scheduler/solvers/test_costs.py b/nova/tests/scheduler/solvers/test_costs.py deleted file mode 100644 index 0aef89a..0000000 --- a/nova/tests/scheduler/solvers/test_costs.py +++ /dev/null @@ -1,138 +0,0 @@ -# Copyright (c) 2014 Cisco Systems, Inc. -# All Rights Reserved. 
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""
-Tests for solver scheduler costs.
-"""
-
-from nova import context
-from nova.scheduler.solvers import costs
-from nova import test
-from nova.tests.scheduler import fakes
-
-
-class CostsTestBase(test.TestCase):
-    """Base test case for costs."""
-    def setUp(self):
-        super(CostsTestBase, self).setUp()
-        self.context = context.RequestContext('fake', 'fake')
-        cost_handler = costs.CostHandler()
-        classes = cost_handler.get_matching_classes(
-            ['nova.scheduler.solvers.costs.all_costs'])
-        self.class_map = {}
-        for cls in classes:
-            self.class_map[cls.__name__] = cls
-
-
-class AllCostsTestCase(CostsTestBase):
-    """Test case for existence of all cost classes."""
-    def test_all_costs(self):
-        self.assertIn('RamCost', self.class_map)
-
-
-class BaseCostTestCase(CostsTestBase):
-    """Test case for BaseCost."""
-    def test_normalize_cost_matrix_default(self):
-        cost_cls = costs.BaseCost()
-        cost_matrix = [[-1.0, 2.0], [3.0, 4.0]]
-        output = cost_cls.normalize_cost_matrix(cost_matrix)
-
-        ref_output = [[0.0, 0.6], [0.8, 1.0]]
-        self.assertEqual(output, ref_output)
-
-    def test_normalize_cost_matrix_custom_bounds(self):
-        cost_cls = costs.BaseCost()
-        cost_matrix = [[1.0, 2.0], [3.0, 4.0]]
-        output = cost_cls.normalize_cost_matrix(cost_matrix, -1.0, 2.0)
-
-        ref_output = [[-1.0, 0.0], [1.0, 2.0]]
-        self.assertEqual(output, ref_output)
-
-
-class RamCostTestCase(CostsTestBase):
-    """Test case for RamCost."""
-    def test_get_cost_matrix_single(self):
-        self.flags(ram_weight_multiplier=-1.0)
-        cost_cls = self.class_map['RamCost']()
-        host1 = fakes.FakeHostState('host1', 'node1', {})
-        hosts = [host1]
-        fake_instance_uuid = 'fake-instance-id'
-        instance_uuids = [fake_instance_uuid]
-        request_spec = {'instance_type': 'fake_type',
-                        'instance_uuids': instance_uuids}
-        filter_properties = {'context': self.context.elevated()}
-
-        cost_matrix = cost_cls.get_cost_matrix(hosts, instance_uuids,
-            request_spec, filter_properties)
-        ref_cost_matrix = [[host1.free_ram_mb * (-1.0)]]
-        self.assertEqual(cost_matrix, ref_cost_matrix)
-
-    def test_get_cost_matrix_multi(self):
-        self.flags(ram_weight_multiplier=1.0)
-        cost_cls = self.class_map['RamCost']()
-        host1 = fakes.FakeHostState('host1', 'node1', {})
-        host2 = fakes.FakeHostState('host2', 'node2', {})
-        hosts = [host1, host2]
-        fake_instance1_uuid = 'fake-instance1-id'
-        fake_instance2_uuid = 'fake-instance2-id'
-        instance_uuids = [fake_instance1_uuid, fake_instance2_uuid]
-        request_spec = {'instance_type': 'fake_type',
-                        'instance_uuids': instance_uuids,
-                        'num_instances': 2}
-        filter_properties = {'context': self.context.elevated()}
-
-        cost_matrix = cost_cls.get_cost_matrix(hosts, instance_uuids,
-            request_spec, filter_properties)
-
-        ref_cost_matrix = [[host1.free_ram_mb * 1.0, host1.free_ram_mb * 1.0],
-                           [host2.free_ram_mb * 1.0, host2.free_ram_mb * 1.0]]
-        self.assertEqual(cost_matrix, ref_cost_matrix)
-
-    def test_get_cost_matrix_broken_info1(self):
-        self.flags(ram_weight_multiplier=1.0)
-        cost_cls = self.class_map['RamCost']()
-        host1 = fakes.FakeHostState('host1', 'node1', {})
-        host2 = fakes.FakeHostState('host2', 'node2', {})
-        hosts = [host1, host2]
-        instance_uuids = None
-        request_spec = {'instance_type': 'fake_type',
-                        'instance_uuids': instance_uuids,
-                        'num_instances': 2}
-        filter_properties = {'context': self.context.elevated()}
-
-        cost_matrix = cost_cls.get_cost_matrix(hosts, instance_uuids,
-            request_spec, filter_properties)
-
-        ref_cost_matrix = [[host1.free_ram_mb * 1.0, host1.free_ram_mb * 1.0],
-                           [host2.free_ram_mb * 1.0, host2.free_ram_mb * 1.0]]
-        self.assertEqual(cost_matrix, ref_cost_matrix)
-
-    def test_get_cost_matrix_broken_info2(self):
-        self.flags(ram_weight_multiplier=1.0)
-        cost_cls = self.class_map['RamCost']()
-        host1 = fakes.FakeHostState('host1', 'node1', {})
-        host2 = fakes.FakeHostState('host2', 'node2', {})
-        hosts = [host1, host2]
-        instance_uuids = None
-        request_spec = {'instance_type': 'fake_type',
-                        'instance_uuids': instance_uuids}
-        filter_properties = {'context': self.context.elevated()}
-
-        cost_matrix = cost_cls.get_cost_matrix(hosts, instance_uuids,
-            request_spec, filter_properties)
-
-        ref_cost_matrix = [[host1.free_ram_mb * 1.0],
-                           [host2.free_ram_mb * 1.0]]
-        self.assertEqual(cost_matrix, ref_cost_matrix)
diff --git a/nova/tests/scheduler/solvers/test_fast_solver.py b/nova/tests/scheduler/solvers/test_fast_solver.py
new file mode 100644
index 0000000..eb20ec0
--- /dev/null
+++ b/nova/tests/scheduler/solvers/test_fast_solver.py
@@ -0,0 +1,498 @@
+# Copyright (c) 2015 Cisco Systems, Inc.
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import mock
+
+from nova.scheduler import solver_scheduler_host_manager as host_manager
+from nova.scheduler.solvers import constraints
+from nova.scheduler.solvers import costs
+from nova.scheduler.solvers import fast_solver
+from nova import test
+
+
+class FakeCost1(costs.BaseLinearCost):
+    def cost_multiplier(self):
+        return 1
+
+    def get_cost_matrix(self, hosts, filter_properties):
+        num_hosts = len(hosts)
+        num_instances = filter_properties.get('num_instances', 1)
+        cost_matrix = [[i for j in xrange(num_instances)]
+                       for i in xrange(num_hosts)]
+        return cost_matrix
+
+
+class FakeCost2(costs.BaseLinearCost):
+    precedence = 1
+
+    def cost_multiplier(self):
+        return 2
+
+    def get_cost_matrix(self, hosts, filter_properties):
+        num_hosts = len(hosts)
+        num_instances = filter_properties.get('num_instances', 1)
+        m = filter_properties['solver_cache']['cost_matrix']
+        cost_matrix = [[-m[i][j] for j in xrange(num_instances)]
+                       for i in xrange(num_hosts)]
+        return cost_matrix
+
+
+class FakeConstraint1(constraints.BaseLinearConstraint):
+    def get_constraint_matrix(self, hosts, filter_properties):
+        num_hosts = len(hosts)
+        num_instances = filter_properties.get('num_instances', 1)
+        constraint_matrix = [[True for j in xrange(num_instances)]
+                             for i in xrange(num_hosts)]
+        constraint_matrix[0] = [False for j in xrange(num_instances)]
+        return constraint_matrix
+
+
+class FakeConstraint2(constraints.BaseLinearConstraint):
+    precedence = 1
+
+    def get_constraint_matrix(self, hosts, filter_properties):
+        num_hosts = len(hosts)
+        num_instances = filter_properties.get('num_instances', 1)
+        m = filter_properties['solver_cache']['constraint_matrix']
+        if m[0][0] is False:
+            constraint_matrix = [[True for j in xrange(num_instances)]
+                                 for i in xrange(num_hosts)]
+            constraint_matrix[-1] = [False for j in xrange(num_instances)]
+        else:
+            constraint_matrix = [[False for j in xrange(num_instances)]
+                                 for i in xrange(num_hosts)]
+        return constraint_matrix
+
+
+class TestGetMatrix(test.NoDBTestCase):
+
+    def setUp(self):
+        super(TestGetMatrix, self).setUp()
+        self.fast_solver = fast_solver.FastSolver()
+        self.fake_hosts = [host_manager.SolverSchedulerHostState(
+            'fake_host%s' % x, 'fake-node') for x in xrange(1, 5)]
+        self.fake_hosts += [host_manager.SolverSchedulerHostState(
+            'fake_multihost', 'fake-node%s' % x) for x in xrange(1, 5)]
+
+    def test_get_cost_matrix_one_cost(self):
+        self.fast_solver.cost_classes = [FakeCost1]
+        num_hosts = 4
+        num_insts = 3
+        hosts = self.fake_hosts[0:num_hosts]
+        filter_properties = {
+            'num_instances': num_insts,
+            'instance_uuids': ['fake_uuid_%s' % x for x in xrange(num_insts)],
+            'solver_cache': {}
+        }
+        expected_cost_mat = [
+            [0, 0, 0],
+            [1, 1, 1],
+            [2, 2, 2],
+            [3, 3, 3]]
+        cost_mat = self.fast_solver._get_cost_matrix(hosts, filter_properties)
+        self.assertEqual(expected_cost_mat, cost_mat)
+        cost_mat_cache = filter_properties['solver_cache']['cost_matrix']
+        self.assertEqual(expected_cost_mat, cost_mat_cache)
+
+    def test_get_cost_matrix_multi_cost(self):
+        self.fast_solver.cost_classes = [FakeCost1, FakeCost2]
+        num_hosts = 4
+        num_insts = 3
+        hosts = self.fake_hosts[0:num_hosts]
+        filter_properties = {
+            'num_instances': num_insts,
+            'instance_uuids': ['fake_uuid_%s' % x for x in xrange(num_insts)],
+            'solver_cache': {}
+        }
+        expected_cost_mat = [
+            [-0, -0, -0],
+            [-1, -1, -1],
+            [-2, -2, -2],
+            [-3, -3, -3]]
+        cost_mat = self.fast_solver._get_cost_matrix(hosts, filter_properties)
+        self.assertEqual(expected_cost_mat, cost_mat)
+        cost_mat_cache = filter_properties['solver_cache']['cost_matrix']
+        self.assertEqual(expected_cost_mat, cost_mat_cache)
+
+    def test_get_cost_matrix_multi_cost_change_order(self):
+        self.fast_solver.cost_classes = [FakeCost2, FakeCost1]
+        num_hosts = 4
+        num_insts = 3
+        hosts = self.fake_hosts[0:num_hosts]
+        filter_properties = {
+            'num_instances': num_insts,
+            'instance_uuids': ['fake_uuid_%s' % x for x in xrange(num_insts)],
+            'solver_cache': {}
+        }
+        expected_cost_mat = [
+            [-0, -0, -0],
+            [-1, -1, -1],
+            [-2, -2, -2],
+            [-3, -3, -3]]
+        cost_mat = self.fast_solver._get_cost_matrix(hosts, filter_properties)
+        self.assertEqual(expected_cost_mat, cost_mat)
+        cost_mat_cache = filter_properties['solver_cache']['cost_matrix']
+        self.assertEqual(expected_cost_mat, cost_mat_cache)
+
+    def test_get_constraint_matrix_one_constraint(self):
+        self.fast_solver.constraint_classes = [FakeConstraint1]
+        num_hosts = 4
+        num_insts = 3
+        hosts = self.fake_hosts[0:num_hosts]
+        filter_properties = {
+            'num_instances': num_insts,
+            'instance_uuids': ['fake_uuid_%s' % x for x in xrange(num_insts)],
+            'solver_cache': {}
+        }
+        expected_cons_mat = [
+            [False, False, False],
+            [True, True, True],
+            [True, True, True],
+            [True, True, True]]
+        cons_mat = self.fast_solver._get_constraint_matrix(hosts,
+                                                           filter_properties)
+        self.assertEqual(expected_cons_mat, cons_mat)
+        cons_mat_cache = filter_properties['solver_cache'][
+            'constraint_matrix']
+        self.assertEqual(expected_cons_mat, cons_mat_cache)
+
+    def test_get_constraint_matrix_multi_constraint(self):
+        self.fast_solver.constraint_classes = [FakeConstraint1,
+                                               FakeConstraint2]
+        num_hosts = 4
+        num_insts = 3
+        hosts = self.fake_hosts[0:num_hosts]
+        filter_properties = {
+            'num_instances': num_insts,
+            'instance_uuids': ['fake_uuid_%s' % x for x in xrange(num_insts)],
+            'solver_cache': {}
+        }
+        expected_cons_mat = [
+            [False, False, False],
+            [True, True, True],
+            [True, True, True],
+            [False, False, False]]
+        cons_mat = self.fast_solver._get_constraint_matrix(hosts,
+                                                           filter_properties)
+        self.assertEqual(expected_cons_mat, cons_mat)
+        cons_mat_cache = filter_properties['solver_cache'][
+            'constraint_matrix']
+        self.assertEqual(expected_cons_mat, cons_mat_cache)
+
+    def test_get_constraint_matrix_multi_constraint_change_order(self):
+        self.fast_solver.constraint_classes = [FakeConstraint2,
+                                               FakeConstraint1]
+        num_hosts = 4
+        num_insts = 3
+        hosts = self.fake_hosts[0:num_hosts]
+        filter_properties = {
+            'num_instances': num_insts,
+            'instance_uuids': ['fake_uuid_%s' % x for x in xrange(num_insts)],
+            'solver_cache': {}
+        }
+        expected_cons_mat = [
+            [False, False, False],
+            [True, True, True],
+            [True, True, True],
+            [False, False, False]]
+        cons_mat = self.fast_solver._get_constraint_matrix(hosts,
+                                                           filter_properties)
+        self.assertEqual(expected_cons_mat, cons_mat)
+        cons_mat_cache = filter_properties['solver_cache'][
+            'constraint_matrix']
+        self.assertEqual(expected_cons_mat, cons_mat_cache)
+
+
+@mock.patch.object(fast_solver.FastSolver, '_get_constraint_matrix')
+@mock.patch.object(fast_solver.FastSolver, '_get_cost_matrix')
+class TestFastSolver(test.NoDBTestCase):
+
+    def setUp(self):
+        super(TestFastSolver, self).setUp()
+        self.fast_solver = fast_solver.FastSolver()
+        self.fake_hosts = [host_manager.SolverSchedulerHostState(
+            'fake_host%s' % x, 'fake-node') for x in xrange(1, 5)]
+        self.fake_hosts += [host_manager.SolverSchedulerHostState(
+            'fake_multihost', 'fake-node%s' % x) for x in xrange(1, 5)]
+
+    def test_spreading_cost(self, get_costmat, get_consmat):
+        num_hosts = 4
+        num_insts = 3
+        hosts = self.fake_hosts[0:num_hosts]
+        filter_properties = {
+            'num_instances': num_insts,
+            'instance_uuids': ['fake_uuid_%s' % x for x in xrange(num_insts)],
+            'solver_cache': {}
+        }
+
+        # cost matrix:
+        # [[0, 1, 2],
+        #  [1, 2, 3],
+        #  [2, 3, 4],
+        #  [3, 4, 5]]
+        get_costmat.return_value = [[j + i for j in xrange(num_insts)]
+                                    for i in xrange(num_hosts)]
+        # constraint matrix:
+        # [[True, True, True],
+        #  [True, True, True],
+        #  [True, True, True],
+        #  [True, True, True]]
+        get_consmat.return_value = [[True for j in xrange(num_insts)]
+                                    for i in xrange(num_hosts)]
+
+        expected_result = [
+            (hosts[0], 'fake_uuid_0'),
+            (hosts[0], 'fake_uuid_1'),
+            (hosts[1], 'fake_uuid_2')]
+
+        result = self.fast_solver.solve(hosts, filter_properties)
+        self.assertEqual(expected_result, result)
+
+    def test_spreading_cost_with_constraint(self, get_costmat, get_consmat):
+        num_hosts = 4
+        num_insts = 3
+        hosts = self.fake_hosts[0:num_hosts]
+        filter_properties = {
+            'num_instances': num_insts,
+            'instance_uuids': ['fake_uuid_%s' % x for x in xrange(num_insts)],
+            'solver_cache': {}
+        }
+
+        # cost matrix:
+        # [[0, 1, 2],
+        #  [1, 2, 3],
+        #  [2, 3, 4],
+        #  [3, 4, 5]]
+        get_costmat.return_value = [[j + i for j in xrange(num_insts)]
+                                    for i in xrange(num_hosts)]
+        # constraint matrix:
+        # [[False, False, False],
+        #  [True, False, False],
+        #  [True, True, True],
+        #  [True, True, True]]
+        get_consmat.return_value = [
+            [False, False, False],
+            [True, False, False],
+            [True, True, True],
+            [True, True, True]]
+
+        expected_result = [
+            (hosts[1], 'fake_uuid_0'),
+            (hosts[2], 'fake_uuid_1'),
+            (hosts[2], 'fake_uuid_2')]
+
+        result = self.fast_solver.solve(hosts, filter_properties)
+        self.assertEqual(expected_result, result)
+
+    def test_stacking_cost(self, get_costmat, get_consmat):
+        num_hosts = 4
+        num_insts = 3
+        hosts = self.fake_hosts[0:num_hosts]
+        filter_properties = {
+            'num_instances': num_insts,
+            'instance_uuids': ['fake_uuid_%s' % x for x in xrange(num_insts)],
+            'solver_cache': {}
+        }
+
+        # cost matrix:
+        # [[-0, -1, -2],
+        #  [-1, -2, -3],
+        #  [-2, -3, -4],
+        #  [-3, -4, -5]]
+        get_costmat.return_value = [[-(j + i) for j in xrange(num_insts)]
+                                    for i in xrange(num_hosts)]
+        # constraint matrix:
+        # [[True, True, True],
+        #  [True, True, True],
+        #  [True, True, True],
+        #  [True, True, True]]
+        get_consmat.return_value = [[True for j in xrange(num_insts)]
+                                    for i in xrange(num_hosts)]
+
+        expected_result = [
+            (hosts[3], 'fake_uuid_0'),
+            (hosts[3], 'fake_uuid_1'),
+            (hosts[3], 'fake_uuid_2')]
+
+        result = self.fast_solver.solve(hosts, filter_properties)
+        self.assertEqual(expected_result, result)
+
+    def test_stacking_cost_with_constraint(self, get_costmat, get_consmat):
+        num_hosts = 4
+        num_insts = 3
+        hosts = self.fake_hosts[0:num_hosts]
+        filter_properties = {
+            'num_instances': num_insts,
+            'instance_uuids': ['fake_uuid_%s' % x for x in xrange(num_insts)],
+            'solver_cache': {}
+        }
+
+        # cost matrix:
+        # [[-0, -1, -2],
+        #  [-1, -2, -3],
+        #  [-2, -3, -4],
+        #  [-3, -4, -5]]
+        get_costmat.return_value = [[-(j + i) for j in xrange(num_insts)]
+                                    for i in xrange(num_hosts)]
+        # constraint matrix:
+        # [[True, True, True],
+        #  [True, True, True],
+        #  [True, True, True],
+        #  [True, False, False]]
+        get_consmat.return_value = [
+            [True, True, True],
+            [True, True, True],
+            [True, True, True],
+            [True, False, False]]
+
+        expected_result = [
+            (hosts[2], 'fake_uuid_0'),
+            (hosts[2], 'fake_uuid_1'),
+            (hosts[2], 'fake_uuid_2')]
+
+        result = self.fast_solver.solve(hosts, filter_properties)
+        self.assertEqual(expected_result, result)
+
+    def test_constant_cost(self, get_costmat, get_consmat):
+        num_hosts = 4
+        num_insts = 3
+        hosts = self.fake_hosts[0:num_hosts]
+        filter_properties = {
+            'num_instances': num_insts,
+            'instance_uuids': ['fake_uuid_%s' % x for x in xrange(num_insts)],
+            'solver_cache': {}
+        }
+
+        # cost matrix:
+        # [[0, 0, 0],
+        #  [1, 1, 1],
+        #  [2, 2, 2],
+        #  [3, 3, 3]]
+        get_costmat.return_value = [[i for j in xrange(num_insts)]
+                                    for i in xrange(num_hosts)]
+        # constraint matrix:
+        # [[True, True, True],
+        #  [True, True, True],
+        #  [True, True, True],
+        #  [True, True, True]]
+        get_consmat.return_value = [[True for j in xrange(num_insts)]
+                                    for i in xrange(num_hosts)]
+
+        expected_result = [
+            (hosts[0], 'fake_uuid_0'),
+            (hosts[0], 'fake_uuid_1'),
+            (hosts[0], 'fake_uuid_2')]
+
+        result = self.fast_solver.solve(hosts, filter_properties)
+        self.assertEqual(expected_result, result)
+
+    def test_constant_cost_with_constraint(self, get_costmat, get_consmat):
+        num_hosts = 4
+        num_insts = 3
+        hosts = self.fake_hosts[0:num_hosts]
+        filter_properties = {
+            'num_instances': num_insts,
+            'instance_uuids': ['fake_uuid_%s' % x for x in xrange(num_insts)],
+            'solver_cache': {}
+        }
+
+        # cost matrix:
+        # [[0, 0, 0],
+        #  [1, 1, 1],
+        #  [2, 2, 2],
+        #  [3, 3, 3]]
+        get_costmat.return_value = [[i for j in xrange(num_insts)]
+                                    for i in xrange(num_hosts)]
+        # constraint matrix:
+        # [[False, False, False],
+        #  [True, False, False],
+        #  [True, True, False],
+        #  [True, True, True]]
+        get_consmat.return_value = [
+            [False, False, False],
+            [True, False, False],
+            [True, True, False],
+            [True, True, True]]
+
+        expected_result = [
+            (hosts[1], 'fake_uuid_0'),
+            (hosts[2], 'fake_uuid_1'),
+            (hosts[2], 'fake_uuid_2')]
+
+        result = self.fast_solver.solve(hosts, filter_properties)
+        self.assertEqual(expected_result, result)
+
+    def test_no_cost_no_constraint(self, get_costmat, get_consmat):
+        num_hosts = 4
+        num_insts = 3
+        hosts = self.fake_hosts[0:num_hosts]
+        filter_properties = {
+            'num_instances': num_insts,
+            'instance_uuids': ['fake_uuid_%s' % x for x in xrange(num_insts)],
+            'solver_cache': {}
+        }
+
+        # cost matrix:
+        # [[0, 0, 0],
+        #  [0, 0, 0],
+        #  [0, 0, 0],
+        #  [0, 0, 0]]
+        get_costmat.return_value = [[0 for j in xrange(num_insts)]
+                                    for i in xrange(num_hosts)]
+        # constraint matrix:
+        # [[True, True, True],
+        #  [True, True, True],
+        #  [True, True, True],
+        #  [True, True, True]]
+        get_consmat.return_value = [[True for j in xrange(num_insts)]
+                                    for i in xrange(num_hosts)]
+
+        expected_result = [
+            (hosts[0], 'fake_uuid_0'),
+            (hosts[0], 'fake_uuid_1'),
+            (hosts[0], 'fake_uuid_2')]
+
+        result = self.fast_solver.solve(hosts, filter_properties)
+        self.assertEqual(expected_result, result)
+
+    def test_no_valid_solution(self, get_costmat, get_consmat):
+        num_hosts = 4
+        num_insts = 3
+        hosts = self.fake_hosts[0:num_hosts]
+        filter_properties = {
+            'num_instances': num_insts,
+            'instance_uuids': ['fake_uuid_%s' % x for x in xrange(num_insts)],
+            'solver_cache': {}
+        }
+
+        # cost matrix:
+        # [[0, 0, 0],
+        #  [1, 1, 1],
+        #  [2, 2, 2],
+        #  [3, 3, 3]]
+        get_costmat.return_value = [[i for j in xrange(num_insts)]
+                                    for i in xrange(num_hosts)]
+        # constraint matrix:
+        # [[False, False, False],
+        #  [False, False, False],
+        #  [False, False, False],
+        #  [False, False, False]]
+        get_consmat.return_value = [[False for j in xrange(num_insts)]
+                                    for i in xrange(num_hosts)]
+
+        expected_result = []
+
+        result = self.fast_solver.solve(hosts, filter_properties)
+        self.assertEqual(expected_result, result)
diff --git a/nova/tests/scheduler/solvers/test_io_ops_constraint.py b/nova/tests/scheduler/solvers/test_io_ops_constraint.py
deleted file mode 100644
index 95b5c99..0000000
--- a/nova/tests/scheduler/solvers/test_io_ops_constraint.py
+++ /dev/null
@@ -1,90 +0,0 @@
-# Copyright (c) 2014 Cisco Systems, Inc.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""
-Tests for solver scheduler IO Ops linearconstraint.
-"""
-
-from nova.tests.scheduler import fakes
-from nova.tests.scheduler.solvers import test_linearconstraints as lctests
-
-HOSTS = [fakes.FakeHostState('host1', 'node1', {'num_io_ops': 7}),
-         fakes.FakeHostState('host2', 'node2', {'num_io_ops': 8}),
-         ]
-
-INSTANCE_UUIDS = ['fake-instance1-uuid', 'fake-instance2-uuid']
-
-INSTANCE_TYPES = [{'root_gb': 1, 'ephemeral_gb': 1, 'swap': 512,
-                   'memory_mb': 1024, 'vcpus': 1},
-                  ]
-
-REQUEST_SPECS = [{'instance_type': INSTANCE_TYPES[0],
-                  'instance_uuids': INSTANCE_UUIDS,
-                  'num_instances': 2},
-                 ]
-
-
-class IoOpsConstraintTestCase(lctests.LinearConstraintsTestBase):
-    """Test case for IoOpsConstraint."""
-
-    def setUp(self):
-        super(IoOpsConstraintTestCase, self).setUp()
-        self.variables = [[1, 2],
-                          [3, 4]]
-        self.hosts = HOSTS
-        self.instance_uuids = INSTANCE_UUIDS
-        self.instance_type = INSTANCE_TYPES[0].copy()
-        self.request_spec = REQUEST_SPECS[0].copy()
-        self.filter_properties = {'context': self.context.elevated(),
-                                  'instance_type': self.instance_type}
-
-    def test_get_coefficient_vectors(self):
-        constraint_cls = self.class_map['IoOpsConstraint'](
-            self.variables, self.hosts, self.instance_uuids,
-            self.request_spec, self.filter_properties)
-
-        coeff_vectors = constraint_cls.get_coefficient_vectors(self.variables,
-            self.hosts, self.instance_uuids,
-            self.request_spec, self.filter_properties)
-        ref_coeff_vectors = [[0, 0],
-                             [1, 1]]
-        self.assertEqual(ref_coeff_vectors, coeff_vectors)
-
-    def test_get_variable_vectors(self):
-        constraint_cls = self.class_map['IoOpsConstraint'](
-            self.variables, self.hosts, self.instance_uuids,
-            self.request_spec, self.filter_properties)
-
-        variable_vectors = constraint_cls.get_variable_vectors(self.variables,
-            self.hosts, self.instance_uuids, self.request_spec,
-            self.filter_properties)
-
-        ref_variable_vectors = [[1, 2],
-                                [3, 4]]
-        self.assertEqual(variable_vectors, ref_variable_vectors)
-
-    def test_get_operations(self):
-        constraint_cls = self.class_map['IoOpsConstraint'](
-            self.variables, self.hosts, self.instance_uuids,
-            self.request_spec, self.filter_properties)
-
-        operations = constraint_cls.get_operations(self.variables, self.hosts,
-            self.instance_uuids, self.request_spec, self.filter_properties)
-
-        ref_operations = [(lambda x: x == 0), (lambda x: x == 0)]
-        self.assertEqual(len(operations), len(ref_operations))
-        for idx in range(len(operations)):
-            for n in range(4):
-                self.assertEqual(operations[idx](n), ref_operations[idx](n))
diff --git a/nova/tests/scheduler/solvers/test_linearconstraints.py b/nova/tests/scheduler/solvers/test_linearconstraints.py
deleted file mode 100644
index 37ab798..0000000
--- a/nova/tests/scheduler/solvers/test_linearconstraints.py
+++ /dev/null
@@ -1,118 +0,0 @@
-# Copyright (c) 2014 Cisco Systems, Inc.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""
-Tests for solver scheduler linearconstraints.
-"""
-
-from nova import context
-from nova.scheduler.solvers import linearconstraints
-from nova import test
-from nova.tests.scheduler import fakes
-
-
-class LinearConstraintsTestBase(test.TestCase):
-    """Base test case for linearconstraints."""
-    def setUp(self):
-        super(LinearConstraintsTestBase, self).setUp()
-        self.context = context.RequestContext('fake', 'fake')
-        linearconstraint_handler = linearconstraints.LinearConstraintHandler()
-        classes = linearconstraint_handler.get_matching_classes(
-            ['nova.scheduler.solvers.linearconstraints.'
-             'all_linear_constraints'])
-        self.class_map = {}
-        for cls in classes:
-            self.class_map[cls.__name__] = cls
-
-
-class AllConstraintsTestCase(LinearConstraintsTestBase):
-    """Test case for existence of all constraint classes."""
-    def test_all_constraints(self):
-        self.assertIn('AllHostsConstraint', self.class_map)
-
-
-class AllHostsConstraintTestCase(LinearConstraintsTestBase):
-    """Test case for AllHostsConstraint."""
-
-    def test_get_coefficient_vectors(self):
-        variables = [[1, 2],
-                     [3, 4]]
-        host1 = fakes.FakeHostState('host1', 'node1', {})
-        host2 = fakes.FakeHostState('host2', 'node2', {})
-        hosts = [host1, host2]
-        fake_instance1_uuid = 'fake-instance1-id'
-        fake_instance2_uuid = 'fake-instance2-id'
-        instance_uuids = [fake_instance1_uuid, fake_instance2_uuid]
-        request_spec = {'instance_type': 'fake_type',
-                        'instance_uuids': instance_uuids,
-                        'num_instances': 2}
-        filter_properties = {'context': self.context.elevated()}
-        constraint_cls = self.class_map['AllHostsConstraint'](
-            variables, hosts, instance_uuids, request_spec, filter_properties)
-
-        coeff_vectors = constraint_cls.get_coefficient_vectors(variables,
-            hosts, instance_uuids, request_spec, filter_properties)
-
-        ref_coeff_vectors = [[0, 0],
-                             [0, 0]]
-        self.assertEqual(coeff_vectors, ref_coeff_vectors)
-
-    def test_get_variable_vectors(self):
-        variables = [[1, 2],
-                     [3, 4]]
-        host1 = fakes.FakeHostState('host1', 'node1', {})
-        host2 = fakes.FakeHostState('host2', 'node2', {})
-        hosts = [host1, host2]
-        fake_instance1_uuid = 'fake-instance1-id'
-        fake_instance2_uuid = 'fake-instance2-id'
-        instance_uuids = [fake_instance1_uuid, fake_instance2_uuid]
-        request_spec = {'instance_type': 'fake_type',
-                        'instance_uuids': instance_uuids,
-                        'num_instances': 2}
-        filter_properties = {'context': self.context.elevated()}
-        constraint_cls = self.class_map['AllHostsConstraint'](
-            variables, hosts, instance_uuids, request_spec, filter_properties)
-
-        variable_vectors = constraint_cls.get_variable_vectors(variables,
-            hosts, instance_uuids, request_spec, filter_properties)
-
-        ref_variable_vectors = [[1, 2],
-                                [3, 4]]
-        self.assertEqual(variable_vectors, ref_variable_vectors)
-
-    def test_get_operations(self):
-        variables = [[1, 2],
-                     [3, 4]]
-        host1 = fakes.FakeHostState('host1', 'node1', {})
-        host2 = fakes.FakeHostState('host2', 'node2', {})
-        hosts = [host1, host2]
-        fake_instance1_uuid = 'fake-instance1-id'
-        fake_instance2_uuid = 'fake-instance2-id'
-        instance_uuids = [fake_instance1_uuid, fake_instance2_uuid]
-        request_spec = {'instance_type': 'fake_type',
-                        'instance_uuids': instance_uuids,
-                        'num_instances': 2}
-        filter_properties = {'context': self.context.elevated()}
-        constraint_cls = self.class_map['AllHostsConstraint'](
-            variables, hosts, instance_uuids, request_spec, filter_properties)
-
-        operations = constraint_cls.get_operations(variables,
-            hosts, instance_uuids, request_spec, filter_properties)
-
-        ref_operations = [(lambda x: x == 0), (lambda x: x == 0)]
-        self.assertEqual(len(operations), len(ref_operations))
-        for idx in range(len(operations)):
-            for n in range(4):
-                self.assertEqual(operations[idx](n), ref_operations[idx](n))
diff --git a/nova/tests/scheduler/solvers/test_max_instances_per_host_constraint.py b/nova/tests/scheduler/solvers/test_max_instances_per_host_constraint.py
deleted file mode 100644
index be0b236..0000000
--- a/nova/tests/scheduler/solvers/test_max_instances_per_host_constraint.py
+++ /dev/null
@@ -1,107 +0,0 @@
-# Copyright (c) 2014 Cisco Systems, Inc.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""
-Tests for solver scheduler Max_Instance_Per_Host linearconstraint.
-"""
-
-from nova.tests.scheduler import fakes
-from nova.tests.scheduler.solvers import test_linearconstraints as lctest
-
-
-class MaxInstancesPerHostConstraintTestCase(lctest.LinearConstraintsTestBase):
-    """Test case for MaxInstancesPerHostsConstraint."""
-
-    def setUp(self):
-        super(MaxInstancesPerHostConstraintTestCase, self).setUp()
-
-    def test_get_coefficient_vectors(self):
-        variables = [[1, 2],
-                     [3, 4]]
-        host1 = fakes.FakeHostState('host1', 'node1',
-                                    {'service': {'disabled': False}})
-        host2 = fakes.FakeHostState('host2', 'node2',
-                                    {'service': {'disabled': True}})
-        hosts = [host1, host2]
-        fake_instance1_uuid = 'fake-instance1-id'
-        fake_instance2_uuid = 'fake-instance2-id'
-        instance_uuids = [fake_instance1_uuid, fake_instance2_uuid]
-        request_spec = {'instance_type': 'fake_type',
-                        'instance_uuids': instance_uuids,
-                        'num_instances': 2}
-        filter_properties = {'context': self.context.elevated(),
-                             'scheduler_hints': {'max_instances_per_host': 1}}
-        constraint_cls = self.class_map['MaxInstancesPerHostConstraint'](
-            variables, hosts, instance_uuids, request_spec, filter_properties)
-
-        coeff_vectors = constraint_cls.get_coefficient_vectors(variables,
-            hosts, instance_uuids, request_spec, filter_properties)
-        ref_coeff_vectors = [[1, 1, -1],
-                             [1, 1, -1]]
-        self.assertEqual(coeff_vectors, ref_coeff_vectors)
-
-    def test_get_variable_vectors(self):
-        variables = [[1, 2],
-                     [3, 4]]
-        host1 = fakes.FakeHostState('host1', 'node1',
-                                    {'service': {'disabled': False}})
-        host2 = fakes.FakeHostState('host2', 'node2',
-                                    {'service': {'disabled': True}})
-        hosts = [host1, host2]
-        fake_instance1_uuid = 'fake-instance1-id'
-        fake_instance2_uuid = 'fake-instance2-id'
-        instance_uuids = [fake_instance1_uuid, fake_instance2_uuid]
-        request_spec = {'instance_type': 'fake_type',
-                        'instance_uuids': instance_uuids,
-                        'num_instances': 2}
-        filter_properties = {'context': self.context.elevated(),
-                             'scheduler_hints': {'max_instances_per_host': 1}}
-        constraint_cls = self.class_map['MaxInstancesPerHostConstraint'](
-            variables, hosts, instance_uuids, request_spec, filter_properties)
-
-        variable_vectors = constraint_cls.get_variable_vectors(variables,
-            hosts, instance_uuids, request_spec, filter_properties)
-
-        ref_variable_vectors = [[1, 2, 1],
-                                [3, 4, 1]]
-        self.assertEqual(variable_vectors, ref_variable_vectors)
-
-    def test_get_operations(self):
-        variables = [[1, 2],
-                     [3, 4]]
-        host1 = fakes.FakeHostState('host1', 'node1',
-                                    {'service': {'disabled': False}})
-        host2 = fakes.FakeHostState('host2', 'node2',
-                                    {'service': {'disabled': True}})
-        hosts = [host1, host2]
-        fake_instance1_uuid = 'fake-instance1-id'
-        fake_instance2_uuid = 'fake-instance2-id'
-        instance_uuids = [fake_instance1_uuid, fake_instance2_uuid]
-        request_spec = {'instance_type': 'fake_type',
-                        'instance_uuids': instance_uuids,
-                        'num_instances': 2}
-        filter_properties = {'context': self.context.elevated(),
-                             'scheduler_hints': {'max_instances_per_host': 1}}
-        constraint_cls = self.class_map['MaxInstancesPerHostConstraint'](
-            variables, hosts, instance_uuids, request_spec, filter_properties)
-
-        operations = constraint_cls.get_operations(variables,
-            hosts, instance_uuids, request_spec, filter_properties)
-
-        ref_operations = [(lambda x: x == 0), (lambda x: x == 0)]
-        self.assertEqual(len(operations), len(ref_operations))
-        for idx in range(len(operations)):
-            for n in range(4):
-                self.assertEqual(operations[idx](n), ref_operations[idx](n))
diff --git a/nova/tests/scheduler/solvers/test_non_trivial_solution_constraint.py b/nova/tests/scheduler/solvers/test_non_trivial_solution_constraint.py
deleted file mode 100644
index 9681481..0000000
--- a/nova/tests/scheduler/solvers/test_non_trivial_solution_constraint.py
+++ /dev/null
@@ -1,103 +0,0 @@
-# Copyright (c) 2014 Cisco Systems, Inc.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""
-Tests for solver scheduler Non-Trivial-Solution linearconstraint.
-"""
-
-from nova.tests.scheduler import fakes
-from nova.tests.scheduler.solvers import test_linearconstraints as lctest
-
-
-class NonTrivialSolutionConstraintTestCase(lctest.LinearConstraintsTestBase):
-    """Test case for NonTrivialSolutionConstraint."""
-
-    def setUp(self):
-        super(NonTrivialSolutionConstraintTestCase, self).setUp()
-
-    def test_get_coefficient_vectors(self):
-        variables = [[1, 2],
-                     [3, 4]]
-        host1 = fakes.FakeHostState('host1', 'node1',
-                                    {'service': {'disabled': False}})
-        host2 = fakes.FakeHostState('host2', 'node2',
-                                    {'service': {'disabled': True}})
-        hosts = [host1, host2]
-        fake_instance1_uuid = 'fake-instance1-id'
-        fake_instance2_uuid = 'fake-instance2-id'
-        instance_uuids = [fake_instance1_uuid, fake_instance2_uuid]
-        request_spec = {'instance_type': 'fake_type',
-                        'instance_uuids': instance_uuids,
-                        'num_instances': 2}
-        filter_properties = {'context': self.context.elevated()}
-        constraint_cls = self.class_map['NonTrivialSolutionConstraint'](
-            variables, hosts, instance_uuids, request_spec, filter_properties)
-        coeff_vectors = constraint_cls.get_coefficient_vectors(variables,
-            hosts, instance_uuids, request_spec, filter_properties)
-        ref_coeff_vectors = [[1, 1, -1],
-                             [1, 1, -1]]
-        self.assertEqual(coeff_vectors, ref_coeff_vectors)
-
-    def test_get_variable_vectors(self):
-        variables = [[1, 2],
-                     [3, 4]]
-        host1 = fakes.FakeHostState('host1', 'node1',
-                                    {'service': {'disabled': False}})
-        host2 = fakes.FakeHostState('host2', 'node2',
-                                    {'service': {'disabled': True}})
-        hosts = [host1, host2]
-        fake_instance1_uuid = 'fake-instance1-id'
-        fake_instance2_uuid = 'fake-instance2-id'
-        instance_uuids = [fake_instance1_uuid, fake_instance2_uuid]
-        request_spec = {'instance_type': 'fake_type',
-                        'instance_uuids': instance_uuids,
-                        'num_instances': 2}
-        filter_properties = {'context': self.context.elevated()}
-        constraint_cls = self.class_map['NonTrivialSolutionConstraint'](
-            variables, hosts, instance_uuids, request_spec,
-            filter_properties)
-
-        variable_vectors = constraint_cls.get_variable_vectors(variables,
-            hosts, instance_uuids, request_spec, filter_properties)
-
-        ref_variable_vectors = [[1, 3, 1],
-                                [2, 4, 1]]
-        self.assertEqual(variable_vectors, ref_variable_vectors)
-
-    def test_get_operations(self):
-        variables = [[1, 2],
-                     [3, 4]]
-        host1 = fakes.FakeHostState('host1', 'node1',
-                                    {'service': {'disabled': False}})
-        host2 = fakes.FakeHostState('host2', 'node2',
-                                    {'service': {'disabled': True}})
-        hosts = [host1, host2]
-        fake_instance1_uuid = 'fake-instance1-id'
-        fake_instance2_uuid = 'fake-instance2-id'
-        instance_uuids = [fake_instance1_uuid, fake_instance2_uuid]
-        request_spec = {'instance_type': 'fake_type',
-                        'instance_uuids': instance_uuids,
-                        'num_instances': 2}
-        filter_properties = {'context': self.context.elevated()}
-        constraint_cls = self.class_map['NonTrivialSolutionConstraint'](
-            variables, hosts, instance_uuids, request_spec, filter_properties)
-
-        operations = constraint_cls.get_operations(variables,
-            hosts, instance_uuids, request_spec, filter_properties)
-
-        ref_operations = [(lambda x: x == 0), (lambda x: x == 0)]
-        self.assertEqual(len(operations), len(ref_operations))
-        for idx in range(len(operations)):
-            for n in range(4):
-                self.assertEqual(operations[idx](n), ref_operations[idx](n))
diff --git a/nova/tests/scheduler/solvers/test_pulp_solver.py b/nova/tests/scheduler/solvers/test_pulp_solver.py
new file mode 100644
index 0000000..7ba37dd
--- /dev/null
+++ b/nova/tests/scheduler/solvers/test_pulp_solver.py
@@ -0,0 +1,400 @@
+# Copyright (c) 2014 Cisco Systems Inc.
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+"""
+Tests For Pulp-Solver.
+"""
+
+import mock
+
+from nova.scheduler import solver_scheduler_host_manager as host_manager
+from nova.scheduler.solvers import constraints
+from nova.scheduler.solvers import costs
+from nova.scheduler.solvers import pulp_solver
+from nova import test
+
+
+class FakeCost1(costs.BaseLinearCost):
+    def cost_multiplier(self):
+        return 1
+
+    def get_extended_cost_matrix(self, hosts, filter_properties):
+        num_hosts = len(hosts)
+        num_instances = filter_properties.get('num_instances', 1)
+        cost_matrix = [[i for j in xrange(num_instances + 1)]
+                       for i in xrange(num_hosts)]
+        return cost_matrix
+
+
+class FakeCost2(costs.BaseLinearCost):
+    precedence = 1
+
+    def cost_multiplier(self):
+        return 2
+
+    def get_extended_cost_matrix(self, hosts, filter_properties):
+        num_hosts = len(hosts)
+        num_instances = filter_properties.get('num_instances', 1)
+        m = filter_properties['solver_cache']['cost_matrix']
+        cost_matrix = [[-m[i][j] for j in xrange(num_instances + 1)]
+                       for i in xrange(num_hosts)]
+        return cost_matrix
+
+
+class FakeConstraint1(constraints.BaseLinearConstraint):
+    def get_constraint_matrix(self, hosts, filter_properties):
+        num_hosts = len(hosts)
+        num_instances = filter_properties.get('num_instances', 1)
+        constraint_matrix = [[True for j in xrange(num_instances)]
+                             for i in xrange(num_hosts)]
+        constraint_matrix[0] = [False for j in xrange(num_instances)]
+        return constraint_matrix
+
+
+class FakeConstraint2(constraints.BaseLinearConstraint):
+    precedence = 1
+
+    def get_constraint_matrix(self, hosts, filter_properties):
+        num_hosts = len(hosts)
+        num_instances = filter_properties.get('num_instances', 1)
+        m = filter_properties['solver_cache']['constraint_matrix']
+        if m[0][1] is False:
+            constraint_matrix = [[True for j in xrange(num_instances)]
+                                 for i in xrange(num_hosts)]
+            constraint_matrix[-1] = [False for j in xrange(num_instances)]
+        else:
+            constraint_matrix = [[False for j in xrange(num_instances)]
+                                 for i in xrange(num_hosts)]
+        return constraint_matrix
+
+
+class TestGetMatrix(test.NoDBTestCase):
+
+    def setUp(self):
+        super(TestGetMatrix, self).setUp()
+        self.pulp_solver = pulp_solver.PulpSolver()
+        self.fake_hosts = [host_manager.SolverSchedulerHostState(
+                'fake_host%s' % x, 'fake-node') for x in xrange(1, 5)]
+        self.fake_hosts += [host_manager.SolverSchedulerHostState(
+                'fake_multihost', 'fake-node%s' % x) for x in xrange(1, 5)]
+
+    def test_get_cost_matrix_one_cost(self):
+        self.pulp_solver.cost_classes = [FakeCost1]
+        num_hosts = 4
+        num_insts = 3
+        hosts = self.fake_hosts[0:num_hosts]
+        filter_properties = {
+            'num_instances': num_insts,
+            'instance_uuids': ['fake_uuid_%s' % x for x in xrange(num_insts)],
+            'solver_cache': {}
+        }
+        expected_cost_mat = [
+            [0, 0, 0, 0],
+            [1, 1, 1, 1],
+            [2, 2, 2, 2],
+            [3, 3, 3, 3]]
+        cost_mat = self.pulp_solver._get_cost_matrix(hosts, filter_properties)
+        self.assertEqual(expected_cost_mat, cost_mat)
+        cost_mat_cache = filter_properties['solver_cache']['cost_matrix']
+        self.assertEqual(expected_cost_mat, cost_mat_cache)
+
+    def test_get_cost_matrix_multi_cost(self):
+        self.pulp_solver.cost_classes = [FakeCost1, FakeCost2]
+        num_hosts = 4
+        num_insts = 3
+        hosts = self.fake_hosts[0:num_hosts]
+        filter_properties = {
+            'num_instances': num_insts,
+            'instance_uuids': ['fake_uuid_%s' % x for x in xrange(num_insts)],
+            'solver_cache': {}
+        }
+        expected_cost_mat = [
+            [-0, -0, -0, -0],
+            [-1, -1, -1, -1],
+            [-2, -2, -2, -2],
+            [-3, -3, -3, -3]]
+        cost_mat = self.pulp_solver._get_cost_matrix(hosts, filter_properties)
+        self.assertEqual(expected_cost_mat, cost_mat)
+        cost_mat_cache = filter_properties['solver_cache']['cost_matrix']
+        self.assertEqual(expected_cost_mat, cost_mat_cache)
+
+    def test_get_cost_matrix_multi_cost_change_order(self):
+        self.pulp_solver.cost_classes = [FakeCost2, FakeCost1]
+        num_hosts = 4
+        num_insts = 3
+        hosts = self.fake_hosts[0:num_hosts]
+        filter_properties = {
+            'num_instances': num_insts,
+            'instance_uuids': ['fake_uuid_%s' % x for x in xrange(num_insts)],
+            'solver_cache': {}
+        }
+        expected_cost_mat = [
+            [-0, -0, -0, -0],
+            [-1, -1, -1, -1],
+            [-2, -2, -2, -2],
+            [-3, -3, -3, -3]]
+        cost_mat = self.pulp_solver._get_cost_matrix(hosts, filter_properties)
+        self.assertEqual(expected_cost_mat, cost_mat)
+        cost_mat_cache = filter_properties['solver_cache']['cost_matrix']
+        self.assertEqual(expected_cost_mat, cost_mat_cache)
+
+    def test_get_constraint_matrix_one_constraint(self):
+        self.pulp_solver.constraint_classes = [FakeConstraint1]
+        num_hosts = 4
+        num_insts = 3
+        hosts = self.fake_hosts[0:num_hosts]
+        filter_properties = {
+            'num_instances': num_insts,
+            'instance_uuids': ['fake_uuid_%s' % x for x in xrange(num_insts)],
+            'solver_cache': {}
+        }
+        expected_cons_mat = [
+            [True, False, False, False],
+            [True, True, True, True],
+            [True, True, True, True],
+            [True, True, True, True]]
+        cons_mat = self.pulp_solver._get_constraint_matrix(hosts,
+                                                           filter_properties)
+        self.assertEqual(expected_cons_mat, cons_mat)
+        cons_mat_cache = filter_properties['solver_cache'][
+            'constraint_matrix']
+        self.assertEqual(expected_cons_mat, cons_mat_cache)
+
+    def test_get_constraint_matrix_multi_constraint(self):
+        self.pulp_solver.constraint_classes = [FakeConstraint1,
+                                               FakeConstraint2]
+        num_hosts = 4
+        num_insts = 3
+        hosts = self.fake_hosts[0:num_hosts]
+        filter_properties = {
+            'num_instances': num_insts,
+            'instance_uuids': ['fake_uuid_%s' % x for x in xrange(num_insts)],
+            'solver_cache': {}
+        }
+        expected_cons_mat = [
+            [True, False, False, False],
+            [True, True, True, True],
+            [True, True, True, True],
+            [True, False, False, False]]
+        cons_mat = self.pulp_solver._get_constraint_matrix(hosts,
+                                                           filter_properties)
+        self.assertEqual(expected_cons_mat, cons_mat)
+        cons_mat_cache = filter_properties['solver_cache'][
+            'constraint_matrix']
+        self.assertEqual(expected_cons_mat, cons_mat_cache)
+
+    def test_get_constraint_matrix_multi_constraint_change_order(self):
+        self.pulp_solver.constraint_classes = [FakeConstraint2,
+                                               FakeConstraint1]
+        num_hosts = 4
+        num_insts = 3
+        hosts = self.fake_hosts[0:num_hosts]
+        filter_properties = {
+            'num_instances': num_insts,
+            'instance_uuids': ['fake_uuid_%s' % x for x in xrange(num_insts)],
+            'solver_cache': {}
+        }
+        expected_cons_mat = [
+            [True, False, False, False],
+            [True, True, True, True],
+            [True, True, True, True],
+            [True, False, False, False]]
+        cons_mat = self.pulp_solver._get_constraint_matrix(hosts,
+                                                           filter_properties)
+        self.assertEqual(expected_cons_mat, cons_mat)
+        cons_mat_cache = filter_properties['solver_cache'][
+            'constraint_matrix']
+        self.assertEqual(expected_cons_mat, cons_mat_cache)
+
+
+@mock.patch.object(pulp_solver.PulpSolver, '_get_constraint_matrix')
+@mock.patch.object(pulp_solver.PulpSolver, '_get_cost_matrix')
+class TestPulpSolver(test.NoDBTestCase):
+
+    def setUp(self):
+        super(TestPulpSolver, self).setUp()
+        self.pulp_solver = pulp_solver.PulpSolver()
+        self.fake_hosts = [host_manager.SolverSchedulerHostState(
+                'fake_host%s' % x, 'fake-node') for x in xrange(1, 5)]
+        self.fake_hosts += [host_manager.SolverSchedulerHostState(
+                'fake_multihost', 'fake-node%s' % x) for x in xrange(1, 5)]
+
+    def test_spreading_cost(self, get_costmat, get_consmat):
+        num_hosts = 4
+        num_insts = 3
+        hosts = self.fake_hosts[0:num_hosts]
+        filter_properties = {
+            'num_instances': num_insts,
+            'instance_uuids': ['fake_uuid_%s' % x for x in xrange(num_insts)],
+            'solver_cache': {}
+        }
+
+        # cost matrix:
+        # [[0, 1, 2],
+        #  [1, 2, 3],
+        #  [2, 3, 4],
+        #  [3, 4, 5]]
+        get_costmat.return_value = [[j + i for j in xrange(num_insts + 1)]
+                                    for i in xrange(num_hosts)]
+        # constraint matrix:
+        # [[True, True, True],
+        #  [True, True, True],
+        #  [True, True, True],
+        #  [True, True, True]]
+        get_consmat.return_value = [[True] + [True for j in xrange(num_insts)]
+                                    for i in xrange(num_hosts)]
+
+        expected_result = [
+            (hosts[0], 'fake_uuid_0'),
+            (hosts[0], 'fake_uuid_1'),
+            (hosts[1], 'fake_uuid_2')]
+
+        result = self.pulp_solver.solve(hosts, filter_properties)
+        self.assertEqual(expected_result, result)
+
+    def test_spreading_cost_with_constraint(self, get_costmat, get_consmat):
+        num_hosts = 4
+        num_insts = 3
+        hosts = self.fake_hosts[0:num_hosts]
+        filter_properties = {
+            'num_instances': num_insts,
+            'instance_uuids': ['fake_uuid_%s' % x for x in xrange(num_insts)],
+            'solver_cache': {}
+        }
+
+        # cost matrix:
+        # [[0, 1, 2],
+        #  [1, 2, 3],
+        #  [2, 3, 4],
+        #  [3, 4, 5]]
+        get_costmat.return_value = [[j + i for j in xrange(num_insts + 1)]
+                                    for i in xrange(num_hosts)]
+        # constraint matrix:
+        # [[False, False, False],
+        #  [True, False, False],
+        #  [True, False, False],
+        #  [True, True, True]]
+        get_consmat.return_value = [
+            [True] + [False, False, False],
+            [True] + [True, False, False],
+            [True] + [True, False, False],
+            [True] + [True, True, True]]
+
+        expected_result = [
+            (hosts[1], 'fake_uuid_0'),
+            (hosts[2], 'fake_uuid_1'),
+            (hosts[3], 'fake_uuid_2')]
+
+        result = self.pulp_solver.solve(hosts, filter_properties)
+        self.assertEqual(expected_result, result)
+
+    def test_stacking_cost(self, get_costmat, get_consmat):
+        num_hosts = 4
+        num_insts = 3
+        hosts = self.fake_hosts[0:num_hosts]
+        filter_properties = {
+            'num_instances': num_insts,
+            'instance_uuids': ['fake_uuid_%s' % x for x in xrange(num_insts)],
+            'solver_cache': {}
+        }
+
+        # cost matrix:
+        # [[-0, -1, -2],
+        #  [-1, -2, -3],
+        #  [-2, -3, -4],
+        #  [-3, -4, -5]]
+        get_costmat.return_value = [[-(j + i) for j in xrange(num_insts + 1)]
+                                    for i in xrange(num_hosts)]
+        # constraint matrix:
+        # [[True, True, True],
+        #  [True, True, True],
+        #  [True, True, True],
+        #  [True, True, True]]
+        get_consmat.return_value = [[True] + [True for j in xrange(num_insts)]
+                                    for i in xrange(num_hosts)]
+
+        expected_result = [
+            (hosts[3], 'fake_uuid_0'),
+            (hosts[3], 'fake_uuid_1'),
+            (hosts[3], 'fake_uuid_2')]
+
+        result = self.pulp_solver.solve(hosts, filter_properties)
+        self.assertEqual(expected_result, result)
+
+    def test_stacking_cost_with_constraint(self, get_costmat, get_consmat):
+        num_hosts = 4
+        num_insts = 3
+        hosts = self.fake_hosts[0:num_hosts]
+        filter_properties = {
+            'num_instances': num_insts,
+            'instance_uuids': ['fake_uuid_%s' % x for x in xrange(num_insts)],
+            'solver_cache': {}
+        }
+
+        # cost matrix:
+        # [[-0, -1, -2],
+        #  [-1, -2, -3],
+        #  [-2, -3, -4],
+        #  [-3, -4, -5]]
+        get_costmat.return_value = [[-(j + i) for j in xrange(num_insts + 1)]
+                                    for i in xrange(num_hosts)]
+        # constraint matrix:
+        # [[True, True, True],
+        #  [True, True, True],
+        #  [True, True, True],
+        #  [True, False, False]]
+        get_consmat.return_value = [
+            [True] + [True, True, True],
+            [True] + [True, True, True],
+            [True] + [True, True, True],
+            [True] + [True, False, False]]
+
+        expected_result = [
+            (hosts[2], 'fake_uuid_0'),
+            (hosts[2], 'fake_uuid_1'),
+            (hosts[2], 'fake_uuid_2')]
+
+        result = self.pulp_solver.solve(hosts, filter_properties)
+        self.assertEqual(expected_result, result)
+
+    def test_no_valid_solution(self, get_costmat, get_consmat):
+        num_hosts = 4
+        num_insts = 3
+        hosts = self.fake_hosts[0:num_hosts]
+        filter_properties = {
+            'num_instances': num_insts,
+            'instance_uuids': ['fake_uuid_%s' % x for x in xrange(num_insts)],
+            'solver_cache': {}
+        }
+
+        # cost matrix:
+        # [[0, 0, 0],
+        #  [1, 1, 1],
+        #  [2, 2, 2],
+        #  [3, 3, 3]]
+        get_costmat.return_value = [[i for j in xrange(num_insts + 1)]
+                                    for i in xrange(num_hosts)]
+        # constraint matrix:
+        # [[False, False, False],
+        #  [False, False, False],
+        #  [False, False, False],
+        #  [False, False, False]]
+        get_consmat.return_value = [[True] + [False for j in xrange(num_insts)]
+                                    for i in xrange(num_hosts)]
+
+        expected_result = []
+
+        result = self.pulp_solver.solve(hosts, filter_properties)
+        self.assertEqual(expected_result, result)
diff --git a/nova/tests/scheduler/solvers/test_solvers.py b/nova/tests/scheduler/solvers/test_solvers.py
new file mode 100644
index 0000000..aad35d5
--- /dev/null
+++ b/nova/tests/scheduler/solvers/test_solvers.py
@@ -0,0 +1,84 @@
+# Copyright (c) 2014 Cisco Systems, Inc.
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import mock
+
+from nova.scheduler import solvers
+from nova.scheduler.solvers import constraints
+from nova.scheduler.solvers import costs
+from nova import solver_scheduler_exception as exception
+from nova import test
+
+
+class FakeCost1(costs.BaseCost):
+    def get_components(self, variables, hosts, filter_properties):
+        pass
+
+
+class FakeCost2(costs.BaseCost):
+    def get_components(self, variables, hosts, filter_properties):
+        pass
+
+
+class FakeConstraint1(constraints.BaseConstraint):
+    def get_components(self, variables, hosts, filter_properties):
+        pass
+
+
+class FakeConstraint2(constraints.BaseConstraint):
+    def get_components(self, variables, hosts, filter_properties):
+        pass
+
+
+class TestBaseHostSolver(test.NoDBTestCase):
+    """Test case for scheduler base solver."""
+
+    def setUp(self):
+        super(TestBaseHostSolver, self).setUp()
+        self.solver = solvers.BaseHostSolver()
+
+    @mock.patch.object(costs.CostHandler, 'get_all_classes')
+    def test_get_cost_classes_normal(self, getcls):
+        self.flags(scheduler_solver_costs=['FakeCost1'],
+                   group='solver_scheduler')
+        getcls.return_value = [FakeCost1, FakeCost2]
+        cost_classes = self.solver._get_cost_classes()
+        self.assertIn(FakeCost1, cost_classes)
+        self.assertNotIn(FakeCost2, cost_classes)
+
+    @mock.patch.object(costs.CostHandler, 'get_all_classes')
+    def test_get_cost_classes_not_found(self, getcls):
+        self.flags(scheduler_solver_costs=['FakeUnknownCost'],
+                   group='solver_scheduler')
+        getcls.return_value = [FakeCost1, FakeCost2]
+        self.assertRaises(exception.SchedulerSolverCostNotFound,
+                          self.solver._get_cost_classes)
+
+    @mock.patch.object(constraints.ConstraintHandler, 'get_all_classes')
+    def test_get_constraint_classes_normal(self, getcls):
+        self.flags(scheduler_solver_constraints=['FakeConstraint1'],
+                   group='solver_scheduler')
+        getcls.return_value = [FakeConstraint1, FakeConstraint2]
+        constraint_classes = self.solver._get_constraint_classes()
+        self.assertIn(FakeConstraint1, constraint_classes)
+        self.assertNotIn(FakeConstraint2, constraint_classes)
+
+    @mock.patch.object(constraints.ConstraintHandler, 'get_all_classes')
+    def test_get_constraint_classes_not_found(self, getcls):
+        self.flags(scheduler_solver_constraints=['FakeUnknownConstraint'],
+                   group='solver_scheduler')
+        getcls.return_value = [FakeConstraint1, FakeConstraint2]
+        self.assertRaises(exception.SchedulerSolverConstraintNotFound,
+                          self.solver._get_constraint_classes)
diff --git a/nova/tests/scheduler/solvers/test_volume_affinity_cost.py b/nova/tests/scheduler/solvers/test_volume_affinity_cost.py
deleted file mode 100644
index 3093582..0000000
--- a/nova/tests/scheduler/solvers/test_volume_affinity_cost.py
+++ /dev/null
@@ -1,138 +0,0 @@
-# Copyright (c) 2014 Cisco Systems, Inc.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""
-Tests for Volume Affinity Cost.
-"""
-
-import mock
-
-from cinderclient import exceptions as client_exceptions
-
-from nova.tests.scheduler import fakes
-from nova.tests.scheduler.solvers import test_costs as tc
-from nova.tests.volume import test_cinder as cindertest
-import nova.volume.cinder as volumecinder
-
-
-class FakeVolume(object):
-
-    def __init__(self, id, host_id):
-        setattr(self, 'os-vol-host-attr:host', host_id)
-        self.id = id
-
-
-HOSTS = [fakes.FakeHostState('host1', 'node1', {}),
-         fakes.FakeHostState('host2', 'node2', {}),
-         ]
-
-VOLUMES = [FakeVolume('volume1', 'host1'),
-           FakeVolume('volume2', 'host2'),
-           ]
-
-INSTANCE_UUIDS = ['fake-instance-uuid', 'fake-instance2-uuid']
-
-
-class VolumeAffinityCostTestCase(tc.CostsTestBase):
-    """Test case for VolumeAffinityCost."""
-
-    def setUp(self):
-        super(VolumeAffinityCostTestCase, self).setUp()
-        self.cinderclient = cindertest.FakeCinderClient()
-
-    def test_get_cost_matrix_single(self):
-        cost_cls = self.class_map['VolumeAffinityCost']()
-        hosts = HOSTS[0:1]
-        instance_uuids = INSTANCE_UUIDS[0:1]
-        request_spec = {'instance_type': 'fake_type',
-                        'instance_uuids': instance_uuids}
-        filter_properties = {'context': self.context.elevated(),
-                             'scheduler_hints': {
-                                 'affinity_volume_id': 'volume1'}}
-        with mock.patch.object(volumecinder, 'cinderclient') as client_mock:
-            client_mock.return_value = self.cinderclient
-            with mock.patch.object(self.cinderclient.volumes,
-                                   'get') as get_mock:
-                get_mock.return_value = VOLUMES[0]
-                cost_matrix = cost_cls.get_cost_matrix(hosts, instance_uuids,
-                                                       request_spec,
-                                                       filter_properties)
-                ref_cost_matrix = [[0]]
-                self.assertEqual(cost_matrix, ref_cost_matrix)
-
-    def test_get_cost_matrix_multi(self):
-        cost_cls = self.class_map['VolumeAffinityCost']()
-        hosts = HOSTS
-        instance_uuids = INSTANCE_UUIDS
-        request_spec = {'instance_type': 'fake_type',
-                        'instance_uuids': instance_uuids,
-                        'num_instances': 2}
-        filter_properties = {'context': self.context.elevated(),
-                             'scheduler_hints': {
-                                 'affinity_volume_id': 'volume1'}}
-        context = filter_properties.get('context')
-        with mock.patch.object(volumecinder, 'cinderclient') as client_mock:
-            client_mock.return_value = self.cinderclient
-            with mock.patch.object(self.cinderclient.volumes,
-                                   'get') as get_mock:
-                get_mock.return_value = VOLUMES[0]
-                cost_matrix = cost_cls.get_cost_matrix(hosts, instance_uuids,
-                                                       request_spec,
-                                                       filter_properties)
-                ref_cost_matrix = [[0, 0],
-                                   [1, 1]]
-                self.assertEqual(cost_matrix, ref_cost_matrix)
-
-    def test_get_cost_matrix_multi_missing_hint(self):
-        cost_cls = self.class_map['VolumeAffinityCost']()
-        hosts = HOSTS
-        instance_uuids = INSTANCE_UUIDS
-        request_spec = {'instance_type': 'fake_type',
-                        'instance_uuids': instance_uuids,
-                        'num_instances': 2}
-        filter_properties = {'context': self.context.elevated()}
-        cost_matrix = cost_cls.get_cost_matrix(hosts, instance_uuids,
-                                               request_spec,
-                                               filter_properties)
-        ref_cost_matrix = [[1, 1],
-                           [1, 1]]
-        self.assertEqual(cost_matrix, ref_cost_matrix)
-
-    def test_get_cost_matrix_multi_unknown_volume_id(self):
-
-        def mock_side_effect(*args, **kwargs):
-            raise client_exceptions.NotFound(None)
-
-        cost_cls = self.class_map['VolumeAffinityCost']()
-        hosts = HOSTS
-        instance_uuids = INSTANCE_UUIDS
-        request_spec = {'instance_type': 'fake_type',
-                        'instance_uuids': instance_uuids,
-                        'num_instances': 2}
-        filter_properties = {'context': self.context.elevated(),
-                             'scheduler_hints': {
-                                 'affinity_volume_id': 'volume234'}}
-        context = filter_properties.get('context')
-        with mock.patch.object(volumecinder, 'cinderclient') as client_mock:
-            client_mock.return_value = self.cinderclient
-            with mock.patch.object(self.cinderclient.volumes,
-                                   'get') as get_mock:
-                get_mock.side_effect = mock_side_effect
-                cost_matrix = cost_cls.get_cost_matrix(hosts, instance_uuids,
-                                                       request_spec,
-                                                       filter_properties)
-                ref_cost_matrix = [[1, 1],
-                                   [1, 1]]
-                self.assertEqual(cost_matrix, ref_cost_matrix)
diff --git a/nova/tests/scheduler/test_solver_scheduler.py b/nova/tests/scheduler/test_solver_scheduler.py
index 84081ca..0fd4bd9 100644
--- a/nova/tests/scheduler/test_solver_scheduler.py
+++ b/nova/tests/scheduler/test_solver_scheduler.py
@@ -28,7 +28,8 @@ from nova.scheduler import driver
 from nova.scheduler import host_manager
 from nova.scheduler import solver_scheduler
 from nova.scheduler import weights
-from nova.tests.scheduler import fakes
+from nova import solver_scheduler_exception
+from nova.tests.scheduler import solver_scheduler_fakes as fakes
 from nova.tests.scheduler import test_scheduler
@@ -51,6 +52,9 @@ class SolverSchedulerTestCase(test_scheduler.SchedulerTestCase):
 
     driver_cls = solver_scheduler.ConstraintSolverScheduler
 
+    def setUp(self):
+        super(SolverSchedulerTestCase, self).setUp()
+
     def test_run_instance_no_hosts(self):
 
         def _fake_empty_call_zone_method(*args, **kwargs):
@@ -167,6 +171,7 @@ class SolverSchedulerTestCase(test_scheduler.SchedulerTestCase):
         """Make sure there's nothing glaringly wrong with _schedule()
         by doing a happy day pass through.
         """
+        self.flags(scheduler_solver_constraints=[], group='solver_scheduler')
         sched = fakes.FakeSolverScheduler()
 
         fake_context = context.RequestContext('user', 'project',
@@ -218,8 +223,11 @@ class SolverSchedulerTestCase(test_scheduler.SchedulerTestCase):
         with mock.patch.object(db, 'compute_node_get_all') as get_all:
             get_all.return_value = []
-            sched._schedule(self.context, request_spec,
-                            filter_properties=filter_properties)
+            try:
+                sched._schedule(self.context, request_spec,
+                                filter_properties=filter_properties)
+            except solver_scheduler_exception.SolverFailed:
+                pass
             get_all.assert_called_once_with(mock.ANY)
         # should not have retry info in the populated filter properties:
         self.assertFalse("retry" in filter_properties)
@@ -235,8 +243,11 @@ class SolverSchedulerTestCase(test_scheduler.SchedulerTestCase):
         with mock.patch.object(db, 'compute_node_get_all') as get_all:
             get_all.return_value = []
-            sched._schedule(self.context, request_spec,
-                            filter_properties=filter_properties)
+            try:
+                sched._schedule(self.context, request_spec,
+                                filter_properties=filter_properties)
+            except solver_scheduler_exception.SolverFailed:
+                pass
             get_all.assert_called_once_with(mock.ANY)
         num_attempts = filter_properties['retry']['num_attempts']
         self.assertEqual(1, num_attempts)
@@ -254,8 +265,11 @@ class SolverSchedulerTestCase(test_scheduler.SchedulerTestCase):
         with mock.patch.object(db, 'compute_node_get_all') as get_all:
             get_all.return_value = []
-            sched._schedule(self.context, request_spec,
-                            filter_properties=filter_properties)
+            try:
+                sched._schedule(self.context, request_spec,
+                                filter_properties=filter_properties)
+            except solver_scheduler_exception.SolverFailed:
+                pass
             get_all.assert_called_once_with(mock.ANY)
         num_attempts = filter_properties['retry']['num_attempts']
         self.assertEqual(2, num_attempts)
@@ -287,7 +301,7 @@ class SolverSchedulerTestCase(test_scheduler.SchedulerTestCase):
     def test_schedule_chooses_best_host(self):
         """The host with the highest free_ram_mb will be chosen!
         """
-
+        self.flags(scheduler_solver_constraints=[], group='solver_scheduler')
         self.flags(ram_weight_multiplier=1)
 
         sched = fakes.FakeSolverScheduler()
@@ -315,7 +329,8 @@ class SolverSchedulerTestCase(test_scheduler.SchedulerTestCase):
             'ephemeral_gb': 0,
             'vcpus': 1,
             'os_type': 'Linux'}
-        request_spec = dict(instance_properties=instance_properties)
+        request_spec = dict(instance_properties=instance_properties,
+                            instance_type={'memory_mb': 512})
         filter_properties = {}
 
         with mock.patch.object(db, 'compute_node_get_all') as get_all:
@@ -335,6 +350,7 @@ class SolverSchedulerTestCase(test_scheduler.SchedulerTestCase):
         Similar to the _schedule tests, this just does a happy path
         test to ensure there is nothing glaringly wrong.
         """
+        self.flags(scheduler_solver_constraints=[], group='solver_scheduler')
         sched = fakes.FakeSolverScheduler()
 
         fake_context = context.RequestContext('user', 'project',
diff --git a/nova/tests/scheduler/test_solver_scheduler_host_manager.py b/nova/tests/scheduler/test_solver_scheduler_host_manager.py
new file mode 100644
index 0000000..9473ae1
--- /dev/null
+++ b/nova/tests/scheduler/test_solver_scheduler_host_manager.py
@@ -0,0 +1,165 @@
+# Copyright (c) 2011 OpenStack Foundation
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+"""
+Tests For SolverSchedulerHostManager
+"""
+from nova.openstack.common import timeutils
+from nova.scheduler import solver_scheduler_host_manager as host_manager
+from nova import test
+
+
+class SolverSchedulerHostManagerTestCase(test.NoDBTestCase):
+    """Test case for HostManager class."""
+
+    def setUp(self):
+        super(SolverSchedulerHostManagerTestCase, self).setUp()
+        self.host_manager = host_manager.SolverSchedulerHostManager()
+        self.fake_hosts = [host_manager.SolverSchedulerHostState(
+                'fake_host%s' % x, 'fake-node') for x in xrange(1, 5)]
+        self.fake_hosts += [host_manager.SolverSchedulerHostState(
+                'fake_multihost', 'fake-node%s' % x) for x in xrange(1, 5)]
+        self.addCleanup(timeutils.clear_time_override)
+
+    def _verify_result(self, info, result):
+        self.assertEqual(set(info['expected_objs']), set(result))
+
+    def test_get_hosts_with_ignore(self):
+        fake_properties = {'ignore_hosts': ['fake_host1', 'fake_host3',
+                                            'fake_host5', 'fake_multihost']}
+
+        # [1] and [3] are host2 and host4
+        info = {'expected_objs': [self.fake_hosts[1], self.fake_hosts[3]],
+                'expected_fprops': fake_properties}
+
+        result = self.host_manager.get_hosts_stripping_ignored_and_forced(
+            self.fake_hosts, fake_properties)
+        self._verify_result(info, result)
+
+    def test_get_hosts_with_force(self):
+        fake_properties = {'force_hosts': ['fake_host1', 'fake_host3',
+                                           'fake_host5']}
+
+        # [0] and [2] are host1 and host3
+        info = {'expected_objs': [self.fake_hosts[0], self.fake_hosts[2]],
+                'expected_fprops': fake_properties,
+                'got_fprops': []}
+
+        result = self.host_manager.get_hosts_stripping_ignored_and_forced(
+            self.fake_hosts, fake_properties)
+        self._verify_result(info, result)
+
+    def test_get_hosts_with_no_matching_force_hosts(self):
+        fake_properties = {'force_hosts': ['fake_host5', 'fake_host6']}
+
+        info = {'expected_objs': [],
+                'expected_fprops': fake_properties,
+                'got_fprops': []}
+
+        result = self.host_manager.get_hosts_stripping_ignored_and_forced(
+            self.fake_hosts, fake_properties)
+        self._verify_result(info, result)
+
+    def test_get_hosts_with_ignore_and_force_hosts(self):
+        # Ensure ignore_hosts processed before force_hosts in host filters.
+        fake_properties = {'force_hosts': ['fake_host3', 'fake_host1'],
+                           'ignore_hosts': ['fake_host1']}
+
+        # only fake_host3 should be left.
+        info = {'expected_objs': [self.fake_hosts[2]],
+                'expected_fprops': fake_properties,
+                'got_fprops': []}
+
+        result = self.host_manager.get_hosts_stripping_ignored_and_forced(
+            self.fake_hosts, fake_properties)
+        self._verify_result(info, result)
+
+    def test_get_hosts_with_force_host_and_many_nodes(self):
+        # Ensure all nodes returned for a host with many nodes
+        fake_properties = {'force_hosts': ['fake_multihost']}
+
+        info = {'expected_objs': [self.fake_hosts[4], self.fake_hosts[5],
+                                  self.fake_hosts[6], self.fake_hosts[7]],
+                'expected_fprops': fake_properties,
+                'got_fprops': []}
+
+        result = self.host_manager.get_hosts_stripping_ignored_and_forced(
+            self.fake_hosts, fake_properties)
+        self._verify_result(info, result)
+
+    def test_get_hosts_with_force_nodes(self):
+        fake_properties = {'force_nodes': ['fake-node2', 'fake-node4',
+                                           'fake-node9']}
+
+        # [5] is fake-node2, [7] is fake-node4
+        info = {'expected_objs': [self.fake_hosts[5], self.fake_hosts[7]],
+                'expected_fprops': fake_properties,
+                'got_fprops': []}
+
+        result = self.host_manager.get_hosts_stripping_ignored_and_forced(
+            self.fake_hosts, fake_properties)
+        self._verify_result(info, result)
+
+    def test_get_hosts_with_force_hosts_and_nodes(self):
+        # Ensure only overlapping results if both force host and node
+        fake_properties = {'force_hosts': ['fake_host1', 'fake_multihost'],
+                           'force_nodes': ['fake-node2', 'fake-node9']}
+
+        # [5] is fake-node2
+        info = {'expected_objs': [self.fake_hosts[5]],
+                'expected_fprops': fake_properties,
+                'got_fprops': []}
+
+        result = self.host_manager.get_hosts_stripping_ignored_and_forced(
+            self.fake_hosts, fake_properties)
+        self._verify_result(info, result)
+
+    def test_get_hosts_with_force_hosts_and_wrong_nodes(self):
+        # Ensure non-overlapping force_node and force_host yield no result
+        fake_properties = {'force_hosts': ['fake_multihost'],
+                           'force_nodes': ['fake-node']}
+
+        info = {'expected_objs': [],
+                'expected_fprops': fake_properties,
+                'got_fprops': []}
+
+        result = self.host_manager.get_hosts_stripping_ignored_and_forced(
+            self.fake_hosts, fake_properties)
+        self._verify_result(info, result)
+
+    def test_get_hosts_with_ignore_hosts_and_force_nodes(self):
+        # Ensure ignore_hosts can coexist with force_nodes
+        fake_properties = {'force_nodes': ['fake-node4', 'fake-node2'],
+                           'ignore_hosts': ['fake_host1', 'fake_host2']}
+
+        info = {'expected_objs': [self.fake_hosts[5], self.fake_hosts[7]],
+                'expected_fprops': fake_properties,
+                'got_fprops': []}
+
+        result = self.host_manager.get_hosts_stripping_ignored_and_forced(
+            self.fake_hosts, fake_properties)
+        self._verify_result(info, result)
+
+    def test_get_hosts_with_ignore_hosts_and_force_same_nodes(self):
+        # Ensure ignore_hosts is processed before force_nodes
+        fake_properties = {'force_nodes': ['fake-node4', 'fake-node2'],
+                           'ignore_hosts': ['fake_multihost']}
+
+        info = {'expected_objs': [],
+                'expected_fprops': fake_properties,
+                'got_fprops': []}
+
+        result = self.host_manager.get_hosts_stripping_ignored_and_forced(
+            self.fake_hosts, fake_properties)
+        self._verify_result(info, result)
diff --git a/nova/volume/cinder.py b/nova/volume/cinder.py
deleted file mode 100644
index ad48497..0000000
--- a/nova/volume/cinder.py
+++ /dev/null
@@ -1,410 +0,0 @@
-# Copyright 2010 United States Government as represented by the
-# Administrator of the National Aeronautics and Space Administration.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""
-Handles all requests relating to volumes + cinder.
-"""
-
-import copy
-import sys
-
-from cinderclient import exceptions as cinder_exception
-from cinderclient import service_catalog
-from cinderclient.v1 import client as cinder_client
-from oslo.config import cfg
-
-from nova import exception
-from nova.openstack.common.gettextutils import _
-from nova.openstack.common import log as logging
-
-cinder_opts = [
-    cfg.StrOpt('cinder_catalog_info',
-               default='volume:cinder:publicURL',
-               help='Info to match when looking for cinder in the service '
-                    'catalog. Format is: separated values of the form: '
-                    '<service_type>:<service_name>:<endpoint_type>'),
-    cfg.StrOpt('cinder_endpoint_template',
-               help='Override service catalog lookup with template for cinder '
-                    'endpoint e.g. http://localhost:8776/v1/%(project_id)s'),
-    cfg.StrOpt('os_region_name',
-               help='Region name of this node'),
-    cfg.StrOpt('cinder_ca_certificates_file',
-               help='Location of ca certificates file to use for cinder '
-                    'client requests.'),
-    cfg.IntOpt('cinder_http_retries',
-               default=3,
-               help='Number of cinderclient retries on failed http calls'),
-    cfg.BoolOpt('cinder_api_insecure',
-                default=False,
-                help='Allow to perform insecure SSL requests to cinder'),
-    cfg.BoolOpt('cinder_cross_az_attach',
-                default=True,
-                help='Allow attach between instance and volume in different '
-                     'availability zones.'),
-    cfg.StrOpt('cinder_admin_user',
-               default=None,
-               help='Keystone Cinder account username'),
-    cfg.StrOpt('cinder_admin_password',
-               default=None,
-               help='Keystone Cinder account password'),
-    cfg.StrOpt('cinder_admin_tenant_name',
-               default='service',
-               help='Keystone Cinder account tenant name'),
-    cfg.StrOpt('cinder_auth_uri',
-               default=None,
-               help='Complete public Identity API endpoint'),
-]
-
-CONF = cfg.CONF
-CONF.register_opts(cinder_opts)
-
-LOG = logging.getLogger(__name__)
-
-
-def cinderclient(context):
-
-    # FIXME: the cinderclient ServiceCatalog object is mis-named.
-    #        It actually contains the entire access blob.
-    # Only needed parts of the service catalog are passed in, see
-    # nova/context.py.
- compat_catalog = { - 'access': {'serviceCatalog': context.service_catalog or []} - } - sc = service_catalog.ServiceCatalog(compat_catalog) - if CONF.cinder_endpoint_template: - url = CONF.cinder_endpoint_template % context.to_dict() - else: - info = CONF.cinder_catalog_info - service_type, service_name, endpoint_type = info.split(':') - # extract the region if set in configuration - if CONF.os_region_name: - attr = 'region' - filter_value = CONF.os_region_name - else: - attr = None - filter_value = None - url = sc.url_for(attr=attr, - filter_value=filter_value, - service_type=service_type, - service_name=service_name, - endpoint_type=endpoint_type) - - LOG.debug(_('Cinderclient connection created using URL: %s') % url) - - c = cinder_client.Client(context.user_id, - context.auth_token, - project_id=context.project_id, - auth_url=url, - insecure=CONF.cinder_api_insecure, - retries=CONF.cinder_http_retries, - cacert=CONF.cinder_ca_certificates_file) - # noauth extracts user_id:project_id from auth_token - c.client.auth_token = context.auth_token or '%s:%s' % (context.user_id, - context.project_id) - c.client.management_url = url - return c - - -def cinderadminclient(): - # Note(xinyuan): in case admin context is needed, - # this provides a method to use cinder admin credentials. - c = cinder_client.Client(CONF.cinder_admin_user, - CONF.cinder_admin_password, - project_id=CONF.cinder_admin_tenant_name, - auth_url=CONF.cinder_auth_uri, - insecure=CONF.cinder_api_insecure, - retries=CONF.cinder_http_retries, - cacert=CONF.cinder_ca_certificates_file) - return c - - -def _untranslate_volume_summary_view(context, vol): - """Maps keys for volumes summary view.""" - d = {} - d['id'] = vol.id - d['status'] = vol.status - d['size'] = vol.size - d['availability_zone'] = vol.availability_zone - d['created_at'] = vol.created_at - - # TODO(jdg): The calling code expects attach_time and - # mountpoint to be set. 
When the calling - # code is more defensive this can be - # removed. - d['attach_time'] = "" - d['mountpoint'] = "" - - if vol.attachments: - att = vol.attachments[0] - d['attach_status'] = 'attached' - d['instance_uuid'] = att['server_id'] - d['mountpoint'] = att['device'] - else: - d['attach_status'] = 'detached' - - d['display_name'] = vol.display_name - d['display_description'] = vol.display_description - - # TODO(jdg): Information may be lost in this translation - d['volume_type_id'] = vol.volume_type - d['snapshot_id'] = vol.snapshot_id - - d['volume_metadata'] = {} - for key, value in vol.metadata.items(): - d['volume_metadata'][key] = value - - if hasattr(vol, 'volume_image_metadata'): - d['volume_image_metadata'] = copy.deepcopy(vol.volume_image_metadata) - - return d - - -def _untranslate_snapshot_summary_view(context, snapshot): - """Maps keys for snapshots summary view.""" - d = {} - - d['id'] = snapshot.id - d['status'] = snapshot.status - d['progress'] = snapshot.progress - d['size'] = snapshot.size - d['created_at'] = snapshot.created_at - d['display_name'] = snapshot.display_name - d['display_description'] = snapshot.display_description - d['volume_id'] = snapshot.volume_id - d['project_id'] = snapshot.project_id - d['volume_size'] = snapshot.size - - return d - - -def translate_volume_exception(method): - """Transforms the exception for the volume but keeps its traceback intact. 
- """ - def wrapper(self, ctx, volume_id, *args, **kwargs): - try: - res = method(self, ctx, volume_id, *args, **kwargs) - except cinder_exception.ClientException: - exc_type, exc_value, exc_trace = sys.exc_info() - if isinstance(exc_value, cinder_exception.NotFound): - exc_value = exception.VolumeNotFound(volume_id=volume_id) - elif isinstance(exc_value, cinder_exception.BadRequest): - exc_value = exception.InvalidInput(reason=exc_value.message) - raise exc_value, None, exc_trace - return res - return wrapper - - -def translate_snapshot_exception(method): - """Transforms the exception for the snapshot but keeps its traceback - intact. - """ - def wrapper(self, ctx, snapshot_id, *args, **kwargs): - try: - res = method(self, ctx, snapshot_id, *args, **kwargs) - except cinder_exception.ClientException: - exc_type, exc_value, exc_trace = sys.exc_info() - if isinstance(exc_value, cinder_exception.NotFound): - exc_value = exception.SnapshotNotFound(snapshot_id=snapshot_id) - raise exc_value, None, exc_trace - return res - return wrapper - - -class API(object): - """API for interacting with the volume manager.""" - - @translate_volume_exception - def get(self, context, volume_id): - item = cinderclient(context).volumes.get(volume_id) - return _untranslate_volume_summary_view(context, item) - - def get_all(self, context, search_opts={}): - items = cinderclient(context).volumes.list(detailed=True) - rval = [] - - for item in items: - rval.append(_untranslate_volume_summary_view(context, item)) - - return rval - - def check_attached(self, context, volume): - """Raise exception if volume in use.""" - if volume['status'] != "in-use": - msg = _("status must be 'in-use'") - raise exception.InvalidVolume(reason=msg) - - def check_attach(self, context, volume, instance=None): - # TODO(vish): abstract status checking? 
- if volume['status'] != "available": - msg = _("status must be 'available'") - raise exception.InvalidVolume(reason=msg) - if volume['attach_status'] == "attached": - msg = _("already attached") - raise exception.InvalidVolume(reason=msg) - if instance and not CONF.cinder_cross_az_attach: - if instance['availability_zone'] != volume['availability_zone']: - msg = _("Instance and volume not in same availability_zone") - raise exception.InvalidVolume(reason=msg) - - def check_detach(self, context, volume): - # TODO(vish): abstract status checking? - if volume['status'] == "available": - msg = _("already detached") - raise exception.InvalidVolume(reason=msg) - - @translate_volume_exception - def reserve_volume(self, context, volume_id): - cinderclient(context).volumes.reserve(volume_id) - - @translate_volume_exception - def unreserve_volume(self, context, volume_id): - cinderclient(context).volumes.unreserve(volume_id) - - @translate_volume_exception - def begin_detaching(self, context, volume_id): - cinderclient(context).volumes.begin_detaching(volume_id) - - @translate_volume_exception - def roll_detaching(self, context, volume_id): - cinderclient(context).volumes.roll_detaching(volume_id) - - @translate_volume_exception - def attach(self, context, volume_id, instance_uuid, mountpoint): - cinderclient(context).volumes.attach(volume_id, instance_uuid, - mountpoint) - - @translate_volume_exception - def detach(self, context, volume_id): - cinderclient(context).volumes.detach(volume_id) - - @translate_volume_exception - def initialize_connection(self, context, volume_id, connector): - return cinderclient(context).volumes.initialize_connection(volume_id, - connector) - - @translate_volume_exception - def terminate_connection(self, context, volume_id, connector): - return cinderclient(context).volumes.terminate_connection(volume_id, - connector) - - def migrate_volume_completion(self, context, old_volume_id, new_volume_id, - error=False): - return 
cinderclient(context).volumes.migrate_volume_completion( - old_volume_id, new_volume_id, error) - - def create(self, context, size, name, description, snapshot=None, - image_id=None, volume_type=None, metadata=None, - availability_zone=None): - - if snapshot is not None: - snapshot_id = snapshot['id'] - else: - snapshot_id = None - - kwargs = dict(snapshot_id=snapshot_id, - display_name=name, - display_description=description, - volume_type=volume_type, - user_id=context.user_id, - project_id=context.project_id, - availability_zone=availability_zone, - metadata=metadata, - imageRef=image_id) - - try: - item = cinderclient(context).volumes.create(size, **kwargs) - return _untranslate_volume_summary_view(context, item) - except cinder_exception.BadRequest as e: - raise exception.InvalidInput(reason=unicode(e)) - - @translate_volume_exception - def delete(self, context, volume_id): - cinderclient(context).volumes.delete(volume_id) - - @translate_volume_exception - def update(self, context, volume_id, fields): - raise NotImplementedError() - - @translate_snapshot_exception - def get_snapshot(self, context, snapshot_id): - item = cinderclient(context).volume_snapshots.get(snapshot_id) - return _untranslate_snapshot_summary_view(context, item) - - def get_all_snapshots(self, context): - items = cinderclient(context).volume_snapshots.list(detailed=True) - rvals = [] - - for item in items: - rvals.append(_untranslate_snapshot_summary_view(context, item)) - - return rvals - - @translate_volume_exception - def create_snapshot(self, context, volume_id, name, description): - item = cinderclient(context).volume_snapshots.create(volume_id, - False, - name, - description) - return _untranslate_snapshot_summary_view(context, item) - - @translate_volume_exception - def create_snapshot_force(self, context, volume_id, name, description): - item = cinderclient(context).volume_snapshots.create(volume_id, - True, - name, - description) - - return 
_untranslate_snapshot_summary_view(context, item) - - @translate_snapshot_exception - def delete_snapshot(self, context, snapshot_id): - cinderclient(context).volume_snapshots.delete(snapshot_id) - - def get_volume_encryption_metadata(self, context, volume_id): - return cinderclient(context).volumes.get_encryption_metadata(volume_id) - - @translate_volume_exception - def get_volume_metadata(self, context, volume_id): - raise NotImplementedError() - - @translate_volume_exception - def delete_volume_metadata(self, context, volume_id, key): - raise NotImplementedError() - - @translate_volume_exception - def update_volume_metadata(self, context, volume_id, - metadata, delete=False): - raise NotImplementedError() - - @translate_volume_exception - def get_volume_metadata_value(self, volume_id, key): - raise NotImplementedError() - - @translate_snapshot_exception - def update_snapshot_status(self, context, snapshot_id, status): - vs = cinderclient(context).volume_snapshots - - # '90%' here is used to tell Cinder that Nova is done - # with its portion of the 'creating' state. This can - # be removed when we are able to split the Cinder states - # into 'creating' and a separate state of - # 'creating_in_nova'. (Same for 'deleting' state.) 
- - vs.update_snapshot_status( - snapshot_id, - {'status': status, - 'progress': '90%'} - ) diff --git a/requirements.txt b/requirements.txt index 394c393..de56463 100644 --- a/requirements.txt +++ b/requirements.txt @@ -1,36 +1 @@ -pbr>=0.6,!=0.7,<1.0 -SQLAlchemy>=0.7.8,<=0.9.99 -anyjson>=0.3.3 -argparse -boto>=2.12.0,!=2.13.0 -eventlet>=0.13.0 -Jinja2 -kombu>=2.4.8 -lxml>=2.3 -Routes>=1.12.3 -WebOb>=1.2.3 -greenlet>=0.3.2 -PasteDeploy>=1.5.0 -Paste -sqlalchemy-migrate>=0.9.1 -netaddr>=0.7.6 -suds>=0.4 -paramiko>=1.13.0 -pyasn1 -Babel>=1.3 -iso8601>=0.1.9 -jsonschema>=2.0.0,<3.0.0 -python-cinderclient>=1.0.6 -python-neutronclient>=2.3.4,<3 -python-glanceclient>=0.9.0 -python-keystoneclient>=0.9.0 -six>=1.6.0 -stevedore>=0.14 -websockify>=0.5.1,<0.6 -wsgiref>=0.1.2 -oslo.config>=1.2.0 -oslo.rootwrap -oslotest -pycadf>=0.5.1 -oslo.messaging>=1.3.0 coinor.pulp>=1.0.4
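
For reviewers, the ignore/force semantics exercised by the `get_hosts_stripping_ignored_and_forced` tests above can be summarized as: apply `ignore_hosts` first, then intersect with `force_hosts` and `force_nodes` when either is given. A minimal standalone sketch of that logic follows (the function name and the `(host, node)` tuple representation are illustrative only, not the actual `HostManager` implementation):

```python
def strip_ignored_and_forced(hosts, filter_properties):
    """Model of the host-stripping semantics covered by the tests above.

    hosts is a list of (host, node) pairs. ignore_hosts is applied
    first; force_hosts and force_nodes then act as whitelists, so when
    both are present only entries matching both survive.
    """
    ignore = set(filter_properties.get('ignore_hosts', []))
    force_hosts = set(filter_properties.get('force_hosts', []))
    force_nodes = set(filter_properties.get('force_nodes', []))

    # ignore_hosts is processed before any force_* whitelist
    result = [(h, n) for (h, n) in hosts if h not in ignore]
    if force_hosts:
        result = [(h, n) for (h, n) in result if h in force_hosts]
    if force_nodes:
        result = [(h, n) for (h, n) in result if n in force_nodes]
    return result
```

For example, with a host set mirroring the fixtures above (four single-node hosts plus a `fake_multihost` with nodes `fake-node1`..`fake-node4`), forcing `['fake_host1', 'fake_multihost']` together with `force_nodes=['fake-node2', 'fake-node9']` leaves only `('fake_multihost', 'fake-node2')`, matching `test_get_hosts_with_force_hosts_and_nodes`.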