This ensures that the parameters used by if-else logic accept only boolean
values, because non-boolean values can result in unexpected behavior.
Change-Id: I3a27d94e453f9cfbea701337308a7086693c89bb
This parameter was deprecated during the Xena cycle[1] and has had no
effect since then.
[1] 3375efcfd4
Change-Id: I36227d92bbbc64a4d3d756088ca57c8a6148d270
... because the parameter was already deprecated in nova[1].
[1] 7c7a2a142d74a7deeda2a79baf21b689fe32cd08
Change-Id: I817f9cff13f164e6a54f4ef5eebe8d2be8dd2b8e
Add parameter `query_placement_for_routed_network_aggregates`
that allows the scheduler to look at the nova aggregates related
to requested routed network segments.
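A minimal usage sketch (assuming the parameter is exposed on the
nova::scheduler class):

```puppet
# Sketch only: the class placement of the parameter is assumed.
class { 'nova::scheduler':
  query_placement_for_routed_network_aggregates => true,
}
```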
Change-Id: I1db6a5d92a56a1a768826b6a9434e2f3c2602eff
This patch migrates some parameters from nova::scheduler::filter to
nova::scheduler, because these parameters determine the behavior of
nova-scheduler and are not specific to scheduler filters.
Note that the default value of the max_attempts parameter has been changed
from 3 to $::os_service_default, because its default is already defined as 3
in the nova implementation.
Change-Id: Ic74e1fdb4adf9f954b6e58343050c5e166d40889
Add new parameter `scheduler/query_placement_for_availability_zone`
that allows the scheduler to look up a host aggregate whose availability
zone metadata key is set to the value provided by the incoming request,
and to limit the results returned by placement to that aggregate.
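For illustration (assuming the option is exposed on the nova::scheduler
class):

```puppet
# Sketch only: the class placement of the option is assumed.
class { 'nova::scheduler':
  query_placement_for_availability_zone => true,
}
```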
Change-Id: Ie02c732d9d75e1d5783f2aa841f244c9bee695ed
Add support for configuring scheduler parameter option
`enable_isolated_aggregate_filtering` which allows the
scheduler to restrict hosts in aggregates based on matching
required traits in the aggregate metadata and the instance
flavor/image.
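A minimal usage sketch (assuming the option is exposed on the
nova::scheduler class):

```puppet
# Sketch only: the class placement of the option is assumed.
class { 'nova::scheduler':
  enable_isolated_aggregate_filtering => true,
}
```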
Change-Id: Iefb7e8a4d867b8721cdb5b00c2f71b2dd3a492c0
Add support for configuring scheduler parameter option
`placement_aggregate_required_for_tenants` which controls
whether or not a tenant with no aggregate affinity will be
allowed to schedule to any available node.
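As an illustration, this option could be combined with the earlier
`limit_tenants_to_placement_aggregate` option like this (a minimal sketch;
the exact class layout is assumed):

```puppet
# Sketch only: assumes both options are exposed on the nova::scheduler class.
class { 'nova::scheduler':
  limit_tenants_to_placement_aggregate     => true,
  placement_aggregate_required_for_tenants => false,
}
```

With `placement_aggregate_required_for_tenants` left false, a tenant with
no aggregate affinity can still be scheduled to any available node.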
Change-Id: I61e3784ff40665ced25a52f864455c2a3518def0
Add support for configuring scheduler parameter option
'limit_tenants_to_placement_aggregate' to enable tenant
isolation with placement.
Change-Id: I37ce0633b3b56563f373fadfb74820b2896d6734
Add parameter `query_placement_for_image_type_support` that controls
whether to ask placement only for compute hosts that support the
``disk_format`` of the image used in the request.
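For illustration (assuming the parameter is exposed on the nova::scheduler
class):

```puppet
# Sketch only: the class placement of the parameter is assumed.
class { 'nova::scheduler':
  query_placement_for_image_type_support => true,
}
```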
Change-Id: I6d895a4789f44380565e3d2e3461d0bbb501c86b
nova-scheduler uses CONF.default_availability_zone.
However, the required class was not included in scheduler.pp.
This patch includes the nova::availability_zone class in scheduler.pp.
Signed-off-by: Keigo Noha <knoha@redhat.com>
Closes-Bug: #1824273
Change-Id: Ia94a5d7baf1dd2efc8339475176a1e1c24ad99d0
The configuration option scheduler/workers was added
in Rocky [1]. This patch adds the nova::scheduler::workers
parameter which can be used to set this new config option.
[1] https://docs.openstack.org/releasenotes/nova/rocky.html
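For example:

```puppet
# Run four nova-scheduler worker processes ([scheduler]/workers).
class { 'nova::scheduler':
  workers => 4,
}
```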
Change-Id: I217444b2c30d62347cd9e25def769e94d12a70fc
Starting with the Ocata release, bare metal nodes are no longer recognized
by nova automatically. To avoid forcing users to run the nova-manage command
each time they enroll a node, we have to enable the periodic task that does
so. This option configures it.
Change-Id: I1f0e40474018de593cb3f8798b5212285f5629a4
Closes-Bug: #1697724
The scheduler and scheduler filter options have been moved out of the
DEFAULT namespace to the scheduler and filter_scheduler namespaces. This
change updates the configuration namespace for the existing
configuration options, removes the DEFAULT version and adds additional
configuration options.
Change-Id: I8f1da7546bf6aa20bb2cb17d3c8163963ef32e2a
Update the default values for scheduler_driver and scheduler_host_manager
to match the upstream values in Nova.
It was configured in Devstack:
b298e57c9f
And the old values don't work anymore since:
7f1ff4b226
Change-Id: Idbbae5281d429edb95783cdde3d45804ddaeace1
Closes-Bug: #1572467
This adds defined anchor points for external modules to hook into the
software install, config and service dependency chain. This allows
external modules to manage software installation (virtualenv,
containers, etc.) and service management (pacemaker) without needing to
rely on resources that may change or be renamed.
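For illustration, an external module could hook the service step roughly
like this (anchor and resource names are assumed for the sketch; check the
module for the actual points):

```puppet
# Hypothetical anchor names; an external tool (e.g. pacemaker) takes over
# service management between the begin and end anchors.
Anchor['nova::service::begin']
-> Exec['pacemaker-manage-nova-services']
-> Anchor['nova::service::end']
```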
Change-Id: I0b524e354b095f2642fd38a2f88536d15bcdf855
This commit updates the default value of the enable parameter for nova
components from false to true. Without this change the nova services are
not enabled by default, resulting in different behavior than the other
puppet openstack modules.
The associated tests are updated to expect the change in defaults.
Co-Authored-By: Cody Herriges <cody@puppetlabs.com>
Change-Id: I49fc84f9fedfe00d7846441e1b49334abb09e0eb
Closes-bug: #1220473
The default value was incorrect, breaking nova-scheduler
configuration and resulting in an error message:
AttributeError: 'HostManager' object has no attribute 'select_destinations'
FilterScheduler should be the default.
Change-Id: Ie4903019cbfe30088f42ac8b76174ed85ec00fcc
Closes-Bug: #1473130
This changes the in-class documentation to reflect the correct module
namespace (scheduler vs. schedule)
Change-Id: Ic211b4287aa5fdc960351d3753e269c2ba2d5485
This changes the puppet-lint requirement to 1.1.x, so that we can use
puppet-lint plugins. Most of these plugins are for 4.x compat, but some
just catch common errors.
Change-Id: I48838fa11902247101c0473abff65cbb2558f609
Since Grizzly, nova-compute does not need database access anymore.
Currently, only nova-api, nova-scheduler and nova-conductor really need
database access.
* Keep original nova parameters with backward compatibility
* Create nova::db with database parameters
* Import nova::db in nova::init for backward compatibility
* Import nova::db in nova::{api,conductor,scheduler}
* Refactor unit tests for conductor & scheduler
Change-Id: I42b9d2b1efb5856fed6550c25ac3142952690df1
Implements: blueprint move-db-params
When set to false, enables puppet to configure a service without
starting/stopping it on each run. This may be necessary when
using an external clustering system (Corosync/Pacemaker, for
example). Defaults to true.
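Assuming the parameter is named `manage_service` (the name is not stated
here), usage could look like:

```puppet
# Sketch: configure nova-scheduler but leave service state to an external
# clustering system such as Pacemaker.
class { 'nova::scheduler':
  manage_service => false,
}
```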
Change-Id: Iff21ee384fa857e1ddec0e15ca85df8aedad3e80
This patch fixes all remaining parameter documentation
in the nova module to be compatible with puppet-doc
and documents all parameters in a standard way.
Change-Id: I451078d46cb2498dd8e3c23bd8cbcc81b8845fcd
This commit removes any occurrences of the inherits
keyword.
Inheritance should only be used for overriding
resources and accessing params from class defaults.
Any other uses are confusing to people who may
read this code in the future.
Previously, there was a lot of copy/paste code for
handling the various nova services.
This commit creates the define nova::generic_service
which is used to capture common code for
configuring nova services.
It also updates the following classes to use that code:
- nova::api
- nova::cert
- nova::compute
- nova::objectstore
- nova::network
- nova::scheduler
- nova::volume
It also updates the spec tests for all of these classes.
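The shared define could be used roughly like this (parameter names are
assumed for the sketch):

```puppet
# Sketch only: illustrates the nova::generic_service pattern.
nova::generic_service { 'scheduler':
  enabled      => true,
  package_name => 'nova-scheduler',
  service_name => 'nova-scheduler',
}
```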
This commit ensures that the individual Ubuntu nova
packages for any given service are only installed when that
service is configured.
This behavior was broken when RedHat support was added.
Since Redhat only has a single package, it was assumed
that all packages should always be installed on all nodes.
This was causing all services to be in a running state on
Ubuntu (because the packages were starting the related services).
This commit is a refactor of work performed by
Derek Higgins that adds fedora 16 support to these
openstack modules.
It contains the following:
- creates a params class to store all of the
data differences.
- installs all packages on all nova nodes
- introduces an anchor that is used to specify
ordering for things that need to occur before
nova is installed.
- manages libvirt package and service in the
nova::compute::libvirt class
According to the multi-node install instructions,
the services should be started/restarted after the
management objects are created and the database is
restarted.
All nova services are started after nova.conf is
configured.
The db and rabbitmq should be configured before
nova.conf, basically using nova_config as an anchor.
The network should only be started after the
nova-compute service is started.