This parameter was deprecated during the Xena cycle [1] and has had no
effect since then.
[1] 3375efcfd4
Change-Id: I36227d92bbbc64a4d3d756088ca57c8a6148d270
This change drops some redundant test cases that assert the functionality
of the nova::db class from the spec files of other classes.
The nova::db class has its own spec file and is tested there.
Change-Id: I92fd649b4a546dd74fac81b5a9231712768ec707
Add parameter `query_placement_for_routed_network_aggregates`
that allows the scheduler to look at the nova aggregates related
to requested routed network segments.
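A minimal usage sketch; the parameter name comes from this change, while
its placement on the nova::scheduler class is an assumption:

```puppet
# Sketch only: parameter name from this change, class placement assumed.
class { 'nova::scheduler':
  query_placement_for_routed_network_aggregates => true,
}
```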
Change-Id: I1db6a5d92a56a1a768826b6a9434e2f3c2602eff
This patch migrates some parameters from nova::scheduler::filter to
nova::scheduler, because these parameters determine the behavior of
nova-scheduler and are not specific to scheduler filters.
Note that the default value of the max_attempts parameter has been changed
from 3 to $::os_service_default, because its default is already defined
as 3 in the nova implementation.
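A sketch of declaring one of the migrated parameters on its new class;
max_attempts is named in this change, and the literal value 3 (nova's
own default) is only illustrative:

```puppet
# max_attempts now lives on nova::scheduler rather than
# nova::scheduler::filter; 3 matches the nova default.
class { 'nova::scheduler':
  max_attempts => 3,
}
```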
Change-Id: Ic74e1fdb4adf9f954b6e58343050c5e166d40889
Add a new parameter `scheduler/query_placement_for_availability_zone`
that allows the scheduler to look up a host aggregate whose metadata
key for availability zone is set to the value provided by the incoming
request, and to limit the results requested from placement to that
aggregate.
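A minimal usage sketch, assuming the option is exposed as a parameter of
the nova::scheduler class:

```puppet
# Sketch only: parameter name from this change, class placement assumed.
class { 'nova::scheduler':
  query_placement_for_availability_zone => true,
}
```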
Change-Id: Ie02c732d9d75e1d5783f2aa841f244c9bee695ed
Add support for configuring scheduler parameter option
`enable_isolated_aggregate_filtering` which allows the
scheduler to restrict hosts in aggregates based on matching
required traits in the aggregate metadata and the instance
flavor/image.
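A minimal usage sketch, assuming the option is exposed as a parameter of
the nova::scheduler class:

```puppet
# Sketch only: parameter name from this change, class placement assumed.
class { 'nova::scheduler':
  enable_isolated_aggregate_filtering => true,
}
```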
Change-Id: Iefb7e8a4d867b8721cdb5b00c2f71b2dd3a492c0
Add support for configuring scheduler parameter option
`placement_aggregate_required_for_tenants` which controls
whether or not a tenant with no aggregate affinity will be
allowed to schedule to any available node.
Change-Id: I61e3784ff40665ced25a52f864455c2a3518def0
Add support for configuring scheduler parameter option
'limit_tenants_to_placement_aggregate' to enable tenant
isolation with placement.
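A sketch of enabling tenant isolation with placement; both option names
are taken from these changes, and their placement on the nova::scheduler
class is an assumption:

```puppet
# Sketch only: enable placement-based tenant isolation, but still allow
# tenants with no aggregate affinity to schedule to any available node.
class { 'nova::scheduler':
  limit_tenants_to_placement_aggregate     => true,
  placement_aggregate_required_for_tenants => false,
}
```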
Change-Id: I37ce0633b3b56563f373fadfb74820b2896d6734
Add parameter `query_placement_for_image_type_support` that controls
whether to ask placement only for compute hosts that support the
``disk_format`` of the image used in the request.
Change-Id: I6d895a4789f44380565e3d2e3461d0bbb501c86b
The configuration option scheduler/workers was added
in Rocky [1]. This patch adds the nova::scheduler::workers
parameter which can be used to set this new config option.
[1] https://docs.openstack.org/releasenotes/nova/rocky.html
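A minimal usage sketch for the new parameter; the worker count of 4 is
only illustrative:

```puppet
# Sets the scheduler/workers option added in Rocky; the value is
# illustrative and would normally track the host's CPU count.
class { 'nova::scheduler':
  workers => 4,
}
```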
Change-Id: I217444b2c30d62347cd9e25def769e94d12a70fc
Starting with the Ocata release, bare metal nodes are no longer
recognized by nova automatically. To avoid forcing users to run a
nova-manage command each time they enroll a node, we have to enable the
periodic task that does so. This option configures it.
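A usage sketch, assuming the option is exposed as
discover_hosts_in_cells_interval on the nova::scheduler class:

```puppet
# Assumed parameter name; 120 asks the scheduler to discover newly
# enrolled hosts every two minutes (nova disables the task by default).
class { 'nova::scheduler':
  discover_hosts_in_cells_interval => 120,
}
```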
Change-Id: I1f0e40474018de593cb3f8798b5212285f5629a4
Closes-Bug: #1697724
The scheduler and scheduler filter options have been moved out of the
DEFAULT namespace to the scheduler and filter_scheduler namespaces. This
change updates the configuration namespace for the existing
configuration options, removes the DEFAULT version and adds additional
configuration options.
Change-Id: I8f1da7546bf6aa20bb2cb17d3c8163963ef32e2a
Update the default values for scheduler_driver and scheduler_host_manager
to match the upstream values in Nova.
It was configured in Devstack:
b298e57c9f
And old values don't work anymore since:
7f1ff4b226
Change-Id: Idbbae5281d429edb95783cdde3d45804ddaeace1
Closes-Bug: #1572467
Switch Nova to use $::os_service_default
Change logging.pp, db.pp and tests.
Change-Id: I928a93534c6d27c020b7afb5b7dda32c379e9d62
Related-bug: #1515273
This commit updates the default value of the enable parameter for nova
components to true, instead of false. Without this change the nova
service is not enabled by default, resulting in a different behavior
than with other puppet openstack modules.
Associated tests are updated to expect the change in defaults.
Co-Authored-By: Cody Herriges <cody@puppetlabs.com>
Change-Id: I49fc84f9fedfe00d7846441e1b49334abb09e0eb
Closes-bug: #1220473
The default value was incorrect, breaking nova-scheduler
configuration and resulting in an error message:
AttributeError: 'HostManager' object has no attribute 'select_destinations'
FilterScheduler should be the default.
Change-Id: Ie4903019cbfe30088f42ac8b76174ed85ec00fcc
Closes-Bug: #1473130
This patch aims to update our spec tests in order to work with the
rspec-puppet 2.0.0 release; in the meantime, we update the rspec syntax
in order to be prepared for the rspec 3.x move.
In detail:
* Upgrade and pin rspec-puppet from 1.0.1 to 2.0.0
* Use shared_examples "a Puppet::Error" for puppet::error tests
* Convert the 'should' keyword to 'is_expected.to' (prepares for rspec 3.x)
* Fix spec tests for rspec-puppet 2.0.0
* Clean the Gemfile (remove over-specification of the runtime deps of puppetlabs_spec_helper)
Change-Id: I172439c6ed185bb38b325b2524cab1475cdc7504
Nova is able to talk to a read-only database for some tables, which
improves the scalability of the MySQL server and reduces the load on the
server in charge of writes.
More documentation: https://wiki.openstack.org/wiki/Slave_usage
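A hypothetical sketch of pointing nova at a read replica; the
slave_connection parameter name and both connection URLs are
assumptions, shown only to illustrate the master/replica split:

```puppet
# Assumed parameter name and illustrative URLs: reads that tolerate a
# replica go to the slave connection, writes stay on the master.
class { 'nova::db':
  database_connection => 'mysql://nova:secret@master/nova',
  slave_connection    => 'mysql://nova:secret@replica/nova',
}
```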
Change-Id: I1d44332acb381b11d90b63535d274f535be26d55
Since Grizzly, nova-compute does not need database access anymore.
Currently, only nova-api, nova-scheduler and nova-conductor really need
database access.
* Keep original nova parameters with backward compatibility
* Create nova::db with database parameters
* Import nova::db in nova::init for backward compatibility
* Import nova::db in nova::{api,conductor,scheduler}
* Refactor unit tests for conductor & scheduler
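A minimal usage sketch of the new class; the connection URL is only
illustrative:

```puppet
# The database parameters now live on nova::db, imported by
# nova::{api,conductor,scheduler}; the URL is illustrative.
class { 'nova::db':
  database_connection => 'mysql://nova:secret@localhost/nova',
}
```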
Change-Id: I42b9d2b1efb5856fed6550c25ac3142952690df1
Implements: blueprint move-db-params
This commit ensures that the individual Ubuntu nova
packages for any given service are only installed when that
service is configured.
This behavior was broken when Red Hat support was added.
Since Red Hat only has a single package, it was assumed
that all packages should always be installed on all nodes.
This was causing all services to be in a running state on
Ubuntu (because the packages were starting the related service)