Commit Graph

26 Commits

Author SHA1 Message Date
Sean Mooney 5edd805fe2 Remove deprecated AZ filter.
This change removes the AZ filter and always enables
the placement pre-filter. As part of this removal,
the config option that controlled enabling the pre-filter
is removed, as the pre-filter is now mandatory.

The AZ filter docs and tests are also removed and an upgrade
release note is added.

Depends-On: https://review.opendev.org/c/openstack/devstack/+/886972
Change-Id: Icc8580835beb2b4d40341f81c25eb1f024e70ade
2023-07-17 12:22:22 +01:00
Balazs Gibizer bf654e3a4a Remove double mocking... again
I thought we had fixed all the double mocking issues with
I3998d0d49583806ac1c3ae64f1b1fe343cefd20d, but I was wrong.

While we used both mock and unittest.mock, fixtures.MockPatch
used the mock lib instead of the unittest.mock lib.
The patch Ibf4f36136f2c65adad64f75d665c00cf2de4b400 (Remove the PowerVM driver)
removed the last user of the mock lib from nova, so mock was also
removed from test-requirements. That in turn meant
fixtures.MockPatch started using unittest.mock too.

Before Ibf4f36136f2c65adad64f75d665c00cf2de4b400, a function could be mocked
twice: once with unittest.mock and once with fixtures.MockPatch (still
using mock). However, after that patch both paths of such double
mocking go through unittest.mock, and the second one fails.

So this patch fixes the double mocking that was until now hidden behind
fixtures.MockPatch.

This patch also makes the py310 and functional-py310 jobs voting, at
least in the check queue, to prevent future changes from adding double mocks.
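
To see how such a hidden double mock behaves, here is a minimal,
hedged sketch (time.sleep stands in for a nova-internal target; the
class and test names are hypothetical):

  import time
  from unittest import mock

  import fixtures
  import testtools

  class DoubleMockExample(testtools.TestCase):
      def setUp(self):
          super().setUp()
          # First mock, typically hidden in a base class or fixture.
          self.sleep_fixture = self.useFixture(
              fixtures.MockPatch('time.sleep'))

      @mock.patch('time.sleep')  # second mock of the same target
      def test_double_mocked(self, mock_sleep):
          time.sleep(1)
          # Only the innermost (decorator) mock records the call; the
          # fixture's mock sees nothing, so assertions against it are
          # misleading at best.
          mock_sleep.assert_called_once_with(1)
          self.assertEqual(0, self.sleep_fixture.mock.call_count)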

Change-Id: Ic1352ec31996577a5d0ad18a057339df3e49de25
2022-08-08 19:50:02 +02:00
Stephen Finucane 5bb6d4c188 functional: Add reproducer for #1907775
You can currently remove a host that has instances scheduled to it from
an aggregate. If the aggregate is configured as part of an availability
zone (AZ), this would in turn remove the host from the AZ, leaving
instances originally scheduled to that AZ stranded on a host that is no
longer a member of the AZ. This is clearly undesirable and should be
blocked at the API level.

You can also add a host to an aggregate where it wasn't in one before.
Because nova provides a default AZ for hosts that don't belong to an
aggregate, adding a host to an aggregate doesn't just assign it to an
AZ, it removes it from the default 'nova' one (or whatever you've
configured via '[DEFAULT] default_availability_zone'). As noted in the
docs [1], people should not rely on scheduling to the default AZ, but if
they had, we'd end up in the same situation as above.

Add tests for both, with a fix coming after.
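
From an operator's point of view, the problematic sequence looks
roughly like this (a hedged openstacksdk sketch; the cloud name,
aggregate name, and IDs are placeholders, and admin credentials are
assumed):

  import openstack

  IMAGE_ID = 'image-uuid-placeholder'
  FLAVOR_ID = 'flavor-id-placeholder'

  conn = openstack.connect(cloud='devstack')  # assumed clouds.yaml entry

  server = conn.create_server(
      'vm-in-az1', image=IMAGE_ID, flavor=FLAVOR_ID,
      availability_zone='az1', wait=True)

  # Removing the server's host from the AZ's aggregate currently
  # succeeds, stranding the instance outside its AZ; the follow-up fix
  # should make the API reject this.
  agg = conn.compute.find_aggregate('az1-aggregate')
  conn.compute.remove_host_from_aggregate(agg, server.compute_host)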

[1] https://docs.openstack.org/nova/latest/admin/availability-zones.html

Change-Id: I21f7f93ee0ec0cd3a290afba59342b31d074cf2f
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
Related-Bug: #1907775
2021-12-10 18:12:55 +01:00
Sean Mooney 7c7a2a142d Deprecate filters that have been replaced by placement filters
This change deprecates the AZ filter, which is no longer required.

It also enables the use of placement for AZ enforcement by default and
deprecates the config option for removal.

Change-Id: I92b0386432444fc8bdf852de4bdb6cebb370a8ca
2021-06-01 15:11:50 +01:00
Stephen Finucane c269285568 tests: Move remaining non-libvirt fixtures
Move these to the central place. There's a large amount of test damage
but it's pretty trivial.

Change-Id: If581eb7aa463c9dde13714f34f0f1b41549a7130
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
2021-05-12 16:32:43 +01:00
Stephen Finucane cc45581a18 functional: Add and use 'GlanceFixture'
This rather beefy (but also quite simple) patch replaces the
'stub_out_image_service' call and associated cleanup in all functional
tests with a new 'GlanceFixture', based on the old 'FakeImageService'.
The use of a fixture means we don't have to worry about teardown and
allows us to stub Glance in the same manner as Cinder, Neutron,
Placement, etc.

Unit test cleanup is handled in a later patch.
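
Intended usage is along these lines (a sketch; the fixture's module
path is an assumption):

  from nova import test
  from nova.tests import fixtures as nova_fixtures  # assumed location

  class MyFunctionalTest(test.TestCase):
      def setUp(self):
          super().setUp()
          # Replaces stub_out_image_service() plus its addCleanup();
          # the fixture tears itself down automatically.
          self.glance = self.useFixture(nova_fixtures.GlanceFixture(self))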

Change-Id: I6daea47988181dfa6dde3d9c42004c0ecf6ae87a
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
2020-09-16 11:31:23 +01:00
Artom Lifshitz ee05cd8b9e func tests: move _run_periodics() into base class
There are two almost identical implementations of the _run_periodics()
helper - and a third one would have joined them in a subsequent patch,
if not for this patch. This patch moves the _run_periodics() to the
base test class. In addition, _run_periodics() depends on the
self.computes dict used for compute service tracking. The method that
populates that dict, _start_compute(), is therefore also moved to the
base class.

This enables some light refactoring of existing tests that need
either the _run_periodics() helper, or the compute service tracking.

In addition, a needless override of _start_compute() in
test_aggregates that provided no added value is removed. This is done
to avoid any potential confusion around _start_compute()'s role.
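
The moved helper is essentially the following (a sketch; treat the
details as illustrative rather than the exact final code):

  from nova import context

  # Method on the base test class (sketch).
  def _run_periodics(self):
      # Run the update_available_resource periodic on every compute
      # service tracked in self.computes, which _start_compute()
      # populates as it starts each service.
      ctxt = context.get_admin_context()
      for compute in self.computes.values():
          compute.manager.update_available_resource(ctxt)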

Change-Id: I36dd64dc272ea1743995b3b696323a9431666489

Change-Id: I33d8ac0a1cae0b2d275a21287d5e44c008a68122
2020-03-24 10:10:53 -04:00
Stephen Finucane 458d37fceb functional: Add unified '_build_server' helper function
'_IntegratedTestBase' has subclassed 'InstanceHelperMixin' since change
I0d21cb94c932e6e556eca964c57868c705b2d120, which means both now provide
a '_build_minimal_create_server_request' function. However, only
'_IntegratedTestBase' provides a '_build_server' function. The
'_build_minimal_create_server_request' and '_build_server' functions do
pretty much the same thing but there are some differences. Combine these
under the '_build_server' alias.
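
Typical usage after this change looks like (a hedged sketch; the
keyword names are my recollection of the unified helper):

  # Build a minimal server request body and create the server.
  server_req = self._build_server(
      name='test-server', networks='none', az='az1')
  server = self.api.post_server({'server': server_req})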

Change-Id: I91fa2f73185fef48e9aae9b7f61389c374e06676
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
2020-01-15 10:31:24 +00:00
Stephen Finucane d05fd530c7 functional: Make '_IntegratedTestBase' subclass 'InstanceHelperMixin'
This means we can drop many duplicate implementations of things

- nova.tests.functional.integrated_helpers._IntegratedTestBase

  - _build_minimal_create_server_request

- nova.tests.functional.test_servers.ServersTestBase

  - _wait_for_server_parameter
  - _wait_for_state_change
  - _wait_until_deleted

Change-Id: I0d21cb94c932e6e556eca964c57868c705b2d120
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
2019-12-06 15:36:23 +00:00
Stephen Finucane 7ae1a10913 functional: Remove 'api' parameter
Pretty much every test case in 'nova.tests.functional' defines an 'api'
attribute, and many define an 'admin_api' attribute. We can pull these
from the class rather than explicitly passing them to helpers. Rework
things so this happens.

Note that the bulk of the changes here are in the
'nova/tests/functional/integrated_helpers.py' file. The rest of the
changes were auto-generated using the following script (my sed-fu is
non-existent):

  $ cd nova/tests/functional
  $ python3
  >>> import glob
  >>> import re
  >>> pattern = r'_state_change\((\n\s+)?self\.(admin_)?api,\s+'
  >>> replace = r'_state_change(\1'
  >>> for path in glob.glob('*.py') + glob.glob('*/*.py'):
  ...     with open(path) as fh:
  ...         data = fh.read()
  ...     new = re.sub(pattern, replace, data, flags=re.MULTILINE)
  ...     if new != data:
  ...         with open(path, 'w') as fh:
  ...             fh.write(new)
  ...
  >>> quit()

(ditto for the other substitutions)

Some manual fixups were required after, which pre-commit highlighted :)

Change-Id: I8c96b337f32148f8f5899c9b87af331b1fa41424
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
2019-12-06 15:35:33 +00:00
Stephen Finucane 431237d2ee functional: Make '_wait_for_state_change' behave consistently
Both 'ServersTestBase' and 'InstanceHelperMixin' provide implementations
of '_wait_for_state_change' but they behave differently. The former
waits for an instance to transition *from* the provided state, while the
latter, somewhat more sanely, waits for the transition *to* the provided
state. Switch to using the latter everywhere and make the necessary
changes. Due to class hierarchies, we end up with two nearly identical
implementations but these will be merged in a future change.
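
After this change the helper waits *for* the given state everywhere,
so call sites read like (a sketch, using the post-cleanup signature
from the later change above):

  # Blocks until the server reaches ACTIVE (rather than merely leaving
  # its previous state), failing the test on timeout.
  server = self._wait_for_state_change(server, 'ACTIVE')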

Change-Id: I80cdc0a33ec27b1389130c22f9c3a8ff69f6b1a0
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
2019-11-19 09:09:29 +00:00
Dan Smith 3391298706 Add image caching API for aggregates
This adds a new microversion and support for requesting image pre-caching
on an aggregate.
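
The new call is shaped roughly like this (a hedged sketch using an
authenticated keystoneauth1 session named 'sess'; the microversion
number is my recollection):

  # POST /os-aggregates/{id}/images asks every host in the aggregate
  # to pre-cache the listed images.
  sess.post(
      '/os-aggregates/%s/images' % agg_id,
      json={'cache': [{'id': image_id}]},
      headers={'OpenStack-API-Version': 'compute 2.81'},
      endpoint_filter={'service_type': 'compute'})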

Related to blueprint image-precache-support

Change-Id: I4ab96095106b38737ed355fcad07e758f8b5a9b0
2019-10-15 21:22:31 -04:00
Balazs Gibizer 60b097a5ef Move HostNameWeigher to a common fixture
Multiple functional tests used their own copy-pasted implementation of a
weigher to make scheduling deterministic based on host names.

This patch moves the HostNameWeigher implementation to a common place
and adds a fixture to set up the weigher in the conf.
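
Conceptually the shared weigher looks like this (a sketch close to,
but not guaranteed identical to, the final fixture):

  from nova.scheduler import weights

  class HostNameWeigher(weights.BaseHostWeigher):
      # Favor host1 over host2 over host3, etc., so scheduling in
      # tests is deterministic.
      _weights = {'host1': 100, 'host2': 50, 'host3': 10}

      def _weigh_object(self, host_state, weight_properties):
          # Any host not listed gets the lowest priority.
          return self._weights.get(host_state.host, 0)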

Change-Id: I4c23a3c4f8963c42379e7a8d63c6c1e089350eb3
2019-09-25 09:57:38 +02:00
Matt Riedemann eddfa6f0e0 Remove SchedulerReportClient from AggregateRequestFiltersTest
Before Ibd7aa4f8c4ea787774becece324d9051521c44b6 in Rocky, the
AggregateRequestFiltersTest tests were manually mirroring nova
host aggregate membership to placement resource provider aggregate
membership. Since that change, the compute API automatically mirrors
aggregate membership to placement (I'm not sure why the manual
mirroring wasn't removed in that change), so we don't need that
hack in the test code; it is removed here.

Change-Id: Id895223471c9bc1356c5547b8e11d92420f9a956
2019-09-16 16:45:38 -04:00
Matt Riedemann c074a7bd31 Remove redundancies from AggregateRequestFiltersTest.setUp
Change I9ab9d7d65378be564b3731b5227ede8cece71bef made
AggregateRequestFiltersTest extend ProviderUsageBaseTestCase
but left a lot of redundant setUp for fixtures and services
in it, which might be contributing to test_fail_set_az failing
intermittently. My theory is that we're starting multiple API
and scheduler workers, and when setting an AZ on an aggregate
the API will RPC cast to all schedulers to store that metadata
in the scheduler process. Since we have more than
one scheduler process, and I'm not sure how stable the RPC cast
fanout capability is in the fake messaging driver, we could be
hitting a scheduler worker during the test that does not have
the AZ metadata, in which case the AvailabilityZoneFilter filters
out the expected host.

This simply removes the redundant setUp. Even if this isn't the
root of the problem, it at least rules it out.

Change-Id: I45107ff686456d5db91c615830fa443b3902ddfb
Partial-Bug: #1844174
2019-09-16 16:41:09 -04:00
shilpa 1db3b89cae Add a new request filter to isolate aggregates
Added a new request filter, 'isolate_aggregates', for filtering of hosts
by isolated aggregates. This filter prepares a list of aggregates that
should be ignored by the placement service. It checks whether an
aggregate has metadata of the form 'trait:<trait_name>=required' and
whether those traits are present in the flavor extra specs or image
properties of the request; aggregates whose required traits are not
requested are included in the list of isolated aggregates.

The filter is enabled by a new conf option,
[scheduler] enable_isolated_aggregate_filtering.
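
The selection rule reads roughly like this in simplified Python (an
illustrative simplification, not the filter's actual code):

  def is_isolated(agg_metadata, flavor_extra_specs, image_props):
      # Traits the aggregate declares as required, e.g.
      # {'trait:CUSTOM_GPU': 'required'}.
      required = {k for k, v in agg_metadata.items()
                  if k.startswith('trait:') and v == 'required'}
      requested = {k for k, v in list(flavor_extra_specs.items()) +
                   list(image_props.items()) if v == 'required'}
      # The aggregate is ignored by placement unless the request asks
      # for every trait the aggregate requires.
      return not required.issubset(requested)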

Change-Id: I9ab9d7d65378be564b3731b5227ede8cece71bef
Implements: blueprint placement-req-filter-forbidden-aggregates
2019-09-12 16:56:32 -05:00
Matt Riedemann a638685c46 Add functional test for AggregateMultiTenancyIsolation + migrate
A bug was reported against Ocata where a non-admin user
creates a server and the user's project is isolated to a
set of hosts via the AggregateMultiTenancyIsolation filter.

The admin, with a different project, cold migrates the server
and the filter rejects the request because before change
I195d389ac59574724a5e7202ba1a17d92c53a676 the cold migrate
task would re-generate the RequestSpec using the request context
which was from the admin, not the owner of the instance.

Even though this is not a problem past Ocata, we did not have
functional test coverage for this scenario so it is added here.

This will also be used to backport the fix to Ocata to show
the regression and fix in that branch.

Change-Id: I97559607fc720fb98c3543ff3dd6095281752cd4
Related-Bug: #1774205
Related-Bug: #1675607
2019-08-29 16:06:54 -04:00
Balazs Gibizer b5666fb492 Remove global state from the FakeDriver
The FakeDriver virt driver, used in both the functional and the unit
tests, relied on global state to configure the host and node names the
driver reports. This was hard to use when more than one compute service
is started. Global state is also dangerous.

It turned out that only a set of unit tests use multiple nodes per
compute; the rest of the tests can simply use a host=<hostname>,
nodes=[<hostname>] setup.

So this removes the global state.
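
After the change, multi-compute tests can simply start services (sketch):

  # Each compute reports a single node named after its host; no global
  # driver state needs to be mutated between service starts.
  self.compute1 = self.start_service('compute', host='host1')
  self.compute2 = self.start_service('compute', host='host2')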

Change-Id: I2cf2fcbaebc706f897ce5dfbff47d32117064f9c
2019-06-21 10:37:20 +02:00
Eric Fried c43f7e664d Use aggregate_add_host in nova-manage
When nova-manage placement sync_aggregates was added [1], it duplicated
some report client logic (aggregate_add_host) to do provider aggregate
retrieval and update so as not to duplicate a call to retrieve the
host's resource provider record. It also left a TODO to handle
generation conflicts.

Here we change the signature of aggregate_add_host to accept *either*
the host name or RP UUID, and refactor the nova-manage placement
sync_aggregates code to use it.

The behavior in terms of exit codes and messaging should be largely
unchanged, though there may be some subtle differences in corner cases.
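
The reworked entry point is, roughly (a hedged sketch of the signature
and flow described above, not the verbatim code):

  def aggregate_add_host(self, context, agg_uuid, host_name=None,
                         rp_uuid=None):
      # Accept either identifier; resolve the provider by name only
      # when the UUID isn't already known.
      if not rp_uuid:
          rp_uuid = self._get_provider_by_name(context, host_name)['uuid']
      # ...then fetch the provider's aggregates, add agg_uuid, and PUT
      # them back with the provider generation to catch conflicts.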

[1] Iac67b6bf7e46fbac02b9d3cb59efc3c59b9e56c8

Change-Id: Iaa4ddf786ce7d31d2cee660d5196e5e530ec4bd3
2019-03-26 17:38:48 -05:00
Andrey Volkov 8e19ef4173 Check hosts have no instances for AZ rename
The update-aggregate and update-aggregate-metadata API calls can change
the availability zone name for an aggregate. If the aggregate is not
empty (it has hosts with instances on them), the update leads to a
discrepancy for objects that store the availability zone as a string
rather than a reference.

In a devstack DB, those are:
- cinder.backups.availability_zone
- cinder.consistencygroups.availability_zone
- cinder.groups.availability_zone
- cinder.services.availability_zone
- cinder.volumes.availability_zone
- neutron.agents.availability_zone
- neutron.networks.availability_zone_hints
- neutron.router_extra_attributes.availability_zone_hints
- nova.dns_domains.availability_zone
- nova.instances.availability_zone
- nova.volume_usage_cache.availability_zone
- nova.shadow_dns_domains.availability_zone
- nova.shadow_instances.availability_zone
- nova.shadow_volume_usage_cache.availability_zone

Why is that bad?
First, the API and Horizon can show different values for, e.g., a host
and its instances. Second, migration of instances whose availability
zone has changed fails with "No valid host found" for the old AZ.

This change adds an additional check to the Update Aggregate API call.
With the check, it is not possible to rename an AZ if the corresponding
aggregate has instances on any of its hosts.

PUT /os-aggregates/{aggregate_id} and
POST /os-aggregates/{aggregate_id}/action now return HTTP 400 for
availability zone renaming if the hosts of the aggregate have any
instances. This is similar to the existing error for conflicting AZ names.
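
In other words (a hedged sketch with an authenticated keystoneauth1
session named 'sess'):

  # Renaming the AZ of an aggregate whose hosts still carry instances
  # is now expected to fail with HTTP 400.
  resp = sess.put(
      '/os-aggregates/%s' % agg_id,
      json={'aggregate': {'availability_zone': 'new-az'}},
      endpoint_filter={'service_type': 'compute'},
      raise_exc=False)
  assert resp.status_code == 400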

Change-Id: Ic27195e46502067c87ee9c71a811a3ca3f610b73
Closes-Bug: #1378904
2019-03-01 12:25:16 -05:00
Chris Dent 787bb33606 Use external placement in functional tests
Adjust the fixtures used by the functional tests so they
use placement database and web fixtures defined by placement
code. To avoid making redundant changes, the solely placement-
related unit and functional tests are removed, but the placement
code itself is not (yet).

openstack-placement is required by the functional tests. It is not
added to test-requirements as we do not want unit tests to depend
on placement in any way, and we enforce this by not having placement
in the test env.

The concept of tox-siblings is used to ensure that the
placement requirement will be satisfied correctly if there is a
depends-on. To make this happen, the functional jobs defined in
.zuul.yaml are updated to require openstack/placement.

tox.ini has to be updated to use an envdir that has the same name as
the job. Otherwise the tox-siblings role in Ansible cannot work.

The handling of the placement fixtures is moved out of nova/test.py
into the functional tests that actually use it because we do not
want unit tests (which get the base test class out of test.py) to
have anything to do with placement. This requires adjusting some
test files to use absolute import.

Similarly, a test of the comparison function for the api samples tests
is moved into functional, because it depends on placement functionality.

TestUpgradeCheckResourceProviders in unit.cmd.test_status is moved into
a new test file: nova/tests/functional/test_nova_status.py. This is done
because it requires the PlacementFixture, which is only available to
functional tests. A MonkeyPatch is required in the test to make sure that
the right context managers are used at the right time in the command
itself (otherwise some tables do not exist). In the test itself, to avoid
speaking directly to the placement database, which would require
manipulating the RequestContext objects, resource providers are now
created over the API.
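
After this refactor, a functional test that needs placement opts in
explicitly, along these lines (a sketch; the module path matches my
recollection of the nova tree):

  from nova import test
  from nova.tests.functional import fixtures as func_fixtures

  class MyPlacementTest(test.TestCase):
      def setUp(self):
          super().setUp()
          # Runs the real placement app (from openstack-placement)
          # against a test database for the duration of the test.
          self.placement = self.useFixture(
              func_fixtures.PlacementFixture()).api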

Co-Authored-By: Balazs Gibizer <balazs.gibizer@ericsson.com>
Change-Id: Idaed39629095f86d24a54334c699a26c218c6593
2018-12-12 18:46:49 +00:00
Matt Riedemann 6bc32481fb Fix docs and add functional test for AggregateMultiTenancyIsolation
The docs for AggregateMultiTenancyIsolation were misleading in that
tenants are not restricted to hosts only in a tenant-isolated
aggregate. It's the opposite: hosts in the tenant-isolated aggregate
are only available for tenants configured for that aggregate.

This fixes the docs including an example for clarification, and also
adds a functional test to show the behavior of the filter.
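
The clarified semantics in one line (hypothetical IDs; the openstacksdk
method name is from memory):

  # Hosts in the aggregate become reserved for the listed tenant; the
  # tenant itself is NOT restricted to these hosts.
  conn.compute.set_aggregate_metadata(agg, filter_tenant_id='project-a-id')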

Change-Id: Ic55b88e7ad21ab5b7ad063eac743ff9406aae559
Related-Bug: #1771523
2018-09-13 16:46:53 -04:00
Jay Pipes 5eda1fab85 mirror nova host aggregate members to placement
This patch is the first step in syncing the nova host aggregate
information with the placement service. The scheduler report client gets
a couple of new public methods -- aggregate_add_host() and
aggregate_remove_host(). Both of these methods do **NOT** impact the
provider tree cache that the scheduler reportclient keeps when
instantiated inside the compute resource tracker.

Instead, these two new reportclient methods look up a resource provider
by *name* (not UUID) since that is what is supplied by the
os-aggregates Compute API when adding or removing a "host" to/from a
nova host aggregate.

Change-Id: Ibd7aa4f8c4ea787774becece324d9051521c44b6
blueprint: placement-mirror-host-aggregates
2018-05-30 12:45:20 -04:00
Dan Smith 96f1071166 Honor availability_zone hint via placement
This adds a request filter that, if enabled, allows us to use placement
to select hosts in the desired availability zone by looking up the uuid
of the associated host aggregate and using that in our query for
allocation candidates. The deployer needs the same sort of mirrored
aggregate setup as the tenant filter, and documentation is added here to
make that clear.
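
Conceptually the request filter does something like this (a sketch,
not the actual nova code):

  from nova import objects

  def map_az_to_placement_aggregate(ctxt, request_spec):
      # Find host aggregates whose metadata carries the requested AZ
      # and constrain allocation candidates to their members.
      az_hint = request_spec.availability_zone
      if not az_hint:
          return
      aggs = objects.AggregateList.get_by_metadata(
          ctxt, key='availability_zone', value=az_hint)
      request_spec.requested_destination = objects.Destination(
          aggregates=[','.join(agg.uuid for agg in aggs)])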

Related to blueprint placement-req-filter

Change-Id: I7eb6de435e10793f5445724d847a8f1bf25ec6e3
2018-05-22 08:56:50 -07:00
Dan Smith 732e202e81 Add require_tenant_aggregate request filter
This adds a require_tenant_aggregate request filter which uses overlaid
nova and placement aggregates to limit placement results during scheduling.
It uses the same `filter_tenant_id` metadata key as the existing scheduler
filter we have today, so people already doing this with that filter will
be able to enable this and get placement to pre-filter those hosts for
them automatically.

This also allows making this filter advisory but not required, and supports
multiple tenants per aggregate, unlike the original filter.
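
For example, aggregate metadata of this shape opts tenants in (a
sketch; the numbered-suffix convention for extra tenants is my
recollection):

  # Same key the legacy scheduler filter consumes, so existing
  # deployments can enable the new filter without re-tagging.
  metadata = {
      'filter_tenant_id': 'project-a-uuid',
      'filter_tenant_id2': 'project-b-uuid',  # additional tenants
  }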

Related to blueprint placement-req-filter

Change-Id: Idb52b2a9af539df653da7a36763cb9a1d0de3d1b
2018-03-28 15:58:46 -07:00
Dan Smith 62878ef5a6 re-Allow adding computes with no ComputeNodes to aggregates
After the cellification of the Aggregates API, we introduced a requirement
that the service must have a HostMapping record, so that we know which
cell the service is in. This is normally fine, but for weird drivers such
as ironic, there may be valid cases (e.g. during setup) where no ironic
nodes are present, thus one or more services may not have any compute node
records, and thus cannot be added to aggregates.

This adds a cell scan, only if necessary, to find the desired service so
that the operation may proceed as it did before. To do this, we refactor
the _find_service() helper to a more generic utility and use that if we
don't find a HostMapping during the add operation.

Change-Id: Idc97126d63684e7d638b974d7226ff210c744404
Closes-Bug: #1686744
2017-05-25 10:56:29 -07:00