Commit Graph

24 Commits

Author SHA1 Message Date
Chris Dent 787bb33606 Use external placement in functional tests
Adjust the fixtures used by the functional tests so they
use placement database and web fixtures defined by placement
code. To avoid making redundant changes, the unit and functional
tests that are solely placement-related are removed, but the
placement code itself is not (yet).

openstack-placement is required by the functional tests. It is not
added to test-requirements as we do not want unit tests to depend
on placement in any way, and we enforce this by not having placement
in the test env.

The concept of tox-siblings is used to ensure that the
placement requirement will be satisfied correctly if there is a
depends-on. To make this happen, the functional jobs defined in
.zuul.yaml are updated to require openstack/placement.

tox.ini has to be updated to use an envdir that has the same name
as the job; otherwise the tox siblings role in Ansible cannot work.

The handling of the placement fixtures is moved out of nova/test.py
into the functional tests that actually use it because we do not
want unit tests (which get the base test class out of test.py) to
have anything to do with placement. This requires adjusting some
test files to use absolute imports.

Similarly, a test of the comparison function for the api samples tests
is moved into functional, because it depends on placement functionality,

TestUpgradeCheckResourceProviders in unit.cmd.test_status is moved into
a new test file: nova/tests/functional/test_nova_status.py. This is done
because it requires the PlacementFixture, which is only available to
functional tests. A MonkeyPatch is required in the test to make sure that
the right context managers are used at the right time in the command
itself (otherwise some tables do not exist). In the test itself, to avoid
speaking directly to the placement database, which would require
manipulating the RequestContext objects, resource providers are now
created over the API.

Co-Authored-By: Balazs Gibizer <balazs.gibizer@ericsson.com>
Change-Id: Idaed39629095f86d24a54334c699a26c218c6593
2018-12-12 18:46:49 +00:00
Eric Fried 8e1ca5bf34 Use uuidsentinel from oslo.utils
oslo.utils release 3.37.0 [1] introduced uuidsentinel [2]. This change
rips out nova's uuidsentinel and replaces it with the one from
oslo.utils.

[1] https://review.openstack.org/#/c/599754/
[2] https://review.openstack.org/#/c/594179/
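The behaviour being adopted can be sketched as follows (a minimal
reimplementation for illustration, not the oslo.utils source; the real
module is importable as ``from oslo_utils import uuidsentinel``):

```python
import uuid


class UUIDSentinels(object):
    """Return a stable, random UUID string per attribute name.

    Sketch of the behaviour oslo_utils.uuidsentinel provides: the
    first access to an attribute generates a UUID, and every later
    access to the same name returns the same value.
    """

    def __init__(self):
        self._sentinels = {}

    def __getattr__(self, name):
        if name.startswith('_'):
            raise AttributeError(name)
        # Generate once, then always return the same UUID for this name.
        return self._sentinels.setdefault(name, str(uuid.uuid4()))


uuids = UUIDSentinels()
print(uuids.instance1 == uuids.instance1)  # True: stable per name
print(uuids.instance1 == uuids.instance2)  # False: distinct per name
```

This is why tests can write ``uuids.instance1`` without declaring
anything up front: the sentinel is readable in failure output and stays
consistent across the test.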

Change-Id: I7f5f08691ca3f73073c66c29dddb996fb2c2b266
Depends-On: https://review.openstack.org/600041
2018-09-05 09:08:54 -05:00
Chris Dent 4d525b4ec1 [placement] Add /reshaper handler for POST
/reshaper provides a way to atomically modify some allocations and
inventory in a single transaction, allowing operations like migrating
some inventory from a parent provider to a new child.

A fair amount of code is reused from handler/inventory.py; some
refactoring is in order before things get too far with that.

In handler/allocation.py some code is extracted to its own methods
so it can be reused from reshaper.py.

This is done as microversion 1.30.

A suite of gabbi tests is provided which attempt to cover various
failures including schema violations, generation conflicts, and
data conflicts.
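The request body the handler accepts at 1.30 has roughly the shape
below (a hedged sketch from memory; uuids, generations and resource
classes are illustrative, and the api-ref is the authoritative schema):

```python
# Illustrative POST /reshaper body (microversion 1.30): move VGPU
# inventory from a parent provider to a new child, and rewrite the
# affected consumer's allocations, all in one transaction.
reshape_request = {
    'inventories': {
        # Each provider's full inventory is replaced, guarded by its
        # current generation so concurrent writers conflict cleanly.
        'parent-uuid': {
            'resource_provider_generation': 4,
            'inventories': {},  # inventory moved off the parent
        },
        'child-uuid': {
            'resource_provider_generation': 0,
            'inventories': {
                'VGPU': {'total': 8, 'max_unit': 1},
            },
        },
    },
    'allocations': {
        # Rewritten allocations for each affected consumer, guarded
        # by consumer generations.
        'consumer-uuid': {
            'allocations': {
                'child-uuid': {'resources': {'VGPU': 1}},
            },
            'consumer_generation': 1,
            'project_id': 'project-uuid',
            'user_id': 'user-uuid',
        },
    },
}

print(sorted(reshape_request))  # ['allocations', 'inventories']
```

Generation conflicts on either half of the body cause the whole
request to fail, which is what the gabbi tests exercise.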

The api-ref, release notes and REST history are updated.

Change-Id: I5b33ac3572bc3789878174ffc86ca42ae8035cfa
Partially-Implements: blueprint reshape-provider-tree
2018-08-23 00:36:17 +00:00
Chris Dent 2b46354d5a Set policy_opt defaults in placement gabbi fixture
Without this change, tests can intermittently fail with NoSuchOptError
when a single process does not have other tests running prior to
gabbi tests. This change ensures the opts are registered and defaulted.

Change-Id: I1c7e347b6e788928bef96e32c3365d0fdc5ba00f
Related-Bug: #1786498
Closes-Bug: #1788176
2018-08-21 14:28:32 +01:00
Zuul 681cb7f21f Merge "Use common functions in granular fixture" 2018-08-07 23:16:15 +00:00
Zuul 41692fc5e9 Merge "Use common functions in NonSharedStorageFixture" 2018-08-07 05:01:48 +00:00
Tetsuro Nakamura 45e34808f5 Use common functions in granular fixture
For code refactoring purpose in the placement granular fixture
setup, this patch substitutes common functions in test_base.py
for existing local functions of _create_providers(),
_add_inventory(), and _set_traits().

Change-Id: I76a3f7d7e446e0f3af379f83c9d8333279884c73
2018-08-07 11:03:37 +09:00
Tetsuro Nakamura 9cfc598acf Adds a test for getting allocations API
`GET /resource_providers/{uuid}/allocations` API currently doesn't
return all the allocations made by multiple users.

This patch adds a test to describe this bug. The fix for this
is coming in a follow up.

Change-Id: I2b01e27922f11bef2defcb01fe415692de1578ea
Partial-Bug: #1785382
2018-08-04 19:20:30 +09:00
Tetsuro Nakamura 7d824e6d37 Refactor AllocationFixture in placement test
This patch refactors the allocation fixture used in some placement
api tests by substituting common functions in test_base.py for the
existing object management functions.

Change-Id: Ide6544d1cf9e1ed154b42075acdd7af986c3afe8
2018-08-02 14:46:10 +09:00
Tetsuro Nakamura e4923abaeb Increase max_unit in placement test fixture
In the AllocationFixture setup, to avoid the limitation of the
max_unit that is set on the inventory, we were using a somewhat
hacky approach: creating multiple allocation objects that have the
same resource class, the same consumer, and the same resource
provider.

Since this is not how it works in real cases, and this prevents us
from refactoring, this patch fixes it.

Change-Id: I8ba378ff5eeaf6c9cca11c5874708a17d4640097
2018-08-02 14:00:45 +09:00
Tetsuro Nakamura e23237c375 Use common functions in NonSharedStorageFixture
For code refactoring purpose in the placement NonSharedStorageFixture
setup, this patch substitutes common functions in test_base.py for
existing native setup functions for creating providers and setting
inventories.

Change-Id: I312333ed8ecd51b9f3f6b818c33b3ef54703f997
2018-08-02 10:56:57 +09:00
Chris Dent 4b1d38e88a [placement] Use a simplified WarningsFixture
Use a WarningsFixture specific to placement that worries about fewer
warnings and is not dependent on nova.

blueprint: placement-extract

Change-Id: Idfcc6882d7fe5141dcc793f0409f75c51fd26234
2018-07-30 19:38:35 +01:00
Chris Dent 1de7ac302a [placement] Use a non-nova log capture fixture
We want to avoid using the nova StandardLogging fixture, to limit
imports from nova, but it has two useful features that we want:

* always capture
* if the chosen log level is higher than DEBUG, format DEBUG messages
  anyway, but don't output them

blueprint: placement-extract

Change-Id: Iadd32c731ebfb5a62308a4d5f907a69f93590935
2018-07-30 19:38:22 +01:00
Chris Dent bee0a133e5 [placement] Use oslotest CaptureOutput fixture
Instead of the nova fixture OutputStreamCapture. They do effectively
the same thing and once placement is extracted we'd like to not
have duplication.

blueprint: placement-extract

Change-Id: I4636533b1262f819e34ea78cca33ad9f90a35702
2018-07-30 19:12:19 +01:00
Chris Dent f5783d90bc [placement] Use base test in placement functional tests
There is now a placement.base.TestCase for placement functional tests
which assembles the necessary configuration and fixtures. This change
uses that base class in the db functional tests and extends the base
class as required to add all the necessary functionality.

In the process issues were exposed in the fixtures.gabbits use of
oslo_config (causing tests to fail based on fallout from changes
elsewhere in the functional tests) so this change also fixes that
and limits the gabbi tests to only caring about the placement database
connection, which is more correct and how it should have been all
along.

This change removes the ConfPatcher fixture in fixtures.placement
because it is no better than using the oslo_config provided
fixture and was in the way while diagnosing difficulties with
getting these changes to work correctly.

The root cause of those problems was that placement changes were
at cross purposes with how the nova.tests.fixtures.Database fixture
expects to work: The goal of these changes was to only configure
and establish those fixtures that were strictly necessary for
placement in the placements tests. However, when _any_ database
is requested from the Database fixture, the context managers for
all of them are configured.

This means, for example, that if a placement fixture, which
originally was not configuring a 'connection' string for the api
or main databases, ran before a later api db fixture, the api db
fixture would fail with no connection string available.

A quick and dirty fix is used here: we set reasonable configuration
for all three databases in the placement tests that need the
placement database fixture.

In the future when placement is extracted, these problems go away
so it does not seem worth the effort (at least not now) to
restructure the nova Database fixture.

blueprint: placement-extract
Change-Id: Ice89e9a25f74caaa53b7df079bd529d172354524
2018-07-26 17:52:59 +01:00
Chris Dent 13bbe6e891 Use placement context in placement functional tests
A few placement functional tests were importing and using
nova.context for a RequestContext. Placement has its own
in nova.api.openstack.placement.context so use that instead.

In the process, stop using an 'admin' context. It is not
necessary in placement as we do not do any policy checking
at the database level.

Change-Id: I1f6e6db6aca7dd160d3e94bb0b6ebf9b4f8dfd7d
2018-07-24 20:54:26 +01:00
Jay Pipes 11c29ae470 do not assume 1 consumer in AllocList.delete_all()
Ever since we introduced support for setting multiple consumers in a
single POST /allocations, the AllocationList.delete_all() method has
been housing a latent bad assumption and bug.

The AllocationList.delete_all() method used to assume that the
AllocationList's Allocation objects were only ever for a single
consumer, and took a shortcut in deleting the allocation by deleting all
allocations with the "first" Allocation's consumer UUID:

```python
    def delete_all(self):
        # Allocations can only have a single consumer, so take advantage of
        # that fact and do an efficient batch delete
        consumer_uuid = self.objects[0].consumer.uuid
        _delete_allocations_for_consumer(self._context, consumer_uuid)
        consumer_obj.delete_consumers_if_no_allocations(
            self._context, [consumer_uuid])
```

The problem with the above is that if you get all the allocations for a
single resource provider, using
AllocationList.get_all_by_resource_provider() and there are more than
one consumer allocating resources against that provider, then calling
AllocationList.delete_all() will only delete *some* of the resource
provider's allocations, not all of them.

Luckily, the handler code has never used AllocationList.delete_all()
after calling AllocationList.get_all_by_resource_provider(), and so
we've not hit this latent bug in production.

However, in the next patch in this series (the reshaper DB work), we
*do* call AllocationList.delete_all() for allocation lists for each
provider involved in the reshape operation, which is why this fix is
important to get done correctly.

Note that this patch renames AllocationList.create_all() to
AllocationList.replace_all() to make it absolutely clear that all of
the allocations for all consumers in the list are first *deleted* by the
codebase and then re-created. We also remove the check in
AllocationList.create_all() that the Allocation objects in the list must
not have an 'id' field set. The reason for that is because in order to
properly implement AllocationList.delete_all() to call DELETE FROM
allocations WHERE id IN (<...>) we need the list of allocation record
internal IDs. These id field values are now properly set on the
Allocation objects when AllocationList.get_all_by_resource_provider()
and AllocationList.get_all_by_consumer_id() are called. This allows that
returned object to have delete_all() called on it and the DELETE
statement to work properly.
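The difference can be shown with a toy in-memory model (illustrative
names and helpers, not nova's actual code): deleting by the first
Allocation's consumer uuid strands other consumers' allocations,
while deleting by internal id removes them all.

```python
# Two consumers allocate against the same resource provider, as
# returned by something like get_all_by_resource_provider().
allocations = [
    {'id': 1, 'consumer': 'consumer-a', 'provider': 'rp-1'},
    {'id': 2, 'consumer': 'consumer-b', 'provider': 'rp-1'},
]


def delete_all_old(allocs):
    # Old behaviour: assume a single consumer and delete only the
    # allocations belonging to the first Allocation's consumer.
    first_consumer = allocs[0]['consumer']
    return [a for a in allocs if a['consumer'] != first_consumer]


def delete_all_fixed(allocs):
    # Fixed behaviour: delete by internal id, covering every consumer,
    # i.e. DELETE FROM allocations WHERE id IN (...).
    ids = {a['id'] for a in allocs}
    return [a for a in allocs if a['id'] not in ids]


print(len(delete_all_old(allocations)))    # 1: consumer-b's allocation survives
print(len(delete_all_fixed(allocations)))  # 0: everything is deleted
```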

Change-Id: I12393b033054683bcc3e6f20da14e6243b4d5577
Closes-bug: #1781430
2018-07-12 16:57:31 -04:00
Tetsuro Nakamura 5b4aa78459 Add microversion for nested allocation candidate
This patch adds a microversion with a release note for allocation
candidates with nested resource provider trees.

From now on we support allocation candidates with nested resource
providers with the following features.

1) ``GET /allocation_candidates`` is aware of nested providers.
   Namely, when provider trees are present, ``allocation_requests``
   in the response of ``GET /allocation_candidates`` can include
   allocations on combinations of multiple resource providers
   in the same tree.
2) ``root_provider_uuid`` and ``parent_provider_uuid`` fields are
   added to ``provider_summaries`` in the response of
   ``GET /allocation_candidates``.
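At the new microversion the response looks roughly like this (a
hedged sketch; uuids and resource figures are illustrative, and the
api-ref is authoritative):

```python
# Illustrative GET /allocation_candidates response for a nested tree:
# a single allocation request spans a parent and its child provider.
candidates = {
    'allocation_requests': [
        {
            'allocations': {
                'parent-uuid': {'resources': {'MEMORY_MB': 1024}},
                'child-uuid': {'resources': {'VGPU': 1}},
            },
        },
    ],
    'provider_summaries': {
        'parent-uuid': {
            'resources': {'MEMORY_MB': {'capacity': 8192, 'used': 0}},
            'root_provider_uuid': 'parent-uuid',  # new in this microversion
            'parent_provider_uuid': None,         # new in this microversion
        },
        'child-uuid': {
            'resources': {'VGPU': {'capacity': 8, 'used': 0}},
            'root_provider_uuid': 'parent-uuid',
            'parent_provider_uuid': 'parent-uuid',
        },
    },
}
```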

Change-Id: I6cecb25c6c16cecc23d4008474d150b1f15f7d8a
Blueprint: nested-resource-providers-allocation-candidates
2018-06-29 17:38:10 +09:00
Chris Dent 0372d82c2c Ensure that os-traits sync is attempted only at start of process
Traits sync had been tried any time a request that might involve
traits was called. If the global was set no syncing was done, but
lock handling was happening.

This change moves the syncing into the deploy.load_app() handling.
This means that the syncing will be attempted any time a new WSGI
application is created. Most of the time this will be at the start of a
new process, but some WSGI servers have interesting threading models so
there's a (slim) possibility that it could be in a thread. Because of
this latter possibility, the locking is still in place.

Functional tests are updated to explicitly do the sync in their
setUp(). Some changes in fixtures are required to make sure that
the database is present prior to the sync.

While these changes are not strictly part of extracting placement, the
consolidation and isolation of database handling code makes where to put
this stuff a bit cleaner and more evident: an update_database() method
in deploy uses an empty DbContext class from db_api to call the
ensure_trait_sync method in resource_provider. update_database is in
deploy because it is an app deployment task and because putting it in
db_api leads to circular import problems.

blueprint placement-extract
Closes-Bug: #1756151

Change-Id: Ic87518948ed5bf4ab79f9819cd94714e350ce265
2018-06-19 13:22:04 +01:00
Zuul 55371110ae Merge "Ignore UserWarning for scope checks during test runs" 2018-06-15 22:46:05 +00:00
Chris Dent 0044beb358 Optional separate database for placement API
If 'connection' is set in the 'placement_database' conf group use
that as the connection URL for the placement database. Otherwise if
it is None, the default, then use the entire api_database conf group
to configure a database connection.
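The connection-selection logic amounts to the following (a sketch
using a plain dict in place of the oslo.config groups; the real code
also wires up separate context managers):

```python
def placement_db_connection(conf):
    """Pick the placement DB URL: use [placement_database]/connection
    when it is set, otherwise fall back to the [api_database] settings.
    """
    placement = conf.get('placement_database', {})
    if placement.get('connection') is not None:
        return placement['connection']
    return conf['api_database']['connection']


conf = {
    'api_database': {'connection': 'mysql+pymysql://nova@db/nova_api'},
    'placement_database': {'connection': None},
}
print(placement_db_connection(conf))  # falls back to the api_database URL

conf['placement_database']['connection'] = (
    'mysql+pymysql://placement@db/placement')
print(placement_db_connection(conf))  # now the dedicated placement URL
```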

When placement_database.connection is not None a replica of the
structure of the API database is used, using the same migrations
used for the API database.

A placement_context_manager is added and used by the OVO objects in
nova.api.openstack.placement.objects.*. If there is no separate
placement database, this is still used, but points to the API
database.

nova.test and nova.test.fixtures are adjusted to add awareness of
the placement database.

This functionality is being provided to allow deployers to choose
between establishing a new database now or requiring a migration
later. The default is migration later. A reno is added to explain
the existence of the configuration setting.

This change returns the behavior removed by the revert in commit
39fb302fd9 but done in a more
appropriate way.

Note that with the advent of the nova-status command, which checks
to see if placement is "ready", the tests here had to be adjusted.
If we do allow a separate database, the code will now check the
separate database (if configured), but nothing is done with regard
to migrating from the api to placement database or checking that.

blueprint placement-extract

Change-Id: I7e1e89cd66397883453935dcf7172d977bf82e84
Implements: blueprint optional-placement-database
Co-Authored-By: Roman Podoliaka <rpodolyaka@mirantis.com>
2018-06-15 13:01:50 +01:00
Jay Pipes f449650109 placement: Allocation.consumer field
Removes the consumer_id, project_id and user_id fields from the
Allocation object definition. These values are now found in the Consumer
object that is embedded in the Allocation object which is now
non-nullable.

Modifies the serialization in the allocation handler to output
Allocation.consumer.project.external_id and
Allocation.consumer.user.external_id when appropriate for the
microversion.

Calls the create_incomplete_consumers() method during
AllocationList.get_all_by_consumer_id() and
AllocationList.get_all_by_resource_provider() to online-migrate missing
consumer records.

Change-Id: Icae5038190ab8c7bbdb38d54ae909fcbf9048912
blueprint: add-consumer-generation
2018-06-13 18:18:37 -04:00
Matt Riedemann 7b6fb27452 Ignore UserWarning for scope checks during test runs
Placement API policy rules are defaulting to system scope.
Scope checks are disabled by default in oslo.policy, but
if you hit the API with a token that doesn't match the scope,
it generates a UserWarning, for every policy check on that
request. This is pretty annoying, so just filter those warnings
during our test runs.
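Suppressing such warnings during a test run is a small stdlib
``warnings`` filter; the message pattern and the stand-in policy check
below are illustrative, not nova's exact code:

```python
import warnings


def scope_check(token_scope, rule_scope):
    # Stand-in for a policy check that warns on a scope mismatch.
    if token_scope != rule_scope:
        warnings.warn('Policy scope check failed', UserWarning)


with warnings.catch_warnings(record=True) as caught:
    # Filter the noisy per-check warnings for the duration of the run.
    warnings.filterwarnings(
        'ignore', message='Policy .*', category=UserWarning)
    scope_check('project', 'system')

print(len(caught))  # 0: the warning was suppressed
```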

Change-Id: I30ed00a96390d2c76cfc2a40c5a47c16eb51711c
2018-06-13 17:18:44 -04:00
Chris Dent b268454987 Extract part of PlacementFixture to placement
In nova.tests.fixtures is a PlacementFixture that performs two roles. One
is to provide a working placement service via wsgi-intercept. The other
is to provide a working client of that service in the form of a
scheduler report client that uses appropriate headers and endpoints.

This change extracts the first role to a new fixture hosted within the
placement hierarchy and makes the second role a subclass of the first.

To make this work nicely, existing placement fixtures (in fixtures.py),
used solely for the gabbi tests are moved into a new directory called
"fixtures" and renamed to "gabbits.py". "gabbi.py" can't be used
because of naming conflicts while importing.

This is a small part of ongoing work related to
blueprint placement-extract

Change-Id: I126ada549d3879f89d1ec64b743da14807a39351
2018-06-09 00:55:06 +01:00