Commit Graph

13 Commits

Chris Dent 787bb33606 Use external placement in functional tests
Adjust the fixtures used by the functional tests so they
use placement database and web fixtures defined by placement
code. To avoid making redundant changes, the solely placement-
related unit and functional tests are removed, but the placement
code itself is not (yet).

openstack-placement is required by the functional tests. It is not
added to test-requirements as we do not want unit tests to depend
on placement in any way, and we enforce this by not having placement
in the test env.

The concept of tox-siblings is used to ensure that the
placement requirement will be satisfied correctly if there is a
depends-on. To make this happen, the functional jobs defined in
.zuul.yaml are updated to require openstack/placement.

tox.ini has to be updated to use an envdir that has the same
name as the job. Otherwise the tox siblings role in ansible cannot work.

The handling of the placement fixtures is moved out of nova/test.py
into the functional tests that actually use it because we do not
want unit tests (which get the base test class out of test.py) to
have anything to do with placement. This requires adjusting some
test files to use absolute imports.

Similarly, a test of the comparison function for the api samples tests
is moved into functional, because it depends on placement functionality,

TestUpgradeCheckResourceProviders in unit.cmd.test_status is moved into
a new test file: nova/tests/functional/test_nova_status.py. This is done
because it requires the PlacementFixture, which is only available to
functional tests. A MonkeyPatch is required in the test to make sure that
the right context managers are used at the right time in the command
itself (otherwise some tables do not exist). In the test itself, to avoid
speaking directly to the placement database, which would require
manipulating the RequestContext objects, resource providers are now
created over the API.
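
Roughly, the new pattern looks like the sketch below; the fixture module
and client attribute names are assumptions for illustration, not the
exact nova names.

```python
# Hypothetical sketch: fixture module and client attribute names are
# illustrative, not the exact ones in the tree.
from nova import test
from nova.tests.functional import fixtures as func_fixtures


class TestUpgradeCheckResourceProviders(test.NoDBTestCase):
    def setUp(self):
        super(TestUpgradeCheckResourceProviders, self).setUp()
        # Functional tests opt in to placement explicitly rather than
        # inheriting it from the shared base class in nova/test.py.
        self.placement = self.useFixture(func_fixtures.PlacementFixture())

    def _create_resource_provider(self, name):
        # Create providers over the placement API so the test never has
        # to build placement RequestContext objects of its own.
        return self.placement.api.post(
            '/resource_providers', body={'name': name})
```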

Co-Authored-By: Balazs Gibizer <balazs.gibizer@ericsson.com>
Change-Id: Idaed39629095f86d24a54334c699a26c218c6593
2018-12-12 18:46:49 +00:00
Eric Fried 8e1ca5bf34 Use uuidsentinel from oslo.utils
oslo.utils release 3.37.0 [1] introduced uuidsentinel [2]. This change
rips out nova's uuidsentinel and replaces it with the one from
oslo.utils.

[1] https://review.openstack.org/#/c/599754/
[2] https://review.openstack.org/#/c/594179/
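
For reference, usage is unchanged apart from the import; each attribute
of the oslo.utils sentinel resolves to one stable UUID string per process:

```python
from oslo_utils.fixture import uuidsentinel as uuids

# The same attribute name always yields the same UUID within a process,
# while different names yield different UUIDs.
instance_uuid = uuids.instance
assert uuids.instance == instance_uuid
assert uuids.instance != uuids.other_instance
```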

Change-Id: I7f5f08691ca3f73073c66c29dddb996fb2c2b266
Depends-On: https://review.openstack.org/600041
2018-09-05 09:08:54 -05:00
Tetsuro Nakamura 7d824e6d37 Refactor AllocationFixture in placement test
This patch refactors the allocation fixture used in some placement
api tests by substituting common functions in test_base.py for the
existing object management functions.

Change-Id: Ide6544d1cf9e1ed154b42075acdd7af986c3afe8
2018-08-02 14:46:10 +09:00
Chris Dent f5783d90bc [placement] Use base test in placement functional tests
There is now a placement.base.TestCase for placement functional tests
which assembles the necessary configuration and fixtures. This change
uses that base class in the db functional tests and extends the base
class as required to add all the necessary functionality.

In the process issues were exposed in the fixtures.gabbits use of
oslo_config (causing tests to fail based on fallout from changes
elsewhere in the functional tests) so this change also fixes that
and limits the gabbi tests to only caring about the placement database
connection, which is more correct and how it should have been all
along.

This change removes the ConfPatcher fixture in fixtures.placement
because it is no better than using the oslo_config provided
fixture and was in the way while diagnosing difficulties with
getting these changes to work correctly.

The root cause of those problems was that placement changes were
at cross purposes with how the nova.tests.fixtures.Database fixture
expects to work: The goal of these changes was to only configure
and establish those fixtures that were strictly necessary for
placement in the placement tests. However, when _any_ database
is requested from the Database fixture, the context managers for
all of them are configured.

This means, for example, that if a placement fixture, which
originally was not configuring a 'connection' string for the api
or main databases, ran before a later api db fixture, the api db
fixture would fail with no connection string available.

The quick and dirty fix is used here to fix the problem: we set
reasonable configuration for all three databases in the placement
tests that need the placement database fixture.
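
A minimal sketch of the shape of that workaround, assuming the standard
oslo.config test fixture (the actual connection values used by the tests
may differ):

```python
from oslo_config import cfg
from oslo_config import fixture as config_fixture

CONF = cfg.CONF


def configure_all_databases(test_case, connection='sqlite://'):
    """Give every database group a usable connection string."""
    conf = test_case.useFixture(config_fixture.Config(CONF))
    for group in ('database', 'api_database', 'placement_database'):
        # Whichever Database fixture runs later then finds a configured
        # context manager instead of a missing connection string.
        conf.config(connection=connection, group=group)
```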

In the future when placement is extracted, these problems go away
so it does not seem worth the effort (at least not now) to
restructure the nova Database fixture.

blueprint: placement-extract
Change-Id: Ice89e9a25f74caaa53b7df079bd529d172354524
2018-07-26 17:52:59 +01:00
Chris Dent 13bbe6e891 Use placement context in placement functional tests
A few placement functional tests were importing and using
nova.context for a RequestContext. Placement has its own
in nova.api.openstack.placement.context so use that instead.

In the process, stop using an 'admin' context. It is not
necessary in placement as we do not do any policy checking
at the database level.
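
The substitution is straightforward; a sketch based on the in-tree
package layout named above:

```python
# Placement's own context, as named in this commit message; a plain
# (non-admin) context is sufficient.
from nova.api.openstack.placement import context as placement_context

ctx = placement_context.RequestContext()
```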

Change-Id: I1f6e6db6aca7dd160d3e94bb0b6ebf9b4f8dfd7d
2018-07-24 20:54:26 +01:00
Jay Pipes 11c29ae470 do not assume 1 consumer in AllocList.delete_all()
Ever since we introduced support for setting multiple consumers in a
single POST /allocations, the AllocationList.delete_all() method has
been housing a latent bad assumption and bug.

The AllocationList.delete_all() method used to assume that the
AllocationList's Allocation objects were only ever for a single
consumer, and took a shortcut in deleting the allocation by deleting all
allocations with the "first" Allocation's consumer UUID:

```python
    def delete_all(self):
        # Allocations can only have a single consumer, so take advantage of
        # that fact and do an efficient batch delete
        consumer_uuid = self.objects[0].consumer.uuid
        _delete_allocations_for_consumer(self._context, consumer_uuid)
        consumer_obj.delete_consumers_if_no_allocations(
            self._context, [consumer_uuid])
```

The problem with the above is that if you get all the allocations for a
single resource provider, using
AllocationList.get_all_by_resource_provider() and there is more than
one consumer allocating resources against that provider, then calling
AllocationList.delete_all() will only delete *some* of the resource
provider's allocations, not all of them.

Luckily, the handler code has never used AllocationList.delete_all()
after calling AllocationList.get_all_by_resource_provider(), and so
we've not hit this latent bug in production.

However, in the next patch in this series (the reshaper DB work), we
*do* call AllocationList.delete_all() for allocation lists for each
provider involved in the reshape operation, which is why this fix is
important to get done correctly.

Note that this patch renames AllocationList.create_all() to
AllocationList.replace_all() to make it absolutely clear that all of
the allocations for all consumers in the list are first *deleted* by the
codebase and then re-created. We also remove the check in
AllocationList.create_all() that the Allocation objects in the list must
not have an 'id' field set. The reason for that is that, in order to
properly implement AllocationList.delete_all() to call DELETE FROM
allocations WHERE id IN (<...>) we need the list of allocation record
internal IDs. These id field values are now properly set on the
Allocation objects when AllocationList.get_all_by_resource_provider()
and AllocationList.get_all_by_consumer_id() are called. This allows that
returned object to have delete_all() called on it and the DELETE
statement to work properly.
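
A minimal sketch of the delete-by-internal-id approach described here
(helper and model names are illustrative, not copied from the patch):

```python
from nova.db.sqlalchemy import api_models as models


def _delete_allocations_by_ids(context, alloc_ids):
    # Effectively: DELETE FROM allocations WHERE id IN (<...>)
    context.session.query(models.Allocation).filter(
        models.Allocation.id.in_(alloc_ids)).delete(
            synchronize_session=False)


def delete_all(self):
    # The id field is now populated by get_all_by_resource_provider()
    # and get_all_by_consumer_id(), so we can delete exactly these rows
    # no matter how many consumers the list spans.
    alloc_ids = [alloc.id for alloc in self.objects]
    _delete_allocations_by_ids(self._context, alloc_ids)
```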

Change-Id: I12393b033054683bcc3e6f20da14e6243b4d5577
Closes-bug: #1781430
2018-07-12 16:57:31 -04:00
Tetsuro Nakamura 5b4aa78459 Add microversion for nested allocation candidate
This patch adds a microversion with a release note for allocation
candidates with nested resource provider trees.

From now on we support allocation candidates with nested resource
providers, with the following features:

1) ``GET /allocation_candidates`` is aware of nested providers.
   Namely, when provider trees are present, ``allocation_requests``
   in the response of ``GET /allocation_candidates`` can include
   allocations on combinations of multiple resource providers
   in the same tree.
2) ``root_provider_uuid`` and ``parent_provider_uuid`` fields are
   added to ``provider_summaries`` in the response of
   ``GET /allocation_candidates`` (see the sketch below).
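
An illustrative ``provider_summaries`` fragment showing the new fields
(UUIDs are placeholders and the real payload contains more keys):

```python
provider_summaries = {
    "ROOT_RP_UUID": {
        "resources": {"VCPU": {"capacity": 8, "used": 2}},
        # New in this microversion: tree information for each provider.
        "root_provider_uuid": "ROOT_RP_UUID",
        "parent_provider_uuid": None,
    },
    "CHILD_RP_UUID": {
        "resources": {"SRIOV_NET_VF": {"capacity": 4, "used": 0}},
        "root_provider_uuid": "ROOT_RP_UUID",
        "parent_provider_uuid": "ROOT_RP_UUID",
    },
}
```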

Change-Id: I6cecb25c6c16cecc23d4008474d150b1f15f7d8a
Blueprint: nested-resource-providers-allocation-candidates
2018-06-29 17:38:10 +09:00
Chris Dent 0372d82c2c Ensure that os-traits sync is attempted only at start of process
Traits sync had been tried any time a request that might involve
traits was handled. If the global was set, no syncing was done, but
lock handling was still happening.

This change moves the syncing into the deploy.load_app() handling.
This means that the syncing will be attempted any time a new WSGI
application is created. Most of the time this will be at the start of a
new process, but some WSGI servers have interesting threading models so
there's a (slim) possibility that it could be in a thread. Because of
this latter possibility, the locking is still in place.

Functional tests are updated to explicitly do the sync in their
setUp(). Some changes in fixtures are required to make sure that
the database is present prior to the sync.

While these changes are not strictly part of extracting placement, the
consolidation and isolation of database handling code makes it a bit
cleaner and more evident where this should live: an update_database()
method in deploy uses an empty DbContext class from db_api to call the
ensure_trait_sync method in resource_provider. update_database is in
deploy because it is an app deployment task and because putting it in
db_api leads to circular import problems.
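
The pattern is roughly the sketch below (a simplified approximation of
the module-level state involved, not the exact code):

```python
from oslo_concurrency import lockutils

_TRAITS_SYNCED = False
_TRAIT_LOCK = 'trait_sync'


def ensure_trait_sync(ctx):
    """Sync os-traits into the database at most once per process."""
    global _TRAITS_SYNCED
    # The lock remains because some WSGI servers may call
    # deploy.load_app() from a thread rather than at process start.
    with lockutils.lock(_TRAIT_LOCK):
        if not _TRAITS_SYNCED:
            _trait_sync(ctx)  # assumed helper that writes missing traits
            _TRAITS_SYNCED = True
```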

blueprint placement-extract
Closes-Bug: #1756151

Change-Id: Ic87518948ed5bf4ab79f9819cd94714e350ce265
2018-06-19 13:22:04 +01:00
Chris Dent 0044beb358 Optional separate database for placement API
If 'connection' is set in the 'placement_database' conf group use
that as the connection URL for the placement database. Otherwise if
it is None, the default, then use the entire api_database conf group
to configure a database connection.

When placement_database.connection is not None a replica of the
structure of the API database is used, using the same migrations
used for the API database.

A placement_context_manager is added and used by the OVO objects in
nova.api.openstack.placement.objects.*. If there is no separate
placement database, this is still used, but points to the API
database.
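
The selection logic is roughly the following sketch, modeled on the
configure() pattern nova's db_api already uses (exact option handling
may differ):

```python
from oslo_db.sqlalchemy import enginefacade

placement_context_manager = enginefacade.transaction_context()


def _get_db_conf(conf_group):
    # oslo.db options map directly to enginefacade keyword arguments.
    return dict(conf_group.items())


def configure(conf):
    # Fall back to the api_database settings when no dedicated
    # placement connection has been configured.
    if conf.placement_database.connection is None:
        placement_context_manager.configure(
            **_get_db_conf(conf.api_database))
    else:
        placement_context_manager.configure(
            **_get_db_conf(conf.placement_database))
```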

nova.test and nova.test.fixtures are adjusted to add awareness of
the placement database.

This functionality is being provided to allow deployers to choose
between establishing a new database now or requiring a migration
later. The default is migration later. A reno is added to explain
the existence of the configuration setting.

This change returns the behavior removed by the revert in commit
39fb302fd9 but done in a more
appropriate way.

Note that with the advent of the nova-status command, which checks
to see if placement is "ready", the tests here had to be adjusted.
If we do allow a separate database, the code will now check the
separate database (if configured), but nothing is done with regard
to migrating from the api to placement database or checking that.

blueprint placement-extract

Change-Id: I7e1e89cd66397883453935dcf7172d977bf82e84
Implements: blueprint optional-placement-database
Co-Authored-By: Roman Podoliaka <rpodolyaka@mirantis.com>
2018-06-15 13:01:50 +01:00
Jay Pipes f449650109 placement: Allocation.consumer field
Removes the consumer_id, project_id and user_id fields from the
Allocation object definition. These values are now found in the Consumer
object that is embedded in the Allocation object which is now
non-nullable.

Modifies the serialization in the allocation handler to output
Allocation.consumer.project.external_id and
Allocation.consumer.user.external_id when appropriate for the
microversion.
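
Schematically, the handler does something like the sketch below; the
version gate shown is illustrative rather than the real microversion
check:

```python
def _serialize_consumer_fields(allocation, want_version):
    # Hypothetical helper: the real handler builds the full allocations
    # payload and uses placement's microversion utilities for the check.
    serialized = {}
    if want_version.matches((1, 12)):  # illustrative minimum version
        consumer = allocation.consumer
        serialized['project_id'] = consumer.project.external_id
        serialized['user_id'] = consumer.user.external_id
    return serialized
```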

Calls the create_incomplete_consumers() method during
AllocationList.get_all_by_consumer_id() and
AllocationList.get_all_by_resource_provider() to online-migrate missing
consumer records.

Change-Id: Icae5038190ab8c7bbdb38d54ae909fcbf9048912
blueprint: add-consumer-generation
2018-06-13 18:18:37 -04:00
Eric Fried 71de700c8f Use helpers in test_resource_provider (func)
Refactor test_resource_provider to use helper methods from test_base.

Change-Id: I577751867346454e2c49dd303bb79bcb2b8f6686
2018-05-04 09:17:50 -05:00
Eric Fried 69baecbcc2 Use test_base symbols directly
The preceding patch brought in symbols from the test_base module to
minimize the delta.  This patch removes these, renames the private ones
to be public, and changes all their usages to reference them from
test_base directly.

Change-Id: I43b7cfe9dbcb6de607f0c166b065a1ec3543c256
2018-05-04 08:43:20 -05:00
Eric Fried e856112afa Base test module/class for functional placement db
Initial change set factoring a common base test class and helper
utilities out of test_resource_provider and test_allocation_candidates.
This one minimizes the delta by copying some module-level symbols into
the test suites.  Subsequent patches will refactor to avoid this, clean
up naming (so we're not accessing privates from another module), and use
the now-common utility methods in more places.
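
The helpers being factored out look roughly like the sketch below
(imports, names and signatures approximate the shared test_base code):

```python
from nova.api.openstack.placement.objects import resource_provider as rp_obj
from nova.tests import uuidsentinel as uuids


def create_provider(context, name, parent=None):
    # Build and persist a resource provider with a deterministic UUID
    # derived from its name (via uuidsentinel).
    rp = rp_obj.ResourceProvider(
        context, name=name, uuid=getattr(uuids, name),
        parent_provider_uuid=parent)
    rp.create()
    return rp


def add_inventory(rp, resource_class, total, **kwargs):
    # Attach an inventory record to the provider with sane defaults.
    inv = rp_obj.Inventory(
        rp._context, resource_provider=rp,
        resource_class=resource_class, total=total, **kwargs)
    inv.obj_set_defaults()
    rp.add_inventory(inv)
    return inv
```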

Change-Id: Ibd38a3903a2d347a4ff4702d0d1172f6e37e7d19
2018-05-04 08:43:20 -05:00