This adds a get_available_node_uuids() method to the virt driver
interface. This aims to eventually replace the nodename-based
interface, but currently provides an implementation that will work
for most drivers. Any driver that does not override this method
will get the locally-persistent UUID from nova.virt.node.
Ironic obviously needs to override this (which is easy), as does the
fake driver, because it supports multiple nodes for testing. The
libvirt driver overrides it only because we test multiple libvirt
driver instances on a single host and we need each instantiation
of it to "capture" the UUID we have mocked out at the time it is
started.
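The default (non-Ironic, non-fake) behaviour could be sketched like this; the file path and helper name are illustrative assumptions, not nova.virt.node's actual code:

```python
# Sketch only: persist a node UUID locally, generating it on first use.
# Path and function name are assumptions for illustration.
import os
import uuid


def get_local_node_uuid(path):
    """Return the node UUID persisted at `path`, creating it once."""
    if os.path.exists(path):
        with open(path) as f:
            return f.read().strip()
    node_uuid = str(uuid.uuid4())
    with open(path, 'w') as f:
        f.write(node_uuid)
    return node_uuid
```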
Change-Id: Ibe14d2b223c737d82c217a74bc94e41603271a9d
I thought we fixed all the double mocking issues with
I3998d0d49583806ac1c3ae64f1b1fe343cefd20d but I was wrong.
While we used both mock and unittest.mock, fixtures.MockPatch
used the mock lib instead of the unittest.mock lib.
The patch Ibf4f36136f2c65adad64f75d665c00cf2de4b400 (Remove the PowerVM driver)
removed the last user of the mock lib from nova, so mock was also
removed from test-requirements. As a result, fixtures.MockPatch
started using unittest.mock too.
Before Ibf4f36136f2c65adad64f75d665c00cf2de4b400 a function could be mocked
twice: once with unittest.mock and once with fixtures.MockPatch (still
using mock). However, after that patch both paths of such double
mocking go through unittest.mock and the second one fails.
So this patch fixes double mocking so far hidden behind
fixtures.MockPatch.
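For illustration only (this is not nova code): double mocking is easy to get wrong because calls are recorded on the innermost mock, so assertions against the outer one are misleading.

```python
# Illustrative only: patching the same target twice with unittest.mock.
# The call lands on the innermost mock; the outer mock never sees it.
from unittest import mock


class Service:
    def ping(self):
        return 'real'


with mock.patch.object(Service, 'ping', return_value='outer') as outer:
    with mock.patch.object(Service, 'ping', return_value='inner') as inner:
        result = Service().ping()

assert result == 'inner'   # the inner patch wins...
assert inner.called        # ...and records the call
assert not outer.called    # the outer mock is never invoked
```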
Also this patch makes the py310 and functional-py310 jobs voting at
least in the check queue to prevent future changes adding double mocks.
Change-Id: Ic1352ec31996577a5d0ad18a057339df3e49de25
Now that we no longer support py27, we can use the standard library
unittest.mock module instead of the third party mock lib. Most of this
is autogenerated, as described below, but there is one manual change
necessary:
nova/tests/functional/regressions/test_bug_1781286.py
We need to avoid using 'fixtures.MockPatch' since fixtures is using
'mock' (the library) under the hood and a call to 'mock.patch.stop'
found in that test will now "stop" mocks from the wrong library. We
have discussed making this configurable but the option proposed isn't
that pretty [1] so this is better.
The remainder was auto-generated with the following (hacky) script, with
one or two manual tweaks after the fact:
import glob

for path in glob.glob('nova/tests/**/*.py', recursive=True):
    with open(path) as fh:
        lines = fh.readlines()

    if 'import mock\n' not in lines:
        continue

    import_group_found = False
    create_first_party_group = False
    for num, line in enumerate(lines):
        line = line.strip()
        if line.startswith('import ') or line.startswith('from '):
            tokens = line.split()
            for lib in (
                'ddt', 'six', 'webob', 'fixtures', 'testtools',
                'neutron', 'cinder', 'ironic', 'keystone', 'oslo',
            ):
                if lib in tokens[1]:
                    create_first_party_group = True
                    break

            if create_first_party_group:
                break

            import_group_found = True

        if not import_group_found:
            continue

        if line.startswith('import ') or line.startswith('from '):
            tokens = line.split()
            if tokens[1] > 'unittest':
                break
            elif tokens[1] == 'unittest' and (
                len(tokens) == 2 or tokens[4] > 'mock'
            ):
                break
        elif not line:
            break

    if create_first_party_group:
        lines.insert(num, 'from unittest import mock\n\n')
    else:
        lines.insert(num, 'from unittest import mock\n')

    del lines[lines.index('import mock\n')]

    with open(path, 'w+') as fh:
        fh.writelines(lines)
Note that we cannot remove mock from our requirements files yet due to
importing pypowervm unit test code in nova unit tests. This library
still uses the mock lib, and since we are importing test code and that
lib (correctly) only declares mock in its test-requirements.txt, mock
would not otherwise be installed and would cause errors while loading
nova unit test code.
[1] https://github.com/testing-cabal/fixtures/pull/49
Change-Id: Id5b04cf2f6ca24af8e366d23f15cf0e5cac8e1cc
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
The fake_notifier uses module globals and also needs careful stub and
reset calls to work properly. This patch wraps the fake_notifier into a
proper Fixture that automates the complexity.
This is a fairly large patch but it does not change any logic; it just
redirects calls from the fake_notifier to the new NotificationFixture.
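The pattern, sketched here with illustrative names (this is not the actual NotificationFixture), looks like:

```python
# Sketch: wrap a module-global fake notifier in a fixture-style object
# that automates the stub/reset dance. Names are illustrative.
NOTIFICATIONS = []  # module global, as in fake_notifier


def notify(event):
    NOTIFICATIONS.append(event)


class NotificationFixture:
    """Handles reset automatically instead of relying on each test."""

    def setUp(self):
        self.reset()

    def reset(self):
        del NOTIFICATIONS[:]

    @property
    def notifications(self):
        return list(NOTIFICATIONS)
```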
Change-Id: I456f685f480b8de71014cf232a8f08c731605ad8
This rather beefy (but also quite simple) patch replaces the
'stub_out_image_service' call and associated cleanup in all functional
tests with a new 'GlanceFixture', based on the old 'FakeImageService'.
The use of a fixture means we don't have to worry about teardown and
allows us to stub Glance in the same manners as Cinder, Neutron,
Placement etc.
Unit test cleanup is handled in a later patch.
Change-Id: I6daea47988181dfa6dde3d9c42004c0ecf6ae87a
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
This patch mainly handles the remaining code review comments
as follow-up work to improve coding style and test cases; there
is no code logic change to the feature itself.
Change Summary:
1) remove unnecessary logic judgement code in _validate_rc function
2) regroup import order for standard library and 3rd-party library
in test_resource_tracker.py
3) unify test cases in ValidateProviderConfigTestCases class
for both positive and negative test
4) rename test cases and test data files with more meaningful names
Change-Id: If940dfeb5b62ff9f11ca98e9125357c0a472dbfe
Blueprint: provider-config-file
This series implements the referenced blueprint to allow for specifying
custom resource provider traits and inventories via yaml config files.
This fourth commit adds the config option, release notes, documentation,
functional tests, and calls to the previously implemented functions in
order to load provider config files and merge them to the provider tree.
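A provider config file in this scheme looks roughly like the following; the resource class and trait names here are made up for illustration:

```yaml
meta:
  schema_version: '1.0'
providers:
  - identification:
      name: $COMPUTE_NODE
    inventories:
      additional:
        - CUSTOM_EXAMPLE_RESOURCE_CLASS:
            total: 100
            reserved: 0
    traits:
      additional:
        - CUSTOM_EXAMPLE_TRAIT
```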
Change-Id: I59c5758c570acccb629f7010d3104e00d79976e4
Blueprint: provider-config-file
It's unnecessary, particularly when none of the other service
fixtures use it.
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
Change-Id: If849f80c0372872b2de57b20e8b63c069a54ccff
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
PlacementDirect was integrated into a functional test suite when it was
first created as a way to prove that it worked [1] and demonstrate how
to use it.
However, it was a pain then, because the interceptor needs to be created
every time you want to use it; and since extracted placement started
diverging from in-tree placement, other problems started cropping up
(see the associated bug).
So this commit removes the use of PlacementDirect from nova. Details:
- test_report_client now uses PlacementFixture. So all the `with
interceptor` context management is gone. This accounts for the vast
majority of the apparent change, which is just outdenting those
contexts.
- SchedulerReportClientTestBase, which was doing some hocus pocus to
wrap the SchedulerReportClient such that we could do some microversion
checks, is removed. The test suite simply instantiates the
microversion-checking wrapper class directly as the client used by the
test cases.
- We were taking advantage of a PlacementDirect feature allowing us to
default to the latest microversion if not explicitly specified in the
request. Without this, we had to add the `version` kwarg to some of
the calls we were making to SchedulerReportClient primitives
(get/put/post/delete).
- A piece of test_update_from_provider_tree was using a
deliberately-broken interceptor to prove that the code in question
wasn't hitting the API. We replace this with a non-callable mock on
the Adapter's request method.
- test_global_request_id was taking advantage of the interceptor to
validate that the global request ID was making it to the "other side"
of the API boundary. This was fun, but overkill. We now simply assert
that the correct HTTP header is making it into the ksa Adapter's
request method.
- Functional test suite test_resource_tracker.IronicResourceTrackerTest
was inheriting from the SchedulerReportClientTestBase class, but not
using the interceptor anywhere. Can't tell you why that was done. So
now it just uses the plain old test.TestCase like everyone else.
[1] This commit does remove all of nova's testing of PlacementDirect.
However, it is still tested in the placement repository itself:
69b9659a45/placement/tests/functional/test_direct.py
Change-Id: Icb889c09a69e7c5cbf9330e5d9917d6ab3ac3dc5
Related-Bug: #1818560
This resolves the TODO in the test_resource_tracker module by
implementing the update_provider_tree method on the mocked
virt driver. As a result a few tests need to mock out the call to
_sync_compute_service_disabled_trait since we no longer hit the
NotImplementedError block and skip that method.
The same is done in the functional test as well even though there
was no TODO for it in that module.
This is part of a series of changes to eventually drop compat for
non-update_provider_tree implementations.
Change-Id: Iff7805deb041596db30465b52658ca77ddf598dd
Moves the allocation retrieval earlier; the allocations will be passed
to instance_claim/rebuild_claim/resize_claim in ResourceTracker,
so we can claim resources according to the allocations.
Change-Id: I59aec72e158eb2859bb6178b2a42d3f3438ab0f3
Partially-Implements: blueprint virtual-persistent-memory
Co-Authored-By: He Jie Xu <hejie.xu@intel.com>
With the extraction of placement we ended up with resource class names
being duplicated between nova and placement. To address that, the
os-resource-classes library [1] was created to provide a single
authority for standard resource classes and the format of custom
classes.
This patch changes nova to use it, removing the use of the rc_fields
module which used to have the information. A method left in it
(normalize_name) has been moved to utils.py, renamed as
normalize_rc_name, and callers and tests updated accordingly.
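The normalization itself is small; a sketch consistent with what normalize_rc_name does (squash non-alphanumerics to underscores, upper-case, prefix the custom namespace), though the real implementation lives in nova's utils:

```python
# Sketch of custom resource class name normalization; illustrative,
# not a verbatim copy of nova's utils.normalize_rc_name.
import re

CUSTOM_NAMESPACE = 'CUSTOM_'


def normalize_rc_name(rc_name):
    """Normalize an arbitrary string into custom resource class form."""
    if rc_name is None:
        return None
    # Replace runs of non-alphanumeric characters with underscores.
    norm_name = re.sub('[^0-9A-Za-z]+', '_', rc_name)
    return CUSTOM_NAMESPACE + norm_name.upper()
```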
Because the placement code is being kept in nova for the time being,
that code's use of rc_fields is maintained, and the module too.
A note is added in the module explaining that. Backporting the changes
from extracted-placement to placement-in-nova was considered but
because we no longer have placement tests in nova, that didn't seem
like the right thing to do.
requirements and lower-constraints have been updated.
os-resource-classes is already in global requirements.
For reference the related placement change is at [2].
[1] https://docs.openstack.org/os-resource-classes
[2] https://review.openstack.org/#/c/623556/
Change-Id: I8e579920c0eaca81b563a87429c930b21b3d4dc5
There was one edge case in the compute manager wherein we would
reinitialize the resource tracker. Jay promises that isn't needed
anymore, so this change removes it. That allows us to remove the
_get_resource_tracker() helper and set up the resource tracker just once
during __init__ of the compute manager.
Change-Id: Ibb8c12fb2799bb5ceb9e3d72a2b86dbb4f14451e
A step toward getting rid of the SchedulerClient intermediary, this
patch removes the reportclient member from SchedulerClient, instead
instantiating SchedulerReportClient directly wherever it's needed.
Change-Id: I14d1a648843c6311a962aaf99a47bb1bebf7f5ea
Adjust the fixtures used by the functional tests so they
use placement database and web fixtures defined by placement
code. To avoid making redundant changes, the solely placement-
related unit and functional tests are removed, but the placement
code itself is not (yet).
openstack-placement is required by the functional tests. It is not
added to test-requirements as we do not want unit tests to depend
on placement in any way, and we enforce this by not having placement
in the test env.
The concept of tox-siblings is used to ensure that the
placement requirement will be satisfied correctly if there is a
depends-on. To make this happen, the functional jobs defined in
.zuul.yaml are updated to require openstack/placement.
tox.ini has to be updated to use a envdir that is the same
name as job. Otherwise the tox siblings role in ansible cannot work.
The handling of the placement fixtures is moved out of nova/test.py
into the functional tests that actually use it because we do not
want unit tests (which get the base test class out of test.py) to
have anything to do with placement. This requires adjusting some
test files to use absolute imports.
Similarly, a test of the comparison function for the api samples tests
is moved into functional, because it depends on placement functionality.
TestUpgradeCheckResourceProviders in unit.cmd.test_status is moved into
a new test file: nova/tests/functional/test_nova_status.py. This is done
because it requires the PlacementFixture, which is only available to
functional tests. A MonkeyPatch is required in the test to make sure that
the right context managers are used at the right time in the command
itself (otherwise some tables do not exist). In the test itself, to avoid
speaking directly to the placement database, which would require
manipulating the RequestContext objects, resource providers are now
created over the API.
Co-Authored-By: Balazs Gibizer <balazs.gibizer@ericsson.com>
Change-Id: Idaed39629095f86d24a54334c699a26c218c6593
This patch adds new ``initial_xxx_allocation_ratio`` CONF options
and modifies the resource tracker's initial compute node creation to
use these values.
During the update_available_resource periodic task, the allocation
ratios reported to inventory for VCPU, MEMORY_MB and DISK_GB will
be based on:
* If CONF.*_allocation_ratio is set, use it. This overrides everything
including externally set allocation ratios via the placement API.
* If reporting inventory for the first time, the
CONF.initial_*_allocation_ratio value is used.
* For everything else, the inventory reported remains unchanged which
allows operators to set the allocation ratios on the inventory records
in placement directly without worrying about nova-compute overwriting
those changes.
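The precedence above can be sketched as follows; the function and parameter names are illustrative, not nova's actual code:

```python
def pick_allocation_ratio(conf_ratio, initial_ratio, existing_ratio):
    """Sketch of the ratio precedence described above (names assumed).

    conf_ratio:     CONF.xxx_allocation_ratio (falsy means unset)
    initial_ratio:  CONF.initial_xxx_allocation_ratio
    existing_ratio: ratio already on the placement inventory, or None
                    when reporting inventory for the first time
    """
    if conf_ratio:
        # Explicit config overrides everything, including values set
        # externally via the placement API.
        return conf_ratio
    if existing_ratio is None:
        # First report: seed placement with the initial value.
        return initial_ratio
    # Otherwise leave whatever is in placement untouched.
    return existing_ratio
```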
As a result, several TODOs are removed from the virt drivers that
implement the update_provider_tree interface and a TODO in the resource
tracker about unset-ing allocation ratios to get back to initial values.
Change-Id: I14a310b20bd9892e7b34464e6baad49bf5928ece
blueprint: initial-allocation-ratios
The purpose of the RT._normalize_inventory_from_cn_obj method is
to set allocation_ratio and reserved amounts on standard resource
class inventory records that get sent to placement if the virt driver
did not specifically set a ratio or reserved value (which none but
the ironic driver do).
If the allocation_ratio or reserved amount is in the inventory
data dict from the virt driver, then the normalize method ignores
it and lets the virt driver take priority.
However, with change I6a706ec5966cdc85f97223617662fe15d3e6dc08,
any virt driver that implements the update_provider_tree() interface
is storing the inventory data on the ProviderTree object which gets
cached and re-used, meaning once allocation_ratio/reserved is set
from RT._normalize_inventory_from_cn_obj, it doesn't get unset and
the normalize method always assumes the driver provided a value which
should not be changed, even if the configuration value changes.
We can make the config option changes take effect by changing
the semantics between _normalize_inventory_from_cn_obj and
drivers that implement the update_provider_tree interface, like
for the libvirt driver. Effectively with this change, when a driver
implements update_provider_tree(), they now control setting the
allocation_ratio and reserved resource amounts for inventory they
report. The libvirt driver will use the same configuration option
values that _normalize_inventory_from_cn_obj used. The only difference
is in update_provider_tree we don't have the ComputeNode facade to
get the "real" default values when the allocation_ratio is 0.0, so
we handle that like "CONF.cpu_allocation_ratio or 16.0". Eventually
that will get cleaned up with blueprint initial-allocation-ratios.
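That fallback amounts to the following one-liner, 16.0 being the historical VCPU default; the helper name is illustrative:

```python
# Illustrative fallback for an unset (0.0) ratio option, as described
# above for update_provider_tree, where the ComputeNode facade is not
# available to supply the real default.
def effective_cpu_allocation_ratio(conf_value):
    return conf_value or 16.0
```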
Change-Id: I72c83a95dabd581998470edb9543079acb6536a5
Closes-Bug: #1799727
This adds a functional test which recreates the
bug where config-driven reserved and allocation ratio
overrides are not being reflected in resource provider
inventory once initially set.
The reserved and allocation_ratio values set in the
FakeDriver.update_provider_tree method, added in change
I69d760aaf931d46f011cfd229b88f400837662e8, are removed
here otherwise they hard-code the values which get sent
to placement and ResourceTracker._normalize_inventory_from_cn_obj
won't update the reserved / ratios based on config. The
fake virt driver shouldn't really need to hard-code these
values since the RT will provide those based on config.
Change-Id: Ie66d6f4c83a7d6fc64a64dbd752e427cee1356d0
Related-Bug: #1799727
The driver "capability" of requires_allocation_refresh was only needed
for old pre-Pike code in Ironic where we needed to correct migrated
allocation records before Ironic was using custom resource classes. Now
that Ironic is only using custom resource classes and we're past Pike,
rip this code out.
Change-Id: If272365e58a583e2831a15a5c2abad2d77921729
As of change I6827137f35c0cb4f9fc4c6f753d9a035326ed01b in
Ocata, the ResourceTracker manages multiple compute nodes
via its "compute_nodes" variable, but the "stats" variable
was still being shared across all nodes, which leads to
leaking stats across nodes in an ironic deployment where
a single nova-compute service host is managing multiple
ironic instances (nodes).
This change makes ResourceTracker.stats node-specific
which fixes the ironic leak but also allows us to remove
the stats deepcopy while iterating over instances which
should improve performance for single-node deployments with
potentially a large number of instances, i.e. vCenter.
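The shape of the fix can be sketched like this; Stats stands in for nova's real stats class and the surrounding plumbing is omitted:

```python
import collections


class Stats(dict):
    """Stand-in for nova's per-node stats object."""


class ResourceTrackerSketch:
    def __init__(self):
        self.compute_nodes = {}
        # Before: a single Stats object shared by every node this RT
        # manages. After: one Stats per nodename, mirroring compute_nodes.
        self.stats = collections.defaultdict(Stats)
```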
Change-Id: I0b9e5b711878fa47ba90e43c0b41437b57cf8ef6
Closes-Bug: #1784705
Closes-Bug: #1777422
With change I6827137f35c0cb4f9fc4c6f753d9a035326ed01b in
Ocata, we changed the ComputeManager to manage a single
ResourceTracker and that single ResourceTracker will
manage multiple compute nodes. The only time a single
nova-compute service hosts multiple compute nodes is for
ironic where there is a compute node per instance. The
problem is the ResourceTracker.stats variable, unlike the
ResourceTracker.compute_nodes variable, is not node-specific
so it's possible for node stats to leak across nodes based
on how the stats are used (and copied).
This change adds a functional recreate test to show the issue
before it's fixed. The fixture setup had to be tweaked a
bit to avoid modifying class variables by reference between
test cases.
Change-Id: Icc5f615baa1042347ec1699eb84ba0670445b995
Related-Bug: #1784705
This is a method of using wsgi-intercept to provide a context
manager that allows talking to placement over requests, but without
a network. It is a quick and dirty way to talk to and make changes
in placement where the only network traffic is with the
placement database.
This is expected to be useful in the creation of tools for
performing fast forward upgrades where each compute node may need to
"migrate" its resource providers, inventory and allocations in the
face of changing representations of hardware (for example
pre-existing VGPUs being represented as nested providers) but would
like to do so when all non-database services are stopped. A system
like this would allow code on the compute node to update the
placement database, using well known HTTP interactions, without the
placement service being up.
The basic idea is that we spin up the WSGI stack with no auth,
configured using whatever already loaded CONF we happen to have
available. That CONF points to the placement database and all the
usual stuff. The context manager provides a keystoneauth1 Adapter
class that operates as a client for accessing placement. The full
WSGI stack is brought up because we need various bits of middleware
to help ensure that policy calls don't explode and so JSON
validation is in place.
In this model everything else is left up to the caller: constructing
the JSON, choosing which URIs to call with what methods (see
test_direct for minimal examples that ought to give an idea of what
real callers could expect).
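The core trick (driving a WSGI app in-process, with no socket involved) can be illustrated with a stdlib-only sketch; PlacementDirect itself does this via wsgi-intercept and a keystoneauth1 Adapter:

```python
# Minimal illustration of the wsgi-intercept idea: invoke a WSGI app
# directly, so "HTTP" traffic never touches the network.
import io


def call_wsgi(app, method, path, body=b''):
    environ = {
        'REQUEST_METHOD': method,
        'PATH_INFO': path,
        'SERVER_NAME': 'direct',
        'SERVER_PORT': '80',
        'wsgi.url_scheme': 'http',
        'wsgi.input': io.BytesIO(body),
    }
    captured = {}

    def start_response(status, headers):
        captured['status'] = status

    payload = b''.join(app(environ, start_response))
    return captured['status'], payload


def app(environ, start_response):
    # Toy app standing in for the full placement WSGI stack.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [environ['PATH_INFO'].encode()]
```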
To make things friendly in the nova context and ease creation of fast
forward upgrade tools, SchedulerReportClient is tweaked to take an
optional adapter kwarg on construction. If specified, this is used
instead of creating one with get_ksa_adapter(), using settings from
[placement] conf.
Doing things in this way draws a clear line between the placement parts
and the nova parts while keeping the nova parts straightforward.
NoAuthReportClient is replaced with a base test class,
test_report_client.SchedulerReportClientTestBase. This provides an
_interceptor() context manager which is a wrapper around
PlacementDirect, but instead of producing an Adapter, it produces a
SchedulerReportClient (which has been passed the Adapter provided by
PlacementDirect). test_resource_tracker and test_report_client are
updated accordingly.
Caveats to be aware of:
* This is (intentionally) set up to circumvent authentication and
authorization. If you have access to the necessary database
connection string, then you are good to go. That's what we want,
right?
* CONF construction being left up to the caller is on purpose
because right now placement itself is not super flexible in this
area and flexibility is desired here.
This is not (by a long shot) the only way to do this. Other options
include:
* Constructing a WSGI environ that has all the necessary bits to
allow calling the methods in the handlers directly (as python
commands). This would duplicate a fair bit of the middleware and
seems error prone, because it's hard to discern what parts of the
environ need to be filled. It's also weird for data input: we need
to use a BytesIO to pass in data on PUTs and POSTs.
* Using either the WSGI environ or wsgi-intercept models but wrap it
with a pythonic library that exposes a "pretty" interface to
callers. Something like:
placement.direct.allocations.update(consumer_uuid, {data})
* Creating a python library that assembles the necessary data for
calling the methods in the resource provider objects and exposing
that to:
a) the callers who want this direct stuff
b) the existing handlers in placement (which remain responsible
for json manipulation and validation and microversion handling,
and marshal data appropriately for the python lib)
I've chosen the simplest thing as a starting point because it gives
us something to talk over and could solve the immediate problem. If
we were to eventually pursue the 4th option, I would hope that we
had some significant discussion before doing so as I think it is a)
harder than it might seem at first glance, b) likely to lead to many
asking "why bother with the http interface at all?". Both require
thought.
Partially implements blueprint reshape-provider-tree
Co-Authored-By: Eric Fried <efried@us.ibm.com>
Change-Id: I075785abcd4f4a8e180959daeadf215b9cd175c8
test_report_client provides functional tests of the report client using
a fully operating placement service (via wsgi-intercept) but it is not,
in itself, testing placement. Therefore this change moves the test
into nova/tests/functional where it can sit beside other general purpose
nova-related functional tests.
As noted in the moved file, in a future where placement is extracted,
nova could choose to import a fixture that placement (installed as a
test dependency) provides so that this test and ones like it can
continue to run as desired.
compute/test_resource_tracker.py is updated to reflect the new location
of the module as it makes use of it.
partially implements blueprint placement-extract
Change-Id: I433700e833f97c0fec946dafc2cdda9d49e1100b
The resource tracker calls the new update_provider_tree virt driver
method - using it if available, falling back to the existing
get_inventory-if-available business if not - and flushes the changes
back to placement accordingly.
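The dispatch can be sketched as follows; the function name and the provider-tree plumbing are simplified for illustration:

```python
def update_to_placement(driver, provider_tree, nodename):
    """Sketch: prefer update_provider_tree, fall back to get_inventory."""
    try:
        driver.update_provider_tree(provider_tree, nodename)
    except NotImplementedError:
        # Older drivers: fall back to the get_inventory-if-available path.
        inventory = driver.get_inventory(nodename)
        provider_tree.update_inventory(nodename, inventory)
```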
Change-Id: I5ee11274816cd9e4f0669e9e52468a29262c9020
blueprint: update-provider-tree
Until now, mock.Mock did not receive any autospec argument, yet
there are a few tests which rely on it.
The autospec argument is being added by a fixture, and some unit
tests are not using it properly.
Depends-On: I0e4a55fbf4c1d175726ca22b664e240849a99856
Partial-Bug: #1735588
Change-Id: I3636833962c905faa0f144c7fdc4833037324d31
Move the ResourceClass field to its own package, and move that package
to the top of the nova hierarchy since it is used by both nova tooling
and placement tooling but we don't want the placement version to have to
incorporate the nova code. Eventually we'd like to see an
os-resource-classes library, similar to os-traits, which will serve this
functionality. This is a step in that direction.
Changes in this patch are quite widespread, but are mostly only changes
of imports.
Change-Id: Iea182341f9419cb514a044f76864d6bec60a3683
Add the 'X-Openstack-Request-Id' header to DELETE requests.
When deleting allocations for a server (instance),
the header is added.
Subsequent patches will add the header in the other cases.
Change-Id: If38e4a6d49910f0aa5016e1bcb61aac2be416fa7
Partial-Bug: #1734625
Because we need to allow for a smooth upgrade from Ocata to
Pike, we need Pike compute hosts to be tolerant of the bad accounting
assumptions that Ocata compute hosts were making. If a user migrates an
instance from an Ocata compute host to a Pike compute host, the Ocata
compute host will continue essentially re-setting the instance
allocation to be an allocation against only the source Ocata host
(during the migration operation). We need to have the Pike destination
compute host recognize when it's in a mixed Ocata/Pike environment and
tolerate this incorrect "healing" that the Ocata source host will do.
To tolerate this, the Pike destination compute host must continue to
behave like an Ocata compute host until all compute hosts are upgraded
to Pike or beyond.
Note that this adds service version caching for the compute service.
We were already doing the lookup for the RPC pin and caching that,
so this is not much of a change. Also note that we weren't clearing
this caching in tests, so any test that ran code that cached the
service version would affect later ones. This clears it as part of the
base test setup too.
Co-Authored-By: Jay Pipes <jaypipes@gmail.com>
Change-Id: Ia93168b1560267178059284186fb2b7096c7e81f
This makes the scheduler reporting client consider resource overrides
stored in instance flavors when making allocations against placement.
This should ensure that compute nodes and scheduler calculate the same
allocations for resource overrides, and will mean that ironic computes
will start allocating custom resource amounts after existing instances
have their flavors healed.
Related to blueprint custom-resource-classes-in-flavors
Change-Id: Ib1b05e33e2a2f4ed1c3f8949df19d1c0f48ae07f
This adds project_id and user_id required request parameters as part of
a new microversion 1.8 of the placement API.
Two new fields, for project and user ID, have been added to the
AllocationList object, and the method AllocationList.create_all() has
been changed to ensure that records are written to the consumers,
projects, and users tables when project_id and user_id are not None.
After an upgrade, new allocations will write consumer records and
existing allocations will have corresponding consumer records written
when they are updated as part of the resource tracker periodic task for
updating available resources.
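At microversion 1.8 an allocation write carries the consumer's project and user; the request body looks roughly like the following (all UUID values are made up):

```python
# Illustrative PUT /allocations/{consumer_uuid} body at placement
# microversion 1.8. UUIDs are invented for the example.
payload = {
    'allocations': [
        {
            'resource_provider': {
                'uuid': '30a1c047-8813-4d71-b923-dee0a0c0d0c0',
            },
            'resources': {'VCPU': 1, 'MEMORY_MB': 512},
        },
    ],
    'project_id': 'b8a13a83-5bab-4a6f-b5be-b7b124cb37e3',
    'user_id': '7f4c0d77-cbb0-4f28-beca-13a9098b1520',
}
```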
Part of blueprint placement-project-user
Co-Authored-By: Jay Pipes <jaypipes@gmail.com>
Change-Id: I3c3b0cfdd33da87160255ead51a0d9ff73667655
We commonly have to refer to 'self.scheduler_client.reportclient' in the
code, and this long name makes for many ugly continued lines. This
shortens that reference, and makes the code cleaner and more readable.
Blueprint: placement-claims
Change-Id: Ia202e3d8c585b821eca88a01294df89b85aff2b3
We've always left users a choice whether to do exact matching or
"at least" matching for baremetal flavors, by installing the
exact match scheduler filters. The patch to add get_inventory
broke this by setting min_unit and max_unit to be equal for
baremetal resources.
Set min_unit to 1 for these resources so that deployers can continue
to use the exact match filters to decide how they want baremetal
flavors to be matched.
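After the fix, a baremetal node's inventory record looks roughly like this (the totals are illustrative); only min_unit changes, from being pinned to the total down to 1:

```python
# Illustrative get_inventory() result for one baremetal resource class;
# totals are made up. min_unit is now 1 instead of equaling max_unit.
inventory = {
    'VCPU': {
        'total': 8,
        'reserved': 0,
        'min_unit': 1,
        'max_unit': 8,
        'step_size': 1,
        'allocation_ratio': 1.0,
    },
}
```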
Change-Id: I04fdcb73674eb7193e82a61d856747d7985a2b65
Closes-Bug: #1674236
This patch implements the new get_inventory() virt driver API method for
the Ironic driver. Included is a new functional test of the interaction
between the placement API, the resource tracker, and the scheduler
reporting client with respect to the change in the Ironic resource
reporting behaviour that corresponds to this change.
Change-Id: I59be1cbedc99dcbb0ccde089a9f4737305176324
blueprint: custom-resource-classes-pike