Commit Graph

183 Commits

Author SHA1 Message Date
Stephen Finucane 463017b51b trivial: Rename 'nova.tests.unit.test_nova_manage'
Move this to the 'nova.tests.unit.cmd' module and rename to
'test_manage', so we can find it alongside all the other 'nova-manage'
tests.

Change-Id: Ice1852cf2339a826b6415fadbf6ac183d28bb641
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
2019-08-29 11:02:03 +01:00
Kevin_Zheng 97b8cb3f58 nova-manage db archive_deleted_rows is not multi-cell aware
The archive_deleted_rows command depends on the DB connection config from
the config file, and when using superconductor mode there are several
config files for different cells. In that case the command can only
archive rows in the cell0 DB, as it only reads nova.conf.

This patch adds an --all-cells parameter to the command, which reads the
info for all cells from the api_db and then archives rows across all
cells.

The --all-cells parameter is passed on to the purge command when
archive_deleted_rows is called with both --all-cells and --purge.

Co-Authored-By: melanie witt <melwittt@gmail.com>

Change-Id: Id16c3d91d9ce5db9ffd125b59fffbfedf4a6843d
Closes-Bug: #1719487
2019-08-27 06:01:58 +00:00
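
As a rough usage sketch (the flags are those named in the commit message;
the batch size value is illustrative), an operator running a
superconductor layout could archive across every cell in one invocation:

  # archive up to 1000 rows per table in every cell, and purge the
  # shadow tables afterwards (--purge is passed on to the purge command)
  nova-manage db archive_deleted_rows --max_rows 1000 --all-cells --purge
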
Matt Riedemann df2845308d Change nova-manage unexpected error return code to 255
If any nova-manage command fails in an unexpected way and the error
bubbles back up to main(), the return code will be 1. Some commands,
like archive_deleted_rows, map_instances and heal_allocations, return 1
for flow control with automation systems. As a result, those tools
could be calling the command repeatedly, getting rc=1 and thinking
there is more work to do, when really something is failing.

This change makes the unexpected error code 255, updates the
relevant nova-manage command docs that already mention return
codes in some kind of list/table format, and adds an upgrade
release note just to cover our bases in case someone was for
some weird reason relying on 1 specifically for failures rather
than anything greater than 0.

Change-Id: I2937c9ef00f1d1699427f9904cb86fe2f03d9205
Closes-Bug: #1840978
2019-08-21 17:03:11 -04:00
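
A hedged sketch of how automation might now distinguish the flow-control
code 1 from the new unexpected-error code 255 (the loop shape is
illustrative, not from the commit):

  # keep archiving while rc=1 ("more work to do"); stop on 0 (done)
  # or on 255 (unexpected failure)
  rc=1
  while [ "$rc" -eq 1 ]; do
      nova-manage db archive_deleted_rows --max_rows 1000
      rc=$?
  done
  if [ "$rc" -eq 255 ]; then
      echo "archive failed unexpectedly" >&2
  fi
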
Matt Riedemann 2c5134d5f3 Don't mention CONF.api_database.connection in user-facing messages/docs
CONF.api_database.connection is a variable in code, not something an
operator needs to know about, so this changes that mention in the docs
and in the error message for the nova-manage db archive_deleted_rows
command.

Change-Id: If27814e0006a6c33ae6270dff626586c41eafcad
Closes-Bug: #1839391
2019-08-07 17:29:51 -04:00
Zuul 8fc20874b8 Merge "nova-manage: heal port allocations" 2019-07-22 21:59:30 +00:00
Zuul 063ef486e9 Merge "Exit 1 when db sync runs before api_db sync" 2019-07-20 03:26:41 +00:00
Balazs Gibizer 54dea2531c nova-manage: heal port allocations
Before I97f06d0ec34cbd75c182caaa686b8de5c777a576 it was possible to
create servers with neutron ports which had resource_request (e.g. a
port with QoS minimum bandwidth policy rule) without allocating the
requested resources in placement. So there could be servers for which
the allocation needs to be healed in placement.

This patch extends the nova-manage heal_allocations CLI to create the
missing port allocations in placement and update the port in neutron
with the resource provider uuid that is used for the allocation.

There are known limitations of this patch. It does not try to reimplement
Placement's allocation candidate functionality, so it cannot handle the
situation when there is more than one RP in the compute tree which
provides the required traits for a port. Deciding which RP to use in that
situation would require 1) the in_tree allocation candidate support from
placement, which is not available yet, and 2) information about which PCI
PF an SRIOV port is allocated from and which RP represents that PCI
device in placement; that information is only available on the compute
hosts.

For the unsupported cases the command fails gracefully. As soon as
migration support for such servers is implemented in the blueprint
support-move-ops-with-qos-ports, the admin can heal the allocations of
such servers by migrating them.

During healing both placement and neutron need to be updated. If any of
those updates fails, the code tries to roll back the previous updates for
the instance to make sure that the healing can be re-run later without
issue. However, if the rollback fails, the script terminates with an
error message pointing to documentation that describes how to recover
from such a partially healed situation manually.

Closes-Bug: #1819923
Change-Id: I4b2b1688822eb2f0174df0c8c6c16d554781af85
2019-07-15 17:22:40 +02:00
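
A minimal sketch of re-running the extended command once this lands
(only the command name is taken from the commit; any additional options
are documented with nova-manage and are not shown here):

  # heal missing instance and port allocations in placement;
  # unsupported port layouts (e.g. ambiguous RPs) fail gracefully
  nova-manage placement heal_allocations
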
Balazs Gibizer e6f0119262 Remove assumption of http error if consumer not exists
The heal_allocations_for_instance code assumes that a placement GET
/allocations/<instance_uuid> query returns an error code if the consumer
does not exist in placement. However, placement returns an empty
allocation instead.

This patch removes that assumption and treats a negative response from
placement as a fatal error.

Change-Id: I7e2df32029e4cff57a0dddcd905b6c1aac207546
Closes-Bug: #1835419
2019-07-08 17:30:12 +02:00
Mark Goddard e99937c9a9 Exit 1 when db sync runs before api_db sync
Since cells v2 was introduced, nova operators must run two commands to
migrate the database schemas of nova's databases - nova-manage api_db
sync and nova-manage db sync. It is necessary to run them in this order,
since the db sync may depend on schema changes made to the api database
in the api_db sync. Executing the db sync first may fail, for example
with the following seen in a Queens to Rocky upgrade:

nova-manage db sync
ERROR: Could not access cell0.
Has the nova_api database been created?
Has the nova_cell0 database been created?
Has "nova-manage api_db sync" been run?
Has "nova-manage cell_v2 map_cell0" been run?
Is [api_database]/connection set in nova.conf?
Is the cell0 database connection URL correct?
Error: (pymysql.err.InternalError) (1054, u"Unknown column
        'cell_mappings.disabled' in 'field list'") [SQL: u'SELECT
cell_mappings.created_at AS cell_mappings_created_at,
cell_mappings.updated_at AS cell_mappings_updated_at,
cell_mappings.id AS cell_mappings_id, cell_mappings.uuid AS
cell_mappings_uuid, cell_mappings.name AS cell_mappings_name,
cell_mappings.transport_url AS cell_mappings_transport_url,
cell_mappings.database_connection AS
cell_mappings_database_connection, cell_mappings.disabled AS
cell_mappings_disabled \nFROM cell_mappings \nWHERE
cell_mappings.uuid = %(uuid_1)s \n LIMIT %(param_1)s'] [parameters:
{u'uuid_1': '00000000-0000-0000-0000-000000000000', u'param_1': 1}]
(Background on this error at: http://sqlalche.me/e/2j85)

Despite this error, the command actually exits zero, so deployment tools
are likely to continue with the upgrade, leading to issues down the
line.

This change modifies the command to exit 1 if the cell0 sync fails.

This change also clarifies this ordering in the upgrade and nova-manage
documentation, and adds information on exit codes for the command.

Change-Id: Iff2a23e09f2c5330b8fc0e9456860b65bd6ac149
Closes-Bug: #1832860
2019-07-04 09:16:41 +01:00
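
A minimal sketch of the required ordering, with the now non-zero exit
code checked so deployment tooling stops on failure (the error handling
shown is illustrative):

  # api_db sync must run before db sync
  nova-manage api_db sync || exit 1
  nova-manage db sync     || exit 1   # now exits 1 if the cell0 sync fails
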
Balazs Gibizer e2866609bb pull out put_allocation call from _heal_*
Both allocation healing steps call the placement API. This patch pulls
the placement-updating code out to a single place. To do that it changes
the healing steps to only generate / update the allocation individually;
at the end of the healing there is then a single placement update with
this allocation.

This will help us include the port-related allocation in the instance
allocation by modifying a single place in the code.

Related-Bug: #1819923

Change-Id: I0e9f9a488141da599c10af8cabb4f6a5d111104f
2019-06-27 10:37:25 +02:00
Balazs Gibizer 307999c581 Prepare _heal_allocations_for_instance for nested allocations
When no allocations exist for an instance the current heal code uses a
report client call that can only handle allocations from a single RP.
This call is now replaced with a more generic one so in a later patch
port allocations can be added to this code path too.

Related-Bug: #1819923
Change-Id: Ide343c1c922dac576b1944827dc24caefab59b74
2019-06-27 10:33:14 +02:00
melanie witt 5c544c7e2a Warn for duplicate host mappings during discover_hosts
When the 'nova-manage cell_v2 discover_hosts' command is run in parallel
during a deployment, it results in simultaneous attempts to map the same
compute or service hosts, resulting in tracebacks:

  "DBDuplicateEntry: (pymysql.err.IntegrityError) (1062, u\"Duplicate
  entry 'compute-0.localdomain' for key 'uniq_host_mappings0host'\")
  [SQL: u'INSERT INTO host_mappings (created_at, updated_at, cell_id,
  host) VALUES (%(created_at)s, %(updated_at)s, %(cell_id)s,
  %(host)s)'] [parameters: {'host': u'compute-0.localdomain',
  'cell_id': 5, 'created_at': datetime.datetime(2019, 4, 10, 15, 20,
  50, 527925), 'updated_at': None}]

This adds more information to the command help and adds a warning
message when duplicate host mappings are detected with guidance about
how to run the command. The command will return 2 if a duplicate host
mapping is encountered and the documentation is updated to explain
this.

This also adds a warning to the scheduler periodic task to recommend
enabling the periodic on only one scheduler to prevent collisions.

We choose to warn and stop instead of ignoring DBDuplicateEntry because
there could potentially be a large number of parallel tasks competing
to insert duplicate records where only one can succeed. If we ignore
and continue to the next record, the large number of tasks will
repeatedly collide in a tight loop until all get through the entire
list of compute hosts that are being mapped. So we instead stop the
colliding task and emit a message.

Closes-Bug: #1824445

Change-Id: Ia7718ce099294e94309103feb9cc2397ff8f5188
2019-06-13 17:18:16 +00:00
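
A hedged sketch of the recommended usage, i.e. running the command from
a single place rather than in parallel (the handling of return code 2 is
illustrative, based on the exit code described above):

  # run discover_hosts serially from one node; rc=2 signals a
  # duplicate host mapping collision and the command can be re-run
  nova-manage cell_v2 discover_hosts
  if [ $? -eq 2 ]; then
      echo "duplicate host mapping detected, re-run discover_hosts" >&2
  fi
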
Jake Yip e822360b66 Add --before to nova-manage db archive_deleted_rows
Add a parameter to limit the archival of deleted rows by date. That is,
only rows related to instances deleted before the provided date will be
archived.

This option works together with --max_rows; if both are specified, both
take effect.

Closes-Bug: #1751192
Change-Id: I408c22d8eada0518ec5d685213f250e8e3dae76e
Implements: blueprint nova-archive-before
2019-05-23 11:07:08 +10:00
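
A rough usage sketch, assuming --before accepts a date string (the exact
accepted formats are documented with the command; the values here are
illustrative):

  # only archive rows for instances deleted before 2018-12-31,
  # at most 1000 rows per table in this run
  nova-manage db archive_deleted_rows --before 2018-12-31 --max_rows 1000
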
Zuul 1388855be2 Merge "Delete the placement code" 2019-05-04 09:16:41 +00:00
Zuul 4578aa967b Merge "Use aggregate_add_host in nova-manage" 2019-04-29 08:38:04 +00:00
Chris Dent 70a2879b2c Delete the placement code
This finalizes the removal of the placement code from nova.
This change primarily removes code and makes fixes to cmd,
test and migration tooling to adapt to the removal.

Placement tests and documentation were already removed in
earlier patches.

A database migration that calls
consumer_obj.create_incomplete_consumers in nova-manage has been
removed.

A functional test which confirms the default incomplete
consumer user and project id has been changed so that its use of
conf.placement.incomplete_* (now removed) is replaced with a
constant. The placement server, running in the functional
test, provides its own config.

placement-related configuration is updated to only register those
opts which are relevant on the nova side. This mostly means
ksa-related opts. placement-database configuration is removed
from nova/conf/database.

tox.ini is updated to remove the group_regex required by the
placement gabbi tests. This should probably have gone when the
placement functional tests went, but was overlooked.

A release note is added which describes that this is cleanup (the main
action already happened) and points people to the nova-to-placement
upgrade instructions in case they haven't done it yet.

Change-Id: I4181f39dea7eb10b84e6f5057938767b3e422aff
2019-04-28 20:06:15 +00:00
Stephen Finucane 7954b2714e Remove 'nova-manage cell' commands
These are no longer necessary with the removal of cells v1. A check for
cells v1 in 'nova-manage cell_v2 simple_cell_setup' is also removed,
meaning this can no longer return the '2' exit code.

Part of blueprint remove-cells-v1

Change-Id: I8c2bfb31224300bc639d5089c4dfb62143d04b7f
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
2019-04-16 18:26:17 +01:00
Chris Dent 8ab3300d5d Don't report 'exiting' when mapping cells
When running 'nova-manage simple_cell_setup...', if there are no hosts
to map but there are remaining instances to map, an '..., exiting'
message is produced. This is misleading because "exiting" implies a
return of control to the user. That doesn't happen if there are many
instances left to inspect or map.

This change gets around that by getting rid of the exiting message
in the case where instance mapping can still happen.

Change-Id: I62b20a3676429b5cc756884275138566785b347e
Closes-Bug: #1821737
2019-04-05 23:22:59 +00:00
Eric Fried c43f7e664d Use aggregate_add_host in nova-manage
When nova-manage placement sync_aggregates was added [1], it duplicated
some report client logic (aggregate_add_host) to do provider aggregate
retrieval and update so as not to duplicate a call to retrieve the
host's resource provider record. It also left a TODO to handle
generation conflicts.

Here we change the signature of aggregate_add_host to accept *either*
the host name or RP UUID, and refactor the nova-manage placement
sync_aggregates code to use it.

The behavior in terms of exit codes and messaging should be largely
unchanged, though there may be some subtle differences in corner cases.

[1] Iac67b6bf7e46fbac02b9d3cb59efc3c59b9e56c8

Change-Id: Iaa4ddf786ce7d31d2cee660d5196e5e530ec4bd3
2019-03-26 17:38:48 -05:00
melanie witt a7de4917a0 Populate InstanceMapping.user_id during migrations and schedules
The InstanceMapping user_id field is a new, non-nullable field
representing the user_id for the instance.

When new instance create requests come in, we create the instance
mapping. We will set user_id here before creating the record.

The virtual interface online data migration and the map_instances
routine create InstanceMapping records, and since the user_id field did
not previously exist, they were not setting it. We will populate user_id
in these cases.

Finally, whenever an API does a compute_api.get(), we can
opportunistically set and save user_id on the instance mapping if it is
not set.

Part of blueprint count-quota-usage-from-placement

Change-Id: Ic4bb7b49b90a3d6d7ce6c6c62d87836f96309f06
2019-03-08 19:01:25 -05:00
Dan Smith edd1cd9ee4 Fix using template cell urls with nova-manage
When nova-manage went to validate the transport-url given in config or on the
command line, it was not doing the translation before passing the url to the
oslo.messaging parse routine to check it. This exposes the format functions
from the CellMapping object, and makes our _validate_transport_url() format
the url before passing it to parse.

This also adds a test that makes sure the template makes it into the database
(as a template) and that it gets loaded out in translated form with an
object load.

Change-Id: I40a435b8e97c8552c2f5f0ca3a24de2edd9f81bd
Closes-Bug: #1812196
2019-01-17 14:18:14 -08:00
imacdonn 3eea37b85b Handle online_data_migrations exceptions
When online_data_migrations raise exceptions, nova/cinder-manage catches
the exceptions, prints fairly useless "something didn't work" messages,
and moves on. Two issues:

1) The user(/admin) has no way to see what actually failed (exception
   detail is not logged)

2) The command returns exit status 0, as if all possible migrations have
   been completed successfully - this can cause failures to get missed,
   especially if automated

This change adds logging of the exceptions, and introduces a new exit
status of 2, which indicates that no updates took effect in the last
batch attempt, but some are (still) failing, which requires intervention.

Change-Id: Ib684091af0b19e62396f6becc78c656c49a60504
Closes-Bug: #1796192
2018-10-16 15:49:51 +00:00
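
A hedged sketch of batched automation that reacts to the new exit
status (the --max-count flag and the loop shape are assumptions drawn
from the usual batched invocation, not from this commit):

  # run migrations in batches; rc=1 means more to do, rc=0 means done,
  # rc=2 now means remaining migrations are failing and need attention
  rc=1
  while [ "$rc" -eq 1 ]; do
      nova-manage db online_data_migrations --max-count 50
      rc=$?
  done
  [ "$rc" -eq 2 ] && echo "online data migrations are failing" >&2
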
Zuul 114b9f4db5 Merge "nova-manage - fix online_data_migrations counts" 2018-09-28 21:14:38 +00:00
imacdonn c4c6dc736e nova-manage - fix online_data_migrations counts
When running online_data_migrations in batches, totals were not being
accumulated; rather, the counts from each batch would clobber those from
the previous one, and the last batch would run no migrations, so the
totals were reported as zero.

Change-Id: Ib616f2efb69baa16e18601d27b747220bbefeb16
Closes-Bug: #1794364
2018-09-26 18:04:05 +00:00
Zuul 41ac87f812 Merge "Consumer gen support for put allocations" 2018-09-26 05:25:31 +00:00
Balazs Gibizer dfa2e6f221 Consumer gen support for put allocations
The placement API version 1.28 introduced consumer generations as a way
to make updating allocations safe even if it is done from multiple
places.

This patch changes the scheduler report client put_allocations function
to raise AllocationUpdateFailed in case of a generation conflict. The
only direct user of this call is the nova-manage heal_allocations CLI,
which will simply fail to heal the allocation for this instance.

Blueprint: use-nested-allocation-candidates
Change-Id: Iba230201803ef3d33bccaaf83eb10453eea43f20
2018-09-25 13:02:02 +02:00
Zuul 2f635fa914 Merge "Validate transport_url in nova-manage cell_v2 commands" 2018-09-25 10:10:24 +00:00
Zuul 755d82a7eb Merge "Fail heal_allocations if placement is borked" 2018-09-18 04:39:26 +00:00
Zuul 957f4818b0 Merge "Use uuidsentinel from oslo.utils" 2018-09-08 07:19:34 +00:00
Eric Fried 8e1ca5bf34 Use uuidsentinel from oslo.utils
oslo.utils release 3.37.0 [1] introduced uuidsentinel [2]. This change
rips out nova's uuidsentinel and replaces it with the one from
oslo.utils.

[1] https://review.openstack.org/#/c/599754/
[2] https://review.openstack.org/#/c/594179/

Change-Id: I7f5f08691ca3f73073c66c29dddb996fb2c2b266
Depends-On: https://review.openstack.org/600041
2018-09-05 09:08:54 -05:00
Matt Riedemann a4f1274f40 Fix TypeError in nova-manage cell_v2 list_cells
Cell mappings don't require a name, so when listing
cells, if any mappings don't have a name, the sorted
function will fail with a TypeError since you can't compare
None to a string.

This fixes the issue by using the empty string if the cell
mapping name is None.

Change-Id: I4fc9d8d1a96f1ec722c2c92dead3f5c4c94d4382
Closes-Bug: #1790695
2018-09-04 18:28:21 -04:00
Zuul 4031a88052 Merge "Delete instance_group_member records from API DB during archive" 2018-08-31 10:13:57 +00:00
Matt Riedemann 5162a9a1de Delete instance_group_member records from API DB during archive
Like we do for instance mappings and request specs in the API DB
when archiving deleted instances, this adds code to delete
instance group member records from the API DB when archiving deleted
instances. This should improve performance in the server groups
API because it will have a smaller set of group members to determine
if they are actually related to deleted instances, see change
Idd2e35bc95ed98ebc0340ff62e109e23c8adcb21 for context.

Change-Id: I960f8fd44d98427a72cb2bb0b238fdf2f734390f
Closes-Bug: #1751186
2018-08-29 16:20:39 -04:00
Eric Fried b7aa6a3b93 Fail heal_allocations if placement is borked
Following up on [1] to resolve the TODO, make nova-manage
heal_allocations fail fast if we can't talk to placement. (Note that the
existing behavior is preserved if we can talk to placement, but some
other error occurs retrieving allocations.)

[1] https://review.openstack.org/#/c/584599/21/nova/cmd/manage.py@1814

Change-Id: I1b79cc2c556fb06b8ffb8b9d6cabf980fa08a3aa
2018-08-28 15:53:39 -05:00
Eric Fried 176d1d90fd Report client: Real get_allocs_for_consumer
In preparation for reshaper work, implement a superior method to
retrieve allocations for a consumer. The new get_allocs_for_consumer:
- Uses the microversion that returns consumer generations (1.28).
- Doesn't hide error conditions:
  - If the request returns non-200, instead of returning {}, it raises a
    new ConsumerAllocationRetrievalFailed exception.
  - If we fail to communicate with the placement API, instead of
    returning None, it raises (a subclass of) ksa ClientException.
- Returns the entire payload rather than just the 'allocations' dict.

The existing get_allocations_for_consumer is refactored to behave
compatibly (except it logs warnings for the previously-silently-hidden
error conditions). In a subsequent patch, we should rework all callers
of this method to use the new one, and get rid of the old one.

Change-Id: I0e9a804ae7717252175f7fe409223f5eb8f50013
blueprint: reshape-provider-tree
2018-08-24 15:31:04 -05:00
Zuul 99d2a34d1f Merge "Add nova-manage placement sync_aggregates" 2018-07-25 18:56:26 +00:00
Matt Riedemann aa6360d683 Add nova-manage placement sync_aggregates
This adds the "nova-manage placement sync_aggregates"
command which will compare nova host aggregates to
placement resource provider aggregates and add any
missing resource provider aggregates based on the nova
host aggregates.

At this time, it's only additive, in that the command
does not remove resource provider aggregates whose
matching nodes are not found in nova host aggregates.
That likely needs to happen in a change that provides
an opt-in option for that behavior since it could be
destructive for externally-managed provider aggregates
for things like ironic nodes or shared storage pools.

Part of blueprint placement-mirror-host-aggregates

Change-Id: Iac67b6bf7e46fbac02b9d3cb59efc3c59b9e56c8
2018-07-24 11:19:23 -04:00
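
A minimal usage sketch (no extra flags shown, since the commit describes
the command as purely additive):

  # mirror nova host aggregate members into placement resource
  # provider aggregates; nothing is removed on the placement side
  nova-manage placement sync_aggregates
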
Matt Riedemann 660e328a25 Use consumer generation in _heal_allocations_for_instance
If we're updating existing allocations for an instance due
to the project_id/user_id not matching the instance, we should
use the consumer_generation parameter, new in placement 1.28,
to ensure we don't overwrite the allocations while another
process is updating them.

As a result, the include_project_user kwarg to method
get_allocations_for_consumer is removed since nothing else
is using it now, and the minimum required version of placement
checked by nova-status is updated to 1.28.

Change-Id: I4d5f26061594fa9863c1110e6152069e44168cc3
2018-07-23 14:09:55 -04:00
Zuul 094370cfef Merge "fix cellv2 delete_host" 2018-07-17 00:46:46 +00:00
Zuul 21a368e1a6 Merge "Heal allocations with incomplete consumer information" 2018-07-13 19:26:59 +00:00
Matt Riedemann 6b6d81cf2b Heal allocations with incomplete consumer information
Allocations created before microversion 1.8 didn't have project_id
/ user_id consumer information. In Rocky those will be migrated
to have consumer records, but using configurable sentinel values.

As part of heal_allocations, we can detect this and heal the
allocations using the instance.project_id/user_id information.

This is something we'd need if we ever use Placement allocation
information for counting quotas.

Note that we should be using Placement API version 1.28 with
consumer_generation when updating the allocations, but since
people might backport this change the usage of consumer
generations is left for a follow up patch.

Related to blueprint add-consumer-generation

Change-Id: Idba40838b7b1d5389ab308f2ea40e28911aecffa
2018-07-13 11:29:54 -04:00
Chen 0ef4ed96d1 fix cellv2 delete_host
When trying to delete a host that can be found in host_mappings but not
in compute_nodes, the current cell_v2 delete_host will throw an
exception but does not really handle it.

This patch handles this exception and allows the delete operation to
continue, since the situation shows the host has gone anyway.

Change-Id: I99bd79fb45777edc0e33d846ba478b0a94a1191e
Closes-Bug: #1781391
2018-07-13 23:16:40 +08:00
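
A rough invocation sketch; the --cell_uuid and --host option names are
assumptions based on the usual cell_v2 command conventions, and the
placeholder values must be replaced:

  # remove the host mapping even if the host no longer has a
  # compute_nodes record
  nova-manage cell_v2 delete_host --cell_uuid <cell-uuid> --host <hostname>
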
Chris Dent def4b17934 Use nova.db.api directly
nova/db/__init__.py was importing * from nova.db.api. This meant that
any time any code anywhere within the nova.db package was imported
then nova.db.api was too, leading to a cascade of imports that may
not have been desired. Also, in general, code in __init__.py is a pain.

Therefore, this change adjusts code so that either:

* nova.db.api is used directly
* nova.db.api is imported as 'db'

In either case, the functionality remains the same.

The primary goal of this change was to make it possible to import the
model files without having to import the db api. Moving the model files
to a different place in the directory hierarchy was considered, but
given that "code in __init__.py is a pain" this mode was chosen.

This looks like a very large change, but it is essentially adjusting
package names, many in mocks.

Change-Id: Ic1fd7c87ceda05eeb96735da2a415ef37060bb1a
2018-07-10 14:56:27 +00:00
Steve Kowalik 04e4c68efc Switch to oslo_messaging.ConfFixture.transport_url
oslo_messaging's rpc_backend setting, which is set by
ConfFixture.transport_driver, has been deprecated since Newton. To allow
oslo_messaging to remove it, switch to setting transport_url instead.

Change-Id: Ideded5eff79425a813062cfb341ae8c005030544
Partial-Bug: #1712399
2018-06-25 13:58:43 +10:00
Matt Riedemann bc6ca87a6a Validate transport_url in nova-manage cell_v2 commands
In the three commands that take a --transport-url option, or read it
from config, this validates the transport URL by calling the parsing
code in oslo.messaging and fails if the URL does not parse correctly.

Change-Id: If60cdf697cab2f035cd22830303f5ecaba0f3969
Closes-Bug: #1770341
2018-06-19 17:29:30 -05:00
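
As an illustrative sketch (the create_cell invocation, URL and option
values are examples, not from the commit), a malformed transport URL now
fails at parse time instead of being stored:

  # a URL that oslo.messaging cannot parse is rejected up front
  nova-manage cell_v2 create_cell --name cell1 \
      --transport-url rabbit://user:pass@rabbit-host:5672/ \
      --database_connection mysql+pymysql://nova:pass@db-host/nova_cell1
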
Matt Riedemann 95106d2fa1 Add nova-manage placement heal_allocations CLI
This adds a new CLI which will iterate all non-cell0
cells looking for instances that (1) have a host,
(2) aren't undergoing a task state transition and
(3) don't have allocations in placement and try
to allocate resources, based on the instance's embedded
flavor, against the compute node resource provider
on which the instance is currently running.

This is meant as a way to help migrate CachingScheduler
users off the CachingScheduler by first shoring up
instance allocations in placement for any instances
created after Pike, when the nova-compute resource
tracker code stopped creating allocations in placement
since the FilterScheduler does it at the time of
scheduling (but the CachingScheduler doesn't).

This will be useful beyond just getting deployments
off the CachingScheduler, however, since operators
will be able to use it to fix incorrect allocations
resulting from failed operations.

There are several TODOs and NOTEs inline about things
we could build on top of this or improve, but for now
this is the basic idea.

Change-Id: Iab67fd56ab4845f8ee19ca36e7353730638efb21
2018-06-01 18:45:10 -04:00
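
A minimal sketch of the intended workflow for CachingScheduler
deployments (the ordering is inferred from the description above):

  # backfill placement allocations for running instances, then switch
  # the deployment over to the FilterScheduler
  nova-manage placement heal_allocations
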
Balazs Gibizer 0c20743cb3 Suppress UUID warning in map_instance unit tests
The nova-manage cell_v2 map_instances call uses a non-canonical UUID
serialization to store the instance marker in the DB. The
oslo.versionedobjects UUIDField emits a warning. A later patch would
like to turn this warning into an error during the unit and functional
tests to avoid adding new violations.

As the underlying DB schema is not violated this patch proposes to
suppress the warning in the affected unit tests.

Change-Id: I5b11b9df26e4e38516b5674e0e6c1fc79527129b
2018-05-14 13:18:29 +02:00
Zuul 7891acf809 Merge "Marker reset option for nova-manage map_instances" 2018-04-11 17:57:04 +00:00
Surya Seetharaman cd01cbe65e Add --enable and --disable options to nova-manage update_cell
Through these new options, users can enable or disable a cell
through the CLI.

Related to blueprint cell-disable

Change-Id: I761f2e2b1f1cc2c605f7da504a8c8647d6d6a45e
2018-04-04 20:23:51 +00:00
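
A rough sketch of the new options; the --cell_uuid option name is an
assumption based on the other cell_v2 commands, and the placeholder
value must be replaced:

  # stop scheduling new instances to a cell, then re-enable it later
  nova-manage cell_v2 update_cell --cell_uuid <cell-uuid> --disable
  nova-manage cell_v2 update_cell --cell_uuid <cell-uuid> --enable
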
Zuul 9c7ebf90fa Merge "Add disabled option to create_cell command" 2018-03-26 13:26:35 +00:00