Commit Graph

873 Commits

Author SHA1 Message Date
Konrad Gube 2a1a0bc3e2 Add the os-extend_volume_completion volume action
Split off the finalization part of the volume manager's
extend_volume method and make it externally callable as the new
os-extend_volume_completion admin volume action.

This is the first part of a feature that will allow volume drivers
to rely on feedback from Nova when extending attached volumes,
allowing e.g. NFS-based drivers to support online extend.

See the linked blueprint for details.
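
As a rough illustration only, a client could eventually drive the new
action through the generic volume-actions endpoint; the request body
shown below is an assumption, not the final API contract.

    # Hypothetical sketch: invoking the new admin action via the volume
    # actions endpoint. The payload keys are assumptions.
    import requests

    def complete_volume_extend(endpoint, token, volume_id):
        url = f"{endpoint}/volumes/{volume_id}/action"
        body = {"os-extend_volume_completion": {"error": False}}  # assumed
        resp = requests.post(url, json=body,
                             headers={"X-Auth-Token": token}, timeout=30)
        resp.raise_for_status()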

Implements: bp extend-volume-completion-action
Change-Id: I4aaa5da1ad67a948102c498483de318bd245d86b
2024-02-16 18:14:33 +01:00
Zuul 48cc790ed2 Merge "Clean old temporary tracking" 2024-02-06 01:10:41 +00:00
Zuul e38f3b799b Merge "Skip sparse copy during volume reimage" 2024-01-15 19:45:27 +00:00
Gorka Eguileor 402787ffcc Clean old temporary tracking
In Xena we added the use_quota DB field to volumes and snapshots to
unify the tracking of temporary resources, but we still had to keep
compatibility code for the old mechanisms (due to rolling
upgrades).

This patch removes compatibility code with the old mechanism and adds
additional cleanup code to remove the tracking in the volume metadata.

Change-Id: I3f9ed65b0fe58f7b7a0867c0e5ebc0ac3c703b05
2024-01-12 14:25:44 +01:00
whoami-rajat 1a8ea0eac4 Skip sparse copy during volume reimage
When rebuilding a volume-backed instance, while copying the new
image to the existing volume, we preserve sparseness. This can be
problematic since we don't write the zero blocks of the new image,
so data from the old image can persist, leading to a data leak
scenario.

To prevent this, we use the `-S 0`[1][2] option with the
`qemu-img convert` command to write all the zero bytes into the
volume.

In the testing done, this doesn't seem to be a problem with known
'raw' images, but it is good to handle the case anyway.
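
As a rough sketch of the conversion this implies (the paths are
placeholders and the exact flags used by Cinder's image_utils are not
shown here):

    import subprocess

    def convert_image_to_volume(image_path, volume_device):
        # -S 0 disables sparse detection so zero blocks are written out
        # too, overwriting any stale data already on the target device.
        subprocess.run(
            ["qemu-img", "convert", "-O", "raw", "-S", "0",
             image_path, volume_device],
            check=True,
        )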

Following is the testing performed with 3 images:

1. CIRROS QCOW2 to RAW
======================

Volume size: 1 GiB
Image size (raw): 112 MiB

CREATE VOLUME FROM IMAGE (without -S 0)

LVS (10.94% allocated)
  volume-91ea43ef-684c-402f-896e-63e45e5f4fff stack-volumes-lvmdriver-1 Vwi-a-tz-- 1.00g stack-volumes-lvmdriver-1-pool 10.94

REBUILD (with -S 0)

LVS (10.94% allocated)
  volume-91ea43ef-684c-402f-896e-63e45e5f4fff stack-volumes-lvmdriver-1 Vwi-aotz-- 1.00g stack-volumes-lvmdriver-1-pool 10.94

Conclusion:
The same amount of space is consumed on disk with and without preserving sparseness.

2. DEBIAN QCOW2 to RAW
======================

Volume size: 3 GiB
Image size (raw): 2 GiB

CREATE VOLUME FROM IMAGE (without -S 0)

LVS (66.67% allocated)
  volume-edc42b6a-df5d-420e-85d3-b3e52bcb735e stack-volumes-lvmdriver-1 Vwi-a-tz-- 3.00g stack-volumes-lvmdriver-1-pool 66.67

REBUILD (with -S 0)

LVS (66.67% allocated)
  volume-edc42b6a-df5d-420e-85d3-b3e52bcb735e stack-volumes-lvmdriver-1 Vwi-aotz-- 3.00g stack-volumes-lvmdriver-1-pool 66.67

Conclusion:
The same amount of space is consumed on disk with and without preserving sparseness.

3. FEDORA QCOW2 TO RAW
======================

CREATE VOLUME FROM IMAGE (without -S 0)

Volume size: 6 GiB
Image size (raw): 5 GiB

LVS (83.33% allocated)
  volume-efa1a227-a30d-4385-867a-db22a3e80ad7 stack-volumes-lvmdriver-1 Vwi-a-tz-- 6.00g stack-volumes-lvmdriver-1-pool 83.33

REBUILD (with -S 0)

LVS (83.33% allocated)
  volume-efa1a227-a30d-4385-867a-db22a3e80ad7 stack-volumes-lvmdriver-1 Vwi-aotz-- 6.00g stack-volumes-lvmdriver-1-pool 83.33

Conclusion:
The same amount of space is consumed on disk with and without preserving sparseness.

Further testing was done to check whether the `-S 0` option actually
works in an OpenStack setup.
Note that we are converting a qcow2 image to qcow2, which won't
happen in a real-world deployment and is done only for test purposes.

DEBIAN QCOW2 TO QCOW2
=====================

CREATE VOLUME FROM IMAGE (without -S 0)

LVS (52.61% allocated)
  volume-de581f84-e722-4f4a-94fb-10f767069f50 stack-volumes-lvmdriver-1 Vwi-a-tz-- 3.00g stack-volumes-lvmdriver-1-pool 52.61

REBUILD (with -S 0)

LVS (66.68% allocated)
  volume-de581f84-e722-4f4a-94fb-10f767069f50 stack-volumes-lvmdriver-1 Vwi-aotz-- 3.00g stack-volumes-lvmdriver-1-pool 66.68

Conclusion:
We can see that the space allocation increased; hence, sparseness is not preserved when using the `-S 0` option.

[1] https://qemu-project.gitlab.io/qemu/tools/qemu-img.html#cmdoption-qemu-img-common-opts-S
[2] abf635ddfe/qemu-img.c (L182-L186)

Closes-Bug: #2045431

Change-Id: I5be7eaba68a5b8e1c43f0d95486b5c79c14e1b95
2023-12-26 13:08:58 +05:30
Eric Harney 349b0a1ccd mypy: Cleanup "noqa: H301" comments
We decided that H301 makes no sense for the "typing" module, so we
just set that in tox.ini instead of adding a noqa comment every time
it is used.

Change-Id: Id983fb0a9feef2311bf4b2e6fd70386ab60e974a
2023-12-14 16:29:27 +00:00
Zuul dbb82ce48f Merge "Revert "Driver assisted migration on retype when it's safe"" 2023-11-13 17:47:06 +00:00
Eric Harney 13196c700c Revert "Driver assisted migration on retype when it's safe"
This reverts commit 5edc77a18c.

Reason for revert: Bug: #2019190
The reverted change revealed a bug in Cinder's optimized migration
code that can lead to data loss with RBD and potentially other
drivers by using the optimized migration path in more cases. Once
the issues there are fixed, this should be reintroduced.

Change-Id: I893105cbd270300be9ec48b3127e66022f739314
2023-10-25 16:12:01 +00:00
Zuul 3cdf861bef Merge "Handle external events in extend volume" 2023-02-08 18:39:50 +00:00
Peter Penchev 08e80390f3 Send the correct location URI to the Glance v2 API
When uploading a volume to an image, send the new-style
cinder://<store-id>/<volume-id> URL to the Glance API if
image_service:store_id is present in the volume type extra specs.
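
A minimal sketch of the URI selection described above (the helper
name is hypothetical):

    def build_image_location(volume_id, extra_specs):
        # New-style multi-store location when a store is pinned in the
        # volume type extra specs, legacy single-store URL otherwise.
        store_id = extra_specs.get('image_service:store_id')
        if store_id:
            return f'cinder://{store_id}/{volume_id}'
        return f'cinder://{volume_id}'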

Closes-Bug: #1978020
Co-Authored-By: Rajat Dhasmana <rajatdhasmana@gmail.com>
Change-Id: I815706f691a7d1e5a0c54eb15222417008ef1f34
2023-01-16 16:34:21 +02:00
whoami-rajat 78f8a7bbe6 Handle external events in extend volume
The support to extend attached volumes is being added to glance
cinder store with change[1].
Currently, cinder sends external events to nova if the volume is
in the ``in-use`` state, irrespective of the instance uuid value in
the attachment records.
When using the glance cinder store, the instance_uuid field is None
in the attachment record, which signifies that the volume is attached
to the glance host and not to a nova instance. We should not send any
external events in this case.

With this change, we check that the instance UUIDs are not None
before sending external events to nova. If any are None, we don't
send any external events.

[1] https://review.opendev.org/c/openstack/glance_store/+/868742
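
A simplified sketch of the check (names are illustrative, not the
actual manager code):

    def instance_uuids_for_events(attachments):
        # Glance cinder store attachments have instance_uuid=None, so an
        # empty list here means "don't send external events to Nova".
        uuids = [a.instance_uuid for a in attachments]
        if any(uuid is None for uuid in uuids):
            return []
        return uuids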

Closes-Bug: 2000724
Change-Id: Ia0c8ff77139524e9934ef354b1455ea01b4c31ef
2023-01-03 14:35:31 +05:30
Eric Harney 5c03c4ef6c mypy: Correct return types for volumes/snapshots summary
These return a tuple of items, not object Lists.

Change-Id: I98e4082f78e78c44163be1dc51c23fc39fc3c994
2022-09-22 10:47:21 -04:00
Hemna 72da8249d9 Bugfix: Account for consumed space better
When the volume service starts up, it goes through all
volumes for a host in the db and adds up the volume size as
a mechanism to account for that allocate space against the
backend.  The problem was that the volume manager was only
counting volumes with a state of 'in-use' or 'available'.
If a volume has a host set on it, then we account for it's
allocated space.

This patch adds other volume states to the set used to account for
allocated space at volume service startup.
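
Conceptually, the startup accounting looks something like this sketch
(the set of states is illustrative, not the exact list added by the
patch):

    COUNTED_STATES = {'in-use', 'available', 'error', 'error_deleting',
                      'downloading', 'maintenance'}  # illustrative set

    def initial_allocated_capacity(volumes):
        # Sum the sizes of volumes that occupy space on this backend.
        return sum(v.size for v in volumes
                   if v.host and v.status in COUNTED_STATES)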

Closes-Bug: 1910767
Change-Id: I90d5dfbe62e630dc8042e725d411cadc2762db56
2022-08-09 13:24:36 +00:00
Zuul 2790c631d0 Merge "Report tri-state shared_targets for NVMe volumes" 2022-07-27 08:47:56 +00:00
Zuul 3dfc519301 Merge "Groups: remove unneeded "status" variable" 2022-06-14 17:08:28 +00:00
ricolin a719525c1c Add image_conversion_disable config
Add a new config option, `image_conversion_disable`. When it is set
to `True`, the image disk_format and the volume format must be the
same; otherwise an ImageUnacceptable exception is raised.
`image_conversion_disable` is a boolean option and defaults to
`False`.

The idea behind this is that in certain high-scale environments a
cloud may allow both qcow2 and raw image uploads.

However, uploading a qcow2 image and creating a large number of
volumes can cause a tremendous number of conversions that will kill
cinder-volume. It may be undesirable to have this, so a cloud
operator can opt to disallow conversions and enforce that users
upload the correct image type if they want to have volumes (i.e.
raw in the rbd case).
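
A rough sketch of the behaviour the option implies (exception class
and option wiring simplified):

    class ImageUnacceptable(Exception):
        """Stand-in for cinder's ImageUnacceptable (sketch only)."""

    def check_image_conversion(image_disk_format, volume_format,
                               image_conversion_disable=False):
        # With conversion disabled, the image must already be in the
        # volume's format; otherwise it is rejected.
        if image_conversion_disable and image_disk_format != volume_format:
            raise ImageUnacceptable(
                f"conversion disabled: image format {image_disk_format!r} "
                f"does not match volume format {volume_format!r}")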

Closes-Bug: #1970115
Change-Id: Ic481d68639d9460d1fd14225bc17a0d8287d5fd9
2022-06-01 03:56:02 +08:00
Zuul 2937ef7702 Merge "Modify manner of retrieving volume_ref" 2022-05-27 17:46:58 +00:00
Eric Harney 70590f991e Groups: remove unneeded "status" variable
Tidy up the create_group method a bit.

Change-Id: Ic646ce475def284850af9b98d7dc2cab61c86a20
2022-05-27 15:21:03 +00:00
Zuul 0242947802 Merge "pylint: tidy up clean_volume_locks" 2022-05-26 16:38:23 +00:00
Gorka Eguileor ef741228d8 Report tri-state shared_targets for NVMe volumes
NVMe-oF drivers that share the subsystem have the same race condition
issue that iSCSI volumes that share targets do.

The race condition is caused by AER messages that trigger automatic
rescans on the connector host side in both cases.

For iSCSI we added a feature on the Open-iSCSI project that allowed
disabling these scans, and added support for it in os-brick.

Since manual scans are a new feature that may be missing in a host's
iSCSI client, cinder has a flag on volumes to indicate when they use
shared targets.  Using that flag, os-brick consumers can use the
"guard_connection" context manager to ensure race conditions don't
happen.

The race condition is prevented by os-brick using manual scans if they
are available in the iSCSI client, or a file lock if not.

The problem we face now is that we also want to use the lock for NVMe-oF
volumes that share a subsystem for multiple namespaces (there is no way
to disable automatic scans), but cinder doesn't (and shouldn't) expose
the actual storage protocol on the volume resource, so we need to
leverage the "shared_targets" parameter.

So with a single boolean value we need to encode 3 possible options:

- Don't use locks because targets/subsystems are not shared
- Use locks if the iSCSI client doesn't support manual scans
- Always use locks (for example for NVMe-oF)

The only option we have is using the "None" value as well. That way we
can encode 3 different cases.

But we have an additional restriction: "True" is already taken for
the iSCSI case, because there will already be volumes in the database
with that value stored.

And making guard_connection always lock when shared_targets is set to
True would reintroduce the bottleneck from bug #1800515.

That leaves us with the "None" value to force the use of locks.

So we end up with the following tristate for "shared_targets":

- True means os-brick should lock if the iSCSI initiator doesn't
  support manual scans.
- False means os-brick should never lock.
- None means os-brick should always lock.
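
From a consumer's point of view the tristate boils down to something
like this sketch (the function name is illustrative, not the os-brick
API):

    def needs_external_lock(shared_targets, supports_manual_scans):
        # None: always lock (e.g. shared NVMe-oF subsystem).
        if shared_targets is None:
            return True
        # False: targets are not shared, never lock.
        if shared_targets is False:
            return False
        # True: lock only if the initiator can't disable automatic scans.
        return not supports_manual_scans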

The alternative to this encoding would be to have an online data
migration for volumes to change "True" to "None", and accept that there
could be race conditions during the rolling upgrade (because os-brick on
computes will interpret "None" as "False").

Since "in theory" Cinder was only returning True or False for the
"shared_target", we add a new microversion with number 3.69 that returns
null when the value is internally set to None.

The patch also updates the database with a migration, though it looks
like it's not necessary since the DB already allows null values, but it
seems more correct to make sure that's always the case.

This patch doesn't close bug #1961102 because the os-brick patch is
needed for that.

Related-Bug: #1961102
Change-Id: I8cda6d9830f39e27ac700b1d8796fe0489fd7c0a
2022-05-24 15:13:23 +02:00
Eric Harney 5179e4f6bf Use modern type annotation format for collections
This works in Python 3.7 or greater and is
cleaner looking.

See PEP-585 for more info.
https://peps.python.org/pep-0585/
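
For example, a trivial before/after sketch:

    # Old style (typing module):
    #   from typing import Dict, List
    #   def hosts(mapping: Dict[str, List[int]]) -> List[str]: ...

    # PEP 585 style; works on Python 3.7+ with postponed evaluation:
    from __future__ import annotations

    def hosts(mapping: dict[str, list[int]]) -> list[str]:
        return list(mapping)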

Change-Id: I4c9da881cea1a3638da504c4b79ca8db13851b06
2022-05-18 10:01:18 -04:00
Eric Harney 8b55f6f1b2 pylint: tidy up clean_volume_locks
Initialize set_clean to False so it always has a value in
the except block.
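
In other words, roughly (a sketch, not the actual function):

    def run_cleanup(cleanup):
        # Give set_clean a value before the try block so the except
        # handler can always reference it; 'cleanup' is any callable.
        set_clean = False
        try:
            set_clean = cleanup()
        except Exception:
            if not set_clean:
                pass  # cleanup didn't finish; log/handle it here
        return set_clean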

Change-Id: Ia7ac2aa6c741bdade7aeb4d22f7a2269f8db878c
2022-05-17 10:38:03 -04:00
Zuul aa774d6faf Merge "Prevent temporary volume from being deleted accidentally" 2022-04-29 20:27:36 +00:00
Zuul 2611f67393 Merge "Fix cacheable capability" 2022-04-29 19:50:24 +00:00
Hironori Shiina 53c13891b3 Prevent temporary volume from being deleted accidentally
A temporary volume can accidentally be deleted by the DELETE API
while it is in use, because its status is `available`. To avoid this,
this fix sets the status of a temporary volume to a value that does
not allow deletion. When a temporary volume is used for backing up,
`backing-up` is set. When a temporary volume is used for reverting a
snapshot, `in-use` is set because the volume is attached to a host.
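
Roughly, the status choice can be pictured like this sketch (the
helper is hypothetical):

    def temp_volume_status(purpose):
        # Pick a status that the DELETE API will refuse to act on.
        if purpose == 'backup':
            return 'backing-up'
        if purpose == 'revert':
            return 'in-use'  # attached to a host while reverting
        return 'available'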

Closes-Bug: #1970768
Change-Id: Ib6a2e4d68e532b91161df5245c17ce815f12f935
2022-04-28 14:35:24 -04:00
Gorka Eguileor 68311a0794 Fix cacheable capability
When using the LVM cinder driver the cacheable capability is not being
reported by the backend to the scheduler when the transport protocol is
NVMe-oF (nvmet target driver), but it is properly reported if it's the
LIO target driver.

This also happens with other drivers that should be reporting that they
are cacheable.

This happens because even though the volume manager correctly uses
the "storage_protocol" reported by the drivers in their stats to add
the "cacheable" capability for the iSCSI, FC, and NVMe-oF protocols,
it isn't taking into account all the variants these have:

- FC, fc, fibre_channel
- iSCSI, iscsi
- NVMe-oF, nvmeof, NVMeOF

The same thing happens for the shared_targets check on volumes,
which is missing an iSCSI variant.

This patch creates constants for the different storage protocols to
try to avoid these variants (as agreed at the PTG) and also makes the
cacheable and shared_targets checks match against all the existing
variants.

This change facilitates identifying NVMe-oF drivers (for bug 1961102)
for the shared_targets part.
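
Conceptually, the variant handling looks like this sketch (the
constant names are illustrative, not necessarily the ones added by
the patch):

    ISCSI_VARIANTS = ('iSCSI', 'iscsi')
    FC_VARIANTS = ('FC', 'fc', 'fibre_channel')
    NVMEOF_VARIANTS = ('NVMe-oF', 'nvmeof', 'NVMeOF')

    CACHEABLE_PROTOCOLS = ISCSI_VARIANTS + FC_VARIANTS + NVMEOF_VARIANTS

    def is_cacheable(storage_protocol):
        # Match against every known spelling instead of a single literal.
        return storage_protocol in CACHEABLE_PROTOCOLS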

Closes-Bug: #1969366
Related-Bug: #1961102
Change-Id: I1333b0471974e94eb2b3b79ea70a06e0afe28cd9
2022-04-20 18:47:46 +02:00
Gorka Eguileor 6474afc3da Warn on driver detach errors
There are cases in the detach process where an error is not
considered a failure, so the exception is ignored.

The problem is that we currently may not leave any trace in the logs
that an error actually happened.

This patch adds a simple warning log message to ensure that at least
there is some kind of notification about it.

Change-Id: Ib23debd0a2ec09c588ad122277f97a04ddb70bff
2022-03-31 20:03:54 +02:00
Zuul b49fb59a6b Merge "Remove attach and detach volume driver methods" 2022-03-02 19:21:56 +00:00
Zuul fab51b6af0 Merge "Move nimble driver code to hpe folder" 2022-03-02 11:54:20 +00:00
whoami-rajat fedd9b1459 Followup: Address review comments on re-image patch
This is a followup patch to address review comments on change
I031aae50ee82198648f46c503bba04c6e231bbe5.

Change-Id: I38884b313ea63ff76bc65c41580343226a128ae8
2022-02-24 22:58:53 +05:30
Yikun Jiang d69e89ea3b Support volume re-image
This patch adds a volume re-image API to enable re-imaging a
specific volume.

Implements: blueprint add-volume-re-image-api

Co-Authored-by: Rajat Dhasmana <rajatdhasmana@gmail.com>

Change-Id: I031aae50ee82198648f46c503bba04c6e231bbe5
2022-02-24 15:23:38 +05:30
Zuul d6788fb24d Merge "Rework backup process to make it async" 2022-02-05 19:00:43 +00:00
Gorka Eguileor d5a8c72032 Remove attach and detach volume driver methods
As part of the effort agreed at the last PTG to simplify and properly
define the driver interface, this patch removes the "attach_volume"
and "detach_volume" methods from the driver interface and from the
SolidFire driver, the only driver that was using them.

Change-Id: Ieb08a6870da92e438b3e7d6f48c1bdeb4d560e22
2022-01-31 15:46:22 +01:00
Hemna e38fb71aac Rework backup process to make it async
This patch updates the backup process to call the volume manager
asynchronously to get the backup device on which to perform the
backup.  This fixes a major issue with certain cinder drivers that
take a long time to create a temporary clone of the volume being
backed up.

Closes-Bug: #1916843
Change-Id: Ib861e1bc35247f932fbae3796ed9025a560461c4
2022-01-19 15:43:54 +00:00
Eric Harney 54ac21f73b mypy: Allow mypy to pass with requests-packaged urllib3
mypy fails with:
    error: Module has no attribute "packages"

Set type ignores for requests.packages.urllib3.

Change-Id: I553be293142c3e9525ca9aedfeb1a788830570cb
2022-01-18 10:33:15 -05:00
Ajitha Robert d67dcf42d4 Move nimble driver code to hpe folder
blueprint nimble-change-location

Change-Id: Id1f111462fc2558b90f9066387498bcb5e3217b2
2021-12-02 07:43:44 +00:00
Zuul af19ba08a7 Merge "Fix extra_capabilities" 2021-11-19 16:09:28 +00:00
Gorka Eguileor 2ec2222841 Fix: Race between attachment and volume deletion
There are cases where requests to delete an attachment made by Nova can
race other third-party requests to delete the overall volume.

This has been observed when running cinder-csi, where it first requests
that Nova detaches a volume before itself requesting that the overall
volume is deleted once it becomes `available`.

This is a cinder race condition, and like most race conditions is not
simple to explain.

Some context on the issue:

- Cinder API uses the volume "status" field as a locking mechanism to
  prevent concurrent request processing on the same volume.

- Most cinder operations are asynchronous, so the API returns before the
  operation has been completed by the cinder-volume service, but the
  attachment operations such as creating/updating/deleting an attachment
  are synchronous, so the API only returns to the caller after the
  cinder-volume service has completed the operation.

- Our current code **incorrectly** modifies the status of the volume
  both on the cinder-volume and the cinder-api services on the
  attachment delete operation.

The actual sequence of events that leads to the issue reported in
this bug is:

[Cinder-CSI]
- Requests Nova to detach volume (Request R1)

[Nova]
- R1: Asks cinder-api to delete the attachment and **waits**

[Cinder-API]
- R1: Checks the status of the volume
- R1: Sends terminate connection request (R1) to cinder-volume and
  **waits**

[Cinder-Volume]
- R1: Ask the driver to terminate the connection
- R1: The driver asks the backend to unmap and unexport the volume
- R1: The last attachment is removed from the DB and the status of the
      volume is changed in the DB to "available"

[Cinder-CSI]
- Checks that there are no attachments in the volume and asks Cinder to
  delete it (Request R2)

[Cinder-API]

- R2: Check that the volume's status is valid. It doesn't have
  attachments and is available, so it can be deleted.
- R2: Tell cinder-volume to delete the volume and return immediately.

[Cinder-Volume]
- R2: Volume is deleted and DB entry is deleted
- R1: Finish the termination of the connection

[Cinder-API]
- R1: Now that cinder-volume has finished the termination the code
  continues
- R1: Try to modify the volume in the DB
- R1: DB layer raises VolumeNotFound since the volume has been deleted
  from the DB
- R1: VolumeNotFound is converted to HTTP 404 status code which is
  returned to Nova

[Nova]
- R1: Cinder responds with 404 on the attachment delete request
- R1: Nova leaves the volume as attached, since the attachment delete
  failed

At this point the Cinder and Nova DBs are out of sync, because Nova
thinks that the attachment is connected and Cinder has detached the
volume and even deleted it.

Hardening is also being done on the Nova side [2] to accept that the
volume attachment may be gone.

This patch fixes the issue mentioned above, but there is a request on
Cinder-CSI [1] to use Nova as the source of truth regarding its
attachments that, when implemented, would also fix the issue.

[1]: https://github.com/kubernetes/cloud-provider-openstack/issues/1645
[2]: https://review.opendev.org/q/topic:%2522bug/1937084%2522+project:openstack/nova

Closes-Bug: #1937084
Change-Id: Iaf149dadad5791e81a3c0efd089d0ee66a1a5614
2021-10-15 17:47:38 +02:00
Zuul 51fd9aeed4 Merge "Delete attachment on remove_export failure" 2021-09-23 23:14:10 +00:00
Zuul a20a354a62 Merge "Fix detach notification" 2021-09-22 23:06:37 +00:00
Zuul 01183a1717 Merge "db: Remove 'db' argument from various managers" 2021-09-15 22:53:02 +00:00
Gorka Eguileor 3aa00b0878 Delete attachment on remove_export failure
When deleting an attachment, if the remove_export or detach_volume
method call fails in the cinder driver, then the attachment status is
changed to error_detaching but the REST API call doesn't fail.

The end result is:
- Volume status is "available"
- Volume attach_status is "detached"
- There is a volume_attachment record for the volume
- The volume may still be exported in the backend

The volume still being exported in the storage array is not a problem,
since the next attach-detach cycle will give it another opportunity to
succeed, and we also remove the export on volume deletion.

So in the end leaving the attachment in error_detaching status doesn't
have any use and creates confusion.

This patch removes the attachment record on an attachment delete
request if the error happens in the remove_export or detach_volume
calls.

This doesn't change how the REST API attachment delete operation
behaves; the change is that there will not be a leftover attachment
record with the volume in available and detached status.
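
In rough pseudo-Python, the new behaviour is something like this
(signatures simplified, sketch only):

    def finish_attachment_delete(driver, volume, attachment, log):
        try:
            driver.detach_volume(volume, attachment)
            driver.remove_export(volume)
        except Exception:
            # The backend export may be left behind; a later attach/detach
            # cycle or the volume deletion gets another chance to clean it.
            log.warning("Failed to remove export for volume %s", volume.id)
        # Remove the attachment record either way instead of leaving it
        # in error_detaching.
        attachment.destroy()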

Closes-Bug: #1935057
Change-Id: I442a42b0c098775935a799876ad8efbe141829ad
2021-09-15 16:55:50 +02:00
Gorka Eguileor 68d4944577 Fix detach notification
Our current `attachment_delete` methods in the volume API and the
manager are using DB methods directly, which makes the OVOs present in
those methods get out of sync with the latest data, which leads to
notifications having the wrong data when we send them on volume detach.

This patch replaces DB method calls with OVO calls and moves the
notification call to the end of the method, where we have the final
status on the volume.

It also adds the missing detach.start notification when deleting an
attachment in the reserved state.

Closes-Bug: #1916980
Closes-Bug: #1935011
Change-Id: Ie48cf55deacd08e7716201dac00ede8d57e6632f
2021-09-15 16:55:50 +02:00
Zuul 6f8215131a Merge "Log connection info returned from driver" 2021-09-08 18:27:41 +00:00
Stephen Finucane 4b246564ac db: Remove 'db' argument from various managers
This is no longer used for anything.

Change-Id: Idb1492012487625772528d957ff65e2070e7248d
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
2021-08-27 15:13:21 +01:00
Gorka Eguileor 94dfad99c2 Improve quota usage for temporary resources
Cinder creates temporary resources, volumes and snapshots, during some
of its operations, and these resources aren't counted towards quota
usage.

Cinder currently has a problem tracking quota usage when deleting
temporary resources.

Determining which volumes are temporary is a bit inconvenient because we
have to check the migration status as well as the admin metadata, so
they have been the source of several bugs, though they should be
properly tracked now.

For snapshots we don't have any way to track which ones are temporary,
which creates some issues:

- Quota sync mechanism will count them as normal snapshots.

- Manually deleting temporary snapshots after an operation fails will
  mess up the quota.

- If we are using snapshots instead of clones for backups of in-use
  volumes, the quota will be messed up on completion.

This patch proposes the introduction of a new field for those database
resource tables where we create temporary resources: volumes and
snapshots.

The field will be called "use_quota" and will be set to False for
temporary resources to indicate that we don't want them to be counted
towards quota on deletion.

Instead of using "temporary" as the field name "use_quota" was used to
allow other cases that should not do quota in the future.
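
A minimal sketch of how the flag is meant to be consumed (the field
name comes from this patch; the surrounding code is illustrative):

    def quota_delta_on_delete(volume):
        # Temporary resources carry use_quota=False and have no quota
        # impact on deletion.
        if not volume.use_quota:
            return 0
        # Normal volumes release their reserved size on deletion.
        return -volume.size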

Moving from our current mechanism to the new one is a multi-release
process because we need to have backward compatibility code for rolling
upgrades.

This patch adds everything needed to complete the multi-release
process so that anybody can submit the next release patches.  To do
so, the patch adds backward-compatible code adding the feature in this
release and TODO comments with the exact changes that need to be done
for the next 2 releases.

The removal of the compatibility code will be done in the next release,
and in the one after that we'll remove the temporary metadata rows that
may still exist in the database.

With this new field we'll be able to make our DB queries more efficient
for quota usage calculations, reduce the chances of introducing new
quota usage bugs in the future, and allow users to filter in/out
temporary volumes on listings.

Closes-Bug: #1923828
Closes-Bug: #1923829
Closes-Bug: #1923830
Implements: blueprint temp-resources
Change-Id: I98bd4d7a54906b613daaf14233d749da1e1531d5
2021-08-26 18:47:27 +02:00
Gorka Eguileor 28d9bca7d6 Fix notifications of migration temp volume
Volume usage notifications in Cinder during Cinder volume migrations are
incorrect.

Part of a volume migration is creating a new volume on the destination
backend and during this process Cinder will issue volume usage
notifications "create.start" and "create.end" even though this is not a
user volume creation and is inconsistent with other temporary volume and
snapshot creation cases.

Also, one of the last steps during the migration is to delete one of
the 2 volumes (the source or the destination), and in that case Cinder
will only issue a "delete.end" notification without its corresponding
"delete.start".
Since temporary volumes (for backups or migrations) are not counted
towards quota usage they should also not issue volume usage
notifications.

This patch makes sure that we don't do notifications when creating or
deleting temporary migration volumes.

In both cases it checks the migration_status field to see if it
starts with 'target:'.  For creation, the migration_status is set in
the _migrate_volume_generic method before making the RPC call, so the
data will be there from the start, before the manager flow starts.
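
The check itself is simple; roughly (sketch):

    def is_migration_temp_volume(volume):
        # Temporary destination volumes created by a generic migration
        # carry a migration_status of the form 'target:<source-volume-id>'.
        status = volume.migration_status or ''
        return status.startswith('target:')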

Closes-Bug: #1922920
Change-Id: I7164d700ef56a29e5d4f707fd2340e621bd6f351
2021-08-25 17:50:46 +02:00
Rajat Dhasmana 6a0b41a8ff Log connection info returned from driver
Currently there is no way to verify that the connection info returned
from the driver to cinder is the same as what cinder sends to nova
(or other consumers) to connect to the volume.
This log will help narrow down issues during attachments since there
are many components involved (nova, cinder, os-brick).

Change-Id: I8ed3567f8ae6c6384244cc1d07f1eaafbd7bf58e
2021-08-24 04:42:54 -04:00
Eric Harney b5ac2af0c2 mypy: continued manager, scheduler, rpcapi
Change-Id: I9a8d24ac27af8fe4864934d1b9bc5b66da6d2c1e
2021-08-11 08:36:09 -04:00
Eric Harney 8c46c09ad5 mypy: image cache
Change-Id: Iebb002cba51fc0f5e42565151a1687da20bd5a24
2021-08-10 10:26:39 -04:00