In Xena we added the use_quota DB field to volumes and snapshots to
unify the tracking of temporary resources, but we still had to keep
compatibility code for the old mechanisms (due to rolling
upgrades).
This patch removes the compatibility code for the old mechanisms and
adds cleanup code to remove the tracking in the volume metadata.
Change-Id: I3f9ed65b0fe58f7b7a0867c0e5ebc0ac3c703b05
This patch updates the message logged after the flow manager creates a
volume from backup. When the flow manager called the driver to attempt
a backend-assisted restore and the driver didn't support it, the log
entry said the volume was created successfully. This was incorrect
because the volume wasn't finished: the raw volume had been created,
but the data hadn't been copied into it yet. This message is confusing
when trying to debug failures in production.
This patch updates the log message to report success only when the
driver-assisted restore is implemented and succeeded.
Change-Id: I65d84f1e5566d7189ffd3315aeeef09c7cd73c68
We decided that H301 makes no sense for the "typing" module, so we set
that exception once in tox.ini instead of marking it every time the
module is used.
Change-Id: Id983fb0a9feef2311bf4b2e6fd70386ab60e974a
Unfortunately, the commit b75c29c7d8
did not update all the places where BackupAPI.restore_backup()
was called. One of them was in the flow manager.
Although it is disappointing that this regression went undetected, we
are not changing how unit tests are performed in any fundamental way
in this patch. An outstanding patch using MyPy is already in review,
which would have caught this case.
Closes-bug: #2025277
Related-Change-Id: I54b81a568a01af44e3f74bcac55e823cdae9bfbf
Change-Id: Iabfebacfea44916f89584ffd019d848e53302eaf
Support for the human format in oslo_utils.imageutils.QemuImgInfo has
been deprecated since oslo.utils 4.9.1 [1]. This change replaces the
human format with the json format, which is used by default.
[1] 73eb0673f627aad382e08a816191b637af436465
Closes-Bug: #1940540
Change-Id: Ia0353204abf849467106ee08982d1271de23101a
Per I8516178941, create_volume here doesn't return anything. Make this
clearer.
Depends-On: I851617894157d9fcb14b1e1294ac8b39c4a7a558
Change-Id: I612bd1c3e200554d4d6e687fa2aee802951a1f48
Add a new boolean config option `image_conversion_disable` (default
`False`). When it is set to `True`, the image disk_format and the
volume format must be the same; otherwise an ImageUnacceptable
exception is raised.
The idea behind this was that in certain high scale environments,
it is possible that a cloud allows both qcow2 and raw image uploads.
However, uploading a qcow2 image and creating a large number of
volumes can cause a tremendous amount of conversions that will kill
cinder-volume. This may be undesirable, so a cloud operator can opt to
disallow conversions and enforce that users upload the correct image
type if they want to have volumes (i.e. raw in the RBD case).
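A minimal sketch of the behavior this option enables (names and
structure are illustrative, not Cinder's actual implementation):

```python
class ImageUnacceptable(Exception):
    """Raised when an image cannot be used without conversion."""


def check_image_format(image_disk_format, volume_format,
                       image_conversion_disable=False):
    """Reject images that would require a conversion when the operator
    has disabled image conversion entirely."""
    if image_conversion_disable and image_disk_format != volume_format:
        raise ImageUnacceptable(
            "Image disk_format %r does not match volume format %r and "
            "image conversion is disabled"
            % (image_disk_format, volume_format))
```

With the option off (the default), mismatched formats still go through
the usual conversion path; the check only bites when the operator opts
in.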
Closes-Bug: #1970115
Change-Id: Ic481d68639d9460d1fd14225bc17a0d8287d5fd9
This works in Python 3.7 or greater and looks cleaner.
See PEP-585 for more info.
https://peps.python.org/pep-0585/
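For example, builtin generics can replace the typing aliases directly
(subscripting builtins works at runtime on Python 3.9+, or on 3.7+ in
annotations under `from __future__ import annotations`):

```python
def volume_ids(volumes: list[dict]) -> list[str]:
    """Builtin generics (PEP 585): no `from typing import List, Dict`
    needed for these annotations."""
    return [v['id'] for v in volumes]
```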
Change-Id: I4c9da881cea1a3638da504c4b79ca8db13851b06
There are multiple places in Cinder where the os-brick call to get the
connector properties is not passing the "multipath" and
"enforce_multipath" configuration options.
This means that some drivers won't return multipathed connection
information from the initialize_connection RPC call, resulting in a
single-pathed attachment even when the os-brick connector is
instantiated with the right configuration options.
This patch fixes the different calls where this is an issue:
- Backup create
- Backup restore
- Kaminario create volume from snapshot
- Kaminario create volume from volume
- Volume rekeying
And also in places where this does not affect operations because the
drivers always return multipathed information:
- Unity driver
- IBM FlashSystem
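As an illustrative sketch only (the helper is hypothetical;
`use_multipath_for_image_xfer` and `enforce_multipath_for_image_xfer`
are the Cinder configuration options in question), the fix amounts to
threading the configuration through to os-brick's
get_connector_properties() instead of relying on its defaults:

```python
def connector_kwargs(conf):
    """Build the multipath-related keyword arguments that should be
    passed along to os-brick when fetching connector properties."""
    return {
        'multipath': conf.get('use_multipath_for_image_xfer', False),
        'enforce_multipath': conf.get(
            'enforce_multipath_for_image_xfer', False),
    }
```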
Closes-Bug: #1951982
Closes-Bug: #1951977
Closes-Bug: #1951981
Change-Id: I73ab87b5aaa4835a814389bc1cdd8016d75f52ef
This is only needed for messaging when failures occur,
so wait until then to initialize it to save overhead.
Change-Id: Ib82f46d71a00c572080c861db122158afd747689
We only use the backup API and RPCAPI when handling
backup-related requests -- don't bother loading this
up for all create volume requests.
Change-Id: I7783a6c9d2ecd62eabd4766a076e3fe28994a0a8
Format info support was added to the NFS driver in change [1] and, due
to an issue with groups, it was restricted for volumes in groups in
change [2].
The problem was with the OVO handling of consistency groups/volume
groups; it is fixed in this patch, thereby also removing the
limitation on the NFS driver.
[1] https://review.opendev.org/c/openstack/cinder/+/761152
[2] https://review.opendev.org/c/openstack/cinder/+/780700
Co-Authored-By: Gorka Eguileor <geguileo@redhat.com>
Change-Id: I078a54a47e43b1cc83b4fb1a8063b41ab35358a1
Cinder creates temporary resources, volumes and snapshots, during some
of its operations, and these resources aren't counted towards quota
usage.
Cinder currently has a problem tracking quota usage when deleting
temporary resources.
Determining which volumes are temporary is a bit inconvenient because we
have to check the migration status as well as the admin metadata, so
they have been the source of several bugs, though they should be
properly tracked now.
For snapshots we don't have any way to track which ones are temporary,
which creates some issues:
- The quota sync mechanism will count them as normal snapshots.
- Manually deleting temporary snapshots after an operation fails will
mess up the quota.
- If we are using snapshots instead of clones for backups of in-use
volumes, the quota will be messed up on completion.
This patch proposes the introduction of a new field for those database
resource tables where we create temporary resources: volumes and
snapshots.
The field will be called "use_quota" and will be set to False for
temporary resources to indicate that we don't want them to be counted
towards quota on deletion.
The field is named "use_quota" rather than "temporary" to allow for
other cases that should not count towards quota in the future.
Moving from our current mechanism to the new one is a multi-release
process because we need to have backward compatibility code for rolling
upgrades.
This patch adds everything needed to complete the multi-release process
so that anybody can submit next release patches. To do so the patch
adds backward compatible code adding the feature in this release and
TODO comments with the exact changes that need to be done for the next
2 releases.
The removal of the compatibility code will be done in the next release,
and in the one after that we'll remove the temporary metadata rows that
may still exist in the database.
With this new field we'll be able to make our DB queries more efficient
for quota usage calculations, reduce the chances of introducing new
quota usage bugs in the future, and allow users to filter in/out
temporary volumes on listings.
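A minimal sketch of the idea (not Cinder's actual code): with the flag
in place, quota handling on deletion reduces to checking a single
field instead of migration status plus admin metadata.

```python
class Volume:
    def __init__(self, use_quota=True):
        # False marks temporary resources that were never counted.
        self.use_quota = use_quota


def quota_delta_on_delete(volume):
    """Usage delta to apply when deleting: temporary resources were
    never counted towards quota, so deleting them must not touch it."""
    return -1 if volume.use_quota else 0
```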
Closes-Bug: #1923828
Closes-Bug: #1923829
Closes-Bug: #1923830
Implements: blueprint temp-resources
Change-Id: I98bd4d7a54906b613daaf14233d749da1e1531d5
Volume usage notifications during Cinder volume migrations are
incorrect.
Part of a volume migration is creating a new volume on the destination
backend and during this process Cinder will issue volume usage
notifications "create.start" and "create.end" even though this is not a
user volume creation and is inconsistent with other temporary volume and
snapshot creation cases.
Also, one of the last steps during the migration is to delete one of
the two volumes (the source or the destination), and in that case
Cinder will only issue a "delete.end" notification without its
corresponding "delete.start".
Since temporary volumes (for backups or migrations) are not counted
towards quota usage they should also not issue volume usage
notifications.
This patch makes sure that we don't do notifications when creating or
deleting temporary migration volumes.
In both cases it checks the migration_status field to see if it starts
with 'target:'. For creation, the migration_status is set in the
_migrate_volume_generic method before making the RPC call, so the data
will be there from the start, before the manager flow starts.
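The check described above can be sketched as follows (illustrative
only; the real code works on volume objects rather than dicts):

```python
def is_migration_target(volume):
    """True for the temporary destination volume of a migration, whose
    migration_status has the form 'target:<source-volume-id>'."""
    status = volume.get('migration_status') or ''
    return status.startswith('target:')
```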
Closes-Bug: #1922920
Change-Id: I7164d700ef56a29e5d4f707fd2340e621bd6f351
Glance has changed the format of the cinder URIs in image locations
so that they can look like
cinder://glance-store-name/volume_id
in addition to the legacy format
cinder://volume_id
Change the cinder code so that it can handle both formats for
reading. (We only need to write the legacy format.)
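A sketch of handling both formats (the helper is hypothetical; it only
illustrates that the store name, when present, occupies the authority
part of the URI):

```python
from urllib.parse import urlparse


def parse_cinder_location(uri):
    """Return (store_name, volume_id) for the new format
    cinder://store-name/volume_id, or (None, volume_id) for the
    legacy format cinder://volume_id."""
    parsed = urlparse(uri)
    if parsed.path:
        # New format: the netloc is the glance store name.
        return parsed.netloc, parsed.path.lstrip('/')
    # Legacy format: the netloc is the volume id itself.
    return None, parsed.netloc
```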
Change-Id: I8c176bf4c875061591bb6c94654a2cef643a4dcb
Closes-bug: #1898075
File locks are never removed from the system, so they keep increasing in
the locks directory, which can become problematic.
In this patch we start trying to delete these lock files when we delete
a volume or a snapshot.
This affects the two types of file locks we currently have:
- Using oslo lockutils synchronized with external=True
- Using coordination.synchronized when deployed in Active-Passive mode
with no DLM
This will alleviate the ever-increasing number of files in the locks
directory.
Deployment tools should implement a service that runs when the host
boots and cleans out the locks directory before the OpenStack services
are started.
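A best-effort cleanup along these lines could look like the following
sketch (the lock-file naming pattern is illustrative, not Cinder's
actual scheme):

```python
import glob
import os


def cleanup_resource_locks(lock_dir, resource_id):
    """Remove leftover file locks that embed the deleted resource's id.
    Failures are ignored: lock cleanup must never fail the deletion."""
    for path in glob.glob(os.path.join(lock_dir, '*%s*' % resource_id)):
        try:
            os.remove(path)
        except OSError:
            pass  # already removed or still in use
```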
Partial-Bug: #1432387
Change-Id: Ic73ee64257aeb024383c6cb79f2e8c04810aaf69
After a clone of an encrypted volume is created, an attach is attempted.
PowerMax driver requires the provider_location be populated in order
to find the volume to attach. For this, the volume object needs to be
updated.
Closes-Bug: #1913054
Change-Id: Idf5b3783ddc333d6d60f28a3d08e5fd28e5c1fa8
Move code that calls out to os-brick for connectors,
encryptors, etc., into volume_utils.
This leaves cinder/utils.py more general-purpose
cinder-wide code, which reduces unnecessary binding
between things like "cinder-manage db" and calls
that load much more of cinder's code (and external
libraries like os-brick).
This also means that some drivers only need to
import volume_utils and not cinder.utils.
Partial-Bug: #1912278
Change-Id: Ib2e2960ca354a47d303e0633c7d84e6da4b55b82
LUKS password quality checking is not useful
since we only use long hex strings for passwords.
Not skipping this means that we have to install
cracklib-dicts for cryptsetup to work, which is
unnecessary weight.
Closes-Bug: #1861120
Change-Id: I1105c16caaf916e9101b6dca34a7f13936ce2240
When xtremio_volumes_per_glance_cache is reached, it is no longer
possible to create more clones of the image volume, so a new cache
entry needs to be created.
Keeping in mind the case in [1], we can get CinderException for
various reasons from different drivers during the clone operation.
So we define a new exception for the XtremIO driver to force creation
of a new cache entry when the limit is reached, without enforcing the
same behavior on other drivers.
[1] https://bugs.launchpad.net/cinder/+bug/1552734
Closes-Bug: #1858169
Change-Id: I2bf964d5a7b2048db9be1ea3eb97cd517e112c5b
The new volume's encryption key that was cloned earlier in the volume
creation process should be deleted after rekey succeeds, because it is
no longer used.
Change-Id: I243d1b47f3996ccdda977ef21b979fd3fc49a2f9
Closes-Bug: #1844556
When cloning an encrypted volume, change the
encryption key used on the destination volume.
This is currently implemented for iSCSI/FC
drivers only.
Change-Id: Id797af4f8ff001ec3d55cb4eda19988a314b700d
Much of our code renames this at import already --
just name it "volume_utils" for consistency, and
to make code that imports other modules named "utils"
less confusing.
Change-Id: I3cdf445ac9ab89b3b4c221ed2723835e09d48a53
The previous patch to update hacking
and pycodestyle turned off some new errors to
keep the patch smaller.
Fix those errors here.
Change-Id: Ib22f63e98eefb36b9b2a8be55c15271824408d5d
This patch fixes the image volume cache so that a new cache entry is
created whenever cloning an existing entry fails (which can happen, for
example, when a cached volume reaches its snapshot limit). This restores
the original behavior, which was broken by [1].
[1] I547fb4bcdd4783225b8ca96d157c61ca3bcf4ef4
Closes-Bug: #1801595
Change-Id: Ib5947e2c7300730adb851ad58e898a29f2b88525
The "_copy_image_to_volume" method is a common function to copy an
image to a volume. It can also be used by the re-image interface, so
this patch extracts it into volume utils.
part of blueprint: add-volume-re-image-api
Change-Id: I89471fd2737d0b21ce029a6ad7ed43a6e8bf4810
This patch is to log user message when volume creation fails.
This is only for _create_raw_volume, and other patches will be
submitted for _create_from_snapshot etc.
Change-Id: I9ba87863623a9c5806e93b69e1992cabce2f13b9
Partial-Bug: #1799159
The "get_volume_image_metadata" method is a common function to get the
volume image metadata, and the "enable_bootable_flag" method is used
to enable the volume's bootable flag.
These can also be used by the re-image interface, so this patch
extracts them into volume utils.
Change-Id: I37af102c95023f7ad6d23c8aa93d63d8fe6f547d
blueprint: add-volume-re-image-api
When a volume is created via the restore-backup action, the volume
status changes from 'creating' to 'restoring-backup'; this state
change can confuse the end user. This patch keeps the volume status at
'creating' during creation.
Change-Id: I82d741ea8278de75ed817379b0a04f1f919a27c7
Add image signature verification support when
creating from image.
Change-Id: I37b7a795da18e3ddb18e9f293a9c795e207e7b7e
Partial-Implements: bp cinder-support-image-signing
The Cinder code that processes Glance image metadata is a bit confused
about whether this particular field is a Glance property or metadata.
Since it isn't a defined Glance property and is stored in image
metadata, ensure that Cinder also tracks it as metadata and not as a
property.
Prior to this fix, the mismatch caused Cinder to create volumes with
the wrong encryption key when creating a volume from an encrypted
image, which resulted in an unreadable volume.
Closes-Bug: #1764125
Change-Id: Ie5af3703eaa82d23b50127f611235d86e4104369
Refactor the code that creates a volume from a downloaded glance image
to minimize the scope of the lock that prevents multiple entries in the
volume image cache. Now the lock serializes only the portion of the
code that causes the cache entry to be created. Locking is minimized
when the volume is already cached, or when the volume won't be cached.
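The general shape of this is double-checked creation: only the code
that creates the cache entry holds the lock. A simplified sketch with
a plain dict and threading.Lock (the real code uses the image volume
cache and Cinder's locking utilities):

```python
import threading

_cache_lock = threading.Lock()
_cache = {}


def get_or_create(key, create_fn):
    """Serialize only cache-entry creation; hits take no lock."""
    entry = _cache.get(key)
    if entry is not None:
        return entry                # cache hit: lock-free fast path
    with _cache_lock:
        entry = _cache.get(key)     # re-check under the lock
        if entry is None:
            entry = _cache[key] = create_fn()
    return entry
```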
Closes-Bug: #1758414
Change-Id: I547fb4bcdd4783225b8ca96d157c61ca3bcf4ef4
At the moment, the check for disk space when booting from an image
happens regardless of whether the backend supports cloning. This means
that even if a backend can clone the image directly, the control plane
must have enough disk to download the entire image, which can be
unreasonable with backends such as RBD.
This patch moves the code which checks for enough disk space to be
in the same function that effectively downloads the image, ensuring
that it only runs when an image has to be downloaded and avoiding
the check if the backend successfully cloned the image.
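In outline (names and flow are illustrative, not Cinder's exact code),
the space check now runs only on the download path:

```python
def create_volume_from_image(backend, image, check_space, download):
    """Clone if the backend can; only when we must download the image
    do we verify the control plane has enough local disk space."""
    if backend.clone_image(image):
        return 'cloned'              # no local disk needed at all
    check_space(image['size'])       # checked only before a download
    download(image)
    return 'downloaded'
```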
Closes-Bug: #1744383
Change-Id: Ibfd6f40e8b8ab88d4ec76e9ac27617a0f97b6c29