We decided that H301 makes no sense for the "typing"
module, so we set that in tox.ini instead of suppressing it
every time the module is used.
Change-Id: Id983fb0a9feef2311bf4b2e6fd70386ab60e974a
The initial cinder design[1][2][3] allowed users to create multiattach
volumes by specifying the ``multiattach`` parameter in the request
body of the volume create operation (the ``--allow-multiattach`` option
in cinderclient).
This functionality changed in Queens with the introduction of
microversion 3.50[4] where we used volume types to store
the multiattach capabilities. Any volume created with a multiattach
volume type will be a multiattach volume[5].
While implementing the new functionality, we had to keep backward
compatibility with the *old way* of creating multiattach volumes.
We deprecated the ``multiattach`` (``--allow-multiattach`` on cinderclient
side) parameter in the queens release[6][7].
We also removed support for the ``--allow-multiattach`` optional
parameter from cinderclient in the train release[8], but the API
side never removed the compatibility code, so it was still possible
to create multiattach volumes by using the ``multiattach``
parameter (instead of a multiattach volume type).
This patch removes support for providing the ``multiattach``
parameter in the request body of a volume create operation; such
requests now fail with a BadRequest exception stating the reason
for the failure and how it can be fixed.
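A minimal sketch of the new behaviour; the exception class, function
name, and message below are illustrative assumptions, not the actual
Cinder code:

```python
class BadRequestError(Exception):
    """Stand-in for the API's BadRequest exception."""


def check_multiattach_param(volume_body):
    """Reject the deprecated ``multiattach`` flag in a volume-create body."""
    if 'multiattach' in volume_body:
        raise BadRequestError(
            "The 'multiattach' request parameter is no longer supported; "
            "create the volume with a multiattach volume type instead.")
```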
[1] https://blueprints.launchpad.net/cinder/+spec/multi-attach-volume
[2] https://review.opendev.org/c/openstack/cinder/+/85847/
[3] https://review.opendev.org/c/openstack/python-cinderclient/+/85856
[4] f1bfd9790d
[5] https://docs.openstack.org/cinder/latest/admin/volume-multiattach.html#how-to-create-a-multiattach-volume
[6] 94dbf5cce2
[7] adb141a262
[8] 3c1b417959
Depends-On: https://review.opendev.org/c/openstack/tempest/+/875372
Closes-Bug: 2008259
Change-Id: I0ece6e279048abcc04b3674108290a80eca6bd62
This works in Python 3.7 or greater and is
cleaner looking.
See PEP-585 for more info.
https://peps.python.org/pep-0585/
Change-Id: I4c9da881cea1a3638da504c4b79ca8db13851b06
This patch adds a feature by which we allow setting default volume types
for projects.
The following changes are made to achieve the feature:
1) Add 4 APIs: set, get, get_all, and unset default volume type
2) All policies (except get_all) default to system/domain/project admin
3) Preference order: project default, then conf default
4) Logic to disallow deletion of a default type
We validate the set, get and unset APIs with keystone to verify that a
valid project id is passed in the request and that the user has proper
authorization rights to show the project.
The policies are system/domain/project admin by default except get_all
policy which defaults to system admin.
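The preference order in item 3 can be sketched as follows;
``project_defaults`` and ``conf_default`` are illustrative stand-ins
for the database table and the ``default_volume_type`` conf option:

```python
def resolve_default_volume_type(project_id, project_defaults, conf_default):
    """Return the project's default volume type, else the conf default."""
    return project_defaults.get(project_id) or conf_default


# Example: project 'proj-a' has its own default, 'proj-b' falls back.
defaults = {'proj-a': 'fast-ssd'}
resolve_default_volume_type('proj-a', defaults, 'lvm')
resolve_default_volume_type('proj-b', defaults, 'lvm')
```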
Implements: Blueprint multiple-default-volume-types
Change-Id: Idcc949ed6adbaea0c2337fac83014998b81ff1f8
If a volume_type is not specified in a volume-create request, change
I4da0c13b5b3f8174a30b8557f968d6b9e641b091 (introduced in Train) sets a
default volume_type in the REST API layer. This prevents the
selection logic in cinder.volume.flows.api.create_volume.
ExtractVolumeRequestTask from being able to infer the appropriate
volume_type from the source volume, snapshot, or image metadata, and
has caused a regression where the created volume is of the default
type instead of the inferred type.
This patch removes setting the default volume_type in the REST API
and modifies the selection code in ExtractVolumeRequestTask slightly
to make sure a volume_type is always assigned in that function, and
adds and revises some tests.
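The selection order described above can be sketched like this; the
function name and dict layout are assumptions, not the real
ExtractVolumeRequestTask code:

```python
def select_volume_type(requested, source_volume, snapshot, image_meta,
                       default_type):
    """Pick a volume_type: explicit request first, then inferred sources."""
    if requested is not None:
        return requested
    if source_volume is not None:
        return source_volume['volume_type']
    if snapshot is not None:
        return snapshot['volume_type']
    if image_meta is not None and image_meta.get('volume_type'):
        return image_meta['volume_type']
    return default_type  # a volume_type is always assigned
```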
Change-Id: I05915f2e32b1229ad320cd1c5748de3d63183b91
Closes-bug: #1879578
A volume can be created from a snapshot or from a source volume,
but not from both at the same time.
Line 262 already checks whether availability_zone is None. When
cloning a volume, lines 263-267 are not run, so there is no need
for line 268 to check whether availability_zone is None again.
Change-Id: I46ad820aeaf6882cef38630d215496cf228fde05
The flake8-logging-format extension includes several checks for things
we've had to try to catch in code reviews until now. This enables the
extension and fixes the few cases where things had slipped through code
review.
G200: Logging statements should not include the exception in logged string
is disabled since that triggers a lot more issues, some of which may be
acceptable. That can be left as a follow up exercise if we want to clean
those up and enable all checks.
Change-Id: I1dedc0b31f78f518c2ab5dee5ed7abda1c1d9296
Signed-off-by: Sean McGinnis <sean.mcginnis@gmail.com>
This adds usage of the flake8-import-order extension to our flake8
checks to enforce consistency on our import ordering to follow the
overall OpenStack code guidelines.
Since we have now dropped Python 2, this also cleans up a few cases for
things that were third party libs but became part of the standard
library such as mock, which is now a standard part of unittest.
Some questions, in order of importance:
Q: Are you insane?
A: Potentially.
Q: Why should we touch all of these files?
A: This adds consistency to our imports. The extension makes sure that
all imports follow our published guidelines of having imports ordered
by standard lib, third party, and local. This will be a one time
churn, then we can ensure consistency over time.
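A small illustration of the enforced ordering; the third-party and
local imports are shown as comments so the snippet stands alone:

```python
# Standard library imports come first.
import os
import sys

# Third-party imports follow, e.g.:
# from oslo_config import cfg
# from oslo_log import log as logging

# Local (cinder) imports come last, e.g.:
# from cinder import exception
```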
Q: Why bother? This doesn't really matter.
A: I agree - but...
We have the issue that we have fewer people actively involved and less
time to perform thorough code reviews. This will make it objective and
automated to catch these kinds of issues.
But part of this, even though it may seem a little annoying, is about
making it easier for contributors. Right now, we may or may not notice
if something is following the guidelines or not. And we may or may not
comment in a review to ask for a contributor to make adjustments to
follow the guidelines.
But then further along into the review process, someone decides to be
thorough, and after the contributor feels like they've had to deal with
other change requests and things are in really good shape, they get a -1
on something mostly meaningless as far as the functionality of their
code. It can be a frustrating and disheartening thing.
I believe this actually helps avoid that by making it an objective thing
that they find out right away up front - either the code is following
the guidelines and everything is happy, or it's not and running local
jobs or the pep8 CI job will let them know right away and they can fix
it. No guessing on whether or not someone is going to take a stand on
following the guidelines or not.
This will also make it easier on the code reviewers. The more we can
automate, the more time we can spend in code reviews making sure the
logic of the change is correct and less time looking at trivial coding
and style things.
Q: Should we use our hacking extensions for this?
A: Hacking has had to pin back linter requirements for a long time now.
Current versions of the linters actually don't work with the way
we've been hooking into them for our hacking checks. We will likely
need to do away with those at some point so we can move on to the
current linter releases. This will help ensure we have something in
place when that time comes to make sure some checks are automated.
Q: Didn't you spend more time on this than the benefit we'll get from
it?
A: Yeah, probably.
Change-Id: Ic13ba238a4a45c6219f4de131cfe0366219d722f
Signed-off-by: Sean McGinnis <sean.mcginnis@gmail.com>
When cloning an encrypted volume, change the
encryption key used on the destination volume.
This is currently implemented for iSCSI/FC
drivers only.
Change-Id: Id797af4f8ff001ec3d55cb4eda19988a314b700d
Much of our code renames this at import already --
just name it "volume_utils" for consistency, and
to make code that imports other modules named "utils"
less confusing.
Change-Id: I3cdf445ac9ab89b3b4c221ed2723835e09d48a53
Fix an issue with the 'resource_backend' included in the scheduler spec
for creating a volume associated with another volume, snapshot, or
group/cg. When running A/A, the 'resource_backend' must reference the
cluster, not the host.
Enhance the unit tests that cover this area. This includes fixing the
'expected_spec' so it copies a dictionary rather than referencing it,
so that external changes to the dictionary don't inadvertently update
the unit test's expected results.
Closes-Bug: #1808343
Change-Id: I7d414844d094945b55a094a8426687595f22de28
The "_get_image_metadata" method is a common function that checks
whether we can use an image to write a volume.
This logic can also be used by the re-image interface, so this patch
extracts it into volume utils.
Change-Id: Ie1b9fd8b4f335f0b660984ca1b3121b2f203704d
blueprint: add-volume-re-image-api
When creating a volume from a snapshot of a bootable volume, the
bootable setting is ignored. This changes the creation logic so
that when snapshot_id is not None, bootable can be set to match
the snapshot's source volume.
Change-Id: Ifca9ca04dfa8eca987dc44ec23cda5e0d431cba2
The volume type's reserved key 'availability_zones' is not
recognized if the type is taken from a source volume or snapshot.
This patch fixes the issue while refactoring the code that
collects the volume type from a resource.
Change-Id: Ibe937d5a97d685f5d9c3f8c03c5c384bb02a6942
This patch fixes two issues related to multiattach extra specs:
1. A volume's multiattach attribute will always be False if it is
created with a multiattach type while the type is taken from a source
volume, snapshot, image metadata or config file; related patch [1].
2. 'MULTIATTACH_POLICY' is not validated if the multiattach flag is
specified in a volume type that comes from a source volume, snapshot,
image metadata or config file [2].
[1]: a32e24ff3e
[2]: ff30971d1f
Closes-Bug: #1770671
Change-Id: Ibec0656d9ead726ca12e62091fda74c04ec99eda
Availability zone is now integrated into the volume type's extra
specs: it is recognized when creating and retyping volumes, and
volume types can now be filtered by extra spec.
Change-Id: I4e6aa7af707bd063e7edf2b0bf28e3071ad5c67a
Partial-Implements: bp support-az-in-volumetype
We can't assume that the LUKS layer used for
volume encryption functions in a way that will
safely work with multiattach.
Closes-Bug: #1770689
Change-Id: I613b48a9e89270b2f0266bffc5aeeefad37ce8fb
A regression was introduced in
I970c10f9b50092b659fa2d88bd6a02f6c69899f2
on backends supporting storage pools. By extracting the
'host@backend' out of the host attr from a parent volume,
the scheduler may attempt to put volumes into other pools
belonging to the backend.
Many backends cannot clone across storage pools,
or create storage volumes from snapshots from other
storage pools.
Change-Id: Ic4c8f29bef2c82550d6d6f03f8fa1dc80696f56e
Closes-Bug: #1732557
This patch implements the spec of creating volume from backup.
Change-Id: Icdc6c7606c43243a9e12d7a42df293b729f589e5
Partial-Implements: blueprint support-create-volume-from-backup
This patch is a minor reorg of existing code to facilitate subsequent
changes for migrating legacy keys to a modern key manager such as
Barbican.
volume/utils.py now supports these functions that manage the lifecycle
of an encryption key:
- create_encryption_key()
- delete_encryption_key()
- clone_encryption_key()
create_encryption_key() is an existing function, but it was not used
everywhere a key was created. All code that needs to create a key now
does so by calling this function.
delete_encryption_key() is primarily a small wrapper, but it will play
a larger role in subsequent patches related to key migration. One
functional improvement in this patch is that it no longer propagates an
exception when deleting an unknown key ID. This avoids needlessly
blocking attempts to delete an encrypted volume for fear that the key
could not be deleted.
clone_encryption_key() is a small wrapper that improves the transparency
of code that needs to create a new encryption key ID from an existing
key ID.
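The tolerant delete behaviour can be sketched as below;
``KeyManagerError`` and ``DictKeyManager`` are toy stand-ins for the
Castellan key manager, not Cinder's real classes:

```python
class KeyManagerError(Exception):
    """Stand-in for the key manager's 'unknown key' error."""


class DictKeyManager:
    """Toy key manager backed by a dict, for illustration only."""

    def __init__(self):
        self.keys = {}

    def delete(self, key_id):
        if key_id not in self.keys:
            raise KeyManagerError(key_id)
        del self.keys[key_id]


def delete_encryption_key(key_mgr, key_id):
    """Delete a volume's encryption key, tolerating unknown key IDs."""
    try:
        key_mgr.delete(key_id)
    except KeyManagerError:
        # The key is already gone (or never existed); don't let that
        # block deletion of the encrypted volume itself.
        pass
```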
Implements: blueprint migrate-fixed-key-to-barbican
Change-Id: I2108e77a8d07dddfb9ec284b3930a197854bd884
Cinder would commit quota twice when managing a resource in a clean
environment where the corresponding 'quota_usage' record is empty.
This happens because we create the db entity before the reservation,
so the SYNC mechanism refreshes the new record into quota_usage
before it is actually reserved.
This patch fixes the issue by introducing a two-phase
reserve & commit, where the latter only updates the actual size.
Closes-Bug: #1587376
Change-Id: I79940e534ec03f2d327e8a7e14e45bc93ae41b0c
Currently we increment the gigabytes quota twice, once while creating
the snapshot and again while creating the individual volumes; this is
fixed in this change. Also, the snapshot quota update was inside a
loop over the volumes, so it was incremented once per volume.
Change-Id: I9ef79a21c7438e69221a5ed2a1c1bfb59f3f9a32
Closes-Bug: 1728834
Pass the request to scheduler rather than volume service in
order to check the backend's capacity.
Change-Id: I970c10f9b50092b659fa2d88bd6a02f6c69899f2
Partial-Implements: blueprint inspection-mechanism-for-capacity-limited-host
This patch adds policy in code support for volume
and volume type resources.
Change-Id: I47d11a2f6423a76ca053abf075791ed70ee84b07
Partial-Implements: blueprint policy-in-code
When creating a volume, Cinder uses the az cache to validate the az.
This leads to a problem: if a new cinder-volume node with a different
az is added to the OpenStack environment, or the admin changes an
existing cinder-volume's az, creating a volume with that az will fail
until the cache is refreshed.
This patch refreshes the cached az list if the previous create
task failed to find the target az.
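A rough sketch of the refresh-on-miss behaviour, with ``fetch_azs``
standing in for the real services-table lookup:

```python
def validate_az(az, cache, fetch_azs):
    """Check an az against the cache, refreshing once on a miss."""
    if az in cache:
        return True
    cache.clear()
    cache.update(fetch_azs())  # re-read the az list from the services table
    return az in cache
```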
Change-Id: I3e884af1499ea2ddea1a3603d3d09c31a1f62811
Closes-Bug: #1693084
If encryption types are setup in an invalid way, Barbican
will throw a 400 error which results in Cinder producing
a 500 error here.
For example, setting key size = 2 produces:
"4xx Client error: Bad Request: Provided field value
is not supported" from Castellan/Barbican.
Catch this and produce a generic HTTP 400 instead.
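The error translation can be sketched as follows; both exception
classes are stand-ins for the real castellan/webob types:

```python
class BarbicanClientError(Exception):
    """Stand-in for the 4xx client error raised by Castellan/Barbican."""


class HTTPBadRequest(Exception):
    """Stand-in for the web framework's 400 response exception."""


def create_encryption_key_checked(create_fn):
    """Run a key-creation call, mapping client errors to HTTP 400."""
    try:
        return create_fn()
    except BarbicanClientError as exc:
        # A misconfigured encryption type (e.g. key size = 2) is the
        # user's error, so report 400 rather than a 500.
        raise HTTPBadRequest("Invalid encryption type settings: %s" % exc)
```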
Closes-Bug: #1675895
Change-Id: Icd64d7e54d61a415b1be3b438625e880a7259416
Volume encryption keys specified in Glance image
metadata are copied to new keys for volumes cloned
from that image.
If the image key and volume key match (i.e. when using
the conf key manager), the data is copied directly from
the image to the volume.
When creating a new volume in a Barbican environment, we
will generate a new encryption key which is identical to
the original key.
If creating an unencrypted volume, the encrypted image data
is copied into the volume. This is not directly usable
without the encryption key, but may be useful for data transfer
purposes.
Closes-Bug: #1485449
Related bp: improve-encrypted-volume
Change-Id: I1f4ea35cd05f4c43a5ae07d8a541ff6495d5f8e9
1. While creating a consistency group from a source consistency group
having thinly provisioned volume(s), the target volumes would not
be thinly provisioned because the 'tpvv' parameter was not being
passed. The same thing happened for thinly deduplicated volume(s).
The optional boolean parameters 'tpvv' and 'tdvv' are now extracted,
if present, from the source volume(s) and used when creating the
corresponding target volume(s). When they are not present, the
default value of False is used.
Fixed the test case too so that it accepts these two additional
parameters.
2. Cloning a CG with a bootable volume would create a CG with a
non-bootable volume. This was due to 'bootable' flag of source
volume not being used while creating DB entry for target volume.
Closes-Bug: #1655541
Change-Id: Ia997629a3b2189d1722f87edda5f3989fe79ffdb
If a user passes a large number such as "11111111111111111111111111111"
as the size to the create volume api, the api behaves inconsistently
depending on whether the cinder-api service is running on Py2 or Py3.
On Py2, it returns the following error message:
ERROR: Invalid input received: Volume size '11111111111111111111111111111'
must be an integer and greater than 0 (HTTP 400) (Request-ID: req-abe6cd5
e-a0c6-4e0d-ba05-c9eceef547bf)
But on Py3, it passes this validation.
This patch fixes the inconsistency by changing the
isinstance(size, int) check to isinstance(size, six.integer_types).
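A self-contained sketch of the fixed check, using a local equivalent
of six.integer_types so the snippet runs without six; the message
format mirrors the error text above:

```python
try:
    integer_types = (int, long)  # Python 2: large literals are ``long``
except NameError:
    integer_types = (int,)       # Python 3: ints are unbounded

def validate_size(size):
    """Validate a volume size consistently across Py2 and Py3."""
    if not isinstance(size, integer_types) or size <= 0:
        raise ValueError("Volume size '%s' must be an integer and "
                         "greater than 0" % size)
    return size
```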
NOTE:
The unit test written in the patch will fail on master on Py2 environment
and it's written to just ensure that the issue is resolved for Py2.
Closes-Bug: #1651103
Change-Id: Ib43b96d458f0d4e9477886bd19c35a7e05664929
The scheduler is currently always setting `replication_status` to
"disabled" regardless of the real replication status.
Some drivers are updating it on volume creation, but even in that case
the user will have received the API REST response with the
`replication_status` set to "disabled" and will have to do a `show` of
the volume once the creation has been completed to confirm if the volume
is actually replicated or not.
This patch changes this and allows the scheduler to correctly set the
`replication_status` on volume creation based on the standard property
`replication_enable`.
This doesn't solve 100% of the cases, as a Cloud Administrator may not
be using the `replication_enable` extra spec in volume types to set
replication because they are using a backend that replicates at the
backend/pool level.
In that case there will be a discrepancy, just like we have now, until
the driver completes the creation and returns the model update for the
volume.
Closes-Bug: #1643883
Change-Id: Id4df2b7ad55e9b5def91329f5437da9caa185c30
This patch allows scheduler to work with clustered hosts to support A/A
operations.
Reported capabilities of clustered hosts will be grouped by the
cluster_name instead of the host, while non-clustered hosts will still
be stored by host.
To avoid replacing a newer capability report with an older version we
timestamp capabilities on the volumes (it's backward compatible) and
only replace currently stored values in scheduler when they are newer.
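The newest-wins guard can be sketched like this; the dict layout and
function name are illustrative:

```python
def update_capabilities(store, backend_key, capabilities):
    """Store a capability report keyed by cluster (or host); newest wins."""
    current = store.get(backend_key)
    if current is not None and \
            current['timestamp'] >= capabilities['timestamp']:
        return False  # stale or duplicate report; keep the stored one
    store[backend_key] = capabilities
    return True
```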
Following actions now support A/A operation:
- manage_existing
- manage_existing_snapshot
- get_pools
- create_volume
- retype
- migrate_volume_to_host
- create_consistencygroup
- create_group
- update_service_capabilities
- extend_volume
And Affinity and Driver filters have been updated.
The new functionality to notify service capabilities has not been
changed to Active/Active and will be done in another patch.
APIImpact: Added microversion 3.16
Specs: https://review.openstack.org/327283
Implements: blueprint cinder-volume-active-active-support
Change-Id: I611e75500f3d5281188c5aae287c62e5810e6b72
In this class and its related modules, the db_api parameter appears
to be unused, and the function parameters for execute come from
taskflow.
So we can delete it.
Change-Id: Iad9e0ebe7d03210efe30964623e69a57d9fb0d18
This change adds a new enum and field, VolumeAttachStatus
and VolumeAttachStatusField, that will hold the constants for the
'attach_status' field of the VolumeAttachStatus object. This enum
and field are based on the base oslo.versionedobjects enum and field.
This also changes over the volume object to use the new field. Finally,
all uses of strings for comparison and assignment to this field are
changed over to use the constants defined within the enum.
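For illustration, the constants might look like the plain class below;
in Cinder they are defined with oslo.versionedobjects Enum/Field
subclasses, and the exact value set here is an assumption:

```python
class VolumeAttachStatus(object):
    """String constants for 'attach_status', replacing raw literals."""
    ATTACHED = 'attached'
    ATTACHING = 'attaching'
    DETACHED = 'detached'
    DETACHING = 'detaching'
    ERROR_ATTACHING = 'error_attaching'
    ERROR_DETACHING = 'error_detaching'

    ALL = (ATTACHED, ATTACHING, DETACHED, DETACHING,
           ERROR_ATTACHING, ERROR_DETACHING)
```

Comparisons then use the constant (e.g. ``status == VolumeAttachStatus.ATTACHED``) instead of a bare string.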
Partial-Implements: bp cinder-object-fields
Change-Id: Ie727348daf425bd988425767f9dfb82da4c3baa8
Currently, when creating an encrypted volume from an image, Cinder
writes raw data to the encrypted volume.
As a result, Nova can't use these volumes.
This patch implements the following behavior:
when creating an encrypted volume from an image, the data is
encrypted before being written to the volume.
This patch adds a new interface, copy_image_to_encrypted_volume, to
the driver API; vendors can implement it themselves.
Change-Id: I213459193550198c570615e381797db5e08e2cce
Implements: blueprint improve-encrypted-volume
Closes-bug: #1482464
Closes-bug: #1465656