Split off the finalization part of the volume manager's
extend_volume method and make it externally callable as the new
os-extend_volume_completion admin volume action.
This is the first part of a feature that will allow volume drivers
to rely on feedback from Nova when extending attached volumes,
allowing e.g. NFS-based drivers to support online extend.
See the linked blueprint for details.
Implements: bp extend-volume-completion-action
Change-Id: I4aaa5da1ad67a948102c498483de318bd245d86b
We decided that H301 makes no sense for the "typing"
module, so we set the exception once in tox.ini instead
of marking it every time the module is used.
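For reference, the change amounts to something along these lines in
tox.ini (the section and option names follow hacking's
`import_exceptions` convention; treat this sketch as an assumption,
not the exact diff):

```ini
[hacking]
# H301 (one import per line) exceptions; "typing" is imported this
# way all over the codebase, so list it once here instead of adding
# a noqa comment at every import site.
import_exceptions = typing
```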
Change-Id: Id983fb0a9feef2311bf4b2e6fd70386ab60e974a
When we try to reimage a volume, we update the status of the
volume to 'downloading'.
We later validate the image metadata (like image is 'active',
image size is less than volume size, etc), and in case the
validation fails, we currently don't revert the volume status
back to the original ('available', 'in-use', etc), so the volume
stays in the 'downloading' state.
This patch fixes this by catching the failure exception and
doing a DB update to restore the volume status back to its
previous state.
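A minimal sketch of the approach (the helper and exception names here
are illustrative, not Cinder's actual API):

```python
class InvalidInput(Exception):
    pass


def validate_image_meta(image_meta, volume):
    # Illustrative checks: the image must be active and fit the volume
    if image_meta.get("status") != "active":
        raise InvalidInput("image is not active")
    if image_meta.get("size", 0) > volume["size"]:
        raise InvalidInput("image is larger than the volume")


def reimage(volume, image_meta):
    previous_status = volume["status"]   # 'available', 'in-use', ...
    volume["status"] = "downloading"     # normally a DB update
    try:
        validate_image_meta(image_meta, volume)
    except InvalidInput:
        # Restore the previous status instead of leaving the volume
        # stuck in 'downloading', then re-raise for the API layer
        volume["status"] = previous_status
        raise
```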
Closes-Bug: #2036994
Change-Id: I05bf29e2a089b06398414b542b655a8083c9a21f
Due to how the Linux SCSI kernel driver works there are some storage
systems, such as iSCSI with shared targets, where a normal user can
access other projects' volume data connected to the same compute host
using the attachments REST API.
This affects both single and multi-pathed connections.
To prevent users from doing this, unintentionally or maliciously,
cinder-api will now reject some delete attachment requests that are
deemed unsafe.
Cinder will process the delete attachment request normally in the
following cases:
- The request comes from an OpenStack service that is sending a
service token that has one of the roles in `service_token_roles`
- The attachment doesn't have an instance_uuid value
- The instance for the attachment doesn't exist in Nova
- According to Nova the volume is not connected to the instance
- Nova is not using this attachment record
There are 3 operations in the actions REST API endpoint that can be used
for an attack:
- `os-terminate_connection`: Terminate volume attachment
- `os-detach`: Detach a volume
- `os-force_detach`: Force detach a volume
In this endpoint we just won't allow most requests not coming from a
service. The rules we apply are the same as for attachment delete
explained earlier, but in this case we may not have the attachment id
and so have to be more restrictive. This should not be a problem for
normal operations because:
- Cinder backup doesn't use the REST API but RPC calls via RabbitMQ
- Glance doesn't use this interface anymore
Checking whether it's a service or not is done at the cinder-api level
by checking that the service user that made the call has at least one of
the roles in the `service_token_roles` configuration. These roles are
retrieved from keystone by the keystone middleware using the value of
the "X-Service-Token" header.
If Cinder is configured with `service_token_roles_required = true` and
an attacker provides valid non-service credentials, the service will
return a 401 error; otherwise it'll return 409, as if a normal user
had made the call without the service token.
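Roughly, the service check can be sketched as follows (the function
names and inputs are illustrative; the real check lives in cinder-api
and uses role data the keystone middleware derives from the
"X-Service-Token" header):

```python
def is_service_request(service_roles, service_token_roles):
    """True if the call carried a service token whose roles intersect
    the configured `service_token_roles`."""
    # service_roles is empty when no service token was sent
    return bool(set(service_roles) & set(service_token_roles))


def allow_attachment_delete(attachment, service_roles,
                            service_token_roles=("service",)):
    if is_service_request(service_roles, service_token_roles):
        return True
    # Illustrative safety checks mirroring the cases listed above
    if not attachment.get("instance_uuid"):
        return True
    # ... further checks against Nova (instance exists, volume not
    # connected, attachment record unused) would go here
    return False
```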
Closes-Bug: #2004555
Change-Id: I612905a1bf4a1706cce913c0d8a6df7a240d599a
mypy 1.0 no longer needs one "type: ignore" comment
that we currently have; it must be removed to resolve
an unused-ignore error.
Depends-On: Ide07cf7f7c5175026f897e0a1686911c0c93da21
Change-Id: If2e7e94af0725421403ca8bfad0e5fdfd513ab12
The initial cinder design[1][2][3] allowed users to create multiattach
volumes by specifying the ``multiattach`` parameter in the request
body of the volume create operation (``--allow-multiattach`` option in
cinderclient).
This functionality changed in Queens with the introduction of
microversion 3.50[4], where we started using volume types to store
the multiattach capability. Any volume created with a multiattach
volume type will be a multiattach volume[5].
While implementing the new functionality, we had to keep backward
compatibility with the *old way* of creating multiattach volumes.
We deprecated the ``multiattach`` (``--allow-multiattach`` on cinderclient
side) parameter in the queens release[6][7].
We also removed support for the ``--allow-multiattach`` optional
parameter from cinderclient in the train release[8], but the API
side never removed the compatibility code, so it was still possible
to create multiattach volumes by using the ``multiattach``
parameter (instead of a multiattach volume type).
This patch removes support for providing the ``multiattach``
parameter in the request body of a volume create operation; such
requests will now fail with a BadRequest exception stating the reason
for the failure and how it can be fixed.
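The new behavior can be sketched like this (a simplified stand-in for
the API-layer validation, not Cinder's actual code):

```python
class BadRequest(Exception):
    pass


def check_multiattach_param(volume_create_body):
    # The 'multiattach' flag in the request body is no longer honored
    if volume_create_body.get("multiattach"):
        raise BadRequest(
            "The multiattach request parameter is no longer supported; "
            "create the volume with a multiattach-capable volume type "
            "instead")
```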
[1] https://blueprints.launchpad.net/cinder/+spec/multi-attach-volume
[2] https://review.opendev.org/c/openstack/cinder/+/85847/
[3] https://review.opendev.org/c/openstack/python-cinderclient/+/85856
[4] f1bfd9790d
[5] https://docs.openstack.org/cinder/latest/admin/volume-multiattach.html#how-to-create-a-multiattach-volume
[6] 94dbf5cce2
[7] adb141a262
[8] 3c1b417959
Depends-On: https://review.opendev.org/c/openstack/tempest/+/875372
Closes-Bug: 2008259
Change-Id: I0ece6e279048abcc04b3674108290a80eca6bd62
Cinder is currently not able to upload a volume that is based on an
image back to glance. This bug is triggered if glance multistore is
enabled (devstack in this example).
When enabling multistore, the following properties will be stored in Cinder:
* os_glance_failed_import=''
* os_glance_importing_to_stores=''
Those properties cause problems when Cinder tries to perform some
actions with Glance. Error message:
```
cinderclient.exceptions.BadRequest: HTTP 403 Forbidden: Access was denied to this resource.: Attribute 'os_glance_failed_import' is reserved. (HTTP 400)
```
Nova had the same issue and solved it with:
50fdbc752a/releasenotes/notes/absolutely-non-inheritable-image-properties-85f7f304fdc20b61.yaml
and
dda179d3f9
Therefore, this patch is intended to apply a similar solution in Cinder.
Change-Id: I79d70543856c01a45e2d8c083ab8df6b9c047ebc
Closes-Bug: #1945500
This works in Python 3.7 or greater and looks cleaner.
See PEP-585 for more info.
https://peps.python.org/pep-0585/
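As an illustration, built-in collection types can be used directly in
annotations instead of their `typing` aliases (on Python 3.7/3.8 this
additionally requires `from __future__ import annotations`):

```python
# Before (typing aliases):
#   from typing import Dict, List
#   def get_sizes(volumes: List[Dict[str, int]]) -> List[int]: ...

# After (PEP 585 built-in generics, no typing import needed):
def get_sizes(volumes: list[dict[str, int]]) -> list[int]:
    return [v["size"] for v in volumes]
```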
Change-Id: I4c9da881cea1a3638da504c4b79ca8db13851b06
When a user tries to delete a volume in the "awaiting-transfer" state,
the error message received does not include "awaiting-transfer" among
the invalid volume states for deletion.
This leads the user to believe the volume-delete request is valid,
leaving them unable to debug why deletion fails for a volume in the
"awaiting-transfer" state.
Closes-Bug: #1971603
Change-Id: I78915c332169b26ffb2b97310efedec65bc25e4d
Call _is_encrypted() instead of volume_types.is_encrypted()
since we already have the volume type object.
Change-Id: Id82a5bc251a8cea4febdc429329cd136c805487d
Managing a volume to an encrypted volume type should not be allowed.
One reason is that there is no way for an operator to specify an
encryption key ID for the volume. Another is that we already don't
allow a volume of an encrypted type to be un-managed, so this change
will be symmetric.
Also update and correct the api-ref for this call.
Co-authored-by: Yadiel Xuan(轩艳东) <xuanyandong@inspur.com>
Co-authored-by: Brian Rosmaita <rosmaita.fossdev@gmail.com>
Change-Id: Ic2da41f3962c1108f974aca952bce3da6d6ac277
Closes-bug: #1944577
This patch adds volume re-image API to enable the ability to
re-image a specific volume.
Implements: blueprint add-volume-re-image-api
Co-Authored-by: Rajat Dhasmana <rajatdhasmana@gmail.com>
Change-Id: I031aae50ee82198648f46c503bba04c6e231bbe5
There is an initial policy check in the transfers accept API[1]
which validates correctly if the user is authorized to perform
the operation or not. However, we have a duplicate check in the volume
API layer which passes a target object (volume) while authorizing
which is wrong for this API. While authorizing, we enforce a check on
the project id of the target object, i.e. the volume in this case,
which, before the transfer operation is completed, still contains the
project id of the source project, making the validation wrong.
In the case of the transfers API, any project is able to accept the
transfer provided it has the auth key required to secure the transfer
accept, so this patch removes the duplicate policy check.
[1] https://opendev.org/openstack/cinder/src/branch/master/cinder/transfer/api.py#L225
Closes-Bug: #1950474
Change-Id: I3930bff90df835d9d8bbf7e6e91458db7e5654be
There are cases where requests to delete an attachment made by Nova can
race other third-party requests to delete the overall volume.
This has been observed when running cinder-csi, where it first requests
that Nova detaches a volume before itself requesting that the overall
volume is deleted once it becomes `available`.
This is a cinder race condition, and like most race conditions is not
simple to explain.
Some context on the issue:
- Cinder API uses the volume "status" field as a locking mechanism to
prevent concurrent request processing on the same volume.
- Most cinder operations are asynchronous, so the API returns before the
operation has been completed by the cinder-volume service, but the
attachment operations such as creating/updating/deleting an attachment
are synchronous, so the API only returns to the caller after the
cinder-volume service has completed the operation.
- Our current code **incorrectly** modifies the status of the volume
both on the cinder-volume and the cinder-api services on the
attachment delete operation.
The actual sequence of events that leads to the issue reported in this
bug is:
[Cinder-CSI]
- Requests Nova to detach volume (Request R1)
[Nova]
- R1: Asks cinder-api to delete the attachment and **waits**
[Cinder-API]
- R1: Checks the status of the volume
- R1: Sends terminate connection request (R1) to cinder-volume and
**waits**
[Cinder-Volume]
- R1: Asks the driver to terminate the connection
- R1: The driver asks the backend to unmap and unexport the volume
- R1: The last attachment is removed from the DB and the status of the
volume is changed in the DB to "available"
[Cinder-CSI]
- Checks that there are no attachments in the volume and asks Cinder to
delete it (Request R2)
[Cinder-API]
- R2: Checks that the volume's status is valid. It doesn't have
attachments and is available, so it can be deleted.
- R2: Tells cinder-volume to delete the volume and returns immediately.
[Cinder-Volume]
- R2: Volume is deleted and DB entry is deleted
- R1: Finishes the termination of the connection
[Cinder-API]
- R1: Now that cinder-volume has finished the termination the code
continues
- R1: Tries to modify the volume in the DB
- R1: The DB layer raises VolumeNotFound since the volume has been
deleted from the DB
- R1: VolumeNotFound is converted to HTTP 404 status code which is
returned to Nova
[Nova]
- R1: Cinder responds with 404 on the attachment delete request
- R1: Nova leaves the volume as attached, since the attachment delete
failed
At this point the Cinder and Nova DBs are out of sync, because Nova
thinks that the attachment is connected and Cinder has detached the
volume and even deleted it.
Hardening is also being done on the Nova side [2] to accept that the
volume attachment may be gone.
This patch fixes the issue mentioned above, but there is a request on
Cinder-CSI [1] to use Nova as the source of truth regarding its
attachments that, when implemented, would also fix the issue.
[1]: https://github.com/kubernetes/cloud-provider-openstack/issues/1645
[2]: https://review.opendev.org/q/topic:%2522bug/1937084%2522+project:openstack/nova
Closes-Bug: #1937084
Change-Id: Iaf149dadad5791e81a3c0efd089d0ee66a1a5614
Our current `attachment_delete` methods in the volume API and the
manager are using DB methods directly, which makes the OVOs present in
those methods get out of sync with the latest data, which leads to
notifications having the wrong data when we send them on volume detach.
This patch replaces DB method calls with OVO calls and moves the
notification call to the end of the method, where we have the final
status on the volume.
It also adds the missing detach.start notification when deleting an
attachment in the reserved state.
Closes-Bug: #1916980
Closes-Bug: #1935011
Change-Id: Ie48cf55deacd08e7716201dac00ede8d57e6632f
Introduces API microversion 3.66, which allows
snapshot creation on in-use volumes without the
force flag being passed.
Co-authored-by: Eric Harney <eharney@redhat.com>
Co-authored-by: Brian Rosmaita <rosmaita.fossdev@gmail.com>
Implements: blueprint fix-snapshot-create-force
Change-Id: I6d45aeab065197a85ce62740fc95306bce9dfc45
This is a silly config option. We only have one database driver in-tree
and no plans to add more (SQLAlchemy is best in class). There's also no
way we'd be able to support out-of-tree drivers. Remove it entirely.
Change-Id: Ica3b2e8fcb079beca652e81d2230bcca82fb49d7
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
Cinder creates temporary resources, volumes and snapshots, during some
of its operations, and these resources aren't counted towards quota
usage.
Cinder currently has a problem tracking quota usage when deleting
temporary resources.
Determining which volumes are temporary is a bit inconvenient because we
have to check the migration status as well as the admin metadata, so
they have been the source of several bugs, though they should be
properly tracked now.
For snapshots we don't have any way to track which ones are temporary,
which creates some issues:
- The quota sync mechanism will count them as normal snapshots.
- Manually deleting temporary snapshots after an operation fails will
mess up the quota.
- If we are using snapshots instead of clones for backups of in-use
volumes, the quota will be messed up on completion.
This patch proposes the introduction of a new field for those database
resource tables where we create temporary resources: volumes and
snapshots.
The field will be called "use_quota" and will be set to False for
temporary resources to indicate that we don't want them to be counted
towards quota on deletion.
Instead of using "temporary" as the field name, "use_quota" was chosen
to allow for other cases that should not count towards quota in the
future.
Moving from our current mechanism to the new one is a multi-release
process because we need to have backward compatibility code for rolling
upgrades.
This patch adds everything needed to complete the multi-release process
so that anybody can submit next release patches. To do so the patch
adds backward compatible code adding the feature in this release and
TODO comments with the exact changes that need to be done for the next
2 releases.
The removal of the compatibility code will be done in the next release,
and in the one after that we'll remove the temporary metadata rows that
may still exist in the database.
With this new field we'll be able to make our DB queries more efficient
for quota usage calculations, reduce the chances of introducing new
quota usage bugs in the future, and allow users to filter in/out
temporary volumes on listings.
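The deletion-time decision, including the backward-compatible fallback
for rows created before the upgrade, can be sketched like this (the
fallback heuristics shown are illustrative simplifications of the old
migration-status and admin-metadata checks):

```python
def counts_towards_quota(resource):
    """Decide whether deleting `resource` should adjust quota usage."""
    use_quota = resource.get("use_quota")
    if use_quota is not None:
        # New-style row: the explicit flag is authoritative
        return use_quota
    # Backward-compatible heuristics for pre-upgrade rows (sketch):
    if (resource.get("migration_status") or "").startswith("target:"):
        return False   # temporary migration volume
    if resource.get("admin_metadata", {}).get("temporary") == "True":
        return False   # temporary volume flagged via admin metadata
    return True
```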
Closes-Bug: #1923828
Closes-Bug: #1923829
Closes-Bug: #1923830
Implements: blueprint temp-resources
Change-Id: I98bd4d7a54906b613daaf14233d749da1e1531d5
File locks are never removed from the system, so they keep increasing in
the locks directory, which can become problematic.
In this patch we start trying to delete these lock files when we delete
a volume or a snapshot.
This affects the 2 types of file locks we currently have:
- Using oslo lockutils synchronized with external=True
- Using coordination.synchronized when deployed in Active-Passive and no
DLM
This will alleviate the ever increasing files in the locks directory.
Deployment tools should implement a service that runs when the host is
booting and cleans out the locks directory before the OpenStack
services are started.
Partial-Bug: #1432387
Change-Id: Ic73ee64257aeb024383c6cb79f2e8c04810aaf69
Both volumes and snapshots have a volume_type_id field in the DB, but
when we migrate a volume we leave the snapshots with the old type. This
can only happen for retypes that don't do migration, since we cannot do
a retype with migration if the volume has snapshots.
Leaving the snapshots with the old type makes a mess of the quota
usage when we do the retype as well as when we delete the snapshots.
This patch fixes the quota issue by making sure the snapshots are
retyped as well. This means that we will check quotas for the snapshots
when retyping a volume that have them and we will properly reserve and
set the quota on retype.
Closes-Bug: #1877164
Change-Id: I90e9f85d192e1f2fee4ec8615a5bc95851a90f8e
Attachment OVO automatically loads the Volume OVO into its `volume`
attribute, so we don't need to load it again in the volume's API
`attachment_delete` method.
Change-Id: I79f2e58de42fca69c08f3636d72b80bdf8457e9a
When we create a volume in the DB using the OVO interface, some fields
were missing from the in-memory volume after it had been created.
This was caused by the create method not passing the expected attributes
to the _from_db_object.
Because of this missing information, we have places in the code where
we force a reload of the whole Volume OVO when it shouldn't be
necessary.
This patch fixes the create method of the Volume OVO and removes an
instance of the forceful reload of the volume, reducing our DB calls.
Change-Id: Ia59cbc5a4eb279e56f07ff9f44aa40b582aea829
In Change-ID Ic8a8ba2271d6ed672b694d3991dabd46bd9a69f4 we added:
vref.multiattach = self._is_multiattach(volume_type)
vref.save()
Then we remove the assignment but forgot to remove the save.
This patch removes that unnecessary save call.
Change-Id: I993444ba3b6e976d40ae7c5858b32999eb337c66
The old attachment API has a mix of OVO and DB method calls that can
result in admin metadata being removed.
When we automatically update the admin metadata using DB methods these
changes are not reflected in the volume OVO, so when we pass the OVO via
RPC the receiver will assume that the admin metadata present in the OVO is
up to date, and will delete key-value pairs that exist in the DB and are
not in memory.
That is happening in the old `attach` method with the `readonly` key,
which the volume service removes after the API service had added it
with a direct DB call.
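The failure mode can be simulated with plain dicts (a simplified
stand-in for the OVO/DB interaction, not Cinder's actual objects):

```python
# The admin metadata as the DB sees it, and the in-memory snapshot
# the OVO took of it.
db_admin_metadata = {"attached_mode": "rw"}
ovo_admin_metadata = dict(db_admin_metadata)   # OVO snapshot

# The API service adds a key through a direct DB call; the OVO copy
# is never refreshed, so it no longer matches the DB.
db_admin_metadata["readonly"] = "True"

# The receiving service trusts the (stale) OVO and writes it back
# wholesale, silently dropping the key that was just added.
db_admin_metadata.clear()
db_admin_metadata.update(ovo_admin_metadata)
assert "readonly" not in db_admin_metadata
```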
Patch doesn't include a release note to avoid making unnecessary noise
in our release notes, because it is unlikely to affect existing users
since Nova will be using the new attachment API.
Change-Id: Id3c7783a80614e8a980d942343ecb9f47a5a805a
This patch fixes volume type retyping when multiple
availability zones are in place. This ensures that the
request_spec to the scheduler has the availability zone
in the right place for the AvailabilityZoneFilter to be
able to see it and use it for filtering.
Change-Id: I3f6cca6eb87ac4727b06b167b5aa12da07ba8fb5
Closes-Bug: #1883928
This patch includes 2 fixes for the following issues:
1) ast.literal_eval() doesn't work with int and float values
See comment inline
2) Do not traverse and modify the same dict:
While traversing the filters dict, we were modifying it inside the
loop, which distorts the order of elements (modified elements are
re-added at the end). To fix this, I've used a temp dict for the
traversal while the modifications are done in the filters dict.
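The general pattern for fix 2) can be sketched like this (the value
conversion shown is illustrative; the point is iterating a snapshot
while mutating only the original dict):

```python
def convert_filter_values(filters):
    # Traverse a snapshot of the items; mutate only the original
    # dict. Mutating a dict while iterating it directly can raise
    # RuntimeError or distort the element order.
    for key, value in tuple(filters.items()):
        if isinstance(value, str) and value.isdigit():
            filters[key] = int(value)
    return filters
```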
Closes-Bug: #1883490
Change-Id: I18b4b0b1b71904b766f7b89df49f5539e3c7662a
When a non-admin user calls revert to snapshot using the snapshot name
the client returns a failure status:
$ cinder --os-volume-api-version 3.59 revert-to-snapshot snap1
ERROR: No snapshot with a name or ID of 'snap1' exists.
The revert to snapshot API requires a UUID, so when the given value is
not a UUID the cinderclient lists the snapshots, passing
"all_tenants=1" and the name of the snapshot, to find its ID first.
The problem is that we were only removing the "all_tenants" filter for
admins, leaving it in place for normal users and passing it to the DB,
which wouldn't understand that filter and would return None.
This patch ensures we always remove the "all_tenants" filter (as we do
in other places) before calling the OVO method to list snapshots.
Change-Id: I5fd3d7840b36f6805143cd1b837258232e7bc58a
Fixes-Bug: #1889758
Allows support for setting a minimum and/or maximum vol size that
can be created in extra_specs for each volume_type. This allows
setting size restrictions on different "tiers" of storage.
If configured, the size restrictions will be checked at the API level
as part of volume creation or retype.
2 new volume type keys are supported for setting the minimum volume
size and maximum volume size for that type.
'provisioning:min_vol_size'
'provisioning:max_vol_size'
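The API-level check can be sketched as follows (the extra spec key
names come from this change; the function and exception names are
illustrative):

```python
class InvalidInput(Exception):
    pass


def check_vol_size_limits(size, extra_specs):
    """Enforce provisioning:min_vol_size / provisioning:max_vol_size
    from the volume type's extra specs on create or retype."""
    min_size = extra_specs.get("provisioning:min_vol_size")
    max_size = extra_specs.get("provisioning:max_vol_size")
    if min_size is not None and size < int(min_size):
        raise InvalidInput(
            "size %s is below the type minimum %s" % (size, min_size))
    if max_size is not None and size > int(max_size):
        raise InvalidInput(
            "size %s is above the type maximum %s" % (size, max_size))
```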
Implements: blueprint min-max-vol-size-by-vol-type
Change-Id: I222e778902a41e552e812896d7afd0516ee7fe68
When uploading a volume to an image, Cinder checks to see if there's
any image metadata already present on the volume, and if so, it adds
it to the image-create request so that Glance will persist these
additional properties on the new image. If during upload preparation,
Cinder wants to add specific metadata to the new image (for example,
a new encryption key id), we need to ensure that the new value for
this property is preferred to a value in the volume_glance_metadata
table.
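The precedence rule amounts to a dict merge where Cinder-supplied
properties win over the volume_glance_metadata entries (a simplified
sketch, not the actual upload code):

```python
def build_image_properties(volume_glance_metadata, cinder_overrides):
    # Start from the metadata copied from the volume, then apply the
    # values Cinder wants for the new image (e.g. a fresh encryption
    # key id) so they take precedence on key collisions.
    props = dict(volume_glance_metadata)
    props.update(cinder_overrides)
    return props
```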
Closes-bug: #1844725
Change-Id: Iba3c5fa4db87641a84eb22a0fc93294dd55a3132