This patch adds a snapshot cleanup operation to the
cleanup_incomplete_backup_operations process of c-bak.
Co-Authored-by: Christian Rohmann <christian.rohmann@inovex.de>
Closes-Bug: #1938488
Change-Id: Ifa3d572139fc37c94e3b50a02e61c9818a1b6501
The cinder-core gerrit group is configured directly in project-config
to act as openstackclient/openstacksdk "service cores". Add this
information to the cinder-groups document so that we know where to look
if this needs to be changed or adjusted in the future.
Change-Id: I6b626604f9ff573ab59fd4867fdee7a0178ed7f0
https://github.com/PyCQA/pycodestyle/issues/622 was fixed
in pycodestyle 2.4.0; as a result, E501 is now triggered for
docstrings that previously passed.
This shows up if we pull in hacking>=6.0.0.
Change-Id: Id3c8a66c26f01ca734e6bc33c14f5deedf8e15f3
This change updates the live migration capability in environments
using PowerMax. In the previous 2023.1 version, live migration fails
when no pool name is specified.
This update adds the ability to live migrate without a pool name.
Change-Id: Iad767cd516c8527136508470629236f68e0c7cc2
Closes-Bug: #2034937
Dell PowerMax SnapVx unlink fails when the linked device
is not yet fully defined.
This patch fixes the issue by checking the new configuration
option 'snapvx_unlink_symforce' and the linked device's 'defined'
status to determine the value of 'symforce' in the payload of the
SnapVx unlink REST call.
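A minimal sketch of how the flag could be derived; the helper name and signature are illustrative, not the actual PowerMax driver code:

```python
# Hedged sketch (hypothetical helper, not the real driver code):
# decide the 'symforce' value for the SnapVx unlink payload from the
# 'snapvx_unlink_symforce' option and the linked device's state.
def resolve_symforce(snapvx_unlink_symforce, device_defined):
    # Force the unlink only when the operator has opted in AND the
    # linked target device is not yet fully defined.
    return bool(snapvx_unlink_symforce) and not device_defined
```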
Closes-Bug: #2045230
Change-Id: I614f6aef2d4da76c417b4a143ab80e4a5f716dcd
There are two cases when cloning a replicated volume:
1] Same size, online copy.
Existing behaviour: start the clone and return from the function.
An error occurs because the clone is not yet complete when the
code tries to create the volume on the secondary array.
2] Different size, offline copy.
Existing behaviour: (i) create a new replicated volume.
(ii) During the clone operation the following error occurs:
Volume is involved in remote copy
(iii) Since the clone operation fails, the new replicated volume
is deleted as cleanup.
To handle both cases, the clone of a replicated volume is now always
performed as an offline copy.
Steps:
(i) Create a new volume without replication.
(ii) Perform the clone operation and wait until completion (offline copy).
(iii) Create the volume on the secondary array.
Closes-Bug: #2021941
Change-Id: I1f025542a2509e36919ece01b29064377dbbe189
Add file to the reno documentation build to show release notes for
stable/2024.1.
Use pbr instruction to increment the minor version number
automatically so that master versions are higher than the versions on
stable/2024.1.
Sem-Ver: feature
Change-Id: I4311c34b40838da8ecc48802091908f08bf6f48f
The stable/victoria and stable/wallaby branches have been deleted,
so reno can't find their release notes. Use the victoria-eom and
wallaby-eom tags to indicate the end of the
Cinder project's maintenance of these series.
Also fix our override for the reno closed_branch_tag_re to include
the -eom tags. Add the changed reno.yaml file to all relevant
'irrelevant-files' lists.
This strategy is what we used for the yoga transition, and was
discussed at a cinder weekly meeting:
https://meetings.opendev.org/irclogs/%23openstack-meeting-alt/%23openstack-meeting-alt.2024-02-07.log.html#t2024-02-07T14:06:09
Change-Id: I505b7cc12888d2373a0550b40bb945d75bb11067
The following three volume drivers are no longer supported, because
the storage products are now EOL[1].
- Dell SC Series Storage Driver (iSCSI, FC)
- Dell VNX Storage Driver (FC, iSCSI)
- Dell XtremeIO Storage Driver (iSCSI, FC)
This change marks these drivers unsupported, so that we can remove
them after the 2024.1 release.
[1] https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack.org/thread/7D7GUOI66BOD7Z3ZQWPWYYHH6VHAY4JJ/
Change-Id: I763278ec72adaf6f2ddd21edbfc687ca2f17f09a
Space allocation is an important NetApp driver-specific feature.
It needs to be set when the Cinder volume is created.
It is independent of, and not related to, the thin/thick
provisioning feature of Cinder volumes. It enables ONTAP to reclaim
space automatically when the host deletes data, which helps ONTAP
and the host see the actual space usage correctly.
It also helps keep a LUN (Cinder volume) online when the
LUN in ONTAP runs out of space and the containing volume
(in ONTAP) cannot automatically grow.
Users can configure it using a volume type extra spec.
By default, space allocation is disabled for ONTAP LUNs.
netapp:space_allocation: "<is> True"   # to enable space allocation
netapp:space_allocation: "<is> False"  # to disable space allocation
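For illustration, the extra spec could be applied with the openstack CLI; the volume type name "netapp-lun" below is an example, not one defined by this change (config fragment, not meant to run as-is):

```shell
# Illustrative only: enable space allocation on an example volume type.
openstack volume type create netapp-lun
openstack volume type set \
    --property netapp:space_allocation='<is> True' netapp-lun
```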
Blueprint: netapp-space-allocation-support
Change-Id: Ib7072f3093067ecd8ad84e396aaecec8f15c49ba
Add the volume type extra spec boolean property
'powermax:disable_protected_snap'. It is enabled when set to
`'<is> True'`, `'True'`, `'true'`, or True; otherwise the property
is disabled by default.
When enabled, the snapshot is not replicated and does not
match the source volume type; it is created as a regular device
in all cases.
If the property is not enabled in the volume type extra_specs, then
there is no change from the current behavior.
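A minimal sketch of the accepted-values check described above; the helper name is hypothetical, not the actual PowerMax driver code:

```python
# Hedged sketch (illustrative helper): interpret the extra-spec value,
# treating '<is> True', 'True', 'true', and True as enabled.
_TRUTHY = {"<is> true", "true"}

def is_protected_snap_disabled(extra_specs):
    value = extra_specs.get("powermax:disable_protected_snap", False)
    if value is True:
        return True
    # String values are compared case-insensitively.
    return str(value).strip().lower() in _TRUTHY
```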
Implements blueprint powermax-protected-snap-config
Change-Id: Iafa44dcf0e8f46749b5ef37f0b8d341e8253a3bd
An apostrophe was in place of a backtick, and as a result the
hyperlink to the LP bug didn't display as a hyperlink.
Change-Id: I42867de19c35255f275f47eb78062130f5756cab
Added connect and read timeouts to the PowerFlex REST API calls
to avoid Cinder hanging.
The default value for both the connect and read timeout
is 30 seconds.
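For illustration, separate connect and read timeouts are typically expressed as the `(connect, read)` tuple accepted by the `timeout` argument of the `requests` library; the helper below is a hypothetical sketch, not the driver's actual code:

```python
# Hedged sketch: build keyword arguments carrying a (connect, read)
# timeout tuple, as accepted by requests' 'timeout' parameter, so a
# stalled endpoint raises instead of hanging the service.
CONNECT_TIMEOUT = 30  # seconds, matching the new default
READ_TIMEOUT = 30     # seconds, matching the new default

def request_kwargs(verify=True):
    return {"timeout": (CONNECT_TIMEOUT, READ_TIMEOUT), "verify": verify}
```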
Closes-Bug: #2052995
Change-Id: I032d76627466f74121e3dc4fb2c8e175d830fa14
Added connect and read timeouts to the PowerMax REST API calls
to avoid Cinder hanging.
The default value for both the connect and read timeout
is 30 seconds.
Closes-Bug: #2051830
Change-Id: I2d419b4257bae75c69577a34758910c4889e2507
This fixes unit test compatibility with
jsonschema 4.21 while maintaining compatibility
with jsonschema 3.2.
Change-Id: If9b8b4ccc805c8086c180e881b0ddd712289ad13
The online migration remove_temporary_admin_metadata_data_migration
was recently merged and broke the gate for the kolla project.
There are two issues with the current code:
1. The db api doesn't return the total and updated values.
2. We issue a limit and an update in the same query, which is
not allowed and generates the following error:
sqlalchemy.exc.InvalidRequestError: Can't call Query.update()
or Query.delete() when limit() has been called
This patch fixes the issue by creating a select subquery that gets
all the ID values with the limit set to max_count. We then create
a new query for the update and pass the select subquery as a filter.
We don't require a release note since the bug and the fix are in the
same release.
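The select-subquery-then-update pattern can be sketched in plain SQL on SQLite; the table and column names below are illustrative, not Cinder's actual schema:

```python
# Hedged sketch of the fix: select the IDs to touch in a LIMITed
# subquery, then run the UPDATE filtered on those IDs, and return
# both the total and the updated count as the db api must.
import sqlite3

def migrate_batch(conn, max_count):
    total = conn.execute(
        "SELECT COUNT(*) FROM volume_admin_metadata WHERE migrated = 0"
    ).fetchone()[0]
    cur = conn.execute(
        "UPDATE volume_admin_metadata SET migrated = 1 WHERE id IN ("
        "  SELECT id FROM volume_admin_metadata"
        "  WHERE migrated = 0 LIMIT ?)",
        (max_count,),
    )
    return total, cur.rowcount  # (total remaining, rows updated)
```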
Closes-Bug: #2052805
Change-Id: Ida994f767eecb094c177db15dfc80a0c0fe56447