This is a silly config option. We only have one database driver in-tree
and no plans to add more (SQLAlchemy is best in class). There's also no
way we'd be able to support out-of-tree drivers. Remove it entirely.
Change-Id: Ica3b2e8fcb079beca652e81d2230bcca82fb49d7
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
resource_backend should be used during the group update validation
procedure instead of the 'host' attribute, because it works well
in both clustered and single-node deployments.
Change-Id: I4f80bb3e39e04ac5c524c7c1ad8859eef76a910a
Closes-Bug: #1876133
In Active/Active HA mode we need to store not only the worker hostname
but the cluster name too. This patch saves the cluster name for groups
created from other groups or snapshots.
Change-Id: I6f160cc44350f0d378fdddb7943e8cdcc15be1b8
Closes-Bug: #1867906
Much of our code renames this at import already --
just name it "volume_utils" for consistency, and
to make code that imports other modules named "utils"
less confusing.
Change-Id: I3cdf445ac9ab89b3b4c221ed2723835e09d48a53
We can't create a group with a volume type whose name is in uuid
format: "volume_types_get_by_name_or_id()" performs a uuid check,
so a uuid-like type name is mistakenly treated as an id and
"get_volume_type" (by id) is used to fetch the volume type,
causing a 404 error.
This patch fixes the error: if we can't find a type by uuid,
we fall back to "_volume_type_get_by_name" to check whether the
type can be found by name.
Change-Id: Id09230bffc0ad83093bb6254b2e09aca5d1c58b1
Closes-bug: #1794716
Related-bug: #1794237
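A minimal sketch of the fallback described above (the lookup store and helper names are illustrative, not Cinder's actual DB API):

```python
import uuid


def _is_uuid_like(value):
    """Return True if the string parses as a UUID."""
    try:
        uuid.UUID(value)
        return True
    except (ValueError, AttributeError, TypeError):
        return False


def volume_type_get_by_name_or_id(types, identifier):
    """Resolve a volume type by id first, falling back to name.

    ``types`` is an illustrative dict of {id: {"id": ..., "name": ...}}.
    """
    if _is_uuid_like(identifier):
        # A uuid-like string is first treated as an id ...
        vtype = types.get(identifier)
        if vtype is not None:
            return vtype
        # ... but if no type has that id, retry by name instead of
        # raising a 404, since a type's *name* may itself be
        # uuid-formatted.
    for vtype in types.values():
        if vtype["name"] == identifier:
            return vtype
    raise KeyError(identifier)
```

The key point is that the uuid check only decides which lookup to try first; a miss by id no longer short-circuits into an error.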
Remove unnecessary "pass" statements.
Change-Id: I200a3c0e40720cd53694ae157861d62dee42ab1f
Signed-off-by: Chuck Short <chucks@redhat.com>
The group availability zone is not set correctly in the scheduler
filter properties, which results in a mismatch between the group's
availability zone and the backend host. This can lead to volume
create failures for volumes in the group. Fix it by setting the
availability zone in the request spec in the scheduler RPC call.
Change-Id: Icfa437d2d81ed29d0aceee776d86e28862c85274
Closes-bug: 1773446
This patch adds jsonschema validation for the volume group APIs below:
* POST /v3/{project_id}/groups (create)
* PUT /v3/{project_id}/groups/{group_id} (update)
* POST /v3/{project_id}/groups/action (create from source)
* POST /v3/{project_id}/groups/{group_id}/action (delete)
* POST /v3/{project_id}/groups/{group_id}/action (reset status)
* POST /v3/{project_id}/groups/{group_id}/action (failover replication)
* POST /v3/{project_id}/groups/{group_id}/action (enable replication)
* POST /v3/{project_id}/groups/{group_id}/action (disable replication)
* POST /v3/{project_id}/groups/{group_id}/action (list replication)
Change-Id: Ie91a52cc7f0245e5ecb3a9382691d78f5f92aa4f
Partial-Implements: bp json-schema-validation
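The real schemas live under cinder/api/schemas/ and are enforced with the jsonschema library; the following is a simplified, self-contained sketch of the idea, with illustrative schema fields and a tiny hand-rolled checker standing in for jsonschema.validate():

```python
# Simplified sketch of a create-group request schema; the real schemas
# use full JSON Schema keywords and the jsonschema library.
GROUP_CREATE_SCHEMA = {
    "type": "object",
    "properties": {
        "group": {
            "type": "object",
            "properties": {
                "name": {"type": "string"},
                "description": {"type": "string"},
                "group_type": {"type": "string"},
                "volume_types": {"type": "array"},
            },
            "required": ["group_type", "volume_types"],
        },
    },
    "required": ["group"],
}


def validate(body, schema):
    """Tiny required/type checker illustrating what schema validation
    buys: malformed bodies are rejected at the API layer with a clear
    error instead of failing deep inside the volume code."""
    py_types = {"object": dict, "string": str, "array": list}
    if not isinstance(body, py_types[schema["type"]]):
        raise ValueError("wrong type: expected %s" % schema["type"])
    for key in schema.get("required", []):
        if key not in body:
            raise ValueError("missing required field: %s" % key)
    for key, sub in schema.get("properties", {}).items():
        if key in body:
            validate(body[key], sub)
    return True
```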
Generally, we have to pass the target object to ``authorize``
when enforcing a policy check, but this was overlooked during
our development and review process for a long time. The
potential issue is that anyone can manipulate the target
resource, because ``authorize`` will always succeed if the rule
is defined as ``admin_or_owner`` [1]. Luckily, for most of these
APIs this security concern is mitigated by our database access
code [2], which only allows project-scoped resources.
However, one API does have a security issue when an
administrator changes the rule to "admin_or_owner":
1. "volume reset_status", where cinder updates the resource
directly in the database; the procedure to reproduce the bug
is described on Launchpad.
This patch corrects most of the cases that can be easily
identified, guarding against future code changes.
[1]:
73e6e3c147/cinder/context.py (L206)
[2]:
73e6e3c147/cinder/db/sqlalchemy/api.py (L3058)
[3]:
73e6e3c147/cinder/api/contrib/admin_actions.py (L161)
Partial-Bug: #1714858
Change-Id: I351b3ddf8dfe29da8d854d4038d64ca7be17390f
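A toy model of why the missing target matters (the rule and context shapes here are illustrative, not Cinder's actual oslo.policy wiring):

```python
def admin_or_owner(context, target):
    """Illustrative admin_or_owner rule: admin, or project_id match."""
    if context["is_admin"]:
        return True
    return target.get("project_id") == context["project_id"]


def authorize(context, action, target=None):
    """Sketch of the enforcement call.

    When no target is passed, a default target is built from the
    caller's own credentials, so the owner check trivially compares
    the caller's project against itself and always succeeds -- which
    is the bug described above.
    """
    if target is None:
        target = {"project_id": context["project_id"]}
    if not admin_or_owner(context, target):
        raise PermissionError(action)
    return True
```

With the real resource passed as the target, a non-admin caller from another project is correctly denied.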
When Cinder fails to create the corresponding volumes in a group,
it destroys the volume objects, but the consumed quota is left behind.
Change-Id: Ief0637768cf1fe04bb4162e02008b4884a184051
This patch adds jsonschema validation for the group snapshot APIs below:
* POST /v3/{project_id}/group_snapshots
* POST /v3/{project_id}/group_snapshots/{group_snapshot_id}/action
Unit tests were changed to pass the body as a keyword argument,
since wsgi calls the action method [1] with the body as a keyword
argument.
[1] https://github.com/openstack/cinder/blob/master/cinder/api/openstack/wsgi.py#L997
Change-Id: Ie3b8ffb209b30edf2a26a935aab840441b43adfa
Partial-Implements: bp json-schema-validation
While creating a group from a group snapshot or a source group, if
the quota is exceeded during volume creation, the volumes scheduled
for creation are left behind in the DB; only the group gets
destroyed. To solve this, taskflow now handles rollback of the
quota and removal of the volume entries in the DB when an error
happens.
Change-Id: I5d60680fa92e50e51c9b3a6bcc940b18bef5b150
Closes-Bug: #1727314
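The rollback behaviour can be sketched in plain Python in the spirit of taskflow's execute/revert tasks (all names here are illustrative, not the actual flow code):

```python
class Task:
    """Minimal execute/revert task in the spirit of taskflow."""

    def execute(self, state):
        raise NotImplementedError

    def revert(self, state):
        pass


class ReserveQuotaTask(Task):
    def execute(self, state):
        state["reservations"] = state["quota"].reserve(1)

    def revert(self, state):
        state["quota"].rollback(state.pop("reservations", None))


class CreateVolumeEntriesTask(Task):
    def execute(self, state):
        for name in state["volumes_to_create"]:
            state["db"].append(name)
        if state.get("fail"):
            raise RuntimeError("quota exceeded during volume creation")

    def revert(self, state):
        # Remove the half-created volume rows instead of leaving
        # them behind in the DB.
        for name in state["volumes_to_create"]:
            if name in state["db"]:
                state["db"].remove(name)


def run_flow(tasks, state):
    """Execute tasks in order; on failure, revert every attempted task
    in reverse order, then re-raise."""
    attempted = []
    try:
        for task in tasks:
            attempted.append(task)
            task.execute(state)
    except Exception:
        for task in reversed(attempted):
            task.revert(state)
        raise
```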
Pass the request to scheduler rather than volume service in
order to check the backend's capacity.
Change-Id: Ie4c157f11e5fde0c2dd1d3e06feb0caa9d2d9ede
Partial-Implements: bp inspection-mechanism-for-capacity-limited-host
This patch adds policy in code support for group&group
snapshot resources and depends on the backup patch [1].
[1]: https://review.openstack.org/#/c/507015/
Change-Id: If95a8aaa70614902a06420d1afa487827f8a3f03
Partial-Implements: blueprint policy-in-code
Consistency groups had conditional updating to handle API race
conditions, but the switch to groups did not include that.
Adding conditional update handling for update and delete so we
have the same protection. Also relaxed the restriction on update
to allow updating name or description when in states other than
Available.
Change-Id: I9ddd7e881be23be8b7d37063d87417c47deda9e8
Closes-bug: #1673319
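The conditional-update semantics can be sketched as a compare-and-update guard; in Cinder this is a single atomic UPDATE ... WHERE query, so the dict below only illustrates the behaviour:

```python
def conditional_update(row, values, expected_status):
    """Sketch of a compare-and-update guard against API races.

    In the real implementation the status check and the write happen
    in one SQL statement, so two racing requests cannot both succeed.
    """
    if row["status"] not in expected_status:
        return False
    row.update(values)
    return True
```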
The problem was that we did the reservation after creating the
group: synchronization in quota_reserve() reads an in_use value of
1 because the group was already created before reserving, and the
quota commit then adds +1 to the groups in_use value, making it 2
the first time. Reservations are now made before creating groups.
Change-Id: If5f3cf75e39ed932028be7a2fb583c2576cb04bf
Closes-Bug: 1711381
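A toy model of the ordering bug; the sync/commit behaviour is heavily simplified from quota_reserve():

```python
class GroupQuota:
    """Toy model of quota_reserve()'s sync + commit behaviour."""

    def __init__(self):
        self.in_use = 0

    def reserve(self, db_group_count):
        # quota_reserve() synchronizes in_use from the real row count.
        self.in_use = db_group_count
        return "reservation"

    def commit(self, reservation):
        # Commit then adds the reserved amount on top.
        self.in_use += 1


def create_group_fixed(quota, db):
    """Fixed ordering: reserve first, then create, then commit."""
    rsv = quota.reserve(len(db))
    db.append("group-1")
    quota.commit(rsv)
```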
This patch validates that no group snapshots exist for a group
before performing the group's deletion.
Closes-Bug: #1705375
Change-Id: I928eded513772b8b1c9f050f2d31d4334b1da8ae
failover_host is the interface for Cheesecake.
Currently it passes volumes to the driver's failover_host
interface. If a backend supports both Cheesecake and Tiramisu,
it makes sense for the driver to fail over a group instead of
individual volumes when a volume is in a replication group. So
this patch passes groups to the driver's failover_host interface
in addition to volumes, so the driver can decide whether to
fail over a replication group.
Change-Id: I9842eec1a50ffe65a9490e2ac0c00b468f18b30a
Partially-Implements: blueprint replication-cg
The GroupStatus and GroupStatusField have already been defined.
This change replaces the remaining bare group status strings with
the group enum field.
Change-Id: Ic9c43a3fa95901de7d68b0e23358e2a742a9901a
Partial-Implements: bp cinder-object-fields
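An illustrative stand-in for the enum-style constants; the real GroupStatus lives in cinder.objects.fields and is wired into the object field machinery:

```python
class GroupStatus(object):
    """Illustrative stand-in for cinder.objects.fields.GroupStatus."""
    CREATING = 'creating'
    AVAILABLE = 'available'
    ERROR = 'error'
    DELETING = 'deleting'
    ALL = (CREATING, AVAILABLE, ERROR, DELETING)


def set_status(group, status):
    # Using the shared constant instead of a bare string catches typos
    # ('avaliable') in one place and keeps the value set authoritative.
    assert status in GroupStatus.ALL
    group['status'] = status
```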
This patch adds support for replication group.
It is built upon the generic volume groups.
It supports enable replication, disable replication,
failover replication, and list replication targets.
Client side patch is here:
https://review.openstack.org/#/c/352229/
To test this server side patch using the client side patch:
export OS_VOLUME_API_VERSION=3.38
Make sure the group type has group_replication_enabled or
consistent_group_replication_enabled set in group specs,
and the volume types have replication_enabled set in extra specs
(to be compatible with Cheesecake).
cinder group-type-show my_group_type
+-------------+---------------------------------------+
| Property | Value |
+-------------+---------------------------------------+
| description | None |
| group_specs | group_replication_enabled : <is> True |
| id | 66462b5c-38e5-4a1a-88d6-7a7889ffec55 |
| is_public | True |
| name | my_group_type |
+-------------+---------------------------------------+
cinder type-show my_volume_type
+---------------------------------+--------------------------------------+
| Property | Value |
+---------------------------------+--------------------------------------+
| description | None |
| extra_specs | replication_enabled : <is> True |
| id | 09c1ce01-87d5-489e-82c6-9f084107dc5c |
| is_public | True |
| name | my_volume_type |
| os-volume-type-access:is_public | True |
| qos_specs_id | None |
+---------------------------------+--------------------------------------+
Create a group:
cinder group-create --name my_group my_group_type my_volume_type
cinder group-show my_group
Enable replication group on the primary storage:
cinder group-enable-replication my_group
Expected results: replication_status becomes “enabled”.
Failover replication group to the secondary storage.
If secondary-backend-id is not specified, it will go to the
secondary-backend-id configured in cinder.conf:
cinder group-failover-replication my_group
If secondary-backend-id is specified (not “default”), it will go to
the specified backend id:
cinder group-failover-replication my_group
--secondary-backend-id <backend_id>
Expected results: replication_status becomes “failed-over”.
Run failover replication group again to fail the group back to
the primary storage:
cinder group-failover-replication my_group
--secondary-backend-id default
Expected results: replication_status becomes “enabled”.
Disable replication group:
cinder group-disable-replication my_group
Expected results: replication_status becomes “disabled”.
APIImpact
DocImpact
Implements: blueprint replication-cg
Change-Id: I4d488252bd670b3ebabbcc9f5e29e0e4e913765a
All cinder APIs should do policy checks to enforce
role-based access control. All group APIs except
group update have this in place. This changeset
adds a policy check for the group update API,
adds a policy check for create_group_snapshot,
adds a policy check for reset_status, and
updates the unit test cases.
Change-Id: I36d3c929709b82cf5f34f681a2e1c34bba9feef9
Closes-Bug: 1676278
Add filter, sort and pagination support for group
snapshots with new microversion 3.29.
APIImpact
Closes-Bug: #1670540
Change-Id: I2ed1b87b022314b157fe432a97783ab50316367b
The volume type in the create method of the volume API
was changed from a dict to a versioned object by the
following patch:
https://review.openstack.org/#/c/406780/
However, this was not changed in the create-group-from-src
API, which calls volume create, thereby introducing this
bug: it throws an exception when calling volume_type.id
because volume_type is still a dict.
This patch fixes the problem.
Change-Id: I63cb785d27fa9e43da16a27da6d7b92052badf06
Closes-Bug: #1665549
Freeze functionality in the replication feature doesn't work as
expected: it is not used by the scheduler to exclude backends, nor
checked on the API or volume nodes, so API-to-volume operations
like delete and create snapshot still go through.
This patch fixes the freeze mechanism by excluding frozen backends
in the scheduler and checking whether the service is frozen in all
other modifying operations.
Since the extend operation now goes through the scheduler, it is
frozen there as well.
Closes-Bug: #1616974
Change-Id: I4561500746c95b96136878ddfde8ca88e96b28c6
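The two halves of the fix can be sketched as follows (the backend and service shapes are illustrative, not Cinder's actual scheduler host states):

```python
def filter_frozen(backends):
    """Sketch: the scheduler drops frozen backends from candidates,
    so nothing new is scheduled onto them."""
    return [b for b in backends if not b.get('frozen', False)]


def check_not_frozen(service):
    """Sketch: modifying operations reject a frozen service up front
    instead of silently proceeding."""
    if service.get('frozen'):
        raise RuntimeError('service is frozen')
```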
Currently the administrator can only reset the generic group
status by a DB operation; this change adds new admin actions
to achieve this.
The patch list:
1. group API(this).
2. group snapshot API(https://review.openstack.org/#/c/389577/).
3. cinder client(https://review.openstack.org/390169/).
4. documentation(https://review.openstack.org/#/c/395464).
APIImpact
DocImpact
Partial-Implements: blueprint reset-cg-and-cgs-status
Change-Id: Ib8bffb806f878c67bb12fd5ef7ed8cc15606d1c5
CG APIs work as follows:
* Create CG - Create only in groups table
* Modify CG - Modify in CG table if CG in CG table, otherwise modify
in groups table.
* Delete CG - Delete from CG or groups table depending on where it is
* List CG - Check both CG and groups tables
* List CG snapshots - Check both CG and groups tables
* Show CG - Check both tables
* Show CG snapshot - Check both tables
* Create CG snapshot - Create either in CG or groups table depending on
the CG.
* Create CG from source - Create in either CG or groups table
depending on the source.
* Create volume - Add volume either to CG or group
Additional notes:
* default_cgsnapshot_type is reserved for migrating CGs.
* Group APIs will only write/read in/from the groups table.
* Group APIs won't work on groups with default_cgsnapshot_type.
* Groups with default_cgsnapshot_type can only be operated by CG APIs.
* After CG tables are removed, we'll allow default_cgsnapshot_type
to be used by group APIs.
Partial-Implements: blueprint generic-volume-group
Change-Id: Idd88a5c9587023a56231de42ce59d672e9600770
Currently the administrator can only reset the group snapshot
status by a DB operation; this change adds a new admin action
to achieve this.
The patch list:
1. group API(https://review.openstack.org/#/c/389091/).
2. group snapshot API(this).
3. cinder client(https://review.openstack.org/390169/).
4. documentation(https://review.openstack.org/#/c/395464/).
APIImpact
DocImpact
Partial-Implements: blueprint reset-cg-and-cgs-status
Change-Id: I9e3a26950c435038cf40bea4b27aea1bd5049e95
When trying to create a consistency group with the quota already
reached, the error raised to the user is the generic "ERROR: The
server has either erred or is incapable of performing the requested
operation. (HTTP 500)" message.
This patch makes the error return to the user with a 413 error
code, such as "ERROR: GroupLimitExceeded: Maximum number of groups
allowed (10) exceeded. (HTTP 413)".
Change-Id: I0dd86dbc84d3dc75568c39aca8150c8fa12c4811
Closes-Bug: #1610295
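The mapping can be sketched as below; the helper is illustrative of the pattern, not Cinder's actual faults middleware:

```python
class GroupLimitExceeded(Exception):
    """Quota exception carrying its own HTTP status code."""
    code = 413

    def __init__(self, allowed):
        super().__init__(
            "Maximum number of groups allowed (%d) exceeded" % allowed)


def to_http_response(exc):
    """Sketch of mapping an exception to an HTTP status.

    An unmapped exception falls through to a generic 500; giving the
    quota exception an explicit code lets the API layer return 413
    with the real message instead.
    """
    code = getattr(exc, 'code', 500)
    return code, str(exc)
```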
group_type_id should be set when a group_snapshot is created,
however it was missed in the code. This patch adds the missing
group_type_id in create_group_snapshot.
Change-Id: I5fdb3324e19f53a1116a04fcb34f6776c42a798d
Closes-Bug: #1632265
When creating groups, if a volume type name or group type name is
passed, cinder-api tries to save the name rather than the uuid
into the database, which causes a foreign key constraint failure.
After this change, we always save the uuid into the database.
Change-Id: Ib333130325fc12a4753c7a128e823e992e8c8682
Closes-Bug: #1622476
Update the code to support creating a volume group with a group
type's name.
Related-to: blueprint generic-volume-group
Change-Id: I35b843a8c830e039ff595cb9c590a31d8902e6c0
This is the fifth patch that implements the generic-volume-group
blueprint. It adds APIs for group snapshots and create group
from source.
This patch depends on the fourth patch which implements group
snapshots support in the volume manager:
https://review.openstack.org/#/c/361376/
Client side patch is here:
https://review.openstack.org/#/c/329770/
Current microversion is 3.14. The following CLI's are supported:
cinder --os-volume-api-version 3.14 group-create-from-src
--name my_group --group-snapshot <group snapshot uuid>
cinder --os-volume-api-version 3.14 group-create-from-src
--name my_group --source-group <source group uuid>
cinder --os-volume-api-version 3.14 group-snapshot-create
--name <name> <group uuid>
cinder --os-volume-api-version 3.14 group-snapshot-list
cinder --os-volume-api-version 3.14 group-snapshot-show
<group snapshot uuid>
cinder --os-volume-api-version 3.14 group-snapshot-delete
<group snapshot uuid>
APIImpact
DocImpact
Partial-Implements: blueprint generic-volume-group
Change-Id: I2e628968afcf058113e1f1aeb851570c7f0f3a08
Cinder had config options to customize the RPC topics on which the
scheduler, volume and backup nodes listen, but this feature has
been dysfunctional for quite a long time. This commit removes it.
For more details, refer to the bug comments.
DocImpact
Change-Id: Ie76f070fe9a1222c209e8defd0d04fa7a7931b14
Closes-Bug: #1301888
This is the second patch that implements the generic-volume-group
blueprint. It adds the groups table and introduces create/delete/
update/list/show APIs for groups.
It depends on the first patch which adds group types and group specs:
https://review.openstack.org/#/c/320165/
Client side patch is here:
https://review.openstack.org/#/c/322627/
Current microversion is 3.13. The following CLI's are supported:
cinder --os-volume-api-version 3.13 group-create --name my_group
<group type uuid> <volume type uuid>
cinder --os-volume-api-version 3.13 group-list
cinder --os-volume-api-version 3.13 create --group-id <group uuid>
--volume-type <volume type uuid> <size>
cinder --os-volume-api-version 3.13 group-update <group uuid>
--name new_name description new_description
--add-volumes <uuid of volume to add>
--remove-volumes <uuid of volume to remove>
cinder --os-volume-api-version 3.13 group-show <group uuid>
cinder --os-volume-api-version 3.13 group-delete
--delete-volumes <group uuid>
APIImpact
DocImpact
Change-Id: I35157439071786872bc9976741c4ef75698f7cb7
Partial-Implements: blueprint generic-volume-group