Commit Graph

40 Commits

Author SHA1 Message Date
Stephen Finucane 8353d6e204 db: Remove 'db_driver' option
This is a silly config option. We only have one database driver in-tree
and no plans to add more (SQLAlchemy is best in class). There's also no
way we'd be able to support out-of-tree drivers. Remove it entirely.

Change-Id: Ica3b2e8fcb079beca652e81d2230bcca82fb49d7
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
2021-08-27 15:13:21 +01:00
Ivan Kolodyazhny 7d211d6221 Use resource_backend for volumes and groups
resource_backend should be used instead of the 'host' attribute
during the group update validation procedure because it works
in both clustered and single-node deployments.

Change-Id: I4f80bb3e39e04ac5c524c7c1ad8859eef76a910a
Closes-Bug: #1876133
2020-08-14 08:13:42 +00:00
Ivan Kolodyazhny 8d67562831 Set cluster name for volume groups
In Active/Active HA mode we need to store not only the worker hostname
but the cluster name too. This patch saves the cluster name for groups
created from other groups or snapshots.

Change-Id: I6f160cc44350f0d378fdddb7943e8cdcc15be1b8
Closes-Bug: #1867906
2020-07-22 10:45:14 +03:00
Eric Harney ca5c2ce4e8 Continue renaming volume_utils (core)
Now that volume_utils has been renamed, import it
and use it consistently everywhere.

Change-Id: I6a74f664ff890ff3f24f715a1e93df7e0384aa6b
2019-09-09 20:48:26 -04:00
Eric Harney de789648e5 Rename volume/utils.py to volume/volume_utils.py
Much of our code renames this at import already --
just name it "volume_utils" for consistency, and
to make code that imports other modules named "utils"
less confusing.

Change-Id: I3cdf445ac9ab89b3b4c221ed2723835e09d48a53
2019-09-09 15:00:07 -04:00
Yikun Jiang 0b8b3a4b47 Fix wrong uuid recognized when creating group
We can't create a group with a volume type whose name is in
UUID format: there is a UUID check in "volume_types_get_by_name_or_id()",
and a UUID-like type name is mistakenly recognized as
an ID, so "get_volume_type" (by ID) is used to look up the
volume type, causing a 404 error.

So, this patch fixes the error: if we can't find a
type by UUID, we call "_volume_type_get_by_name" to check
whether we can get the type by name.

Change-Id: Id09230bffc0ad83093bb6254b2e09aca5d1c58b1
Closes-bug: #1794716
Related-bug: #1794237
2018-10-08 09:51:02 +08:00
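The fallback this commit describes can be sketched with a toy in-memory lookup (the real Cinder helpers query the database; the dicts and `register_type` below are only illustrative):

```python
import uuid

# Toy volume-type "tables"; the real code hits the Cinder database.
_types_by_id = {}
_types_by_name = {}

def register_type(type_id, name):
    vtype = {"id": type_id, "name": name}
    _types_by_id[type_id] = vtype
    _types_by_name[name] = vtype
    return vtype

def get_by_name_or_id(identifier):
    """Try the UUID path first; on a miss, fall back to a name lookup."""
    try:
        uuid.UUID(identifier)
        # Before the fix, a miss on this branch raised a 404 even when a
        # type whose *name* looks like a UUID existed.
        vtype = _types_by_id.get(identifier)
        if vtype is not None:
            return vtype
    except ValueError:
        pass
    return _types_by_name.get(identifier)

# A volume type whose name happens to be UUID-formatted.
tricky = register_type("11111111-1111-1111-1111-111111111111",
                       "22222222-2222-2222-2222-222222222222")
```

With the fallback in place, looking up the UUID-shaped *name* now resolves instead of 404-ing.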
Chuck Short 348b7a9f7d Remove unnecessary pass
Remove "pass" statements where they are not needed.

Change-Id: I200a3c0e40720cd53694ae157861d62dee42ab1f
Signed-off-by: Chuck Short <chucks@redhat.com>
2018-09-25 15:12:50 -04:00
Vipin Balachandran e08da42d81 Fix group availability zone-backend host mismatch
Group availability zone is not set correctly in scheduler filter
properties which results in group availability zone-backend host
mismatch. This can lead to volume create failure for volumes in
the group. Fixing it by setting the availability zone in the
request spec in scheduler RPC call.

Change-Id: Icfa437d2d81ed29d0aceee776d86e28862c85274
Closes-bug: 1773446
2018-06-05 12:09:49 -07:00
pooja jadhav 57983ba67c V3 json schema validation: generic volume groups
This patch adds jsonschema validation for the below volume group APIs
* POST /v3/{project_id}/groups (create)
* PUT  /v3/{project_id}/groups/{group_id} (update)
* POST /v3/{project_id}/groups/action (create from source)
* POST /v3/{project_id}/groups/{group_id}/action (delete)
* POST /v3/{project_id}/groups/{group_id}/action (reset status)
* POST /v3/{project_id}/groups/{group_id}/action (failover replication)
* POST /v3/{project_id}/groups/{group_id}/action (enable replication)
* POST /v3/{project_id}/groups/{group_id}/action (disable replication)
* POST /v3/{project_id}/groups/{group_id}/action (list replication)

Change-Id: Ie91a52cc7f0245e5ecb3a9382691d78f5f92aa4f
Partial-Implements: bp json-schema-validation
2018-05-08 18:07:28 +05:30
TommyLike 7391070474 Add missing 'target_obj' when performing policy check
Generally, we have to pass the target object to ``authorize``
when enforcing a policy check, but this was ignored during
our development and review process for a long time, and the
potential issue is that anyone can handle the target resource,
as ``authorize`` will always succeed if the rule is defined as
``admin_or_owner`` [1]. Luckily, for most of those APIs
this security concern is mitigated by our database access
code [2], which only allows project-scoped resources.

However, there is one API that does have a security issue when
an administrator changes the rule to "admin_or_owner":

1. "volume reset_status", where cinder updates the
resource directly in the database; the procedure to reproduce
the bug is described on Launchpad.

This patch intends to correct most of the cases that can be
easily identified, in case of future code changes.

[1]:
73e6e3c147/cinder/context.py (L206)
[2]:
73e6e3c147/cinder/db/sqlalchemy/api.py (L3058)
[3]:
73e6e3c147/cinder/api/contrib/admin_actions.py (L161)

Partial-Bug: #1714858
Change-Id: I351b3ddf8dfe29da8d854d4038d64ca7be17390f
2018-03-19 19:02:00 +08:00
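A minimal sketch (not Cinder's real ``authorize``, which lives in cinder/context.py) of why omitting the target object defeats an "admin_or_owner" rule: with no target, the check falls back to the caller's own credentials and trivially passes.

```python
def authorize(context, target=None):
    # Buggy fallback: with no target, the caller is compared against itself.
    if target is None:
        target = {"project_id": context["project_id"]}
    return context["is_admin"] or target["project_id"] == context["project_id"]

attacker = {"project_id": "mallory", "is_admin": False}
victim_volume = {"project_id": "alice"}

no_target = authorize(attacker)                   # True  (the bug)
with_target = authorize(attacker, victim_volume)  # False (the fix)
```

Passing the target resource, as the patch does, makes the owner comparison meaningful.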
TommyLike 91c672e340 Revert consumed quota when failing to create group from source group
When Cinder fails to create the corresponding volumes in a group, it
destroys the volume objects, but the consumed quota is left behind.

Change-Id: Ief0637768cf1fe04bb4162e02008b4884a184051
2018-02-06 16:30:55 +08:00
Zuul 0ec6fd5b1a Merge "Create group from snapshot-group failure leaves behind the volume" 2017-12-25 16:27:41 +00:00
pooja jadhav 65d57cf4b1 V3 jsonschema validation: Group Snapshots
This patch adds jsonschema validation for the below group snapshot APIs

* POST /v3/{project_id}/group_snapshots
* POST /v3/{project_id}/group_snapshots/{group_snapshot_id}/action

Made changes to unit tests to pass the body as a keyword argument, as
wsgi calls the action method [1] and passes the body as a keyword argument.

[1] https://github.com/openstack/cinder/blob/master/cinder/api/openstack/wsgi.py#L997

Change-Id: Ie3b8ffb209b30edf2a26a935aab840441b43adfa
Partial-Implements: bp json-schema-validation
2017-12-20 18:54:24 +05:30
Aseel Awwad 3786219ce8 Create group from snapshot-group failure leaves behind the volume
While creating a group from a snapshot group or a source group, if the
quota is exceeded during volume creation, the volumes scheduled for
creation are left behind in the DB; only the group gets destroyed.
To solve this, when an error happens, taskflow will handle rollback
of the quota and removal of the volume entries in the DB.

Change-Id: I5d60680fa92e50e51c9b3a6bcc940b18bef5b150
Closes-Bug: #1727314
2017-12-19 08:30:31 -05:00
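The rollback the commit describes can be sketched as a toy flow with revert steps run in reverse order on failure (this mimics the taskflow pattern but does not use the actual taskflow library; step names are illustrative):

```python
state = {"quota_reserved": 0, "volumes": []}

def reserve_quota():
    state["quota_reserved"] += 1

def unreserve_quota():
    state["quota_reserved"] -= 1

def create_volume():
    state["volumes"].append("vol-1")

def delete_volume():
    state["volumes"].pop()

def exceed_quota():
    raise RuntimeError("quota exceeded")

def run_flow(tasks):
    done = []
    try:
        for execute, revert in tasks:
            execute()
            done.append(revert)
    except Exception:
        # Roll back completed steps in reverse order, so neither the
        # reserved quota nor the volume row is left behind.
        for revert in reversed(done):
            revert()

run_flow([(reserve_quota, unreserve_quota),
          (create_volume, delete_volume),
          (exceed_quota, lambda: None)])
```

After the failing flow, `state` is back to its initial value: no orphaned quota, no orphaned volume row.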
TommyLike d812f5705f Schedule request to scheduler when create group from resource
Pass the request to the scheduler rather than the volume service in
order to check the backend's capacity.

Change-Id: Ie4c157f11e5fde0c2dd1d3e06feb0caa9d2d9ede
Partial-Implements: bp inspection-mechanism-for-capacity-limited-host
2017-11-13 11:24:53 +00:00
TommyLike d2c6dfb3d3 [policy in code] Add support for group, g-snapshot resources
This patch adds policy-in-code support for group and group
snapshot resources and depends on the backup patch [1].

[1]: https://review.openstack.org/#/c/507015/

Change-Id: If95a8aaa70614902a06420d1afa487827f8a3f03
Partial-Implements: blueprint policy-in-code
2017-10-11 13:19:33 +00:00
Jenkins 6538682ecb Merge "Use conditional update for group update and delete" 2017-09-15 18:02:25 +00:00
Abhishek Sharma 055433efdc Adding project id check
Add a volume to a group only when the group's project and the
volume's project are the same.

Change-Id: I493067344405a5b4a26a2330f7ea662398e8fd0a
Closes-Bug: 1712588
2017-09-10 06:09:20 -05:00
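The validation above amounts to a one-line project comparison; a toy sketch (field and function names are illustrative, not Cinder's exact ones):

```python
def add_volume_to_group(group, volume):
    # Refuse cross-project membership, as the patch does.
    if volume["project_id"] != group["project_id"]:
        raise ValueError("volume and group belong to different projects")
    group["volume_ids"].append(volume["id"])

group = {"project_id": "p1", "volume_ids": []}
add_volume_to_group(group, {"id": "v1", "project_id": "p1"})
```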
Sean McGinnis fdfb2d51a4 Use conditional update for group update and delete
Consistency groups had conditional updating to handle API race
conditions, but the switch to groups did not include that.

Adding conditional update handling for update and delete so we
have the same protection. Also relaxed the restriction on update
to allow updating name or description when in states other than
Available.

Change-Id: I9ddd7e881be23be8b7d37063d87417c47deda9e8
Closes-bug: #1673319
2017-09-07 16:42:00 -05:00
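The conditional update above is a compare-and-swap: apply the change only if the row is still in an expected state. Cinder implements it as a single SQL UPDATE with a WHERE clause; this dict-based sketch only models the race protection:

```python
def conditional_update(row, values, expected_statuses):
    if row["status"] not in expected_statuses:
        return False  # the row changed under us; the API reports a conflict
    row.update(values)
    return True

group = {"status": "available", "name": "g1"}
# First delete attempt wins; a racing second attempt sees the new status.
first = conditional_update(group, {"status": "deleting"}, {"available"})
second = conditional_update(group, {"status": "deleting"}, {"available"})
```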
Abhishek Sharma fc99c3cfdd Making reservations before group creation
The problem was that when we made the reservation after creating the
group, synchronization during quota_reserve() got an in_use value of 1
because the group had already been created before reserving, and the
quota commit then incremented the groups in_use value by +1, making it
2 the first time. So, reservations are now made before creating groups.

Change-Id: If5f3cf75e39ed932028be7a2fb583c2576cb04bf
Closes-Bug: 1711381
2017-08-29 12:01:29 -05:00
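The double-count described above can be reproduced with toy quota bookkeeping (a sketch of the sync behaviour, not Cinder's QUOTAS code): quota_reserve() recounts in_use from the DB, so creating the group first counts it twice.

```python
groups_in_db = []

def quota_reserve():
    # Sync step: recount usage from the DB before reserving.
    return {"in_use": len(groups_in_db), "delta": 1}

def quota_commit(reservation):
    return reservation["in_use"] + reservation["delta"]

# Buggy order: create first, then reserve -> the new group is recounted.
groups_in_db.append("g1")
buggy_usage = quota_commit(quota_reserve())   # 1 (recounted) + 1 (delta) = 2
groups_in_db.clear()

# Fixed order: reserve first, then create.
reservation = quota_reserve()                 # in_use recounted as 0
groups_in_db.append("g1")
fixed_usage = quota_commit(reservation)       # 0 + 1 = 1
```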
TommyLike 252ff38a9d Do not delete group if group snapshot exists
This patch validates that no group snapshot exists
before performing the group deletion operation.

Closes-Bug: #1705375
Change-Id: I928eded513772b8b1c9f050f2d31d4334b1da8ae
2017-07-21 08:33:03 +08:00
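The pre-delete check can be sketched as follows (the exception and field names are illustrative, not Cinder's exact ones):

```python
class InvalidGroup(Exception):
    pass

def delete_group(group_id, group_snapshots):
    # Refuse to delete a group that still has group snapshots.
    if any(s["group_id"] == group_id for s in group_snapshots):
        raise InvalidGroup("group %s still has group snapshots" % group_id)
    return "deleted"

snapshots = [{"id": "snap-1", "group_id": "g1"}]
```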
xing-yang 32e67f3119 Tiramisu: Add groups param to failover_host
failover_host is the interface for Cheesecake.
Currently it passes volumes to the failover_host
interface in the driver. If a backend supports both
Cheesecake and Tiramisu, it makes sense for the driver
to failover a group instead of individual volumes if a
volume is in a replication group. So this patch passes
groups to the failover_host interface in the driver in
addition to volumes so driver can decide whether to
failover a replication group.

Change-Id: I9842eec1a50ffe65a9490e2ac0c00b468f18b30a
Partially-Implements: blueprint replication-cg
2017-07-10 09:30:13 -07:00
junboli 91da6a3df0 Use GroupStatus enum field
The GroupStatus and GroupStatusField have already been defined. This
change replaces the remaining group status strings with the group
enum field.

Change-Id: Ic9c43a3fa95901de7d68b0e23358e2a742a9901a
Partial-Implements: bp cinder-object-fields
2017-07-02 16:46:10 +08:00
xing-yang 18744ba199 Tiramisu: replication group support
This patch adds support for replication group.
It is built upon the generic volume groups.
It supports enable replication, disable replication,
failover replication, and list replication targets.

Client side patch is here:
    https://review.openstack.org/#/c/352229/

To test this server side patch using the client side patch:
export OS_VOLUME_API_VERSION=3.38

Make sure the group type has group_replication_enabled or
consistent_group_replication_enabled set in group specs,
and the volume types have replication_enabled set in extra specs
(to be compatible with Cheesecake).

cinder group-type-show my_group_type
+-------------+---------------------------------------+
| Property    | Value                                 |
+-------------+---------------------------------------+
| description | None                                  |
| group_specs | group_replication_enabled : <is> True |
| id          | 66462b5c-38e5-4a1a-88d6-7a7889ffec55  |
| is_public   | True                                  |
| name        | my_group_type                         |
+-------------+---------------------------------------+

cinder type-show my_volume_type
+---------------------------------+--------------------------------------+
| Property                        | Value                                |
+---------------------------------+--------------------------------------+
| description                     | None                                 |
| extra_specs                     | replication_enabled : <is> True      |
| id                              | 09c1ce01-87d5-489e-82c6-9f084107dc5c |
| is_public                       | True                                 |
| name                            | my_volume_type                       |
| os-volume-type-access:is_public | True                                 |
| qos_specs_id                    | None                                 |
+---------------------------------+--------------------------------------+

Create a group:
cinder group-create --name my_group my_group_type my_volume_type
cinder group-show my_group

Enable replication group on the primary storage:
    cinder group-enable-replication my_group
Expected results: replication_status becomes “enabled”.

Failover replication group to the secondary storage.
If secondary-backend-id is not specified, it will go to the
secondary-backend-id configured in cinder.conf:
    cinder group-failover-replication my_group
If secondary-backend-id is specified (not “default”), it will go to
the specified backend id:
    cinder group-failover-replication my_group
--secondary-backend-id <backend_id>
Expected results: replication_status becomes “failed-over”.

Run failover replication group again to fail the group back to
the primary storage:
    cinder group-failover-replication my_group
--secondary-backend-id default
Expected results: replication_status becomes “enabled”.

Disable replication group:
    cinder group-disable-replication my_group
Expected results: replication_status becomes “disabled”.

APIImpact
DocImpact
Implements: blueprint replication-cg

Change-Id: I4d488252bd670b3ebabbcc9f5e29e0e4e913765a
2017-04-30 22:49:13 -04:00
Monica Joshi f8088946b5 Fix for Group API update to include check policy
All cinder APIs should do policy checks to enforce
role-based access controls. All group APIs except
group update have this in place. This changeset
adds a policy check for the group update API.

Adds policy check for create_group_snapshot
Adds policy check for reset_status
Updates unit testcases

Change-Id: I36d3c929709b82cf5f34f681a2e1c34bba9feef9
Closes-Bug: 1676278
2017-04-13 03:54:43 -04:00
Sean McGinnis a55a6b5c71 Remove log translations
Log messages are no longer being translated. This removes all use of
the _LE, _LI, and _LW translation markers to simplify logging and to
avoid confusion with new contributions.

See:
http://lists.openstack.org/pipermail/openstack-i18n/2016-November/002574.html
http://lists.openstack.org/pipermail/openstack-dev/2017-March/113365.html

Change-Id: I4c96f3590d46205c45d12ee4ead8c208e11c52c5
2017-03-19 14:59:57 +00:00
TommyLike cb5aaf0bcb Add filter, sorter and pagination for group snapshot
Add filter, sorter and pagination support in group
snapshot with new microversion v3.29.

APIImpact
Closes-Bug: #1670540

Change-Id: I2ed1b87b022314b157fe432a97783ab50316367b
2017-03-15 13:46:27 +00:00
xing-yang 71aa2a27e3 Change volume_type dict to ovo
The volume type in the create method in volume API
was changed from dict to ovo by the following patch:
  https://review.openstack.org/#/c/406780/
However, this was not changed in the create group from
src API, which calls volume create, and that introduced
this bug: an exception is thrown when calling volume_type.id
because volume_type is still a dict.

This patch fixes the problem.

Change-Id: I63cb785d27fa9e43da16a27da6d7b92052badf06
Closes-Bug: #1665549
2016-12-06 20:45:27 -05:00
Gorka Eguileor 2195885e77 Fix replication freeze mechanism
Freeze functionality in the replication feature doesn't work as
expected, since it is not used by the scheduler to exclude
backends, nor by the API or volume nodes, so API-to-volume
operations like delete and create snapshot still succeed.

This patch fixes the freeze mechanism by excluding frozen backends in
the scheduler and by checking whether the service is frozen in all
other modifying operations.

Since the extend operation now goes through the scheduler, it will be
frozen there.

Closes-Bug: #1616974
Change-Id: I4561500746c95b96136878ddfde8ca88e96b28c6
2017-01-19 10:42:24 +01:00
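The scheduler-side part of the fix boils down to dropping frozen backends before any weighing happens; a sketch with illustrative fields:

```python
def filter_frozen(backends):
    # Frozen backends must not receive new modifying operations.
    return [b for b in backends if not b["frozen"]]

backends = [{"name": "lvm@node1", "frozen": False},
            {"name": "rbd@node2", "frozen": True}]
usable = filter_frozen(backends)
```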
TommyLike 15c555445b [1/4]Reset generic volume group status
Currently the administrator can only reset the generic group
status by a DB operation; this change adds new admin
actions to achieve this.

The patch list:
    1. group API(this).
    2. group snapshot API(https://review.openstack.org/#/c/389577/).
    3. cinder client(https://review.openstack.org/390169/).
    4. documentation(https://review.openstack.org/#/c/395464).

APIImpact
DocImpact
Partial-Implements: blueprint reset-cg-and-cgs-status

Change-Id: Ib8bffb806f878c67bb12fd5ef7ed8cc15606d1c5
2016-12-23 22:58:48 +08:00
xing-yang 44ebdd2252 CG API changes for migrating CGs
CG APIs work as follows:
 * Create CG - Create only in groups table
 * Modify CG - Modify in CG table if CG in CG table, otherwise modify
               in groups table.
 * Delete CG - Delete from CG or groups table depending on where it is
 * List CG - Check both CG and groups tables
 * List CG snapshots - Check both CG and groups tables
 * Show CG - Check both tables
 * Show CG snapshot - Check both tables
 * Create CG snapshot - Create either in CG or groups table depending on
                        the CG.
 * Create CG from source - Create in either CG or groups table
                           depending on the source.
 * Create volume - Add volume either to CG or group

Additional notes:
 * default_cgsnapshot_type is reserved for migrating CGs.
 * Group APIs will only write/read in/from the groups table.
 * Group APIs won't work on groups with default_cgsnapshot_type.
 * Groups with default_cgsnapshot_type can only be operated by CG APIs.
 * After CG tables are removed, we'll allow default_cgsnapshot_type
   to be used by group APIs.

Partial-Implements: blueprint generic-volume-group
Change-Id: Idd88a5c9587023a56231de42ce59d672e9600770
2016-11-22 19:08:20 -05:00
TommyLike 304ff4c23d [2/4]Reset group snapshot status
Currently the administrator can only reset the group snapshot
status by a DB operation; this change adds a new admin
action to achieve this.

The patch list:
    1. group API(https://review.openstack.org/#/c/389091/).
    2. group snapshot API(this).
    3. cinder client(https://review.openstack.org/390169/).
    4. documentation(https://review.openstack.org/#/c/395464/).

APIImpact
DocImpact
Partial-Implements: blueprint reset-cg-and-cgs-status

Change-Id: I9e3a26950c435038cf40bea4b27aea1bd5049e95
2016-12-22 11:11:02 +08:00
haobing1 2655e8bba9 Cinder consistency group returning generic error message
When trying to create a consistency group with the quota already
reached, the error raised to the user is the generic "ERROR: The
server has either erred or is incapable of performing the requested
operation. (HTTP 500)" message.
This patch makes the error message return to the user with
a 413 error code, such as "ERROR: GroupLimitExceeded: Maximum number
of groups allowed (10) exceeded. (HTTP 413)".

Change-Id: I0dd86dbc84d3dc75568c39aca8150c8fa12c4811
Closes-Bug: #1610295
2016-12-21 12:52:06 +08:00
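Surfacing the quota failure as a 413 instead of a generic 500 can be sketched like this (`GroupLimitExceeded` here stands in for Cinder's real exception class; the API wrapper is illustrative):

```python
class GroupLimitExceeded(Exception):
    code = 413

def create_group(current, limit):
    if current >= limit:
        raise GroupLimitExceeded(
            "Maximum number of groups allowed (%d) exceeded" % limit)
    return current + 1

def api_create_group(current, limit):
    # Map the specific quota exception to its HTTP status rather than
    # letting it bubble up as a 500.
    try:
        return 202, create_group(current, limit)
    except GroupLimitExceeded as exc:
        return exc.code, str(exc)
```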
xing-yang 7aab553313 Add group_type_id in create_group_snapshot
group_type_id should be set when a group_snapshot is created, but
it was missed in the code. This patch adds the missing
group_type_id in create_group_snapshot.

Change-Id: I5fdb3324e19f53a1116a04fcb34f6776c42a798d
Closes-Bug: #1632265
2016-07-28 10:38:38 -04:00
Michał Dulko d056718962 Remove support for 2.x scheduler RPC API
This commit gets rid of our Mitaka compatibility code in scheduler RPC API.

Change-Id: I270d6db4c15a0bcf7b26af3c68749646f09e7959
2016-10-05 10:57:58 +02:00
Cao Shufeng cee739a4a8 Save volume_type/group_type uuid into db when creating group
When creating groups, if a volume type name or group type name is
passed, cinder-api tries to save the name rather than the UUID into
the database, which makes a foreign key constraint fail. After
this change, we always save the UUID into the database.

Change-Id: Ib333130325fc12a4753c7a128e823e992e8c8682
Closes-Bug: #1622476
2016-09-12 06:37:17 -04:00
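The resolution step above can be sketched as: whatever the user passed, only the UUID reaches the database insert (the dict stands in for the group_types table; the sample UUID is reused from the group-type example earlier in this log):

```python
import uuid

_group_types = {"my_group_type": "66462b5c-38e5-4a1a-88d6-7a7889ffec55"}

def resolve_group_type(name_or_id):
    """Return the UUID to store, resolving a name to its UUID if needed."""
    try:
        uuid.UUID(name_or_id)
        return name_or_id                 # already a UUID, store as-is
    except ValueError:
        return _group_types[name_or_id]   # a name: look up its UUID
```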
wangxiyuan b11602e57e Support create group with group type name
Update the code to support creating a volume group with a group
type's name.

Related-to: blueprint generic-volume-group
Change-Id: I35b843a8c830e039ff595cb9c590a31d8902e6c0
2016-09-05 11:44:21 +08:00
xing-yang 708b9be9c0 Add group snapshots - APIs
This is the fifth patch that implements the generic-volume-group
blueprint. It adds APIs for group snapshots and create group
from source.

This patch depends on the fourth patch which implements group
snapshots support in the volume manager:
    https://review.openstack.org/#/c/361376/

Client side patch is here:
    https://review.openstack.org/#/c/329770/

Current microversion is 3.14. The following CLI's are supported:
cinder --os-volume-api-version 3.14 group-create-from-src
    --name my_group --group-snapshot <group snapshot uuid>
cinder --os-volume-api-version 3.14 group-create-from-src
    --name my_group --source-group <source group uuid>
cinder --os-volume-api-version 3.14 group-snapshot-create
    --name <name> <group uuid>
cinder --os-volume-api-version 3.14 group-snapshot-list
cinder --os-volume-api-version 3.14 group-snapshot-show
    <group snapshot uuid>
cinder --os-volume-api-version 3.14 group-snapshot-delete
    <group snapshot uuid>

APIImpact
DocImpact
Partial-Implements: blueprint generic-volume-group

Change-Id: I2e628968afcf058113e1f1aeb851570c7f0f3a08
2016-07-19 11:27:15 -04:00
Vivek Dhayaal f6c20ed00b Removed RPC topic config options
Cinder had config options to customize the RPC topics on which the
scheduler, volume and backup nodes listen, but this feature has been
dysfunctional for quite a long time. This commit removes the feature.
For more details, refer to the bug comments.

DocImpact
Change-Id: Ie76f070fe9a1222c209e8defd0d04fa7a7931b14
Closes-Bug: #1301888
2016-08-29 11:16:53 +05:30
xing-yang 8c74c74695 Add generic volume groups
This is the second patch that implements the generic-volume-group
blueprint. It adds the groups table and introduces create/delete/
update/list/show APIs for groups.

It depends on the first patch which adds group types and group specs:
    https://review.openstack.org/#/c/320165/

Client side patch is here:
    https://review.openstack.org/#/c/322627/

Current microversion is 3.13. The following CLI's are supported:
cinder --os-volume-api-version 3.13 group-create --name my_group
    <group type uuid> <volume type uuid>
cinder --os-volume-api-version 3.13 group-list
cinder --os-volume-api-version 3.13 create --group-id <group uuid>
    --volume-type <volume type uuid> <size>
cinder --os-volume-api-version 3.13 group-update <group uuid>
    --name new_name  description new_description
    --add-volumes <uuid of volume to add>
    --remove-volumes <uuid of volume to remove>
cinder --os-volume-api-version 3.13 group-show <group uuid>
cinder --os-volume-api-version 3.13 group-delete
    --delete-volumes <group uuid>

APIImpact
DocImpact
Change-Id: I35157439071786872bc9976741c4ef75698f7cb7
Partial-Implements: blueprint generic-volume-group
2016-07-16 19:34:39 -04:00