Commit Graph

145 Commits

Author SHA1 Message Date
Zuul 47bf6af18c Merge "db: Set name for FK constraint" 2024-02-22 14:20:24 +00:00
Zuul d3a80ad134 Merge "db: Remove erroneous primary key definitions" 2024-02-22 14:20:20 +00:00
Gorka Eguileor 1a9e911ad4 Remove leftover nested quota DB fields from model
Nested quotas used a couple of DB fields named allocated and
allocated_id, but the nested quota driver has been gone for a while; in
the W release we removed the code references to these 2 fields, but we
forgot to remove the references from the ORM models.

This means that we cannot remove the DB fields yet, as that would break
rolling upgrades: SQLAlchemy from the X release would still try to load
the fields based on its ORM models and would fail because it expects
those 2 fields.  We should have removed the ORM model references when we
removed the code that used those fields.

In this patch we remove the ORM model fields and also leave a note as a
comment with the required changes (migration and tests) for the next
release to complete this process.

This creates an inconsistency between the ORM models in models.py and
the migrations, which would make the test_models_sync test fail.  To
prevent that, the patch also improves the filter_metadata_diff method to
support diff directives in the form of lists and leverages that
functionality to ignore the discrepancies introduced in this patch until
the next release brings them back in sync with the migration code
included in the comments.
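
A hedged sketch of the model-side cleanup (the exact classes and column
shapes in models.py may differ):

import sqlalchemy as sa
from sqlalchemy.orm import declarative_base

BASE = declarative_base()

class Reservation(BASE):
    __tablename__ = 'reservations'
    id = sa.Column(sa.Integer, primary_key=True)
    # Removed ORM reference; the DB column stays until the next release:
    # allocated_id = sa.Column(sa.Integer, nullable=True)
    # TODO(next-release): add a migration dropping the 'allocated' and
    # 'allocated_id' DB columns and update test_models_sync accordingly.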

Change-Id: I5b89ba78d02c9374a7078607ea4f348a1acc4abd
2024-01-12 14:25:44 +01:00
Gorka Eguileor 402787ffcc Clean old temporary tracking
In Xena we added the use_quota DB field to volumes and snapshots to
unify the tracking of temporary resources, but we still had to keep
compatibility code for the old mechanisms (due to rolling upgrades).

This patch removes compatibility code with the old mechanism and adds
additional cleanup code to remove the tracking in the volume metadata.

Change-Id: I3f9ed65b0fe58f7b7a0867c0e5ebc0ac3c703b05
2024-01-12 14:25:44 +01:00
Gorka Eguileor 3a968212d6 DB: Set quota resource property length to 300
In change I6c30a6be750f6b9ecff7399dbb0aea66cdc097da we increased the
`resource` column of the quota_usages table from 255 to 300 characters,
because its value is constructed from (prefix + volume_type_name), and
the length of `volume_type_name` can be up to 255 characters, so adding
a prefix such as 'volumes_' or 'gigabytes_' exceeds the DB length limit
of the `resource` column.

There are 3 other quota-related tables (quotas, quota_classes,
reservations) that have a `resource` column referencing the same kind of
value, but they still have a maximum length of 255 characters, so some
operations will fail when using a volume type with a 255-character name,
such as setting a default quota limit for it or migrating volumes that
use that volume type.
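
A hedged alembic sketch of the kind of change needed for the remaining
tables (not the exact migration code):

from alembic import op
import sqlalchemy as sa

def upgrade():
    # Widen the remaining quota-related 'resource' columns to match
    # quota_usages, which was widened to 300 in the earlier change.
    for table in ('quotas', 'quota_classes', 'reservations'):
        op.alter_column(table, 'resource',
                        existing_type=sa.String(255),
                        type_=sa.String(300))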

Related-Bug: #1798327
Related-Bug: #1608849
Closes-Bug: #1948962
Change-Id: I40546b20322443dc34556de4aababf33a230db78
2024-01-12 14:25:43 +01:00
Stephen Finucane 6e30355d57 db: Set name for FK constraint
The name is present on the migrations. Set it on the models.
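
A minimal sketch of the model-side change; the constraint name and
column are hypothetical:

import sqlalchemy as sa

# Naming the FK on the model keeps it aligned with the migration:
service_id = sa.Column(
    sa.Integer,
    sa.ForeignKey('services.id', name='workers_service_id_fkey'),
    nullable=True)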

Change-Id: I9766dd02d3c97c419234b35cb4c45c21d6aab449
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
2023-11-10 11:15:39 +00:00
Stephen Finucane 4230fbc823 db: Remove erroneous primary key definitions
The migrations were not creating a composite primary key for the
'workers' table. Correct the model.

Strange that the migration auto-generation logic didn't pick this up.
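
A hedged before/after sketch of the model fix (column names other than
'id' are hypothetical):

import sqlalchemy as sa
from sqlalchemy.orm import declarative_base

BASE = declarative_base()

class Worker(BASE):
    __tablename__ = 'workers'
    # Before, a second primary_key=True flag declared a composite PK that
    # the migrations never actually created; now only 'id' is the PK.
    id = sa.Column(sa.Integer, primary_key=True)
    resource_type = sa.Column(sa.String(40))  # no longer primary_key=True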

Change-Id: Icf8a47248827de30bfa81279b63ffbc2b2a88331
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
2023-11-10 11:15:39 +00:00
Eric Harney b261fa205b DB: Align volumes_service_uuid index in model with migration
The migration specifies a column order of ('service_uuid', 'deleted');
make the model use the same order.
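
A minimal sketch of the aligned model, showing only the relevant columns
(the index name comes from the migration):

import sqlalchemy as sa
from sqlalchemy.orm import declarative_base

BASE = declarative_base()

class Volume(BASE):
    __tablename__ = 'volumes'
    id = sa.Column(sa.String(36), primary_key=True)
    service_uuid = sa.Column(sa.String(36))
    deleted = sa.Column(sa.Boolean, default=False)
    __table_args__ = (
        # Same column order as the migration: (service_uuid, deleted)
        sa.Index('volumes_service_uuid_idx', 'service_uuid', 'deleted'),
    )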

Closes-Bug: #2012289
Change-Id: I2332bf4657761076c3d72e41d089ec014e73fb52
2023-03-30 13:29:33 +00:00
Gorka Eguileor bbe42df26c Improve resource listing efficiency
Cinder's resource tables (volumes, snapshots, backups, groups,
group_snapshots) don't have required indexes to do efficient resource
listings on the database engine.

This forces the database to go through all existing records for any
listing (even when there is no additional user-requested filtering) and
check the conditions one by one, resulting in high CPU load on the
database servers.

As an example, a listing for a project with a single volume:

$ cinder list
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| ID                                   | Status    | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| 8a6b11d5-3343-4c0d-8a64-8e7070d1988e | available | test | 1    | lvmdriver-1 | false    |             |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+

May result in the database going through thousands of records (all
deleted records and all records for other projects), as demonstrated by
the following SQL queries, where 10435 rows existed in the database and
had to be checked just to return a single one.

This is the SQL equivalent of the earlier cinder list command:

$ mysql cinder -e 'select id, display_name from volumes where not deleted and project_id="a41464e54125407aab09e0236cce2c3c"'
+--------------------------------------+--------------+
| id                                   | display_name |
+--------------------------------------+--------------+
| 8a6b11d5-3343-4c0d-8a64-8e7070d1988e | test         |
+--------------------------------------+--------------+

If we look at the number of rows it hits with `explain`, we can see it
hits every single row:

$ mysql cinder -e 'explain select id, display_name from volumes where not deleted and project_id="a41464e54125407aab09e0236cce2c3c"'
+------+-------------+---------+------+---------------+------+---------+------+-------+-------------+
| id   | select_type | table   | type | possible_keys | key  | key_len | ref  | rows  | Extra       |
+------+-------------+---------+------+---------------+------+---------+------+-------+-------------+
|    1 | SIMPLE      | volumes | ALL  | NULL          | NULL | NULL    | NULL | 10435 | Using where |
+------+-------------+---------+------+---------------+------+---------+------+-------+-------------+

This patch introduces a composite (deleted, project_id) index for the
volumes, snapshots, groups, group_snapshots, and backups tables, which
allows the database to retrieve records for listings efficiently.

The reason we order first by deleted and then by project_id is that
when an admin does a listing with `--all-tenants`, that query can still
use the deleted part of the new compound index.

We can see the new index this patch adds and how it allows the DB
engine to efficiently retrieve non-deleted volumes for a specific
project:

$ mysql cinder -e 'show index from volumes'
+---------+------------+--------------------------------+--------------+---------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table   | Non_unique | Key_name                       | Seq_in_index | Column_name         | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+---------+------------+--------------------------------+--------------+---------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| volumes |          0 | PRIMARY                        |            1 | id                  | A         |           1 |     NULL | NULL   |      | BTREE      |         |               |
| volumes |          1 | volumes_service_uuid_idx       |            1 | service_uuid        | A         |           1 |     NULL | NULL   | YES  | BTREE      |         |               |
| volumes |          1 | volumes_service_uuid_idx       |            2 | deleted             | A         |           1 |     NULL | NULL   | YES  | BTREE      |         |               |
| volumes |          1 | ix_volumes_consistencygroup_id |            1 | consistencygroup_id | A         |           1 |     NULL | NULL   | YES  | BTREE      |         |               |
| volumes |          1 | ix_volumes_group_id            |            1 | group_id            | A         |           1 |     NULL | NULL   | YES  | BTREE      |         |               |
| volumes |          1 | volumes_deleted_project_id_idx |            1 | deleted             | A         |           1 |     NULL | NULL   | YES  | BTREE      |         |               |
| volumes |          1 | volumes_deleted_project_id_idx |            2 | project_id          | A         |           1 |     NULL | NULL   | YES  | BTREE      |         |               |
| volumes |          1 | volumes_deleted_host_idx       |            1 | deleted             | A         |           1 |     NULL | NULL   | YES  | BTREE      |         |               |
| volumes |          1 | volumes_deleted_host_idx       |            2 | host                | A         |           1 |     NULL | NULL   | YES  | BTREE      |         |               |
+---------+------------+--------------------------------+--------------+---------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+

$ mysql cinder -e 'explain select id, display_name from volumes where not deleted and project_id="a41464e54125407aab09e0236cce2c3c"'
+------+-------------+---------+------+--------------------------------+--------------------------------+---------+-------------+------+-----------------------+
| id   | select_type | table   | type | possible_keys                  | key                            | key_len | ref         | rows | Extra                 |
+------+-------------+---------+------+--------------------------------+--------------------------------+---------+-------------+------+-----------------------+
|    1 | SIMPLE      | volumes | ref  | volumes_deleted_project_id_idx | volumes_deleted_project_id_idx | 770     | const,const |    1 | Using index condition |
+------+-------------+---------+------+--------------------------------+--------------------------------+---------+-------------+------+-----------------------+

We also add another missing index for volumes that is used by the
create volume from image flow.

The patch also updates 3 tests that were expecting the result from a
query to be in a specific order when there is no actual ORDER BY in the
query.
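
A hedged sketch of the migration side for one of the tables (the index
name matches the `show index` output above):

from alembic import op

def upgrade():
    # One composite index per resource table; volumes shown as an example.
    op.create_index('volumes_deleted_project_id_idx',
                    'volumes', ['deleted', 'project_id'])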

Closes-Bug: #1952443
Change-Id: I8456a9f82bdf18ada76874dc0c4f59542e1c03ab
2023-03-06 14:04:57 +00:00
Zuul 2790c631d0 Merge "Report tri-state shared_targets for NVMe volumes" 2022-07-27 08:47:56 +00:00
Stephen Finucane 58f97d0525 db: Don't use legacy calling style of select()
Resolve the following RemovedIn20Warning warning:

  The legacy calling style of select() is deprecated and will be removed
  in SQLAlchemy 2.0.  Please use the new calling style described at
  select().
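
For illustration, the calling-style change looks like this (a generic
example, not the exact Cinder query):

import sqlalchemy as sa

metadata = sa.MetaData()
volumes = sa.Table('volumes', metadata,
                   sa.Column('id', sa.String(36)),
                   sa.Column('size', sa.Integer))

# Legacy calling style, removed in SQLAlchemy 2.0:
#   stmt = sa.select([volumes.c.id, volumes.c.size])
# New calling style:
stmt = sa.select(volumes.c.id, volumes.c.size)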

Change-Id: I3a944dedc43502183726797279e1db3b1d5cb98d
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
2022-06-16 13:04:50 +01:00
Stephen Finucane 0569e3450e models: Remove implicit coercion of SELECT to scalar subquery
Resolve the following SAWarning warning:

  implicitly coercing SELECT object to scalar subquery; please use the
  .scalar_subquery() method to produce a scalar subquery.
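
A generic example of the fix (not the exact Cinder query):

import sqlalchemy as sa

metadata = sa.MetaData()
snapshots = sa.Table('snapshots', metadata,
                     sa.Column('volume_id', sa.String(36)),
                     sa.Column('volume_size', sa.Integer))

# Explicitly produce the scalar subquery instead of relying on the
# deprecated implicit coercion of a SELECT:
total_size = sa.select(
    sa.func.sum(snapshots.c.volume_size)).scalar_subquery()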

Change-Id: Ib0f8ddaef230292fa55e513c94fbd32a4a1977bc
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
2022-06-16 13:04:50 +01:00
Gorka Eguileor ef741228d8 Report tri-state shared_targets for NVMe volumes
NVMe-oF drivers that share the subsystem have the same race condition
issue that iSCSI volumes that share targets do.

The race condition is caused by AER messages that trigger automatic
rescans on the connector host side in both cases.

For iSCSI we added a feature on the Open-iSCSI project that allowed
disabling these scans, and added support for it in os-brick.

Since manual scans are a relatively new feature that may be missing
from a host's iSCSI client, Cinder has a flag on volumes to indicate
when they use shared targets.  Using that flag, os-brick consumers can
use the "guard_connection" context manager to ensure race conditions
don't happen.

The race condition is prevented by os-brick using manual scans if they
are available in the iSCSI client, or a file lock if not.

The problem we face now is that we also want to use the lock for NVMe-oF
volumes that share a subsystem for multiple namespaces (there is no way
to disable automatic scans), but cinder doesn't (and shouldn't) expose
the actual storage protocol on the volume resource, so we need to
leverage the "shared_targets" parameter.

So with a single boolean value we need to encode 3 possible options:

- Don't use locks because targets/subsystems are not shared
- Use locks if the iSCSI client doesn't support manual scans
- Always use locks (for example for NVMe-oF)

The only option we have is using the "None" value as well. That way we
can encode 3 different cases.

But we have an additional restriction: "True" is already taken for the
iSCSI case, because volumes that already have that value stored will
exist in the database.

And making guard_connection always lock when shared_targets is set to
True would introduce the bottleneck from bug #1800515.

That leaves us with the "None" value to force the use of locks.

So we end up with the following tristate for "shared_targets":

- True means os-brick should lock if the iSCSI initiator doesn't
  support manual scans.
- False means os-brick should never lock.
- None means os-brick should always lock.
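
A hedged sketch of how a consumer could interpret the tristate (function
and parameter names are illustrative, not os-brick's actual API):

def should_lock(shared_targets, initiator_supports_manual_scans):
    """Decide whether guard_connection needs to lock."""
    if shared_targets is None:
        # Always lock (e.g. NVMe-oF volumes sharing a subsystem).
        return True
    if shared_targets:
        # Lock only when the iSCSI initiator can't use manual scans.
        return not initiator_supports_manual_scans
    # Targets/subsystems not shared: never lock.
    return False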

The alternative to this encoding would be to have an online data
migration for volumes to change "True" to "None", and accept that there
could be race conditions during the rolling upgrade (because os-brick on
computes will interpret "None" as "False").

Since "in theory" Cinder was only returning True or False for the
"shared_target", we add a new microversion with number 3.69 that returns
null when the value is internally set to None.

The patch also updates the database with a migration, though it looks
like it's not necessary since the DB already allows null values, but it
seems more correct to make sure that's always the case.

This patch doesn't close bug #1961102 because the os-brick patch is
also needed for that.

Related-Bug: #1961102
Change-Id: I8cda6d9830f39e27ac700b1d8796fe0489fd7c0a
2022-05-24 15:13:23 +02:00
Zuul a803c275c1 Merge "db: Enable auto-generation of database migrations" 2022-03-08 15:45:40 +00:00
Zuul 121b77cfb6 Merge "db: Resolve additional migration-model mismatches" 2022-03-07 16:52:04 +00:00
Zuul 07424f0fcc Merge "db: Add missing foreign keys, indexes to models" 2022-03-01 19:07:11 +00:00
Stephen Finucane 31f8ad4eb4 db: Enable auto-generation of database migrations
Alembic can auto-generate database migrations by comparing the models
against the database schema.  Provide docs for how to use this.  This
doesn't actually work that well at the moment because it appears our
database migrations are not in sync with the models.  Those issues will
have to be resolved separately.
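
Once models and migrations agree, generating a new migration is roughly
the standard alembic usage (the exact config path and any cinder wrapper
may differ):

$ alembic -c path/to/alembic.ini revision --autogenerate -m "add new column"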

Change-Id: I6645ca951114b94ce9ed5a83e5e11d53879a7cd5
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
2022-02-20 19:06:01 +00:00
Stephen Finucane c3e8f0a8a2 db: Resolve additional migration-model mismatches
In addition to the various constraint mismatches resolved in previous
patches, we have a number of other cases where the model field doesn't
match what we added in the migration. Resolve all of these
outstanding issues in one fell swoop. A future change will add a test to
prove this work.

Change-Id: Ie4d75b9c95b1b3518ebc2d1dd9dbf8a2f0bbb981
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
2022-02-20 19:05:46 +00:00
Stephen Finucane 1990ce4201 db: Add missing foreign keys, indexes to models
There were a number of discrepancies between the models and migrations
(both sqlalchemy-migrate and alembic-based - they're identical). These
need to be addressed. In this patch, we correct a number of
discrepancies related to ForeignKey and Index constraints. Most of these
take the form of adding the constraint to the model, but we also remove
a number of indexes defined on the models but not actually present in
the database (since they're not defined in any migration).

Change-Id: I0b53e2ccc0b03e24b8b6f67e67ea065ab5b85d07
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
2022-02-20 19:05:46 +00:00
Stephen Finucane 0c0a11026e db: Remove unnecessary timezone configuration
The DateTime data type in SQLAlchemy defaults to non-timezone aware
timestamps [1]. There's no need to specify this. Removing it aligns our
model definition with the migration creating it.

[1] https://docs.sqlalchemy.org/en/14/core/type_basics.html#sqlalchemy.types.DateTime
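
For illustration (a generic column, not a specific Cinder model):

import sqlalchemy as sa

# Before: created_at = sa.Column(sa.DateTime(timezone=False))
# After -- DateTime is timezone-naive by default, so this is equivalent:
created_at = sa.Column(sa.DateTime)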

Change-Id: I74e82c088015d2383c81b0951378058a4ba530c0
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
2022-01-24 16:18:24 +00:00
Stephen Finucane dc574fa7b9 db: Correct 'nullable' mismatches on models
There were a number of discrepancies between the models and migrations
(both sqlalchemy-migrate and alembic-based - they're identical). These
need to be addressed. In this patch, we resolve discrepancies between
the 'nullable' values, where the models field indicated 'nullable=True'
but the migration had the default 'nullable=False', or vice versa. The
migrations are the source of truth, so the models are updated to
reflect them.

Change-Id: Id4bbf46d55cdbdd5060436c6d81fad7927508739
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
2021-12-15 10:01:17 +00:00
Stephen Finucane b59de266e3 db: Fix formatting of database models
I'm going to do some surgery on these files. Address some formatting
nits now so I don't end up mixing style and functional changes later.

Change-Id: Idf0d8d5137262835a38f2e9943e15903e5000361
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
2021-12-15 10:01:17 +00:00
Gorka Eguileor 2ec2222841 Fix: Race between attachment and volume deletion
There are cases where requests to delete an attachment made by Nova can
race other third-party requests to delete the overall volume.

This has been observed when running cinder-csi, where it first requests
that Nova detaches a volume before itself requesting that the overall
volume is deleted once it becomes `available`.

This is a cinder race condition, and like most race conditions is not
simple to explain.

Some context on the issue:

- Cinder API uses the volume "status" field as a locking mechanism to
  prevent concurrent request processing on the same volume.

- Most cinder operations are asynchronous, so the API returns before the
  operation has been completed by the cinder-volume service, but the
  attachment operations such as creating/updating/deleting an attachment
  are synchronous, so the API only returns to the caller after the
  cinder-volume service has completed the operation.

- Our current code **incorrectly** modifies the status of the volume
  both on the cinder-volume and the cinder-api services on the
  attachment delete operation.

The actual sequence of events that leads to the issue reported in this
bug is:

[Cinder-CSI]
- Requests Nova to detach volume (Request R1)

[Nova]
- R1: Asks cinder-api to delete the attachment and **waits**

[Cinder-API]
- R1: Checks the status of the volume
- R1: Sends terminate connection request (R1) to cinder-volume and
  **waits**

[Cinder-Volume]
- R1: Asks the driver to terminate the connection
- R1: The driver asks the backend to unmap and unexport the volume
- R1: The last attachment is removed from the DB and the status of the
      volume is changed in the DB to "available"

[Cinder-CSI]
- Checks that there are no attachments in the volume and asks Cinder to
  delete it (Request R2)

[Cinder-API]

- R2: Checks that the volume's status is valid. It doesn't have
  attachments and is available, so it can be deleted.
- R2: Tells cinder-volume to delete the volume and returns immediately.

[Cinder-Volume]
- R2: Volume is deleted and DB entry is deleted
- R1: Finish the termination of the connection

[Cinder-API]
- R1: Now that cinder-volume has finished the termination, the code
  continues
- R1: Tries to modify the volume in the DB
- R1: DB layer raises VolumeNotFound since the volume has been deleted
  from the DB
- R1: VolumeNotFound is converted to HTTP 404 status code which is
  returned to Nova

[Nova]
- R1: Cinder responds with 404 on the attachment delete request
- R1: Nova leaves the volume as attached, since the attachment delete
  failed

At this point the Cinder and Nova DBs are out of sync, because Nova
thinks that the attachment is connected and Cinder has detached the
volume and even deleted it.

Hardening is also being done on the Nova side [2] to accept that the
volume attachment may be gone.

This patch fixes the issue mentioned above, but there is a request on
Cinder-CSI [1] to use Nova as the source of truth regarding its
attachments that, when implemented, would also fix the issue.

[1]: https://github.com/kubernetes/cloud-provider-openstack/issues/1645
[2]: https://review.opendev.org/q/topic:%2522bug/1937084%2522+project:openstack/nova

Closes-Bug: #1937084
Change-Id: Iaf149dadad5791e81a3c0efd089d0ee66a1a5614
2021-10-15 17:47:38 +02:00
Gorka Eguileor 94dfad99c2 Improve quota usage for temporary resources
Cinder creates temporary resources (volumes and snapshots) during some
of its operations, and these resources aren't counted towards quota
usage.

Cinder currently has a problem tracking quota usage when deleting
temporary resources.

Determining which volumes are temporary is a bit inconvenient because we
have to check the migration status as well as the admin metadata, so
they have been the source of several bugs, though they should be
properly tracked now.

For snapshots we don't have any way to track which ones are temporary,
which creates some issues:

- The quota sync mechanism will count them as normal snapshots.

- Manually deleting temporary snapshots after an operation fails will
  mess up the quota.

- If we are using snapshots instead of clones for backups of in-use
  volumes, the quota will be messed up on completion.

This patch proposes the introduction of a new field for those database
resource tables where we create temporary resources: volumes and
snapshots.

The field will be called "use_quota" and will be set to False for
temporary resources to indicate that we don't want them to be counted
towards quota on deletion.

"use_quota" was chosen as the field name instead of "temporary" to
allow for other cases that should not count towards quota in the
future.
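
A hedged sketch of the new column (nullability and default are
assumptions based on the description above):

import sqlalchemy as sa

# Temporary resources are created with use_quota=False so their deletion
# doesn't touch quota usage; regular resources use True.
use_quota = sa.Column(sa.Boolean, default=True)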

Moving from our current mechanism to the new one is a multi-release
process because we need to have backward compatibility code for rolling
upgrades.

This patch adds everything needed to complete the multi-release process
so that anybody can submit the next release's patches.  To do so, the
patch adds backward-compatible code introducing the feature in this
release, and TODO comments with the exact changes that need to be done
for the next 2 releases.

The removal of the compatibility code will be done in the next release,
and in the one after that we'll remove the temporary metadata rows that
may still exist in the database.

With this new field we'll be able to make our DB queries more efficient
for quota usage calculations, reduce the chances of introducing new
quota usage bugs in the future, and allow users to filter in/out
temporary volumes on listings.

Closes-Bug: #1923828
Closes-Bug: #1923829
Closes-Bug: #1923830
Implements: blueprint temp-resources
Change-Id: I98bd4d7a54906b613daaf14233d749da1e1531d5
2021-08-26 18:47:27 +02:00
Stephen Finucane 2b2993cddd db: Use 'import sqlalchemy as sa' pattern
This makes it a little clearer where things are coming from.

Change-Id: Icb1cf73722e0e9bf1364d64522e38b94aa716bea
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
2021-07-12 16:16:55 +01:00
Gorka Eguileor 1fb0767d88 Fix quota usage duplicate entries
Our current quota system has a race condition on reservations that only
happens when we are creating new entries in the quota_usages table.

We normally lock quota_usages rows using a SELECT ... FOR UPDATE query,
but that's only effective when the entries exist; the current code just
creates them and proceeds without a lock on them.

This, together with the table not having a unique constraint, means
that we can get duplicated entries and one entry can overwrite the data
written by another request.

The quota_usages table does soft deletes, so the project_id and resource
fields are not enough for a unique constraint; we therefore add a new
column called race_preventer so we can have a unique constraint across
the 3 fields.

Additionally we have to make sure that we acquire the locks before doing
the reservation calculations or the syncs, so once we create any missing
entries we close the session and try to get the locks again.

With these 2 changes we'll avoid duplicated entries as well as avoid
getting our quota usage out of sync right from the start.
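
A hedged sketch of the schema change (the column type, constraint name,
and the NULL-on-soft-delete scheme are assumptions about one plausible
implementation):

import sqlalchemy as sa
from sqlalchemy.orm import declarative_base

BASE = declarative_base()

class QuotaUsage(BASE):
    __tablename__ = 'quota_usages'
    id = sa.Column(sa.Integer, primary_key=True)
    project_id = sa.Column(sa.String(255))
    resource = sa.Column(sa.String(300))
    # Active rows carry a fixed value; soft-deleted rows can be set to
    # NULL, which most engines exempt from uniqueness checks.
    race_preventer = sa.Column(sa.Boolean, default=True)
    __table_args__ = (
        sa.UniqueConstraint('project_id', 'resource', 'race_preventer',
                            name='quota_usage_duplicates_uc'),
    )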

For the unique constraint part of the code there were 2 alternatives
(one was even used in an earlier patchset):

- Create a virtual/computed column for the Table that sets it to a fixed
  value when deleted_at is NULL and to NULL in any other case, then use
  this virtual/computed column together with project_id and resource
  fields for a unique constraint.

  This change was my preferred solution, but it requires bumping the
  SQLAlchemy version to 1.3.11, where the feature was added as computed
  columns [1], and in some DB engines it requires a relatively new
  version; for example, PostgreSQL only supports it in version 12 or
  later.

- Set deleted_at to a non NULL value by default on creation, and make
  sure our code always uses the deleted field to filter values.

  This is a bit nasty, but it has the advantage of not requiring new DB
  fields or DB data migrations for existing entries, and it is easy to
  roll back once we figure out the underlying issue (although it may
  require a DB data migration on rollback if we want to leave the
  deleted_at entry at NULL).

We decided to add a new field because one of the alternatives is kind
of hacky and the other depends on specific DBMS versions and requires a
SQLAlchemy version bump.

[1]: https://docs.sqlalchemy.org/en/13/core/defaults.html#computed-generated-always-as-columns

Closes-Bug: #1484343
Change-Id: I9000c16c5b3e6f313f02256a10cb4bc0a26379f7
2021-03-30 16:20:39 +02:00
Gorka Eguileor 6005fc25fb Remove nested quota leftovers
In change-id Ide2d53caf1bc5e3ba49f34b2f48de31abaf655d0 we removed the
nested quotas, but there is still some leftover code from it in cinder.

This code is useless, and in some cases not cheap; for example, the call
to quota_allocated_get_all_by_project that we make on every reservation.

This patch removes most of the remaining nested quota code and leaves
the DB structure changes for the next cycle, since removing them now
would break rolling upgrades.

Change-Id: Ibdbef651208c856ed75dd9af4dd711895e212910
2021-03-30 16:20:39 +02:00
Rajat Dhasmana 14a552c10e Follow Up: Default type overrides
Remove redundant unique constraint (as it's already the primary key)
Correct spacing of code block in doc

Change-Id: I726d0a6ddb3db3092a004b6f5e74bb6d9bd3db74
2020-09-17 09:05:54 +00:00
Rajat Dhasmana e63cb8548a Default type overrides
This patch adds a feature by which we allow setting default volume types
for projects.
The following changes are made to achieve the feature:

1) Add a set of 4 APIs: set, get, get_all, unset default volume type
2) All policies (except get_all) default to system/domain/project admin
3) Preference order: project default, then conf default
4) Logic to not allow deletion of a default type

We validate set, get and unset APIs with keystone to verify a valid
project id is passed in the request and user has proper authorization
rights to show the project.

The policies are system/domain/project admin by default except get_all
policy which defaults to system admin.

Implements: Blueprint multiple-default-volume-types

Change-Id: Idcc949ed6adbaea0c2337fac83014998b81ff1f8
2020-09-16 14:05:31 +00:00
Yikun Jiang 53ec4c8c4d Add x_project_id, accepted to transfers
This patch adds the 'source_project_id', 'destination_project_id', and
'accepted' fields to the transfers table and model.

Part of blueprint: improve-volume-transfer-records

Change-Id: I33bd43b2d62b2caec0a579b209dd334e32ac8f04
2018-12-20 15:42:36 +08:00
zhangbailin 24dd74748d Increase the length of resource property in quota_usages
When updating a volume type, the length of the name is checked to be
0-255 characters. If a user supplies a name of 255 characters (e.g.
volume_type_name='X'*255), then when synchronizing the quota resource,
the resource name is built as 'prefix + volume_type_name' in
quota_usages [1]; because the resource attribute in the DB is set to
'String(255)', the length limit is exceeded.

Therefore, the resource column attribute in the quota_usages database
table needs to be changed from 'String(255)' to 'String(300)'.
[1]https://github.com/openstack/cinder/blob/master/cinder/db/sqlalchemy/api.py#L352
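
For illustration:

>>> len('gigabytes_' + 'X' * 255)  # prefix + volume_type_name
265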

Closes-Bug: #1798327
Closes-Bug: #1608849
Change-Id: I6c30a6be750f6b9ecff7399dbb0aea66cdc097da
2018-11-02 05:52:33 -04:00
Liang Fang e468e97ab9 Fix for field type error
The data type of QuotaUsage.deleted is boolean, but the code compared
it to an integer. It's not safe to assume 0 is the same as False.

A similar expression such as 'VolumeTypeProjects.deleted == 0' is
correct because the type of VolumeTypeProjects.deleted is integer.
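
For illustration, assuming `session` and the models from models.py are
in scope:

from sqlalchemy import false

# QuotaUsage.deleted is a Boolean column, so compare against a boolean:
usages = session.query(QuotaUsage).filter(
    QuotaUsage.deleted == false()).all()

# VolumeTypeProjects.deleted is an Integer column, so comparing to 0 is
# correct there:
vtps = session.query(VolumeTypeProjects).filter(
    VolumeTypeProjects.deleted == 0).all()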

Change-Id: Ia3f62c93dc2621474907906aeda0ddf1469d5c8f
Signed-off-by: Liang Fang <liang.a.fang@intel.com>
2018-10-14 09:36:31 -07:00
wanghao c0efaa1d46 Transfer snapshots with volumes
This feature changes Cinder to transfer snapshots together with volumes
by default. If a user doesn't want to transfer snapshots, they can use a
new optional argument '--no-snapshots' starting with microversion 3.55.
We also introduce the new V3 API 'v3/volume_transfers' to move this API
out of contrib and into the Cinder V3 API.

The cinderclient patch: https://review.openstack.org/#/c/577611/

Change-Id: If848d131e5edcdb77d0b3c2ca45a99c4d5e14d1e
Implements: blueprint transfer-snps-with-vols
2018-07-19 09:42:43 +08:00
wanghao e396560f33 Keep ORM names matching their VO counterparts
Cinder has some versioned objects whose names do not match their ORM
counterparts. The get_model_for_versioned_object method handles those
exceptions.

This patch fixes the issue so the names match.

Change-Id: Icf709d87be99df95e5b52204032b730cd790096c
Closes-Bug: #1493112
2018-06-19 17:50:47 +08:00
Alan Bishop bec756e040 Fix how backups handle encryption key IDs
As described in the launchpad bug [1], backup operations must take care
to ensure encryption key ID resources aren't lost, and that restored
volumes always have a unique encryption key ID.

[1] https://bugs.launchpad.net/cinder/+bug/1745180

This patch adds an 'encryption_key_id' column to the backups table. Now,
when a backup is created and the source volume's encryption key is
cloned, the cloned key ID is stored in the table. This makes it possible
to delete the cloned key ID when the backup is deleted. The code that
clones the volume's encryption key has been relocated from the common
backup driver layer to the backup manager. The backup manager now has
full responsibility for managing encryption key IDs.

When restoring a backup of an encrypted volume, the backup manager now
does this:
1) If the restored volume's encryption key ID has changed, delete the
   key ID it had prior to the restore operation. This ensures no key IDs
   are leaked.
2) If the 'encryption_key_id' field in the backup table is empty, glean
   the backup's cloned key ID from the backup's "volume base metadata."
   This helps populate the 'encryption_key_id' column for backup table
   entries created prior to when the column existed.
3) Re-clone the backup's key ID to ensure the restored volume's key ID
   is always unique.
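
A hedged sketch of that restore-side logic (helper and attribute names
are hypothetical, not Cinder's actual internals):

def _handle_restored_key_id(key_manager, ctxt, backup, volume,
                            prior_key_id):
    # 1) The restore may have changed the volume's key ID; delete the one
    #    it had before so no key IDs are leaked.
    if volume.encryption_key_id != prior_key_id:
        key_manager.delete(ctxt, prior_key_id)

    # 2) Older backups have no encryption_key_id column value; fall back
    #    to the key ID recorded in the "volume base metadata".
    key_id = backup.encryption_key_id or _key_id_from_base_metadata(backup)

    # 3) Re-clone so the restored volume's key ID is always unique.
    volume.encryption_key_id = _clone_key_id(key_manager, ctxt, key_id)
    volume.save()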

Closes-Bug: #1745180
Change-Id: I6cadcbf839d146b2fd57d7019f73dce303f9e10b
2018-01-30 22:12:49 +00:00
Matt Riedemann 7875f14199 Store host connector in volume_attachment.connector column
The attachment_specs table's key and value columns are strict strings,
which means things like a wwpns list value for a Fibre Channel connector
can't be stored there, resulting in a DB error during attach with the
new volume attach flow in Nova.

The attachment_specs table is arguably not the best way to store
this data, which is just a dict like the connection_info.

A better way to store this is as a serialized json blob on the
volume_attachment record itself.

This patch adds the database migration to add the column and
an online data migration routine to migrate existing attachment_specs
entries when a volume attachment object is loaded from the database.

The volume manager attachment_update flow is changed to store
new connector attachments in the volume_attachment table directly.

An online data migration hook for the CLI will be added in a follow-up
change.
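
A hedged sketch of the new storage scheme (the exact column definition
and serialization live in the models and the volume manager):

import json
import sqlalchemy as sa

# One serialized JSON blob on the attachment row, instead of strict
# string key/value pairs in attachment_specs:
connector = sa.Column(sa.Text)

def store_connector(attachment, connector_dict):
    # e.g. {'host': 'compute1', 'wwpns': ['50060e801049cfd1', ...]}
    attachment.connector = json.dumps(connector_dict)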

Change-Id: Ica1f0e06adf0afcf740aad8cdc8d133ada1760c8
Closes-Bug: #1737724
2017-12-14 14:29:41 -05:00
John Griffith 2fa6fdd784 Add shared_targets flag to Volumes
This adds a bool column to volumes to notify consumers if
the backend hosting the volume utilizes shared_targets
or not.

We use the volume driver's capabilities report to determine this and
default to True if a driver doesn't report anything.

The purpose of the column is to notify Nova that it needs to do some
sort of locking around connect/disconnect to be sure other volumes on
the same node aren't sharing the iSCSI connection.

Using a default of "True" is safe: although locking and doing the extra
checks might be somewhat inefficient, it works fine because it will just
appear that there are never any other volumes in use.

So this change adds the column to the DB as well as an online migration
to go through and update any existing volumes.  With this and the
service_uuid column, consumers will have everything they need to:
1. determine if they need to lock
2. use the service_uuid as a unique lock name

The last remaining change in this set will be to add the fields to the
view-builder and bump the API version.

Change-Id: If600c28c86511cfb83f38d92cf6418954fb4975e
2017-11-28 13:55:23 -07:00
John Griffith cdb6cdcc96 Add service_uuid FK to volumes
This patch adds a service_uuid FK to the volumes table.  Up until now
we've just done some host name parsing to match up the service node with
where the volume is being serviced from.

With this, we now have a unique, user-visible identifier indicating
which node a particular volume is serviced by.

We'll use this for things like share-target locks going
forward.

Change-Id: Ia5d1e988256246e3552e3a770146503ea7f7bf73
2017-11-21 18:27:32 +00:00
wuyuting 12819a1b6d Add index for reservations on (deleted, uuid)
The query for uuid_reservations currently does a full table scan. This
adds an index so frequent invocations do not bog down the database.

Change-Id: I149a9de82fc1003b88e0c0852a0b64634f0c622e
Closes-Bug: #1540750
2017-10-18 08:50:04 -05:00
j-griffith 2becd847fb Add uuid to services entries
Our services ID column is still just an integer and that's *ok* for the
most part.  However, it would be nice to advertise a more meaningful,
unique identifier to the users of services.

This change adds an indexable UUID column to the services table; this
will be useful for things like identifying backends without leaking
details to end users.

Change-Id: I67e52c6a8634b74bf5975290298d6fbcadc7dd50
2017-10-03 04:48:27 +00:00
Sean McGinnis 91218584a0 Add indexes to SQLAlchemy models
We've added several indexes to tables, but we have not always updated
the models to match. These should be defined there to help with optimal
querying.

Change-Id: Ia0f7587f568af1fc41d3bf1dbffca300f12474ca
2017-09-26 00:29:04 -05:00
TommyLike ad90f40d85 Don't lock whole project's resource when reserve/commit
We lock the whole project's resources by using a 'FOR UPDATE' statement
when reserving and committing quotas.
This patch locks only the required resources by adding a composite index
(project_id and resource) and filtering quota_usages by project and
resources.
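
A hedged sketch of the narrower lock (generic SQLAlchemy; assumes the
QuotaUsage model and a session are in scope):

# Lock only the rows for the resources being reserved, instead of every
# quota_usages row for the project:
rows = (session.query(QuotaUsage)
        .filter(QuotaUsage.project_id == project_id)
        .filter(QuotaUsage.resource.in_(resources))
        .with_for_update()
        .all())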

Change-Id: Ia6fdcbe048e2a5614e789926a21c687c959d15e9
2017-08-30 12:59:14 +00:00
Armando Migliaccio c35b101b9e Switch to using bool for filtering non-deleted volume attributes
This avoids programmatic errors when talking to a DB backend like
Postgres due to integer/bool type mismatch.

This patch also addresses model/schema mismatches where the column
'deleted' was defined as Integer but created as Boolean.

Closes-bug: #1707989

Change-Id: I1f6da598b5bb45f7ce3deb1416eeec9ceb486f02
2017-08-01 18:01:43 -07:00
Jenkins 1768ca6a8d Merge "Revert MySQL Cluster Support until an approach is worked out" 2017-07-27 11:11:23 +00:00
Mike Bayer d8b6e97901 Revert MySQL Cluster Support until an approach is worked out
The approach merged in
I6e92b6057d319e20706608f984a4e0836eaf6a16 is being revised;
specifically, changes to datatypes within the migration file will no
longer work in the way specified.  Either the names and arguments of the
datatypes will change, or the concept of altering a migration in place
will not be used.  In any case, as this is the only merged patch and I'd
like to cleanly remove types like AutoStringText from oslo_db, reverting
it before a release will give us time to rethink this approach without
struggling with backwards incompatibility.  Similar patches for Nova,
Neutron, etc. are all waiting for now.

See also: http://lists.openstack.org/pipermail/openstack-dev/2017-July/120037.html

Change-Id: Ibd1a5c316027a9e5230b3bf06a311b9d089f6b7e
2017-07-26 18:09:31 -04:00
wangxiyuan f9a4ee90b7 Support metadata for backup resource
Currently only volumes and snapshots have a metadata property.
We should support it for backups as well.

This patch adds/updates the related DB and OVO models and updates the
related backup CRUD APIs.

Change-Id: I6c4c175ec3be9423cdc821ccb52578dcfe442cbe
Implements: blueprint metadata-for-backup
2017-07-26 14:23:58 +08:00
oorgeron c81a65eb91 Enables MySQL Cluster Support for Cinder
Implements minor fixes to the SQLAlchemy scripts in Cinder to support
the MySQL Cluster DB. This includes the usage of the boolean
mysql_enable_ndb setting from oslo.db from bug 1564110. This allows
operators to select either MySQL (InnoDB) or MySQL Cluster (NDB) as the
storage engine backend. Additionally, this fixes models.py to support
this enhancement.

Change-Id: I6e92b6057d319e20706608f984a4e0836eaf6a16
Implements: blueprint mysql-cluster-support
Depends-On: I9f1fd2a87fdf75332de2339d3ff4f08ce9220dcf
2017-07-19 10:23:15 -06:00
junboli db4327b6fe Keep naming convention consistent
In the project, some terminology, like URL, URLs, API, APIs, OpenStack,
UUID, and Cinder, is carelessly written as url, api, openstack, uuid,
and cinder. This patch keeps the naming convention consistent.
Change-Id: I98777fb4748cbc58b6e2fd1aca058d3e44069d07
2017-07-10 10:46:44 +08:00
Pranali Deore 5b9ae3cde8 Modify the length of project_id for Messages
Length of 'project_id' in Messages should be 255, similar to other ORM
models.

Modified the project_id length from 36 to 255.

Change-Id: I3b4e14bcf490046ec2251de4ca95571f439ca0eb
Closes-Bug: 1691060
2017-07-06 12:32:41 +05:30
TommyLike 848ff3ad86 Explicit user messages
Use 'action', 'resource', 'detail' to replace 'event'
in user messages.

APIImpact
DocImpact

Partial-Implements: blueprint better-user-message
Change-Id: I8a635a07ed6ff93ccb71df8c404c927d1ecef005
2017-06-16 14:35:24 +08:00