Commit Graph

136 Commits

Author SHA1 Message Date
Rajat Dhasmana 45250e9a92 Fix online data migration
The online migration remove_temporary_admin_metadata_data_migration
was recently merged and broke the gate for the kolla project.

There are 2 issues with the current code:
1. The db api doesn't return the total and updated values
2. We are issuing a limit and an update query together, which is
not allowed and generates the following error:

sqlalchemy.exc.InvalidRequestError: Can't call Query.update()
or Query.delete() when limit() has been called

This patch fixes the issue by creating a select subquery to get
all the ID values with the limit as max_count. We then create
a new query for update and pass the select subquery as a filter.
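
Schematically, the fix looks like this (a minimal sketch with an
illustrative model and session handling, not Cinder's real DB layer):

```python
import sqlalchemy as sa
from sqlalchemy import orm

Base = orm.declarative_base()


class VolumeAdminMetadata(Base):
    # Illustrative stand-in for the real table.
    __tablename__ = 'volume_admin_metadata'
    id = sa.Column(sa.Integer, primary_key=True)
    key = sa.Column(sa.String(255))
    deleted = sa.Column(sa.Boolean, default=False)


def remove_temporary_admin_metadata(session, max_count):
    base_filter = sa.and_(VolumeAdminMetadata.key == 'temporary',
                          VolumeAdminMetadata.deleted.is_(False))

    # Total number of rows still needing migration.
    total = session.query(VolumeAdminMetadata).filter(base_filter).count()

    # LIMIT is only legal on a SELECT, so gather up to max_count IDs first...
    ids = sa.select(VolumeAdminMetadata.id).where(
        base_filter).limit(max_count).scalar_subquery()

    # ...then run the UPDATE as a separate query filtered by those IDs.
    updated = session.query(VolumeAdminMetadata).filter(
        VolumeAdminMetadata.id.in_(ids)).update(
            {'deleted': True}, synchronize_session=False)

    session.commit()
    return total, updated
```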

We don't require a release note since the bug and the fix are in the
same release.

Closes-Bug: 2052805
Change-Id: Ida994f767eecb094c177db15dfc80a0c0fe56447
2024-02-20 13:59:46 +05:30
Gorka Eguileor 402787ffcc Clean old temporary tracking
On Xena we added the use_quota DB field to volumes and snapshots to
unify the tracking of temporary resources, but we still had to keep
compatibility code for the old mechanisms (due to rolling
upgrades).

This patch removes compatibility code with the old mechanism and adds
additional cleanup code to remove the tracking in the volume metadata.

Change-Id: I3f9ed65b0fe58f7b7a0867c0e5ebc0ac3c703b05
2024-01-12 14:25:44 +01:00
Stephen Finucane dc7c101480 db: Remove weird error handling code
The 'driver_initiator_data_insert_by_key' DB API is exceptional in that
it attempts to create a DB entry and returns a boolean value if there's
a duplicate error rather than raising an exception. This means we need to
do some special handling. Avoid the need for this special handling by
raising an exception like everyone else and do the mapping to boolean
values in the sole caller.
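
Schematically, the new shape is (names simplified; this assumes oslo.db's
session layer, which converts duplicate-key errors into DBDuplicateEntry):

```python
from oslo_db import exception as db_exc


def driver_initiator_data_insert_by_key(context, initiator, namespace,
                                        key, value):
    # DB-layer sketch: just insert the row and let a duplicate bubble up
    # as DBDuplicateEntry, like every other insert helper does.
    ...


def save_driver_initiator_data(context, initiator, namespace, key, value):
    """Caller-side wrapper (illustrative name): map back to a boolean."""
    try:
        driver_initiator_data_insert_by_key(
            context, initiator, namespace, key, value)
        return True
    except db_exc.DBDuplicateEntry:
        return False
```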

Change-Id: I25472f592dbdb487fbbb376cb92ee0dda76b677a
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
2022-08-31 09:52:56 +01:00
Stephen Finucane 0eb2d1f0a7 db: Remove unnecessary engine facade decorator
We're using a context manager in one function so there's no need to wrap
that function in a decorator.

While we're here, we add tests to prove that the decorator is sufficient
elsewhere.
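
Roughly, the two forms look like this (a minimal sketch, not Cinder's
actual DB API; the facade still has to be configured with a database
connection before use):

```python
from oslo_db.sqlalchemy import enginefacade
import sqlalchemy as sa


@enginefacade.transaction_context_provider
class Context(object):
    """Stand-in for cinder.context.RequestContext."""


@enginefacade.reader
def ping_with_decorator(context):
    # The decorator opens the transaction scope and exposes it as
    # context.session.
    return context.session.execute(sa.text('SELECT 1')).scalar()


def ping_with_context_manager(context):
    # The context manager does the same job on its own, so wrapping this
    # function in @enginefacade.reader as well would be redundant.
    with enginefacade.reader.using(context) as session:
        return session.execute(sa.text('SELECT 1')).scalar()


# Usage, e.g. after enginefacade.configure(connection='sqlite://'):
#     ping_with_decorator(Context())
#     ping_with_context_manager(Context())
```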

Change-Id: I3be92fcd4ac1f86230adf7c2998e2f0100a2ec33
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
2022-08-31 09:52:56 +01:00
Stephen Finucane c4a4c91ee6 db: Remove final users of 'get_session'
We also update 'get_engine' to avoid calling 'get_legacy_facade'.

Change-Id: Ic8c37145c66069e9194458fa7f622d98022fee4f
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
2022-04-14 19:19:18 +01:00
Stephen Finucane 62ade4243e db: Final cleanup for context-based enginefacade
Resolve all the TODOs we left in here, now that we've converted
everything.

Change-Id: Ifb32b0d09f1e08422ceb98e0eb60f48aed85e3df
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
2022-04-14 19:19:16 +01:00
Stephen Finucane 2727029bc6 db: Migrate online upgrade helpers to enginefacade
Migrate the online upgrade helpers from the legacy enginefacade to the modern
context-based enginefacade.

Change-Id: Iaba291b92a29f2dcbdf18e146257ae4e173c788a
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
2022-04-12 11:03:58 +01:00
Stephen Finucane 4ffe3139cf db: Migrate "volume", "volume attachment" APIs to enginefacade
Migrate volume- and volume attachment-related APIs from the legacy
enginefacade to the modern context-based enginefacade.

Change-Id: I1aaff94190ccbe82d231b3064aec3753d8be945e
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
2022-04-12 11:03:50 +01:00
Stephen Finucane bc9397073e db: Migrate "quota usage", "quota reservation" APIs to enginefacade
Migrate quota usage- and quota reservation-related APIs from the legacy
enginefacade to the modern context-based enginefacade.

Change-Id: If2960fa84646f1e8ac8e3c563a10653e58755c92
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
2022-04-12 11:03:50 +01:00
Gorka Eguileor 2ec2222841 Fix: Race between attachment and volume deletion
There are cases where requests to delete an attachment made by Nova can
race other third-party requests to delete the overall volume.

This has been observed when running cinder-csi, where it first requests
that Nova detaches a volume before itself requesting that the overall
volume is deleted once it becomes `available`.

This is a cinder race condition, and like most race conditions is not
simple to explain.

Some context on the issue:

- Cinder API uses the volume "status" field as a locking mechanism to
  prevent concurrent request processing on the same volume.

- Most cinder operations are asynchronous, so the API returns before the
  operation has been completed by the cinder-volume service, but the
  attachment operations such as creating/updating/deleting an attachment
  are synchronous, so the API only returns to the caller after the
  cinder-volume service has completed the operation.

- Our current code **incorrectly** modifies the status of the volume
  both on the cinder-volume and the cinder-api services on the
  attachment delete operation.

The actual sequence of events that leads to the issue reported in this
bug is:

[Cinder-CSI]
- Requests Nova to detach volume (Request R1)

[Nova]
- R1: Asks cinder-api to delete the attachment and **waits**

[Cinder-API]
- R1: Checks the status of the volume
- R1: Sends terminate connection request (R1) to cinder-volume and
  **waits**

[Cinder-Volume]
- R1: Asks the driver to terminate the connection
- R1: The driver asks the backend to unmap and unexport the volume
- R1: The last attachment is removed from the DB and the status of the
      volume is changed in the DB to "available"

[Cinder-CSI]
- Checks that there are no attachments in the volume and asks Cinder to
  delete it (Request R2)

[Cinder-API]
- R2: Checks that the volume's status is valid. It doesn't have
  attachments and is available, so it can be deleted.
- R2: Tells cinder-volume to delete the volume and returns immediately.

[Cinder-Volume]
- R2: The volume is deleted and its DB entry is removed
- R1: Finishes the termination of the connection

[Cinder-API]
- R1: Now that cinder-volume has finished the termination, the code
  continues
- R1: Tries to modify the volume in the DB
- R1: DB layer raises VolumeNotFound since the volume has been deleted
  from the DB
- R1: VolumeNotFound is converted to HTTP 404 status code which is
  returned to Nova

[Nova]
- R1: Cinder responds with 404 on the attachment delete request
- R1: Nova leaves the volume as attached, since the attachment delete
  failed

At this point the Cinder and Nova DBs are out of sync, because Nova
thinks that the attachment is connected and Cinder has detached the
volume and even deleted it.

Hardening is also being done on the Nova side [2] to accept that the
volume attachment may be gone.

This patch fixes the issue mentioned above, but there is a request on
Cinder-CSI [1] to use Nova as the source of truth regarding its
attachments that, when implemented, would also fix the issue.

[1]: https://github.com/kubernetes/cloud-provider-openstack/issues/1645
[2]: https://review.opendev.org/q/topic:%2522bug/1937084%2522+project:openstack/nova

Closes-Bug: #1937084
Change-Id: Iaf149dadad5791e81a3c0efd089d0ee66a1a5614
2021-10-15 17:47:38 +02:00
Rajat Dhasmana 53c3b192ec Fix: Online migration for volume_use_quota_online_data_migration
In the calculate_use_quota method, add a check for the case where
there is no migration (i.e. volume.migration_status is None) and
return True, since such a volume should be counted towards quotas.

Also change the attribute from admin_metadata to volume_admin_metadata,
since we are dealing with ORM objects and not OVOs.

It also fixes another issue where the db/api layer wasn't returning
the total and updated record counts.
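
A rough sketch of the resulting check (illustrative only; it assumes
temporary volumes are flagged with a 'temporary' key in their admin
metadata, as in the quota sync fixes further down):

```python
def calculate_use_quota(volume):
    """Decide whether an ORM volume row counts towards quota usage."""
    # ORM rows expose volume_admin_metadata; admin_metadata is the OVO name.
    if any(item.key == 'temporary' and item.value == 'True'
           for item in volume.volume_admin_metadata):
        return False

    # No migration at all: a regular volume, so count it.
    if volume.migration_status is None:
        return True

    # Migration destination volumes are temporary copies of the source.
    return not volume.migration_status.startswith('target:')
```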

Closes-Bug: #1942077
Change-Id: I640e039833001a91b23bfe0e6bb9c4e083d012a3
2021-09-10 13:16:21 -04:00
Gorka Eguileor 94dfad99c2 Improve quota usage for temporary resources
Cinder creates temporary resources (volumes and snapshots) during some
of its operations, and these resources aren't counted towards quota
usage.

Cinder currently has a problem tracking quota usage when deleting
temporary resources.

Determining which volumes are temporary is a bit inconvenient because we
have to check the migration status as well as the admin metadata, so
they have been the source of several bugs, though they should be
properly tracked now.

For snapshots we don't have any way to track which ones are temporary,
which creates some issues:

- Quota sync mechanism will count them as normal snapshots.

- Manually deleting temporary snapshots after an operation fails will
  mess the quota.

- If we are using snapshots instead of clones for backups of in-use
  volumes the quota will be messed on completion.

This patch proposes the introduction of a new field for those database
resource tables where we create temporary resources: volumes and
snapshots.

The field will be called "use_quota" and will be set to False for
temporary resources to indicate that we don't want them to be counted
towards quota on deletion.

Instead of using "temporary" as the field name "use_quota" was used to
allow other cases that should not do quota in the future.
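
A trimmed-down illustration of the new column (not the full Cinder model;
snapshots gain the same field):

```python
import sqlalchemy as sa
from sqlalchemy import orm

Base = orm.declarative_base()


class Volume(Base):
    __tablename__ = 'volumes'
    id = sa.Column(sa.String(36), primary_key=True)
    # Temporary resources are created with use_quota=False so that
    # deletion can skip quota adjustments without inspecting
    # migration_status or admin metadata.
    use_quota = sa.Column(sa.Boolean, nullable=False, default=True)
```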

Moving from our current mechanism to the new one is a multi-release
process because we need to have backward compatibility code for rolling
upgrades.

This patch adds everything needed to complete the multi-release process
so that anybody can submit the next release's patches. To do so, the
patch adds backward-compatible code introducing the feature in this
release, plus TODO comments with the exact changes that need to be made
for the next 2 releases.

The removal of the compatibility code will be done in the next release,
and in the one after that we'll remove the temporary metadata rows that
may still exist in the database.

With this new field we'll be able to make our DB queries more efficient
for quota usage calculations, reduce the chances of introducing new
quota usage bugs in the future, and allow users to filter in/out
temporary volumes on listings.

Closes-Bug: #1923828
Closes-Bug: #1923829
Closes-Bug: #1923830
Implements: blueprint temp-resources
Change-Id: I98bd4d7a54906b613daaf14233d749da1e1531d5
2021-08-26 18:47:27 +02:00
Gorka Eguileor 935ddf7c72 Prevent quota and reservations from going negative
Our current quota usage code doesn't prevent usage or reservations
from going negative, though it warns at reservation time.

This patch makes sure that the counters never go negative, setting them
to 0 whenever they would be.
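
The change essentially amounts to clamping the counters, along these
lines (simplified sketch, not the actual helper):

```python
def safe_usage(value):
    """Quota usage and reservation counters may never drop below zero."""
    return max(value, 0)


# e.g. usage.in_use = safe_usage(usage.in_use + delta)
#      usage.reserved = safe_usage(usage.reserved - reservation_delta)
```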

Change-Id: I487157d7092fa4cee1cd72cf01ce7031c94f553f
2021-03-30 16:23:10 +02:00
Gorka Eguileor 1fb0767d88 Fix quota usage duplicate entries
Our current quota system has a race condition on reservations that only
happens when we are creating new entries in the quota_usages table.

We normally lock quota_usages rows using a SELECT ... FOR UPDATE query,
but that's only effective when the entries exist, and current code just
creates them and proceeds without a lock on them.

This, together with the table not having a unique constraint, means that
we can get duplicated entries and that one entry can overwrite the data
written by another request.

The quota_usages table does soft deletes, so the project_id and resource
fields are not enough for a unique constraint; we therefore add a new
column called race_preventer so that we can have a unique constraint
across the 3 fields.
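
An illustrative model-level version of the constraint (columns trimmed
and the constraint name made up for the sketch):

```python
import sqlalchemy as sa
from sqlalchemy import orm

Base = orm.declarative_base()


class QuotaUsage(Base):
    __tablename__ = 'quota_usages'
    __table_args__ = (
        # deleted_at cannot be part of the key because of soft deletes,
        # so race_preventer completes the unique trio.
        sa.UniqueConstraint('project_id', 'resource', 'race_preventer',
                            name='uniq_quota_usages_trio'),
    )
    id = sa.Column(sa.Integer, primary_key=True)
    project_id = sa.Column(sa.String(255))
    resource = sa.Column(sa.String(300))
    in_use = sa.Column(sa.Integer, default=0)
    reserved = sa.Column(sa.Integer, default=0)
    deleted = sa.Column(sa.Boolean, default=False)
    deleted_at = sa.Column(sa.DateTime)
    race_preventer = sa.Column(sa.Boolean, default=False)
```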

Additionally we have to make sure that we acquire the locks before doing
the reservation calculations or the syncs, so once we create any missing
entries we close the session and try to get the locks again.

With these 2 changes we'll avoid duplicated entries as well as avoid
getting our quota usage out of sync right from the start.

For the unique constraint part of the code there were 2 alternatives
(one was even used in an earlier patchset):

- Create a virtual/computed column for the Table that sets it to a fixed
  value when deleted_at is NULL and to NULL in any other case, then use
  this virtual/computed column together with project_id and resource
  fields for a unique constraint.

  This change was my preferred solution, but it requires bumping the
  SQLAlchemy version to 1.3.11 where the feature was added as computed
  columns [1], and in some DB engines it requires a relatively new
  version; for example, PostgreSQL only supports it on version 12 or
  later.

- Set deleted_at to a non NULL value by default on creation, and make
  sure our code always uses the deleted field to filter values.

  This is a bit nasty, but it has the advantage of requiring no new DB
  fields and no DB data migrations for existing entries, and of being easy
  to roll back once we figure out the underlying issue (although it may
  require a DB data migration on rollback if we want to leave the
  deleted_at entry at NULL).

The decision to add a new field was because one of the alternatives is
kind of hacky and the other one depends on specific DBMS versions and
requires a SQLAlchemy version bump.

[1]: https://docs.sqlalchemy.org/en/13/core/defaults.html#computed-generated-always-as-columns

Closes-Bug: #1484343
Change-Id: I9000c16c5b3e6f313f02256a10cb4bc0a26379f7
2021-03-30 16:20:39 +02:00
Gorka Eguileor 8f03b26f50 Fix automatic quota sync for temporary volumes
When using the automatic quota refresh via the `until_refresh` and
`max_age` configuration options, the quota usage calculated by the
refresh will not be correct if there are temporary volumes (such as
those created from snapshots for backups).

Normal quota usage accounting does not have this problem, because the
admin metadata marks those volumes as temporary and they are not used to
increase/decrease quota usage.

This patch fixes this by modifying the existing filter that ignores
migrating volumes so that it also ignores temporary volumes, based on
their admin metadata.
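
A sketch of the combined filtering (here and in the migrating-volumes fix
just below); `vol` and `admin_meta` stand in for the Volume and
VolumeAdminMetadata models, and the 'temporary' metadata key is an
assumption carried over from how temporary volumes are flagged:

```python
import sqlalchemy as sa


def exclude_non_quota_volumes(query, vol, admin_meta):
    """Drop temporary and migration-destination volumes from a sync."""
    # Migration destinations are already counted through their source.
    query = query.filter(sa.or_(vol.migration_status.is_(None),
                                ~vol.migration_status.like('target:%')))
    # Temporary volumes are flagged through admin metadata.
    temporary_ids = sa.select(admin_meta.volume_id).where(
        admin_meta.key == 'temporary',
        admin_meta.value == 'True').scalar_subquery()
    return query.filter(~vol.id.in_(temporary_ids))
```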

Closes-Bug: #1919161
Change-Id: I0fd89028684c1406f427f4c97b120ece5cf2780b
2021-03-26 12:26:15 +01:00
Gorka Eguileor e55043ff00 Fix automatic quota sync for migrating volumes
When using the automatic quota refresh via the `until_refresh` and
`max_age` configuration options, the quota usage calculated by the
refresh will not be correct if there are volumes that are migrating,
since both the source and destination volumes are counted in the
refresh.

Normal quota usage accounting only counts such a volume once, and so
should the automatic quota calculation.

This patch fixes this by adding a filter to the query that skips
migration destination volumes.

It also updates the DB implementation of volume_data_get_for_project,
which didn't match the signature in cinder.db.api.

Closes-Bug: #1917450
Change-Id: Ifff092917abe07726367a953f5ed420626c53bb9
2021-03-17 09:06:07 +01:00
xuanyandong 72c6dc564a Remove six from files under cinder/tests/unit
Replace the following items with Python 3 style code (a few example
mappings are sketched after the list).

- six.PY3
- six.add_metaclass
- six.string_types
- six.text_type
- six.moves
- six.StringIO
- six.reraise
- six.wraps
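
For reference, a few of the mappings (illustrative snippets only):

```python
import abc
import functools
import io

# six.text_type(x) -> str(x); six.StringIO() -> io.StringIO()
buf = io.StringIO(str(42))


# @six.add_metaclass(abc.ABCMeta) -> the metaclass keyword argument
class BaseDriver(metaclass=abc.ABCMeta):
    pass


# six.wraps -> functools.wraps
def traced(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs)
    return wrapper
```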

Change-Id: Id27e6823244bc6a81ce1301b32cee79dd99e771a
Implements: blueprint six-removal
2020-10-14 09:19:08 +08:00
Rodrigo Barbieri 8ebeafcbba Fix cross-project incremental backups
This patch addresses the scenario where an
incremental backup can be created with a
parent backup that was taken in a different
project. This scenario ultimately leads to
a silent error when creating/deleting the
incremental backup and quotas going out of
sync.

The proposed fix is to narrow the backup search
down to the same project. To achieve this, a
method's signature had to be updated so that
the volume's project_id can be passed in,
giving the desired optimized behavior.

Closes-bug: #1869746
Closes-bug: #1873518

Change-Id: Icb621ff5966133f59d9d43ca2dd9f8e1919b1149
2020-05-21 09:38:25 -03:00
Eric Harney d4eb4a9ba1 Move unit test code under tests/unit/
test.py was previously not in the tests
directory.  This means that downstream packagers
of Cinder have to specifically exclude it from
the main Cinder package (which does not typically
include unit tests).

Move it under the cinder/tests/unit/ dir where it
should be, to clean this up.

Change-Id: I65c50722f5990f540d84fa361b997302bbc935c5
2020-04-30 18:13:54 -04:00
whoami-rajat a5bb17bdfc Make volume soft delete more thorough
When a volume record is soft-deleted in the database,
dependent records in other tables (for example,
Transfers, VolumeGlanceMetadata, etc.) must be soft
deleted as well.  Otherwise, we will get FK dependency
errors when the database is purged.

This patch adds that support for the VolumeAttachment table
(other tables were already covered, just refactored).

Also adds tests.

Co-authored-by: Rajat Dhasmana <rajatdhasmana@gmail.com>
Co-authored-by: Brian Rosmaita <rosmaita.fossdev@gmail.com>

Change-Id: Ibfa6c4ba2f162681756ec3203991351345b65346
Related-Bug: #1542169
2020-01-11 13:01:11 -05:00
Sean McGinnis 3eb9b422f4 Introduce flake8-import-order extension
This adds usage of the flake8-import-order extension to our flake8
checks to enforce consistency on our import ordering to follow the
overall OpenStack code guidelines.

Since we have now dropped Python 2, this also cleans up a few cases for
things that were third party libs but became part of the standard
library such as mock, which is now a standard part of unittest.
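
For reference, the ordering the extension enforces looks like this (with
mock now imported from the standard library):

```python
# 1. standard library
import os
from unittest import mock

# 2. third-party libraries
from oslo_config import cfg

# 3. local (cinder) imports
from cinder import exception
from cinder import utils
```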

Some questions, in order of importance:

Q: Are you insane?
A: Potentially.

Q: Why should we touch all of these files?
A: This adds consistency to our imports. The extension makes sure that
   all imports follow our published guidelines of having imports ordered
   by standard lib, third party, and local. This will be a one time
   churn, then we can ensure consistency over time.

Q: Why bother? This doesn't really matter.
A: I agree - but...

We have the issue that we have fewer people actively involved and less
time to perform thorough code reviews. This will make catching these
kinds of issues objective and automated.

But part of this, even though it may seem a little annoying, is about
making it easier for contributors. Right now, we may or may not notice
if something is following the guidelines or not. And we may or may not
comment in a review to ask for a contributor to make adjustments to
follow the guidelines.

But then further along into the review process, someone decides to be
thorough, and after the contributor feels like they've had to deal with
other change requests and things are in really good shape, they get a -1
on something mostly meaningless as far as the functionality of their
code. It can be a frustrating and disheartening thing.

I believe this actually helps avoid that by making it an objective thing
that they find out right away up front - either the code is following
the guidelines and everything is happy, or it's not and running local
jobs or the pep8 CI job will let them know right away and they can fix
it. No guessing on whether or not someone is going to take a stand on
following the guidelines or not.

This will also make it easier on the code reviewers. The more we can
automate, the more time we can spend in code reviews making sure the
logic of the change is correct and less time looking at trivial coding
and style things.

Q: Should we use our hacking extensions for this?
A: Hacking has had to keep back linter requirements for a long time now.
   Current versions of the linters actually don't work with the way
   we've been hooking into them for our hacking checks. We will likely
   need to do away with those at some point so we can move on to the
   current linter releases. This will help ensure we have something in
   place when that time comes to make sure some checks are automated.

Q: Didn't you spend more time on this than the benefit we'll get from
   it?
A: Yeah, probably.

Change-Id: Ic13ba238a4a45c6219f4de131cfe0366219d722f
Signed-off-by: Sean McGinnis <sean.mcginnis@gmail.com>
2020-01-06 09:59:35 -06:00
whoami-rajat a550ade303 Untyped to Default Volume Type
The patch adds the following functionality:
* A default volume type (``__DEFAULT__``) will be
  created during cinder DB migration.
* Admins can override the default type in cinder.conf
* A new volume created without any type (when the
  `default_volume_type` option is unset in cinder.conf)
  will be assigned the ``__DEFAULT__`` type.
* The default volume type cannot be deleted
* All untyped volumes and snapshots will be migrated to the
  ``__DEFAULT__`` type.

Change-Id: I4da0c13b5b3f8174a30b8557f968d6b9e641b091
Implements: blueprint untyped-volumes-default-volume-type
2019-09-20 05:37:54 +00:00
Gorka Eguileor 230bda82c1 Fix online data migrations
Online data migrations are part of the rolling upgrades mechanism, and
they are run on the currently installed version before we upgrade to the
next one, to ensure that all DB records have been upgraded even if they
have not been accessed during the life of the current release.

This means that online migrations from one release should be removed in
the next release, as they are no longer needed.

We still have the Queens online data migrations in our code, and we
should remove them in master, Stein, and Rocky.

This patch also adds checks to the status tool to check that these
online data migrations have completed.

Closes-bug: #1837703
Change-Id: I025789f92eb318b8e5a531bea6cf106d52f268ae
2019-09-06 12:48:23 +02:00
Eric Harney 47d2a98fa6 Fix service_uuid migration for volumes with no host
For volumes that have no host assigned, this
migration will fail.  Skip those volumes, as they
don't need to be updated.

Closes-Bug: #1821207
Change-Id: I08487575e52c5cfd5a268a23d97b6bd931aeaa45
2019-03-21 15:01:39 -04:00
zhu.boxiang 18570a716c Fix wrong filter of backups in db api
The function _process_backups_filters is meant to filter backups, so
fix it to use the backup model and its metadata attribute.

Change-Id: Ie6807745fa89731b9db3bc29da653db269219bce
Closes-Bug: #1791003
2018-09-11 22:17:39 +08:00
Alan Bishop bec756e040 Fix how backups handle encryption key IDs
As described in the launchpad bug [1], backup operations must take care
to ensure encryption key ID resources aren't lost, and that restored
volumes always have a unique encryption key ID.

[1] https://bugs.launchpad.net/cinder/+bug/1745180

This patch adds an 'encryption_key_id' column to the backups table. Now,
when a backup is created and the source volume's encryption key is
cloned, the cloned key ID is stored in the table. This makes it possible
to delete the cloned key ID when the backup is deleted. The code that
clones the volume's encryption key has been relocated from the common
backup driver layer to the backup manager. The backup manager now has
full responsibility for managing encryption key IDs.

When restoring a backup of an encrypted volume, the backup manager now
does this:
1) If the restored volume's encryption key ID has changed, delete the
   key ID it had prior to the restore operation. This ensures no key IDs
   are leaked.
2) If the 'encryption_key_id' field in the backup table is empty, glean
   the backup's cloned key ID from the backup's "volume base metadata."
   This helps populate the 'encryption_key_id' column for backup table
   entries created prior to when the column existed.
3) Re-clone the backup's key ID to ensure the restored volume's key ID
   is always unique.

Closes-Bug: #1745180
Change-Id: I6cadcbf839d146b2fd57d7019f73dce303f9e10b
2018-01-30 22:12:49 +00:00
wanghao 91953041c0 Use constants for cinder-volume
Since we have defined constants.VOLUME_BINARY, this patch
changes some cinder-volume binary strings to use constants.VOLUME_BINARY.

Change-Id: I91b1ed2331a3b197a2ba39fa5cfb02e9d161d709
2017-12-29 09:05:08 +08:00
Matt Riedemann bf5c34e30a Add online data migration routine for attachment_specs
This adds the hook to the cinder-manage db online_data_migration
CLI for migrating attachment_specs entries to the volume_attachment
table.

Along the way, two changes have to be made in the VolumeAttachment
object:

1. Handle 'connector' in updates in VolumeAttachment.create(); this
   makes the test work to create a volume attachment with the connector
   already set, something we should support so you don't have to create()
   and then save() the object.
2. In _from_db_object, we have to set the context on the attachment
   object and reset the volume field changes *before* saving the
   connector changes, otherwise save() fails because (a) it has an
   empty context and (b) you can't change a volume through the attachment.

Change-Id: Ic0698e4fc0f6519c2676fe91e1a6c0fb47bbeb95
Related-Bug: #1737724
2017-12-19 16:42:08 -05:00
John Griffith cdb6cdcc96 Add service_uuid FK to volumes
This patch adds a service_uuid FK to the volumes table.
Up until now we've just done some host name parsing to
match up the service node with where the volume is being
serviced from.

With this, we now have a user-visible unique identifier
indicating which service node a particular volume is
being serviced by.

We'll use this for things like share-target locks going
forward.
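
Schematically (trimmed models, not the real schema definition):

```python
import sqlalchemy as sa
from sqlalchemy import orm

Base = orm.declarative_base()


class Service(Base):
    __tablename__ = 'services'
    id = sa.Column(sa.Integer, primary_key=True)
    uuid = sa.Column(sa.String(36), unique=True)


class Volume(Base):
    __tablename__ = 'volumes'
    id = sa.Column(sa.String(36), primary_key=True)
    # Directly records which service node handles this volume,
    # replacing the old host-name parsing.
    service_uuid = sa.Column(sa.String(36),
                             sa.ForeignKey('services.uuid'),
                             nullable=True)
```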

Change-Id: Ia5d1e988256246e3552e3a770146503ea7f7bf73
2017-11-21 18:27:32 +00:00
Zuul faa588b672 Merge "Fix earlier backup records can't be restored" 2017-11-17 06:06:23 +00:00
TommyLike 7bae5759c8 Fix earlier backup records can't be restored
We upgraded the backup service attribute from
a module name to a class name. This results in
failures when restoring earlier backups with
the new code. This patch upgrades the older
backup records and also adds compatibility
code.

Change-Id: I0374ab9f2d37ecf8d8bce191c7535601266d432e
Closes-Bug: #1727912
2017-11-14 23:13:45 +00:00
wangxiyuan f50b355577 Fix resource count for os-host show
The os-host show API should only return the resource count for the
specified host.
The logic in Cinder which counts the project's resources on all hosts
is wrong.

Change-Id: I96cd285a82b44af8818514692818e739443dcc45
Closes-bug: #1699936
2017-11-14 14:23:51 +00:00
Hanxi_Liu 9d61c33bdc Use oslo_db.sqlalchemy.enginefacade
EngineFacade is deprecated, so use oslo_db.sqlalchemy.enginefacade
and update the 'get_session' method to keep backward compatibility.

Blueprint: use-oslodb-enginefacade
Change-Id: Idae54a17567b558352031ecc573d83ab8b8f04b9
2017-11-09 15:08:54 +08:00
John Griffith 950e693697 Make service object UUID not nullable
Fix the UUID entry for the newly added service attribute and
update the unit tests appropriately.

This makes the UUID entry in the service object not nullable
and fixes up the unit tests to work properly.  Also introduces
a unit test specifically for the online migration api in the db.

Closes-Bug: #1727091

Change-Id: I17d3a873cfc8f056c2d31f6c8710489785998d3c
2017-10-26 10:17:50 -06:00
Gorka Eguileor 720f07f0d6 Create custom assertTrue and assertFalse
Python's unittest framework assertTrue and assertFalse methods don't
check that the provided value is False or True, like the documentation
says:

  Note that this is equivalent to bool(expr) is True and not to expr is
  True (use assertIs(expr, True) for the latter).

According to the documentation we should try to avoid using it as much
as possible:

  This method should also be avoided when more specific methods are
  available (e.g.  assertEqual(a, b) instead of assertTrue(a == b)),
  because they provide a better error message in case of failure.

But in our current code we keep misusing these methods in many ways,
among which we can easily find the following:

- Thinking it only matches False/True

- assertTrue(expected, actual) --> This will always be true as long as
bool(expected) is True, because actual is interpreted as the message to
return on failure.

- Incorrectly testing values, like checking for a tuple of the form
(True, {...}) instead of testing for True in the first position.

This patch fixes some of the incorrect uses of the methods and
overrides the default methods so that they behave the way many people
think they do:

- assertTrue(x) == assertIs(True,x)
- assertFalse(x) == assertIs(False,x)

This will hopefully help us prevent incorrect usage of the methods.
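
Sketched against plain unittest (Cinder's real base test class differs):

```python
import unittest


class StrictTestCase(unittest.TestCase):
    """assertTrue/assertFalse that check identity, not truthiness."""

    def assertTrue(self, x, *args, **kwargs):
        self.assertIs(True, x, *args, **kwargs)

    def assertFalse(self, x, *args, **kwargs):
        self.assertIs(False, x, *args, **kwargs)


class Example(StrictTestCase):
    def test_strings_are_not_true(self):
        # Would pass with the stock assertTrue; fails loudly here.
        self.assertRaises(AssertionError, self.assertTrue, 'False')
```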

Change-Id: I1e8c82702932e6b6d940bd83d0a2d4494576a81b
2017-09-11 18:31:20 +02:00
TommyLike ad90f40d85 Don't lock whole project's resource when reserve/commit
We lock the whole project's resources by using a 'FOR UPDATE'
statement when reserving and committing quotas.
This patch only locks the required resources by
adding a composite index (project_id and resource) and
filtering quota_usages by project and resources.
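
Roughly (a sketch; `model` stands in for the QuotaUsage ORM model):

```python
def quota_usages_for_resources(session, model, project_id, resources):
    # Lock only the rows for this project *and* the resources taking
    # part in the reservation, instead of every row of the project.
    return (session.query(model)
            .filter_by(project_id=project_id)
            .filter(model.resource.in_(resources))
            .with_for_update()
            .all())
```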

Change-Id: Ia6fdcbe048e2a5614e789926a21c687c959d15e9
2017-08-30 12:59:14 +00:00
Jenkins 7ef07b8f75 Merge "Add group to cluster when init host" 2017-08-01 19:20:40 +00:00
wangxiyuan f9a4ee90b7 Support metadata for backup resource
Currently only volumes and snapshots have a metadata property.
We should support it for backups as well.

This patch adds/updates the related DB and OVO models and
updates the related backup CRUD APIs.

Change-Id: I6c4c175ec3be9423cdc821ccb52578dcfe442cbe
Implements: blueprint metadata-for-backup
2017-07-26 14:23:58 +08:00
wangxiyuan 48ce34a68e Add group to cluster when init host
If clustering is enabled, the group should be updated with the
cluster as well.

Change-Id: I72dc6c61f10417f8382c7c488be27ed5199b75b9
Implements: blueprint cinder-volume-active-active-support
2017-07-12 14:08:43 +08:00
TommyLike 8fba9a9080 Cinder volume revert to snapshot
This patch implements the spec for reverting a volume to
its latest snapshot.
Related tempest and client patches:

[1] https://review.openstack.org/#/c/463906/
[2] https://review.openstack.org/#/c/464903/

APIImpact
DocImpact
Partial-Implements: blueprint revert-volume-to-snapshot

Change-Id: Ib20d749c2118c350b5fa0361ed1811296d518a17
2017-06-21 10:35:32 +08:00
Jenkins 1fa43b8f79 Merge "Tests: Disallow use of assertTrue(str)" 2017-06-18 20:45:32 +00:00
Eric Harney 7a84172cb2 Tests: Disallow use of assertTrue(str)
self.assertTrue(string) is fairly dangerous because
things like self.assertTrue('False') will pass.

Require a more specific assertion when passing a
string to assertTrue, which should help prevent mistakes.

Fix up tests that do this currently, some of which
appear to not be testing what they intended.

Change-Id: I0cd29b9f3247ae5b7b3c478b96970b73d69a72c0
2017-06-15 12:09:52 -04:00
Eric Harney 331a8637c9 API/DB: Fix hash randomization test issues
These tests fail if hash randomization is enabled.

Change-Id: I96b9b413387f2ef02ea7ccfb6e676cad0b87c6f1
2017-06-05 11:21:17 -04:00
TommyLike 5f580ff1ad Fix like filter related issues
1. Users would filter resources with input like the one below:
```
command: cinder list --filters name=123
```
The '123' will be recognised as an integer, but we currently
only support string values; it is safe to filter resources
when the input is an int.

2. For some resources such as 'volume', we expose different
attributes ('name' & 'display_name') to API users, so if a
user filters resources by 'name~=az_volume' we should
convert it back before processing the database-layer filter.

Change-Id: Ie950dd9fdaf0ca860f81e8ba984f7e6a9696a86a
2017-05-27 09:24:25 +08:00
TommyLike 6df8415411 Support 'LIKE' operator to filter resource
Added like operator support to filters
for the following resources.

1. volume
2. snapshot
3. backup
4. group
5. group-snapshot
6. attachment
7. message

Depends-On: ff3d41b15a

APIImpact
DocImpact
Partial-Implements: blueprint support-regexp-based-query

Change-Id: I6c2ea07b0bfc5852b28e44989406cc10eb972e26
2017-05-22 14:23:31 +08:00
xing-yang 18744ba199 Tiramisu: replication group support
This patch adds support for replication group.
It is built upon the generic volume groups.
It supports enable replication, disable replication,
failover replication, and list replication targets.

Client side patch is here:
    https://review.openstack.org/#/c/352229/

To test this server side patch using the client side patch:
export OS_VOLUME_API_VERSION=3.38

Make sure the group type has group_replication_enabled or
consistent_group_replication_enabled set in group specs,
and the volume types have replication_enabled set in extra specs
(to be compatible with Cheesecake).

cinder group-type-show my_group_type
+-------------+---------------------------------------+
| Property    | Value                                 |
+-------------+---------------------------------------+
| description | None                                  |
| group_specs | group_replication_enabled : <is> True |
| id          | 66462b5c-38e5-4a1a-88d6-7a7889ffec55  |
| is_public   | True                                  |
| name        | my_group_type                         |
+-------------+---------------------------------------+

cinder type-show my_volume_type
+---------------------------------+--------------------------------------+
| Property                        | Value                                |
+---------------------------------+--------------------------------------+
| description                     | None                                 |
| extra_specs                     | replication_enabled : <is> True      |
| id                              | 09c1ce01-87d5-489e-82c6-9f084107dc5c |
| is_public                       | True                                 |
| name                            | my_volume_type                       |
| os-volume-type-access:is_public | True                                 |
| qos_specs_id                    | None                                 |
+---------------------------------+--------------------------------------+

Create a group:
cinder group-create --name my_group my_group_type my_volume_type
cinder group-show my_group

Enable replication group on the primary storage:
    cinder group-enable-replication my_group
Expected results: replication_status becomes “enabled”.

Failover replication group to the secondary storage.
If secondary-backend-id is not specified, it will go to the
secondary-backend-id configured in cinder.conf:
    cinder group-failover-replication my_group
If secondary-backend-id is specified (not “default”), it will go to
the specified backend id:
    cinder group-failover-replication my_group
--secondary-backend-id <backend_id>
Expected results: replication_status becomes “failed-over”.

Run failover replication group again to fail the group back to
the primary storage:
    cinder group-failover-replication my_group
--secondary-backend-id default
Expected results: replication_status becomes “enabled”.

Disable replication group:
    cinder group-disable-replication my_group
Expected results: replication_status becomes “disabled”.

APIImpact
DocImpact
Implements: blueprint replication-cg

Change-Id: I4d488252bd670b3ebabbcc9f5e29e0e4e913765a
2017-04-30 22:49:13 -04:00
Gorka Eguileor 66efc6c8d9 Fix host check in is_backend_frozen
When we fixed the freeze mechanism we introduced a DB function called
``is_backend_frozen`` that is used to check in the DB whether a resource
(volume, group, etc.) is frozen or not.

The issue is that this function is not taking into consideration that in
some cases host/cluster_name may be coming with the pool information,
so the check will fail.

This patch fixes this by removing the pool information from host and
cluster_name parameters before doing the check in the DB.

TrivialFix

Change-Id: Ie776adf9e746cf4cb7a2856d64d0b94423149b8d
Closes-Bug: #1616974
2017-04-27 10:51:14 +02:00
TommyLike dd42d0af96 Clean up expired user messages
Use periodic task to clean up expired messages.

Change-Id: Ia44f46497b8a515de73e73c0ad70966a094a9b76
Partial-Implements: blueprint summarymessage
2017-04-18 01:09:19 +00:00
TommyLike f9012466db [BugFix] Cinder forgets to update 'deleted_at' when deleting
Usually we would update the 'deleted_at' column when
soft-deleting records, otherwise the 'db purge' command
would fail because it depends on the 'deleted_at' column.
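
In other words, soft deletes should stamp both columns, along these
lines (simplified sketch):

```python
from oslo_utils import timeutils


def soft_delete(query):
    """Mark matching rows deleted and record when that happened."""
    return query.update({'deleted': True,
                         'deleted_at': timeutils.utcnow()},
                        synchronize_session=False)
```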

Closes-Bug: #1671354

Change-Id: Ib302488d3a007df09a2e7ece40488c00bf732119
2017-03-28 19:45:43 +08:00
Jenkins 00153284b2 Merge "Remove Ocata's data migrations" 2017-02-15 14:53:56 +00:00