Commit Graph

105 Commits

Author SHA1 Message Date
Eric Harney af749d3107 Objects: Make OPTIONAL_FIELDS a tuple
In some objects this is a list, in others it is a tuple.

Just use tuples for consistency.

Change-Id: I14871e2abd8680db72ab8e3d9ca873cb608e4039
2023-02-09 12:08:59 -05:00
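
A hedged sketch of the convention this enforces (class name illustrative, not Cinder's actual code):

    # Declaring OPTIONAL_FIELDS as an immutable tuple keeps every
    # object consistent and prevents accidental mutation.
    class FakeVolume:
        OPTIONAL_FIELDS = ('metadata', 'admin_metadata')  # was a list in some objects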
Rajat Dhasmana 05cbe04518 Remove unused session parameter
In the conditional_update OVO method, we accept a session
parameter but don't use it. Also, recent changes to our DB layer
use a context-based session and we no longer pass a session
across methods, so it wouldn't be useful anyway.
This patch removes the session parameter from the
conditional_update method.

Change-Id: I4deae6cfc71c74e568a9941766847997d9a08fb1
2022-07-21 04:52:14 +00:00
Eric Harney fb430df6fb Clarify conditional_update return types
Comments currently state that conditional_update() returns
an int, but it actually returns a boolean.

Change-Id: I5d2349db2a884c7bea63c2f23da5b666a0d9029c
2021-11-09 10:20:12 -05:00
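
A hedged usage sketch of the documented behavior (assumes a Cinder persistent OVO instance `volume`; field values illustrative):

    # Atomically move a volume to 'deleting' only while it is still
    # 'available'.  The return value is a bool, not a row count.
    updated = volume.conditional_update(
        {'status': 'deleting'},
        expected_values={'status': 'available'})
    if not updated:
        print('lost the race: volume was no longer available')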
Gorka Eguileor 94dfad99c2 Improve quota usage for temporary resources
Cinder creates temporary resources (volumes and snapshots) during some
of its operations, and these resources aren't counted towards quota
usage.

Cinder currently has a problem tracking quota usage when deleting
temporary resources.

Determining which volumes are temporary is a bit inconvenient because we
have to check the migration status as well as the admin metadata, so
temporary volumes have been the source of several bugs, though they
should be properly tracked now.

For snapshots we don't have any way to track which ones are temporary,
which creates some issues:

- The quota sync mechanism will count them as normal snapshots.

- Manually deleting temporary snapshots after an operation fails will
  mess up the quota.

- If we are using snapshots instead of clones for backups of in-use
  volumes, the quota will be messed up on completion.

This patch proposes the introduction of a new field for those database
resource tables where we create temporary resources: volumes and
snapshots.

The field will be called "use_quota" and will be set to False for
temporary resources to indicate that we don't want them to be counted
towards quota on deletion.

The field is named "use_quota" rather than "temporary" in order to
allow other cases that should not count toward quota in the future.

Moving from our current mechanism to the new one is a multi-release
process because we need to have backward compatibility code for rolling
upgrades.

This patch adds everything needed to complete the multi-release
process so that anybody can submit the next releases' patches.  To do
so, the patch adds backward-compatible code introducing the feature in
this release, plus TODO comments with the exact changes that need to
be made in the next 2 releases.

The removal of the compatibility code will be done in the next release,
and in the one after that we'll remove the temporary metadata rows that
may still exist in the database.

With this new field we'll be able to make our DB queries more efficient
for quota usage calculations, reduce the chances of introducing new
quota usage bugs in the future, and allow users to filter in/out
temporary volumes on listings.

Closes-Bug: #1923828
Closes-Bug: #1923829
Closes-Bug: #1923830
Implements: blueprint temp-resources
Change-Id: I98bd4d7a54906b613daaf14233d749da1e1531d5
2021-08-26 18:47:27 +02:00
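
A hedged sketch of the deletion-time effect of the new flag (the quota helper is hypothetical, standing in for Cinder's quota engine):

    # Temporary resources are created with use_quota=False; everything
    # else defaults to True.
    def release_quota_on_delete(resource, quotas):
        if not resource['use_quota']:
            return  # temporary resource: never counted, nothing to release
        # hypothetical quota API, not Cinder's actual QUOTAS interface
        quotas.release(resource['project_id'],
                       volumes=-1, gigabytes=-resource['size'])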
Gorka Eguileor b78997c2bb Clear OVO history and compatibility
The Oslo Versioned Objects history is used to generate the manifests
required to do compatibility changes to OVOs on data serialization
between services running with different OVO history versions.

We haven't updated our OVO history since Train, so all the history and
compatibility code (the obj_make_compatible methods) is no longer
necessary.

This patch consolidates the OVO history into a single version reflecting
the current status of the OVO versions and removes the compatibility
code from the OVO classes.

Since we tend to forget to update obj_make_compatible when we add a
field (as happened with Volume in version 1.8 when we added
shared_targets), this patch also adds a note next to the "fields"
attribute (except for the list OVOs, which are never updated).

Change-Id: Ibfacccfb7c7dc70bc8f8e5ab98cc9c8feae694fb
2021-08-25 17:50:48 +02:00
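
Conceptually the change looks like this (a hedged sketch; version numbers illustrative, not Cinder's literal history):

    # Before: one history entry per release, each needing compatibility
    # handling in obj_make_compatible().  After: a single consolidated
    # entry reflecting the current OVO versions.
    OBJ_VERSIONS = {
        '1.39': {'Volume': '1.9', 'Snapshot': '1.6', 'Backup': '1.7'},
    }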
Sean McGinnis d6df2c20cb Remove collections.abc backwards compatibility
The collections module moved some abstract classes into the abc
submodule in py3. While we supported older versions of Python, we
needed to handle importing from either the old or new location.

Now that we only support runtimes that include the collections.abc
module, we can remove the backwards compatibility handling we had for
the old location.

Change-Id: Idd106a8199fa586e0b34c054383d64218383c001
Signed-off-by: Sean McGinnis <sean.mcginnis@gmail.com>
2020-10-16 07:52:36 -05:00
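
With the compatibility shim gone, the import becomes direct (a minimal sketch):

    # Python 3 only: the ABCs come straight from collections.abc.
    from collections.abc import Iterable, Mapping

    assert isinstance({}, Mapping) and isinstance([], Iterable)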
Sean McGinnis d9ce598f0c Raise hacking version to 2.0.0
We've kept hacking capped for a long time now. This raises the hacking
package version to the latest release and fixes the issues that it
found.

Change-Id: I933d541d9198f9742c95494bae6030cb3e4f2499
Signed-off-by: Sean McGinnis <sean.mcginnis@gmail.com>
2020-01-02 14:42:49 -06:00
Zuul 6cf1656a94 Merge "Support Incremental Backup Completion In RBD" 2019-09-06 16:25:51 +00:00
Eric Harney ec70a02ddf Log exception info when objects fail to init
This will now log an error with a traceback
indicating what caused the ProgrammingError exception
instead of hiding the cause of the failure.

Change-Id: I82e4d2c6961c6b456d129d8a0afd5972ff53785f
2019-08-26 15:36:09 +00:00
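
The pattern, sketched with stdlib logging (Cinder uses oslo.log, whose LOG.exception works the same way; the factory wrapper is illustrative):

    import logging

    LOG = logging.getLogger(__name__)

    def build(factory, **kwargs):
        try:
            return factory(**kwargs)
        except Exception:
            # logging.exception records the message *and* the traceback,
            # so the root cause is no longer hidden.
            LOG.exception('Error initializing %s', factory.__name__)
            raise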
wanghao 5018727f8e Support Incremental Backup Completion In RBD
Ceph RBD backend ignores the `--incremental` option when creating a
volume backup. The first backup of a given volume is always a full
backup, and each subsequent backup is always an incremental backup.
This behavior makes it impossible to remove old backups while
keeping at least one recent backup.

Since Cinder will not look up the latest backup's id to use as
parent_id when '--incremental=False', the RBD driver can rely on
parent_id to decide whether to do a full backup or not.

If the incremental flag '--incremental' is not specified, this
patch will always create a new full backup for the RBD volume.

Change-Id: I516b7c82b05b26e81195f7f106d43a9e0804082d
Closes-Bug: #1810270
Closes-Bug: #1790713
Co-Authored-By: Sofia Enriquez <lsofia.enriquez@gmail.com>
2019-08-23 23:52:22 -03:00
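
A hedged sketch of the resulting decision logic (names illustrative, not the driver's exact code):

    # The API only sets parent_id when --incremental was requested and a
    # suitable previous backup exists, so its absence now reliably means
    # "take a full backup".
    def needs_full_backup(backup):
        return backup.parent_id is None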
Sean McGinnis 920a87866b Handle collections.abc deprecations
The use of ABC classes directly from collections has been deprecated in
3.x versions of Python. The direction is to use the classes defined in
collections.abc. Python 2.7 does not have this, and Python 3.8 will
drop the backwards-compatible aliases at the old location.

Six also does not have support for this yet, so in the meantime, to
make sure we don't run into issues as folks try to move to 3.8, and to
get rid of deprecation warnings in logs, this handles importing from
the preferred location and falls back if it is not available.

Change-Id: I5a7ccd21cf83c068bddd62fc8e35cf60aa39d14f
Signed-off-by: Sean McGinnis <sean.mcginnis@gmail.com>
2019-05-14 14:45:17 -05:00
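
The fallback pattern in question (a minimal sketch):

    try:
        from collections import abc    # preferred location (Python 3)
    except ImportError:
        import collections as abc      # Python 2.7: ABCs live in collections

    assert issubclass(dict, abc.Mapping)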
Alan Bishop 541168b86e Fix A/A 'resource_backend' when scheduling volumes
Fix an issue with the 'resource_backend' included in the scheduler spec
for creating a volume associated with another volume, snapshot, or
group/cg. When running A/A, the 'resource_backend' must reference the
cluster, not the host.

Enhance the unit tests that cover this area. This includes fixing the
'expected_spec' so it copies a dictionary rather than referencing it,
so that external changes to the dictionary don't inadvertently update
the unit test's expected results.

Closes-Bug: #1808343
Change-Id: I7d414844d094945b55a094a8426687595f22de28
2018-12-13 09:56:34 -05:00
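
A hedged sketch of the fix's essence (attribute names follow Cinder's OVOs; the surrounding code is illustrative):

    # When running Active/Active the source resource belongs to a
    # cluster, so pin the scheduler to the cluster, not the host.
    resource_backend = source.cluster_name or source.host
    request_spec['resource_backend'] = resource_backend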
whoami-rajat 043ac5e574 Make code py3-compatible (global callable())
The global function callable(f) was removed in Python 3.0 (and later
reinstated in Python 3.2). It can be replaced with
isinstance(f, collections.Callable).
This patch makes that change.

Ref : https://docs.python.org/3.1/whatsnew/3.0.html

Change-Id: I47a50fffac14668f90aac043ee22a91bdb7dca41
2018-08-13 22:45:36 +05:30
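
The replacement pattern, sketched (modern spelling uses collections.abc):

    import collections.abc

    def is_callable(f):
        return isinstance(f, collections.abc.Callable)

    assert is_callable(print) and not is_callable(42)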
TommyLike 306fa19079 Support availability-zone type
Availability zone is now integrated into the volume type's
extra specs: it is recognized when creating and retyping
volumes, and volume types can now be filtered by extra spec.

Change-Id: I4e6aa7af707bd063e7edf2b0bf28e3071ad5c67a
Partial-Implements: bp support-az-in-volumetype
2018-05-17 12:09:12 +00:00
TommyLike e1ec4b4c2e Support filter backend based on operation type
During the Rocky PTG we discussed the concept of
'sold out'. In order to fully utilize the
current code, we decided to achieve this via
scheduler filters: cloud vendors can write their own
scheduler filter plugin to disable new resource
creation actions on sold-out pools. Therefore, the
only change on the Cinder framework side is to deliver the
'operation' when asking the scheduler to filter hosts (see
the sketch after this entry).

For this first stage, the initial operations are:
1. create_group
2. manage_existing
3. extend_volume
4. create_volume
5. create_snapshot
6. migrate_volume
7. retype_volume
8. manage_existing_snapshot

Partial-Implements: bp support-mark-pool-sold-out
Change-Id: I4f0a14444675ebd0fe6397a5ff2ef9dca62b4453
2018-05-07 17:58:36 +08:00
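
A hedged sketch of a vendor filter consuming the new key (class and method names follow Cinder's scheduler filter interface of this era; the sold-out check itself is hypothetical and vendor-defined):

    from cinder.scheduler import filters

    class SoldOutFilter(filters.BaseBackendFilter):
        """Reject new resource creation on sold-out pools."""

        def backend_passes(self, backend_state, filter_properties):
            op = filter_properties.get('operation')
            if op in ('create_volume', 'create_group', 'manage_existing'):
                return not pool_is_sold_out(backend_state)  # hypothetical
            return True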
Zuul 0c2efdba32 Merge "Fix leftovers after backup abort" 2018-03-11 01:48:44 +00:00
Gorka Eguileor 4ff9e63707 Fix leftovers after backup abort
When aborting a backup on any chunked driver we will leave chunks
in the backend without Cinder knowing it, and with no way of deleting
them from Cinder.  In that case the only way to delete them is to go
to the storage itself and delete them manually.

Another issue that will happen if we are using a temporary resource for
the backup, be it a volume or a snapshot, is that it will not be cleaned
up and will be left for us to manually issue the delete through the
Cinder API.

The first issue is caused by the chunked driver's assumption that the
`refresh` method of an OVO will ignore the context's `read_deleted`
setting and always read the record, which is not true.  Since refresh
doesn't work once the record is deleted, there will be leftovers if
the backup's status transitions to deleted while a chunk is being
processed.

The second issue is caused by the same thing, but in this case it
happens when the backup manager refreshes the backup OVO to learn
which temporary resource it needs to clean up.

This patch fixes the incorrect behavior of the backup abort mechanism
to prevent leaving things behind.

Closes-Bug: #1746559
Change-Id: Idcfdbf815f404982d26618710a291054f19be736
2018-02-01 12:08:23 +01:00
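
The essence of the fix, sketched: fetch the backup with a context that can also see soft-deleted rows, instead of assuming refresh() ignores read_deleted (exact calls illustrative):

    # Without read_deleted='yes' the lookup fails once the backup row is
    # soft-deleted mid-operation, leaving chunks behind.
    ctxt = context.elevated()
    ctxt.read_deleted = 'yes'
    backup = objects.Backup.get_by_id(ctxt, backup.id)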
Alan Bishop bec756e040 Fix how backups handle encryption key IDs
As described in the launchpad bug [1], backup operations must take care
to ensure encryption key ID resources aren't lost, and that restored
volumes always have a unique encryption key ID.

[1] https://bugs.launchpad.net/cinder/+bug/1745180

This patch adds an 'encryption_key_id' column to the backups table. Now,
when a backup is created and the source volume's encryption key is
cloned, the cloned key ID is stored in the table. This makes it possible
to delete the cloned key ID when the backup is deleted. The code that
clones the volume's encryption key has been relocated from the common
backup driver layer to the backup manager. The backup manager now has
full responsibility for managing encryption key IDs.

When restoring a backup of an encrypted volume, the backup manager now
does this:
1) If the restored volume's encryption key ID has changed, delete the
   key ID it had prior to the restore operation. This ensures no key IDs
   are leaked.
2) If the 'encryption_key_id' field in the backup table is empty, glean
   the backup's cloned key ID from the backup's "volume base metadata."
   This helps populate the 'encryption_key_id' column for backup table
   entries created prior to when the column existed.
3) Re-clone the backup's key ID to ensure the restored volume's key ID
   is always unique.

Closes-Bug: #1745180
Change-Id: I6cadcbf839d146b2fd57d7019f73dce303f9e10b
2018-01-30 22:12:49 +00:00
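
A hedged, pseudocode-level sketch of the restore-time steps above (helper names hypothetical; Cinder's real code goes through the castellan key manager):

    def finalize_restore(ctxt, key_mgr, volume, backup):
        old_key_id = volume.encryption_key_id
        # 2) fall back to the "volume base metadata" for old backup rows
        backup_key_id = backup.encryption_key_id or key_id_from_metadata(backup)
        # 3) re-clone so the restored volume's key ID is always unique
        volume.encryption_key_id = clone_key(ctxt, key_mgr, backup_key_id)
        volume.save()
        # 1) drop the pre-restore key ID if it changed, so nothing leaks
        if old_key_id and old_key_id != volume.encryption_key_id:
            key_mgr.delete(ctxt, old_key_id)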
Matt Riedemann 7875f14199 Store host connector in volume_attachment.connector column
The attachment_specs table's key and value columns are strict
strings, which means things like a wwpns list value for a Fibre
Channel connector can't be stored there, resulting in a DB error
during attach with the new volume attach flow in Nova.

The attachment_specs table is arguably not the best way to store
this data, which is just a dict like the connection_info.

A better way to store this is as a serialized json blob on the
volume_attachment record itself.

This patch adds the database migration to add the column and
an online data migration routine to migrate existing attachment_specs
entries when a volume attachment object is loaded from the database.

The volume manager attachment_update flow is changed to store
new connector attachments in the volume_attachment table directly.

An online data migration hook for the CLI will be added in a follow
up change.

Change-Id: Ica1f0e06adf0afcf740aad8cdc8d133ada1760c8
Closes-Bug: #1737724
2017-12-14 14:29:41 -05:00
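
The storage idea, sketched: serialize the connector dict to one JSON text column instead of strict string key/value rows (a minimal standalone example; connector values illustrative):

    import json

    connector = {'host': 'compute1', 'multipath': False,
                 'wwpns': ['500143802426baf4', '500143802426baf5']}

    # store: one JSON blob on the volume_attachment row
    stored = json.dumps(connector)

    # load: deserialize when the attachment object is hydrated
    assert json.loads(stored)['wwpns'][0] == '500143802426baf4'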
John Griffith 2fa6fdd784 Add shared_targets flag to Volumes
This adds a bool column to volumes to notify consumers whether
the backend hosting the volume utilizes shared_targets or not.

We use the volume drivers' capabilities report to determine
this and default to True if a driver doesn't report anything.

The purpose of the column is to notify Nova that it needs to
do some sort of locking around connect/disconnect to be sure
other volumes on the same node aren't sharing the iscsi connection.

Using a default of "True" is safe: although locking and doing the
extra checks might be somewhat inefficient, it works fine because
it will just appear that there are never any other volumes in use.

So this change adds the column to the DB as well as an online migration
to go through and update any existing volumes.  With this and the
service_uuid column consumers will have everything they need to:
1. determine if they need to lock
2. use the service_uuid as a unique lock name

The last remaining change in this set will be to add the fields to
the view-builder and bump the API version.

Change-Id: If600c28c86511cfb83f38d92cf6418954fb4975e
2017-11-28 13:55:23 -07:00
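
A hedged sketch of the defaulting rule (standalone; not Cinder's literal code):

    # Default to True unless the driver's capability report explicitly
    # says the backend does not share targets between volumes.
    def shared_targets_value(capabilities):
        return bool(capabilities.get('shared_targets', True))

    assert shared_targets_value({}) is True
    assert shared_targets_value({'shared_targets': False}) is False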
luqitao 39694623e4 Support create volume from backup
This patch implements the spec of creating volume from backup.

Change-Id: Icdc6c7606c43243a9e12d7a42df293b729f589e5
Partial-Implements: blueprint support-create-volume-from-backup
2017-11-28 09:16:59 +08:00
John Griffith cdb6cdcc96 Add service_uuid FK to volumes
This patch adds a service_uuid FK to the volumes table.
Up until now we've just done some host name parsing to match up
the service node with where the volume is being serviced from.

With this, we now have a user-visible unique identifier that
indicates which node the volume service for a particular volume
is provided by.

We'll use this for things like shared-target locks going
forward.

Change-Id: Ia5d1e988256246e3552e3a770146503ea7f7bf73
2017-11-21 18:27:32 +00:00
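
Together with shared_targets, this gives consumers a locking recipe; a hedged sketch of how a consumer might name the lock (lock-name format hypothetical):

    def attach_lock_name(volume):
        # Lock only when the backend shares targets between volumes; the
        # service_uuid is a stable, unique per-service lock name.
        if volume['shared_targets']:
            return 'connect_volume-%s' % volume['service_uuid']
        return None  # no locking needed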
TommyLike 935eee712b Schedule the request to scheduler when creating from snapshot/volume
Pass the request to the scheduler rather than the volume service
in order to check the backend's capacity.

Change-Id: I970c10f9b50092b659fa2d88bd6a02f6c69899f2
Partial-Implements: blueprint inspection-mechanism-for-capacity-limited-host
2017-11-02 16:00:35 +08:00
John Griffith 950e693697 Make service object UUID not nullable
Fix the UUID entry for the newly added service attribute and
update the unit tests appropriately.

This makes the UUID entry in the service object not nullable
and fixes up the unit tests to work properly.  Also introduces
a unit test specifically for the online migration api in the db.

Closes-Bug: #1727091

Change-Id: I17d3a873cfc8f056c2d31f6c8710489785998d3c
2017-10-26 10:17:50 -06:00
John Griffith e88d3b2c82 Fix migration 112 to use live_data_migration API
In the 112 db migration I was being lazy, generating and
updating the newly added UUID column in the migration itself.
I also didn't update the Service object (left that for the
follow-on patch).

Turns out we're going to want/need the online_data_migration
pieces for some follow up work and we might as well do this at
least a little bit more efficiently/correctly now.

This patch modifies the db migration to NOT try to populate
the newly added UUID fields in the Service table, and it
updates the Service object properly, including adding the UUID
generation components to service.create()

This also fixes up what we had in place for the online_data_migration
code in cmd/manage.py; note that we had started a framework there, but
it wasn't being used up to this point.

Change-Id: I6696a15cd2c8fbf851a59b8d6d60ae1981bb1b89
Closes-Bug: #1721837
2017-10-20 18:24:33 -06:00
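
The shape of such an online data migration, sketched (the (total, done) return convention follows cinder-manage's online_data_migrations runner; the lookup helpers are hypothetical):

    import uuid

    def populate_service_uuids(context, max_count):
        """Backfill NULL Service.uuid values in bounded batches."""
        total = count_services_lacking_uuid(context)                    # hypothetical
        services = get_services_lacking_uuid(context, limit=max_count)  # hypothetical
        done = 0
        for service in services:
            service.uuid = str(uuid.uuid4())
            service.save()
            done += 1
        return total, done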
wangxiyuan f9a4ee90b7 Support metadata for backup resource
Currently only volumes and snapshots have a metadata property.
We should support it for backups as well.

This patch adds/updates the related DB and OVO models and
updates the related backup CRUD APIs.

Change-Id: I6c4c175ec3be9423cdc821ccb52578dcfe442cbe
Implements: blueprint metadata-for-backup
2017-07-26 14:23:58 +08:00
TommyLike 8fba9a9080 Cinder volume revert to snapshot
This patch implements the spec of reverting volume to
latest snapshot.
Related tempest and client patches:

[1] https://review.openstack.org/#/c/463906/
[2] https://review.openstack.org/#/c/464903/

APIImpact
DocImpact
Partial-Implements: blueprint revert-volume-to-snapshot

Change-Id: Ib20d749c2118c350b5fa0361ed1811296d518a17
2017-06-21 10:35:32 +08:00
xing-yang 18744ba199 Tiramisu: replication group support
This patch adds support for replication group.
It is built upon the generic volume groups.
It supports enable replication, disable replication,
failover replication, and list replication targets.

Client side patch is here:
    https://review.openstack.org/#/c/352229/

To test this server side patch using the client side patch:
export OS_VOLUME_API_VERSION=3.38

Make sure the group type has group_replication_enabled or
consistent_group_replication_enabled set in group specs,
and the volume types have replication_enabled set in extra specs
(to be compatible with Cheesecake).

cinder group-type-show my_group_type
+-------------+---------------------------------------+
| Property    | Value                                 |
+-------------+---------------------------------------+
| description | None                                  |
| group_specs | group_replication_enabled : <is> True |
| id          | 66462b5c-38e5-4a1a-88d6-7a7889ffec55  |
| is_public   | True                                  |
| name        | my_group_type                         |
+-------------+---------------------------------------+

cinder type-show my_volume_type
+---------------------------------+--------------------------------------+
| Property                        | Value                                |
+---------------------------------+--------------------------------------+
| description                     | None                                 |
| extra_specs                     | replication_enabled : <is> True      |
| id                              | 09c1ce01-87d5-489e-82c6-9f084107dc5c |
| is_public                       | True                                 |
| name                            | my_volume_type                       |
| os-volume-type-access:is_public | True                                 |
| qos_specs_id                    | None                                 |
+---------------------------------+--------------------------------------+

Create a group:
cinder group-create --name my_group my_group_type my_volume_type
cinder group-show my_group

Enable replication group on the primary storage:
    cinder group-enable-replication my_group
Expected results: replication_status becomes “enabled”.

Failover replication group to the secondary storage.
If secondary-backend-id is not specified, it will go to the
secondary-backend-id configured in cinder.conf:
    cinder group-failover-replication my_group
If secondary-backend-id is specified (not “default”), it will go to
the specified backend id:
    cinder group-failover-replication my_group
--secondary-backend-id <backend_id>
Expected results: replication_status becomes “failed-over”.

Run failover replication group again to fail the group back to
the primary storage:
    cinder group-failover-replication my_group
--secondary-backend-id default
Expected results: replication_status becomes “enabled”.

Disable replication group:
    cinder group-disable-replication my_group
Expected results: replication_status becomes “disabled”.

APIImpact
DocImpact
Implements: blueprint replication-cg

Change-Id: I4d488252bd670b3ebabbcc9f5e29e0e4e913765a
2017-04-30 22:49:13 -04:00
Gorka Eguileor a60a09ce5f Add service dynamic log change/query
This patch adds 2 new APIs for microversion 3.32: one to dynamically
change the log level of cinder services, and another to query their
current log levels.

DocImpact
APIImpact
Implements: blueprint dynamic-log-levels
Change-Id: Ia5ef81135044733f1dd3970a116f97457b0371de
2017-05-16 13:37:35 +02:00
Ngo Quoc Cuong 64eeff2e90 Trivial fix typos while reading code
Change-Id: Id7785bc448d92935d94d9babb667c2733002dd35
2017-05-04 10:27:57 +07:00
TommyLike 8031fb1e98 Add 'connection_info' to attachment object
There are some issues around the new attach/detach API/CLI;
we are fixing them step by step. This patch adds the
'connection_info' attribute to the attachment object, and adds
related test cases to the API unit tests.

Depends-On: 87982a5677
Closes-Bug: #1681297

Change-Id: Idbc1049e8adf1d5b955bda01d58bb6b89fc6c5c7
2017-04-19 17:44:48 +08:00
wangxiyuan d304f621c2 Don't change volume's status when create backups from snapshots
When users try to create a backup from a snapshot, the volume
related to the snapshot is set to backing-up during the action
and can't be used for other actions. Creating a backup from a
large snapshot, such as one larger than 1 TB, generally takes a
few hours, and it's a real problem that the volume is unavailable
for such a long time.

If a snapshot is provided, we now change the status of the
snapshot; otherwise, we change the status of the volume as usual
(see the sketch after this entry).

This patch also adds the "backing-up" status for snapshots.

DocImpact

Change-Id: I86d34c470fabbf4132b5e004d9f368e751c893a5
Closes-bug: #1670541
2017-03-21 09:46:33 +08:00
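
A hedged sketch of the status-target selection described above (standalone; not Cinder's literal code):

    # Flip status on the snapshot when one is given, leaving the volume
    # usable during long-running backups.
    def backing_up_target(volume, snapshot=None):
        target = snapshot if snapshot is not None else volume
        target['status'] = 'backing-up'
        return target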
Karthik Prabhu Vinod 8e554059e2 Switch ManageableSnapshots & ManageableVolumes list to OVO
Currently, the results from the ManageableVolumes & ManageableSnapshots
lists are being returned as dicts. These need to be modeled as OVOs, as
they are returned via RPC from the driver to the API layer.

We also change all occurrences of the manageable volumes & snapshots
listings to OVOs.
Change-Id: Id63e4c35deec6dccc0ae6a82b004618cd214d96e
2017-01-25 00:41:30 +00:00
Gorka Eguileor 2195885e77 Fix replication freeze mechanism
Freeze functionality in the replication feature doesn't work as
expected, since it is not used by the scheduler to exclude backends,
nor by the API or volume nodes, so API-to-volume operations such as
delete and create snapshot still go through.

This patch fixes the freeze mechanism by excluding frozen backends in
the scheduler and by checking whether the service is frozen in all
other modifying operations.

Since the extend operation now goes through the scheduler, it will be
frozen there.

Closes-Bug: #1616974
Change-Id: I4561500746c95b96136878ddfde8ca88e96b28c6
2017-01-19 10:42:24 +01:00
Gorka Eguileor b4a13281ea Make Replication support Active-Active
This patch adds new methods to our failover mechanism to allow failover
to work when a backend is clustered.

Adds REST API microversion 3.26 that adds a new `failover` method
equivalent to `failover_host` but accepting `cluster` field as well as
the `host` field.

Thaw and Freeze are updated to update cluster and all services within
the cluster.

Now cluster listings accept the new filtering fields `replication_status`,
`frozen`, and `active_backend_id`.

Summary listings return `replication_status` field and detailed listings
also return `frozen` and `active_backend_id`.

Specs: https://review.openstack.org/401392

APIImpact: New service failover action and new fields in cluster listings.
Implements: blueprint cinder-volume-active-active-support
Change-Id: Id3291b28242d5814c259283fa629b48f22e70260
2017-01-19 10:42:18 +01:00
xing-yang 44ebdd2252 CG API changes for migrating CGs
CG APIs work as follows:
 * Create CG - Create only in groups table
 * Modify CG - Modify in CG table if CG in CG table, otherwise modify
               in groups table.
 * Delete CG - Delete from CG or groups table depending on where it is
 * List CG - Check both CG and groups tables
 * List CG snapshots - Check both CG and groups tables
 * Show CG - Check both tables
 * Show CG snapshot - Check both tables
 * Create CG snapshot - Create either in CG or groups table depending on
                        the CG.
 * Create CG from source - Create in either CG or groups table
                           depending on the source.
 * Create volume - Add volume either to CG or group

Additional notes:
 * default_cgsnapshot_type is reserved for migrating CGs.
 * Group APIs will only write/read in/from the groups table.
 * Group APIs won't work on groups with default_cgsnapshot_type.
 * Groups with default_cgsnapshot_type can only be operated by CG APIs.
 * After CG tables are removed, we'll allow default_cgsnapshot_type
   to be used by group APIs.

Partial-Implements: blueprint generic-volume-group
Change-Id: Idd88a5c9587023a56231de42ce59d672e9600770
2016-11-22 19:08:20 -05:00
Eric Harney 3b17143979 Add 'unmanaging' state to volumes and snapshots
'unmanaging' is different from 'deleting', and the two must
be distinguishable so we know how to recover from a failed
delete/unmanage operation.  The current state of things
makes it possible for an 'unmanaging' volume to end up being
accidentally deleted.

Add unmanage notifications, as well.

Partial-Bug: #1478959

Change-Id: I06f60a584f219043673095346282c429773704f8
2016-12-19 12:24:12 -05:00
Jenkins 323c8acd91 Merge "Add get_all capability to volume_attachments" 2016-12-18 22:47:47 +00:00
John Griffith 3f930bb10d Add get_all capability to volume_attachments
One of the useful things that was missing from the
volume_attachments code was get_all methods.

This patch adds a get_all and a get_all_by_project, it
also goes ahead and adds some filtering capability to
the existing get_by_xxxx calls since we added the framework
for it in the get_all additions.

I also looked at refactoring our db methods for attach to just:
  * attach_create
  * attach_update
  * attach_destroy
  * attach_get
  * attach_get_all

This would probably be good as an independent effort to
clean things up and bring these calls more in line with
others, but there's a lot of work to update the objects
and existing code, might be better to wait until after
implementing the new attach API.

Co-Authored-By: Michał Dulko <michal.dulko@intel.com>

Change-Id: I40614fe702f726c74ff05f93faaf6ee79253447f
2016-12-16 14:25:21 -07:00
Gorka Eguileor 9acf079b8c Support A/A on Scheduler operations
This patch allows scheduler to work with clustered hosts to support A/A
operations.

Reporting capabilities of clustered hosts will be grouped by the
cluster_name instead of the host, and non clustered hosts will still be
stored by host.

To avoid replacing a newer capability report with an older version we
timestamp capabilities on the volumes (it's backward compatible) and
only replace currently stored values in scheduler when they are newer.

Following actions now support A/A operation:

- manage_existing
- manage_existing_snapshot
- get_pools
- create_volume
- retype
- migrate_volume_to_host
- create_consistencygroup
- create_group
- update_service_capabilities
- extend_volume

And Affinity and Driver filters have been updated.

The new functionality to notify service capabilities has not been
changed to Active/Active and will be done in another patch.

APIImpact: Added microversion 3.16
Specs: https://review.openstack.org/327283
Implements: blueprint cinder-volume-active-active-support
Change-Id: I611e75500f3d5281188c5aae287c62e5810e6b72
2016-12-14 17:48:28 +01:00
Jenkins d920648080 Merge "Delete the redundant expression expected_attrs" 2016-11-23 20:37:32 +00:00
xianming mao 04807b9f50 Delete the redundant expression expected_attrs
There are two similar expressions; the only difference is that the
second one takes an additional optional function parameter, so the
second can replace the first.
We can delete the first one because the second in fact overrides it.

Change-Id: I79a6fada5a00f3774ff6f5b203b44a177e0153a5
2016-11-16 11:00:16 +08:00
Szymon Borkowski a08aa7ad79 Convert backup_device to OVO
This commit introduces the BackupDevice object to formalize data
previously sent over RPC as an undefined dict. This is required to be
able to make non-backward-compatible changes to the data sent as this
parameter while maintaining compatibility with the previous release,
i.e. to support rolling upgrades.

Change-Id: Ie57d84e32ec1c5fcfac27a7bb6d4bbb189108a5b
Partial-Implements: blueprint cinder-objects
2016-11-08 16:49:53 +01:00
Gorka Eguileor d2ec578725 Make c-vol use workers table for cleanup
To be able to support multiple hosts working with the same resources we
have added the workers table to keep track of which host is working with
each specific resource.

This patch makes c-vol service work with this new table by adding
entries on cleanable operations and removing them once these operations
have completed.

Service cleanup on initialization has also been changed to use this
new table, so hosts will clean up only resources from operations they
left in the air, and leave alone any operations that are being
processed by other hosts.

Specs: https://review.openstack.org/236977

Implements: blueprint cinder-volume-active-active-support
Change-Id: I4e5440b8450558add372214fd1a0373ab4ad2434
2016-11-03 10:17:38 +01:00
John Griffith 6f174b4126 Remove volid from attachments_get_by_host|instance
Attachments_get_by_host|instance should be just that; associating
them with a specified volume-id doesn't really solve the problem.

If that relationship is needed, a simple get_by_volume will work with
some inspection.

This patch removes the volume_id arg from those get methods.  A follow
up patch will add get_all with filters for more complex relationships.

Change-Id: Ic5ffdced96fdf780cce2a1227c5f2a599860f0ca
Closes-Bug: #1632433
2016-10-11 12:43:26 -06:00
Gorka Eguileor 95170e54b2 Add cleanable base object and cleanup request VO
This patch adds the CinderCleanableObject class, a Versioned Object
base class, and the CleanupRequest Versioned Object, which will be
used to pass cleanup requests to c-vol and c-bak nodes but will not
have a DB representation.

This will be used for non Active-Active configurations as well.

Specs: https://review.openstack.org/236977

Implements: blueprint cinder-volume-active-active-support
Change-Id: Ia84b2f55a782c5e881bab03a8469b884f265910c
2016-10-04 15:17:31 +02:00
Gorka Eguileor e4921b4990 Allow attribute lazy loading in VolumeType OVO
The VolumeType OVO class does not allow lazy loading of attributes,
which is usually not a problem if you are loading the volume type
using get_by_id, but is a problem if you load, for example, a list of
volumes and then access their volume_type.extra_specs, as this will
raise an error because extra_specs cannot be retrieved.
This patch changes VolumeType OVO class to allow lazy loading all
optional fields.

Change-Id: Ief143a6c981cec4bdb21888776d610aa9d5dc9d8
2016-09-26 14:29:32 +02:00
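
The lazy-loading hook, sketched (obj_load_attr is the standard oslo.versionedobjects hook; the DB helper is hypothetical):

    def obj_load_attr(self, attrname):
        # Called by the OVO machinery when an unset attribute is accessed.
        if attrname not in self.OPTIONAL_FIELDS:
            raise RuntimeError('Cannot lazy-load %s' % attrname)
        if attrname == 'extra_specs':
            # hypothetical DB helper standing in for the real lookup
            self.extra_specs = db_get_extra_specs(self._context, self.id)
        self.obj_reset_changes(fields=[attrname])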
Gorka Eguileor 2987b33970 Have a default OPTIONAL_FIELDS for persistent OVOs
The conditional update method from the persistent OVO class blindly
relies on persistent OVOs having an OPTIONAL_FIELDS attribute without
defining a default.
This patch adds an empty default so we can do conditional updates on
persistent OVO instances that don't have optional fields.

Change-Id: Icf640da34df0990b5ad2609d5d230ac9b0a51311
2016-09-09 21:22:48 +02:00
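
The default in question, sketched (standalone; class name illustrative):

    # An empty-tuple default on the persistent base class lets
    # conditional_update() iterate OPTIONAL_FIELDS safely even for
    # objects that declare no optional fields.
    class PersistentObjectSketch:
        OPTIONAL_FIELDS = ()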
xing-yang 325f99a64a Add group snapshots - db and objects
This is the third patch that implements the generic-volume-group
blueprint. It adds database and object changes in order to support
group snapshots and create group from source. The API changes will
be added in the next patch.

This patch depends on the second patch which adds create/delete/update
groups support which was already merged:
    https://review.openstack.org/#/c/322459/

The next patch to add volume manager changes is here:
    https://review.openstack.org/#/c/361376/

Partial-Implements: blueprint generic-volume-group
Change-Id: I2d11efe38af80d2eb025afbbab1ce8e6a269f83f
2016-07-18 22:19:10 -04:00
Jenkins 631f39101c Merge "TrivialFix: remove unnecessary VERSION_COMPATIBILITY" 2016-08-29 09:53:54 +00:00