Commit Graph

44 Commits

Author SHA1 Message Date
Sean McGinnis a7ab5ba1f3 Use new oslo.db test cases
Base migration test cases have moved to a new location in oslo.db
resulting in DeprecationWarnings in the Cinder logs. This updates
the migration tests to use the new location for these base test
classes.

Also tries to clean up slightly by moving the test_migrations
tests under the db directory.

Change-Id: Iaf77db73e368aee0d09b4c8e76f180f394f1aa37
Closes-bug: #1733903
2017-11-27 14:15:56 -06:00
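A minimal sketch of the relocation this commit describes, assuming the tests move to oslo.db's maintained fixture module (the class names come from oslo.db's public API; the exact classes used by the patch may differ):

    # Old import, which now emits a DeprecationWarning:
    #   from oslo_db.sqlalchemy import test_base
    # New location of the opportunistic DB test machinery:
    from oslo_db.sqlalchemy import test_fixtures
    from oslotest import base


    class TestMigrations(test_fixtures.OpportunisticDBTestMixin,
                         base.BaseTestCase):
        """Migration tests inherit from the relocated base classes."""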
John Griffith cdb6cdcc96 Add service_uuid FK to volumes
This patch adds a service_uuid FK to the volumes table.
Up until now we've just done some host name parsing to match up the
service node with where the volume is being serviced from.

With this, we now have a user-visible unique identifier indicating
which node the volume service for a particular volume runs on.

We'll use this for things like share-target locks going
forward.

Change-Id: Ia5d1e988256246e3552e3a770146503ea7f7bf73
2017-11-21 18:27:32 +00:00
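A hedged sketch of what such a migration could look like in Cinder's sqlalchemy-migrate style (column and constraint details are illustrative, not copied from the patch):

    from migrate import ForeignKeyConstraint
    from sqlalchemy import Column, MetaData, String, Table


    def upgrade(migrate_engine):
        meta = MetaData(bind=migrate_engine)
        services = Table('services', meta, autoload=True)
        volumes = Table('volumes', meta, autoload=True)

        # Add the new column, then point it at services.uuid.
        volumes.create_column(Column('service_uuid', String(36),
                                     nullable=True))
        ForeignKeyConstraint(columns=[volumes.c.service_uuid],
                             refcolumns=[services.c.uuid]).create()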
wuyuting 12819a1b6d Add index for reservations on (deleted, uuid)
The query for uuid_reservations currently does a full table scan.
This adds an index so frequent lookups by uuid do not bog down
the database.

Change-Id: I149a9de82fc1003b88e0c0852a0b64634f0c622e
Closes-Bug: #1540750
2017-10-18 08:50:04 -05:00
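A minimal sketch of the corresponding sqlalchemy-migrate script (the index name is illustrative):

    from sqlalchemy import Index, MetaData, Table


    def upgrade(migrate_engine):
        meta = MetaData(bind=migrate_engine)
        reservations = Table('reservations', meta, autoload=True)
        # Composite index so lookups filtered on (deleted, uuid) no
        # longer require a full table scan.
        Index('reservations_deleted_uuid_idx',
              reservations.c.deleted, reservations.c.uuid).create()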
Jenkins 313f76ecfe Merge "Add uuid to services entries" 2017-10-05 07:52:54 +00:00
j-griffith 2becd847fb Add uuid to services entries
Our services ID column is still just an integer and that's *ok* for the
most part.  It would be nice, though, to advertise a more meaningful
and unique identifier to the users of services.

This change adds an indexable UUID column to the services table; this
will be useful for things like identifying backends without leaking
details to end users.

Change-Id: I67e52c6a8634b74bf5975290298d6fbcadc7dd50
2017-10-03 04:48:27 +00:00
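A sketch of the shape of this change, assuming a sqlalchemy-migrate script that also backfills existing rows inline (the index name and backfill placement are illustrative; the real patch may backfill elsewhere):

    import uuid

    from sqlalchemy import Column, Index, MetaData, String, Table


    def upgrade(migrate_engine):
        meta = MetaData(bind=migrate_engine)
        services = Table('services', meta, autoload=True)
        services.create_column(Column('uuid', String(36), nullable=True))
        Index('services_uuid_idx', services.c.uuid, unique=True).create()

        # Existing rows need identifiers too; new rows get one on create.
        for row in migrate_engine.execute(services.select()):
            migrate_engine.execute(
                services.update().where(services.c.id == row.id)
                .values(uuid=str(uuid.uuid4())))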
Sean McGinnis a9afbddd11 Compact Newton database migrations
This compacts all database migrations up to Newton into one
initial schema to remove the need to apply every database
change along the way.

Change-Id: I7b5833296292df2e6cc7d8d9306115e590fff25a
2017-09-26 00:32:21 -05:00
TommyLike ad90f40d85 Don't lock whole project's resource when reserve/commit
We lock the whole project's resources by using a 'FOR UPDATE'
statement when reserving and committing quotas.
This patch only locks the required resources by
adding a composite index on (project_id, resource) and
filtering quota_usages by project and resources.

Change-Id: Ia6fdcbe048e2a5614e789926a21c687c959d15e9
2017-08-30 12:59:14 +00:00
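A sketch of the narrowed locking query in SQLAlchemy terms, assuming Cinder's usual session/model plumbing (the function name is illustrative):

    from cinder.db.sqlalchemy import models


    def _get_quota_usages(session, project_id, resources):
        # Lock only the rows taking part in this reservation instead of
        # every row in the project; the new composite
        # (project_id, resource) index keeps this a narrow range scan.
        return (session.query(models.QuotaUsage)
                .filter_by(project_id=project_id)
                .filter(models.QuotaUsage.resource.in_(resources))
                .with_for_update()
                .all())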
wangxiyuan f9a4ee90b7 Support metadata for backup resource
Now only volumes and snapshots have a metadata property.
We should support it for backups as well.

This patch adds/updates the related db and ovo models and
updates the related backup CRUD APIs.

Change-Id: I6c4c175ec3be9423cdc821ccb52578dcfe442cbe
Implements: blueprint metadata-for-backup
2017-07-26 14:23:58 +08:00
Pranali Deore 5b9ae3cde8 Modify the length of project_id for Messages
Length of 'project_id' in Messages should be 255,
similar to other ORM models.

Modified the project_id length from 36 to 255.

Change-Id: I3b4e14bcf490046ec2251de4ca95571f439ca0eb
Closes-Bug: 1691060
2017-07-06 12:32:41 +05:30
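A minimal sketch of the column widening in sqlalchemy-migrate style:

    from sqlalchemy import MetaData, String, Table


    def upgrade(migrate_engine):
        meta = MetaData(bind=migrate_engine)
        messages = Table('messages', meta, autoload=True)
        # Widen project_id from 36 to 255 to match the other ORM models.
        messages.c.project_id.alter(type=String(255))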
TommyLike 848ff3ad86 Explicit user messages
Use 'action', 'resource', 'detail' to replace 'event'
in user messages.

APIImpact
DocImpact

Partial-Implements: blueprint better-user-message
Change-Id: I8a635a07ed6ff93ccb71df8c404c927d1ecef005
2017-06-16 14:35:24 +08:00
xing-yang 18744ba199 Tiramisu: replication group support
This patch adds support for replication group.
It is built upon the generic volume groups.
It supports enable replication, disable replication,
failover replication, and list replication targets.

Client side patch is here:
    https://review.openstack.org/#/c/352229/

To test this server side patch using the client side patch:
export OS_VOLUME_API_VERSION=3.38

Make sure the group type has group_replication_enabled or
consistent_group_replication_enabled set in group specs,
and the volume types have replication_enabled set in extra specs
(to be compatible with Cheesecake).

cinder group-type-show my_group_type
+-------------+---------------------------------------+
| Property    | Value                                 |
+-------------+---------------------------------------+
| description | None                                  |
| group_specs | group_replication_enabled : <is> True |
| id          | 66462b5c-38e5-4a1a-88d6-7a7889ffec55  |
| is_public   | True                                  |
| name        | my_group_type                         |
+-------------+---------------------------------------+

cinder type-show my_volume_type
+---------------------------------+--------------------------------------+
| Property                        | Value                                |
+---------------------------------+--------------------------------------+
| description                     | None                                 |
| extra_specs                     | replication_enabled : <is> True      |
| id                              | 09c1ce01-87d5-489e-82c6-9f084107dc5c |
| is_public                       | True                                 |
| name                            | my_volume_type                       |
| os-volume-type-access:is_public | True                                 |
| qos_specs_id                    | None                                 |
+---------------------------------+--------------------------------------+

Create a group:
cinder group-create --name my_group my_group_type my_volume_type
cinder group-show my_group

Enable replication group on the primary storage:
    cinder group-enable-replication my_group
Expected results: replication_status becomes “enabled”.

Failover replication group to the secondary storage.
If secondary-backend-id is not specified, it will go to the
secondary-backend-id configured in cinder.conf:
    cinder group-failover-replication my_group
If secondary-backend-id is specified (not “default”), it will go to
the specified backend id:
    cinder group-failover-replication my_group
--secondary-backend-id <backend_id>
Expected results: replication_status becomes “failed-over”.

Run failover replication group again to fail the group back to
the primary storage:
    cinder group-failover-replication my_group
--secondary-backend-id default
Expected results: replication_status becomes “enabled”.

Disable replication group:
    cinder group-disable-replication my_group
Expected results: replication_status becomes “disabled”.

APIImpact
DocImpact
Implements: blueprint replication-cg

Change-Id: I4d488252bd670b3ebabbcc9f5e29e0e4e913765a
2017-04-30 22:49:13 -04:00
Gorka Eguileor 3963595bed Make failover DB changes consistent
There are some inconsistencies among the different drivers when it comes
to setting the replication_status field in the DB on a failover.

To ensure that consistent behavior is provided, the manager will
oversee all DB changes during failover and failback and make additional
changes when necessary to maintain this consistency.

The drivers will no longer need to process non-replicated volumes on
failover/failback and can simplify their logic, as they will only
receive replicated volumes based on the replication_status field.

On failover:

- Non-replicated volumes will have their status changed to error, have
  their current status saved to the previous_status field, and their
  replication_status changed to not_replicated.
- All non-replicated snapshots will have their statuses changed to
  error.
- All replicated volumes that failed on the failover will get their
  status changed to error, their current status saved in
  previous_status, and their replication_status set to failover-error.
- All snapshots from volumes with failover errors will have their status
  set to error.
- All volumes successfully failed over will get their replication_status
  changed to failed-over.

On failback:

- All non-replicated volumes and snapshots will have no model update.
- All replicated volumes that failed on the failover will get their
  status changed to error, their current status saved in
  previous_status, and their replication_status set to failover-error.
- All snapshots from volumes with failover errors will have their status
  set to error.
- All volumes successfully failed back will get their replication_status
  changed to enabled.

A more detailed explanation is provided in the updated replication
devref [3].

Since some drivers were not setting the replication_status on
creation [1] and retype [2], we have added a migration script to fix
this inconsistency.

[1]: Fixed in change: I10a7931ed4a73dfd2b69f0db979bc32a71aedb11
[2]: Fixed in change: Id4df2b7ad55e9b5def91329f5437da9caa185c30
[3]: Change: I180b89c1ceaeea6d4da8e995e46181990d52825f

Closes-Bug: #1643886
Change-Id: Iefc409f2b19d8575a4ca1ec98a15276f5604eb8d
2017-05-30 10:15:22 +02:00
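A hedged sketch of the kind of data migration this implies (the default written here is illustrative; the value actually chosen by the patch may differ):

    from sqlalchemy import MetaData, Table


    def upgrade(migrate_engine):
        meta = MetaData(bind=migrate_engine)
        volumes = Table('volumes', meta, autoload=True)
        # Stamp rows the drivers never set, so the manager can rely on
        # replication_status always being populated.
        migrate_engine.execute(
            volumes.update()
            .where(volumes.c.replication_status.is_(None))
            .values(replication_status='disabled'))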
TommyLike b3911ccfc1 Add missing testcases for migration scripts
Use oslo_db to create indexes and add missing test cases
for migration scripts 091, 098, and 099; also eliminate
the deprecation warning.

Change-Id: I6163489ca0f59160e7d3bfd837b72995be17c059
2017-05-03 14:34:36 +08:00
Mate Lakat c72f5ef8e3 Create indexes for foreign keys
Some database backends (for example PostgreSQL) do not automatically
create an index on a foreign key. As a result database queries are slow.
This adds the missing indexes and a migration that will only create an
index if it is not already there.

In total, 26 foreign keys were identified as missing indexes. Two of
them are already covered by UniqueConstraints; for the rest, new
indexes have been created.

Closes-Bug: #1666547
Change-Id: I1437c3a1aa13142ee7a7e3e7bf9ff867b9d72652
2017-04-25 09:07:29 +02:00
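A sketch of the "create only if missing" pattern such a migration needs (the helper and index naming are illustrative):

    from sqlalchemy import Index, MetaData, Table, inspect


    def _ensure_index(migrate_engine, table_name, column_name):
        """Create an index covering a foreign key unless one exists."""
        existing = {idx['name'] for idx in
                    inspect(migrate_engine).get_indexes(table_name)}
        index_name = '%s_%s_idx' % (table_name, column_name)
        if index_name not in existing:
            meta = MetaData(bind=migrate_engine)
            table = Table(table_name, meta, autoload=True)
            Index(index_name, getattr(table.c, column_name)).create()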
Gorka Eguileor 24bab6b7f6 Prevent claiming and updating races on worker
Current code for claiming and updating workers relies on the
updated_at field to determine when a DB record has changed.  This is
usually enough for any DB with sub-second resolution, since a race
condition is then very unlikely.

But not all DBs support sub-second resolution, and in those cases the
likelihood of a race condition increases considerably since we are
working with 1-second granularity.

This patch completely removes the possibility of race conditions by
using a dedicated integer field that is incremented on each DB update.
It is compatible with both types of DBMSs and will also work with
rolling upgrades.

The reason we are not using the version counting provided by
SQLAlchemy [1] is that the feature requires an ORM instance, and in
some of our usages we don't have an instance to work with; adding an
additional read query would be unnecessarily inefficient.

Additionally we will no longer see spurious errors in unit test
test_do_cleanup_not_cleaning_already_claimed_by_us.

[1] http://docs.sqlalchemy.org/en/latest/orm/versioning.html

Implements: blueprint cinder-volume-active-active-support
Change-Id: Ief9333a2389d98f5d0a11d8da94d160de8ecce0e
2017-01-19 10:42:24 +01:00
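A sketch of the resulting optimistic claim, assuming the counter column is called race_preventer as described above (the function name is illustrative):

    from cinder.db.sqlalchemy import models


    def claim_worker(session, worker, service_id):
        # Only one concurrent updater can match the old counter value,
        # so exactly one UPDATE affects a row; everyone else loses the
        # race and gets count == 0.
        count = (session.query(models.Worker)
                 .filter_by(id=worker.id,
                            race_preventer=worker.race_preventer)
                 .update({'race_preventer': worker.race_preventer + 1,
                          'service_id': service_id}))
        return bool(count)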
Gorka Eguileor 4d3e1e7c35 Make Image Volume Cache cluster aware
The Image Volume Cache mechanism was not cluster aware, and therefore
cached images would not be shared among different services in the same
cluster.

This patch addresses the issue and makes sure cache entries are shared
by cluster if there is one.

This patch does not address any concurrency issues that may currently
exist in the caching mechanism's code.

Implements: blueprint cinder-volume-active-active-support
Change-Id: I9be2b3c6dc571ce2e0e4ccf7557123a7858c1990
2017-01-19 10:42:23 +01:00
Gorka Eguileor b4a13281ea Make Replication support Active-Active
This patch adds new methods to our failover mechanism to allow failover
to work when a backend is clustered.

Adds REST API microversion 3.26 that adds a new `failover` method
equivalent to `failover_host` but accepting `cluster` field as well as
the `host` field.

Thaw and Freeze are updated to update cluster and all services within
the cluster.

Now cluster listings accept the new filtering fields
`replication_status`, `frozen`, and `active_backend_id`.

Summary listings return the `replication_status` field; detailed
listings also return `frozen` and `active_backend_id`.

Specs: https://review.openstack.org/401392

APIImpact: New service failover action and new fields in cluster listings.
Implements: blueprint cinder-volume-active-active-support
Change-Id: Id3291b28242d5814c259283fa629b48f22e70260
2017-01-19 10:42:18 +01:00
Alex Meade d876680b83 Add prefix to user message event ids
This patch adds the 'VOLUME_' prefix to all message
event ids. This will prevent collisions and confusion
when other projects add user messages and create their
own event ids.

Also fixes an issue where the request_id column is nullable in the
sqlalchemy model but was not properly set in the db migration.

Implements blueprint summarymessage
Co-Authored-By: Sheel Rana <ranasheel2000@gmail.com>
Co-Authored-By: Michał Dulko <michal.dulko@gmail.com>

Change-Id: Ic23f898281870ad81c5a123302ddca50905952ea
2017-01-13 11:27:20 +01:00
xing-yang 307da0778f Migrate consistency groups to groups
This patch provides a script to migrate data from consistencygroups
to groups and from cgsnapshots to group_snapshots.

In the migration script, it creates a default_cgsnapshot_type
for migrating data and copies data from consistency groups to
groups and from cgsnapshots to group_snapshots. Migrated consistency
groups and cgsnapshots will be removed from the database.

It depends on the following patch that adds generic code for
online data migrations:
    https://review.openstack.org/#/c/330391/

Run the following command to migrate CGs:
    cinder-manage db online_data_migrations
    --max_count <max>
    --ignore_state
max_count is optional. Default is 50.
ignore_state is optional. Default is False.

UpgradeImpact
Partial-Implements: blueprint generic-volume-group
Related: blueprint online-schema-upgrades
Change-Id: I1cf31e4ba4acffe08e2c09cbfd5b50cf0ea7a6e0
2016-11-21 21:43:17 -05:00
xing-yang 325f99a64a Add group snapshots - db and objects
This is the third patch that implements the generic-volume-group
blueprint. It adds database and object changes in order to support
group snapshots and creating a group from a source. The API changes
will be added in the next patch.

This patch depends on the second patch which adds create/delete/update
groups support which was already merged:
    https://review.openstack.org/#/c/322459/

The next patch to add volume manager changes is here:
    https://review.openstack.org/#/c/361376/

Partial-Implements: blueprint generic-volume-group
Change-Id: I2d11efe38af80d2eb025afbbab1ce8e6a269f83f
2016-07-18 22:19:10 -04:00
xing-yang 8c74c74695 Add generic volume groups
This is the second patch that implements the generic-volume-group
blueprint. It adds the groups table and introduces create/delete/
update/list/show APIs for groups.

It depends on the first patch which adds group types and group specs:
    https://review.openstack.org/#/c/320165/

Client side patch is here:
    https://review.openstack.org/#/c/322627/

Current microversion is 3.13. The following CLIs are supported:
cinder --os-volume-api-version 3.13 group-create --name my_group
    <group type uuid> <volume type uuid>
cinder --os-volume-api-version 3.13 group-list
cinder --os-volume-api-version 3.13 create --group-id <group uuid>
    --volume-type <volume type uuid> <size>
cinder --os-volume-api-version 3.13 group-update <group uuid>
    --name new_name --description new_description
    --add-volumes <uuid of volume to add>
    --remove-volumes <uuid of volume to remove>
cinder --os-volume-api-version 3.13 group-show <group uuid>
cinder --os-volume-api-version 3.13 group-delete
    --delete-volumes <group uuid>

APIImpact
DocImpact
Change-Id: I35157439071786872bc9976741c4ef75698f7cb7
Partial-Implements: blueprint generic-volume-group
2016-07-16 19:34:39 -04:00
xing-yang 8cf9786e00 Add group type and group specs
This patch adds support for group types and group specs.
This is the first patch to implement the blueprint
generic-volume-group.

The client side patch is here:
https://review.openstack.org/#/c/320157/

Current microversion is 3.11. The following CLIs are supported.
cinder --os-volume-api-version 3.11 group-type-create my_test_group
cinder --os-volume-api-version 3.11 group-type-list
cinder --os-volume-api-version 3.11 group-type-show my_test_group
cinder --os-volume-api-version 3.11 group-type-key my_test_group
    set test_key=test_val
cinder --os-volume-api-version 3.11 group-specs-list
cinder --os-volume-api-version 3.11 group-type-key my_test_group
    unset test_key
cinder --os-volume-api-version 3.11 group-type-update <group type uuid>
    --name "new_group" --description "my group type"
cinder --os-volume-api-version 3.11 group-type-delete new_group

APIImpact
DocImpact
Change-Id: I38b938782e0c3b2df624f975bd07e0b81684c888
Partial-Implements: blueprint generic-volume-group
2016-07-08 14:26:39 -04:00
Gorka Eguileor 7294cf0352 Add workers table
This patch adds the workers table, required for the cleanup of failed
services and needed for the new cleanup mechanism we'll be implementing
to support Active-Active configurations.

It will be used for non-Active-Active configurations as well.

Specs: https://review.openstack.org/236977

Implements: blueprint cinder-volume-active-active-support
Change-Id: I5057a4c9071ef9ca78b680bad72fd81373473ed9
2016-07-22 21:00:12 +02:00
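A hedged sketch of the table creation (the column set is an approximation for illustration, not the authoritative schema):

    from sqlalchemy import (Boolean, Column, DateTime, Integer, MetaData,
                            String, Table)


    def upgrade(migrate_engine):
        meta = MetaData(bind=migrate_engine)
        workers = Table(
            'workers', meta,
            # Standard Cinder bookkeeping columns.
            Column('id', Integer, primary_key=True, nullable=False),
            Column('created_at', DateTime),
            Column('updated_at', DateTime),
            Column('deleted_at', DateTime),
            Column('deleted', Boolean),
            # Which resource is being cleaned up, its state, and by whom.
            Column('resource_type', String(36)),
            Column('resource_id', String(36)),
            Column('status', String(255)),
            Column('service_id', Integer),
            mysql_engine='InnoDB',
            mysql_charset='utf8',
        )
        workers.create()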
Gorka Eguileor 57ea6967bf Add cluster table and related methods
This patch adds a new table called clusters with its ORM representation
class -Cluster-, and related DB methods.

It also updates DB tables for resources from Cinder Volume nodes that
are addressed by host to include a reference to the cluster
(cluster_name) and related DB methods.

This is part of the effort to support HA A-A in c-vol nodes.

Specs: https://review.openstack.org/327283
Change-Id: I10653d4a5fe4cb3fd1f8ccf1224938451753907e
Implements: blueprint cinder-volume-active-active-support
2016-07-22 18:40:28 +02:00
Alex Meade 53cfde43b8 User messages API for error cases
This patch implements basic user messages with the following APIs.
GET /messages
GET /messages/<message_id>
DELETE /messages/<message_id>

Implements: blueprint summarymessage

Co-Authored-By: Alex Meade <mr.alex.meade@gmail.com>
Co-Authored-By: Sheel Rana <ranasheel2000@gmail.com>

Change-Id: Id8a4a700c1159be24b15056f401a2ea77804d0a0
2016-04-29 18:41:10 +00:00
Michał Dulko 92be79e964 Cleanup DB schema after Mitaka
This commit removes tables and columns that we've stopped referencing in
Mitaka. Due to rolling upgrades guidelines we were unable to remove them
immediately, so let's do that in early Newton.

Change-Id: Iacd950d335e1142585c9c51f185fc12e7e0fd911
2016-04-01 11:02:44 +02:00
Jenkins cc2e463f97 Merge "Block subtractive operations in DB migrations" 2016-03-11 17:28:03 +00:00
wanghao 6b6249b5ff Add volumes table definition when migrating to 67
When migrating the DB to 67 (readd_iscsi_targets_table),
we add the foreign key "volumes.id", but we fail to
define the volumes table, which causes the migration
to fail with 'NoReferencedTableError'.

Fix this issue by adding the volumes table definition
before creating the iscsi_targets table. A test is
added as well.

Change-Id: Id0e0970517a5d3414e0ed124b2b7b3a5b1973761
Closes-Bug: #1554329
2016-03-08 13:57:15 +08:00
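A sketch of the fix pattern: declare the referenced table on the same MetaData before building the table that points at it (the iscsi_targets columns shown are illustrative):

    from sqlalchemy import (Column, ForeignKey, Integer, MetaData, String,
                            Table)


    def upgrade(migrate_engine):
        meta = MetaData(bind=migrate_engine)
        # Declare the referenced table first so SQLAlchemy can resolve
        # 'volumes.id'; without it, creating iscsi_targets raises
        # NoReferencedTableError.
        Table('volumes', meta, Column('id', String(36), primary_key=True))
        iscsi_targets = Table(
            'iscsi_targets', meta,
            Column('id', Integer, primary_key=True, nullable=False),
            Column('target_num', Integer),
            Column('host', String(255)),
            Column('volume_id', String(36), ForeignKey('volumes.id')),
        )
        iscsi_targets.create()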
Michał Dulko 0d4bd0dc79 Block subtractive operations in DB migrations
To achieve rolling upgrades we need to make non-backward-compatible DB schema
migrations in a very specific manner that spans through 3 releases. In
particular we need to be very careful when dropping or altering columns
and tables.

This commit adds a test that blocks all ALTER and DROP operations
in DB migrations. It allows specifying exceptions to this rule for two
purposes:
* Some DROP/ALTER migrations aren't subtractive (e.g. dropping a
  constraint).
* When following the process we've designed for non-backward-compatible
  migrations, we should be able to drop the first unused columns or
  tables in the O release.

The test is based on similar one implemented in Nova.

Implements: bp online-schema-upgrades

Change-Id: I07df721e5505abe17e45427c05b985b7e923a010
2016-03-07 10:10:44 +01:00
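A condensed sketch of the Nova-style guard (simplified; the real test wires this fixture into the migration walk and feeds it the exception list):

    import fixtures


    class BannedDBSchemaOperations(fixtures.Fixture):
        """Make subtractive schema operations explode unless excepted."""

        def __init__(self, banned_resources=None):
            super(BannedDBSchemaOperations, self).__init__()
            self._banned = banned_resources or []

        def setUp(self):
            super(BannedDBSchemaOperations, self).setUp()

            def _explode(resource, op):
                raise Exception('%s.%s() is banned from migrations'
                                % (resource, op))

            if 'Table' in self._banned:
                self.useFixture(fixtures.MonkeyPatch(
                    'sqlalchemy.Table.drop',
                    lambda *a, **kw: _explode('Table', 'drop')))
            if 'Column' in self._banned:
                self.useFixture(fixtures.MonkeyPatch(
                    'sqlalchemy.Column.drop',
                    lambda *a, **kw: _explode('Column', 'drop')))
                # alter() is the method sqlalchemy-migrate adds to Column.
                self.useFixture(fixtures.MonkeyPatch(
                    'sqlalchemy.Column.alter',
                    lambda *a, **kw: _explode('Column', 'alter')))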
Ryan McNair c02336e4dd Re-enable -1 child limits for nested quotas
Add back support for -1 limits of child projects. The way that we
support the -1 child limits requires the following changes:
  * Continue quota validation up the hierarchy if the current limit is
    -1 until we hit a hard limit or run out of parents, and update any
    relevant parents' allocated values along the way
  * When updating limits, special care needs to be taken when updating
    child limit to be -1, or when changing from a -1 limit
  * Enable support for creating reservations for "allocated" values
    to support the scenario that:
      - a volume is created on a project with a limit of -1
      - the parent's allocated value has been updated appropriately
      - the volume create fails and the child's in_use quota rolls back
      - now we must also rollback the parent's allocated value

NOTE: There is a race condition between validating the NestedQuotas
and when the driver may be switched into use; if -1 quotas are used,
the validation could be out of date. We will look into better support
for switching on NestedQuotas in a live deployment with -1 limits,
which would likely leverage the "allocated" reservation system.

Closes-Bug: #1548645
Closes-Bug: #1544774
Closes-Bug: #1537189
Change-Id: I2d1dba87baf3595cc8f48574e0281ac17509fe7d
2016-02-27 07:16:10 +00:00
John Griffith 106c14a84b Replication v2.1 (Cheesecake)
This focuses the replication work on a specific use case,
and eliminates some of the ambiguity in earlier versions.

Additionally this implementation addresses needs for
devices that do replication based on the whole backend-device
or on Pools.

Use case:
  DR scenario, where a storage device is rendered inoperable.
  This implementation allows the preservation of user data
  for those volumes that are of type replication-enabled.

  The goal is NOT to make failures completely transparent
  but instead to preserve data access while an Admin tries
  to rebuild/recover his/her cloud.

It's very important to note that we're no longer interested in
dealing with replication in Cinder at a Volume level.  The concept
of having "some" volumes fail over while "others" are left behind
proved to not only be overly complex and difficult to implement, but
we never identified a concrete use-case where one would use failover
in a scenario where some volumes stay accessible on the primary while
others are moved and accessed via a secondary.

In this model, it's host/backend based.  So when you fail over,
you're failing over an entire backend.  We heavily leverage
existing resources, specifically services and capabilities.

Implements: blueprint replication-update

Change-Id: If862bcd18515098639f94a8294a8e44e1358c52a
2016-02-26 13:15:19 -07:00
LisaLi 4c83280125 Add restore_volume_id in backup
This patch adds restore_volume_id to the backup object.
When restoring a volume from a backup, it saves the restored
volume's id in the backup object.

Currently the volume service and backup service are on the same host.
When the backup service starts, it does cleanup tasks on both
backups and volumes on the current host.

But with bp scalable-backup-service, the backup service and
volume services can run on different hosts. When doing cleanup
tasks, we need to find the backing-up and restoring volumes
related to the backups on the current host. Backing-up volumes can
be found via the field backup.volume_id; restoring volumes are found
via the new field backup.restore_volume_id.

Change-Id: I757be7a5e47fc366c181400587b5a61fe3709a0b
Partial-Implements: bp scalable-backup-service
Co-Authored-By: Tom Barron <tpb@dyncloud.net>
2016-02-15 10:00:31 +08:00
Sheel Rana 253cab37e4 Constant defined for sqlAlchemy VARCHAR & INTEGER
sqlalchemy.types.VARCHAR and sqlalchemy.types.INTEGER are defined
and used as VARCHAR_TYPE and INTEGER_TYPE respectively in
cinder/tests/unit/test_migrations.py.

Change-Id: I3ed83f270843e93d3d1f730d6eaf2320e8269743
Closes-Bug: #1528989
2016-01-06 14:49:39 +05:30
Sheel Rana b2cd356cac Updated "deleted" column of volume_type_access
Below changes are done in this commit:

a. replaced update()
update() is replaced with oslo.db's soft_delete() in the
volume_type_access_remove() function of sqlalchemy's db api to keep
the value of "id" in the "deleted" column during the volume type
access remove operation. This resolves the referenced bug.

b. updated db schema
The db schema of volume_type_projects is updated.
As tinyint can store a maximum of 127, the "deleted" column type is
changed from tinyint to integer so that it can store ids greater
than 127.

c. release notes
release notes added to prohibit addition or deletion of
volume_type_access to a project during update operation.

UpgradeImpact
Closes-Bug: #1518363

Change-Id: I638a202dfb2b4febf1d623683de22df3b6dc2615
2015-12-30 13:02:16 +00:00
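A sketch of the replacement, assuming oslo.db's query-level soft_delete(), which stamps the "deleted" column with the row id rather than a plain flag (function and model names are illustrative):

    from cinder.db.sqlalchemy import models


    def volume_type_access_remove(session, volume_type_id, project_id):
        # soft_delete() sets deleted=id and deleted_at=now, preserving
        # the row id in the 'deleted' column, unlike a hand-rolled
        # update({'deleted': True}).
        (session.query(models.VolumeTypeProjects)
         .filter_by(volume_type_id=volume_type_id, project_id=project_id)
         .soft_delete(synchronize_session=False))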
Xing Yang dbc345729e Backup snapshots
Today we can back up a volume, but not a snapshot.
This patch adds support for backing up snapshots,
providing another layer of data protection for the
user.

DocImpact
implements blueprint backup-snapshots

Change-Id: Ib4ab9ca9dc72b30151154f3f96037f9ce3c9c540
2015-11-21 10:15:19 -05:00
Ivan Kolodyazhny 6d678dc393 Remove downgrade migrations
According to cross project spec[1] downgrade migrations should be removed.

[1] I622f89fe63327d44f9b229d3bd9e76e15acbaa7a

Implements blueprint: no-downward-sql-migration

Change-Id: I111cdb4bba361de5da0ce7db8144965c947ada41
2015-12-18 14:04:16 +02:00
Patrick East 15c13f8aed Generic image-volume cache
This introduces a new feature for backends to be able to use cached
glance images when creating volumes from images.

If enabled, and the cinder internal tenant is configured, it will
create a minimal-sized clone of a volume the first time it is
downloaded from glance. A new db table 'image_volume_cache' tracks
these volumes, which are owned by the Cinder internal tenant. The
original will then be extended to full size. Any subsequent requests to
create a volume from an image will be able to do a volume clone from
the image-volume cache entry instead of downloading the image data from
glance again.

In the future we could create an entry upon creating an image from a
volume right before uploading the volume to glance. This version
however does not aim to do more than add the initial framework and
help with subsequent image downloads. There are certainly more
improvements that can be done over time on top of this.

These image-volumes are host specific, so each backend may end up with
its very own image-volume to do clones from.

The cache is limited in size by the number of entries allowed and the
size in GB. When creating a new entry, if space is needed, the least
recently used entries will be evicted to make room.

In the current implementation the image-volumes will be viewable by a
cloud admin in the volume list, and show up as owned by the Cinder
internal tenant. They can safely be deleted manually by an admin, this
will cause the entry to be removed from the cache. These volumes will
use quota for the internal tenant.

Cache actions will post notifications to Ceilometer. There are
notifications like 'image_volume_cache.miss', 'image_volume_cache.hit',
and 'image_volume_cache.evict'. A change will be required to the
event_definitions.yaml to see them nicely. Until then you only need to
add a new event type 'image_volume_cache.*' and look for the 'image_id'
and 'host' fields.

DocImpact: We probably want some instructions on restrictions of the
cache and how to use it. The three new config options should also be
documented somewhere: 'image_volume_cache_enabled',
'image_volume_cache_max_size_gb', 'image_volume_cache_max_size_count'.

Co-Authored-By: Tomoki Sekiyama <tomoki.sekiyama@hds.com>

Implements: blueprint image-volume-cache
Change-Id: If22bbaff89251e4e82a715170a48b4040f95c09f
2015-09-02 17:54:07 +00:00
wanghao bc804e79f5 Incremental backup improvements for L
1. Add 'is_incremental=True' and 'has_dependent_backups=True/False' to
the query response body.
2. Add parent_id to the notification system.

Since we need to get the volume's has_dependent_backups value when
querying the volume detail list, to reduce the performance impact we
add an index to the parent_id column in the backups table.

APIImpact

When showing backup detail it will return additional info
"is_incremental": True/False and "has_dependent_backups": True/False

DocImpact
Change-Id: Id2fbf5616ba7bea847cf0443006800db89dd7c35
Implements: blueprint cinder-incremental-backup-improvements-for-l
2015-08-26 14:33:14 +08:00
Thang Pham 677ff1c699 Add version columns to services table
The following patch is part of the cinder effort to
support rolling upgrade.  This patch adds columns
to the services table to track the RPC and
oslo_versionedobjects versions of each service.

Follow up patches will be made to have each service:
register its RPC and oslo_versionedobjects versions on
startup, make the RPC and oslo_versionedobjects versions
compatible with an older release, and update the versions
once all services are updated to the latest release.

Change-Id: Ifa6c6ac230988c75dcc4e5fe220bfc5ee70ac338
Partial-Implements: blueprint rpc-object-compatibility
2015-08-17 16:31:34 -07:00
Xing Yang 2143b39da5 Attach snapshot - driver only
This patch is a continuation of adding support for
non-disruptive backup. It provides a more efficient way
to backup an attached volume by creating a temp snapshot.
Since this is used internally for backup, the attach
snapshot interface is added in the driver only. For
drivers not implementing the attach snapshot interface,
backup will still be done using a volume.

Partial-implements blueprint non-disruptive-backup
Change-Id: I3649ef1d7c8a18f9d6ed0543d463354273d5f62a
2015-08-08 11:44:46 -04:00
Xing Yang 360518ca01 Clone CG
This patch modifies the existing "create CG from source" API
to take an existing CG as a source, in addition to a CG snapshot.

APIImpact
DocImpact
implements blueprint clone-cg

Change-Id: Ieabc190a5d9a08e2c84e42140192e6ee3dac9433
2015-07-28 17:45:19 -04:00
Xing Yang e78018bd05 Non-disruptive backup
This patch adds support for non-disruptive backup for
volumes in 'in-use' status as follows:

Adds a force flag in create backup API when backing up
an 'in-use' volume.

For the default implementation in volume/driver.py:
* Create a temporary volume from the original volume
* Backup the temporary volume
* Clean up the temporary volume

For the LVM driver:
* Create a temporary snapshot
* Obtain local_path for the temporary snapshot
* Backup the temporary snapshot
* Cleanup the temporary snapshot

Attach snapshot will be implemented in another patch.

Partial-implements blueprint non-disruptive-backup
Change-Id: I915c279b526e7268d68ab18ce01200ae22deabdd
2015-07-22 16:59:19 -04:00
Vilobh Meshram 8c6fec4174 Nested Quota : Create allocated column in cinder.quotas
Create allocated column in cinder.quotas. This allocated
column will be the sum of "hard_limits" values for the
immediate child projects. This will be needed to track
allocated quota to child projects.

Change-Id: Ia80d9a6cdbbc7e86cf9f11979f5e80c53f0fac1f
Implements: bp cinder-nested-quota-driver
2015-06-19 10:22:45 -07:00
John Griffith cbcbc90cf6 Move unit tests into dedicated directory
This patch moves all of the existing cinder/tests into
cinder/tests/unit.  This is being done to make way for
the addition of cinder/tests/functional.

Yes, this is going to cause significant pain with
any changes that haven't merged behind it in terms
of rebase, but there's no real alternative.  We have
to rip the band-aid off at some point, and early in L
seems like a great time to do it.

Change-Id: I63b0f89474b3c139bdb89589abd85319d2aa61ec
2015-04-21 18:40:40 -06:00