Merge "Add doc for Share Replication"

This commit is contained in:
Jenkins 2016-04-14 22:57:45 +00:00 committed by Gerrit Code Review
commit 7b7f3cf46b
9 changed files with 842 additions and 358 deletions


@ -14,7 +14,7 @@
Introduction to Manila Shared Filesystem Management Service
===========================================================
:term:`manila` is the File Share service project for OpenStack. To administer the
OpenStack File Share service, it is helpful to understand a number of concepts
like share networks, shares, multi-tenancy and back ends that can be configured
with Manila. When configuring the File Share service, it is required to declare


@ -46,36 +46,36 @@ The CapabilitiesFilter uses the following for matching operators:
This does a float conversion and then uses the python operators as expected.
* **<in>**
This either chooses a host that has partially matching string in the capability
or chooses a host if it matches any value in a list. For example, if "<in> sse4"
is used, it will match a host that reports capability of "sse4_1" or "sse4_2".
* **<or>**
This chooses a host that has one of the items specified. If the first word in
the string is <or>, another <or> and value pair can be concatenated. Examples
are "<or> 3", "<or> 3 <or> 5", and "<or> 1 <or> 3 <or> 7". This is for
string values only.
* **<is>**
This chooses a host that matches a boolean capability. An example extra-spec value
would be "<is> True".
* **=**
This does a float conversion and chooses a host that has equal to or greater
than the resource specified. This operator behaves this way for historical
reasons.
* **s==, s!=, s>=, s>, s<=, s<**
The "s" indicates it is a string comparison. These choose a host that satisfies
the comparison of strings in capability and specification. For example,
if "capabilities:replication_type s== dr", a host that reports replication_type of
"dr" will be chosen.
The "s" indicates it is a string comparison. These choose a host that satisfies
the comparison of strings in capability and specification. For example,
if "capabilities:replication_type s== dr", a host that reports
replication_type of "dr" will be chosen.
For vendor-specific capabilities (which need to be visible to the
CapabilityFilter), it is recommended to use the vendor prefix followed
@ -133,6 +133,19 @@ be created.
filter by pools that either have or don't have QoS support enabled. Added in
Mitaka.
* `replication_type` - indicates the style of replication supported for the
backend/pool. This extra_spec will have a string value and could be one
of :term:`writable`, :term:`readable` or :term:`dr`. `writable` replication
type involves synchronously replicated shares where all replicas are
writable. Promotion is not supported and not needed. `readable` and `dr`
replication types involve a single `active` or `primary` replica and one or
more `non-active` or secondary replicas per share. In `readable` type of
replication, `non-active` replicas have one or more export_locations and
can thus be mounted and read while the `active` replica is the only one
that can be written into. In `dr` style of replication, only
the `active` replica can be mounted, read from and written into. Added in
Mitaka.
Reporting Capabilities
----------------------
Drivers report capabilities as part of the updated stats (e.g. capacity)
@ -173,10 +186,19 @@ example vendor prefix:
'thin_provisioning': True, #
'max_over_subscription_ratio': 10, # (mandatory for thin)
'provisioned_capacity_gb': 270, # (mandatory for thin)
#
'replication_type': 'dr', # this backend supports
# replication_type 'dr'
#/
'my_dying_disks': 100, #\
'my_super_hero_1': 'Hulk', # "my" optional vendor
'my_super_hero_2': 'Spider-Man', # stats & capabilities
#/
#\
# can replicate to other
'replication_domain': 'asgard', # backends in
# replication_domain 'asgard'
#/
},
{'pool_name': 'thick pool',
@ -187,6 +209,7 @@ example vendor prefix:
'dedupe': False,
'compression': False,
'thin_provisioning': False,
'replication_type': None,
'my_dying_disks': 200,
'my_super_hero_1': 'Batman',
'my_super_hero_2': 'Robin',


@ -94,7 +94,11 @@ function correctly in manila, such as:
- compression: whether the backend supports compressed shares;
- thin_provisioning: whether the backend is overprovisioning shares;
- pools: list of storage pools managed by this driver instance;
- qos: whether the backend supports quality of service for shares;
- replication_domain: string specifying a common group name for all backends
that can replicate between each other;
- replication_type: string specifying the type of replication supported by
the driver. Can be one of 'readable', 'writable' or 'dr'.
.. note:: For more information, see http://docs.openstack.org/developer/manila/devref/capabilities_and_extra_specs.html
@ -199,3 +203,14 @@ consistency of multiple shares. In order to make use of this feature, driver
vendors must report this capability and implement its functions to work
according to the backend, so the feature can be properly invoked through
manila API.
Share Replication
-----------------
Replicas of shares can be created for either data protection (for disaster
recovery) or for load sharing. In order to utilize this feature, drivers must
report the ``replication_type`` they support as a capability and implement
necessary methods.
More details can be found at:
http://docs.openstack.org/developer/manila/devref/share_replication.html


@ -79,6 +79,7 @@ Module Reference
fakes
manila
ganesha
share_replication
Capabilities and Extra-Specs
----------------------------


@ -50,6 +50,17 @@ The following operations are supported on Clustered Data ONTAP:
- Create consistency group from CG snapshot
- Create CG snapshot
- Delete CG snapshot
- Create a replica (DHSS=False)
- Promote a replica (DHSS=False)
- Delete a replica (DHSS=False)
- Update a replica (DHSS=False)
- Create a replicated snapshot (DHSS=False)
- Delete a replicated snapshot (DHSS=False)
- Update a replicated snapshot (DHSS=False)
.. note::
:term:`DHSS` is abbreviated from `driver_handles_share_servers`.
Supported Operating Modes
-------------------------


@ -90,6 +90,13 @@ Driver methods that are wrapped with hooks
- publish_service_capabilities
- shrink_share
- unmanage_share
- create_share_replica
- promote_share_replica
- delete_share_replica
- update_share_replica
- create_replicated_snapshot
- delete_replicated_snapshot
- update_replicated_snapshot
The above list of wrapped methods may be extended in the future.


@ -0,0 +1,313 @@
..
Copyright (c) 2016 Goutham Pacha Ravi
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
=================
Share Replication
=================
As of the Mitaka release of OpenStack, :term:`manila` supports replication of
shares between different pools for drivers that operate with
``driver_handles_share_servers=False`` mode. These pools may be on different
backends or within the same backend. This feature can be used as a disaster
recovery solution or as a load sharing mirroring solution depending upon the
replication style chosen, the capability of the driver and the configuration
of backends.
This feature assumes and relies on the fact that share drivers will be
responsible for communicating with ALL storage controllers necessary to
achieve any replication tasks, even if that involves sending commands to
other storage controllers in other Availability Zones (or AZs).
End users would be able to create and manage their replicas, alongside their
shares and snapshots.
Storage availability zones and replication domains
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Replication is supported within the same availability zone but, since an
Availability Zone should be perceived as a single failure domain, this
feature provides the most value in an inter-AZ replication use case.
The ``replication_domain`` option is a backend specific StrOpt option to be
used within ``manila.conf``. The value can be any ASCII string. Two backends
that can replicate between each other would have the same
``replication_domain``. This comes from the premise that manila expects
Share Replication to be performed between backends that have similar
characteristics.
When scheduling new replicas, the scheduler takes into account the
``replication_domain`` option to match similar backends. It also ensures that
only one replica can be scheduled per pool. When backends report multiple
pools, manila allows replication between two pools on the same backend.
The ``replication_domain`` option is meant to be used in conjunction with the
``storage_availability_zone`` option to utilize this solution for Data
Protection/Disaster Recovery.
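For illustration, a hypothetical ``manila.conf`` fragment (section names, backend names and the domain string are all made up here) pairing two backends for DR across availability zones might look like:

```ini
# Two backends in different AZs that can replicate between each other
# share the same replication_domain value.
[backend_az1]
share_backend_name = backend_az1
storage_availability_zone = az1
replication_domain = replication_domain_1

[backend_az2]
share_backend_name = backend_az2
storage_availability_zone = az2
replication_domain = replication_domain_1
```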
Replication types
~~~~~~~~~~~~~~~~~
When creating a share that is meant to have replicas in the future, the user
will use a ``share_type`` with an extra_spec, :term:`replication_type` set to
a valid replication type that manila supports. Drivers must report the
replication type that they support as the :term:`replication_type`
capability during the ``_update_share_stats()`` call.
Three types of replication are currently supported:
**writable**
Synchronously replicated shares where all replicas are writable.
Promotion is not supported and not needed.
**readable**
Mirror-style replication with a primary (writable) copy
and one or more secondary (read-only) copies which can become writable
after a promotion.
**dr (for Disaster Recovery)**
Generalized replication with secondary copies that are inaccessible until
they are promoted to become the ``active`` replica.
.. note::
The term :term:`active` replica refers to the ``primary`` share. In
:term:`writable` style of replication, all replicas are :term:`active`,
and there could be no distinction of a ``primary`` share. In
:term:`readable` and :term:`dr` styles of replication, a ``secondary``
replica may be referred to as ``passive``, ``non-active`` or simply
``replica``.
Health of a share replica
~~~~~~~~~~~~~~~~~~~~~~~~~
Apart from the ``status`` attribute, share replicas have the
:term:`replica_state` attribute to denote the state of the replica. The
``primary`` replica will have its :term:`replica_state` attribute set to
:term:`active`. A ``secondary`` replica may have one of the following values as
its :term:`replica_state`:
**in_sync**
The replica is up to date with the active replica
(possibly within a backend specific :term:`recovery point objective`).
**out_of_sync**
The replica has gone out of date (all new replicas start out in this
:term:`replica_state`).
**error**
When the scheduler failed to schedule this replica or some potentially
irrecoverable damage occurred with regard to updating data for this
replica.
Manila requests periodic updates of the :term:`replica_state` of all non-active
replicas. The updates occur at an interval defined by the
``replica_state_update_interval`` option in ``manila.conf``.
Administrators have an option of initiating a ``resync`` of a secondary
replica (for :term:`readable` and :term:`dr` types of replication). This could
be performed before a planned failover operation in order to have the most
up-to-date data on the replica.
Promotion
~~~~~~~~~
For :term:`readable` and :term:`dr` styles, we refer to the task of
switching roles between a ``non-active`` replica and the :term:`active`
replica as `promotion`. For the :term:`writable` style of replication,
promotion does not make sense since all replicas are :term:`active` (or
writable) at all times.
The ``status`` attribute of the non-active replica being promoted will be set
to :term:`replication_change` during its promotion. This has been classified
as a ``busy`` state and hence API interactions with the share are restricted
while one of its replicas is in this state.
Promotion of replicas with :term:`replica_state` set to ``error`` may not be
fully supported by the backend. However, manila allows the action as an
administrator feature and such an attempt may be honored by backends if
possible.
When multiple replicas exist, multiple replication relationships
between shares may need to be redefined at the backend during the promotion
operation. If the driver fails at this stage, the replicas may be left in an
inconsistent state. The share manager will set all replicas to have the
``status`` attribute set to ``error``. Recovery from this state would require
administrator intervention.
Snapshots
~~~~~~~~~
If the driver supports snapshots, the replication of a snapshot is expected
to be initiated simultaneously with the creation of the snapshot on the
:term:`active` replica. Manila tracks snapshots across replicas as separate
snapshot instances. The aggregate snapshot object itself will be in
``creating`` state until it is ``available`` across all of the share's replicas
that have their :term:`replica_state` attribute set to :term:`active` or
``in_sync``.
Therefore, for a driver that supports snapshots, the definition of being
``in_sync`` with the primary is not only that data is ensured (within the
:term:`recovery point objective`), but also that any 'available' snapshots
on the primary are ensured on the replica as well. If the snapshots cannot
be ensured, the :term:`replica_state` *must* be reported to manila as being
``out_of_sync`` until the snapshots have been replicated.
When a snapshot instance has its ``status`` attribute set to ``creating`` or
``deleting``, manila will poll the respective drivers for a status update. As
described earlier, the parent snapshot itself will be ``available`` only when
its instances across the :term:`active` and ``in_sync`` replicas of the share
are ``available``. The polling interval will be the same as
``replica_state_update_interval``.
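The aggregation rule described above can be sketched as follows; the function name and data shapes are invented for this example and are simpler than manila's internal models:

```python
# Sketch of the parent snapshot status rule: the aggregate snapshot is
# 'available' only when its instances on the 'active' and 'in_sync'
# replicas are all 'available'.

def aggregate_snapshot_status(replica_states, instance_statuses):
    """replica_states: dict of replica id -> replica_state.
    instance_statuses: dict of replica id -> snapshot instance status.
    """
    relevant = [rid for rid, state in replica_states.items()
                if state in ('active', 'in_sync')]
    if all(instance_statuses.get(rid) == 'available' for rid in relevant):
        return 'available'
    return 'creating'
```

Note that a snapshot instance on an ``out_of_sync`` replica does not hold up the aggregate status.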
Access Rules
~~~~~~~~~~~~
Access rules are not meant to be different across the replicas of the share.
Manila expects drivers to handle these access rules effectively depending on
the style of replication supported. For example, in the :term:`dr` style of
replication, non-active replicas are inaccessible, so read-write rules
should be applied to the :term:`active` replica only. Similarly, drivers that
support the :term:`readable` replication type should apply any read-write
rules as read-only to the non-active replicas.
Drivers will receive all the access rules in ``create_replica``,
``delete_replica`` and ``update_replica_state`` calls and have ample
opportunity to reconcile these rules effectively across replicas.
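As a sketch of this idea for the :term:`readable` style (the helper name is invented here, and real drivers receive richer rule dictionaries), a driver might derive the rules to apply per replica like this:

```python
# Illustrative helper for 'readable' replication: read-write rules are
# applied as read-only on non-active replicas. Not part of manila's API.

def effective_rules(access_rules, replica_is_active):
    """Return the access rules a driver might apply to one replica."""
    if replica_is_active:
        return list(access_rules)
    # Secondary replicas are readable only: downgrade rw to ro.
    return [dict(rule, access_level='ro') for rule in access_rules]
```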
Understanding Replication Workflows
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Creating a share that supports replication
------------------------------------------
Administrators can create a share type with extra-spec
:term:`replication_type`, matching the style of replication the desired backend
supports. Users can use the share type to create a new share that
allows/supports replication. A replicated share always starts out with one
replica, the ``primary`` share itself.
The :term:`manila-scheduler` service will filter and weigh available pools to
find a suitable pool for the share being created. In particular,
* The ``CapabilityFilter`` will match the :term:`replication_type` extra_spec
in the request share_type with the ``replication_type`` capability reported
by a pool.
* The ``ShareReplicationFilter`` will further ensure that the pool has a
non-empty ``replication_domain`` capability being reported as well.
* The ``AvailabilityZoneFilter`` will ensure that the availability_zone
requested matches with the pool's availability zone.
Creating a replica
------------------
The user has to specify the name or ID of the share to be replicated, and
optionally an availability zone for the replica to exist in.
The replica inherits the parent share's share_type and associated
extra_specs. Scheduling of the replica is similar to that of the share.
* The ``ShareReplicationFilter`` will ensure that the pool is within
the same ``replication_domain`` as the :term:`active` replica and also
ensures that the pool does not already have a replica for that share.
Drivers supporting :term:`writable` style **must** set the
:term:`replica_state` attribute to :term:`active` when the replica has been
created and is ``available``.
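The scheduling checks described above can be approximated in a few lines. This is a simplified sketch in which pool dictionaries are reduced to the relevant keys; it is not the actual filter implementation:

```python
# Simplified sketch of the ShareReplicationFilter checks when scheduling
# a new replica. A real pool reports many more capabilities than shown.

def pool_passes(pool, active_replication_domain, pools_with_replicas):
    """Return True if 'pool' may host a new replica of the share."""
    domain = pool.get('replication_domain')
    if not domain:
        # Pools without a replication_domain cannot replicate at all.
        return False
    if domain != active_replication_domain:
        # Replication only happens within one replication_domain.
        return False
    # Only one replica of a given share per pool.
    return pool['pool_name'] not in pools_with_replicas
```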
Deleting a replica
------------------
Users can remove replicas that have their ``status`` attribute set to
``error``, ``in_sync`` or ``out_of_sync``. They can even delete an
:term:`active` replica as long as another :term:`active` replica exists
(as can be the case with the :term:`writable` replication style). Before the
``delete_replica`` call is made to the driver, an ``update_access`` call is made
to ensure access rules are safely removed for the replica.
Administrators may also ``force-delete`` replicas. Any driver exceptions will
only be logged and not re-raised; the replica will be purged from manila's
database.
Promoting a replica
-------------------
Users can promote replicas that have their :term:`replica_state` attribute set
to ``in_sync``. Administrators can attempt to promote replicas that have their
:term:`replica_state` attribute set to ``out_of_sync`` or ``error``. During a
promotion, if the driver raises an exception, all replicas will have their
``status`` attribute set to ``error`` and recovery from this state will require
administrator intervention.
Resyncing a replica
-------------------
Prior to a planned failover, an administrator could attempt to update the
data on the replica. The ``update_replica_state`` call will be made during
such an action, giving drivers an opportunity to push the latest updates from
the :term:`active` replica to the secondaries.
Creating a snapshot
-------------------
When a user takes a snapshot of a share that has replicas, manila creates as
many snapshot instances as there are share replicas. These snapshot
instances all begin with their ``status`` attribute set to ``creating``. The driver
is expected to create the snapshot of the ``active`` replica and then begin to
replicate this snapshot as soon as the :term:`active` replica's
snapshot instance is created and becomes ``available``.
Deleting a snapshot
-------------------
When a user deletes a snapshot, the snapshot instances corresponding to each
replica of the share have their ``status`` attribute set to ``deleting``.
Drivers must update their secondaries as soon as the :term:`active` replica's
snapshot instance is deleted.
Driver Interfaces
~~~~~~~~~~~~~~~~~
As part of the ``_update_share_stats()`` call, the base driver reports the
``replication_domain`` capability. Drivers are expected to update the
:term:`replication_type` capability.
Drivers must implement the methods enumerated below in order to support
replication. ``promote_replica``, ``update_replica_state`` and
``update_replicated_snapshot`` need not be implemented by drivers that support
the :term:`writable` style of replication. The snapshot methods
``create_replicated_snapshot``, ``delete_replicated_snapshot`` and
``update_replicated_snapshot`` need not be implemented by a driver that does
not support snapshots.
Each driver request is made on a specific host. Create/delete operations
on secondary replicas are always made on the destination host. Create/delete
operations on snapshots are always made on the :term:`active` replica's host.
``update_replica_state`` and ``update_replicated_snapshot`` calls are made on
the host that the replica or snapshot resides on.
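Putting the interface together, a driver skeleton might look like the following. The class name is invented and the signatures are abridged for this sketch; consult the interfaces documented below for the authoritative definitions:

```python
# Illustrative outline of a driver implementing 'dr' style replication.
# Class name and signatures are sketched for this example; see
# manila.share.driver.ShareDriver for the authoritative interface.

class ExampleReplicatingDriver(object):

    def create_replica(self, context, replica_list, new_replica,
                       access_rules, replica_snapshots, share_server=None):
        # Provision the replica on this host; a new secondary starts out
        # 'out_of_sync' until the backend has caught up.
        return {'export_locations': [], 'replica_state': 'out_of_sync'}

    def delete_replica(self, context, replica_list, replica_snapshots,
                       replica, share_server=None):
        # Remove the replica and any of its snapshot instances, including
        # those that never finished 'creating'.
        pass

    def promote_replica(self, context, replica_list, replica, access_rules,
                        share_server=None):
        # Redefine the replication relationships so 'replica' becomes the
        # 'active' one; return updated replica dictionaries.
        pass

    def update_replica_state(self, context, replica_list, replica,
                             access_rules, replica_snapshots,
                             share_server=None):
        # Report 'in_sync', 'out_of_sync' or 'error' for this replica.
        return 'in_sync'
```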
Share Replica interfaces:
-------------------------
.. autoclass:: manila.share.driver.ShareDriver
:members: create_replica, delete_replica, promote_replica, update_replica_state
Replicated Snapshot interfaces:
-------------------------------
.. autoclass:: manila.share.driver.ShareDriver
:members: create_replicated_snapshot, delete_replicated_snapshot, update_replicated_snapshot


@ -4,7 +4,7 @@ Glossary
.. glossary::
manila
OpenStack project to provide "Shared Filesystems as a service".
manila-api
@ -13,8 +13,8 @@ Glossary
There is :term:`python-manilaclient` to interact with the API.
python-manilaclient
Command line interface to interact with :term:`manila` via :term:`manila-api` and also a
Python module to interact programmatically with :term:`manila`.
manila-scheduler
Responsible for scheduling/routing requests to the appropriate :term:`manila-share` service.
@ -27,3 +27,53 @@ Glossary
Acronym for 'driver handles share servers'. It defines two different share driver modes:
drivers either do handle share servers themselves or they do not. Each driver may operate
in only one mode at a time and is required to support at least one mode.
replication_type
Type of replication supported by a share driver. If the share driver supports replication
it will report a valid value to the :term:`manila-scheduler`. The value of this
capability can be one of :term:`readable`, :term:`writable` or :term:`dr`.
readable
A type of replication supported by :term:`manila` in which there is one :term:`active`
replica (also referred to as `primary` share) and one or more non-active replicas (also
referred to as `secondary` shares). All share replicas have at least one export location
and are mountable. However, the non-active replicas cannot be written to until after
promotion.
writable
A type of replication supported by :term:`manila` in which all share replicas are
writable. There is no requirement of a promotion since replication is synchronous.
All share replicas have one or more export locations each and are mountable.
dr
Acronym for `Disaster Recovery`. It is a type of replication supported by :term:`manila`
in which there is one :term:`active` replica (also referred to as `primary` share) and
one or more non-active replicas (also referred to as `secondary` shares). Only the
`active` replica has one or more export locations and can be mounted. The non-active
replicas are inaccessible until after promotion.
active
In :term:`manila`, an `active` replica refers to a share that can be written to. In
`readable` and `dr` styles of replication, there is only one `active` replica at any given
point in time. Thus, it may also be referred to as the `primary` share. In `writable`
style of replication, all replicas are writable and there may be no distinction of a
`primary` share.
replica_state
An attribute of the Share Instance (Share Replica) model in :term:`manila`. If the value is
:term:`active`, it refers to the type of the replica. If the value is one of `in_sync` or
`out_of_sync`, it refers to the state of consistency of data between the :term:`active`
replica and the share replica. If the value is `error`, a potentially irrecoverable
error may have occurred during the update of data between the :term:`active` replica and
the share replica.
replication_change
State of a non-active replica when it is being promoted to become the :term:`active`
replica.
recovery point objective
Abbreviated as ``RPO``, recovery point objective is a target window of time within which
a storage backend may guarantee that data is consistent between a primary and a secondary
replica. This window is **not** managed by :term:`manila`.


@ -512,8 +512,8 @@ class ShareDriver(object):
may be added by the driver instead of R/W. Note that raising an
exception *will* result in the access_rules_status on the replica,
and the share itself being "out_of_sync". Drivers can sync on the
valid access rules that are provided on the ``create_replica`` and
``promote_replica`` calls.
:param context: Current context
:param share: Share model with share data.
@ -1049,15 +1049,16 @@ class ShareDriver(object):
access_rules, replica_snapshots, share_server=None):
"""Replicate the active replica to a new replica on this backend.
.. note::
This call is made on the host that the new replica is being created
upon.
:param context: Current context
:param replica_list: List of all replicas for a particular share.
This list also contains the replica to be created. The 'active'
replica will have its 'replica_state' attr set to 'active'.
Example::
[
{
@ -1086,59 +1087,69 @@ class ShareDriver(object):
},
...
]
:param new_replica: The share replica dictionary.
Example::
{
'id': 'e82ff8b6-65f0-11e5-9d70-feff819cdc9f',
'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f',
'deleted': False,
'host': 'openstack2@cmodeSSVMNFS2',
'status': 'creating',
'scheduled_at': datetime.datetime(2015, 8, 10, 0, 5, 58),
'launched_at': datetime.datetime(2015, 8, 10, 0, 5, 58),
'terminated_at': None,
'replica_state': 'out_of_sync',
'availability_zone_id': 'f6e146d0-65f0-11e5-9d70-feff819cdc9f',
'export_locations': [
models.ShareInstanceExportLocations,
],
'access_rules_status': 'out_of_sync',
'share_network_id': '4ccd5318-65f1-11e5-9d70-feff819cdc9f',
'share_server_id': 'e6155221-ea00-49ef-abf9-9f89b7dd900a',
'share_server': <models.ShareServer> or None,
}
:param access_rules: A list of access rules.
These are rules that other instances of the share already obey.
Drivers are expected to apply access rules to the new replica or
disregard access rules that don't apply.
Example::
[
{
'id': 'f0875f6f-766b-4865-8b41-cccb4cdf1676',
'deleted' = False,
'share_id' = 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f',
'access_type' = 'ip',
'access_to' = '172.16.20.1',
'access_level' = 'rw',
}
]
:param replica_snapshots: List of dictionaries of snapshot instances.
This includes snapshot instances of every snapshot of the share
whose 'aggregate_status' property was reported to be 'available'
when the share manager initiated this request. Each list member
will have two sub dictionaries: 'active_replica_snapshot' and
'share_replica_snapshot'. The 'active' replica snapshot corresponds
to the instance of the snapshot on any of the 'active' replicas of
the share while share_replica_snapshot corresponds to the snapshot
instance for the specific replica that will need to exist on the
new share replica that is being created. The driver needs to ensure
that this snapshot instance is truly available before transitioning
the replica from 'out_of_sync' to 'in_sync'. Snapshots instances
for snapshots that have an 'aggregate_status' of 'creating' or
'deleting' will be polled for in the ``update_replicated_snapshot``
method.
Example::
[
{
'active_replica_snapshot': {
'id': '8bda791c-7bb6-4e7b-9b64-fefff85ff13e',
'share_instance_id': '10e49c3e-aca9-483b-8c2d-1c337b38d6af',
@ -1153,17 +1164,27 @@ class ShareDriver(object):
'provider_location': None,
...
},
}
]
:param share_server: <models.ShareServer> or None
Share server of the replica being created.
:return: None or a dictionary.
The dictionary can contain export_locations replica_state and
access_rules_status. export_locations is a list of paths and
replica_state is one of 'active', 'in_sync', 'out_of_sync' or
'error'.
.. important::
A backend supporting 'writable' type replication should return
'active' as the replica_state.
Export locations should be in the same format as returned during the
``create_share`` call.
Example::
{
'export_locations': [
{
@ -1175,6 +1196,7 @@ class ShareDriver(object):
'replica_state': 'in_sync',
'access_rules_status': 'in_sync',
}
"""
raise NotImplementedError()
@ -1182,15 +1204,16 @@ class ShareDriver(object):
replica, share_server=None):
"""Delete a replica.
.. note::
This call is made on the host that hosts the replica being
deleted.
:param context: Current context
:param replica_list: List of all replicas for a particular share
This list also contains the replica to be deleted. The 'active'
replica will have its 'replica_state' attr set to 'active'.
Example::
[
{
@ -1219,38 +1242,41 @@ class ShareDriver(object):
},
...
]
:param replica: Dictionary of the share replica being deleted.
EXAMPLE:
.. code::
Example::
{
'id': 'e82ff8b6-65f0-11e5-9d70-feff819cdc9f',
'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f',
'deleted': False,
'host': 'openstack2@cmodeSSVMNFS2',
'status': 'available',
'scheduled_at': datetime.datetime(2015, 8, 10, 0, 5, 58),
'launched_at': datetime.datetime(2015, 8, 10, 0, 5, 58),
'terminated_at': None,
'replica_state': 'in_sync',
'availability_zone_id': 'f6e146d0-65f0-11e5-9d70-feff819cdc9f',
'export_locations': [
models.ShareInstanceExportLocations
],
'access_rules_status': 'out_of_sync',
'share_network_id': '4ccd5318-65f1-11e5-9d70-feff819cdc9f',
'share_server_id': '53099868-65f1-11e5-9d70-feff819cdc9f',
'share_server': <models.ShareServer> or None,
'id': 'e82ff8b6-65f0-11e5-9d70-feff819cdc9f',
'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f',
'deleted': False,
'host': 'openstack2@cmodeSSVMNFS2',
'status': 'available',
'scheduled_at': datetime.datetime(2015, 8, 10, 0, 5, 58),
'launched_at': datetime.datetime(2015, 8, 10, 0, 5, 58),
'terminated_at': None,
'replica_state': 'in_sync',
'availability_zone_id': 'f6e146d0-65f0-11e5-9d70-feff819cdc9f',
'export_locations': [
models.ShareInstanceExportLocations
],
'access_rules_status': 'out_of_sync',
'share_network_id': '4ccd5318-65f1-11e5-9d70-feff819cdc9f',
'share_server_id': '53099868-65f1-11e5-9d70-feff819cdc9f',
'share_server': <models.ShareServer> or None,
}
:param replica_snapshots: A list of dictionaries containing snapshot
instances that are associated with the share replica being deleted.
No model updates are possible in this method. The driver should
return when the cleanup is completed on the backend for both,
the snapshots and the replica itself. Drivers must handle situations
where the snapshot may not yet have finished 'creating' on this
replica.
EXAMPLE:
.. code::
:param replica_snapshots: List of dictionaries of snapshot instances.
The list contains snapshot instances that are associated with the
share replica being deleted.
No model updates to snapshot instances are possible in this method.
The driver should return when the cleanup is completed on the
backend for both the snapshots and the replica itself. Drivers
must handle situations where the snapshot may not yet have
finished 'creating' on this replica.
Example::
[
{
@@ -1269,12 +1295,14 @@ class ShareDriver(object):
},
...
]
:param share_server: <models.ShareServer> or None,
Share server of the replica to be deleted.
:param share_server: <models.ShareServer> or None
Share server of the replica to be deleted.
:return: None.
:raises Exception. Any exception raised will set the share replica's
'status' and 'replica_state' to 'error_deleting'. It will not affect
snapshots belonging to this replica.
:raises: Exception.
Any exception raised will set the share replica's 'status' and
'replica_state' attributes to 'error_deleting'. It will not affect
snapshots belonging to this replica.
"""
raise NotImplementedError()
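The cleanup order this docstring requires (snapshots first, then the replica, tolerating snapshots still 'creating') can be sketched as follows. The `backend_delete` callable stands in for a real backend operation and is assumed idempotent; none of these names come from Manila itself.

```python
# Hypothetical delete_replica cleanup: remove snapshot instances first,
# then the replica. 'backend_delete' is an assumed, idempotent stand-in
# for the driver's real backend call.

def delete_replica_cleanup(replica, replica_snapshots, backend_delete):
    for snap in replica_snapshots:
        # The snapshot may not exist on the backend yet if it never
        # finished 'creating' on this replica; an idempotent delete
        # handles that safely.
        backend_delete(snap['id'])
    backend_delete(replica['id'])
    # No model updates are returned; any exception raised here would set
    # the replica's 'status' and 'replica_state' to 'error_deleting'.
```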
@@ -1282,15 +1310,16 @@ class ShareDriver(object):
share_server=None):
"""Promote a replica to 'active' replica state.
NOTE: This call is made on the host that hosts the replica being
promoted.
.. note::
This call is made on the host that hosts the replica being
promoted.
:param context: Current context
:param replica_list: List of all replicas for a particular share.
This list also contains the replica to be promoted. The 'active'
replica will have its 'replica_state' attr set to 'active'.
EXAMPLE:
.. code::
:param replica_list: List of all replicas for a particular share
This list also contains the replica to be promoted. The 'active'
replica will have its 'replica_state' attr set to 'active'.
Example::
[
{
@@ -1321,54 +1350,59 @@ class ShareDriver(object):
]
:param replica: Dictionary of the replica to be promoted.
EXAMPLE:
.. code::
Example::
{
'id': 'e82ff8b6-65f0-11e5-9d70-feff819cdc9f',
'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f',
'deleted': False,
'host': 'openstack2@cmodeSSVMNFS2',
'status': 'available',
'scheduled_at': datetime.datetime(2015, 8, 10, 0, 5, 58),
'launched_at': datetime.datetime(2015, 8, 10, 0, 5, 58),
'terminated_at': None,
'replica_state': 'in_sync',
'availability_zone_id': 'f6e146d0-65f0-11e5-9d70-feff819cdc9f',
'export_locations': [
models.ShareInstanceExportLocations
],
'access_rules_status': 'in_sync',
'share_network_id': '4ccd5318-65f1-11e5-9d70-feff819cdc9f',
'share_server_id': '07574742-67ea-4dfd-9844-9fbd8ada3d87',
'share_server': <models.ShareServer> or None,
'id': 'e82ff8b6-65f0-11e5-9d70-feff819cdc9f',
'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f',
'deleted': False,
'host': 'openstack2@cmodeSSVMNFS2',
'status': 'available',
'scheduled_at': datetime.datetime(2015, 8, 10, 0, 5, 58),
'launched_at': datetime.datetime(2015, 8, 10, 0, 5, 58),
'terminated_at': None,
'replica_state': 'in_sync',
'availability_zone_id': 'f6e146d0-65f0-11e5-9d70-feff819cdc9f',
'export_locations': [
models.ShareInstanceExportLocations
],
'access_rules_status': 'in_sync',
'share_network_id': '4ccd5318-65f1-11e5-9d70-feff819cdc9f',
'share_server_id': '07574742-67ea-4dfd-9844-9fbd8ada3d87',
'share_server': <models.ShareServer> or None,
}
:param access_rules: A list of access rules that other instances of
the share already obey.
EXAMPLE:
.. code::
[ {
'id': 'f0875f6f-766b-4865-8b41-cccb4cdf1676',
'deleted' = False,
'share_id' = 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f',
'access_type' = 'ip',
'access_to' = '172.16.20.1',
'access_level' = 'rw',
}]
:param share_server: <models.ShareServer> or None,
Share server of the replica to be promoted.
:return: updated_replica_list or None
:param access_rules: A list of access rules
These access rules are obeyed by other instances of the share.
Example::
[
{
'id': 'f0875f6f-766b-4865-8b41-cccb4cdf1676',
'deleted': False,
'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f',
'access_type': 'ip',
'access_to': '172.16.20.1',
'access_level': 'rw',
}
]
:param share_server: <models.ShareServer> or None
Share server of the replica to be promoted.
:return: updated_replica_list or None.
The driver can return the updated list as in the request
parameter. Changes that will be updated to the Database are:
'export_locations', 'access_rules_status' and 'replica_state'.
:raises Exception
:raises: Exception.
This can be any exception derived from BaseException. This is
re-raised by the manager after some necessary cleanup. If the
driver raises an exception during promotion, it is assumed
that all of the replicas of the share are in an inconsistent
state. Recovery is only possible through the periodic update
call and/or administrator intervention to correct the 'status'
of the affected replicas if they become healthy again.
driver raises an exception during promotion, it is assumed that
all of the replicas of the share are in an inconsistent state.
Recovery is only possible through the periodic update call and/or
administrator intervention to correct the 'status' of the affected
replicas if they become healthy again.
"""
raise NotImplementedError()
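The optional `updated_replica_list` return value above can be sketched as below: the promoted replica becomes 'active' and the previously 'active' replica is demoted. The helper name and the chosen demoted state are assumptions for illustration.

```python
# Sketch of building promote_replica's optional updated_replica_list.
# Helper name and 'demoted_state' default are illustrative assumptions.

def build_promote_updates(replica_list, promoted_id, demoted_state='in_sync'):
    """Return per-replica updates: promoted -> 'active', old active demoted."""
    updates = []
    for replica in replica_list:
        if replica['id'] == promoted_id:
            updates.append({'id': replica['id'], 'replica_state': 'active'})
        elif replica['replica_state'] == 'active':
            # Demote the previously active replica.
            updates.append({'id': replica['id'],
                            'replica_state': demoted_state})
    return updates
```

Fields such as 'export_locations' and 'access_rules_status' could be added to each update dict the same way.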
@@ -1377,8 +1411,9 @@ class ShareDriver(object):
share_server=None):
"""Update the replica_state of a replica.
NOTE: This call is made on the host which hosts the replica being
updated.
.. note::
This call is made on the host which hosts the replica being
updated.
Drivers should fix replication relationships that were broken if
possible inside this method.
@@ -1387,11 +1422,11 @@ class ShareDriver(object):
whenever requested by the administrator through the 'resync' API.
:param context: Current context
:param replica_list: List of all replicas for a particular share.
This list also contains the replica to be updated. The 'active'
replica will have its 'replica_state' attr set to 'active'.
EXAMPLE:
.. code::
:param replica_list: List of all replicas for a particular share
This list also contains the replica to be updated. The 'active'
replica will have its 'replica_state' attr set to 'active'.
Example::
[
{
@@ -1420,79 +1455,89 @@ class ShareDriver(object):
},
...
]
:param replica: Dictionary of the replica being updated.
Replica state will always be 'in_sync', 'out_of_sync', or 'error'.
Replicas in 'active' state will not be passed via this parameter.
EXAMPLE:
.. code::
:param replica: Dictionary of the replica being updated
Replica state will always be 'in_sync', 'out_of_sync', or 'error'.
Replicas in 'active' state will not be passed via this parameter.
Example::
{
'id': 'd487b88d-e428-4230-a465-a800c2cce5f8',
'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f',
'deleted': False,
'host': 'openstack2@cmodeSSVMNFS1',
'status': 'available',
'scheduled_at': datetime.datetime(2015, 8, 10, 0, 5, 58),
'launched_at': datetime.datetime(2015, 8, 10, 0, 5, 58),
'terminated_at': None,
'replica_state': 'in_sync',
'availability_zone_id': 'e2c2db5c-cb2f-4697-9966-c06fb200cb80',
'export_locations': [
models.ShareInstanceExportLocations,
],
'access_rules_status': 'in_sync',
'share_network_id': '4ccd5318-65f1-11e5-9d70-feff819cdc9f',
'share_server_id': '4ce78e7b-0ef6-4730-ac2a-fd2defefbd05',
}
:param access_rules: A list of access rules that other replicas of
the share already obey. The driver could attempt to sync on any
un-applied access_rules.
EXAMPLE:
.. code::
[ {
'id': 'f0875f6f-766b-4865-8b41-cccb4cdf1676',
'deleted' = False,
'share_id' = 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f',
'access_type' = 'ip',
'access_to' = '172.16.20.1',
'access_level' = 'rw',
}]
:param replica_snapshots: List of dictionaries of snapshot instances
for each snapshot of the share whose 'aggregate_status' property was
reported to be 'available' when the share manager initiated this
request. Each list member will have two sub dictionaries:
'active_replica_snapshot' and 'share_replica_snapshot'. The 'active'
replica snapshot corresponds to the instance of the snapshot on any
of the 'active' replicas of the share while share_replica_snapshot
corresponds to the snapshot instance for the specific replica being
updated. The driver needs to ensure that this snapshot instance is
truly available before transitioning from 'out_of_sync' to
'in_sync'. Snapshots instances for snapshots that have an
'aggregate_status' of 'creating' or 'deleting' will be polled for in
the update_replicated_snapshot method.
EXAMPLE:
.. code::
[ {
'active_replica_snapshot': {
'id': '8bda791c-7bb6-4e7b-9b64-fefff85ff13e',
'share_instance_id': '10e49c3e-aca9-483b-8c2d-1c337b38d6af',
'id': 'd487b88d-e428-4230-a465-a800c2cce5f8',
'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f',
'deleted': False,
'host': 'openstack2@cmodeSSVMNFS1',
'status': 'available',
'provider_location': '/newton/share-snapshot-10e49c3e-aca9',
...
'scheduled_at': datetime.datetime(2015, 8, 10, 0, 5, 58),
'launched_at': datetime.datetime(2015, 8, 10, 0, 5, 58),
'terminated_at': None,
'replica_state': 'in_sync',
'availability_zone_id': 'e2c2db5c-cb2f-4697-9966-c06fb200cb80',
'export_locations': [
models.ShareInstanceExportLocations,
],
'access_rules_status': 'in_sync',
'share_network_id': '4ccd5318-65f1-11e5-9d70-feff819cdc9f',
'share_server_id': '4ce78e7b-0ef6-4730-ac2a-fd2defefbd05',
}
:param access_rules: A list of access rules
These access rules are obeyed by other instances of the share. The
driver could attempt to sync on any un-applied access_rules.
Example::
[
{
'id': 'f0875f6f-766b-4865-8b41-cccb4cdf1676',
'deleted': False,
'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f',
'access_type': 'ip',
'access_to': '172.16.20.1',
'access_level': 'rw',
}
]
:param replica_snapshots: List of dictionaries of snapshot instances.
This includes snapshot instances of every snapshot of the share
whose 'aggregate_status' property was reported to be 'available'
when the share manager initiated this request. Each list member
will have two sub dictionaries: 'active_replica_snapshot' and
'share_replica_snapshot'. The 'active' replica snapshot corresponds
to the instance of the snapshot on any of the 'active' replicas of
the share while share_replica_snapshot corresponds to the snapshot
instance for the specific replica being updated. The driver needs
to ensure that this snapshot instance is truly available before
transitioning from 'out_of_sync' to 'in_sync'. Snapshot instances
for snapshots that have an 'aggregate_status' of 'creating' or
'deleting' will be polled for in the update_replicated_snapshot
method.
Example::
[
{
'active_replica_snapshot': {
'id': '8bda791c-7bb6-4e7b-9b64-fefff85ff13e',
'share_instance_id': '10e49c3e-aca9-483b-8c2d-1c337b38d6af',
'status': 'available',
'provider_location': '/newton/share-snapshot-10e49c3e-aca9',
...
},
'share_replica_snapshot': {
'id': ,
'share_instance_id': 'd487b88d-e428-4230-a465-a800c2cce5f8',
'status': 'creating',
'provider_location': None,
'id': '10e49c3e-aca9-483b-8c2d-1c337b38d6af',
'share_instance_id': 'd487b88d-e428-4230-a465-a800c2cce5f8',
'status': 'creating',
'provider_location': None,
...
},
}]
}
]
:param share_server: <models.ShareServer> or None
:return: replica_state
replica_state - a str value denoting the replica_state that the
replica can have. Valid values are 'in_sync' and 'out_of_sync'
or None (to leave the current replica_state unchanged).
:return: replica_state: a str value denoting the replica_state.
Valid values are 'in_sync', 'out_of_sync', or None (to leave the
current replica_state unchanged).
"""
raise NotImplementedError()
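The replica_state decision described above (only report 'in_sync' once every 'available' snapshot is truly present on this replica) can be sketched as follows. The `backend_in_sync` flag is an assumed result of the driver's own sync check, not a Manila parameter.

```python
# Sketch of update_replica_state's return-value logic. 'backend_in_sync'
# is an assumed flag from the driver's own backend query.

def compute_replica_state(backend_in_sync, replica_snapshots):
    """Return 'in_sync' or 'out_of_sync' (None would leave it unchanged)."""
    if not backend_in_sync:
        return 'out_of_sync'
    for pair in replica_snapshots:
        # Each member carries 'active_replica_snapshot' and
        # 'share_replica_snapshot'; the latter must be truly available
        # before this replica may report 'in_sync'.
        if pair['share_replica_snapshot'].get('status') != 'available':
            return 'out_of_sync'
    return 'in_sync'
```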
@@ -1501,9 +1546,10 @@ class ShareDriver(object):
share_server=None):
"""Create a snapshot on active instance and update across the replicas.
NOTE: This call is made on the 'active' replica's host. Drivers
are expected to transfer the snapshot created to the respective
replicas.
.. note::
This call is made on the 'active' replica's host. Drivers are
expected to transfer the snapshot created to the respective
replicas.
The driver is expected to return model updates to the share manager.
If it was able to confirm the creation of any number of the snapshot
@@ -1512,11 +1558,11 @@ class ShareDriver(object):
to '100%'.
:param context: Current context
:param replica_list: List of all replicas for a particular share.
The 'active' replica will have its 'replica_state' attr set to
'active'.
EXAMPLE:
.. code::
:param replica_list: List of all replicas for a particular share
The 'active' replica will have its 'replica_state' attr set to
'active'.
Example::
[
{
@@ -1537,11 +1583,14 @@ class ShareDriver(object):
},
...
]
:param replica_snapshots: List of all snapshot instances that track
the snapshot across the replicas. All the instances will have their
status attribute set to 'creating'.
EXAMPLE:
.. code::
:param replica_snapshots: List of dictionaries of snapshot instances.
These snapshot instances track the snapshot across the replicas.
All the instances will have their status attribute set to
'creating'.
Example::
[
{
'id': 'd3931a93-3984-421e-a9e7-d9f71895450a',
@@ -1559,12 +1608,13 @@ class ShareDriver(object):
},
...
]
:param share_server: <models.ShareServer> or None
:return: List of replica_snapshots, a list of dictionaries containing
values that need to be updated on the database for the snapshot
instances being created.
:raises: Exception. Any exception in this method will set all
instances to 'error'.
:return: List of dictionaries of snapshot instances.
The dictionaries can contain values that need to be updated on the
database for the snapshot instances being created.
:raises: Exception.
Any exception in this method will set all instances to 'error'.
"""
raise NotImplementedError()
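The return value above (model updates only for snapshot instances whose creation was confirmed, leaving the rest to be polled) might be built as below. `confirmed_ids` is an assumed result of the driver's own backend query; the helper name is illustrative.

```python
# Sketch of create_replicated_snapshot's return value: report only the
# instances confirmed on the backend; the others will be polled via
# update_replicated_snapshot. 'confirmed_ids' is an assumed input.

def snapshot_create_updates(replica_snapshots, confirmed_ids):
    return [
        {'id': snap['id'], 'status': 'available', 'progress': '100%'}
        for snap in replica_snapshots if snap['id'] in confirmed_ids
    ]
```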
@@ -1572,9 +1622,10 @@ class ShareDriver(object):
replica_snapshots, share_server=None):
"""Delete a snapshot by deleting its instances across the replicas.
NOTE: This call is made on the 'active' replica's host, since
drivers may not be able to delete the snapshot from an individual
replica.
.. note::
This call is made on the 'active' replica's host, since
drivers may not be able to delete the snapshot from an individual
replica.
The driver is expected to return model updates to the share manager.
If it was able to confirm the removal of any number of the snapshot
@@ -1583,11 +1634,11 @@ class ShareDriver(object):
from the database.
:param context: Current context
:param replica_list: List of all replicas for a particular share.
The 'active' replica will have its 'replica_state' attr set to
'active'.
EXAMPLE:
.. code::
:param replica_list: List of all replicas for a particular share
The 'active' replica will have its 'replica_state' attr set to
'active'.
Example::
[
{
@@ -1608,11 +1659,14 @@ class ShareDriver(object):
},
...
]
:param replica_snapshots: List of all snapshot instances that track
the snapshot across the replicas. All the instances will have their
status attribute set to 'deleting'.
EXAMPLE:
.. code::
:param replica_snapshots: List of dictionaries of snapshot instances.
These snapshot instances track the snapshot across the replicas.
All the instances will have their status attribute set to
'deleting'.
Example::
[
{
'id': 'd3931a93-3984-421e-a9e7-d9f71895450a',
@@ -1630,14 +1684,16 @@ class ShareDriver(object):
},
...
]
:param share_server: <models.ShareServer> or None
:return: List of replica_snapshots, a list of dictionaries containing
values that need to be updated on the database for the snapshot
instances being deleted. To confirm the deletion of the snapshot
instance, set the 'status' attribute of the instance to
'deleted'(constants.STATUS_DELETED).
:raises: Exception. Any exception in this method will set all
instances to 'error_deleting'.
:return: List of dictionaries of snapshot instances.
The dictionaries can contain values that need to be updated on the
database for the snapshot instances being deleted. To confirm the
deletion of the snapshot instance, set the 'status' attribute of
the instance to 'deleted' (constants.STATUS_DELETED).
:raises: Exception.
Any exception in this method will set the status attribute of all
snapshot instances to 'error_deleting'.
"""
raise NotImplementedError()
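The deletion confirmation above can be sketched symmetrically to creation: instances whose removal was confirmed get 'status' set to 'deleted'. The module-level constant here stands in for Manila's `constants.STATUS_DELETED`; the helper name and `removed_ids` input are illustrative.

```python
STATUS_DELETED = 'deleted'  # stand-in for manila's constants.STATUS_DELETED

def snapshot_delete_updates(replica_snapshots, removed_ids):
    """Confirm deletion of removed instances; omit the rest for polling."""
    return [
        {'id': snap['id'], 'status': STATUS_DELETED}
        for snap in replica_snapshots if snap['id'] in removed_ids
    ]
```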
@@ -1646,8 +1702,9 @@ class ShareDriver(object):
replica_snapshot, share_server=None):
"""Update the status of a snapshot instance that lives on a replica.
NOTE: For DR and Readable styles of replication, this call is made on
the replica's host and not the 'active' replica's host.
.. note::
For DR and Readable styles of replication, this call is made on
the replica's host and not the 'active' replica's host.
This method is called periodically by the share manager. It will
query for snapshot instances that track the parent snapshot across
@@ -1661,60 +1718,64 @@ class ShareDriver(object):
instance status to 'error'.
:param context: Current context
:param replica_list: List of all replicas for a particular share.
The 'active' replica will have its 'replica_state' attr set to
'active'.
EXAMPLE:
.. code::
:param replica_list: List of all replicas for a particular share
The 'active' replica will have its 'replica_state' attr set to
'active'.
Example::
[
{
'id': 'd487b88d-e428-4230-a465-a800c2cce5f8',
'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f',
'replica_state': 'in_sync',
...
'share_server_id': '4ce78e7b-0ef6-4730-ac2a-fd2defefbd05',
'share_server': <models.ShareServer> or None,
},
{
'id': '10e49c3e-aca9-483b-8c2d-1c337b38d6af',
'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f',
'replica_state': 'active',
...
'share_server_id': 'f63629b3-e126-4448-bec2-03f788f76094',
'share_server': <models.ShareServer> or None,
},
...
{
'id': 'd487b88d-e428-4230-a465-a800c2cce5f8',
'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f',
'replica_state': 'in_sync',
...
'share_server_id': '4ce78e7b-0ef6-4730-ac2a-fd2defefbd05',
'share_server': <models.ShareServer> or None,
},
{
'id': '10e49c3e-aca9-483b-8c2d-1c337b38d6af',
'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f',
'replica_state': 'active',
...
'share_server_id': 'f63629b3-e126-4448-bec2-03f788f76094',
'share_server': <models.ShareServer> or None,
},
...
]
:param share_replica: Dictionary of the replica the snapshot instance
is meant to be associated with. Replicas in 'active' replica_state
will not be passed via this parameter.
EXAMPLE:
.. code::
:param share_replica: Share replica dictionary.
This replica is associated with the snapshot instance whose
status is being updated. Replicas in 'active' replica_state will
not be passed via this parameter.
Example::
{
'id': 'd487b88d-e428-4230-a465-a800c2cce5f8',
'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f',
'deleted': False,
'host': 'openstack2@cmodeSSVMNFS1',
'status': 'available',
'scheduled_at': datetime.datetime(2015, 8, 10, 0, 5, 58),
'launched_at': datetime.datetime(2015, 8, 10, 0, 5, 58),
'terminated_at': None,
'replica_state': 'in_sync',
'availability_zone_id': 'e2c2db5c-cb2f-4697-9966-c06fb200cb80',
'export_locations': [
models.ShareInstanceExportLocations,
],
'access_rules_status': 'in_sync',
'share_network_id': '4ccd5318-65f1-11e5-9d70-feff819cdc9f',
'share_server_id': '4ce78e7b-0ef6-4730-ac2a-fd2defefbd05',
'id': 'd487b88d-e428-4230-a465-a800c2cce5f8',
'share_id': 'f0e4bb5e-65f0-11e5-9d70-feff819cdc9f',
'deleted': False,
'host': 'openstack2@cmodeSSVMNFS1',
'status': 'available',
'scheduled_at': datetime.datetime(2015, 8, 10, 0, 5, 58),
'launched_at': datetime.datetime(2015, 8, 10, 0, 5, 58),
'terminated_at': None,
'replica_state': 'in_sync',
'availability_zone_id': 'e2c2db5c-cb2f-4697-9966-c06fb200cb80',
'export_locations': [
models.ShareInstanceExportLocations,
],
'access_rules_status': 'in_sync',
'share_network_id': '4ccd5318-65f1-11e5-9d70-feff819cdc9f',
'share_server_id': '4ce78e7b-0ef6-4730-ac2a-fd2defefbd05',
}
:param replica_snapshots: List of all snapshot instances that track
the snapshot across the replicas. This will include the instance
being updated as well.
EXAMPLE:
.. code::
:param replica_snapshots: List of dictionaries of snapshot instances.
These snapshot instances track the snapshot across the replicas.
This will include the snapshot instance being updated as well.
Example::
[
{
'id': 'd3931a93-3984-421e-a9e7-d9f71895450a',
@@ -1728,34 +1789,37 @@ class ShareDriver(object):
},
...
]
:param replica_snapshot: Dictionary of the snapshot instance to be
updated. replica_snapshot will be in 'creating' or 'deleting'
states when sent via this parameter.
EXAMPLE:
.. code::
:param replica_snapshot: Dictionary of the snapshot instance.
This is the instance to be updated. It will be in 'creating' or
'deleting' state when sent via this parameter.
Example::
{
'name': 'share-snapshot-18825630-574f-4912-93bb-af4611ef35a2',
'share_id': 'd487b88d-e428-4230-a465-a800c2cce5f8',
'share_name': 'share-d487b88d-e428-4230-a465-a800c2cce5f8',
'status': 'creating',
'id': '18825630-574f-4912-93bb-af4611ef35a2',
'deleted': False,
'created_at': datetime.datetime(2016, 8, 3, 0, 5, 58),
'share': <models.ShareInstance>,
'updated_at': datetime.datetime(2016, 8, 3, 0, 5, 58),
'share_instance_id': 'd487b88d-e428-4230-a465-a800c2cce5f8',
'snapshot_id': '13ee5cb5-fc53-4539-9431-d983b56c5c40',
'progress': '0%',
'deleted_at': None,
'provider_location': None,
'name': 'share-snapshot-18825630-574f-4912-93bb-af4611ef35a2',
'share_id': 'd487b88d-e428-4230-a465-a800c2cce5f8',
'share_name': 'share-d487b88d-e428-4230-a465-a800c2cce5f8',
'status': 'creating',
'id': '18825630-574f-4912-93bb-af4611ef35a2',
'deleted': False,
'created_at': datetime.datetime(2016, 8, 3, 0, 5, 58),
'share': <models.ShareInstance>,
'updated_at': datetime.datetime(2016, 8, 3, 0, 5, 58),
'share_instance_id': 'd487b88d-e428-4230-a465-a800c2cce5f8',
'snapshot_id': '13ee5cb5-fc53-4539-9431-d983b56c5c40',
'progress': '0%',
'deleted_at': None,
'provider_location': None,
}
:param share_server: <models.ShareServer> or None
:return: replica_snapshot_model_update, a dictionary containing
values that need to be updated on the database for the snapshot
instance that represents the snapshot on the replica.
:raises: exception.SnapshotResourceNotFound for
snapshots that are not found on the backend and their status was
'deleting'.
:return: replica_snapshot_model_update: a dictionary.
The dictionary must contain values that need to be updated on the
database for the snapshot instance that represents the snapshot on
the replica.
:raises: exception.SnapshotResourceNotFound
Raise this exception for snapshots that are not found on the
backend when their status is 'deleting'.
"""
raise NotImplementedError()
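The per-instance update logic described above (raise for missing snapshots in 'deleting', report 'error' or 'available' otherwise) can be sketched as follows. The local exception class stands in for `manila.exception.SnapshotResourceNotFound`, and `found_on_backend` is an assumed result of the driver's backend query.

```python
# Sketch of update_replicated_snapshot's decision logic. The exception
# class is a stand-in for manila.exception.SnapshotResourceNotFound;
# 'found_on_backend' is an assumed backend query result.

class SnapshotResourceNotFound(Exception):
    pass

def replicated_snapshot_update(replica_snapshot, found_on_backend):
    if not found_on_backend:
        if replica_snapshot['status'] == 'deleting':
            # Missing on the backend while 'deleting': signal the manager
            # so the instance can be purged from the database.
            raise SnapshotResourceNotFound(replica_snapshot['id'])
        # A 'creating' snapshot that never appeared is reported as 'error'.
        return {'id': replica_snapshot['id'], 'status': 'error'}
    return {'id': replica_snapshot['id'],
            'status': 'available', 'progress': '100%'}
```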