Doc8: Stop skipping D001: Line too long

This cleans up the cases where we had D001 violations so we can stop
skipping that check in doc8 runs.
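For context, a doc8 skip of this kind usually lives in an ``ignore`` list in the project's lint configuration. A hypothetical ``tox.ini`` fragment (the ``[doc8]`` section and its ``ignore``/``max-line-length`` options are real doc8 settings, but the values shown here are illustrative, not this project's actual config):

```ini
[doc8]
# Before a cleanup like this, the long-line check would be suppressed:
#   ignore = D001
# After it, the default line-length check applies to all doc files:
max-line-length = 79
```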

Change-Id: Ie52f6ecac1a645fcbcc643b9ca63e033b622d830
Signed-off-by: Sean McGinnis <sean.mcginnis@gmail.com>
Sean McGinnis 2019-02-19 16:51:56 -06:00
parent b6f9932f9c
commit d5b539be36
GPG Key ID: CE7EE4BFAF8D70C8
56 changed files with 699 additions and 427 deletions


@@ -10,11 +10,15 @@ Cinder Specific Commandments
 - [N314] Check for vi editor configuration in source files.
 - [N322] Ensure default arguments are not mutable.
 - [N323] Add check for explicit import of _() to ensure proper translation.
-- [N325] str() and unicode() cannot be used on an exception. Remove or use six.text_type().
-- [N336] Must use a dict comprehension instead of a dict constructor with a sequence of key-value pairs.
-- [C301] timeutils.utcnow() from oslo_utils should be used instead of datetime.now().
+- [N325] str() and unicode() cannot be used on an exception. Remove or use
+  six.text_type().
+- [N336] Must use a dict comprehension instead of a dict constructor with a
+  sequence of key-value pairs.
+- [C301] timeutils.utcnow() from oslo_utils should be used instead of
+  datetime.now().
 - [C302] six.text_type should be used instead of unicode.
-- [C303] Ensure that there are no 'print()' statements in code that is being committed.
+- [C303] Ensure that there are no 'print()' statements in code that is being
+  committed.
 - [C304] Enforce no use of LOG.audit messages. LOG.info should be used instead.
 - [C305] Prevent use of deprecated contextlib.nested.
 - [C306] timeutils.strtime() must not be used (deprecated).
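Two of the checks touched above (N322 and N336) guard against well-known Python pitfalls. A minimal sketch of what each check is protecting against — illustrative only, not the hacking plugin's own code:

```python
# N322: a mutable default argument is created once at function
# definition time and shared across every call.
def append_bad(item, bucket=[]):
    bucket.append(item)
    return bucket

# The safe idiom: default to None and allocate per call.
def append_good(item, bucket=None):
    if bucket is None:
        bucket = []
    bucket.append(item)
    return bucket

first = append_bad(1)
second = append_bad(2)
# 'first' and 'second' are the same leaked list object.

# N336: build a dict with a comprehension rather than passing a
# sequence of key-value pairs through the dict() constructor.
pairs = [("a", 1), ("b", 2)]
good = {key: value for key, value in pairs}
```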
@@ -28,7 +32,8 @@ Cinder Specific Commandments
 General
 -------
-- Use 'raise' instead of 'raise e' to preserve original traceback or exception being reraised::
+- Use 'raise' instead of 'raise e' to preserve original traceback or exception
+  being reraised::
 
     except Exception as e:
         ...
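The guideline in the hunk above can be shown with a small self-contained sketch (names here are made up for illustration). A bare ``raise`` re-raises the active exception with its original traceback intact; ``raise e`` lost the traceback on Python 2 and adds a misleading extra frame on Python 3:

```python
import traceback

def fail():
    raise ValueError("original failure")

def reraise():
    try:
        fail()
    except Exception:
        # Bare 'raise' re-raises the active exception, keeping the
        # traceback that points at fail() where the error occurred.
        raise

try:
    reraise()
except ValueError:
    tb = traceback.format_exc()
```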


@@ -186,7 +186,8 @@ Request Example
 Delete consistency group
 ~~~~~~~~~~~~~~~~~~~~~~~~
 
-.. rest_method:: POST /v2/{project_id}/consistencygroups/{consistencygroup_id}/delete
+.. rest_method::
+   POST /v2/{project_id}/consistencygroups/{consistencygroup_id}/delete
 
 Deletes a consistency group.
@@ -264,7 +265,8 @@ Response Example
 Update consistency group
 ~~~~~~~~~~~~~~~~~~~~~~~~
 
-.. rest_method:: PUT /v2/{project_id}/consistencygroups/{consistencygroup_id}/update
+.. rest_method::
+   PUT /v2/{project_id}/consistencygroups/{consistencygroup_id}/update
 
 Updates a consistency group.


@@ -11,7 +11,8 @@ Force-delete backup
 .. rest_method:: POST /v2/{project_id}/backups/{backup_id}/action
 
-Force-deletes a backup. Specify the ``os-force_delete`` action in the request body.
+Force-deletes a backup. Specify the ``os-force_delete`` action in the request
+body.
 
 This operation deletes the backup and any backup data.
@@ -52,7 +53,8 @@ Reset backup's status
 .. rest_method:: POST /v2/{project_id}/backups/{backup_id}/action
 
-Reset a backup's status. Specify the ``os-reset_status`` action in the request body.
+Reset a backup's status. Specify the ``os-reset_status`` action in the request
+body.
 
 Response codes
 --------------


@@ -1231,24 +1231,24 @@ os-detach:
   in: body
   required: true
   type: object
+os-ext-snap-attr:progress:
+  description: |
+    A percentage value for the build progress.
+  in: body
+  required: true
+  type: string
+os-ext-snap-attr:project_id:
+  description: |
+    The UUID of the owning project.
+  in: body
+  required: true
+  type: string
 os-extend:
   description: |
     The ``os-extend`` action.
   in: body
   required: true
   type: object
-os-extended-snapshot-attributes:progress:
-  description: |
-    A percentage value for the build progress.
-  in: body
-  required: true
-  type: string
-os-extended-snapshot-attributes:project_id:
-  description: |
-    The UUID of the owning project.
-  in: body
-  required: true
-  type: string
 os-force_delete:
   description: |
     The ``os-force_delete`` action.


@@ -10,7 +10,8 @@ Shows and updates quota classes for a project.
 Show quota classes
 ~~~~~~~~~~~~~~~~~~
 
-.. rest_method:: GET /v2/{admin_project_id}/os-quota-class-sets/{quota_class_name}
+.. rest_method::
+   GET /v2/{admin_project_id}/os-quota-class-sets/{quota_class_name}
 
 Shows quota class set for a project. If no specific value for the quota class
 resource exists, then the default value will be reported.
@@ -60,7 +61,8 @@ Response Example
 Update quota classes
 ~~~~~~~~~~~~~~~~~~~~
 
-.. rest_method:: PUT /v2/{admin_project_id}/os-quota-class-sets/{quota_class_name}
+.. rest_method::
+   PUT /v2/{admin_project_id}/os-quota-class-sets/{quota_class_name}
 
 Updates quota class set for a project. If the ``quota_class_name`` key does not
 exist, then the API will create one.


@@ -152,7 +152,8 @@ Response Example
 Get default quotas
 ~~~~~~~~~~~~~~~~~~
 
-.. rest_method:: GET /v2/{admin_project_id}/os-quota-sets/{project_id}/defaults
+.. rest_method::
+   GET /v2/{admin_project_id}/os-quota-sets/{project_id}/defaults
 
 Gets default quotas for a project.


@@ -12,7 +12,8 @@ Manage existing volume
 .. rest_method:: POST /v2/{project_id}/os-volume-manage
 
-Creates a Block Storage volume by using existing storage rather than allocating new storage.
+Creates a Block Storage volume by using existing storage rather than allocating
+new storage.
 
 The caller must specify a reference to an existing storage volume
 in the ref parameter in the request. Although each storage driver


@@ -78,7 +78,8 @@ Request Example
 List private volume type access details
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-.. rest_method:: GET /v2/{project_id}/types/{volume_type}/os-volume-type-access
+.. rest_method::
+   GET /v2/{project_id}/types/{volume_type}/os-volume-type-access
 
 Lists project IDs that have access to private volume type.


@@ -65,12 +65,12 @@ Response Parameters
 .. rest_parameters:: parameters.yaml
 
    - status: status_2
-   - os-extended-snapshot-attributes:progress: os-extended-snapshot-attributes:progress
+   - os-extended-snapshot-attributes:progress: os-ext-snap-attr:progress
    - description: description
    - created_at: created_at
    - name: name
    - volume_id: volume_id_5
-   - os-extended-snapshot-attributes:project_id: os-extended-snapshot-attributes:project_id
+   - os-extended-snapshot-attributes:project_id: os-ext-snap-attr:project_id
    - size: size
    - id: id_4
    - metadata: metadata
@@ -89,7 +89,8 @@ Create snapshot
 .. rest_method:: POST /v2/{project_id}/snapshots
 
-Creates a volume snapshot, which is a point-in-time, complete copy of a volume. You can create a volume from a snapshot.
+Creates a volume snapshot, which is a point-in-time, complete copy of a volume.
+You can create a volume from a snapshot.
 
 Response codes
 --------------
@@ -138,7 +139,8 @@ List snapshots
 .. rest_method:: GET /v2/{project_id}/snapshots
 
-Lists all Block Storage snapshots, with summary information, that the project can access.
+Lists all Block Storage snapshots, with summary information, that the project
+can access.
 
 Response codes
 --------------
@@ -335,13 +337,13 @@ Response Parameters
 .. rest_parameters:: parameters.yaml
 
    - status: status_2
-   - os-extended-snapshot-attributes:progress: os-extended-snapshot-attributes:progress
+   - os-extended-snapshot-attributes:progress: os-ext-snap-attr:progress
    - description: description
    - created_at: created_at
    - name: name
    - snapshot: snapshot
    - volume_id: volume_id_5
-   - os-extended-snapshot-attributes:project_id: os-extended-snapshot-attributes:project_id
+   - os-extended-snapshot-attributes:project_id: os-ext-snap-attr:project_id
    - size: size
    - id: id_4
    - metadata: metadata


@@ -350,7 +350,8 @@ Response Example
 Delete an encryption type for v2
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-.. rest_method:: GET /v2/{project_id}/types/{volume_type_id}/encryption/{encryption_id}
+.. rest_method::
+   GET /v2/{project_id}/types/{volume_type_id}/encryption/{encryption_id}
 
 Delete an encryption type.
@@ -434,7 +435,8 @@ Response Example
 Update an encryption type for v2
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-.. rest_method:: PUT /v2/{project_id}/types/{volume_type_id}/encryption/{encryption_id}
+.. rest_method::
+   PUT /v2/{project_id}/types/{volume_type_id}/encryption/{encryption_id}
 
 Update an encryption type.


@@ -15,7 +15,8 @@ Extend volume size
 .. rest_method:: POST /v2/{project_id}/volumes/{volume_id}/action
 
-Extends the size of a volume to a requested size, in gibibytes (GiB). Specify the ``os-extend`` action in the request body.
+Extends the size of a volume to a requested size, in gibibytes (GiB). Specify
+the ``os-extend`` action in the request body.
 
 Preconditions
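The ``os-extend`` action in the hunk above takes a small JSON body. A hedged sketch of constructing it (the ``new_size`` field, in GiB, matches the Block Storage API; the value here is illustrative):

```python
import json

# Request body for POST /v2/{project_id}/volumes/{volume_id}/action.
# new_size must be larger than the volume's current size.
body = {"os-extend": {"new_size": 10}}

payload = json.dumps(body)
```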
@@ -63,7 +64,8 @@ Reset volume statuses
 .. rest_method:: POST /v2/{project_id}/volumes/{volume_id}/action
 
-Administrator only. Resets the status, attach status, and migration status for a volume. Specify the ``os-reset_status`` action in the request body.
+Administrator only. Resets the status, attach status, and migration status for
+a volume. Specify the ``os-reset_status`` action in the request body.
 
 Response codes
@@ -98,7 +100,8 @@ Set image metadata for volume
 .. rest_method:: POST /v2/{project_id}/volumes/{volume_id}/action
 
-Sets the image metadata for a volume. Specify the ``os-set_image_metadata`` action in the request body.
+Sets the image metadata for a volume. Specify the ``os-set_image_metadata``
+action in the request body.
 
 Response codes
 --------------
@@ -145,7 +148,9 @@ Remove image metadata from volume
 .. rest_method:: POST /v2/{project_id}/volumes/{volume_id}/action
 
-Removes image metadata, by key, from a volume. Specify the ``os-unset_image_metadata`` action in the request body and the ``key`` for the metadata key and value pair that you want to remove.
+Removes image metadata, by key, from a volume. Specify the
+``os-unset_image_metadata`` action in the request body and the ``key`` for the
+metadata key and value pair that you want to remove.
 
 Response codes
@@ -224,7 +229,8 @@ Attach volume to server
 .. rest_method:: POST /v2/{project_id}/volumes/{volume_id}/action
 
-Attaches a volume to a server. Specify the ``os-attach`` action in the request body.
+Attaches a volume to a server. Specify the ``os-attach`` action in the request
+body.
 
 Preconditions
@@ -264,7 +270,8 @@ Detach volume from a server
 .. rest_method:: POST /v2/{project_id}/volumes/{volume_id}/action
 
-Detaches a volume from a server. Specify the ``os-detach`` action in the request body.
+Detaches a volume from a server. Specify the ``os-detach`` action in the
+request body.
 
 Preconditions
@@ -301,7 +308,9 @@ Unmanage volume
 .. rest_method:: POST /v2/{project_id}/volumes/{volume_id}/action
 
-Removes a volume from Block Storage management without removing the back-end storage object that is associated with it. Specify the ``os-unmanage`` action in the request body.
+Removes a volume from Block Storage management without removing the back-end
+storage object that is associated with it. Specify the ``os-unmanage`` action
+in the request body.
 
 Preconditions
@@ -337,7 +346,8 @@ Force detach volume
 .. rest_method:: POST /v2/{project_id}/volumes/{volume_id}/action
 
-Forces a volume to detach. Specify the ``os-force_detach`` action in the request body.
+Forces a volume to detach. Specify the ``os-force_detach`` action in the
+request body.
 
 Rolls back an unsuccessful detach operation after you disconnect
 the volume.
@@ -380,7 +390,8 @@ Retype volume
 .. rest_method:: POST /v2/{project_id}/volumes/{volume_id}/action
 
-Change type of existing volume. Specify the ``os-retype`` action in the request body.
+Change type of existing volume. Specify the ``os-retype`` action in the request
+body.
 
 Change the volume type of existing volume, Cinder may migrate the volume to
 proper volume host according to the new volume type.
@@ -488,8 +499,8 @@ Force delete volume
 .. rest_method:: POST /v2/{project_id}/volumes/{volume_id}/action
 
-Attempts force-delete of volume, regardless of state. Specify the ``os-force_delete`` action
-in the request body.
+Attempts force-delete of volume, regardless of state. Specify the
+``os-force_delete`` action in the request body.
 
 Response codes
 --------------
@@ -520,7 +531,8 @@ Update volume bootable status
 .. rest_method:: POST /v2/{project_id}/volumes/{volume_id}/action
 
-Update the bootable status for a volume, mark it as a bootable volume. Specify the ``os-set_bootable`` action in the request body.
+Update the bootable status for a volume, mark it as a bootable volume. Specify
+the ``os-set_bootable`` action in the request body.
 
 Response codes


@@ -242,7 +242,8 @@ List volumes
 .. rest_method:: GET /v2/{project_id}/volumes
 
-Lists summary information for all Block Storage volumes that the project can access.
+Lists summary information for all Block Storage volumes that the project can
+access.
 
 Response codes


@@ -194,7 +194,8 @@ Request Example
 Delete a consistency group
 ~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-.. rest_method:: POST /v3/{project_id}/consistencygroups/{consistencygroup_id}/delete
+.. rest_method::
+   POST /v3/{project_id}/consistencygroups/{consistencygroup_id}/delete
 
 Deletes a consistency group.
@@ -275,7 +276,8 @@ Response Example
 Update a consistency group
 ~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-.. rest_method:: PUT /v3/{project_id}/consistencygroups/{consistencygroup_id}/update
+.. rest_method::
+   PUT /v3/{project_id}/consistencygroups/{consistencygroup_id}/update
 
 Updates a consistency group.


@@ -205,7 +205,8 @@ If UUID is specified, the backup will be restored to the specified volume.
 The specified volume has the following requirements:
 
 * the specified volume status is ``available``.
-* the size of specified volume must be equal to or greater than the size of backup.
+* the size of specified volume must be equal to or greater than the size of
+  backup.
 
 If no existing volume UUID is provided, the backup will be restored to a
 new volume matching the size and name of the originally backed up volume.


@@ -251,9 +251,11 @@ Response Example
 Reset group snapshot status
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-.. rest_method:: POST /v3/{project_id}/group_snapshots/{group_snapshot_id}/action
+.. rest_method::
+   POST /v3/{project_id}/group_snapshots/{group_snapshot_id}/action
 
-Resets the status for a group snapshot. Specifies the ``reset_status`` action in the request body.
+Resets the status for a group snapshot. Specifies the ``reset_status`` action
+in the request body.
 
 Response codes
 --------------


@@ -8,8 +8,8 @@ Create or update group specs for a group type
 .. rest_method:: POST /v3/{project_id}/group_types/{group_type_id}/group_specs
 
-Create group specs for a group type, if the specification key already exists in group specs,
-this API will update the specification as well.
+Create group specs for a group type, if the specification key already exists in
+group specs, this API will update the specification as well.
 
 Response codes
 --------------
@@ -105,7 +105,8 @@ Response Example
 Show one specific group spec for a group type
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-.. rest_method:: GET /v3/{project_id}/group_types/{group_type_id}/group_specs/{spec_id}
+.. rest_method::
+   GET /v3/{project_id}/group_types/{group_type_id}/group_specs/{spec_id}
 
 Show a group spec for a group type,
@@ -150,7 +151,8 @@ Response Example
 Update one specific group spec for a group type
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-.. rest_method:: PUT /v3/{project_id}/group_types/{group_type_id}/group_specs/{spec_id}
+.. rest_method::
+   PUT /v3/{project_id}/group_types/{group_type_id}/group_specs/{spec_id}
 
 Update a group spec for a group type,
@@ -201,7 +203,8 @@ Response Example
 Delete one specific group spec for a group type
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-.. rest_method:: DELETE /v3/{project_id}/group_types/{group_type_id}/group_specs/{spec_id}
+.. rest_method::
+   DELETE /v3/{project_id}/group_types/{group_type_id}/group_specs/{spec_id}
 
 Delete a group spec for a group type,


@@ -371,7 +371,8 @@ Reset group status
 .. rest_method:: POST /v3/{project_id}/groups/{group_id}/action
 
-Resets the status for a group. Specify the ``reset_status`` action in the request body.
+Resets the status for a group. Specify the ``reset_status`` action in the
+request body.
 
 Response codes
 --------------


@@ -124,7 +124,8 @@ Log Disabled Cinder Service Information
 .. rest_method:: PUT /v3/{project_id}/os-services/disable-log-reason
 
-Logs information to the Cinder service table about why a Cinder service was disabled.
+Logs information to the Cinder service table about why a Cinder service was
+disabled.
 
 Specify the service by its host name and binary name.


@@ -1982,24 +1982,24 @@ os-detach:
   in: body
   required: true
   type: object
+os-ext-snap-attr:progress:
+  description: |
+    A percentage value for the build progress.
+  in: body
+  required: true
+  type: string
+os-ext-snap-attr:project_id:
+  description: |
+    The UUID of the owning project.
+  in: body
+  required: true
+  type: string
 os-extend:
   description: |
     The ``os-extend`` action.
   in: body
   required: true
   type: object
-os-extended-snapshot-attributes:progress:
-  description: |
-    A percentage value for the build progress.
-  in: body
-  required: true
-  type: string
-os-extended-snapshot-attributes:project_id:
-  description: |
-    The UUID of the owning project.
-  in: body
-  required: true
-  type: string
 os-force_delete:
   description: |
     The ``os-force_delete`` action.


@@ -10,7 +10,8 @@ Shows and updates quota classes for a project.
 Show quota classes for a project
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-.. rest_method:: GET /v3/{admin_project_id}/os-quota-class-sets/{quota_class_name}
+.. rest_method::
+   GET /v3/{admin_project_id}/os-quota-class-sets/{quota_class_name}
 
 Shows quota class set for a project. If no specific value for the quota class
 resource exists, then the default value will be reported.
@@ -65,7 +66,8 @@ Response Example
 Update quota classes for a project
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-.. rest_method:: PUT /v3/{admin_project_id}/os-quota-class-sets/{quota_class_name}
+.. rest_method::
+   PUT /v3/{admin_project_id}/os-quota-class-sets/{quota_class_name}
 
 Updates quota class set for a project. If the ``quota_class_name`` key does not
 exist, then the API will create one.


@@ -62,7 +62,8 @@ Response Example
 Show quota usage for a project
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-.. rest_method:: GET /v3/{admin_project_id}/os-quota-sets/{project_id}?{usage}=True
+.. rest_method::
+   GET /v3/{admin_project_id}/os-quota-sets/{project_id}?{usage}=True
 
 Shows quota usage for a project.
@@ -204,7 +205,8 @@ Request
 Get default quotas for a project
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-.. rest_method:: GET /v3/{admin_project_id}/os-quota-sets/{project_id}/defaults
+.. rest_method::
+   GET /v3/{admin_project_id}/os-quota-sets/{project_id}/defaults
 
 Gets default quotas for a project.
@@ -253,10 +255,11 @@ Response Example
 Validate setup for nested quota
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-.. rest_method:: GET /v3/{admin_project_id}/os-quota-sets/validate_setup_for_nested_quota_use
+.. rest_method::
+   GET /v3/{admin_project_id}/os-quota-sets/validate_setup_for_nested_quota_use
 
-Validate setup for nested quota, administrator should ensure that Keystone v3 or greater is
-being used.
+Validate setup for nested quota, administrator should ensure that Keystone v3
+or greater is being used.
 
 Response codes
 --------------


@@ -80,7 +80,8 @@ Request Example
 List private volume type access detail
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-.. rest_method:: GET /v3/{project_id}/types/{volume_type}/os-volume-type-access
+.. rest_method::
+   GET /v3/{project_id}/types/{volume_type}/os-volume-type-access
 
 Lists project IDs that have access to private volume type.


@@ -75,13 +75,13 @@ Response Parameters
 .. rest_parameters:: parameters.yaml
 
    - status: status_snap
-   - os-extended-snapshot-attributes:progress: os-extended-snapshot-attributes:progress
+   - os-extended-snapshot-attributes:progress: os-ext-snap-attr:progress
    - description: description_snap_req
    - created_at: created_at
    - name: name
    - user_id: user_id_min
    - volume_id: volume_id_snap
-   - os-extended-snapshot-attributes:project_id: os-extended-snapshot-attributes:project_id
+   - os-extended-snapshot-attributes:project_id: os-ext-snap-attr:project_id
    - size: size
    - id: id_snap
    - metadata: metadata
@@ -374,14 +374,14 @@ Response Parameters
 .. rest_parameters:: parameters.yaml
 
    - status: status_snap
-   - os-extended-snapshot-attributes:progress: os-extended-snapshot-attributes:progress
+   - os-extended-snapshot-attributes:progress: os-ext-snap-attr:progress
    - description: description_snap_req
    - created_at: created_at
    - name: name
    - snapshot: snapshot_obj
    - user_id: user_id_min
    - volume_id: volume_id_snap
-   - os-extended-snapshot-attributes:project_id: os-extended-snapshot-attributes:project_id
+   - os-extended-snapshot-attributes:project_id: os-ext-snap-attr:project_id
    - size: size
    - id: id_snap
    - metadata: metadata
@@ -528,7 +528,8 @@ Response Example
 Delete a snapshot's metadata
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-.. rest_method:: DELETE /v3/{project_id}/snapshots/{snapshot_id}/metadata/{key}
+.. rest_method::
+   DELETE /v3/{project_id}/snapshots/{snapshot_id}/metadata/{key}
 
 Deletes metadata for a snapshot.


@@ -98,7 +98,8 @@ Request
 Request Example
 ---------------
 
-.. literalinclude:: ./samples/volume_type/volume-type-extra-specs-create-update-request.json
+.. literalinclude::
+   ./samples/volume_type/volume-type-extra-specs-create-update-request.json
    :language: javascript
@@ -113,7 +114,8 @@ Response Parameters
 Response Example
 ----------------
 
-.. literalinclude:: ./samples/volume_type/volume-type-extra-specs-create-update-response.json
+.. literalinclude::
+   ./samples/volume_type/volume-type-extra-specs-create-update-response.json
    :language: javascript
@@ -151,7 +153,8 @@ Response Parameters
 Response Example
 ----------------
 
-.. literalinclude:: ./samples/volume_type/volume-type-all-extra-specs-show-response.json
+.. literalinclude::
+   ./samples/volume_type/volume-type-all-extra-specs-show-response.json
    :language: javascript
@@ -182,7 +185,8 @@ Request
 Response Example
 ----------------
 
-.. literalinclude:: ./samples/volume_type/volume-type-specific-extra-specs-show-response.json
+.. literalinclude::
+   ./samples/volume_type/volume-type-specific-extra-specs-show-response.json
    :language: javascript
@@ -213,21 +217,24 @@ Request
 Request Example
 ---------------
 
-.. literalinclude:: ./samples/volume_type/volume-type-specific-extra-specs-update-request.json
+.. literalinclude::
+   ./samples/volume_type/volume-type-specific-extra-specs-update-request.json
    :language: javascript
 
 Response Example
 ----------------
 
-.. literalinclude:: ./samples/volume_type/volume-type-specific-extra-specs-update-response.json
+.. literalinclude::
+   ./samples/volume_type/volume-type-specific-extra-specs-update-response.json
    :language: javascript
 
 Delete extra specification for volume type
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-.. rest_method:: DELETE /v3/{project_id}/types/{volume_type_id}/extra_specs/{key}
+.. rest_method::
+   DELETE /v3/{project_id}/types/{volume_type_id}/extra_specs/{key}
 
 Deletes the specific extra specification assigned to a volume type.
@@ -536,14 +543,16 @@ Request
 Response Example
 ----------------
 
-.. literalinclude:: ./samples/volume_type/encryption-type-specific-specs-show-response.json
+.. literalinclude::
+   ./samples/volume_type/encryption-type-specific-specs-show-response.json
    :language: javascript
 
 Delete an encryption type
 ~~~~~~~~~~~~~~~~~~~~~~~~~
 
-.. rest_method:: DELETE /v3/{project_id}/types/{volume_type_id}/encryption/{encryption_id}
+.. rest_method::
+   DELETE /v3/{project_id}/types/{volume_type_id}/encryption/{encryption_id}
 
 To delete an encryption type for an existing volume type.
@@ -623,7 +632,8 @@ Response Example
 Update an encryption type
 ~~~~~~~~~~~~~~~~~~~~~~~~~
 
-.. rest_method:: PUT /v3/{project_id}/types/{volume_type_id}/encryption/{encryption_id}
+.. rest_method::
+   PUT /v3/{project_id}/types/{volume_type_id}/encryption/{encryption_id}
 
 To update an encryption type for an existing volume type.


@@ -83,7 +83,8 @@ Reset a volume's statuses
 .. rest_method:: POST /v3/{project_id}/volumes/{volume_id}/action
 
 Administrator only. Resets the status, attach status, revert to snapshot,
-and migration status for a volume. Specify the ``os-reset_status`` action in the request body.
+and migration status for a volume. Specify the ``os-reset_status`` action in
+the request body.
 
 Response codes
 --------------
@@ -117,8 +118,8 @@ Revert volume to snapshot
 .. rest_method:: POST /v3/{project_id}/volumes/{volume_id}/action
 
-Revert a volume to its latest snapshot, this API only support reverting a detached volume,
-and the volume status must be ``available``.
+Revert a volume to its latest snapshot, this API only support reverting a
+detached volume, and the volume status must be ``available``.
 
 Available since API microversion ``3.40``.
@ -158,7 +159,8 @@ Set image metadata for a volume
.. rest_method:: POST /v3/{project_id}/volumes/{volume_id}/action

Sets the image metadata for a volume. Specify the ``os-set_image_metadata``
action in the request body.

Response codes
--------------
@ -205,7 +207,9 @@ Remove image metadata from a volume
.. rest_method:: POST /v3/{project_id}/volumes/{volume_id}/action

Removes image metadata, by key, from a volume. Specify the
``os-unset_image_metadata`` action in the request body and the ``key`` for the
metadata key and value pair that you want to remove.

Response codes
--------------
@ -284,7 +288,8 @@ Attach volume to a server
.. rest_method:: POST /v3/{project_id}/volumes/{volume_id}/action

Attaches a volume to a server. Specify the ``os-attach`` action in the request
body.

Preconditions
@ -325,7 +330,8 @@ Detach volume from server
.. rest_method:: POST /v3/{project_id}/volumes/{volume_id}/action

Detaches a volume from a server. Specify the ``os-detach`` action in the
request body.

Preconditions
@ -398,7 +404,8 @@ Force detach a volume
.. rest_method:: POST /v3/{project_id}/volumes/{volume_id}/action

Forces a volume to detach. Specify the ``os-force_detach`` action in the
request body.

Rolls back an unsuccessful detach operation after you disconnect
the volume.
@ -439,7 +446,8 @@ Retype a volume
.. rest_method:: POST /v3/{project_id}/volumes/{volume_id}/action

Change the type of an existing volume. Specify the ``os-retype`` action in the
request body.

When changing the volume type of an existing volume, Cinder may migrate the
volume to a proper volume host according to the new volume type.
@ -551,8 +559,8 @@ Force delete a volume
.. rest_method:: POST /v3/{project_id}/volumes/{volume_id}/action

Attempts force-delete of a volume, regardless of state. Specify the
``os-force_delete`` action in the request body.

Response codes
--------------
@ -429,10 +429,10 @@ volume APIs.
  it will raise badRequest error.

- Update volume API
  Before 3.53, even if the user didn't pass any valid parameters in the
  request body, the volume was updated.
  But in 3.53, the user will need to pass at least one valid parameter in the
  request body, otherwise it will return a 400 error.

3.54
----
@ -455,8 +455,8 @@ related api (create/show/list detail transfer APIs) responses.
3.58
----

Add ``project_id`` attribute to response body of list groups with detail and
show group detail APIs.

3.59
----
@ -24,11 +24,11 @@ for **non-admin** user with json format:
   }

The key ``volume`` (singular) here stands for the resource the filters apply
to, and the value accepts a list containing the allowed filter keys. Once the
configuration file is changed and the API service is restarted, cinder will
only recognize these filter keys. **NOTE**: the default configuration file
includes all the filters that are already enabled.

Which filter keys are supported?
--------------------------------
@ -36,8 +36,8 @@ Which filter keys are supported?
Not all the attributes are supported at present, so we add this table below to
indicate which filter keys are valid and can be used in the configuration.

Since v3.34, a ``~`` suffix can be used to indicate support for querying a
resource by inexact match; for example, if we have a configuration file as
below:

.. code-block:: json
@ -45,36 +45,39 @@ for example, if we have a configuration file as below:
"volume": ["name~"] "volume": ["name~"]
} }
User can query volume both by ``name=volume`` and ``name~=volume``, and the volumes User can query volume both by ``name=volume`` and ``name~=volume``, and the
named ``volume123`` and ``a_volume123`` are both valid for second input while neither are volumes named ``volume123`` and ``a_volume123`` are both valid for second input
valid for first. The supported APIs are marked with "*" below in the table. while neither are valid for first. The supported APIs are marked with "*" below
in the table.
.. list-table::
   :header-rows: 1

   * - API
     - Valid filter keys
   * - list volume*
     - id, group_id, name, status, bootable, migration_status, metadata, host,
       image_metadata, availability_zone, user_id, volume_type_id, project_id,
       size, description, replication_status, multiattach
   * - list snapshot*
     - id, volume_id, user_id, project_id, status, volume_size, name,
       description, volume_type_id, group_snapshot_id, metadata,
       availability_zone
   * - list backup*
     - id, name, status, container, availability_zone, description, volume_id,
       is_incremental, size, host, parent_id
   * - list group*
     - id, user_id, status, availability_zone, group_type, name, description,
       host
   * - list g-snapshot*
     - id, name, description, group_id, group_type_id, status
   * - list attachment*
     - id, volume_id, instance_id, attach_status, attach_mode, connection_info,
       mountpoint, attached_host
   * - list message*
     - id, event_id, resource_uuid, resource_type, request_id, message_level,
       project_id
   * - get pools
     - name, volume_type
   * - list types (3.51)
     - is_public, extra_specs
@ -22,7 +22,8 @@ DESCRIPTION
:command:`cinder-manage` provides control of cinder database migration,
and provides an interface to get information about the current state
of cinder.

More information about OpenStack Cinder is available at `OpenStack
Cinder <https://docs.openstack.org/cinder/latest/>`_.

OPTIONS
=======
@ -36,12 +37,15 @@ For example, to obtain a list of the cinder services currently running:
Run without arguments to see a list of available command categories:

``cinder-manage``

Categories are shell, logs, migrate, db, volume, host, service, backup,
version, and config. Detailed descriptions are below.

You can also run with a category argument such as 'db' to see a list of all
commands in that category:

``cinder-manage db``

These sections describe the available categories and arguments for
cinder-manage.

Cinder Db
~~~~~~~~~
@ -52,7 +56,8 @@ Print the current database version.
``cinder-manage db sync [--bump-versions] [version]``

Sync the database up to the most recent version. This is the standard way to
create the db as well.

This command interprets the following options when it is invoked:
@ -65,27 +70,34 @@ version Database version
``cinder-manage db purge [<number of days>]``

Purge database entries that are marked as deleted, that are older than the
number of days specified.
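For example, to purge rows that were soft-deleted more than 30 days ago (the
retention period here is illustrative):

.. code-block:: console

   # cinder-manage db purge 30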
``cinder-manage db online_data_migrations [--max-count <n>]``

Perform online data migrations for database upgrade between releases in
batches.

This command interprets the following options when it is invoked:
.. code-block:: console

   --max-count Maximum number of objects to migrate. If not specified, all
               possible migrations will be completed, in batches of 50 at a
               time.

Returns exit status 0 if no (further) updates are possible, 1 if the
``--max-count`` option was used and some updates were completed successfully
(even if others generated errors), 2 if some updates generated errors and no
other migrations were able to take effect in the last batch attempted, or 127
if invalid input is provided (e.g. non-numeric max-count).

This command should be run after upgrading the database schema. If it exits
with partial updates (exit status 1) it should be called again, even if some
updates initially generated errors, because some updates may depend on others
having completed. If it exits with status 2, intervention is required to
resolve the issue causing remaining updates to fail. It should be considered
successfully completed only when the exit status is 0.
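As a sketch (not part of cinder itself), the re-run-on-partial-updates
behaviour described above could be scripted as:

.. code-block:: console

   $ cinder-manage db online_data_migrations
   $ while [ $? -eq 1 ]; do cinder-manage db online_data_migrations; done

The loop stops on exit status 0 (done) as well as on 2 or 127, which require
manual intervention rather than another retry.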
Cinder Logs
~~~~~~~~~~~
@ -96,7 +108,8 @@ Displays cinder errors from log files.
``cinder-manage logs syslog [<number>]``

Displays the most recent cinder entries from syslog. The optional number
argument specifies the number of entries to display (default 10).

Cinder Shell
~~~~~~~~~~~~
@ -128,23 +141,27 @@ Cinder Volume
Delete a volume without first checking that the volume is available.

``cinder-manage volume update_host --currenthost <current host>
--newhost <new host>``

Updates the host name of all volumes currently associated with a specified
host.
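For example, after renaming a storage node (the host names below are
hypothetical):

.. code-block:: console

   # cinder-manage volume update_host --currenthost old-node@lvmdriver \
     --newhost new-node@lvmdriver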
Cinder Host
~~~~~~~~~~~

``cinder-manage host list [<zone>]``

Displays a list of all physical hosts and their zone. The optional zone
argument allows the list to be filtered on the requested zone.
Cinder Service
~~~~~~~~~~~~~~

``cinder-manage service list``

Displays a list of all cinder services and their host, zone, status, state and
when the information was last updated.

``cinder-manage service remove <service> <host>``
@ -155,11 +172,14 @@ Cinder Backup
``cinder-manage backup list``

Displays a list of all backups (including ones in progress) and the host on
which the backup operation is running.

``cinder-manage backup update_backup_host --currenthost <current host>
--newhost <new host>``

Updates the host name of all backups currently associated with a specified
host.

Cinder Version
~~~~~~~~~~~~~~
@ -173,12 +193,15 @@ Cinder Config
``cinder-manage config list [<param>]``

Displays the current configuration parameters (options) for Cinder. The
optional flag parameter may be used to display the configuration of one
parameter.

FILES
=====

The cinder.conf file contains configuration information in the form of
python-gflags.

The cinder-manage.log file logs output from cinder-manage.
@ -190,4 +213,5 @@ SEE ALSO
BUGS
====

* Cinder is hosted on Launchpad so you can view current bugs at `Bugs :
  Cinder <https://bugs.launchpad.net/cinder/>`__
@ -5,10 +5,10 @@ DataCore SANsymphony volume driver
DataCore SANsymphony volume driver provides OpenStack Compute instances with
access to the SANsymphony(TM) Software-defined Storage Platform.

When volumes are created in OpenStack, the driver creates corresponding
virtual disks in the SANsymphony server group. When a volume is attached to an
instance in OpenStack, a Linux host is registered and the corresponding virtual
disk is served to the host in the SANsymphony server group.

Requirements
------------
@ -18,8 +18,10 @@ Requirements
* OpenStack Integration has been tested with the OpenStack environment
  installed on Ubuntu 16.04. For the list of qualified Linux host operating
  system types, refer to the `Linux Host Configuration
  Guide <https://datacore.custhelp.com/app/answers/detail/a_id/1546>`_
  on the `DataCore Technical Support Web
  page <https://datacore.custhelp.com/>`_.

* If using multipath I/O, ensure that iSCSI ports are logged in on all
  OpenStack Compute nodes. (All Fibre Channel ports will be logged in
@ -321,8 +323,8 @@ for additional information.
Detaching Volumes and Terminating Instances
-------------------------------------------

Notes about the expected behavior of SANsymphony software when detaching
volumes and terminating instances in OpenStack:

1. When a volume is detached from a host in OpenStack, the virtual disk will be
   unserved from the host in SANsymphony, but the virtual disk will not be
@ -2,10 +2,11 @@
Dell EMC POWERMAX iSCSI and FC drivers
======================================

The Dell EMC PowerMax drivers, ``PowerMaxISCSIDriver`` and
``PowerMaxFCDriver``, support the use of Dell EMC PowerMax and VMAX storage
arrays with the Cinder Block Storage project. They both provide equivalent
functions and differ only in support for their respective host attachment
methods.

The drivers perform volume operations by communicating with the back-end
PowerMax storage management software. They use the Requests HTTP library to
@ -256,8 +257,8 @@ PowerMax Driver Integration
- i.e., on the same server running Solutions Enabler; on a server
  connected to the Solutions Enabler server; or using the eManagement
  container application (containing Solutions Enabler and Unisphere for
  PowerMax). See ``Dell EMC Solutions Enabler 9.0.x Installation and
  Configuration Guide`` at ``support.emc.com``.

2. FC Zoning with PowerMax
@ -275,8 +276,8 @@ complex and open-zoning would raise security concerns.
.. note::

   You can only ping the PowerMax iSCSI target ports when there is a valid
   masking view. An attach operation creates this masking view.

4. Configure Block Storage in cinder.conf
@ -417,8 +418,8 @@ complex and open-zoning would raise security concerns.
   # driver_ssl_cert_verify = True

#. Ensure ``driver_ssl_cert_verify`` is set to ``True`` in the ``cinder.conf``
   backend stanza if steps 3-6 are skipped, otherwise ensure both
   ``driver_ssl_cert_verify`` and ``driver_ssl_cert_path`` are set in the
   ``cinder.conf`` backend stanza.
@ -439,10 +440,10 @@ complex and open-zoning would raise security concerns.
.. note::

   It is possible to create as many volume types as there are Service
   Level and Workload (available) combinations for provisioning volumes. The
   pool_name is the additional property which has to be set and is of the
   format: ``<ServiceLevel>+<Workload>+<SRP>+<Array ID>``.
   This can be obtained from the output of ``cinder get-pools --detail``.
   Workload is NONE for PowerMax or any All Flash with PowerMax OS (5978)
   or greater.
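As an illustrative sketch (the type name and array ID below are placeholders),
a volume type carrying such a pool_name could be created with:

.. code-block:: console

   $ cinder type-create POWERMAX_DIAMOND_OLTP
   $ cinder type-key POWERMAX_DIAMOND_OLTP set \
       pool_name=Diamond+OLTP+SRP_1+000197800123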
@ -490,8 +491,8 @@ complex and open-zoning would raise security concerns.
.. note::

   PowerMax and Hybrid support Optimized, Diamond, Platinum, Gold, Silver,
   Bronze, and NONE service levels. VMAX All Flash supports Diamond and
   None. Hybrid and All Flash support DSS_REP, DSS, OLTP_REP, OLTP, and None
   workloads, the latter up until ucode 5977. There is no support for
   workloads in PowerMax OS (5978) or greater.
@ -913,7 +914,8 @@ Prerequisites - PowerMax
**Outcome - Block Storage (Cinder)**

Volume is created against volume type and QOS is enforced with the parameters
above.

USE CASE 4 - Default values
@ -1237,7 +1239,8 @@ https://docs.openstack.org/nova/latest/admin/configuring-migrations.html
By default, the RPC messaging client is set to timeout after 60 seconds,
meaning that if any operation you perform takes longer than 60 seconds to
complete, the operation will time out and fail with the ERROR message
"Messaging Timeout: Timed out waiting for a reply to message ID
[message_id]"

If this occurs, increase the ``rpc_response_timeout`` flag value in
``cinder.conf`` and ``nova.conf`` on all Cinder and Nova nodes and restart
@ -1364,8 +1367,8 @@ for configuration information.
Multi-attach Architecture
~~~~~~~~~~~~~~~~~~~~~~~~~

In PowerMax, a volume cannot belong to two or more FAST storage groups at the
same time. This can cause issues when we are attaching a volume to multiple
instances on different hosts. To get around this limitation, we leverage both
cascaded storage groups and non-FAST storage groups (i.e. a storage group with
no service level, workload, or SRP specified).
@ -1527,13 +1530,13 @@ Configure the source and target arrays
.. note::

   ``replication_device`` key value pairs will need to be on the same
   line (separated by commas) in cinder.conf. They are displayed on
   separate lines above for readability.

* ``target_device_id`` is a unique PowerMax array serial number of the
  target array. For full failover functionality, the source and target
  PowerMax arrays must be discovered and managed by the same U4V server.

* ``remote_port_group`` is the name of a PowerMax port group that has been
  pre-configured to expose volumes managed by this backend in the event
@ -1546,11 +1549,11 @@ Configure the source and target arrays
* ``rdf_group_label`` is the name of a PowerMax SRDF group that has been
  pre-configured between the source and target arrays.

* ``allow_extend`` is a flag for allowing the extension of replicated
  volumes. To extend a volume in an SRDF relationship, this relationship
  must first be broken, both the source and target volumes are then
  independently extended, and then the replication relationship is
  re-established. If not explicitly set, this flag defaults to ``False``.

.. note::

   As the SRDF link must be severed, due caution should be exercised when
@ -1566,19 +1569,21 @@ Configure the source and target arrays
* ``metro_use_bias`` is a flag to indicate if 'bias' protection should be
  used instead of Witness. This defaults to False.

* ``allow_delete_metro`` is a flag to indicate if metro devices can be
  deleted. All Metro devices in an RDF group need to be managed together, so
  in order to delete one of the pairings, the whole group needs to be first
  suspended. Because of this, we require this flag to be explicitly set.
  This flag defaults to False.
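Putting the keys above together on one line as the earlier note requires, a
``replication_device`` entry might look like this (the serial number, port
group, and RDF group label are placeholders):

.. code-block:: ini

   [powermax_backend]
   replication_device = target_device_id:000197811111, remote_port_group:os-replication-pg, rdf_group_label:os-rdf-group-1, allow_extend:True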
.. note::

   Service Level and Workload: An attempt will be made to create a storage
   group on the target array with the same service level and workload
   combination as the primary. However, if this combination is unavailable
   on the target (for example, in a situation where the source array is a
   Hybrid, the target array is an All Flash, and an All Flash incompatible
   service level like Bronze is configured), no service level will be
   applied.

.. note::

   The PowerMax Cinder drivers can support a single replication target per
@ -1603,7 +1608,8 @@ Volume replication interoperability with other features
Most features are supported, except for the following: Most features are supported, except for the following:
* Replication Group operations are available for volumes in Synchronous mode only. * Replication Group operations are available for volumes in Synchronous mode
only.
* Storage-assisted retype operations on replication-enabled PowerMax volumes * Storage-assisted retype operations on replication-enabled PowerMax volumes
(moving from a non-replicated type to a replicated-type and vice-versa. (moving from a non-replicated type to a replicated-type and vice-versa.
@ -1611,9 +1617,9 @@ Most features are supported, except for the following:
not supported. not supported.
* It is not currently possible to extend SRDF/Metro protected volumes. * It is not currently possible to extend SRDF/Metro protected volumes.
If a bigger volume size is required for a SRDF/Metro protected volume, this can be If a bigger volume size is required for a SRDF/Metro protected volume, this
achieved by cloning the original volume and choosing a larger size for the new can be achieved by cloning the original volume and choosing a larger size for
cloned volume. the new cloned volume.
* The image volume cache functionality is supported (enabled by setting * The image volume cache functionality is supported (enabled by setting
``image_volume_cache_enabled = True``), but one of two actions must be taken ``image_volume_cache_enabled = True``), but one of two actions must be taken
@@ -1955,19 +1961,19 @@ the same format:

Pool example 2: Diamond+SRP_1+111111111111

.. list-table:: Pool values
   :header-rows: 1

   * - Key
     - Value
   * - service_level
     - The service level of the volume to be managed
   * - workload
     - The workload of the volume to be managed
   * - SRP
     - The Storage Resource Pool configured for use by the backend
   * - array_id
     - The PowerMax serial number (12 digit numerical)
Manage Volumes
@@ -608,8 +608,8 @@ Obsolete extra specs

Force detach
------------

The user could use `os-force_detach` action to detach a volume from all its
attached hosts. For more detail, please refer to
https://developer.openstack.org/api-ref/block-storage/v2/?expanded=force-detach-volume-detail#force-detach-volume
@@ -226,9 +226,9 @@ The VxFlex OS driver supports these configuration options:

Volume Types
------------

Volume types can be used to specify characteristics of volumes allocated via
the VxFlex OS Driver. These characteristics are defined as ``Extra Specs``
within ``Volume Types``.

VxFlex OS Protection Domain and Storage Pool
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -301,9 +301,9 @@ is attached to an instance, and thus to a compute node/SDC.

Using VxFlex OS Storage with a containerized overcloud
------------------------------------------------------

When using a containerized overcloud, such as one deployed via TripleO or
Red Hat OpenStack version 12 and above, there is an additional step that must
be performed.

Before deploying the overcloud
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -318,7 +318,8 @@ be found at

and
``/usr/share/openstack-tripleo-heat-templates/docker/services/cinder-volume.yaml``

Two lines need to be inserted into the list of mapped volumes in each
container.

.. code-block:: yaml

@@ -328,7 +329,8 @@ Two lines need to be inserted into the list of mapped volumes in each container.

.. end

The changes to the two heat templates are identical; as an example,
the original nova-compute file should have a section that resembles the
following:

.. code-block:: yaml
@@ -27,7 +27,8 @@ With the Hedvig Volume Driver for OpenStack, you can :

  servers.

- Deliver predictable performance:
  Receive consistent high-IOPS performance for demanding applications
  through massive parallelism, dedicated flash, and edge cache
  configurations.

Requirement
-----------
@@ -246,7 +246,8 @@ Adaptive Flash Cache enabled.

Other restrictions and considerations for ``hpe3par:compression``:

- For a compressed volume, the minimum volume size needed is 16 GB; otherwise
  the resulting volume will be created successfully but will not be a
  compressed volume.
- A full provisioned volume cannot be compressed,
  if a compression is enabled and provisioning type requested is full,
@@ -595,7 +595,8 @@ DS8000 storage systems.

* Metro Mirror replication is enabled on DS8000 storage systems.

#. Perform the following procedure, replacing the values in the example with
   your own:

.. code-block:: console
@@ -229,11 +229,13 @@ Note that more than one ``replication_device`` line can be added to allow for

multi-target device replication.

A volume is only replicated if the volume is of a volume-type that has
the extra spec ``replication_enabled`` set to ``<is> True``. You can optionally
specify the ``replication_type`` key to specify ``<in> sync`` or ``<in> async``
to choose the type of replication for that volume. If not specified it will
default to ``async``.

To create a volume type that specifies replication to remote back ends with
async replication:

.. code-block:: console

@@ -244,48 +246,32 @@ To create a volume type that specifies replication to remote back ends with asyn

The following table contains the optional configuration parameters available
for async replication configuration with the Pure Storage array.
.. list-table:: Pure Storage replication configuration options
   :header-rows: 1

   * - Option
     - Description
     - Default
   * - ``pure_replica_interval_default``
     - Snapshot replication interval in seconds.
     - ``3600``
   * - ``pure_replica_retention_short_term_default``
     - Retain all snapshots on target for this time (in seconds).
     - ``14400``
   * - ``pure_replica_retention_long_term_per_day_default``
     - Retain how many snapshots for each day.
     - ``3``
   * - ``pure_replica_retention_long_term_default``
     - Retain snapshots per day on target for this time (in days).
     - ``7``
   * - ``pure_replication_pg_name``
     - Pure Protection Group name to use for async replication (will be created
       if it does not exist).
     - ``cinder-group``
   * - ``pure_replication_pod_name``
     - Pure Pod name to use for sync replication (will be created if it does
       not exist).
     - ``cinder-pod``
.. note::

@@ -317,8 +303,9 @@ file:

.. note::

   Arrays with very good data reduction rates
   (compression/data deduplication/thin provisioning) can get *very* large
   oversubscription rates applied.

Scheduling metrics
~~~~~~~~~~~~~~~~~~
@@ -50,8 +50,10 @@ Configuration

    Target> iscsi target auth incominguser add iqn.2018-02.com.veritas:target02 user1
    ...

#. Ensure that the Veritas Access iSCSI target service is online. If the
   Veritas Access iSCSI target service is not online, enable the service by
   using the CLI or REST API.

.. code-block:: console
@@ -146,10 +146,11 @@ created from an image:

Adapter type
~~~~~~~~~~~~

The VMware vCenter VMDK driver supports the adapter types ``LSI Logic
Parallel``, ``BusLogic Parallel``, ``LSI Logic SAS``, ``VMware Paravirtual``
and ``IDE`` for volumes. Use the ``vmware:adapter_type`` extra spec key to
specify the adapter type. The following table captures the mapping for adapter
types:

.. list-table:: Extra spec entry to adapter type mapping
   :header-rows: 1
@@ -229,10 +229,10 @@ configuration.

We advise setting up automated tests because the Block Storage API has a lot
of API calls and you'll want to test each of them against an admin user, an
observer-admin user, and a "regular" end user. Further, if you anticipate that
you may require finer-grained access than outlined in this example (for
example, you would like a "creator" role that can create and read, but not
delete), your configuration will be all the more complex and hence require more
extensive testing that you won't want to do by hand.

Step 1: Create a new role
`````````````````````````
@@ -17,32 +17,46 @@

Adding a Method to the OpenStack API
====================================

The interface is a mostly RESTful API. REST stands for Representational State
Transfer and provides an architecture "style" for distributed systems using
HTTP for transport. Figure out a way to express your request and response in
terms of resources that are being created, modified, read, or destroyed.

Routing
-------

To map URLs to controllers+actions, OpenStack uses the Routes package, a clone
of Rails routes for Python implementations. See http://routes.groovie.org/ for
more information.

URLs are mapped to "action" methods on "controller" classes in
``cinder/api/openstack/__init__/ApiRouter.__init__``.

See http://routes.readthedocs.io/en/latest/ for all syntax, but you'll probably
just need these two:

- mapper.connect() lets you map a single URL to a single action on a
  controller.
- mapper.resource() connects many standard URLs to actions on a controller.
Controllers and actions
-----------------------

Controllers live in ``cinder/api/openstack``, and inherit from
cinder.wsgi.Controller.

See ``cinder/api/v2/volumes.py`` for an example.

Action methods take parameters that are sucked out of the URL by
mapper.connect() or .resource(). The first two parameters are self and the
WebOb request, from which you can get the req.environ, req.body, req.headers,
etc.
Serialization
-------------

Actions return a dictionary, and wsgi.Controller serializes that to JSON or XML
based on the request's content-type.
Errors
------
@@ -19,20 +19,29 @@ Running Cinder API under Apache

Files
-----

Copy the file etc/cinder/api-httpd.conf to the appropriate location for your
Apache server, most likely:

``/etc/httpd/conf.d/cinder_wsgi.conf``

Update this file to match your system configuration (for example, some
distributions put httpd logs in the apache2 directory and some in the httpd
directory).

Create the directory /var/www/cgi-bin/cinder/. You can either hard or soft link
the file cinder/wsgi/wsgi.py to be osapi_volume under the
/var/www/cgi-bin/cinder/ directory. For a distribution appropriate place, it
should probably be copied to:

``/usr/share/openstack/cinder/httpd/cinder.py``

Cinder's primary configuration file (etc/cinder.conf) and the PasteDeploy
configuration file (etc/cinder-paste.ini) must be readable to httpd in one of
the default locations described in Configuring Cinder.

Access Control
--------------

If you are running with a Linux kernel security module enabled (for example
SELinux or AppArmor), make sure that the configuration file has the appropriate
context to access the linked file.
@@ -21,8 +21,8 @@ word ``volume``::

If a user makes a request without specifying a version, they will get
the ``_MIN_API_VERSION`` as defined in
``cinder/api/openstack/api_version_request.py``. This value is currently
``3.0`` and is expected to remain so for quite a long time.

The Nova project was the first to implement microversions. For full
details please read Nova's `Kilo spec for microversions
@@ -20,7 +20,11 @@ Cinder System Architecture

The Cinder Block Storage Service is intended to be run on one or more nodes.

Cinder uses a sql-based central database that is shared by all Cinder services
in the system. The amount and depth of the data fits into a sql database quite
well. For small deployments this seems like an optimal solution. For larger
deployments, and especially if security is a concern, cinder will be moving
towards multiple data stores with some kind of aggregation system.

Components
----------

@@ -47,8 +51,11 @@ Below you will find a brief explanation of the different components.

* DB: sql database for data storage. Used by all components (LINKS NOT SHOWN).
* Web Dashboard: potential external component that talks to the api.
* api: component that receives http requests, converts commands and
  communicates with other components via the queue or http.
* Auth Manager: component responsible for users/projects/and roles. Can backend
  to DB or LDAP. This is not a separate binary, but rather a python class that
  is used by most components in the system.
* scheduler: decides which host gets each volume.
* volume: manages dynamically attachable block devices.
* backup: manages backups of block storage devices.
@@ -24,7 +24,8 @@ it may be difficult to decipher from the code.

Attach/Detach Operations are multi-part commands
================================================

There are three things that happen in the workflow for an attach or detach
call.

1. Update the status of the volume in the DB (ie attaching/detaching)
@@ -50,13 +51,15 @@ reserve_volume(self, context, volume)

Probably the most simple call in to Cinder. This method simply checks that
the specified volume is in an "available" state and can be attached.
Any other state results in an Error response notifying Nova that the volume
is NOT available. The only valid state for this call to succeed is
"available".

NOTE: multi-attach will add "in-use" to the above acceptable states.

If the volume is in fact available, we immediately issue an update to the
Cinder database and mark the status of the volume to "attaching" thereby
reserving the volume so that it won't be used by another API call anywhere
else.
initialize_connection(self, context, volume, connector)
-------------------------------------------------------

@@ -87,9 +90,10 @@ shown here:

      }
    }

In the process of building this data structure, the Cinder Volume Manager makes
a number of calls to the backend driver, and builds a volume_attachment entry
in the database to store the connection information passed in via the connector
object.

driver.validate_connector
*************************
@@ -48,7 +48,8 @@ attachment-create

-----------------

```
cinder --os-volume-api-version 3.27 \
    attachment-create <volume-id> <instance-uuid>
```

The attachment_create call simply creates an empty Attachment record for the
@@ -123,10 +124,11 @@ attachment-update

```
cinder --os-volume-api-version 3.27 attachment-update <attachment-id>
```

Once we have a reserved volume, this CLI can be used to update an attachment
for a cinder volume. This call is designed to be more of an attachment
completion than anything else. It expects the value of a connector object to
notify the driver that the volume is going to be connected and where it's being
connected to. The usage is the following::

    usage: cinder --os-volume-api-version 3.27 attachment-update
                  <attachment-id> ...
@@ -78,7 +78,8 @@ During the documentation build a number of things happen:

  will look consistent with all the other OpenStack documentation.
* The resulting HTML is put into ``doc/build/html``.
* Sample files like cinder.conf.sample are generated and put into
  ``doc/source/_static``.
* All of Cinder's ``.py`` files are processed and the docstrings are used to
  generate the files under ``doc/source/contributor/api``
@@ -1,9 +1,9 @@

Code Reviews
============

Cinder follows the same `Review guidelines`_ outlined by the OpenStack
community. This page provides additional information that is helpful for
reviewers of patches to Cinder.

Gerrit
------

@@ -16,7 +16,8 @@ requests to the Cinder repository will be ignored`.

See `Quick Reference`_ for information on quick reference for developers.
See `Getting Started`_ for information on how to get started using Gerrit.
See `Development Workflow`_ for more detailed information on how to work with
Gerrit.

Targeting Milestones
--------------------
@@ -20,8 +20,8 @@ Contributor Guide

In this section you will find information on how to contribute to Cinder.
Content includes architectural overviews, tips and tricks for setting up a
development environment, and information on Cinder's lower level programming
APIs.

Programming HowTos and Tutorials
--------------------------------
@@ -14,23 +14,26 @@ Jenkins performs tasks such as:

  Run Pylint checks on proposed code changes that have been reviewed.

`gate-cinder-python27`_
  Run unit tests using python2.7 on proposed code changes that have been
  reviewed.

`gate-cinder-python34`_
  Run unit tests using python3.4 on proposed code changes that have been
  reviewed.

`cinder-coverage`_
  Calculate test coverage metrics.

`cinder-docs`_
  Build this documentation and push it to
  `OpenStack Cinder <https://docs.openstack.org/cinder/latest/>`_.

`cinder-merge-release-tags`_
  Merge reviewed code into the git repository.

`cinder-tarball`_
  Do ``python setup.py sdist`` to create a tarball of the cinder code and
  upload it to http://tarballs.openstack.org/cinder

.. _Jenkins: http://jenkins-ci.org
.. _Launchpad: https://launchpad.net
@@ -1,8 +1,8 @@

Project hosting with Launchpad
==============================

`Launchpad`_ hosts the Cinder project. The Cinder project homepage on Launchpad
is https://launchpad.net/cinder.

Launchpad credentials
---------------------

@@ -21,9 +21,11 @@ Mailing list

The mailing list email is ``openstack@lists.openstack.org``. This is a common
mailing list across the OpenStack projects. To participate in the mailing list:

#. Subscribe to the list at
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

The mailing list archives are at
http://lists.openstack.org/pipermail/openstack/.

Bug tracking

@@ -40,7 +42,8 @@ https://blueprints.launchpad.net/cinder.

Technical support (Answers)
---------------------------

Cinder no longer uses Launchpad Answers to track Cinder technical support
questions.

Note that `Ask OpenStack`_ (which is not hosted on Launchpad) can
be used for technical support requests.
@ -248,15 +248,15 @@ the RBD driver as a reference for this implementation.
If you would like to implement a driver specific volume migration for
your driver, the API method associated with the driver specific migration
is the following admin only method::

    migrate_volume(self, ctxt, volume, host)

If your driver is taken as the destination back-end for a generic host-assisted
migration and your driver needs to update the volume model after a successful
migration, you need to implement the following method for your driver::

    update_migrated_volume(self, ctxt, volume, new_volume, original_volume_status)
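A minimal sketch of how a driver might fill in those two hooks. This is illustrative only, not actual Cinder code: the class name, the dict-shaped ``volume``/``host`` arguments, and the backend check are assumptions for the example; the ``(migrated, model_update)`` return shape follows the convention the reference drivers use.

```python
# Hypothetical driver sketch; real drivers receive context and volume
# objects, simplified here to plain dicts for illustration.
class ExampleDriver:
    def migrate_volume(self, ctxt, volume, host):
        # Return (migrated, model_update); (False, None) tells Cinder to
        # fall back to the generic host-assisted migration.
        if host.get('backend') != 'example_backend':
            return False, None
        # ... move the data on the backend here ...
        return True, {'host': host['host']}

    def update_migrated_volume(self, ctxt, volume, new_volume,
                               original_volume_status):
        # Return model updates so the original volume record points at
        # the newly migrated backing storage.
        return {'_name_id': new_volume['id'],
                'provider_location': new_volume.get('provider_location')}
```

When `migrate_volume` declines (returns ``(False, None)``), the scheduler falls back to the generic migration path, after which `update_migrated_volume` lets the driver fix up the model.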
Required methods


@@ -127,9 +127,9 @@ create a new column with desired properties and start moving the data (in a
live manner). In the worst case the old column can be removed in N+2. The whole
procedure is described in more detail below.

In the aforementioned case we need to make more complicated steps stretching
through 3 releases - always keeping backwards compatibility. In short, when
we want to start to move data inside the DB, then in N we should:
* Add a new column for the data.
* Write data in both places (N-1 needs to read it).
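The "write data in both places" step can be sketched as follows. This is an illustration, not actual Cinder code: the class, attribute names, and setter are invented for the example; a real implementation would do this at the ORM/column level.

```python
# Illustrative sketch of the dual-write compatibility step: while release
# N-1 still reads the old column, release N mirrors every write of the
# new column into the old one so both releases stay consistent.
class Volume:
    def __init__(self):
        self.display_name_old = None   # legacy column, still read by N-1
        self.display_name = None       # new column introduced in N

    def set_display_name(self, value):
        # Write both columns during the transition window.
        self.display_name = value
        self.display_name_old = value
```

Once N+1 is the oldest running release, reads move to the new column, and the old column (and the dual write) can be dropped in N+2.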


@@ -17,35 +17,95 @@
AMQP and Cinder
===============

AMQP is the messaging technology chosen by the OpenStack cloud. The AMQP
broker, either RabbitMQ or Qpid, sits between any two Cinder components and
allows them to communicate in a loosely coupled fashion. More precisely, Cinder
components (the compute fabric of OpenStack) use Remote Procedure Calls (RPC
hereinafter) to communicate to one another; however such a paradigm is built
atop the publish/subscribe paradigm so that the following benefits can be
achieved:
* Decoupling between client and servant (such as the client does not need
  to know where the servant's reference is).
* Full a-synchronism between client and servant (such as the client does
  not need the servant to run at the same time of the remote call).
* Random balancing of remote calls (such as if more servants are up and
  running, one-way calls are transparently dispatched to the first
  available servant).
Cinder uses direct, fanout, and topic-based exchanges. The architecture looks
like the one depicted in the figure below:

.. image:: /images/rpc/arch.png
   :width: 60%

..
Cinder implements RPC (both request+response, and one-way, respectively
nicknamed 'rpc.call' and 'rpc.cast') over AMQP by providing an adapter class
which takes care of marshaling and unmarshaling of messages into function
calls. Each Cinder service (for example Scheduler, Volume, etc.) creates two
queues at the initialization time, one which accepts messages with routing keys
'NODE-TYPE.NODE-ID' (for example cinder-volume.hostname) and another, which
accepts messages with routing keys as generic 'NODE-TYPE' (for example
cinder-volume). The API acts as a consumer when RPC calls are request/response,
otherwise it acts as publisher only.
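The two-queue layout described above can be sketched with a tiny in-memory model. This is an illustration only, not Cinder or Kombu code; the class and the example routing keys simply mirror the 'NODE-TYPE' and 'NODE-TYPE.NODE-ID' bindings the text describes.

```python
# Minimal in-memory stand-in for a topic exchange, showing the two queues
# each service binds: one on the generic 'NODE-TYPE' key (shared by all
# workers of that type) and one on 'NODE-TYPE.NODE-ID' (this host only).
class TopicExchange:
    def __init__(self):
        self.bindings = {}  # routing key -> list of bound queues

    def bind(self, routing_key, queue):
        self.bindings.setdefault(routing_key, []).append(queue)

    def publish(self, routing_key, message):
        for queue in self.bindings.get(routing_key, []):
            queue.append(message)

exchange = TopicExchange()
shared, private = [], []
exchange.bind('cinder-volume', shared)            # any volume service
exchange.bind('cinder-volume.hostname', private)  # this host only
exchange.publish('cinder-volume', {'method': 'create_volume'})
exchange.publish('cinder-volume.hostname', {'method': 'delete_volume'})
```

A cast addressed to 'cinder-volume' can be served by any volume worker, while 'cinder-volume.hostname' pins the message to one specific node.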
Cinder RPC Mappings
-------------------
The figure below shows the internals of a message broker node (referred to as a
RabbitMQ node in the diagrams) when a single instance is deployed and shared in
an OpenStack cloud. Every Cinder component connects to the message broker and,
depending on its personality, may use the queue either as an Invoker (such as
API or Scheduler) or a Worker (such as Volume). Invokers and Workers do not
actually exist in the Cinder object model, but we are going to use them as an
abstraction for sake of clarity. An Invoker is a component that sends messages
in the queuing system via two operations: 1) rpc.call and 2) rpc.cast; a
Worker is a component that receives messages from the queuing system and
replies accordingly to rpc.call operations.
Figure 2 shows the following internal elements:
* Topic Publisher: a Topic Publisher comes to life when an rpc.call or an
  rpc.cast operation is executed; this object is instantiated and used to
  push a message to the queuing system. Every publisher connects always to
  the same topic-based exchange; its life-cycle is limited to the message
  delivery.
* Direct Consumer: a Direct Consumer comes to life if (and only if) an
  rpc.call operation is executed; this object is instantiated and used to
  receive a response message from the queuing system; every consumer
  connects to a unique direct-based exchange via a unique exclusive queue;
  its life-cycle is limited to the message delivery; the exchange and queue
  identifiers are determined by a UUID generator, and are marshaled in the
  message sent by the Topic Publisher (only rpc.call operations).
* Topic Consumer: a Topic Consumer comes to life as soon as a Worker is
  instantiated and exists throughout its life-cycle; this object is used to
  receive messages from the queue and it invokes the appropriate action as
  defined by the Worker role. A Topic Consumer connects to the same
  topic-based exchange either via a shared queue or via a unique exclusive
  queue. Every Worker has two topic consumers, one that is addressed only
  during rpc.cast operations (and it connects to a shared queue whose
  exchange key is 'topic') and the other that is addressed only during
  rpc.call operations (and it connects to a unique queue whose exchange key
  is 'topic.host').
* Direct Publisher: a Direct Publisher comes to life only during rpc.call
  operations and it is instantiated to return the message required by the
  request/response operation. The object connects to a direct-based
  exchange whose identity is dictated by the incoming message.
* Topic Exchange: The Exchange is a routing table that exists in the
  context of a virtual host (the multi-tenancy mechanism provided by Qpid
  or RabbitMQ); its type (such as topic vs. direct) determines the routing
  policy; a message broker node will have only one topic-based exchange for
  every topic in Cinder.
* Direct Exchange: this is a routing table that is created during rpc.call
  operations; there are many instances of this kind of exchange throughout
  the life-cycle of a message broker node, one for each rpc.call invoked.
* Queue Element: A Queue is a message bucket. Messages are kept in the
  queue until a Consumer (either Topic or Direct Consumer) connects to the
  queue and fetches it. Queues can be shared or can be exclusive. Queues
  whose routing key is 'topic' are shared amongst Workers of the same
  personality.
.. image:: /images/rpc/rabt.png
   :width: 60%
@@ -57,10 +117,17 @@ RPC Calls
The diagram below shows the message flow during an rpc.call operation:
1. a Topic Publisher is instantiated to send the message request to the
   queuing system; immediately before the publishing operation, a Direct
   Consumer is instantiated to wait for the response message.
2. once the message is dispatched by the exchange, it is fetched by the
   Topic Consumer dictated by the routing key (such as 'topic.host') and
   passed to the Worker in charge of the task.
3. once the task is completed, a Direct Publisher is allocated to send the
   response message to the queuing system.
4. once the message is dispatched by the exchange, it is fetched by the
   Direct Consumer dictated by the routing key (such as 'msg_id') and
   passed to the Invoker.
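The four steps above can be sketched as a toy simulation. This is illustrative only, not the real adapter: the queues are plain Python lists, and the worker is driven synchronously inside the call just to make the message flow visible.

```python
# Toy simulation of the rpc.call flow: a topic publisher sends the
# request, the worker consumes it and does the task, a direct publisher
# returns the reply on a per-call 'msg_id' queue, and the direct consumer
# fetches it.
import uuid

queues = {}  # routing key -> list of messages

def publish(routing_key, message):
    queues.setdefault(routing_key, []).append(message)

def worker_drain(topic):
    # Steps 2 and 3: the worker fetches the request and replies.
    for msg in queues.pop(topic, []):
        result = {'echoed': msg['args']}     # pretend to do the task
        publish(msg['msg_id'], result)       # direct publisher

def rpc_call(topic, method, args):
    msg_id = str(uuid.uuid4())
    queues[msg_id] = []                      # direct consumer's queue
    publish(topic, {'msg_id': msg_id,
                    'method': method, 'args': args})   # step 1
    worker_drain(topic)                      # worker side (steps 2-3)
    return queues[msg_id].pop(0)             # step 4: fetch the reply

print(rpc_call('cinder-volume.host1', 'ping', 'hello'))
```

An rpc.cast is the same flow minus the `msg_id` queue and steps 3-4: the publisher sends and never waits for a reply.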
.. image:: /images/rpc/flow1.png
   :width: 60%
@@ -72,8 +139,11 @@ RPC Casts
The diagram below shows the message flow during an rpc.cast operation:
1. A Topic Publisher is instantiated to send the message request to the
   queuing system.
2. Once the message is dispatched by the exchange, it is fetched by the
   Topic Consumer dictated by the routing key (such as 'topic') and passed
   to the Worker in charge of the task.
.. image:: /images/rpc/flow2.png
   :width: 60%
@@ -83,12 +153,22 @@ The diagram below the message flow during an rpc.cast operation:
AMQP Broker Load
----------------
At any given time the load of a message broker node running either Qpid or
RabbitMQ is a function of the following parameters:
* Throughput of API calls: the number of API calls (more precisely
  rpc.call ops) being served by the OpenStack cloud dictates the number of
  direct-based exchanges, related queues and direct consumers connected to
  them.
* Number of Workers: there is one queue shared amongst workers with the
  same personality; however there are as many exclusive queues as the
  number of workers; the number of workers dictates also the number of
  routing keys within the topic-based exchange, which is shared amongst all
  workers.
The figure below shows the status of a RabbitMQ node after Cinder components'
bootstrap in a test environment (phantom is hostname). Exchanges and queues
being created by Cinder components are:
* Exchanges

  1. cinder-scheduler_fanout (fanout exchange)
@@ -113,18 +193,30 @@ The figure below shows the status of a RabbitMQ node after Cinder components' bo
RabbitMQ Gotchas
----------------
Cinder uses Kombu to connect to the RabbitMQ environment. Kombu is a Python
library that in turn uses AMQPLib, a library that implements the standard
AMQP 0.8 at the time of writing. When using Kombu, Invokers and Workers need
the following parameters in order to instantiate a Connection object that
connects to the RabbitMQ server (please note that most of the following
material can be also found in the Kombu documentation; it has been summarized
and revised here for sake of clarity):
* Hostname: The hostname to the AMQP server.
* Userid: A valid username used to authenticate to the server.
* Password: The password used to authenticate to the server.
* Virtual_host: The name of the virtual host to work with. This virtual
  host must exist on the server, and the user must have access to it.
  Default is "/".
* Port: The port of the AMQP server. Default is 5672 (amqp).
The following parameters are default:
* Insist: insist on connecting to a server. In a configuration with
  multiple load-sharing servers, the Insist option tells the server that
  the client is insisting on a connection to the specified server. Default
  is False.
* Connect_timeout: the timeout in seconds before the client gives up
  connecting to the server. The default is no timeout.
* SSL: use SSL to connect to the server. The default is False.
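Collected in one place, the connection parameters and the defaults stated above look like this. The hostname and credentials are placeholders invented for the example, not live settings.

```python
# The Kombu connection parameters described above, as a plain mapping.
# Defaults follow the text; hostname/userid/password are illustrative.
connection_params = {
    'hostname': 'amqp.example.org',  # placeholder AMQP server
    'userid': 'cinder',              # placeholder username
    'password': 'secret',            # placeholder password
    'virtual_host': '/',             # default virtual host
    'port': 5672,                    # default AMQP port
    'insist': False,                 # default: do not insist on one server
    'connect_timeout': None,         # default: no timeout
    'ssl': False,                    # default: plain TCP
}
```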
More precisely Consumers need the following parameters:
@@ -132,23 +224,57 @@ More precisely Consumers need the following parameters:
* Connection: the above mentioned Connection object.
* Queue: name of the queue.
* Exchange: name of the exchange the queue binds to.
* Routing_key: the interpretation of the routing key depends on the value
  of the exchange_type attribute.
  * Direct exchange: if the routing key property of the message and the
    routing_key attribute of the queue are identical, then the message is
    forwarded to the queue.
  * Fanout exchange: messages are forwarded to the queues bound to the
    exchange, even if the binding does not have a key.
  * Topic exchange: if the routing key property of the message matches the
    routing key of the queue according to a primitive pattern matching
    scheme, then the message is forwarded to the queue. The message routing
    key then consists of words separated by dots (".", like domain names),
    and two special characters are available; star ("*") and hash ("#").
    The star matches any word, and the hash matches zero or more words. For
    example "*.stock.#" matches the routing keys "usd.stock" and
    "eur.stock.db" but not "stock.nasdaq".
* Durable: this flag determines the durability of both exchanges and
  queues; durable exchanges and queues remain active when a RabbitMQ server
  restarts. Non-durable exchanges/queues (transient exchanges/queues) are
  purged when a server restarts. It is worth noting that AMQP specifies
  that durable queues cannot bind to transient exchanges. Default is True.
* Auto_delete: if set, the exchange is deleted when all queues have
  finished using it. Default is False.
* Exclusive: exclusive queues (such as non-shared) may only be consumed
  from by the current connection. When exclusive is on, this also implies
  auto_delete. Default is False.
* Exchange_type: AMQP defines several default exchange types (routing
  algorithms) that cover most of the common messaging use cases.
* Auto_ack: acknowledgement is handled automatically once messages are
  received. By default auto_ack is set to False, and the receiver is
  required to manually handle acknowledgment.
* No_ack: it disables acknowledgement on the server-side. This is different
  from auto_ack in that acknowledgement is turned off altogether. This
  functionality increases performance but at the cost of reliability.
  Messages can get lost if a client dies before it can deliver them to the
  application.
* Auto_declare: if this is True and the exchange name is set, the exchange
  will be automatically declared at instantiation. Auto declare is on by
  default.

Publishers specify most of the parameters of Consumers (such as they do not
specify a queue name), but they can also specify the following:

* Delivery_mode: the default delivery mode used for messages. The value is
  an integer. The following delivery modes are supported by RabbitMQ:
  * 1 or "transient": the message is transient. Which means it is
    stored in memory only, and is lost if the server dies or restarts.
  * 2 or "persistent": the message is persistent. Which means the
    message is stored both in-memory, and on disk, and therefore
    preserved if the server dies or restarts.
The default value is 2 (persistent). During a send operation, Publishers can
override the delivery mode of messages so that, for example, transient messages
can be sent over a durable queue.


@@ -33,17 +33,17 @@ delays in the case that there is only a single green thread::
    greenthread.sleep(0)
In current code, time.sleep(0) does the same thing as greenthread.sleep(0) if
the time module is patched through eventlet.monkey_patch(). To be explicit, we
recommend contributors use ``greenthread.sleep()`` instead of ``time.sleep()``.
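The effect of a zero-length sleep as a cooperative yield can be demonstrated with the standard library's asyncio, used here purely as an analogy for eventlet green threads (so the sketch runs without eventlet installed): ``asyncio.sleep(0)`` suspends the current task so other ready tasks get a turn, just as ``greenthread.sleep(0)`` yields to other green threads.

```python
# Analogy only: asyncio tasks standing in for eventlet green threads.
# sleep(0) suspends the current task, letting the other runnable task
# proceed, so the two workers interleave instead of running to completion.
import asyncio

order = []

async def worker(name, count):
    for _ in range(count):
        order.append(name)
        await asyncio.sleep(0)   # yield control, like greenthread.sleep(0)

async def main():
    await asyncio.gather(worker('a', 2), worker('b', 2))

asyncio.run(main())
print(order)   # the two workers interleave
```

Without the yield, worker 'a' would append both of its entries before 'b' ever ran; with it, the scheduler alternates between the two.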
MySQL access and eventlet
-------------------------
There are some MySQL DB API drivers for oslo.db, like `PyMySQL`_, MySQL-python
etc. PyMySQL is the default MySQL DB API driver for oslo.db, and it works well
with eventlet. MySQL-python uses an external C library for accessing the MySQL
database. Since eventlet cannot use monkey-patching to intercept blocking calls
in a C library, queries to the MySQL database using libraries like MySQL-python
will block the main thread of a service.
The Diablo release contained a thread-pooling implementation that did not
block, but this implementation resulted in a `bug`_ and was removed.


@@ -14,19 +14,20 @@ The Block Storage API and scheduler services typically run on the controller
nodes. Depending upon the drivers used, the volume service can run
on controller nodes, compute nodes, or standalone storage nodes.
For more information, see the `Configuration
Reference <https://docs.openstack.org/cinder/latest/configuration/block-storage/volume-drivers.html>`_.
Prerequisites
~~~~~~~~~~~~~
This documentation specifically covers the installation of the Cinder Block
Storage service. Before following this guide you will need to prepare your
OpenStack environment using the instructions in the
`OpenStack Installation Guide <https://docs.openstack.org/install-guide/>`_.
Once able to 'Launch an instance' in your OpenStack environment, follow the
instructions below to add Cinder to the base environment.
Adding Cinder to your OpenStack Environment


@@ -142,7 +142,7 @@ deps =
    -r{toxinidir}/requirements.txt
    -r{toxinidir}/doc/requirements.txt
commands =
    doc8
    rm -fr doc/build doc/source/contributor/api/ .autogenerated
    sphinx-build -W -b html doc/source doc/build/html
    rm -rf api-ref/build
@@ -192,6 +192,11 @@ max-complexity=30
local-check-factory = cinder.hacking.checks.factory
import_exceptions = cinder.i18n
[doc8]
ignore-path=.tox,*.egg-info,doc/src/api,doc/source/drivers.rst,doc/build,.eggs/*/EGG-INFO/*.txt,doc/source/configuration/tables,./*.txt
extension=.txt,.rst,.inc
[testenv:lower-constraints]
basepython = python3
deps =