Unify usage of project name in doc to 'manila'

Change project name in doc according to style rules in
http://docs.openstack.org/contributor-guide/writing-style/openstack-components.html

Closes-Bug: #1537808

Change-Id: I849b0ec34a1bfc01263f35a9ef0128100945867e
Signed-off-by: Danny Al-Gaaf <danny.al-gaaf@bisect.de>
Danny Al-Gaaf 2016-01-25 17:12:35 +01:00
parent ba159a9ea3
commit 84e953ca75
27 changed files with 156 additions and 156 deletions


@ -131,11 +131,11 @@ contract, and should be part of a microversion.
The reason why we are so strict on contract is that we'd like
application writers to be able to know, for sure, what the contract is
at every microversion in Manila. If they do not, they will need to write
at every microversion in manila. If they do not, they will need to write
conditional code in their application to handle ambiguities.
When in doubt, consider application authors. If it would work with no
client side changes on both Manila versions, you probably don't need a
client side changes on both manila versions, you probably don't need a
microversion. If, on the other hand, there is any ambiguity, a
microversion is probably needed.
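
As a concrete illustration, a client pins itself to a specific microversion
with the ``X-OpenStack-Manila-API-Version`` header; the endpoint URL and
token handling below are placeholders::

    $ curl -s http://controller:8786/v2/$TENANT_ID/shares \
        -H "X-Auth-Token: $TOKEN" \
        -H "X-OpenStack-Manila-API-Version: 2.7"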


@ -19,9 +19,9 @@
Manila System Architecture
==========================
The Manila Shared Filesystem Management Service is intended to be run on one or more nodes.
The Shared File Systems service is intended to be run on one or more nodes.
Manila uses an SQL-based central database that is shared by all Manila services in the system. The amount and depth of the data fit into an SQL database quite well. For small deployments this seems like an optimal solution. For larger deployments, and especially if security is a concern, Manila will be moving towards multiple data stores with some kind of aggregation system.
Manila uses an SQL-based central database that is shared by all manila services in the system. The amount and depth of the data fit into an SQL database quite well. For small deployments this seems like an optimal solution. For larger deployments, and especially if security is a concern, manila will be moving towards multiple data stores with some kind of aggregation system.
Components
----------


@ -9,7 +9,7 @@ specs) are not exposed to users -- only Administrators.
Share Types
-----------
Refer to the Manila client command-line help for information on how to
Refer to the manila client command-line help for information on how to
create a share type and set "extra-spec" key/value pairs for a share type.
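
For example, a hedged sketch of that CLI flow (the type name and extra-spec
value are illustrative)::

    $ manila type-create my_type True
    $ manila type-key my_type set thin_provisioning='<is> True'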
Extra-Specs
@ -82,7 +82,7 @@ be created.
If an array can technically support both thin and thick provisioning in a
pool, the driver still needs to programmatically determine which to use.
This should be done by configuring one pool for thin and another pool for
thick. So, a Manila pool will always report thin_provisioning as True or
thick. So, a manila pool will always report thin_provisioning as True or
False. Added in Liberty.
* `qos` - indicates that a backend/pool can provide shares using some


@ -19,14 +19,14 @@ Setting Up a Development Environment
====================================
This page describes how to set up a working Python development
environment that can be used in developing Manila on Ubuntu, Fedora or
environment that can be used in developing manila on Ubuntu, Fedora or
Mac OS X. These instructions assume you're already familiar with
git. Refer to GettingTheCode_ for additional information.
.. _GettingTheCode: http://wiki.openstack.org/GettingTheCode
Following these instructions will allow you to run the Manila unit
tests. If you want to be able to run Manila (i.e., create NFS/CIFS shares),
Following these instructions will allow you to run the manila unit
tests. If you want to be able to run manila (i.e., create NFS/CIFS shares),
you will also need to install dependent projects: Nova, Neutron, Cinder and Glance.
For this purpose the 'devstack' project can be used (a documented shell script to build complete OpenStack development environments).
@ -38,7 +38,7 @@ Virtual environments
Manila development uses `virtualenv <http://pypi.python.org/pypi/virtualenv>`__ to track and manage Python
dependencies while in development and testing. This allows you to
install all of the Python package dependencies in a virtual
environment or "virtualenv" (a special subdirectory of your Manila
environment or "virtualenv" (a special subdirectory of your manila
directory), instead of installing the packages at the system level.
.. note::
@ -51,7 +51,7 @@ Linux Systems
.. note::
This section is tested for Manila on Ubuntu (12.04-64) and
This section is tested for manila on Ubuntu (12.04-64) and
Fedora-based (RHEL 6.1) distributions. Feel free to add notes and
change according to your experiences or operating system.
@ -82,7 +82,7 @@ MacPorts package for OpenSSL, you will see an error when running
``manila.tests.auth_unittest.AuthTestCase.test_209_can_generate_x509``.
The stock version of OpenSSL that ships with Mac OS X 10.6 (OpenSSL 0.9.8l)
or Mac OS X 10.7 (OpenSSL 0.9.8r) works fine with Manila.
or Mac OS X 10.7 (OpenSSL 0.9.8r) works fine with manila.
Getting the code
@ -126,7 +126,7 @@ If all goes well, you should get a message something like this::
Manila development environment setup is complete.
To activate the Manila virtualenv for the extent of your current shell session
To activate the manila virtualenv for the extent of your current shell session
you can run::
$ source .venv/bin/activate
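
Alternatively, the ``run_tests.sh`` helper described later in this guide can
manage the virtualenv for you::

    $ ./run_tests.sh -V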


@ -17,16 +17,16 @@
Manila minimum requirements and features since Mitaka
=====================================================
In order for a driver to be accepted into the Manila code base, there are certain
In order for a driver to be accepted into the manila code base, there are certain
minimum requirements and features that must be met, in order to ensure
interoperability and standardized Manila functionality among cloud providers.
interoperability and standardized manila functionality among cloud providers.
At least one driver mode (:term:`DHSS` true/false)
--------------------------------------------------
Driver modes determine whether the driver is managing network resources
(:term:`DHSS` = true) in an automated way, in order to segregate tenants and
private networks by making use of Manila Share Networks, or if it is up to the
private networks by making use of manila Share Networks, or if it is up to the
administrator to manually configure all networks (:term:`DHSS` = false) and be
responsible for segregation, if that is desired. At least one driver mode must
be supported. In :term:`DHSS` = true mode, Share Server entities are used, so
@ -71,8 +71,8 @@ increased.
Capabilities
------------
In order for Manila to function according to the driver being used, the
driver must provide a set of information to Manila, known as capabilities, as
In order for manila to function according to the driver being used, the
driver must provide a set of information to manila, known as capabilities, as
follows:
- share_backend_name: a name for the backend;
@ -88,7 +88,7 @@ follows:
used.
Certain features, if supported by drivers, need to be reported in order to
function correctly in Manila, such as:
function correctly in manila, such as:
- dedupe: whether the backend supports deduplication;
- compression: whether the backend supports compressed shares;
@ -128,7 +128,7 @@ Documentation
Drivers submitted must provide and maintain related documentation on
openstack-manuals, containing instructions on how to properly install and
configure. The intended audience for this manual is cloud operators and
administrators. Also, driver maintainers must update the Manila share features
administrators. Also, driver maintainers must update the manila share features
support mapping documentation found at
http://docs.openstack.org/developer/manila/devref/share_back_ends_feature_support_mapping.html
@ -136,15 +136,15 @@ http://docs.openstack.org/developer/manila/devref/share_back_ends_feature_suppor
Manila optional requirements and features since Mitaka
======================================================
In addition to the minimum required features supported by Manila, other optional
features can be supported by drivers as they are already supported in Manila
In addition to the minimum required features supported by manila, other optional
features can be supported by drivers as they are already supported in manila
and can be accessed through the API.
Snapshots
---------
Share snapshots allow data from a particular point in time to be
saved in order to be used later. In the Manila API, share snapshots taken can only
saved in order to be used later. In the manila API, share snapshots taken can only
be restored by creating new shares from them, thus the original share remains
unaffected. If Snapshots are supported by drivers, they must be
crash-consistent.
@ -154,9 +154,9 @@ Managing/Unmanaging shares
If :term:`DHSS` = false mode is used, then drivers may implement a function
that supports reading existing shares in the backend that were not created by
Manila. After the previously existing share is registered in Manila, it is
completely controlled by Manila and should not be handled externally anymore.
Additionally, a function that de-registers such shares from Manila but does
manila. After the previously existing share is registered in manila, it is
completely controlled by manila and should not be handled externally anymore.
Additionally, a function that de-registers such shares from manila but does
not delete them from the backend may also be supported.
Share shrinking
@ -169,7 +169,7 @@ is compromised.
Share ensuring
--------------
In some situations, such as when the driver is restarted, Manila attempts to
In some situations, such as when the driver is restarted, manila attempts to
perform maintenance on created shares, for the purpose of ensuring previously
created shares are available and being serviced correctly. The driver can
implement this function by checking shares' status and performing maintenance
@ -198,4 +198,4 @@ Shares can be created within Consistency Groups in order to guarantee snapshot
consistency of multiple shares. In order to make use of this feature, driver
vendors must report this capability and implement its functions to work
according to the backend, so the feature can be properly invoked through
the Manila API.
the manila API.


@ -17,12 +17,12 @@
Isilon Driver
=============
The EMC Manila driver framework (EMCShareDriver) utilizes EMC storage products
to provide shared filesystems to OpenStack. The EMC Manila driver is a plugin
The EMC manila driver framework (EMCShareDriver) utilizes EMC storage products
to provide shared filesystems to OpenStack. The EMC manila driver is a plugin
based driver which is designed to use different plugins to manage different EMC
storage products.
The Isilon manila driver is a plugin for the EMC Manila driver framework which
The Isilon manila driver is a plugin for the EMC manila driver framework which
allows manila to interface with an Isilon backend to provide a shared
filesystem. The EMC driver framework with the Isilon plugin is referred to as
the "Isilon Driver" in this document.


@ -17,8 +17,8 @@
VNX Driver
==========
EMC Manila driver framework (EMCShareDriver) utilizes the EMC storage products
to provide the shared filesystems to OpenStack. The EMC Manila driver is a
EMC manila driver framework (EMCShareDriver) utilizes the EMC storage products
to provide the shared filesystems to OpenStack. The EMC manila driver is a
plugin based driver which is designed to use different plugins to manage
different EMC storage products.
@ -27,7 +27,7 @@ EMC driver framework with VNX plugin is referred to as VNX driver in this
document.
This driver performs the operations on VNX by XMLAPI and the File command line.
Each backend manages one Data Mover of VNX. Multiple Manila backends need to
Each backend manages one Data Mover of VNX. Multiple manila backends need to
be configured to manage multiple Data Movers.
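
A sketch of such a multi-backend layout (the section names and Data Mover
values are illustrative; the ``vnx_*`` option names are assumptions to be
verified against the driver's configuration reference)::

    [DEFAULT]
    enabled_share_backends = vnxdm2,vnxdm3

    [vnxdm2]
    share_driver = manila.share.drivers.emc.driver.EMCShareDriver
    emc_share_backend = vnx
    vnx_server_container = server_2

    [vnxdm3]
    share_driver = manila.share.drivers.emc.driver.EMCShareDriver
    emc_share_backend = vnx
    vnx_server_container = server_3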
Requirements
@ -245,10 +245,10 @@ The VNX driver has the following restrictions:
- VNX has limitations on the overall numbers of Virtual Data Movers,
filesystems, shares, checkpoints, etc. A Virtual Data Mover (VDM) is created
by the VNX driver on the VNX to serve as the Manila share server. Similarly,
by the VNX driver on the VNX to serve as the manila share server. Similarly,
filesystem is created, mounted, and exported from the VDM over CIFS or NFS
protocol to serve as the Manila share. The VNX checkpoint serves as the
Manila share snapshot. Refer to the `NAS Support Matrix` document on [EMC
protocol to serve as the manila share. The VNX checkpoint serves as the
manila share snapshot. Refer to the `NAS Support Matrix` document on [EMC
support site](http://support.emc.com) for the limitations and configure the
quotas accordingly.


@ -10,16 +10,16 @@ ship APIs that are experimental in nature, are expected to change at any
time, and could even be removed entirely without a typical deprecation
period.
In conjunction with microversions, Manila has added a facility for marking
In conjunction with microversions, manila has added a facility for marking
individual REST APIs as experimental. To call an experimental API, clients
must include a specific HTTP header, ``X-OpenStack-Manila-API-Experimental``,
with a value of ``True``. If a user calls an experimental API without
including the experimental header, the server would respond with ``HTTP/404``.
This forces the client to acknowledge the experimental status of the API and
prevents anyone from building an application around a Manila feature without
prevents anyone from building an application around a manila feature without
realizing the feature could change significantly or even disappear.
On the other hand, if a request is made to a non-experimental Manila API with
On the other hand, if a request is made to a non-experimental manila API with
``X-OpenStack-Manila-API-Experimental: True``, the server would respond as if
the header had not been included. This is a convenience mechanism, as it
allows the client to specify both the requested API version as well as the
@ -50,7 +50,7 @@ flag. The maturation period can vary between features, but experimental is NOT
a stable state, and an experimental feature should not be left in that state
any longer than necessary.
Because experimental APIs have no conventional deprecation period, the Manila
Because experimental APIs have no conventional deprecation period, the manila
core team may optionally choose to remove any experimental versions of an API
at the same time that a microversioned stable version is added.
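
For illustration, an experimental call carries both headers discussed above
(the URL is a placeholder)::

    $ curl -s http://controller:8786/v2/$TENANT_ID/consistency-groups \
        -H "X-Auth-Token: $TOKEN" \
        -H "X-OpenStack-Manila-API-Version: 2.4" \
        -H "X-OpenStack-Manila-API-Experimental: True"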


@ -46,19 +46,19 @@ Note that Ganesha's concept of storage backend modules is called FSAL ("File
System Abstraction Layer"). The FSAL the driver intends to leverage needs to be
enabled in Ganesha config.
Beyond that (with default Manila config) the following line needs to be
Beyond that (with default manila config) the following line needs to be
present in the Ganesha config file (which defaults to
/etc/ganesha/ganesha.conf):
``%include /etc/ganesha/export.d/INDEX.conf``
The above paths can be customized through Manila configuration as follows:
The above paths can be customized through manila configuration as follows:
- `ganesha_config_dir` = toplevel directory for Ganesha configuration,
defaults to /etc/ganesha
- `ganesha_config_path` = location of the Ganesha config file, defaults
to ganesha.conf in `ganesha_config_dir`
- `ganesha_export_dir` = directory where Manila-generated config bits are
- `ganesha_export_dir` = directory where manila-generated config bits are
stored, defaults to `export.d` in `ganesha_config_dir`. The following
line is required to be included (with value expanded) in the Ganesha
config file (at `ganesha_config_path`):
@ -66,10 +66,10 @@ The above paths can be customized through Manila configuration as follows:
``%include <ganesha_export_dir>/INDEX.conf``
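
Restating those options in a backend section of ``manila.conf`` (the values
shown are simply the defaults spelled out)::

    [ganeshanfs]
    ganesha_config_dir = /etc/ganesha
    ganesha_config_path = /etc/ganesha/ganesha.conf
    ganesha_export_dir = /etc/ganesha/export.d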
Further Ganesha-related Manila configuration
Further Ganesha-related manila configuration
--------------------------------------------
There are further Ganesha-related options in Manila (which affect the
There are further Ganesha-related options in manila (which affect the
behavior of Ganesha, but do not affect how to set up the Ganesha service
itself).
@ -126,7 +126,7 @@ template*. They are syntactically either Ganesha export blocks,
or isomorphic JSON (as Ganesha export blocks are by-and-large
equivalent to arrayless JSON), with two special placeholders
for values: ``@config`` and ``@runtime``. ``@config`` means a
value that shall be filled from Manila config, and ``@runtime``
value that shall be filled from manila config, and ``@runtime``
means a value that's filled at runtime with dynamic data.
As an example, we show the library's defaults in JSON format
@ -168,7 +168,7 @@ method as follows:
either by
- using a predefined export block dict stored in code
- loading a predefined export block from the Manila source tree
- loading a predefined export block from the manila source tree
- loading an export block from a user-exposed location (to allow
user configuration)


@ -17,8 +17,8 @@
Generic approach for share provisioning
=======================================
The Manila Shared Filesystem Management Service can be configured to use Nova
VMs and Cinder volumes. There are two modules that handle them in Manila:
The Shared File Systems service can be configured to use Nova
VMs and Cinder volumes. There are two modules that handle them in manila:
1) 'service_instance' module creates VMs in Nova with predefined image called
service image. This module can be used by any backend driver for provisioning
of service VMs to be able to separate share resources among tenants.


@ -5,7 +5,7 @@ Manila uses the `Gerrit`_ tool to review proposed code changes. The review site
is http://review.openstack.org.
Gerrit is a complete replacement for Github pull requests. `All Github pull
requests to the Manila repository will be ignored`.
requests to the manila repository will be ignored`.
See the `Development Workflow`_ for more detailed documentation on how
to work with Gerrit.


@ -18,7 +18,7 @@ GlusterFS driver
================
GlusterFS driver uses GlusterFS, an open source distributed file system,
as the storage backend for serving file shares to Manila clients.
as the storage backend for serving file shares to manila clients.
Supported shared filesystems
----------------------------
@ -45,15 +45,15 @@ Requirements
- Install glusterfs-server package, version >= 3.5.x, on the storage backend.
- Install NFS-Ganesha, version >=2.1, if using NFS-Ganesha as the NFS server
for the GlusterFS backend.
- Install glusterfs and glusterfs-fuse package, version >=3.5.x, on the Manila
- Install glusterfs and glusterfs-fuse package, version >=3.5.x, on the manila
host.
- Establish network connection between the Manila host and the storage backend.
- Establish network connection between the manila host and the storage backend.
Manila driver configuration setting
-----------------------------------
The following parameters in Manila's configuration file need to be
The following parameters in manila's configuration file need to be
set:
- `share_driver` = manila.share.drivers.glusterfs.GlusterfsShareDriver
@ -63,11 +63,11 @@ The following configuration parameters are optional:
- `glusterfs_nfs_server_type` = <NFS server type used by the GlusterFS
backend, `Gluster` or `Ganesha`. `Gluster` is the default type>
- `glusterfs_share_layout` = <share layout used>; cf. :ref:`glusterfs_layouts`
- `glusterfs_path_to_private_key` = <path to Manila host's private key file>
- `glusterfs_path_to_private_key` = <path to manila host's private key file>
- `glusterfs_server_password` = <password of remote GlusterFS server machine>
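
A minimal backend section combining the parameters above might look like the
following sketch (the volume address and paths are illustrative)::

    [glusterfsnfs]
    share_driver = manila.share.drivers.glusterfs.GlusterfsShareDriver
    glusterfs_target = remoteuser@volserver:/manila-vol
    glusterfs_nfs_server_type = Gluster
    glusterfs_path_to_private_key = /etc/manila/ssh/id_rsa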
If Ganesha NFS server is used (``glusterfs_nfs_server_type = Ganesha``),
then by default the Ganesha server is supposed to run on the Manila host
then by default the Ganesha server is supposed to run on the manila host
and is managed by local commands. If it's deployed somewhere else, then
it's managed via ssh, which can be configured by the following parameters:
@ -108,14 +108,14 @@ backends for shares. Currently there are two layouts implemented:
- `glusterfs_target`: address of the volume that hosts the directories.
If it's of the format `<glustervolserver>:/<glustervolid>`, then the
Manila host is expected to be part of the GlusterFS cluster of the volume
manila host is expected to be part of the GlusterFS cluster of the volume
and GlusterFS management happens through locally calling the ``gluster``
utility. If it's of the format `<username>@<glustervolserver>:/<glustervolid>`,
then we ssh to `<username>@<glustervolserver>` to execute ``gluster``
(`<username>` is supposed to have administrative privileges on
`<glustervolserver>`).
- `glusterfs_mount_point_base` = <base path of GlusterFS volume mounted on
Manila host> (optional; defaults to *$state_path*\ ``/mnt``, where
manila host> (optional; defaults to *$state_path*\ ``/mnt``, where
*$state_path* defaults to ``/var/lib/manila``)
Limitations:
@ -160,12 +160,12 @@ the same time.
There is another caveat with ``nfs.export-volumes``: setting it to ``on``
without enough care is a security risk, as the default access control
for the volume exports is "allow all". For this reason, while the
``nfs.export-volumes = off`` setting is automatically set by Manila
``nfs.export-volumes = off`` setting is automatically set by manila
for all other share backend configurations, ``nfs.export-volumes = on``
is *not* set by Manila in case of a Gluster NFS with volume layout
is *not* set by manila in case of a Gluster NFS with volume layout
setup. It's left to the GlusterFS admin to make this setting in conjunction
with the associated safeguards (that is, for those volumes of the cluster
which are not used by Manila, access restrictions have to be manually
which are not used by manila, access restrictions have to be manually
configured through the ``nfs.rpc-auth-{allow,reject}`` options).
Known Restrictions
@ -177,7 +177,7 @@ Known Restrictions
shares can be accessed by NFSv3 and v4 protocols. However, if Gluster NFS is
used by the GlusterFS backend, then the shares can only be accessed by NFSv3
protocol.
- All Manila shares, which map to subdirectories within a GlusterFS volume, are
- All manila shares, which map to subdirectories within a GlusterFS volume, are
currently created within a single GlusterFS volume of a GlusterFS storage
pool.
- The driver does not provide read-only access level for shares.


@ -18,9 +18,9 @@ GlusterFS Native driver
=======================
GlusterFS Native driver uses GlusterFS, an open source distributed file system,
as the storage backend for serving file shares to Manila clients.
as the storage backend for serving file shares to manila clients.
A Manila share is a GlusterFS volume. This driver uses a flat-network
A manila share is a GlusterFS volume. This driver uses a flat-network
(share-server-less) model. Instances directly talk with the GlusterFS backend
storage pool. The instances use 'glusterfs' protocol to mount the GlusterFS
shares. Access to each share is allowed via TLS Certificates. Only the instance
@ -30,7 +30,7 @@ hence use the share. Currently only 'rw' access is supported.
Network Approach
----------------
L3 connectivity between the storage backend and the host running the Manila
L3 connectivity between the storage backend and the host running the manila
share service should exist.
Supported shared filesystems
@ -60,16 +60,16 @@ Requirements
------------
- Install glusterfs-server package, version >= 3.6.x, on the storage backend.
- Install glusterfs and glusterfs-fuse package, version >=3.6.x, on the Manila
- Install glusterfs and glusterfs-fuse package, version >=3.6.x, on the manila
host.
- Establish network connection between the Manila host and the storage backend.
- Establish network connection between the manila host and the storage backend.
.. _gluster_native_manila_conf:
Manila driver configuration setting
-----------------------------------
The following parameters in Manila's configuration file need to be set:
The following parameters in manila's configuration file need to be set:
- `share_driver` =
manila.share.drivers.glusterfs_native.GlusterfsNativeShareDriver
@ -81,7 +81,7 @@ The following parameters in Manila's configuration file need to be set:
The optional ``<remoteuser>@`` part of the server URI indicates SSH
access for cluster management (see related optional parameters below).
If it is not given, direct command line management is performed (i.e.
the Manila host is assumed to be part of the GlusterFS cluster the server
the manila host is assumed to be part of the GlusterFS cluster the server
belongs to).
- `glusterfs_volume_pattern` = Regular expression template
used to filter GlusterFS volumes for share creation. The regex template can
@ -95,8 +95,8 @@ The following parameters in Manila's configuration file need to be set:
The following configuration parameters are optional:
- `glusterfs_mount_point_base` = <base path of GlusterFS volume mounted on
Manila host>
- `glusterfs_path_to_private_key` = <path to Manila host's private key file>
manila host>
- `glusterfs_path_to_private_key` = <path to manila host's private key file>
- `glusterfs_server_password` = <password of remote GlusterFS server machine>
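
As a sketch of such a configuration (the server URI and volume naming scheme
are illustrative; ``glusterfs_servers`` is assumed to be the option that
takes the server URIs described above)::

    [glusternative]
    share_driver = manila.share.drivers.glusterfs_native.GlusterfsNativeShareDriver
    glusterfs_servers = remoteuser@volserver
    glusterfs_volume_pattern = manila-share-\d+-#{size}G$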
Host and backend configuration
@ -107,18 +107,18 @@ Host and backend configuration
as described in http://www.gluster.org/community/documentation/index.php/SSL.
(Enabling SSL/TLS for the management path is also possible but not
recommended currently.)
- The Manila host should also be configured for GlusterFS SSL/TLS (i.e.
- The manila host should also be configured for GlusterFS SSL/TLS (i.e.
`/etc/ssl/glusterfs.{pem,key,ca}` files have to be deployed as the above
document specifies).
- There is a further requirement for the CAs used: the set of CAs involved
should be consensual, i.e. `/etc/ssl/glusterfs.ca` should be identical
across all the servers and the Manila host.
across all the servers and the manila host.
- There is a further requirement for the common names (CNs) of the
certificates used: the certificates of the servers should have a common
name starting with `glusterfs-server`, and the certificate of the host
should have a common name starting with `manila-host`.
- To support snapshots, bricks that constitute the GlusterFS volumes used
by Manila should be thinly provisioned LVM ones (cf.
by manila should be thinly provisioned LVM ones (cf.
https://gluster.readthedocs.org/en/latest/Administrator%20Guide/Managing%20Snapshots/).
Known Restrictions
@ -127,15 +127,15 @@ Known Restrictions
- GlusterFS volumes are not created on demand. A pre-existing set of
GlusterFS volumes should be supplied by the GlusterFS cluster(s), conforming
to the naming convention encoded by ``glusterfs_volume_pattern``. However,
the GlusterFS endpoint is allowed to extend this set any time (so Manila
the GlusterFS endpoint is allowed to extend this set any time (so manila
and GlusterFS endpoints are expected to communicate volume supply/demand
out-of-band). ``glusterfs_volume_pattern`` can include a size hint (with
``#{size}`` syntax), which, if present, requires the GlusterFS end to
indicate the size of the shares in GB in the name. (On share creation,
Manila picks volumes *at least* as big as the requested one.)
manila picks volumes *at least* as big as the requested one.)
- Certificate setup (aka trust setup) between instance and storage backend is
out of band of Manila.
- For Manila to use GlusterFS volumes, the name of the trashcan directory in
out of band of manila.
- For manila to use GlusterFS volumes, the name of the trashcan directory in
GlusterFS volumes must not be changed from the default.
The :mod:`manila.share.drivers.glusterfs_native.GlusterfsNativeShareDriver` Module


@ -19,7 +19,7 @@ GPFS Driver
GPFS driver uses IBM General Parallel File System (GPFS), a high-performance,
clustered file system, developed by IBM, as the storage backend for serving
file shares to the Manila clients.
file shares to the manila clients.
Supported shared filesystems
----------------------------
@ -48,19 +48,19 @@ Requirements
- Install Kernel NFS or Ganesha NFS server on the storage backend servers.
- If using Ganesha NFS, currently NFS Ganesha v1.5 and v2.0 are supported.
- Create a GPFS cluster and create a filesystem on the cluster, which will be
used to create the Manila shares.
used to create the manila shares.
- Enable quotas for the GPFS file system (`mmchfs -Q yes`).
- Establish network connection between the Manila host and the storage backend.
- Establish network connection between the manila host and the storage backend.
Manila driver configuration setting
-----------------------------------
The following parameters in the Manila configuration file need to be set:
The following parameters in the manila configuration file need to be set:
- `share_driver` = manila.share.drivers.ibm.gpfs.GPFSShareDriver
- `gpfs_share_export_ip` = <IP to be added to GPFS export string>
- If the backend GPFS server is not running on the Manila host machine, the
- If the backend GPFS server is not running on the manila host machine, the
following options are required to SSH to the remote GPFS backend server:
- `gpfs_ssh_login` = <GPFS server SSH login name>
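
A hedged sketch of such a backend section for a remote GPFS server (host
addresses and paths are illustrative)::

    [gpfs]
    share_driver = manila.share.drivers.ibm.gpfs.GPFSShareDriver
    gpfs_share_export_ip = 10.0.0.24
    gpfs_ssh_login = gpfsadmin
    gpfs_ssh_private_key = /etc/manila/ssh/gpfs_id_rsa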
@ -89,7 +89,7 @@ Known Restrictions
instead works over a flat network where the tenants share a network.
- While using remote GPFS node, with Ganesha NFS, 'gpfs_ssh_private_key' for
remote login to the GPFS node must be specified and there must be a
passwordless authentication already set up between the Manila share service
passwordless authentication already set up between the manila share service
and the remote GPFS node.
The :mod:`manila.share.drivers.ibm.gpfs` Module


@ -17,7 +17,7 @@
HDFS native driver
==================
The HDFS native driver is a plugin for the OpenStack Manila service, which uses
The HDFS native driver is a plugin for the OpenStack manila service, which uses
Hadoop distributed file system (HDFS), a distributed file system designed to hold
very large amounts of data, and provide high-throughput access to the data.
@ -29,7 +29,7 @@ support access control of multiple users and groups.
Network configuration
---------------------
The storage backend and Manila hosts should be in a flat network; otherwise, L3
The storage backend and manila hosts should be in a flat network; otherwise, L3
connectivity between them should exist.
Supported shared filesystems
@ -56,7 +56,7 @@ Requirements
- Install HDFS package, version >= 2.4.x, on the storage backend
- To enable access control, the HDFS file system must have ACLs enabled
- Establish network connection between the Manila host and storage backend
- Establish network connection between the manila host and storage backend
Manila driver configuration
---------------------------
@ -69,7 +69,7 @@ Manila driver configuration
- `hdfs_ssh_name` = HDFS namenode SSH login name
- `hdfs_ssh_pw` = HDFS namenode SSH login password; this parameter is not
necessary if the following `hdfs_ssh_private_key` is configured
- `hdfs_ssh_private_key` = Path to the HDFS namenode private key to ssh login
- `hdfs_ssh_private_key` = Path to the HDFS namenode private key to ssh login
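
Putting these together, a backend section might look like the following
sketch (``hdfs_namenode_ip`` is an assumption; verify it against the
driver's option list)::

    [hdfsnative]
    share_driver = manila.share.drivers.hdfs.hdfs_native.HDFSNativeShareDriver
    hdfs_namenode_ip = 10.0.0.30
    hdfs_ssh_name = hdfs
    hdfs_ssh_private_key = /etc/manila/ssh/hdfs_id_rsa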
Known Restrictions
------------------


@ -15,19 +15,19 @@
under the License.
==========================
Hitachi HNAS Manila Driver
Hitachi HNAS manila driver
==========================
------------------
Driver Version 1.0
------------------
This OpenStack Manila driver provides support for Hitachi Data Systems (HDS)
This OpenStack manila driver provides support for Hitachi Data Systems (HDS)
NAS Platform Models 3080, 3090, 4040, 4060, 4080 and 4100.
HNAS Storage Requirements
'''''''''''''''''''''''''
Before using Hitachi HNAS Manila driver, use the HNAS configuration and
Before using Hitachi HNAS manila driver, use the HNAS configuration and
management utilities, such as GUI (SMU) or SSC CLI to create a storage pool
(span) and an EVS. Also, check that HNAS/SMU software version is
12.2 or higher.
@ -35,7 +35,7 @@ management utilities, such as GUI (SMU) or SSC CLI to create a storage pool
Supported Operations
''''''''''''''''''''
The following operations are supported in this version of the Manila HNAS driver:
The following operations are supported in this version of the manila HNAS driver:
- Create and delete NFS shares;
- Extend NFS shares;
- Manage rules to NFS shares (allow/deny access);
@ -52,11 +52,11 @@ access to the data ports (EVS IPs or aggregations). If manila-share service
is not running on the controller node, it must have access to the management port.
The driver configuration can be summarized in the following steps:
| 1) Create a file system to be used by Manila on HNAS. Make sure that the
| 1) Create a file system to be used by manila on HNAS. Make sure that the
filesystem is not created as a replication target. Refer to Hitachi HNAS
reference for detailed steps on how to do this;
| 2) Install and configure an OpenStack environment with default Manila
parameters and services. Refer to OpenStack Manila configuration reference;
| 2) Install and configure an OpenStack environment with default manila
parameters and services. Refer to OpenStack manila configuration reference;
| 3) Configure HNAS parameters on manila.conf;
| 4) Prepare the network;
| 5) Configure/create share type;
@ -97,7 +97,7 @@ The following parameters need to be configured in the [backend] section of */etc
| driver_handles_share_servers | DHSS, Driver working mode. For Hitachi driver **this must be**: |
| | *False* |
+-------------------------------+-----------------------------------------------------------------------------------------------------+
| hds_hnas_ip | HNAS management interface IP for communication between Manila node and HNAS. |
| hds_hnas_ip | HNAS management interface IP for communication between manila node and HNAS. |
+-------------------------------+-----------------------------------------------------------------------------------------------------+
| hds_hnas_password | This field is used to provide password credential to HNAS. |
| | Either hds_hnas_password or hds_hnas_ssh_private_key must be set. |
@ -203,7 +203,7 @@ Step 5 - Share Type Configuration
Manila requires that the share type includes the driver_handles_share_servers
extra-spec. This ensures that the share will be created on a backend that
supports the requested driver_handles_share_servers capability. For the Hitachi
HNAS Manila driver, this must be set to False.
HNAS manila driver, this must be set to False.
``$ manila type-create hitachi False``
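
With the type in place, a share can then be requested against it; for
example (name and size are arbitrary)::

    $ manila create --share-type hitachi --name myshare nfs 10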
@ -248,10 +248,10 @@ Manage and Unmanage Shares
''''''''''''''''''''''''''
Manila has the ability to manage and unmanage shares. If there is a share in
the storage and it is not in OpenStack, you can manage that share and use it
as a Manila share. HNAS drivers use virtual-volumes (V-VOL) to create shares.
as a manila share. HNAS drivers use virtual-volumes (V-VOL) to create shares.
Only V-VOL shares can be used by the driver. If the NFS export is an ordinary
FS export, it is not possible to use it in Manila. The unmanage operation
only unlinks the share from Manila. All data is preserved.
FS export, it is not possible to use it in manila. The unmanage operation
only unlinks the share from manila. All data is preserved.
| To **manage** shares use:
| ``$ manila manage [--name <name>] [--description <description>]``
@ -293,7 +293,7 @@ Additional Notes:
| - HNAS has some restrictions about the number of EVSs, filesystems,
virtual-volumes and simultaneous SSC connections. Check the manual
specification for your system.
| - Shares and snapshots are thin provisioned. Only the space actually used in HNAS is reported to Manila.
| - Shares and snapshots are thin provisioned. Only the space actually used in HNAS is reported to manila.
Also, a snapshot does not initially take any space in
HNAS; it only stores the difference between the share and the snapshot, so it
grows when share data is changed.
@ -307,4 +307,4 @@ The :mod:`manila.share.drivers.hitachi.hds_hnas` Module
:noindex:
:members:
:undoc-members:
:show-inheritance:
:show-inheritance:


@ -16,7 +16,7 @@
HPE 3PAR Driver
===============
The HPE 3PAR Manila driver provides NFS and CIFS shared file systems to
The HPE 3PAR manila driver provides NFS and CIFS shared file systems to
OpenStack using HPE 3PAR's File Persona capabilities.
.. note::
@ -56,12 +56,12 @@ The following operations are supported with HPE 3PAR File Persona:
Share networks are not supported. Shares are created directly on the 3PAR
without the use of a share server or service VM. Network connectivity is
set up outside of Manila.
set up outside of manila.
Requirements
------------
On the system running the Manila share service:
On the system running the manila share service:
- python-3parclient 4.0.0 or newer from PyPI.
@ -75,7 +75,7 @@ Pre-Configuration on the HPE 3PAR
--------------------------------
- HPE 3PAR File Persona must be initialized and started (:code:`startfs`)
- A File Provisioning Group (FPG) must be created for use with Manila
- A File Provisioning Group (FPG) must be created for use with manila
- A Virtual File Server (VFS) must be created for the FPG
- The VFS must be configured with an appropriate share export IP address
- A local user in the Administrators group is needed for CIFS shares
@ -83,7 +83,7 @@ Pre-Configuration on the HPE 3PAR
Backend Configuration
---------------------
The following parameters need to be configured in the Manila configuration
The following parameters need to be configured in the manila configuration
file for the HPE 3PAR driver:
- `share_backend_name` = <backend name to enable>
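
As a hedged illustration of such a section (the driver path and the
``hpe3par_fpg`` option are assumptions to be verified against the driver
documentation)::

    [hpe3par]
    share_driver = manila.share.drivers.hpe.hpe_3par_driver.HPE3ParShareDriver
    share_backend_name = HPE3PAR
    driver_handles_share_servers = False
    hpe3par_fpg = my_fpg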
@ -110,7 +110,7 @@ effect.
Network Approach
----------------
Connectivity between the storage array (SSH/CLI and WSAPI) and the Manila host
Connectivity between the storage array (SSH/CLI and WSAPI) and the manila host
is required for share management.
Connectivity between the clients and the VFS is required for mounting
@ -118,7 +118,7 @@ and using the shares. This includes:
- Routing from the client to the external network
- Assigning the client an external IP address (e.g., a floating IP)
- Configuring the Manila host networking properly for IP forwarding
- Configuring the manila host networking properly for IP forwarding
- Configuring the VFS networking properly for client subnets
Share Types
@ -126,7 +126,7 @@ Share Types
When creating a share, a share type can be specified to determine where and
how the share will be created. If a share type is not specified, the
`default_share_type` set in the Manila configuration file is used.
`default_share_type` set in the manila configuration file is used.
Manila requires that the share type includes the
`driver_handles_share_servers` extra-spec. This ensures that the share
@ -134,7 +134,7 @@ will be created on a backend that supports the requested
driver_handles_share_servers (share networks) capability.
For the HPE 3PAR driver, this must be set to False.
Another common Manila extra-spec used to determine where a share is created
Another common manila extra-spec used to determine where a share is created
is `share_backend_name`. When this extra-spec is defined in the share type,
the share will be created on a backend with a matching share_backend_name.
@ -153,7 +153,7 @@ the capabilities filter and the HPE 3PAR driver:
`thin_provisioning` will be reported as True for backends that use thin
provisioned volumes. FPGs that use fully provisioned volumes will report
False. Backends that use thin provisioning also support Manila's
False. Backends that use thin provisioning also support manila's
over-subscription feature.
`dedupe` will be reported as True for backends that use deduplication
@ -201,7 +201,7 @@ The following HPE 3PAR extra-specs are used when creating NFS shares:
The NFS export options have the following limitations:
* `ro` and `rw` are not allowed (Manila will determine the read-only option)
* `ro` and `rw` are not allowed (manila will determine the read-only option)
* `no_subtree_check` and `fsid` are not allowed per HPE 3PAR CLI support
* `(in)secure` and `(no_)root_squash` are not allowed because the HPE 3PAR
driver controls those settings


@ -17,7 +17,7 @@
Huawei Driver
=============
Huawei NAS Driver is a plugin based on the OpenStack Manila service. The Huawei NAS
Huawei NAS Driver is a plugin based on the OpenStack manila service. The Huawei NAS
Driver can be used to provide functions such as the share and snapshot for virtual
machines (instances) in OpenStack. Huawei NAS Driver enables the OceanStor V3 series
V300R002 storage system to provide only network filesystems for OpenStack.
@ -104,7 +104,7 @@ storage systems, the driver configuration file is as follows:
Backend Configuration
---------------------
Modify the `manila.conf` Manila configuration file and add share_driver and
Modify the `manila.conf` manila configuration file and add share_driver and
manila_huawei_conf_file items.
Example for configuring a storage system:
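
A hedged sketch (the section name and XML file path are illustrative)::

    [huaweinas]
    share_driver = manila.share.drivers.huawei.huawei_nas.HuaweiNasDriver
    manila_huawei_conf_file = /etc/manila/manila_huawei_conf.xml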
@ -129,14 +129,14 @@ Share Types
When creating a share, a share type can be specified to determine where and
how the share will be created. If a share type is not specified, the
`default_share_type` set in the Manila configuration file is used.
`default_share_type` set in the manila configuration file is used.
Manila requires that the share type includes the `driver_handles_share_servers`
extra-spec. This ensures that the share will be created on a backend that
supports the requested driver_handles_share_servers (share networks) capability.
For the Huawei driver, this must be set to False.
Another common Manila extra-spec used to determine where a share is created
Another common manila extra-spec used to determine where a share is created
is `share_backend_name`. When this extra-spec is defined in the share type,
the share will be created on a backend with a matching share_backend_name.
@ -171,7 +171,7 @@ type uses one or more of the following extra-specs:
* huawei_smartpartition:partitionname=test_partition_name
`thin_provisioning` will be reported as True for backends that use thin
provisioned pool. Backends that use thin provisioning also support Manila's
provisioned pool. Backends that use thin provisioning also support manila's
over-subscription feature. 'thin_provisioning' will be reported as False for
backends that use thick provisioned pool.


@ -1,6 +1,6 @@
Internationalization
====================
manila uses `gettext <http://docs.python.org/library/gettext.html>`_ so that
Manila uses `gettext <http://docs.python.org/library/gettext.html>`_ so that
user-facing strings such as log messages appear in the appropriate
language in different locales.


@ -18,7 +18,7 @@
Developer Guide
===============
In this section you will find information on Manila's lower-level programming APIs.
In this section you will find information on manila's lower-level programming APIs.
Programming HowTos and Tutorials
@ -31,7 +31,7 @@ Programming HowTos and Tutorials
addmethod.openstackapi
Background Concepts for Manila
Background Concepts for manila
------------------------------
.. toctree::
:maxdepth: 3


@ -11,7 +11,7 @@
License for the specific language governing permissions and limitations
under the License.
Introduction to Manila Shared Filesystem Management Service
Introduction to the Shared File Systems service
===========================================================
Manila is the file share service project for OpenStack. Manila provides the
management of file shares, for example NFS and CIFS, as a core service to
OpenStack. Manila currently works with NetApp, Red Hat storage (GlusterFS)
and EMC VNX, as well as on a base Linux NFS or Samba server. There are
a number of concepts that will help in better understanding of the
solutions provided by Manila. One aspect can be to explore the
different service possibilities provided by Manila.
solutions provided by manila. One aspect can be to explore the
different service possibilities provided by manila.
Manila, depending on the driver, requires the user by default to create a
share network using neutron-net-id and neutron-subnet-id (GlusterFS native
driver does not require it). After creation of the share network, the user
can proceed to create the shares. Users in Manila can configure multiple
can proceed to create the shares. Users in manila can configure multiple
back-ends just like Cinder. Manila has a share server assigned to every
tenant. This is the solution for all back-ends except for GlusterFS. The
customer in this scenario is prompted to create a share server using neutron
net-id and subnet-id before even trying to create a share.
The current low-level services available in Manila are:
The current low-level services available in manila are:
- :term:`manila-api`


@ -1,7 +1,7 @@
Project hosting with Launchpad
==============================
`Launchpad`_ hosts the Manila project. The Manila project homepage on Launchpad is
`Launchpad`_ hosts the manila project. The manila project homepage on Launchpad is
http://launchpad.net/manila.
Launchpad credentials
@ -30,7 +30,7 @@ The mailing list archives are at https://lists.launchpad.net/openstack.
Bug tracking
------------
Report Manila bugs at https://bugs.launchpad.net/manila
Report manila bugs at https://bugs.launchpad.net/manila
Feature requests (Blueprints)
-----------------------------
@ -41,7 +41,7 @@ https://blueprints.launchpad.net/manila.
Technical support (Answers)
---------------------------
Manila uses Launchpad Answers to track Manila technical support questions. The Manila
Manila uses Launchpad Answers to track manila technical support questions. The manila
Answers page is at https://answers.launchpad.net/manila.
Note that the `OpenStack Forums`_ (which are not hosted on Launchpad) can also
@ -51,4 +51,4 @@ be used for technical support requests.
.. _Wiki: http://wiki.openstack.org
.. _Manila Team: https://launchpad.net/~manila
.. _OpenStack Team: https://launchpad.net/~openstack
.. _OpenStack Forums: http://forums.openstack.org/
.. _OpenStack Forums: http://forums.openstack.org/


@ -18,7 +18,7 @@
Common and Misc Libraries
=========================
Libraries common throughout Manila or just ones that haven't yet been
Libraries common throughout manila or just ones that haven't yet been
categorized in depth.


@ -17,7 +17,7 @@
NetApp Clustered Data ONTAP
===========================
The Manila Shared Filesystem Management Service can be configured to use
The Shared File Systems service can be configured to use
NetApp Clustered Data ONTAP (cDOT) version 8.2 and later.
Supported Operations
@ -63,15 +63,15 @@ network and provision each of a tenant's shares into that SVM. This requires
the user to specify both a share network as well as a share type with the DHSS
extra spec set to True when creating shares.
If 'driver_handles_share_servers' is False, the Manila admin must configure a
If 'driver_handles_share_servers' is False, the manila admin must configure a
single SVM, along with associated LIFs and protocol services, that will be
used for provisioning shares. The SVM is specified in the Manila config file.
used for provisioning shares. The SVM is specified in the manila config file.
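
For the DHSS=False case just described, the backend stanza might look like
the following sketch (values are placeholders)::

    [cdotsvm1]
    share_driver = manila.share.drivers.netapp.common.NetAppDriver
    driver_handles_share_servers = False
    netapp_storage_family = ontap_cluster
    netapp_server_hostname = cluster-mgmt.example.com
    netapp_login = admin
    netapp_password = secret
    netapp_vserver = svm_manila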
Network approach
----------------
L3 connectivity between the storage cluster and the Manila host must exist, and
VLAN segmentation may be configured. All of Manila's network plug-ins are
L3 connectivity between the storage cluster and the manila host must exist, and
VLAN segmentation may be configured. All of manila's network plug-ins are
supported with the cDOT driver.
Supported shared filesystems
@ -91,7 +91,7 @@ Known restrictions
------------------
- For CIFS shares an external Active Directory (AD) service is required. The AD
details should be provided via a Manila security service that is attached to
details should be provided via a manila security service that is attached to
the specified share network.
- Share access rules for CIFS shares may be created only for existing users
in Active Directory.


@ -6,15 +6,15 @@ Manila currently sees each share backend as a whole, even if the backend
consists of several smaller pools with totally different capabilities and
capacities.
Extending Manila to support storage pools within share backends will make
Manila scheduling decisions smarter as it now knows the full set of
Extending manila to support storage pools within share backends will make
manila scheduling decisions smarter as it now knows the full set of
capabilities of a backend.
Problem Description
-------------------
The provisioning decisions in Manila are based on the statistics reported by
The provisioning decisions in manila are based on the statistics reported by
backends. Any backend is assumed to be a single discrete unit with a set of
capabilities and single capacity. In reality this assumption is not true for
many storage providers, as their storage can be further divided or
@ -36,7 +36,7 @@ to a single storage controller, and the following problems may arise:
but perhaps not at the same time. Backends need a way to express exactly what
they support and how much space is consumed out of each type of storage.
Therefore, it is important to extend Manila so that it is aware of storage
Therefore, it is important to extend manila so that it is aware of storage
pools within each backend and can use them as the finest granularity for
resource placement.
@ -53,12 +53,12 @@ Terminology
Pool
A logical concept to describe a set of storage resources that can be
used to serve core Manila requests, e.g. shares/snapshots. This notion is
almost identical to a Manila Share Backend, for it has similar attributes
used to serve core manila requests, e.g. shares/snapshots. This notion is
almost identical to a manila Share Backend, for it has similar attributes
(capacity, capability). The difference is that a Pool may not exist on its
own; it must reside in a Share Backend. One Share Backend can have multiple
Pools but Pools do not have sub-Pools (meaning even if they have them,
sub-Pools do not get exposed to Manila, yet). Each Pool has a unique name
sub-Pools do not get exposed to manila, yet). Each Pool has a unique name
in the Share Backend namespace, which means a Share Backend cannot have two
pools using the same name.
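
In practice this surfaces as a pool-qualified host string; assuming the
conventional ``host@backend#pool`` form, a share's host attribute could
read::

    hostname@backend_name#pool_name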
@ -79,7 +79,7 @@ The workflow in this change is simple:
as the scheduler instructed.
To support placing resources (share/snapshot) onto a pool, these changes will
be made to specific components of Manila:
be made to specific components of manila:
1. Share Backends reporting capacity/capabilities at pool level;
@ -107,7 +107,7 @@ With this change:
REST API impact
---------------
With pool support added to Manila, there is an awkward situation where we
With pool support added to manila, there is an awkward situation where we
require the admin to input the exact location for shares to be imported, which
must have pool info. But there is no way to find out what pools are there for
backends except looking at the scheduler log. That causes a poor user
@ -166,7 +166,7 @@ text compression should easily mitigate this problem.
Developer impact
----------------
For those share backends that would like to expose internal pools to Manila
For those share backends that would like to expose internal pools to manila
for more flexibility, developers should update their drivers to include all
pool capacities and capabilities in the share stats it reports to scheduler.
Share backends without multiple pools do not need to change their
@ -217,7 +217,7 @@ pools:
Documentation Impact
--------------------
Documentation impact for changes in Manila is introduced by the API changes.
Documentation impact for changes in manila is introduced by the API changes.
Also, doc changes are needed to append pool names to host names. Driver
changes may also introduce new configuration options which would lead to
Doc changes.


@ -14,10 +14,10 @@
License for the specific language governing permissions and limitations
under the License.
AMQP and Manila
AMQP and manila
===============
AMQP is the messaging technology chosen by the OpenStack cloud. The AMQP broker, either RabbitMQ or Qpid, sits between any two Manila components and allows them to communicate in a loosely coupled fashion. More precisely, Manila components (the compute fabric of OpenStack) use Remote Procedure Calls (RPC hereinafter) to communicate to one another; however such a paradigm is built atop the publish/subscribe paradigm so that the following benefits can be achieved:
AMQP is the messaging technology chosen by the OpenStack cloud. The AMQP broker, either RabbitMQ or Qpid, sits between any two manila components and allows them to communicate in a loosely coupled fashion. More precisely, manila components (the compute fabric of OpenStack) use Remote Procedure Calls (RPC hereinafter) to communicate to one another; however such a paradigm is built atop the publish/subscribe paradigm so that the following benefits can be achieved:
* Decoupling between client and servant (such as the client does not need to know where the servant's reference is).
* Full asynchronism between client and servant (such as the client does not need the servant to run at the same time as the remote call).
@ -30,12 +30,12 @@ Manila uses direct, fanout, and topic-based exchanges. The architecture looks li
..
Manila implements RPC (both request+response, and one-way, respectively nicknamed 'rpc.call' and 'rpc.cast') over AMQP by providing an adapter class which takes care of marshaling and unmarshaling of messages into function calls. Each Manila service (for example Compute, Volume, etc.) creates two queues at initialization time, one which accepts messages with routing keys 'NODE-TYPE.NODE-ID' (for example compute.hostname) and another, which accepts messages with routing keys as generic 'NODE-TYPE' (for example compute). The former is used specifically when Manila-API needs to redirect commands to a specific node like 'euca-terminate instance'. In this case, only the compute node whose host's hypervisor is running the virtual machine can kill the instance. The API acts as a consumer when RPC calls are request/response, otherwise it acts as publisher only.
Manila implements RPC (both request+response, and one-way, respectively nicknamed 'rpc.call' and 'rpc.cast') over AMQP by providing an adapter class which takes care of marshaling and unmarshaling of messages into function calls. Each manila service (for example Compute, Volume, etc.) creates two queues at initialization time, one which accepts messages with routing keys 'NODE-TYPE.NODE-ID' (for example compute.hostname) and another, which accepts messages with routing keys as generic 'NODE-TYPE' (for example compute). The former is used specifically when Manila-API needs to redirect commands to a specific node like 'euca-terminate instance'. In this case, only the compute node whose host's hypervisor is running the virtual machine can kill the instance. The API acts as a consumer when RPC calls are request/response, otherwise it acts as publisher only.
Manila RPC Mappings
-------------------
The figure below shows the internals of a message broker node (referred to as a RabbitMQ node in the diagrams) when a single instance is deployed and shared in an OpenStack cloud. Every Manila component connects to the message broker and, depending on its personality (for example a compute node or a network node), may use the queue either as an Invoker (such as API or Scheduler) or a Worker (such as Compute, Volume or Network). Invokers and Workers do not actually exist in the Manila object model, but we are going to use them as an abstraction for the sake of clarity. An Invoker is a component that sends messages in the queuing system via two operations: i) rpc.call and ii) rpc.cast; a Worker is a component that receives messages from the queuing system and replies accordingly to rpc.call operations.
The figure below shows the internals of a message broker node (referred to as a RabbitMQ node in the diagrams) when a single instance is deployed and shared in an OpenStack cloud. Every manila component connects to the message broker and, depending on its personality (for example a compute node or a network node), may use the queue either as an Invoker (such as API or Scheduler) or a Worker (such as Compute, Volume or Network). Invokers and Workers do not actually exist in the manila object model, but we are going to use them as an abstraction for the sake of clarity. An Invoker is a component that sends messages in the queuing system via two operations: i) rpc.call and ii) rpc.cast; a Worker is a component that receives messages from the queuing system and replies accordingly to rpc.call operations.
Figure 2 shows the following internal elements:
@ -43,7 +43,7 @@ Figure 2 shows the following internal elements:
* Direct Consumer: a Direct Consumer comes to life if (and only if) an rpc.call operation is executed; this object is instantiated and used to receive a response message from the queuing system; every consumer connects to a unique direct-based exchange via a unique exclusive queue; its life-cycle is limited to the message delivery; the exchange and queue identifiers are determined by a UUID generator, and are marshaled in the message sent by the Topic Publisher (only rpc.call operations).
* Topic Consumer: a Topic Consumer comes to life as soon as a Worker is instantiated and exists throughout its life-cycle; this object is used to receive messages from the queue and it invokes the appropriate action as defined by the Worker role. A Topic Consumer connects to the same topic-based exchange either via a shared queue or via a unique exclusive queue. Every Worker has two topic consumers, one that is addressed only during rpc.cast operations (and it connects to a shared queue whose exchange key is 'topic') and the other that is addressed only during rpc.call operations (and it connects to a unique queue whose exchange key is 'topic.host').
* Direct Publisher: a Direct Publisher comes to life only during rpc.call operations and it is instantiated to return the message required by the request/response operation. The object connects to a direct-based exchange whose identity is dictated by the incoming message.
* Topic Exchange: The Exchange is a routing table that exists in the context of a virtual host (the multi-tenancy mechanism provided by Qpid or RabbitMQ); its type (such as topic vs. direct) determines the routing policy; a message broker node will have only one topic-based exchange for every topic in Manila.
* Topic Exchange: The Exchange is a routing table that exists in the context of a virtual host (the multi-tenancy mechanism provided by Qpid or RabbitMQ); its type (such as topic vs. direct) determines the routing policy; a message broker node will have only one topic-based exchange for every topic in manila.
* Direct Exchange: this is a routing table that is created during rpc.call operations; there are many instances of this kind of exchange throughout the life-cycle of a message broker node, one for each rpc.call invoked.
* Queue Element: A Queue is a message bucket. Messages are kept in the queue until a Consumer (either Topic or Direct Consumer) connects to the queue and fetches it. Queues can be shared or can be exclusive. Queues whose routing key is 'topic' are shared amongst Workers of the same personality.
@ -88,7 +88,7 @@ At any given time the load of a message broker node running either Qpid or Rabbi
* Throughput of API calls: the number of API calls (more precisely rpc.call ops) being served by the OpenStack cloud dictates the number of direct-based exchanges, related queues and direct consumers connected to them.
* Number of Workers: there is one queue shared amongst workers with the same personality; however there are as many exclusive queues as the number of workers; the number of workers dictates also the number of routing keys within the topic-based exchange, which is shared amongst all workers.
The figure below shows the status of a RabbitMQ node after Manila components' bootstrap in a test environment. Exchanges and queues being created by Manila components are:
The figure below shows the status of a RabbitMQ node after manila components' bootstrap in a test environment. Exchanges and queues being created by manila components are:
* Exchanges
1. manila (topic exchange)


@ -28,7 +28,7 @@ flags by doing::
This will show the following help information::
Usage: ./run_tests.sh [OPTION]...
Run Manila's test suite(s)
Run manila's test suite(s)
-V, --virtual-env Always use virtualenv. Install automatically if not present
-N, --no-virtual-env Don't use virtualenv. Run tests in local environment
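
For example, the two virtualenv-related flags above would be used as::

    $ ./run_tests.sh -V    # always use (and auto-install) the virtualenv
    $ ./run_tests.sh -N    # run in the local environment instead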