Merge "cephfs/driver: add nfs protocol support"

Jenkins 2017-06-19 14:32:02 +00:00 committed by Gerrit Code Review
commit 49e7658ef2
10 changed files with 567 additions and 101 deletions


@ -14,39 +14,52 @@
License for the specific language governing permissions and limitations
under the License.
=============
CephFS driver
=============
The CephFS driver enables manila to export shared filesystems backed by Ceph's
File System (CephFS) using either the Ceph network protocol or NFS protocol.
Guests require a native Ceph client or an NFS client in order to mount the
filesystem.

When guests access CephFS using the native Ceph protocol, access is
controlled via Ceph's cephx authentication system. If a user requests
share access for an ID, Ceph creates a corresponding Ceph auth ID and a secret
key, if they do not already exist, and authorizes the ID to access the share.
The client can then mount the share using the ID and the secret key. To learn
more about configuring Ceph clients to access the shares created using this
driver, please see the Ceph documentation
(http://docs.ceph.com/docs/master/cephfs/). If you choose to use the kernel
client rather than the FUSE client, the share size limits set in manila may
not be obeyed.

When guests access CephFS through NFS, an NFS-Ganesha server mediates
access to CephFS. The driver enables access control by managing the
NFS-Ganesha server's exports.

Supported Operations
~~~~~~~~~~~~~~~~~~~~

The following operations are supported with the CephFS backend:

- Create/delete share
- Allow/deny CephFS native protocol access to share

  * Only ``cephx`` access type is supported for CephFS native protocol.
  * ``read-only`` access level is supported in Newton or later versions
    of manila.
  * ``read-write`` access level is supported in Mitaka or later versions
    of manila.

  (or)

  Allow/deny NFS access to share

  * Only ``ip`` access type is supported for NFS protocol.
  * ``read-only`` and ``read-write`` access levels are supported in Pike or
    later versions of manila.

- Extend/shrink share
- Create/delete snapshot
- Create/delete consistency group (CG)
@ -60,8 +73,18 @@ The following operations are supported with CephFS backend:
see
(http://docs.ceph.com/docs/master/cephfs/experimental-features/#snapshots).

Prerequisites
~~~~~~~~~~~~~

.. important:: A manila share backed by CephFS is only as good as the
               underlying filesystem. Take care when configuring your Ceph
               cluster, and consult the latest guidance on the use of
               CephFS in the Ceph documentation
               (http://docs.ceph.com/docs/master/cephfs/).

For CephFS native shares
------------------------
- Mitaka or later versions of manila.
- Jewel or later versions of Ceph.
@ -74,17 +97,33 @@ Prerequisites
- Network connectivity between your Ceph cluster's public network and the
servers running the :term:`manila-share` service.
- Network connectivity between your Ceph cluster's public network and guests.
See :ref:`security_cephfs_native`.
For CephFS NFS shares
---------------------
- Pike or later versions of manila.
- Kraken or later versions of Ceph.
- 2.5 or later versions of NFS-Ganesha.
- A Ceph cluster with a filesystem configured (
http://docs.ceph.com/docs/master/cephfs/createfs/)
- ``ceph-common`` package installed in the servers running the
:term:`manila-share` service.
- NFS client installed in the guest.
- Network connectivity between your Ceph cluster's public network and the
servers running the :term:`manila-share` service.
- Network connectivity between your Ceph cluster's public network and
NFS-Ganesha server.
- Network connectivity between your NFS-Ganesha server and the manila
guest.
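
For the NFS-Ganesha and NFS client prerequisites above, a minimal
installation sketch is shown below. The package names are assumptions
(they vary by distribution and NFS-Ganesha version); make sure the Ganesha
build you install provides the Ceph FSAL.

.. code-block:: console

    # on the NFS-Ganesha server host (hypothetical Debian/Ubuntu package names)
    sudo apt-get install nfs-ganesha nfs-ganesha-ceph

    # in the guest that will mount the NFS share
    sudo apt-get install nfs-common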

.. _authorize_ceph_driver:

Authorizing the driver to communicate with Ceph
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Run the following commands to create a Ceph identity for a driver instance
to use:

.. code-block:: console
@ -105,23 +144,17 @@ Run the following commands to create a Ceph identity for manila to use:
``manila.keyring``, along with your ``ceph.conf`` file, will then need to be
placed on the server running the :term:`manila-share` service.

.. important::

   To communicate with the Ceph backend, a CephFS driver instance
   (represented as a backend driver section in manila.conf) requires its own
   Ceph auth ID that is not used by other CephFS driver instances running in
   the same controller node.

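If you run a second CephFS driver backend on the same node, give it its own
identity. A hypothetical example (``client.manila2`` and the capability set
shown are illustrative only; grant the same capabilities you granted the
first identity above):

.. code-block:: console

    ceph auth get-or-create client.manila2 \
        mon 'allow r' \
        osd 'allow rw' \
        mds 'allow *' \
        -o manila2.keyring
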
In the server running the :term:`manila-share` service, you can place the
``ceph.conf`` and ``manila.keyring`` files in the /etc/ceph directory. Set the
same owner for the :term:`manila-share` process and the ``manila.keyring``
file. Add the following section to the ``ceph.conf`` file.

.. code-block:: ini
@ -137,10 +170,30 @@ locations so that they are co-located with manila services's pid files and
log files respectively.
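
For example, placing the ``ceph.conf`` and ``manila.keyring`` files as
described above and setting ownership might look like the following (the
Ceph admin node name, destination paths, and ``manila`` service user are
hypothetical; adjust them to your deployment):

.. code-block:: console

    sudo scp ceph-admin-node:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
    sudo scp ceph-admin-node:/etc/ceph/manila.keyring /etc/ceph/manila.keyring
    sudo chown manila:manila /etc/ceph/manila.keyring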

Enabling snapshot support in Ceph backend
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Enable snapshots in Ceph if you want to use them in manila:

.. code-block:: console

    ceph mds set allow_new_snaps true --yes-i-really-mean-it

.. warning::

    Note that the snapshot support for the CephFS driver is experimental and is
    known to have several caveats for use. Only enable this and the
    equivalent ``manila.conf`` option if you understand these risks. See
    (http://docs.ceph.com/docs/master/cephfs/experimental-features/#snapshots)
    for more details.

Configuring CephFS backend in manila.conf
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Configure CephFS native share backend in manila.conf
----------------------------------------------------

Add CephFS to ``enabled_share_protocols`` (enforced at manila api layer). In
this example we leave NFS and CIFS enabled, although you can remove these
if you will only use CephFS:
@ -148,25 +201,28 @@ if you will only use CephFS:
enabled_share_protocols = NFS,CIFS,CEPHFS

Create a section like this to define a CephFS native backend:

.. code-block:: ini

    [cephfsnative1]
    driver_handles_share_servers = False
    share_backend_name = CEPHFSNATIVE1
    share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
    cephfs_conf_path = /etc/ceph/ceph.conf
    cephfs_protocol_helper_type = CEPHFS
    cephfs_auth_id = manila
    cephfs_cluster_name = ceph
    cephfs_enable_snapshots = false

Set ``driver-handles-share-servers`` to ``False`` as the driver does not
manage the lifecycle of ``share-servers``. To let the driver perform snapshot
related operations, set ``cephfs_enable_snapshots`` to ``True``. For the driver
backend to expose shares via the native Ceph protocol, set
``cephfs_protocol_helper_type`` to ``CEPHFS``.

Then edit ``enabled_share_backends`` to point to the driver's backend section
using the section name. In this example we are also including another backend
("generic1"), you would include whatever other backends you have configured.
@ -178,61 +234,150 @@ using the section name. In this example we are also including another backend
.. code-block:: ini

    enabled_share_backends = generic1, cephfsnative1

Configure CephFS NFS share backend in manila.conf
-------------------------------------------------

Add NFS to ``enabled_share_protocols`` if it's not already there:

.. code-block:: ini

    enabled_share_protocols = NFS,CIFS,CEPHFS

Create a section to define a CephFS NFS share backend:

.. code-block:: ini

    [cephfsnfs1]
    driver_handles_share_servers = False
    share_backend_name = CEPHFSNFS1
    share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
    cephfs_protocol_helper_type = NFS
    cephfs_conf_path = /etc/ceph/ceph.conf
    cephfs_auth_id = manila1
    cephfs_cluster_name = ceph
    cephfs_enable_snapshots = False
    cephfs_ganesha_server_is_remote = False
    cephfs_ganesha_server_ip = 172.24.4.3

The following options are set in the driver backend section above:

* ``driver-handles-share-servers`` to ``False`` as the driver does not
  manage the lifecycle of ``share-servers``.

* ``cephfs_protocol_helper_type`` to ``NFS`` to allow NFS protocol access to
  the CephFS backed shares.

* ``cephfs_auth_id`` to the Ceph auth ID created in
  :ref:`authorize_ceph_driver`.

* ``cephfs_ganesha_server_is_remote`` to ``False`` if the NFS-Ganesha server
  is co-located with the :term:`manila-share` service. If the NFS-Ganesha
  server is remote, then set the option to ``True``, and set other options
  such as ``cephfs_ganesha_server_ip``, ``cephfs_ganesha_server_username``,
  and ``cephfs_ganesha_server_password`` (or
  ``cephfs_ganesha_path_to_private_key``) to allow the driver to manage the
  NFS-Ganesha export entries over SSH (a remote setup is sketched after this
  list).

* ``cephfs_ganesha_server_ip`` to the Ganesha server IP address. It is
  recommended to set this option even if the Ganesha server is co-located
  with the :term:`manila-share` service.

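A hypothetical variant of the backend section for a remote NFS-Ganesha server
is sketched below. The IP address, username, and key path are placeholders;
only the option names come from the driver:

.. code-block:: ini

    [cephfsnfs1]
    driver_handles_share_servers = False
    share_backend_name = CEPHFSNFS1
    share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
    cephfs_protocol_helper_type = NFS
    cephfs_conf_path = /etc/ceph/ceph.conf
    cephfs_auth_id = manila1
    cephfs_cluster_name = ceph
    cephfs_ganesha_server_is_remote = True
    cephfs_ganesha_server_ip = 172.24.4.5
    cephfs_ganesha_server_username = ganeshaadmin
    cephfs_ganesha_path_to_private_key = /etc/manila/ganesha_rsa
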
Edit ``enabled_share_backends`` to point to the driver's backend section
using the section name, ``cephfsnfs1``.

.. code-block:: ini

    enabled_share_backends = generic1, cephfsnfs1

Creating shares
~~~~~~~~~~~~~~~

Create CephFS native share
--------------------------

The default share type may have ``driver_handles_share_servers`` set to True.
Configure a share type suitable for CephFS native share:

.. code-block:: console

    manila type-create cephfsnativetype false
    manila type-key cephfsnativetype set vendor_name=Ceph storage_protocol=CEPHFS

Then create yourself a share:
.. code-block:: console

    manila create --share-type cephfsnativetype --name cephnativeshare1 cephfs 1

Note the export location of the share:
.. code-block:: console

    manila share-export-location-list cephnativeshare1

The export location of the share contains the Ceph monitor (mon) addresses and
ports, and the path to be mounted. It is of the form,
``{mon ip addr:port}[,{mon ip addr:port}]:{path to be mounted}``
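
For example, a native share's export location might look like the following
(the monitor addresses are hypothetical):

.. code-block:: none

    192.168.1.7:6789,192.168.1.8:6789,192.168.1.9:6789:/volumes/_nogroup/4c55ad20-9c55-4a5e-9233-8ac64566b98c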

Create CephFS NFS share
-----------------------

Configure a share type suitable for CephFS NFS share:

.. code-block:: console

    manila type-create cephfsnfstype false
    manila type-key cephfsnfstype set vendor_name=Ceph storage_protocol=NFS

Then create a share:

.. code-block:: console

    manila create --share-type cephfsnfstype --name cephnfsshare1 nfs 1

Note the export location of the share:

.. code-block:: console

    manila share-export-location-list cephnfsshare1

The export location of the share contains the IP address of the NFS-Ganesha
server and the path to be mounted. It is of the form,
``{NFS-Ganesha server address}:{path to be mounted}``

Allowing access to shares
~~~~~~~~~~~~~~~~~~~~~~~~~

Allow access to CephFS native share
-----------------------------------

Allow Ceph auth ID ``alice`` access to the share using ``cephx`` access type.
.. code-block:: console

    manila access-allow cephnativeshare1 cephx alice

Note the access status, and the access/secret key of ``alice``.
.. code-block:: console

    manila access-list cephnativeshare1


.. note::

    In the Mitaka release, the secret key is not exposed by any manila API. The
    Ceph storage admin needs to pass the secret key to the guest out of band of
    manila. You can refer to the link below to see how the storage admin
    could obtain the secret key of an ID.

    http://docs.ceph.com/docs/jewel/rados/operations/user-management/#get-a-user

Alternatively, the cloud admin can create Ceph auth IDs for each of the
tenants. The users can then request manila to authorize the pre-created
Ceph auth IDs, whose secret keys are already shared with them out of band
of manila, to access the shares.
@ -248,9 +393,21 @@ Note the access status, and the access/secret key of ``alice``.
For more details, please see the Ceph documentation.
http://docs.ceph.com/docs/jewel/rados/operations/user-management/#add-a-user

Allow access to CephFS NFS share
--------------------------------

Allow a guest access to the share using ``ip`` access type.

.. code-block:: console

    manila access-allow cephnfsshare1 ip 172.24.4.225

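For illustration only: when access is allowed, the driver instructs
NFS-Ganesha to create an export for the share, using a Ceph auth ID of the
form ``ganesha-<share id>`` that is restricted to the share's CephFS path.
A generated export block might look roughly like the following; the exact
fields and values depend on your Ganesha export template and are shown here
only as an assumption:

.. code-block:: none

    EXPORT {
        Export_Id = 100;
        Path = "/volumes/_nogroup/6732900b-32c1-4816-a529-4d6d3f15811e";
        Pseudo = "/volumes/_nogroup/6732900b-32c1-4816-a529-4d6d3f15811e";
        CLIENT {
            Clients = 172.24.4.225;
            Access_Type = "RW";
        }
        FSAL {
            Name = "CEPH";
            User_Id = "ganesha-6732900b-32c1-4816-a529-4d6d3f15811e";
            Secret_Access_Key = "<cephx secret key>";
        }
    }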

Mounting CephFS shares
~~~~~~~~~~~~~~~~~~~~~~

Mounting CephFS native share using FUSE client
----------------------------------------------

Using the secret key of the authorized ID ``alice`` create a keyring file,
``alice.keyring`` like:
@ -282,56 +439,72 @@ from the share's export location:
--keyring=./alice.keyring \
--client-mountpoint=/volumes/_nogroup/4c55ad20-9c55-4a5e-9233-8ac64566b98c
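
The commands above use the FUSE client. As noted earlier, the kernel CephFS
client can also be used (with the caveat that manila's share size limits may
not be obeyed). A hypothetical kernel-client mount, reusing the monitor
addresses and path from the share's export location and ``alice``'s secret
key stored in a plain-text file:

.. code-block:: console

    sudo mount -t ceph 192.168.1.7:6789,192.168.1.8:6789:/volumes/_nogroup/4c55ad20-9c55-4a5e-9233-8ac64566b98c /mnt/cephfs \
        -o name=alice,secretfile=./alice.secret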

Mount CephFS NFS share using NFS client
---------------------------------------

In the guest, mount the share using the NFS client and knowing the share's
export location.

.. code-block:: console

    sudo mount -t nfs 172.24.4.3:/volumes/_nogroup/6732900b-32c1-4816-a529-4d6d3f15811e /mnt/nfs/

Known restrictions
~~~~~~~~~~~~~~~~~~

Consider the driver as a building block for supporting multi-tenant
workloads in the future. However, it can be used in private cloud
deployments.

- To communicate with the Ceph backend, a CephFS driver instance
  (represented as a backend driver section in manila.conf) requires its own
  Ceph auth ID that is not used by other CephFS driver instances running in
  the same controller node.

- The snapshot support of the driver is disabled by default. The
  ``cephfs_enable_snapshots`` configuration option needs to be set to ``True``
  to allow snapshot operations. Snapshot support will also need to be enabled
  on the backend CephFS storage.

- Snapshots are read-only. A user can read a snapshot's contents from the
  ``.snap/{manila-snapshot-id}_{unknown-id}`` folder within the mounted
  share.

Restrictions with CephFS native share backend
---------------------------------------------

- To restrict share sizes, CephFS uses quotas that are enforced in the client
  side. The CephFS FUSE clients are relied on to respect quotas.

Mitaka release only

- The secret-key of a Ceph auth ID required to mount a share is not exposed to
  a user by a manila API. To work around this, the storage admin would need to
  pass the key out of band of manila, or the user would need to use the Ceph ID
  and key already created and shared with her by the cloud admin.

Security
~~~~~~~~

- Each share's data is mapped to a distinct Ceph RADOS namespace. A guest is
  restricted to access only that particular RADOS namespace.
  http://docs.ceph.com/docs/master/cephfs/file-layouts/

- An additional level of resource isolation can be provided by mapping a
  share's contents to a separate RADOS pool. This layout would be preferred
  only for cloud deployments with a limited number of shares needing strong
  resource separation. You can do this by setting a share type specification,
  ``cephfs:data_isolated`` for the share type used by the cephfs driver.

.. code-block:: console

    manila type-key cephfsnativetype set cephfs:data_isolated=True

.. _security_cephfs_native:

Security with CephFS native share backend
-----------------------------------------

As the guests need direct access to Ceph's public network, the CephFS native
share backend is suitable only in private clouds where guests can be trusted.

The :mod:`manila.share.drivers.cephfs.driver` Module


@ -134,7 +134,7 @@ Mapping of share drivers and share access rules support
+----------------------------------------+--------------+----------------+------------+--------------+--------------+----------------+------------+------------+
| Oracle ZFSSA | NFS,CIFS(K) | \- | \- | \- | \- | \- | \- | \- |
+----------------------------------------+--------------+----------------+------------+--------------+--------------+----------------+------------+------------+
| CephFS | NFS (P) | \- | \- | CEPHFS (M) | NFS (P) | \- | \- | CEPHFS (N) |
+----------------------------------------+--------------+----------------+------------+--------------+--------------+----------------+------------+------------+
| Tegile | NFS (M) |NFS (M),CIFS (M)| \- | \- | NFS (M) |NFS (M),CIFS (M)| \- | \- |
+----------------------------------------+--------------+----------------+------------+--------------+--------------+----------------+------------+------------+


@ -0,0 +1,5 @@
EXPORT {
    FSAL {
        Name = "CEPH";
    }
}


@ -14,6 +14,7 @@
# under the License.
import socket
import sys
from oslo_config import cfg
@ -25,9 +26,9 @@ from manila import exception
from manila.i18n import _
from manila.share import driver
from manila.share.drivers import ganesha
from manila.share.drivers.ganesha import utils as ganesha_utils
from manila.share import share_types
try:
import ceph_volume_client
ceph_module_found = True
@ -62,13 +63,28 @@ cephfs_opts = [
),
cfg.StrOpt('cephfs_protocol_helper_type',
default="CEPHFS",
choices=['CEPHFS', 'NFS'],
ignore_case=True,
help="The type of protocol helper to use. Default is "
"CEPHFS."
),
cfg.BoolOpt('cephfs_ganesha_server_is_remote',
default=False,
help="Whether the NFS-Ganesha server is remote to the driver."
),
cfg.StrOpt('cephfs_ganesha_server_ip',
help="The IP address of the NFS-Ganesha server."),
cfg.StrOpt('cephfs_ganesha_server_username',
default='root',
help="The username to authenticate as in the remote "
"NFS-Ganesha server host."),
cfg.StrOpt('cephfs_ganesha_path_to_private_key',
help="The path of the driver host's private SSH key file."),
cfg.StrOpt('cephfs_ganesha_server_password',
secret=True,
help="The password to authenticate as the user in the remote "
"Ganesha server host. This is not required if "
"'cephfs_ganesha_path_to_private_key' is configured."),
]
@ -82,7 +98,8 @@ def cephfs_share_path(share):
share['share_group_id'], share['id'])
class CephFSDriver(driver.ExecuteMixin, driver.GaneshaMixin,
                   driver.ShareDriver,):
"""Driver for the Ceph Filesystem."""
def __init__(self, *args, **kwargs):
@ -95,11 +112,15 @@ class CephFSDriver(driver.ShareDriver,):
self.configuration.append_config_values(cephfs_opts)
    def do_setup(self, context):
        if self.configuration.cephfs_protocol_helper_type.upper() == "CEPHFS":
            protocol_helper_class = getattr(
                sys.modules[__name__], 'NativeProtocolHelper')
        else:
            protocol_helper_class = getattr(
                sys.modules[__name__], 'NFSProtocolHelper')

        self.protocol_helper = protocol_helper_class(
            self._execute,
            self.configuration,
            volume_client=self.volume_client)
@ -412,3 +433,83 @@ class NativeProtocolHelper(ganesha.NASHelperBase):
self._deny_access(context, share, rule)
return access_keys
class NFSProtocolHelper(ganesha.GaneshaNASHelper2):

    shared_data = {}

    supported_protocols = ('NFS',)

    def __init__(self, execute, config_object, **kwargs):
        if config_object.cephfs_ganesha_server_is_remote:
            execute = ganesha_utils.SSHExecutor(
                config_object.cephfs_ganesha_server_ip, 22, None,
                config_object.cephfs_ganesha_server_username,
                password=config_object.cephfs_ganesha_server_password,
                privatekey=config_object.cephfs_ganesha_path_to_private_key)
        else:
            execute = ganesha_utils.RootExecutor(execute)

        self.ganesha_host = config_object.cephfs_ganesha_server_ip
        if not self.ganesha_host:
            self.ganesha_host = socket.gethostname()
            LOG.info("NFS-Ganesha server's location defaulted to driver's "
                     "hostname: %s", self.ganesha_host)

        self.volume_client = kwargs.pop('volume_client')
        super(NFSProtocolHelper, self).__init__(execute, config_object,
                                                **kwargs)

    def get_export_locations(self, share, cephfs_volume):
        export_location = "{server_address}:{path}".format(
            server_address=self.ganesha_host,
            path=cephfs_volume['mount_path'])

        LOG.info("Calculated export location for share %(id)s: %(loc)s",
                 {"id": share['id'], "loc": export_location})

        return {
            'path': export_location,
            'is_admin_only': False,
            'metadata': {},
        }

    def _default_config_hook(self):
        """Callback to provide default export block."""
        dconf = super(NFSProtocolHelper, self)._default_config_hook()
        conf_dir = ganesha_utils.path_from(__file__, "conf")
        ganesha_utils.patch(dconf, self._load_conf_dir(conf_dir))
        return dconf

    def _fsal_hook(self, base, share, access):
        """Callback to create FSAL subblock."""
        ceph_auth_id = ''.join(['ganesha-', share['id']])
        auth_result = self.volume_client.authorize(
            cephfs_share_path(share), ceph_auth_id, readonly=False,
            tenant_id=share['project_id'])
        # Restrict Ganesha server's access to only the CephFS subtree or path,
        # corresponding to the manila share, that is to be exported by making
        # Ganesha use Ceph auth IDs with path restricted capabilities to
        # communicate with CephFS.
        return {
            'Name': 'Ceph',
            'User_Id': ceph_auth_id,
            'Secret_Access_Key': auth_result['auth_key']
        }

    def _cleanup_fsal_hook(self, base, share, access):
        """Callback for FSAL specific cleanup after removing an export."""
        ceph_auth_id = ''.join(['ganesha-', share['id']])
        self.volume_client.deauthorize(cephfs_share_path(share),
                                       ceph_auth_id)

    def _get_export_path(self, share):
        """Callback to provide export path."""
        volume_path = cephfs_share_path(share)
        return self.volume_client._get_path(volume_path)

    def _get_export_pseudo_path(self, share):
        """Callback to provide pseudo path."""
        volume_path = cephfs_share_path(share)
        return self.volume_client._get_path(volume_path)


@ -120,6 +120,10 @@ class GaneshaNASHelper(NASHelperBase):
"""Subclass this to create FSAL block."""
return {}
def _cleanup_fsal_hook(self, base_path, share, access):
"""Callback for FSAL specific cleanup after removing an export."""
pass
def _allow_access(self, base_path, share, access):
"""Allow access to the share."""
if access['access_type'] != 'ip':
@ -239,3 +243,4 @@ class GaneshaNASHelper2(GaneshaNASHelper):
else:
# No clients have access to the share. Remove export.
self.ganesha.remove_export(share['name'])
self._cleanup_fsal_hook(None, share, None)


@ -141,6 +141,11 @@ def _dump_to_conf(confdict, out=sys.stdout, indent=0):
out.write("{\n")
_dump_to_conf(item, out, indent + 1)
out.write(' ' * (indent * IWIDTH) + '}\n')
# The 'CLIENTS' Ganesha string option is an exception in that its
# string value can't be enclosed within quotes as can be done for
# other string options in a valid Ganesha conf file.
elif k.upper() == 'CLIENTS':
out.write(' ' * (indent * IWIDTH) + k + ' = ' + v + ';')
else:
out.write(' ' * (indent * IWIDTH) + k + ' ')
out.write('= ')
@ -149,10 +154,7 @@ def _dump_to_conf(confdict, out=sys.stdout, indent=0):
out.write('\n')
else:
dj = jsonutils.dumps(confdict)
out.write(dj)
def parseconf(conf):


@ -63,6 +63,7 @@ class MockVolumeClientModule(object):
self.create_volume = mock.Mock(return_value={
"mount_path": "/foo/bar"
})
self._get_path = mock.Mock(return_value='/foo/bar')
self.get_mon_addrs = mock.Mock(return_value=["1.2.3.4", "5.6.7.8"])
self.get_authorized_ids = mock.Mock(
return_value=[('eve', 'rw')])
@ -87,6 +88,7 @@ class CephFSDriverTestCase(test.TestCase):
def setUp(self):
super(CephFSDriverTestCase, self).setUp()
self._execute = mock.Mock()
self.fake_conf = configuration.Configuration(None)
self._context = context.get_admin_context()
self._share = fake_share.fake_share(share_proto='CEPHFS')
@ -99,20 +101,32 @@ class CephFSDriverTestCase(test.TestCase):
self.mock_object(driver, "ceph_module_found", True)
self.mock_object(driver, "cephfs_share_path")
self.mock_object(driver, 'NativeProtocolHelper')
self.mock_object(driver, 'NFSProtocolHelper')
self._driver = (
driver.CephFSDriver(execute=self._execute,
configuration=self.fake_conf))
self._driver.protocol_helper = mock.Mock()
self.mock_object(share_types, 'get_share_type_extra_specs',
mock.Mock(return_value={}))
@ddt.data('cephfs', 'nfs')
def test_do_setup(self, protocol_helper):
self._driver.configuration.cephfs_protocol_helper_type = (
protocol_helper)
self._driver.do_setup(self._context)
if protocol_helper == 'cephfs':
driver.NativeProtocolHelper.assert_called_once_with(
self._execute, self._driver.configuration,
volume_client=self._driver._volume_client)
else:
driver.NFSProtocolHelper.assert_called_once_with(
self._execute, self._driver.configuration,
volume_client=self._driver._volume_client)
self._driver.protocol_helper.init_helper.assert_called_once_with()
def test_create_share(self):
@ -507,3 +521,156 @@ class NativeProtocolHelperTestCase(test.TestCase):
self.assertFalse(vc.get_authorized_ids.called)
vc.authorize.assert_called_once_with(
driver.cephfs_share_path(self._share), "alice")
@ddt.ddt
class NFSProtocolHelperTestCase(test.TestCase):
def setUp(self):
super(NFSProtocolHelperTestCase, self).setUp()
self._execute = mock.Mock()
self._share = fake_share.fake_share(share_proto='NFS')
self._volume_client = MockVolumeClientModule.CephFSVolumeClient()
self.fake_conf = configuration.Configuration(None)
self.fake_conf.set_default('cephfs_ganesha_server_ip',
'fakeip')
self.mock_object(driver, "cephfs_share_path",
mock.Mock(return_value='fakevolumepath'))
self.mock_object(driver.ganesha_utils, 'SSHExecutor')
self.mock_object(driver.ganesha_utils, 'RootExecutor')
self.mock_object(driver.socket, 'gethostname')
self._nfs_helper = driver.NFSProtocolHelper(
self._execute,
self.fake_conf,
volume_client=self._volume_client)
@ddt.data(False, True)
def test_init_executor_type(self, ganesha_server_is_remote):
fake_conf = configuration.Configuration(None)
conf_args_list = [
('cephfs_ganesha_server_is_remote', ganesha_server_is_remote),
('cephfs_ganesha_server_ip', 'fakeip'),
('cephfs_ganesha_server_username', 'fake_username'),
('cephfs_ganesha_server_password', 'fakepwd'),
('cephfs_ganesha_path_to_private_key', 'fakepathtokey')]
for args in conf_args_list:
fake_conf.set_default(*args)
driver.NFSProtocolHelper(
self._execute,
fake_conf,
volume_client=MockVolumeClientModule.CephFSVolumeClient()
)
if ganesha_server_is_remote:
driver.ganesha_utils.SSHExecutor.assert_has_calls(
[mock.call('fakeip', 22, None, 'fake_username',
password='fakepwd',
privatekey='fakepathtokey')])
else:
driver.ganesha_utils.RootExecutor.assert_has_calls(
[mock.call(self._execute)])
@ddt.data('fakeip', None)
def test_init_identify_local_host(self, ganesha_server_ip):
self.mock_object(driver.LOG, 'info')
fake_conf = configuration.Configuration(None)
conf_args_list = [
('cephfs_ganesha_server_ip', ganesha_server_ip),
('cephfs_ganesha_server_username', 'fake_username'),
('cephfs_ganesha_server_password', 'fakepwd'),
('cephfs_ganesha_path_to_private_key', 'fakepathtokey')]
for args in conf_args_list:
fake_conf.set_default(*args)
driver.NFSProtocolHelper(
self._execute,
fake_conf,
volume_client=MockVolumeClientModule.CephFSVolumeClient()
)
driver.ganesha_utils.RootExecutor.assert_has_calls(
[mock.call(self._execute)])
if ganesha_server_ip:
self.assertFalse(driver.socket.gethostname.called)
self.assertFalse(driver.LOG.info.called)
else:
driver.socket.gethostname.assert_called_once_with()
driver.LOG.info.assert_called_once()
def test_get_export_locations(self):
cephfs_volume = {"mount_path": "/foo/bar"}
ret = self._nfs_helper.get_export_locations(self._share,
cephfs_volume)
self.assertEqual(
{
'path': 'fakeip:/foo/bar',
'is_admin_only': False,
'metadata': {}
}, ret)
def test_default_config_hook(self):
fake_conf_dict = {'key': 'value1'}
self.mock_object(driver.ganesha.GaneshaNASHelper,
'_default_config_hook',
mock.Mock(return_value={}))
self.mock_object(driver.ganesha_utils, 'path_from',
mock.Mock(return_value='/fakedir/cephfs/conf'))
self.mock_object(self._nfs_helper, '_load_conf_dir',
mock.Mock(return_value=fake_conf_dict))
ret = self._nfs_helper._default_config_hook()
(driver.ganesha.GaneshaNASHelper._default_config_hook.
assert_called_once_with())
driver.ganesha_utils.path_from.assert_called_once_with(
driver.__file__, 'conf')
self._nfs_helper._load_conf_dir.assert_called_once_with(
'/fakedir/cephfs/conf')
self.assertEqual(fake_conf_dict, ret)
def test_fsal_hook(self):
expected_ret = {
'Name': 'Ceph',
'User_Id': 'ganesha-fakeid',
'Secret_Access_Key': 'fakekey'
}
self.mock_object(self._volume_client, 'authorize',
mock.Mock(return_value={'auth_key': 'fakekey'}))
ret = self._nfs_helper._fsal_hook(None, self._share, None)
driver.cephfs_share_path.assert_called_once_with(self._share)
self._volume_client.authorize.assert_called_once_with(
'fakevolumepath', 'ganesha-fakeid', readonly=False,
tenant_id='fake_project_uuid')
self.assertEqual(expected_ret, ret)
def test_cleanup_fsal_hook(self):
self.mock_object(self._volume_client, 'deauthorize')
ret = self._nfs_helper._cleanup_fsal_hook(None, self._share, None)
driver.cephfs_share_path.assert_called_once_with(self._share)
self._volume_client.deauthorize.assert_called_once_with(
'fakevolumepath', 'ganesha-fakeid')
self.assertIsNone(ret)
def test_get_export_path(self):
ret = self._nfs_helper._get_export_path(self._share)
driver.cephfs_share_path.assert_called_once_with(self._share)
self._volume_client._get_path.assert_called_once_with(
'fakevolumepath')
self.assertEqual('/foo/bar', ret)
def test_get_export_pseudo_path(self):
ret = self._nfs_helper._get_export_pseudo_path(self._share)
driver.cephfs_share_path.assert_called_once_with(self._share)
self._volume_client._get_path.assert_called_once_with(
'fakevolumepath')
self.assertEqual('/foo/bar', ret)


@ -71,11 +71,11 @@ class GaneshaConfigTests(test.TestCase):
ref_ganesha_cnf = """EXPORT {
CLIENT {
Clients = ip1;
Access_Level = "ro";
}
CLIENT {
Clients = ip2;
Access_Level = "rw";
}
Export_Id = 101;
}"""


@ -220,6 +220,11 @@ class GaneshaNASHelperTestCase(test.TestCase):
ret = self._helper._fsal_hook('/fakepath', self.share, self.access)
self.assertEqual({}, ret)
def test_cleanup_fsal_hook(self):
ret = self._helper._cleanup_fsal_hook('/fakepath', self.share,
self.access)
self.assertIsNone(ret)
def test_allow_access(self):
mock_ganesha_utils_patch = mock.Mock()
@ -403,6 +408,7 @@ class GaneshaNASHelper2TestCase(test.TestCase):
mock_gh = self._helper.ganesha
self.mock_object(mock_gh, '_check_export_file_exists',
mock.Mock(return_value=True))
self.mock_object(self._helper, '_cleanup_fsal_hook')
client = {'Access_Type': 'ro', 'Clients': '10.0.0.1'}
self.mock_object(
mock_gh, '_read_export_file',
@ -415,6 +421,8 @@ class GaneshaNASHelper2TestCase(test.TestCase):
mock_gh._check_export_file_exists.assert_called_once_with('fakename')
mock_gh.remove_export.assert_called_once_with('fakename')
self._helper._cleanup_fsal_hook.assert_called_once_with(
None, self.share, None)
self.assertFalse(mock_gh.add_export.called)
self.assertFalse(mock_gh.update_export.called)
@ -423,6 +431,7 @@ class GaneshaNASHelper2TestCase(test.TestCase):
self.mock_object(mock_gh, '_check_export_file_exists',
mock.Mock(return_value=False))
self.mock_object(ganesha.LOG, 'warning')
self.mock_object(self._helper, '_cleanup_fsal_hook')
self._helper.update_access(
self._context, self.share, access_rules=[],
@ -433,3 +442,4 @@ class GaneshaNASHelper2TestCase(test.TestCase):
self.assertFalse(mock_gh.add_export.called)
self.assertFalse(mock_gh.update_export.called)
self.assertFalse(mock_gh.remove_export.called)
self.assertFalse(self._helper._cleanup_fsal_hook.called)


@ -0,0 +1,3 @@
---
features:
- Added NFS protocol support for shares backed by CephFS.