Implement the share backup feature for the NetApp driver.
NetApp SnapVault technology is used to create and restore
backups of NetApp ONTAP shares. The backup delete workflow
simply deletes the transferred snapshot from the destination
backup volume.
Depends-On: Ifb88ec096674ea8bc010c1c3f6dea1b51be3beaa
Change-Id: I5a4edbf547e7886fb4fa9c1bed90110a33f9bf3b
The NetApp driver has so far worked with FlexVol ONTAP volumes.
The driver does not support scaling FlexVol volumes beyond
100 TiB, a limit that falls short of the large namespaces these
containers are meant to handle. ONTAP FlexGroup volumes
eliminate that limitation, so this patch adds support for
provisioning shares as FlexGroup volumes in the NetApp driver.
FlexGroup provisioning is enabled by the new option
`netapp_enable_flexgroup`, which makes the driver report a single
pool representing all aggregates. The selection of aggregates on which
a FlexGroup share will reside is left to ONTAP. If administrators want
to control that selection through the Manila scheduler, they must specify
the set of aggregates that forms the FlexGroup pool in the new option
`netapp_flexgroup_pool`.
Each NetApp pool will now report the capability `netapp_flexgroup`,
indicating which type the pool is.
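As a sketch, a backend section enabling FlexGroup provisioning might look like the following (the section name and vserver are illustrative, and the exact value syntax for `netapp_flexgroup_pool` should be taken from the option's documentation):

```ini
[cmode_flexgroup]
share_driver = manila.share.drivers.netapp.common.NetAppDriver
driver_handles_share_servers = False
netapp_storage_family = ontap_cluster
netapp_vserver = manila_svm
netapp_enable_flexgroup = True
# Optional: name a FlexGroup pool and its member aggregates so the
# Manila scheduler, rather than ONTAP, controls aggregate selection.
# netapp_flexgroup_pool = <pool name and its member aggregates>
```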
The following operations are allowed with FlexGroup shares (DHSS
True/False and NFS/CIFS):
- Create/Delete share;
- Shrink/Extend share;
- Create/Delete snapshot;
- Revert to snapshot;
- Manage/Unmanage snapshots;
- Create from snapshot;
- Replication [1];
- Manage/Unmanage shares;
A backend with a FlexGroup pool configured drops consistent
snapshot support for all of its pools.
The driver FlexGroup support requires ONTAP version 9.8 or greater.
[1] FlexGroup is limited to one single replica for ONTAP version
lower than 9.9.1.
DocImpact
Depends-On: If525e97a5d456d6ddebb4bf9bc8ff6190c95a555
Depends-On: I646f782c3e2be5ac799254f08a248a22cb9e0358
Implements: bp netapp-flexgroup-support
Change-Id: I4f68a9bb33be85f9a22e0be4ccf673647e713459
Signed-off-by: Felipe Rodrigues <felipefuty01@gmail.com>
Drivers shouldn't have to report IPv6 support in two
places. Drivers that assert ipv6_implemented=True just
need to implement get_configured_ip_versions; IPv4
support is computed by the same method.
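A minimal sketch of what this looks like in a driver (everything here except the `ipv6_implemented` attribute and the `get_configured_ip_versions` method name is illustrative):

```python
import ipaddress


class ExampleDriver:
    """Sketch: one method is the single source of truth for IP support."""

    ipv6_implemented = True

    def __init__(self, export_ips):
        self._export_ips = export_ips

    def get_configured_ip_versions(self):
        # Derive the supported IP versions (4 and/or 6) from the
        # configured export addresses; both ipv4_support and
        # ipv6_support are computed from this one method.
        return sorted({ipaddress.ip_address(ip).version
                       for ip in self._export_ips})
```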
Closes-bug: #1734127
Change-Id: I382767918a65b91e99ac1e604304ad01fac332e6
Escape IPv6 exports correctly and set up the default export
policy to allow both IPv4 and IPv6.
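As a hedged illustration of the escaping half (the helper name and bracket convention below are assumptions, not the driver's actual code):

```python
def escape_export_address(host):
    # Sketch: IPv6 literals in export locations are bracketed so the
    # address's colons are not confused with the host:path separator,
    # e.g. [2001:db8::1]:/share_path vs 192.0.2.10:/share_path.
    if ':' in host and not host.startswith('['):
        return '[%s]' % host
    return host
```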
Partially implements: bp netapp-ontap-ipv6-support
Change-Id: I84437b140e2a9561cc4092683209800101f45815
Some of the available checks are disabled by default, like:
[H106] Don't put vim configuration in source files
[H203] Use assertIs(Not)None to check for None
[H904] Use ',' instead of '%': string interpolation should be
delayed so it is handled by the logging code, rather than
being done at the point of the logging call.
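For instance, the H904 rule distinguishes cases like the following (an illustrative snippet, not code from the tree):

```python
import logging

LOG = logging.getLogger(__name__)


def log_share_name(name):
    # H904: pass arguments to the logger with ',' so interpolation is
    # deferred to the logging code (and skipped when the level is off).
    LOG.debug('Processing share %s', name)       # preferred
    # LOG.debug('Processing share %s' % name)    # flagged by H904
```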
Change-Id: Ie985fcf78997a86d41e40eacbb4a5ace8592a348
ONTAP supports assigning QoS policy groups to storage
objects and workloads. [1]
Expose this functionality through the ONTAP manila
drivers (DHSS=True/False, NFS, CIFS).
The drivers will set the capability "qos" to True if the
configured credentials have access to create qos policy
groups on the configured ONTAP backend. When 'qos'
extra-spec is set in share types, scoped extra-specs can
be used to specify QoS ceiling values in iops or bps.
The drivers support the following QoS specs:
'netapp:maxiops', 'netapp:maxiopspergib', 'netapp:maxbps',
'netapp:maxbpspergib'. Policies are created on demand and
updated or removed as the corresponding shares are manipulated
through Manila.
[1] http://docs.netapp.com/ontap-9/index.jsp?topic=%2Fcom.netapp.doc.pow-perf-mon%2FGUID-38357C43-FB36-419D-B31F-6FD75B47254D.html
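A hedged sketch of how the per-GiB variants scale with share size (the helper name is illustrative; the driver's real logic lives in its QoS policy group code):

```python
def qos_ceiling_from_specs(extra_specs, share_size_gib):
    """Resolve the QoS ceiling implied by netapp:* scoped extra-specs.

    Absolute specs (netapp:maxiops, netapp:maxbps) apply as-is, while
    per-GiB specs (netapp:maxiopspergib, netapp:maxbpspergib) scale
    with the provisioned share size.
    """
    limits = {}
    if 'netapp:maxiops' in extra_specs:
        limits['iops'] = int(extra_specs['netapp:maxiops'])
    elif 'netapp:maxiopspergib' in extra_specs:
        limits['iops'] = int(extra_specs['netapp:maxiopspergib']) * share_size_gib
    if 'netapp:maxbps' in extra_specs:
        limits['bps'] = int(extra_specs['netapp:maxbps'])
    elif 'netapp:maxbpspergib' in extra_specs:
        limits['bps'] = int(extra_specs['netapp:maxbpspergib']) * share_size_gib
    return limits
```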
Implements: blueprint netapp-cdot-qos
Change-Id: I6f82c012ea60cfb1e9f82a696e2346ee95c60df3
The NetApp cDOT driver logs some info about OpenStack deployments
already, and more info is needed about the specific storage
resources managed by Manila.
Implements: blueprint netapp-cdot-enhanced-support-logging
Change-Id: I8e4f81b3f1291e3c88fc88ab71ac93a415990ee3
The NetApp cDOT driver now explicitly filters root aggregates
from the pools reported to the manila scheduler if the driver
is operating with cluster credentials.
Change-Id: I659edada559e50d2332790025c65fae265a27c3d
Closes-Bug: #1624526
An earlier bug (https://bugs.launchpad.net/manila/+bug/1259988
"NetApp cDOT driver should split clone from snapshot after
creation") led us to modify the NetApp cDOT driver to split cloned
shares off from their parents immediately upon creation. As
described in that bug report, the fix makes the source snapshot
deletable after the clone split is done. However, the more
significant negative consequence is that the storage efficiency
gains from having cloned the blocks are lost. We have had
complaints from users who expect to retain the storage efficiency
of cDOT snapshots and cloning.
The fix is to not start a clone split during the
create-from-snapshot workflow. Instead, if/when a request to
delete the locked snapshot is received, the driver should start
the clone split at that time and soft-delete the snapshot. The
driver already has logic for reaping soft-deleted objects, so it
is straightforward to also reap deleted snapshots as they become
un-busy.
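The revised flow can be sketched roughly as follows (the client object and its method names are stand-ins for the driver's real ONTAP API layer, not its actual interface):

```python
class SnapshotCleaner:
    """Sketch of deferring clone splits to snapshot-delete time."""

    def __init__(self, client):
        self._client = client
        self._soft_deleted = []

    def delete_snapshot(self, volume, snapshot):
        if self._client.snapshot_is_busy(volume, snapshot):
            # Snapshot is locked by a clone: start the split now and
            # soft-delete; the periodic reaper purges it once un-busy.
            for clone in self._client.get_clone_children(volume, snapshot):
                self._client.start_clone_split(clone)
            self._soft_deleted.append((volume, snapshot))
        else:
            self._client.delete_snapshot(volume, snapshot)

    def reap_soft_deleted(self):
        # Called periodically; deletes snapshots whose splits finished.
        remaining = []
        for volume, snapshot in self._soft_deleted:
            if self._client.snapshot_is_busy(volume, snapshot):
                remaining.append((volume, snapshot))
            else:
                self._client.delete_snapshot(volume, snapshot)
        self._soft_deleted = remaining
```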
Change-Id: I0f7ba8f76dce6f55c64e156b372317387d299fa6
Closes-Bug: #1554592
The admin network is needed by the share migration service.
This commit creates exactly one admin LIF for each cDOT
share server (DHSS=True) if an admin network is configured
in manila.conf.
Implements: blueprint netapp-cdot-admin-network
Depends-On: Ibb88c64ddd899c09cd148f398e21ac613be9f15b
Change-Id: I3c883b652f83115434ede89fc12d17aa962af86d
- Remove passing DB reference to drivers in __init__() method
- Remove db reference from Generic driver and service_instance
- Remove db reference from NetApp share driver
- Remove db reference from Glusterfs share driver
- Remove db reference from Glusterfs_Native share driver
- Remove db reference from Quobyte share driver
- Remove db reference from IBM GPFS driver
- Remove db reference from HDS_SOP driver
- Remove db reference from HDFSNative driver
- Remove db reference from fake driver
- Remove db reference from unit tests.
Change-Id: I74a636a8897caa6fc4af833c1568471fe1cb0987
If a share or share server is not created successfully,
sometimes Manila will not save the vserver info in the
share or share server object. In such cases, when asked
to delete such a share or share server, the cDOT driver
should not protest about the lack of vserver info but
instead should log a warning and not raise an exception
(which leaves the object in an error_deleting state).
This patch addresses a number of issues in the delete,
share-server-delete, and snapshot-delete workflows where
the cDOT driver could unnecessarily raise an exception
when it should merely do nothing and allow the workflow
to proceed.
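The pattern can be sketched as follows (the function signature and the `vserver_name` key are illustrative, not the driver's actual interface):

```python
import logging

LOG = logging.getLogger(__name__)


def delete_share_server(backend_details, teardown):
    # Sketch: if provisioning failed before the vserver name was
    # recorded, there is nothing to tear down, so warn and return
    # instead of raising (raising would strand the object in
    # error_deleting).
    vserver = (backend_details or {}).get('vserver_name')
    if not vserver:
        LOG.warning('Share server does not have a vserver recorded; '
                    'skipping teardown.')
        return
    teardown(vserver)
```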
Change-Id: I54cf96b8a24ac5272b37bce2f5118551504a1699
Closes-Bug: #1438893
The nfs-exportfs-* APIs that the NetApp cDOT driver currently uses
for NFS export management are 7-mode APIs which were left in for
backwards compatibility. It's not the recommended/supported way
to do NFS export management. We need to modify the driver to use
the supported set of export-policy-* and export-rule-* APIs.
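The supported flow can be sketched as below (the client methods are illustrative wrappers over the export-policy-* and export-rule-* APIs, not the driver's actual interface):

```python
def allow_access(client, volume_name, client_spec, permission):
    # Sketch: one export policy per volume; access rules become export
    # rules on that policy rather than 7-mode nfs-exportfs entries.
    policy = 'policy_%s' % volume_name
    client.create_export_policy(policy)                # export-policy-create
    client.add_export_rule(policy, client_spec,        # export-rule-create
                           permission)
    client.set_volume_export_policy(volume_name, policy)
    return policy
```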
Closes-Bug: #1370761
Closes-Bug: #1437509
Change-Id: I91347d27ba69d4a20fe73ce7e75f58c9818ea6d8
A common user error with the single-SVM cDOT driver is failing to
assign aggregates to the storage virtual machine (aka vserver) used
by the driver. The driver should fail fast if there are no available
aggregates for share provisioning.
We want the multi-SVM driver to fail fast as well, if only for
consistency and to alert an admin of a problem with the
netapp_aggregate_name_search_pattern regex in manila.conf.
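The check amounts to something like the following (a sketch under assumed names; the real driver raises its own exception type during setup):

```python
import re


def check_aggregates_available(aggregate_names, search_pattern=None):
    # Sketch: fail fast at startup if no aggregates are usable, either
    # because none are assigned to the vserver (single-SVM mode) or
    # because netapp_aggregate_name_search_pattern matches nothing.
    if search_pattern:
        aggregate_names = [name for name in aggregate_names
                           if re.match(search_pattern, name)]
    if not aggregate_names:
        raise RuntimeError('No aggregates are available for share '
                           'provisioning; check the SVM aggregate list '
                           'and netapp_aggregate_name_search_pattern.')
    return aggregate_names
```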
Change-Id: I2a2015bd07b68177b0a5e7ac322ffa50b3b41c7d
Closes-Bug: #1433871
The extant Manila driver for cDOT supports multi-svm mode only
(i.e. driver_handles_share_servers = True). This commit adds
a single-svm cDOT driver (driver_handles_share_servers = False)
for simpler operation without network plug-ins. To work in this
mode the storage admin must pre-create a vserver with the desired
LIFs and protocols configured, and specify the vserver in
the config variable 'netapp_vserver'.
To configure this driver, use a config like the following:
[cmode]
share_driver = manila.share.drivers.netapp.common.NetAppDriver
driver_handles_share_servers = False
netapp_storage_family = ontap_cluster
netapp_server_hostname = 192.168.228.42
netapp_login = admin
netapp_password = cluster3
netapp_server_port = 443
netapp_transport_type = https
netapp_aggregate_name_search_pattern = manila
netapp_vserver = manila_svm
Implements: blueprint netapp-cdot-driver-without-handling-share-servers
Change-Id: I0cfafe3d64a9a9b52f1eca9f28a0e937b3b84f44