This patch contains the implementation of the core functions of the
NetApp drivers (NFS, iSCSI and FCP). The functions were migrated
from ZAPI to the REST API, but the ZAPI client was not removed since it is
still used as a fallback mechanism when a function does not have an
equivalent in the REST API.
In summary, the features implemented in this patch are related to:
> Periodic tasks - methods used during driver initialization and
executed periodically to get information about the volumes,
performance metrics and stats
> Basic volume operations - methods used to create, delete,
attach, detach and extend volumes
Co-authored-by: Fábio Oliveira <fabioaurelio1269@gmail.com>
Co-authored-by: Fernando Ferraz <sfernand@netapp.com>
Co-authored-by: Luisa Amaral <luisaa@netapp.com>
Co-authored-by: Matheus Andrade <matheus.handrade15@gmail.com>
Co-authored-by: Vinícius Angiolucci Reis <angiolucci@gmail.com>
Change-Id: I67eb7f6264cf5eea94c0a364082b530b9cdc1ae3
partially-implements: blueprint netapp-ontap-rest-api-client
This patch adds compatibility for the NetApp driver to allow custom and
OpenStack igroups to coexist with each other.
Current NetApp dataontap code for iSCSI and FC fails when detaching a
volume if we have a custom (as in not created by the Cinder driver)
igroup on the array.
This is because when we unmap a volume we look for an igroup created by
the Cinder driver, which has the "openstack-" prefix, but we won't find
it when there's a custom igroup because on attach we used the custom
igroup instead of creating an "openstack-" one.
Since the NetApp backend supports having multiple igroups with the same
initiators we will only use the OpenStack one when mapping volumes and
create one if it doesn't exist, even if there's already a custom one.
This patch also adds code so we can properly unmap volumes that have
been attached with old code and custom igroups.
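The igroup selection behavior described above can be sketched as follows (a minimal illustration with hypothetical helper names and dict-shaped igroups, not the driver's actual code):

```python
OPENSTACK_PREFIX = 'openstack-'

def select_igroup_for_map(igroups, initiator):
    """On attach: return the Cinder-owned igroup holding this initiator,
    or None, in which case the driver creates a new 'openstack-' igroup
    even if a custom igroup with the same initiator already exists."""
    for ig in igroups:
        if initiator in ig['initiators'] and ig['name'].startswith(OPENSTACK_PREFIX):
            return ig
    return None

def select_igroups_for_unmap(igroups, initiator):
    """On detach: unmap from every igroup holding the initiator, so
    volumes attached through a custom igroup by older code are also
    cleaned up properly."""
    return [ig for ig in igroups if initiator in ig['initiators']]
```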
Closes-Bug: #1697490
Change-Id: I9490229fb4f2852cd1d5e5c838b6ca59fe946358
This patch adds the option `netapp_driver_reports_provisioned_capacity`,
which makes the driver calculate the storage provisioned capacity by querying
volume sizes directly from the storage system and report it as the
pool capability `provisioned_capacity_gb`, instead of relying
on the default behavior of using scheduler and volume service
internal states as done by `allocated_capacity_gb`.
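A minimal cinder.conf sketch enabling the option (the backend section name and driver path are illustrative; the option name comes from this patch):

```ini
[netapp-backend]
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
# Query volume sizes from the array and report provisioned_capacity_gb
netapp_driver_reports_provisioned_capacity = True
```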
Implements: blueprint ontap-report-provisioned-capacity
Change-Id: I97625de865a63b0cf725f61d9d2ea3c44740d9e7
This patch adds support for revert to snapshot on the NFS, FC and
iSCSI drivers with FlexVol pools.
Adds a method on client_cmode to rename an NFS volume file using
the file-rename-file ZAPI call.
The revert steps are:
1. Create a clone volume file/LUN from snapshot.
2. Change the original volume file/LUN path to a temporary path.
3. Change clone file/LUN path to the original path.
4. Delete the volume file/LUN on temporary path.
If any step fails, the original volume file/LUN is preserved and
the clone file/LUN is deleted.
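The four steps and their cleanup behavior can be sketched as follows (a simplified illustration assuming a hypothetical client object with clone/rename/delete operations, not the real ZAPI/REST calls):

```python
def revert_to_snapshot(client, path, snapshot):
    """Revert the file/LUN at `path` to `snapshot`, preserving the
    original on any failure (paths derived here are illustrative)."""
    tmp_path = path + '.tmp'
    clone_path = path + '.clone'
    client.clone_from_snapshot(snapshot, clone_path)   # step 1
    try:
        client.rename(path, tmp_path)                  # step 2
        try:
            client.rename(clone_path, path)            # step 3
        except Exception:
            client.rename(tmp_path, path)              # restore original
            raise
        client.delete(tmp_path)                        # step 4
    except Exception:
        if client.exists(clone_path):
            client.delete(clone_path)                  # drop clone on failure
        raise
```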
For NFS, ONTAP file cloning is not supported on FlexGroup pools; in
that case the generic implementation performs the revert to
snapshot.
Implements: blueprint ontap-revert-to-snapshot
Co-Authored-By: Fabio Oliveira <fabioaurelio1269@gmail.com>
Change-Id: I347f0ee63d8d13ff181dd41a542a006b7c10b488
This patch adds support for storage assisted migration
(intra-cluster) to NetApp ONTAP drivers (iSCSI/FC/NFS), for the
following use cases:
1) Between pools in the same vserver and backend/stanza. This
operation is non-disruptive on the iSCSI and FC drivers.
2) Between pools in the same vserver but a different
backend/stanza. This operation is disruptive in all cases
and requires the volume to be in `available` status.
3) Between pools in a different vserver and a different
backend/stanza. This operation is disruptive in all cases
and requires the volume to be in `available` status.
Storage assisted migration is only supported within the same
ONTAP cluster. If a migration between two different clusters is
requested, the driver automatically falls back to host assisted
migration.
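The use cases above reduce to a small decision, sketched here with illustrative names (the dict keys and return strings are assumptions, not the driver's actual API):

```python
def choose_migration_strategy(src, dst, protocol):
    """src/dst: dicts with 'cluster', 'vserver' and 'backend' keys;
    protocol: 'iscsi', 'fc' or 'nfs'."""
    if src['cluster'] != dst['cluster']:
        # Only intra-cluster migration is storage assisted.
        return 'host-assisted'
    if src['vserver'] == dst['vserver'] and src['backend'] == dst['backend']:
        # Use case 1: same vserver and backend/stanza.
        if protocol in ('iscsi', 'fc'):
            return 'storage-assisted, non-disruptive'
        return 'storage-assisted, disruptive'
    # Use cases 2 and 3: volume must be in 'available' status.
    return 'storage-assisted, disruptive'
```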
Implements: blueprint ontap-storage-assisted-migration
Change-Id: Iaad87c80ae37b6c0fc5f788dc56f1f72c0ca07fa
Adds support for FlexGroup pools using the NFS storage mode.
A FlexGroup pool has a different view of the aggregate capabilities,
reporting them as a list of elements instead of a single element. They
are: `netapp_aggregate`, `netapp_raid_type`, `netapp_disk_type`
and `netapp_hybrid_aggregate`. The `netapp_aggregate_used_percent`
capability is an average of the used percent of all the FlexGroup's
aggregates.
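The list-valued capabilities and the averaged used-percent can be sketched like this (the per-aggregate dict keys are illustrative; the capability names come from the text above):

```python
def flexgroup_pool_capabilities(aggr_caps):
    """aggr_caps: list of per-aggregate capability dicts.
    FlexGroup pools report lists where FlexVol pools report scalars."""
    return {
        'netapp_aggregate': [a['name'] for a in aggr_caps],
        'netapp_raid_type': [a['raid_type'] for a in aggr_caps],
        'netapp_disk_type': [a['disk_type'] for a in aggr_caps],
        'netapp_hybrid_aggregate': [a['hybrid'] for a in aggr_caps],
        # Average of used percent across all member aggregates.
        'netapp_aggregate_used_percent':
            sum(a['used_percent'] for a in aggr_caps) / len(aggr_caps),
    }
```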
The `utilization` capability is not calculated for FlexGroup pools;
it is always set to the default value.
The driver cannot support consistency groups with volumes that are
on FlexGroup pools.
ONTAP does not support FlexClone for files inside a FlexGroup pool,
so the clone volume, create snapshot, and create volume from image
operations are implemented as in the generic NFS driver.
The driver with FlexGroup pools has snapshot support disabled, requiring
the `nfs_snapshot_support` option to be set to true in the backend
definition. This config is the same as for the generic NFS driver.
The driver image cache relies on FlexClone for files, so it does not apply
to volumes on FlexGroup pools. They can still use the core image cache, though.
The QoS minimum is only enabled for FlexGroup pool if all nodes of the
FlexGroup support it.
Implements: blueprint netapp-flexgroup-support
Change-Id: I507083c3e34e5a5cf1db9a3d1f6bef47bd51a9f8
NetApp ONTAP 9.4 or newer supports Adaptive QoS, which scales the
throughput according to the volume size.
In Victoria release, ONTAP Cinder driver added support for assigning
pre-existing Adaptive QoS policy groups to volumes.
This patch allows the dynamic creation of Adaptive QoS policies by reading
new back-end QoS specs.
Implements: blueprint netapp-ontap-dynamic-adaptive-qos
Change-Id: Ie35373c7b205ffa12c4bb710255f1baf7a836d9f
Currently, the ONTAP Cinder driver only supports the max (ceiling)
throughput QoS specs.
This patch adds support for min (floor) throughput QoS policy specs
``minIOPS`` and ``minIOPSperGiB``, which can be set individually or
along with the max throughput specs.
Added a new driver-specific capability called `netapp_qos_min_support`.
It is used during scheduling to filter the pools that support the
QoS minimum (floor) specs.
The feature is supported by ONTAP AFF version 9.2 or greater for
iSCSI/FCP and 9.3 or greater for NFS, by ONTAP Select Premium with
SSD, and by ONTAP C190 version 9.6 or greater.
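The scheduler-side filtering can be sketched as follows (a simplified illustration; the spec set and pool dict shape are assumptions, while the spec and capability names come from the text above):

```python
MIN_SPECS = {'minIOPS', 'minIOPSperGiB'}

def pool_supports_specs(qos_specs, pool_caps):
    """Return True if the pool can satisfy the given QoS specs.
    Pools lacking netapp_qos_min_support are filtered out whenever
    a floor (minimum) spec is requested."""
    if MIN_SPECS & set(qos_specs):
        return bool(pool_caps.get('netapp_qos_min_support'))
    return True
```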
Implements: blueprint netapp-ontap-min-throughput-qos
Implements: blueprint netapp-ontap-min-throughput-qos-capability
Co-Authored-By: Felipe Rodrigues <felipen@netapp.com>
Change-Id: Ic6579d459670fec4e5295e51c12fd807d980bb81
Added support for Adaptive QoS policies that have been pre-created on
the storage system, with the NetApp driver and clustered ONTAP version
9.4 or higher. To use this feature, configure a Cinder volume type with
the following extra-specs::
netapp:qos_policy_group=<name_of_precreated_aqos_policy>
netapp:qos_policy_group_is_adaptive="<is> True"
Note that a cluster scoped account must be used in the driver
configuration in order to use QoS in clustered ONTAP.
Partially-implements: bp netapp-adaptive-qos-support
Co-Authored-By: Lucio Seki <lucioseki@gmail.com>
Change-Id: Idcdbd042bb10f7381048fe4cb9fb870c3eebb6ce
Due to a characteristic of ONTAP devices, the volume extend
operation has a max resize size limited by the underlying LUN's
geometry, so support for extending online (attached) volumes was
disabled.
This patch fixes it by allowing a volume (attached or not)
to be extended up to 16TB, which is the max LUN size
supported by ONTAP.
NFS online_extend_support is still disabled due to a bug [0]
found on the generic implementation for NFS driver, which
ONTAP NFS driver relies on.
Closes-Bug: #1874134
[0] https://bugs.launchpad.net/cinder/+bug/1870367
Change-Id: I2812d71b23f27fe8be4e9a757094867f71b1afa2
test.py was previously not in the tests
directory. This means that downstream packagers
of Cinder have to specifically exclude it from
the main Cinder package (which does not typically
include unit tests).
Move it under the cinder/tests/unit/ dir where it
should be, to clean this up.
Change-Id: I65c50722f5990f540d84fa361b997302bbc935c5
This adds usage of the flake8-import-order extension to our flake8
checks to enforce consistency on our import ordering to follow the
overall OpenStack code guidelines.
Since we have now dropped Python 2, this also cleans up a few cases for
things that were third party libs but became part of the standard
library such as mock, which is now a standard part of unittest.
Some questions, in order of importance:
Q: Are you insane?
A: Potentially.
Q: Why should we touch all of these files?
A: This adds consistency to our imports. The extension makes sure that
all imports follow our published guidelines of having imports ordered
by standard lib, third party, and local. This will be a one time
churn, then we can ensure consistency over time.
Q: Why bother? This doesn't really matter.
A: I agree - but...
We have the issue that we have less people actively involved and less
time to perform thorough code reviews. This will make it objective and
automated to catch these kinds of issues.
But part of this, even though it may seem a little annoying, is for
making it easier for contributors. Right now, we may or may not notice
if something is following the guidelines or not. And we may or may not
comment in a review to ask for a contributor to make adjustments to
follow the guidelines.
But then further along into the review process, someone decides to be
thorough, and after the contributor feels like they've had to deal with
other change requests and things are in really good shape, they get a -1
on something mostly meaningless as far as the functionality of their
code. It can be a frustrating and disheartening thing.
I believe this actually helps avoid that by making it an objective thing
that they find out right away up front - either the code is following
the guidelines and everything is happy, or it's not and running local
jobs or the pep8 CI job will let them know right away and they can fix
it. No guessing on whether or not someone is going to take a stand on
following the guidelines or not.
This will also make it easier on the code reviewers. The more we can
automate, the more time we can spend in code reviews making sure the
logic of the change is correct and less time looking at trivial coding
and style things.
Q: Should we use our hacking extensions for this?
A: Hacking has had to keep back linter requirements for a long time now.
Current versions of the linters actually don't work with the way
we've been hooking into them for our hacking checks. We will likely
need to do away with those at some point so we can move on to the
current linter releases. This will help ensure we have something in
place when that time comes to make sure some checks are automated.
Q: Didn't you spend more time on this than the benefit we'll get from
it?
A: Yeah, probably.
Change-Id: Ic13ba238a4a45c6219f4de131cfe0366219d722f
Signed-off-by: Sean McGinnis <sean.mcginnis@gmail.com>
Much of our code renames this at import already --
just name it "volume_utils" for consistency, and
to make code that imports other modules named "utils"
less confusing.
Change-Id: I3cdf445ac9ab89b3b4c221ed2723835e09d48a53
This patch moves the netapp exception to the netapp/utils.py
for common use among all the netapp drivers.
Change-Id: I4bed2917bdb7d9929c0fbfa2dcfe4067cb3d963f
NetApp iSCSI drivers use discovery mode for multipathing, which means
that we'll fail to attach a volume if the IP provided for the discovery
is not accessible from the host.
Something similar would happen when using singlepath, as we are only
trying to connect to one target/portal.
This patch changes the NetApp drivers so they return the target_iqns,
target_portals, and target_luns parameters whenever there is more than
one option.
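A sketch of building such connection properties (an illustration following the os-brick convention of singular keys plus plural lists; the tuple shape is an assumption):

```python
def build_iscsi_connection_properties(targets):
    """targets: list of (iqn, portal, lun) tuples for the mapped LUN.
    Singular keys always present; plural lists added only when more
    than one target/portal is available, so multipath can try all."""
    iqns, portals, luns = zip(*targets)
    props = {'target_iqn': iqns[0],
             'target_portal': portals[0],
             'target_lun': luns[0]}
    if len(targets) > 1:
        props.update(target_iqns=list(iqns),
                     target_portals=list(portals),
                     target_luns=list(luns))
    return props
```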
Closes-Bug: #1806398
Change-Id: If6b5ad23c899032dbf7535ed91251cbfe54a223a
API Tracing is valuable when diagnosing problems or
unexpected behaviors with the ONTAP Cinder drivers.
However, turning it on may spam the logs and make it rather
hard to trace through specific API calls.
Added an API trace pattern filter in order to filter out
undesired API calls from the DEBUG log.
Change-Id: Ic0563848205a941cf8e779eee42e24ecdaf847dd
Cinder scheduler now checks backend capability online_extend_support
before performing an online volume extend operation. This patch makes
NetApp ONTAP iSCSI driver and FC driver report to the scheduler that
they don't support this feature, thus avoiding leaving a volume in
error_extending state after an online extending attempt.
Change-Id: Ifa248a0d3518aeffe2b6d12b064cbee9b8f48f94
Depends-On: I2c31b5c171574074a8fc7ba86f94f983fc9658f7
Related-Bug: #1765182
Improves NetApp cDOT block and file drivers' support for SVM scoped user
accounts. Features not supported for SVM scoped users include QoS,
aggregate usage reporting, and dedupe usage reporting.
Change-Id: I2b42622dbbb0f9f9f3eb9081cf67fb27e6c1a218
Closes-Bug: #1694579
The Unified Driver for NetApp Storage in Cinder
supports two families of ONTAP, 7mode and Clustered
Data ONTAP.
ONTAP 7 is now officially nearing the end-of-life
of product support. The deprecation notice [1]
for these drivers in Cinder was issued in the Newton
Release, so it is time to remove them from tree.
[1] http://lists.openstack.org/pipermail/openstack-operators/2016-November/011957.html
Implements: bp remove-netapp-7mode-drivers
Change-Id: I129ca060a89275ffd56481b8f64367b0d803cff5
The ONTAP drivers in Cinder ("7mode" and "cmode") cannot
reliably and efficiently track provisioned_capacity_gb as expected
by the Cinder scheduler.
The driver authors originally assumed that provisioned_capacity_gb
is consumed space on the backend. This results in miscalculation of
over subscription in the Cinder scheduler.
The fix adopted here is to remove this wrong reporting and rely on
calculation of the provisioned_capacity_gb in the scheduler.
Change-Id: Ic106dbcae8ceaac265b710756ab1874e445ca826
Closes-Bug: #1714209
Currently, two repeating tasks are initialized during driver
initialization to handle deletion of unneeded QoS policies on the
NetApp backend. This change removes one of the tasks.
Change-Id: Ibb9c28937f9e1912cf3293fcf4ca83c5d76d7f75
Closes-Bug: #1713774
Adding support for generic volume groups to NetApp cDOT's FC and iSCSI
drivers.
CG methods are moved to the 7-mode block driver. Announcement was made
in a previous release for the removal of the NetApp 7-mode drivers.
Generic groups will only be implemented for the NetApp cDOT drivers.
Change-Id: I8290734817d171f6797c88d0a0d1a21f8b5789db
Implements: blueprint netapp-add-generic-group-support-cdot
In python 3.6, escape sequences that are not
recognized in string literals issue DeprecationWarnings.
Convert these to raw strings.
Change-Id: I0d20a0dd27415eab0abf08ceb5be415ae1cf0ad0
failover_host is the interface for Cheesecake.
Currently it passes volumes to the failover_host
interface in the driver. If a backend supports both
Cheesecake and Tiramisu, it makes sense for the driver
to failover a group instead of individual volumes if a
volume is in a replication group. So this patch passes
groups to the failover_host interface in the driver in
addition to volumes so the driver can decide whether to
failover a replication group.
Change-Id: I9842eec1a50ffe65a9490e2ac0c00b468f18b30a
Partially-Implements: blueprint replication-cg
This fix avoids logging an exception when a user
chooses to use an SVM scoped account. The cDOT
driver requires cluster scoped privileges to
gather backend statistics, performance
counters, etc. These APIs are not available for
SVM scoped credentials.
Change-Id: If2e3bae98db225ff0cfc9e868eaaeef088135562
Closes-Bug: #1660870
This patch disables setting the multiattach flag to True in the driver
capabilities. Since we reworked the driver attach/detach API, we need to
ensure that detaching a shared volume works properly on the Nova side. A
shared flag might need to be returned in initialize_connection to support
multiattach, so for now we disable the capability.
Change-Id: I4f8229a4465d009e06de86da255f442020151113
mock_object method in TestCase class now accepts keyword arguments that
will be properly handled when no new object is provided, so we can now
simplify calls in our tests.
This patch changes calls like
self.mock_object(os.path, 'exists', mock.Mock(return_value=True))
into
self.mock_object(os.path, 'exists', return_value=True)
Change-Id: I904ab4956a081c0292e2c982320e5005b055d3d5
The NetApp DOT drivers log some info about OpenStack deployments
already, and more info is needed about the specific storage
resources managed by Cinder.
Implements: blueprint netapp-dot-enhanced-support-logging
Change-Id: Id8937a181ee7ab1b985b6aa4143f68026b02656d
The maximum amount of shared (deduplicated, cloned) data on
a Data ONTAP FlexVol (i.e. a Cinder pool) is 640TB. The only thing
more surprising about that number is that we have customers hitting
it. The symptom is that operations such as cloning Cinder volumes
fail because no more blocks may be shared. The fix is to report the
level of consumption to the scheduler for optional incorporation
into the filter & goodness functions, so that pools nearing the shared
block limit may be shielded from further provisioning requests.
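A cinder.conf sketch of shielding such pools via the scheduler's filter function (the backend section name and the exact capability name for shared-block consumption are assumptions here; only `filter_function` itself is a standard backend option):

```ini
[netapp-backend]
# Hypothetical capability name; skip pools nearing the 640TB
# shared-block limit.
filter_function = "capabilities.netapp_shared_blocks_used_percent < 90"
```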
Implements: blueprint netapp-cdot-report-shared-blocks-exhaustion
Change-Id: I01b7322f7ddb05ee5e28bcb1121a90a6ea307720
setUp and tearDown are automatically called around each
test case, so remove setUp and tearDown methods that do
nothing beyond calling super, to keep the code clean.
Change-Id: I80ae55cc36c473e8ed7fc7f4cbd946107bf187b4
Host level replication capability was added to the NetApp
cDOT Block and File drivers through
I87b92e76d0d5022e9be610b9e237b89417309c05.
However, the replication capability was not being reported
at the pool level.
Add the field at pool level allowing for compatibility with
volume types that may include the `replication_enabled`
extra-spec.
Change-Id: Ic735a527c609166884668c84c589da521769500b
Closes-Bug: #1615451
This fixes the issue of deleting temporary snapshots created during the
consistency group creation process. These temporary snapshots may not be
deleted if the system is under load and the temporary snapshot remains
in a "busy" state after the consistency group creation process is
otherwise complete.
This change also reduces lines of code by implementing a manager for the
creation of FixedIntervalLoopingCall instances in the ONTAP drivers.
This looping call manager also provides the ability to start all
registered looping calls after the driver has been properly
initialized. The looping call manager also makes it easy to ensure that
FixedIntervalLoopingCall instances are not instantiated in Unit Tests.
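The looping call manager idea can be sketched as follows (illustrative, not the driver's actual class; the scheduling callback abstracts FixedIntervalLoopingCall creation so unit tests never instantiate one):

```python
class LoopingCallManager:
    """Registers periodic tasks during driver init but only starts
    them once initialization completes. Unit tests simply never call
    start_tasks(), so no real looping call is created."""
    def __init__(self, schedule):
        # `schedule` is a callable(func, interval) that wraps func in
        # a FixedIntervalLoopingCall in the real driver.
        self._schedule = schedule
        self._tasks = []

    def register(self, func, interval):
        self._tasks.append((func, interval))

    def start_tasks(self):
        for func, interval in self._tasks:
            self._schedule(func, interval)
```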
Closes-Bug: #1596679
Change-Id: I13096a8c94a32e68814f81900032dbcc6a4a9806
Pools reported by the NFS and block drivers for cDOT and 7mode
now report multiattach as a capability. NetApp cDOT and 7mode
LUNs have always supported multi-attach, but this capability
was not previously reported.
Closes-Bug: #1612763
Change-Id: Ib7545438998b02fb7670df44a6486764c401c5f6
Add ability to failover a given host to a replication
target provided, or chosen from amongst those
configured.
DocImpact
Implements: blueprint netapp-cheesecake-replication-support
Co-Authored-By: Clinton Knight <cknight@netapp.com>
Change-Id: I87b92e76d0d5022e9be610b9e237b89417309c05
NetApp cDOT controllers can mix SSDs and spinning disks
in the same aggregate, where the SSDs are used for a
cache. This commit reports the hybrid aggregate attribute
to the scheduler for each pool for use in extra specs
matching. This commit also ensures that all aggregate
values reported to the Cinder scheduler correctly include
a vendor-specific prefix.
Implements: blueprint netapp-report-cdot-hybrid-aggrs
Change-Id: I8fbfbe3ffaad5fe89db03d2a4e567e9b35e59d5b
Clustered Data ONTAP can improve its clone scalability if
it can distinguish between a Cinder create_snapshot operation and
a Cinder create_volume_from_snapshot operation. This commit
provides that contextual hint to cDOT using a new parameter to
the file/LUN clone API.
Closes-Bug: #1596569
Change-Id: I4faad276dd4a778e172b5c136f78d9e3be2404c6
Customers have requested the NetApp cDOT drivers report aggregate
consumption metrics to the scheduler, so that custom filter and
goodness functions may be written to consider aggregate space.
Implements: blueprint netapp-report-aggregate-utilization
Change-Id: Ia43facb1c7e46b794e2cb2bb44b2174fe441e3f2
This commit continues the storage service catalog (SSC)
replacement epic. The new SSC code is added here, the
iSCSI/FC and NFS drivers are updated to use the new
capabilities library, the old SSC module and tests are
removed entirely, and all new or modified code is 100%
covered by unit tests.
Partially implements: blueprint replace-netapp-cdot-ssc-module
Change-Id: Iba688ac670f3e3671a4115c40cf34431e59989b1