NetApp driver changes to accommodate a human-readable
share location. The export path is updated with the
human-friendly value if present; otherwise the share ID is used.
partially-implements: bp human-readable-export-locations
Depends-On: I72ac7e24ddd4330d76cafd5e7f78bac2b0174883
Change-Id: I2f5bfdbc9d0458c7b9198a3eb94b3b095d5b5e04
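The selection rule above can be sketched as follows (the function name and path shape are illustrative, not the driver's actual helpers):

```python
def build_export_path(human_friendly_name, share_id):
    # Prefer the human-friendly value when the share has one;
    # otherwise fall back to the share ID (previous behavior).
    return '/%s' % (human_friendly_name or share_id)
```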
Implement the share backup feature for the NetApp driver.
NetApp SnapVault technology is used to create and restore
backups of NetApp ONTAP shares. The backup delete workflow
simply deletes the transferred snapshot from the destination
backup volume.
Depends-On: Ifb88ec096674ea8bc010c1c3f6dea1b51be3beaa
Change-Id: I5a4edbf547e7886fb4fa9c1bed90110a33f9bf3b
If a share created from a snapshot is deleted immediately after
creation while a clone split operation is in progress, the delete
call fails. Fix this issue by first stopping the clone split job and
then continuing with the deletion.
Closes-bug: #1960239
Change-Id: If9844b3da70cec427c6260ee239c8c6131ed77ed
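A minimal sketch of the fixed deletion flow; the client methods here are hypothetical stand-ins for the driver's ONTAP client calls:

```python
def delete_cloned_share(client, volume_name):
    # If a clone split job is still running, the volume is busy and
    # a direct delete fails; stop the split first, then delete.
    if client.clone_split_in_progress(volume_name):
        client.stop_clone_split(volume_name)
    client.delete_volume(volume_name)
```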
Add the new scheduler weigher NetAppAIQWeigher, which relies on
the NetApp Active IQ software to weigh the hosts. It only
works with deployments whose hosts are all NetApp backends.
It also adds a new NetApp-specific pool attribute,
``netapp_cluster_name``, that contains the name
of the cluster where the pool is located.
Implements: netapp-active-iq-scheduler-weigher
Signed-off-by: Felipe Rodrigues <felipefuty01@gmail.com>
Change-Id: I36b08066545afdaa37e053eee319bc9cd489efdc
When a volume is created in NetApp ONTAP, it has a few autosize
attributes that are set by default. The values of the attributes
are defined according to the volume type (DP/RW).
During the replica promotion, the types are swapped between source
and destination volumes, but the autosize values were not being
updated. This patch fixes this behavior, calling an autosize reset
after promoting the replica.
Closes-Bug: #1957075
Change-Id: I9a4e5763927b7585a8fbd6b0004d6a123dcd7fae
Changes were made in the create_share_from_snapshot
method so that a scoped account, which lacks the required
permissions, no longer needs to retrieve the source and
destination cluster names.
Closes-Bug: #1922512
Change-Id: Ib36c81c213a374a918378854ce0a89ce70acf1d0
Asynchronous SnapMirror schedules are set using the netapp config
option 'netapp_snapmirror_schedule'. The delta for determining
whether a replica is in-sync or out-of-sync is updated to twice the
schedule time in seconds. Also, not only new SnapMirror
relationships but also existing ones should have a schedule
according to the current 'netapp_snapmirror_schedule' config.
Closes-bug: #1996859
Depends-On: I0390f82dfdc130d49e3af6928996dd730e3cf69f
Change-Id: Ifbe0575f6c359929344763666e4d93d8c6084e83
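The in-sync window described above can be sketched like this (assuming the schedule interval has already been converted to seconds):

```python
def replica_sync_state(seconds_since_last_transfer, schedule_seconds):
    # A replica is considered in-sync while the last successful
    # transfer finished within twice the schedule interval.
    if seconds_since_last_transfer <= 2 * schedule_seconds:
        return 'in_sync'
    return 'out_of_sync'
```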
Migrating a CIFS share non-disruptively between different pools
changes the export location. When the non-disruptive migration
complete process is started, a new share and export location are
created. As a result, Manila finds a conflict between the old
export location and the new one.
This patch adds a condition to skip export location creation when
a CIFS migration is in progress, and also changes the way the
export location is created: instead of building the export path
from the share name, the new one is taken from the backend. The fix
applies only to ZAPI API calls.
Change-Id: I1bb888a0b644f0b071816d275d464c4dd27125a7
Co-authored-by: Lucas Oliveira <lucasmoliveira059@gmail.com>
Closes-bug: #1920937
Replica promote retains unneeded snapshots from previous
SnapMirror relationships, increasing the amount of space
consumed by snapshots in the storage system.
This patch fixes the issue by calling the snapmirror release
operation after resync completes its transferring, which allows
the SnapMirror software to properly cleanup unneeded resources.
Closes-Bug: #1982808
Change-Id: I516fb3575e30d18d971d6a1b7f3b9ad7120c3bbd
Volumes may be busy during a split operation, so other actions,
e.g. applying snapdir visibility or setting the volume size, would fail.
Closes-Bug: #2007970
Change-Id: I3e36f77f4e46c90af8445601e10eadf9c55ed5f6
As a result of NetApp ONTAP's adoption of a REST API, this patch
adds the basic structure to migrate from the ZAPI client to REST
incrementally.
To avoid adding bugs to the pre-existing ZAPI client and to ease
backporting the new client to older releases, the new REST
implementation is completely new code on the client layer: new REST
client and REST API classes. The driver layer should be agnostic to
this transition; the interfaces the driver calls remain the same as
before.
The REST client contains a fallback mechanism, which calls the
ZAPI client when an operation is not implemented yet. This first
patch only implements the get-ONTAP-version operation using
REST; all other operations keep working through ZAPI.
The REST client only works with ONTAP 9.11.1 and later.
A new configuration `netapp_use_legacy_client` is added to enable
the operator to switch between new REST client implementation and the
old ZAPI implementation. The default is `True`, given that the REST
client is still experimental.
partially-implements: bp netapp-ontap-rest-api-client
Change-Id: I0bbc7609df66b72e9f8e26f4abb8092ae68cbe63
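The fallback mechanism can be sketched with a delegating wrapper; the class and method names below are illustrative, not the driver's actual classes:

```python
class ZAPIClient:
    def create_volume(self, name):
        return 'zapi:create_volume:%s' % name


class RESTClient:
    """Sketch of the fallback: any operation without a REST
    implementation is delegated to the wrapped ZAPI client."""

    def __init__(self, zapi_client):
        self._zapi = zapi_client

    def get_ontap_version(self):
        # The one operation implemented in REST in this first patch.
        return 'rest:get_ontap_version'

    def __getattr__(self, name):
        # Invoked only for attributes missing on the REST client,
        # i.e. operations not yet migrated to REST.
        return getattr(self._zapi, name)
```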
Currently netapp_snapmirror_quiesce_timeout is a replica promote
option that can only be set by the operator. There are scenarios
where a high timeout value set by the operator does not fit well,
e.g. a disaster strikes and the user wants to perform an unplanned
failover quickly. A new option 'quiesce_wait_time' is added to the
share replica promote API, which allows using the specified wait
time (in seconds) instead of the config option.
Closes-bug: #2000171
Change-Id: Ib02063ee8b82f7374cd89f90e7f24a845c6c7cd7
Introduce some extra safety with an additional guard.
Losing a snapshot is never an option; this justifies an
additional API call to the NetApp storage back end.
Change-Id: Ibc21b6c72d76a3a804f67e66e7604b3d0be4373f
Related-Bug: #1971710
A managed snapshot has its name changed. When the volume is
reverted to the snapshot, the name is reverted to the old one as
well. As a result, Manila loses the snapshot reference, causing
some issues.
This patch removes the code that renames the snapshot during
management, so the snapshot name is kept as informed, saving it
to the provider location field.
Closes-bug: #1936648
Change-Id: Ib1a928d453de80d6841524cdb86c4708a5b0dbdf
In order to determine the replica state from a SnapMirror, in
addition to the existing check of `last-transfer-end-timestamp`,
new checks of `last-transfer-size` and `last-transfer-error` are
added. A new config option
`netapp_snapmirror_last_transfer_size_limit` is added with a
default value of 1 MB. A last-transfer-size above this value, or
the presence of any last-transfer-error, marks the replica as
out_of_sync.
Closes-bug: #1989175
Change-Id: I6d038244493583cc943063b50d731b8c1ef5ed28
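A sketch of the extended check, assuming the transfer attributes have already been fetched from the SnapMirror status:

```python
DEFAULT_LAST_TRANSFER_SIZE_LIMIT = 1024 * 1024  # 1 MB default

def snapmirror_in_sync(last_transfer_error, last_transfer_size,
                       size_limit=DEFAULT_LAST_TRANSFER_SIZE_LIMIT):
    # Any transfer error, or a last transfer larger than the
    # configured limit, marks the replica as out_of_sync.
    if last_transfer_error:
        return False
    return last_transfer_size <= size_limit
```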
The NetApp driver is not reporting the home state of the aggregate
pools. This information is useful during maintenance tasks, since
shares cannot be created on an aggregate that is not home.
This patch adds to the report netapp capabilities the boolean
`netapp_is_home`.
Closes-Bug: #1927823
Change-Id: I8e98541d8e457e9e4609410853b50d6156465f61
The 'reserved_share_extend_percentage' backend config option allows
Manila to apply a different reservation percentage to the share
extend operation. With this option, once the existing
'reserved_share_percentage' limit is hit, users cannot create new
shares but can still extend existing ones.
DocImpact
Closes-Bug: #1961087
Change-Id: I000a7f530569ff80495b1df62a91981dc5865023
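The effect can be illustrated with a generic capacity check (the function and numbers are illustrative, not the scheduler's actual code):

```python
def fits_reservation(free_gb, total_gb, requested_gb, reserved_pct):
    # The request must leave at least the reserved fraction of the
    # pool untouched.
    reserved_gb = total_gb * reserved_pct / 100.0
    return free_gb - requested_gb >= reserved_gb

# With total=100 GB, free=55 GB and a 10 GB request:
# reserved_share_percentage=50 rejects a new share, while a lower
# reserved_share_extend_percentage=30 still allows an extension.
```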
This patch adds support to multiple subnets in a same network
segment (and AZ). All subnets assigned to a share must have the
same VLAN id, otherwise the ONTAP driver will fail on share server
creation.
Depends-On: I7de9de4ae509182e9494bba604979cce03acceec
Implements: blueprint ontap-multiple-subnets-same-segment-id
Change-Id: If5db09627a2a9a98951a972e15b8af0857598d1e
Python 2 is no longer supported, so in this patch
we remove the usage of six (the py2/py3
compatibility library) in favor of py3 syntax.
Change-Id: I3ddfad568a1b578bee23a6d1a96de9551e336bb4
The NetApp driver has been working with FlexVol ONTAP volumes.
The driver does not support scaling FlexVol volumes beyond
100 TiB, a theoretical limit that falls short of the large
namespaces these containers were meant to handle. ONTAP's
FlexGroup volumes eliminate such limitations, so support for
provisioning shares as FlexGroup was added to the NetApp driver.
FlexGroup provisioning is enabled by the new option
`netapp_enable_flexgroup`, which makes the driver report a single
pool representing all aggregates. The selection of the aggregates
on which the FlexGroup share will reside is up to ONTAP. If the
administrator wants to control that selection through the Manila
scheduler, they must set the aggregates that form the FlexGroup
pool in the new option `netapp_flexgroup_pool`.
Each NetApp pool now reports the capability `netapp_flexgroup`,
informing which type the pool is.
The following operations are allowed with FlexGroup shares (DHSS
True/False and NFS/CIFS):
- Create/Delete share;
- Shrink/Extend share;
- Create/Delete snapshot;
- Revert to snapshot;
- Manage/Unmanage snapshots;
- Create from snapshot;
- Replication [1];
- Manage/Unmanage shares;
A backend with one FlexGroup pool configured drops consistent
snapshot support for all pools.
The driver FlexGroup support requires ONTAP version 9.8 or greater.
[1] FlexGroup is limited to one single replica for ONTAP version
lower than 9.9.1.
DocImpact
Depends-On: If525e97a5d456d6ddebb4bf9bc8ff6190c95a555
Depends-On: I646f782c3e2be5ac799254f08a248a22cb9e0358
Implements: bp netapp-flexgroup-support
Change-Id: I4f68a9bb33be85f9a22e0be4ccf673647e713459
Signed-off-by: Felipe Rodrigues <felipefuty01@gmail.com>
Implements share server migration using a proper mechanism
provided by ONTAP. If the driver identifies that the ONTAP
version matches the version where this mechanism is available,
it will automatically choose to use it instead of SVM DR.
- Implemented new methods for migrating a share server using a
new mechanism provided by ONTAP, when both source and destination
clusters have versions >= 9.10. This new migration mechanism
supports nondisruptive migrations when the migration involves no
network changes.
- The NetApp driver no longer needs to create an actual share
server in the backend prior to the migration when SVM Migrate is
being used.
- The NetApp ONTAP driver can now reuse network allocations from
the source share server when no share network change was
identified.
Change-Id: Idf1581d933d11280287f6801fd4aa886a627f66f
Depends-On: I48bafd92fe7a4d4ae0bafd5bf1961dace56b6005
Implement the `readable` replication type for the NetApp driver.
The driver keeps supporting the `dr` type as well; its replication
type is a list containing both.
Replicas of the readable style are mounted, have their export
created, and have QoS applied. When promoting, the original active
replica does not need to be unmounted; the user just loses write
access.
The update access interface now applies rules for non-active
replicas that are readable.
Implements: bp netapp-readable-replica
Change-Id: Icc74eaecc75c3064715f91bebb994e93c0053663
Signed-off-by: Felipe Rodrigues <felipefuty01@gmail.com>
We are replacing all usages of the 'retrying' package with
'tenacity' as the author of retrying is not actively maintaining
the project. Tenacity is a fork of retrying, but has improved the
interface and extensibility (see [1] for more details). Our end
goal here is removing the retrying package from our requirements.
Tenacity provides the same functionality as retrying, but has the
following major differences to account for:
- Tenacity uses seconds rather than ms as retrying did
(the retry interface in manila exposed time in seconds as well)
- Tenacity has different kwargs for the decorator and
Retrying class itself.
- Tenacity has a different approach for retrying args by
using classes for its stop/wait/retry kwargs.
- By default tenacity raises a RetryError if a retried callable
times out; retrying raises the last exception from the callable.
Tenacity provides backwards compatibility here by offering
the 'reraise' kwarg - we are going to set this in the retry interface
by default.
- For retries that check a result, tenacity will raise if the
retried function raises, whereas retrying retried on all
exceptions - we haven't exposed this in the retry interface.
This patch updates all usages of retrying with tenacity.
Unit tests are added where applicable.
[1] https://github.com/jd/tenacity
Co-Authored-By: boden <bodenvmw@gmail.com>
Co-Authored-By: Goutham Pacha Ravi <gouthampravi@gmail.com>
Closes-Bug: #1635393
Change-Id: Ia0c3fa5cd82356a33becbf57444f3db5ffbb0dd0
Signed-off-by: Goutham Pacha Ravi <gouthampravi@gmail.com>
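A minimal stand-in sketching the two semantics the retry interface relies on here, seconds-based waits and `reraise=True`; real code uses the tenacity package itself, this toy decorator only mirrors the behavior described above:

```python
import time

def retry(attempts, wait_seconds, reraise=True):
    """Toy decorator mirroring the tenacity semantics described above:
    waits are expressed in seconds (retrying used milliseconds) and,
    with reraise=True, the last exception from the callable is raised
    instead of a wrapper RetryError."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            last_exc = None
            for attempt in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception as exc:
                    last_exc = exc
                    if attempt + 1 < attempts:
                        time.sleep(wait_seconds)
            if reraise:
                raise last_exc  # tenacity behavior with reraise=True
            raise RuntimeError('RetryError: %s' % last_exc)
        return wrapper
    return decorator
```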
This config option allows a different reservation percentage value,
mostly useful on platforms where shares can only be created from a
snapshot on the host where the snapshot was taken. Setting this
option lower than the existing 'reserved_share_percentage' allows
creating shares from a snapshot on the same host up to a higher
threshold, even when a regular (non-snapshot) share create fails.
If this config option is not set, shares created from a snapshot
use the reservation percentage set in 'reserved_share_percentage'.
This is useful for users who want to keep the same reservation
percentage for both regular and snapshot-based shares.
DocImpact
Closes-Bug: #1938060
Change-Id: I390da933fe92875e3c7ee40709eacacc030278dc
In order to optimize the NetApp ONTAP driver, this patch caches
the status of driver pools and reuses it for each share server,
given that pools are not separated by share server.
The option `netapp_cached_aggregates_status_lifetime` is added
to control the time during which the cached values are considered
valid.
Closes-Bug: #1900469
Change-Id: I14a059615fc29c7c173c035bb51d39e0bbb8b70a
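The lifetime-bounded cache can be sketched as follows; the refresh callable and clock handling here are illustrative, not the driver's actual implementation:

```python
class CachedAggregateStatus:
    """Sketch of a cache bounded by a configured lifetime, like the
    one controlled by `netapp_cached_aggregates_status_lifetime`."""

    def __init__(self, lifetime_seconds, refresh):
        self._lifetime = lifetime_seconds
        self._refresh = refresh  # fetches fresh status from the backend
        self._value = None
        self._stamp = None

    def get(self, now):
        # Refresh only when nothing is cached yet or the cached value
        # has outlived its configured lifetime.
        if self._stamp is None or now - self._stamp > self._lifetime:
            self._value = self._refresh()
            self._stamp = now
        return self._value
```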
This patch implements support for security service updates
for in-use share networks. It works with all three security
service types. For 'active_directory' and 'kerberos', updating
the 'domain' attribute isn't supported, since it might affect
user access to all related shares.
Change-Id: I8556e4e2e05deb9b116eacbd5afe2f7c5d77b44b
Depends-On: I129a794dfd2d179fa2b9a2fed050459d6f00b0de
Depends-On: I5fef50a17bc72ba66a3a9d6f786742bcb5745d7b
Implements: bp netapp-security-service-update
Co-Authored-By: Carlos Eduardo <ces.eduardo98@gmail.com>
Signed-off-by: Douglas Viroel <viroel@gmail.com>
This patch adds support for automated creation of FPolicy policies
and association to a share. The FPolicy configuration can be added using
the extra-specs 'netapp:fpolicy_extensions_to_include',
'netapp:fpolicy_extensions_to_exclude' and 'netapp:fpolicy_file_operations'.
Change-Id: I661de95bfb6f8e68b3a8c58663bb6055e9b809f6
Implements: bp netapp-fpolicy-support
Signed-off-by: Douglas Viroel <viroel@gmail.com>
This patch fixes a sqlalchemy object copy made in the driver
that can raise a 'DetachedInstanceError' exception, depending on
changes made in the DB model. A simple copy is made instead, to
have a new share object with the share server information inside it.
Change-Id: I5fef50a17bc72ba66a3a9d6f786742bcb5745d7b
Signed-off-by: Douglas Viroel <viroel@gmail.com>
The NetApp driver hard-codes the location of CA certificates for
SSL verification during HTTPS requests. This location may change
depending on the environment and/or backend.
This patch adds the `netapp_ssl_cert_path` configuration option,
enabling each backend to choose the directory with certificates of
trusted CAs, or the CA bundle. If set to a directory, it must have
been processed with the c_rehash utility supplied with OpenSSL. If
not set, Mozilla's carefully curated collection of root
certificates is used for validating the trustworthiness of SSL
certificates.
Closes-Bug: #1900191
Change-Id: Idbed4745104de26af99bb16e07c6890637dfcfd1
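A backend section using this option might look like the following sketch (the section name and path are illustrative):

```ini
[netapp_backend1]
# Path to a CA bundle file, or to a directory that has been
# processed with OpenSSL's c_rehash utility.
netapp_ssl_cert_path = /etc/manila/certs/netapp-ca-bundle.pem
```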
This patch fixes the access rules for NetApp promote replica when
using the CIFS protocol. When promoting a replica, the NetApp ONTAP
driver updated the access rules for the promoted CIFS share entity
before actually creating it, so the rules failed to apply.
The bug is fixed by switching the order of updating the access
rules and creating the promoted CIFS share entity.
Change-Id: I60e4057dc962d96cff57dea88587a28c2043b499
Closes-Bug: #1896949
This patch is a follow-up to the main change [1] that adds support
for Adaptive QoS policies that have been pre-created in the storage.
Improvements added in this patch:
- Fail earlier when using this configuration with DHSS=True mode
and for shares that support replication.
- Fail earlier if no cluster credentials were provided to configure
volumes with QoS.
- Add support for migration and manage share operations.
Closes-Bug: #1895361
[1] https://review.opendev.org/#/c/740532/
Change-Id: I210994b84548ed6857e338c8e1f41667fa844614
Signed-off-by: Douglas Viroel <viroel@gmail.com>
This change fixes the NetApp promote back issue when using CIFS
protocol. When promoting a replica, the NetApp ONTAP driver
attempts to create a new CIFS share entity (an access point as
defined in [1]) for the new active replica. This behavior
causes a failure since the storage identifies that a current
backend CIFS share with the same name exists, considering
that the referred replica was once the active one.
This issue is addressed by removing the related CIFS share
entity when the replica gets promoted.
[1] https://library.netapp.com/ecmdocs/ECMP1401220/html/GUID-1898D717-A510-4B3D-B2E3-CCDDD5BD0089.html
Closes-Bug: #1879368
Change-Id: Id9bdd5df0ff05ea08881dd2c83397f0a367d9945
Added support for Adaptive QoS policies that have been pre-created on
the storage system, with clustered ONTAP version 9.4 or higher. To use
this feature, configure a Manila share type with the extra-spec
"netapp:adaptive_qos_policy_group" and value set to the qos policy
group on the ONTAP storage system, for example:
netapp:adaptive_qos_policy_group=platform3
Note that a cluster scoped account must be used in the driver
configuration in order to use QoS in clustered ONTAP. Other notes:
- This only works for backends without share server management.
- This does not work for share replicas or share migration.
Partially-Implements: bp netapp-adaptive-qos-support
Change-Id: I3cc1d2fa2a8380ca925538cab5a3414ac2141d70
This patch adds support for share server migration between NetApp
ONTAP drivers. This operation is now supported for migrating a share
server and all its resources between two different clusters.
Share server migration relies on ONTAP features available only in
versions equal to or greater than ``9.4``. Also, in order to have share
server migration working across ONTAP clusters, they must be peered in
advance.
At this moment, share server migration doesn't support migrating a
share server without disrupting access to shares, since the export
locations are updated at the migration complete phase.
The driver doesn't support changing security services while changing the
destination share network. This functionality can be added in the future.
Co-Authored-By: Andre Beltrami <debeltrami@gmail.com>
Implements: bp netapp-share-server-migration
Depends-On: Ic0751027d2c3f1ef7ab0f7836baff3070a230cfd
Change-Id: Idfac890c034cf8cbb65abf685ab6cab5ef13a4b1
Signed-off-by: Douglas Viroel <viroel@gmail.com>