Python 2 is no longer supported, so in this patch
set we remove the usage of six (the py2/py3
compatibility library) in favor of py3 syntax.
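As a rough illustration (not the actual hunks of this change),
typical six-to-py3 replacements look like:

```python
# Common six idioms and their pure-py3 equivalents -- illustrative
# examples, not the exact hunks of this change:
#   six.text_type        -> str
#   six.moves.urllib     -> urllib
#   @six.add_metaclass(M) -> class C(metaclass=M)

def to_text(value):
    """py3-only replacement for a six.text_type()-style conversion."""
    if isinstance(value, bytes):
        return value.decode('utf-8')
    return str(value)
```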
Change-Id: I3ddfad568a1b578bee23a6d1a96de9551e336bb4
1. After performing a snapshot clone of a GlusterFS volume,
the volume's status is 'Created', while the operation
"gluster volume set nfs.rpc-auth-reject '*'" requires the
volume to be in the 'Started' state.
2. The cloned volume needs its snapshot to be activated;
if the snapshot is already activated, the activation step must be skipped.
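The intended ordering can be sketched roughly like this (the helper
name and the snapshot-name derivation are made up for illustration;
the real logic lives in the glusterfs driver code):

```python
def prepare_cloned_volume(gluster_call, volume, snapshot_activated):
    """Sketch of the fixed flow: activate the snapshot only if needed,
    start the cloned volume ('Created' -> 'Started'), and only then set
    nfs.rpc-auth-reject, which requires a 'Started' volume.
    `gluster_call` stands in for the driver's gluster CLI wrapper."""
    calls = []

    def run(*args):
        calls.append(args)
        gluster_call(*args)

    if not snapshot_activated:  # skip activation if already activated
        run('snapshot', 'activate', volume + '-snap')
    run('volume', 'start', volume)
    run('volume', 'set', volume, 'nfs.rpc-auth-reject', '*')
    return calls
```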
Closes-Bug: #1922075
Change-Id: I304bf59b3f8c0d5b847078a5752bac8ac4f21690
Remove manila configuration options
and auth classes that were deprecated
before the Ussuri release.
Change-Id: I148225926cd249a0dd8d1f8c02b22ed06487f405
We don't need to re-initialize the volumes list
on deletion; it still makes sense to add a missing
volume to the list, per the reasoning given
in I14835f6c54376737b41cbf78c94908ea1befde15
Related-Bug: #1894362
Change-Id: I96d49f84122a34701328909c929ede4d66746911
When we have multiple share driver backends and a share instance is
created, 'self.gluster_used_vols' is only updated on the
current node. If the RPC request to delete the share instance is sent
to another node, its 'self.gluster_used_vols' will not contain the
GlusterFS volume we want to delete, so we need to
update 'self.gluster_used_vols' when deleting the instance.
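A minimal sketch of the idea (class and method names are illustrative,
not the actual driver code):

```python
class FakeDriver:
    """Sketch: keep self.gluster_used_vols consistent on whichever
    node handles the delete RPC (names are illustrative)."""

    def __init__(self):
        self.gluster_used_vols = set()

    def delete_share(self, vol):
        # The deleting node may never have seen this volume; add it
        # first so the cleanup path below can rely on its presence.
        self.gluster_used_vols.add(vol)
        # ... erase data, run cleanup on the volume ...
        self.gluster_used_vols.discard(vol)
```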
Change-Id: I14835f6c54376737b41cbf78c94908ea1befde15
Closes-Bug: #1894362
When we delete a share instance, in addition to erasing the data
in the share, we should disconnect the clients' mount points to
prevent further data from being written.
Closes-Bug: #1886010
Change-Id: I7a334fb895669cc807a288e6aefe62154a89a7e4
This patch enables the creation of a share from snapshot
specifying another pool or backend. In the scheduler, a
new filter and weigher were implemented in order to consider
this operation if the backend supports it. Also, a new
field called 'progress' was added to the share and share
instance. The 'progress' field indicates the completion
of the create-share-from-snapshot operation (as a percentage).
Finally, a new periodic task was added in order to constantly
check the share status.
Partially-implements: bp create-share-from-snapshot-in-another-pool-or-backend
DOCImpact
Change-Id: Iab13a0961eb4a387a502246e5d4b79bc9046e04b
Co-authored-by: carloss <ces.eduardo98@gmail.com>
Co-authored-by: dviroel <viroel@gmail.com>
Fix:
E731 do not assign a lambda expression, use a def
The lambdas are simply marked with noqa.
Also fix other problems found by hacking in the changed files.
Change-Id: I4e47670f5a96e61fba617e4cb9478958f7089711
Change http://review.gluster.org/14931,
debuting in GlusterFS 3.7.14, has changed the
XML output emitted by the gluster command
line interface. Here we implement parsing
for the new variant as well.
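The tolerant-parsing approach can be sketched like this (the two XML
layouts below are invented for illustration and are not the exact
pre/post-3.7.14 formats):

```python
import xml.etree.ElementTree as ET

def get_op_errno(xml_text):
    """Fetch opErrno whether it sits directly under the document root
    (one layout) or nested in a wrapper element (the other) -- the two
    layouts here are illustrative, not gluster's exact formats."""
    root = ET.fromstring(xml_text)
    node = root.find('opErrno')
    if node is None:
        node = root.find('.//opErrno')  # fall back to a full-tree search
    return None if node is None else int(node.text)

OLD = "<cliOutput><opErrno>0</opErrno></cliOutput>"
NEW = "<cliOutput><output><opErrno>0</opErrno></output></cliOutput>"
```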
Change-Id: Ia9f340f1d56c95d5ebf5577df6aae9d708a026c0
Closes-Bug: 1609858
In delete_share, if the private_storage entry of the share is
missing, recognize this as an indication of a botched
creation and return immediately so that the runtime
can go on with evicting the dangling share entry.
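Roughly (simplified, hypothetical signature):

```python
def delete_share(private_storage, share_id):
    """Sketch of the early-return fix (simplified signature; a plain
    dict stands in for the driver's private storage)."""
    entry = private_storage.get(share_id)
    if entry is None:
        # Botched creation: nothing was ever provisioned for this
        # share, so let the runtime evict the dangling entry.
        return None
    # ... tear down the backing volume described by `entry` ...
    return entry
```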
Change-Id: I76dabe0acc0b67ea2b03e77eb0743772ef25579d
Closes-bug: #1554290
This function is not specific to GlusterFS interaction.
Partially implements bp gluster-code-cleanup
Change-Id: I96ef68f13287d6654b65744df67880ab9deccb3f
GlusterFS has two kinds of options:
- regular ones, which form a hardcoded set, and whose names
are verified by "gluster volume get"
- user ones, whose names match user.* -- these are
arbitrarily named, are ignored by "gluster volume get" and
are listed in the "gluster volume info" output
So far we used "gluster volume info" universally, but that,
apart from being cumbersome for regular options, is also
incorrect, as it can't distinguish an unset option from
an undefined one (querying the former should be treated as OK,
querying the latter should be treated as an error).
- implement querying of regular options with "gluster volume
get" (accepting empty response)
- implement querying of user options with searching "gluster vol
info" data
- verify operations on the XML tree, making tacit XML layout
assumptions explicit
- implement optional Boolean coercion of values
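A rough sketch of the querying/coercion scheme (the accepted Boolean
spellings and the dict stand-ins for the two gluster command outputs
are assumptions, not gluster's exact behavior):

```python
TRUE_VALUES = ('on', 'yes', 'true', 'enable', '1')
FALSE_VALUES = ('off', 'no', 'false', 'disable', '0')

def coerce_boolean(value):
    """Optional Boolean coercion of an option value (the accepted
    spellings here are assumptions, not gluster's exact list)."""
    if value is None:
        return None
    lowered = value.lower()
    if lowered in TRUE_VALUES:
        return True
    if lowered in FALSE_VALUES:
        return False
    raise ValueError("cannot coerce %r to Boolean" % (value,))

def get_vol_option(regular_options, user_options, name, boolean=False):
    """Query user.* options from 'volume info' data and regular ones
    from 'volume get' data; an unset option yields None, not an error."""
    source = user_options if name.startswith('user.') else regular_options
    value = source.get(name)
    return coerce_boolean(value) if boolean else value
```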
Partially implements bp gluster-code-cleanup
Change-Id: I9e0843b88cd1a1668fe48c6979029c012dcbaa13
GlusterManager:
- add various error policies to gluster_call
- add set_vol_option method, with optional error tolerance
(making use of the respective gluster call error policy),
with support for Boolean options
- rename get_gluster_vol_option method to get_vol_option
for uniform nomenclature and simplicity (the "gluster" in
the method name was redundant as the classname already
hints about the Gluster scope)
Partially implements bp gluster-code-cleanup
Change-Id: I02a1d591d36c6a64eea55ed64cf715f94c1fd1c8
Replacing dict.iteritems()/.itervalues() with
six.iteritems(dict)/six.itervalues(dict) was preferred in the past,
but a later discussion suggested avoiding six for this [1].
The overhead of creating a temporary list on Python 2 is negligible.
[1]http://lists.openstack.org/pipermail/openstack-dev/2015-June/066391.html
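The replacement is simply the plain py3 dict methods, e.g.:

```python
# Portable spelling (and the only spelling on py3): plain dict methods
# instead of d.iteritems()/six.iteritems(d) and friends.
d = {'a': 1, 'b': 2}

items = list(d.items())      # instead of d.iteritems()
values = sorted(d.values())  # instead of d.itervalues()
```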
Partially-implements blueprint py3-compatibility
Change-Id: Ia2298733188b3d964d43a547504ede2ebeaba9bd
So far, GlusterManager.gluster_call was directly calling
into the given execution function, whose expected
error type is ProcessExecutionError; yet in practically
all use cases we wanted to raise a GlusterfsException upwards,
so each use case individually coerced the
ProcessExecutionError into a GlusterfsException. This produced
a huge amount of excess boilerplate code.
Here we include the coercion in the definition of gluster_call
and clean out the excess code.
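The coercion can be sketched as a wrapper (illustrative; the real
change folds the try/except into gluster_call itself, and the stub
exception classes below stand in for the manila/oslo ones):

```python
import functools

class ProcessExecutionError(Exception):
    """Stand-in for the execution layer's error type."""

class GlusterfsException(Exception):
    """Stand-in for the driver-level exception."""

def coerce_execution_errors(func):
    """Fold the per-call-site coercion into one place: any
    ProcessExecutionError surfaces as a GlusterfsException."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except ProcessExecutionError as exc:
            raise GlusterfsException(str(exc))
    return wrapper

@coerce_execution_errors
def gluster_call(*args):
    # Illustrative failing call.
    raise ProcessExecutionError('gluster exited with status 1')
```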
Partially implements bp gluster-code-cleanup
Change-Id: I0ad0478393df2cbb6d077363ebd6b91ceed21679
With volume layout the volume we use to back a share can
be pre-created (part of the volume pool provided for Manila),
or can be created by Manila (which happens if the share is created
from a snapshot, in which case the volume is obtained by performing
a 'snapshot clone' gluster operation).
In terms of resource management, pre-created volumes are owned
by the pool, and Manila-cloned ones are owned by Manila. So
far we kept all the volumes upon giving up their use (ie. deleting
the share they belonged to) -- we only ran a cleanup routine on them.
However, that's the appropriate action only for the pool-owned ones;
the ones we own should rather be destroyed to avoid
a resource leak. This patch implements this practice by marking
Manila-owned volumes with a gluster user option.
Closes-Bug: #1506298
Change-Id: I165cc225cb7aca44785ed9ef60f459b8d46af564
With volume layout, the share-volume association was kept solely
in the manila DB. That is not robust enough, first and
foremost because upon starting the service, the manager
will indicate existing share associations only for those
volumes whose shares are in the 'available' state.
We need to know, though, whether a volume is in a pristine state
or not, regardless of the state of its shares. To this end,
we introduce the 'user.manila-share' GlusterFS volume option
to indicate manila share association -- made possible by
GlusterFS allowing any user-defined option to exist in the
'user' option namespace -- and this indicator remains in place
until we explicitly drop it in `delete_share`. (The value
of 'user.manila-share' is the id of the share owning the
volume.)
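The scheme, sketched with a plain dict standing in for gluster's
per-volume option store (everything but the 'user.manila-share' key
is illustrative):

```python
def mark_volume(vol_options, share_id):
    """Tag a volume as owned by a manila share."""
    vol_options['user.manila-share'] = share_id

def is_pristine(vol_options):
    """A volume without the 'user.manila-share' marker is free for
    reuse, regardless of what state its share record may be in."""
    return 'user.manila-share' not in vol_options

def unmark_volume(vol_options):
    """Dropped explicitly in delete_share."""
    vol_options.pop('user.manila-share', None)
```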
As a beneficial side effect, this change will also provide
insight to the Gluster storage admin about usage of the
Manila volume pool.
Change-Id: Icb388fd31fb6a992bee7e731f5e84403d5fd1a85
Partial-Bug: #1501670
Actually, all uses of export_location are incorrect -- from the
layout code's point of view, export_location is an arbitrary
opaque value, obtained from self.driver._setup_via_manager, with
which the only legitimate action is to return it from create_share*.
That we use export_location as a dict key in ensure_share is just
a pre-layout relic that survived by virtue of remaining
unnoticed. Now the referenced bug forced it out of the dark.
Change-Id: I965dae99486002f00145daff0cd2a848777b5b81
Partial-Bug: #1501670
When handling create_share_from_snapshot with glusterfs
volume layout, we do a snapshot clone gluster operation
that gives us a new volume (which will be used to back
the new share). 'snapshot clone' does not start the
resultant volume; we have to start it explicitly from Manila.
So far the volume layout code did not bother with this;
rather, the 'vol start' was called from the glusterfs-native
driver. That however broke all other volume layout based
configs (ie. the glusterfs driver with vol layout).
Fix this now by making the 'vol start' call in the vol
layout code.
Change-Id: I63c13ce468a3227f09e381814f55e8c914fbef95
Closes-Bug: #1499347
glusterfs and glusterfs_native had a distinct
set of options to specify ssh credentials
(glusterfs_server_password vs glusterfs_native_server_password
and glusterfs_path_to_private_key vs glusterfs_native_path_to_private_key).
There is no reason to keep these separate; worsening the situation,
these options have been moved to layouts in an ad-hoc manner,
breaking certain driver/layout combos whereby the credential
option used by the driver is not provided by the chosen layout
and was thus left undefined.
Fix all this mess by defining glusterfs_server_password and
glusterfs_path_to_private_key in glusterfs.common, and
providing the native variants as deprecated aliases.
Change-Id: I48f8673858d2bff95e66bb7e72911e87030fdc0e
Closes-Bug: #1497212
The basic problem is that determining the
export location of a share should happen in
driver scope (as it depends on available
export mechanisms, which are implemented by
the driver) while the code did it in layout
scope in ad-hoc ways. Also in native driver
the export location was abused to store the
address of the backing GlusterFS resource.
Fix these by
- layout:
- GlusterfsShareDriverBase._setup_via_manager
(the layer -> driver reverse callback) will
provide the export location as return value
- the share object is also passed to
GlusterfsShareDriverBase._setup_via_manager
(besides the gluster manager), because some
driver configs will need it to specify the
export location
- glusterfs-native:
- free the code from using the export location
(apart from composing it in _setup_via_manager);
store the address of the backing resource in
private storage instead of in the export location
field
- glusterfs:
- define the `get_export` method for the export
helpers that provide the export location
- _setup_via_manager determines export location
by calling the helper's get_export
Change-Id: Id02e4908a3e8e435c4c51ecacb6576785ac8afb6
Closes-Bug: #1476774
Closes-Bug: #1493080
Previously, a 'ShareSnapshot' object was passed to the driver's API
method by share's manager.py during a create_share_from_snapshot call.
Now, a 'ShareSnapshotInstance' object is passed to the driver during
the same call. The object no longer has the attribute 'share' used by
the driver code, and in its place has the attribute 'share_instance'.
So replace use of 'share' attribute with 'share_instance'.
Change-Id: Ibea11b33772f24609f9cd3180d61ab7f6307c1b8
Closes-Bug: #1495382
The volume management done by gluster_native has
been isolated and captured in a separate layout class.
gluster_native now implements only {allow,deny}_access;
for the rest it uses the layout code.
Semantics is preserved with one difference:
the Manila host is now assumed to be set up so that
it can mount the GlusterFS volumes without any
complications. Earlier we assumed not to have
cert-based access from the Manila host, and therefore
turned SSL off and on on the GlusterFS side. That does
not make sense for the separate, layout-agnostic logic.
(Nb. we already wanted to adopt this assumption,
regardless of the layout work.)
Partially implements bp modular-glusterfs-share-layouts
Change-Id: I3cbc55eed0f61fe4808873f78811b6c3fd1c66aa