Commit Graph

36 Commits

Author SHA1 Message Date
silvacarloss d8b9d5a9e6 Migrate GlusterFS to privsep style
Change-Id: I71ab5a3606971b696111e16db8fd48351a1099f7
2022-08-17 17:43:50 -03:00
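The commit above only names the privsep migration; a generic sketch of the oslo.privsep style it refers to (context name, config section, capability set and the umount helper are all illustrative, not manila's actual definitions):

    from oslo_concurrency import processutils
    from oslo_privsep import capabilities
    from oslo_privsep import priv_context

    # Generic privileged context; privileged calls run in a separate daemon
    # instead of being wrapped with rootwrap/sudo at each call site.
    sys_admin_pctxt = priv_context.PrivContext(
        __name__,
        cfg_section='glusterfs_privileged',
        pypath=__name__ + '.sys_admin_pctxt',
        capabilities=[capabilities.CAP_SYS_ADMIN],
    )


    @sys_admin_pctxt.entrypoint
    def umount(mount_path):
        # Executed inside the privsep daemon with elevated capabilities.
        processutils.execute('umount', mount_path)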
haixin a73b299374 Remove usage of six lib for third party/vendors drivers.
Python 2 is no longer supported, so in this patch
set we remove the usage of six (the py2/py3
compatibility library) in favor of py3 syntax.

Change-Id: I3ddfad568a1b578bee23a6d1a96de9551e336bb4
2022-01-29 03:01:17 +00:00
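A minimal before/after illustration of the kind of mechanical six removal such a patch makes (the names below are generic examples, not taken from the driver):

    # Before (Python 2/3 compatible via six):
    #   import six
    #   if isinstance(path, six.string_types): ...
    #   for key, value in six.iteritems(options): ...
    #   @six.add_metaclass(abc.ABCMeta)
    #   class Layout(object): ...

    # After (Python 3 only):
    import abc


    class Layout(metaclass=abc.ABCMeta):
        """Example base class using native Python 3 metaclass syntax."""


    def normalize(path, options):
        if isinstance(path, str):            # six.string_types -> str
            path = path.strip()
        for key, value in options.items():   # six.iteritems() -> dict.items()
            print(key, value)
        return path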
Zuul fca72a2d23 Merge "Remove deprecated config and auth" 2021-06-22 01:46:56 +00:00
LinPeiWen 63e255248b [Glusterfs] Fix create share from snapshot failed
1. After performing a snapshot clone of a GlusterFS vol,
   the status of the new vol is 'Created', while the setting
   "gluster volume set nfs.rpc-auth-reject '*'" requires
   the vol to be in the 'Started' state.
2. The cloned volume needs its snapshot to be activated;
   if the snapshot is already activated, the activation step must be skipped.

Closes-Bug: #1922075
Change-Id: I304bf59b3f8c0d5b847078a5752bac8ac4f21690
2021-04-27 09:23:13 +00:00
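A minimal sketch of the fix's logic, assuming a hypothetical gluster_mgr.gluster_call() helper that runs gluster CLI commands and returns (stdout, stderr); the helper names are illustrative, not the driver's actual API:

    def snapshot_is_activated(gluster_mgr, snapshot_name):
        # Hypothetical check: inspect `gluster snapshot info` output.
        out, _err = gluster_mgr.gluster_call('snapshot', 'info', snapshot_name)
        return 'Started' in out


    def create_volume_from_snapshot(gluster_mgr, snapshot_name, clone_vol):
        # Activate the snapshot only if it is not activated already.
        if not snapshot_is_activated(gluster_mgr, snapshot_name):
            gluster_mgr.gluster_call('snapshot', 'activate', snapshot_name)

        gluster_mgr.gluster_call('snapshot', 'clone', clone_vol, snapshot_name)

        # The cloned volume is only in the 'Created' state; start it before
        # applying options such as nfs.rpc-auth-reject, which require the
        # volume to be 'Started'.
        gluster_mgr.gluster_call('volume', 'start', clone_vol)
        gluster_mgr.gluster_call('volume', 'set', clone_vol,
                                 'nfs.rpc-auth-reject', '*')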
Tom Barron 5af3b8e68b Remove deprecated config and auth
Remove manila configuration options
and auth classes that were deprecated
before the Ussuri release.

Change-Id: I148225926cd249a0dd8d1f8c02b22ed06487f405
2021-04-26 11:53:58 -04:00
Goutham Pacha Ravi 914d873774 [glusterfs] don't reinit volume list on deletion
We don't need to re-initialize the volume list
on deletion; it still makes sense to add a missing
volume to the list, going by the reasoning defined
in I14835f6c54376737b41cbf78c94908ea1befde15

Related-Bug: #1894362
Change-Id: I96d49f84122a34701328909c929ede4d66746911
2020-11-30 19:47:43 -08:00
linpeiwen 41b0b95ef6 [Glusterfs] Fix delete share, Couldn't find the 'gluster_used_vols'
When we have multiple share driver backends, creating a share instance
only updates 'self.gluster_used_vols' on the node that handled the
request. If the RPC request to delete the share instance is sent
to another node, its 'self.gluster_used_vols' does not contain the
GlusterFS volume we want to delete, so we need to update
'self.gluster_used_vols' when deleting the instance.

Change-Id: I14835f6c54376737b41cbf78c94908ea1befde15
Closes-Bug: #1894362
2020-09-11 07:08:30 +00:00
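A sketch of the idea behind the fix (class and helper names are illustrative): when the delete RPC lands on a node that never created the share, the backing volume may be missing from that node's in-memory set, so it is added back before cleanup instead of being looked up and failing:

    class GlusterfsDriverSketch(object):
        def __init__(self):
            self.gluster_used_vols = set()

        def _share_to_vol(self, share):
            # Hypothetical mapping from a share to its GlusterFS volume URI.
            return share['export_location']

        def delete_share(self, context, share, share_server=None):
            vol = self._share_to_vol(share)
            # Another node may have created the share; make sure the volume
            # is tracked locally before cleaning it up.
            self.gluster_used_vols.add(vol)
            self._wipe_and_release(vol)

        def _wipe_and_release(self, vol):
            # Placeholder for the driver's actual cleanup routine.
            self.gluster_used_vols.discard(vol)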
linpeiwen 9d44ba0b6a [Glusterfs] Fix delete share, mount point not disconnected
When we delete a share instance, in addition to erasing the data
in the share, we should disconnect the client's mount point to
prevent further data from being written.

Closes-Bug: #1886010
Change-Id: I7a334fb895669cc807a288e6aefe62154a89a7e4
2020-09-02 00:41:59 +00:00
Douglas Viroel 6c47b193b0 Create share from snapshot in another pool or backend
This patch enables the creation of a share from snapshot
specifying another pool or backend. In the scheduler, a
new filter and weigher were implemented in order to consider
this operation if the backend supports it. Also, a new
field called 'progress' was added to the share and share
instance. The 'progress' field indicates the status
of the create-share-from-snapshot operation (as a percentage).
Finally, a new periodic task was added in order to periodically
check the share status.

Partially-implements: bp create-share-from-snapshot-in-another-pool-or-backend

DOCImpact
Change-Id: Iab13a0961eb4a387a502246e5d4b79bc9046e04b
Co-authored-by: carloss <ces.eduardo98@gmail.com>
Co-authored-by: dviroel <viroel@gmail.com>
2020-04-09 11:15:22 -03:00
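A hedged sketch of the scheduling idea only, not manila's actual filter and weigher classes; the capability key and request fields below are illustrative:

    def host_passes(host_capabilities, request_spec):
        # Only relevant when the request creates a share from a snapshot.
        if not request_spec.get('snapshot_id'):
            return True
        # Backends must advertise support (e.g. via a capability such as
        # 'create_share_from_snapshot_support') to be considered.
        return bool(host_capabilities.get(
            'create_share_from_snapshot_support'))


    def weigh_host(host_name, parent_share_host):
        # Prefer the pool/backend that already holds the snapshot's parent
        # share; other eligible backends get a lower weight.
        return 1.0 if host_name == parent_share_host else 0.0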
Andreas Jaeger 27808af118 Hacking: Fix E731
Fix:
E731 do not assign a lambda expression, use a def

I just marked the lambdas with noqa.

Also fix other problems found by hacking in the changed files.

Change-Id: I4e47670f5a96e61fba617e4cb9478958f7089711
2020-04-01 14:11:10 +02:00
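A tiny illustration of the E731 rule and the two ways such a patch handles it (the helper name is made up):

    # Flagged by flake8 E731, kept intentionally and silenced:
    sizestr = lambda gib: '%sGB' % gib  # noqa: E731


    # Preferred form:
    def sizestr_def(gib):
        return '%sGB' % gib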
yfzhao 059fae0ed5 Remove log translations in share and share_group 4/5
Log messages are no longer being translated. This removes all use of
the _LE, _LI, and _LW translation markers to simplify logging and to
avoid confusion with new contributions.
This is the 4/5 commit.
Old commit will be abandoned: https://review.openstack.org/#/c/447822/

See:
http://lists.openstack.org/pipermail/openstack-i18n/2016-November/002574.html
http://lists.openstack.org/pipermail/openstack-dev/2017-March/113365.html

Change-Id: Ia46e9dc4953c788274f5c9b763b2fed96c28d60e
Depends-On: I9fd264a443c634465b8548067f86ac14c1a51faa
Partial-Bug: #1674542
2017-03-31 10:20:11 +08:00
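An illustrative before/after for dropping the _LE/_LI/_LW markers from log calls (messages here are examples, not the driver's actual log lines):

    import logging

    LOG = logging.getLogger(__name__)

    # Before:
    #   LOG.error(_LE("Error retrieving volume info: %s"), exc)
    #   LOG.warning(_LW("Could not unmount %s"), mountpoint)

    # After: plain, untranslated log messages.
    def report_failure(exc, mountpoint):
        LOG.error("Error retrieving volume info: %s", exc)
        LOG.warning("Could not unmount %s", mountpoint)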
Jenkins a97983dd0e Merge "Put all imports from manila.i18n in one line" 2016-09-14 15:04:07 +00:00
zzxwill fb44a0a49e Put all imports from manila.i18n in one line
Put '_', '_LW', '_LI', '_LE' from manila.i18n on one line
to make it cleaner. Nova, Neutron and many other projects
follow this rule, e.g.
bc5035343d/nova/virt/disk/mount/nbd.py
ee42af1011/neutron/cmd/ipset_cleanup.py
(added more files)

Change-Id: If7ed442ebe946b32b3234ce37b38ee3a5ccbcb39
2016-09-14 06:07:33 +00:00
Csaba Henk 58be1ef71a glusterfs: handle new cli XML format
Change http://review.gluster.org/14931,
debuting in GlusterFS 3.7.14, has changed the
XML output emitted by the gluster command
line interface. Here we implement parsing
for the new variant as well.

Change-Id: Ia9f340f1d56c95d5ebf5577df6aae9d708a026c0
Closes-Bug: 1609858
2016-09-09 08:25:25 +02:00
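A hedged sketch of parsing both `gluster volume get --xml` layouts; the element names follow the pattern described above (older CLIs emit flat pairs, 3.7.14+ wraps each pair in an element) but should be treated as illustrative rather than exact:

    import xml.etree.ElementTree as ET


    def parse_volume_get(xml_text):
        """Return {option: value} from `gluster volume get ... --xml`."""
        root = ET.fromstring(xml_text)
        volgetopts = root.find('.//volGetopts')
        options = {}
        if volgetopts is None:
            return options
        opt_wrappers = volgetopts.findall('Opt')
        if opt_wrappers:
            # Newer CLI: each name/value pair is wrapped in <Opt>.
            for opt in opt_wrappers:
                options[opt.findtext('Option')] = opt.findtext('Value')
        else:
            # Older CLI: flat sequence of <Option> and <Value> elements.
            names = [e.text for e in volgetopts.findall('Option')]
            values = [e.text for e in volgetopts.findall('Value')]
            options.update(zip(names, values))
        return options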
Anh Tran 973711b311 Removing some redundant words
This patch removes some redundant "the".
TrivialFix

Change-Id: I8b13857fc48c8c3941c9babb17186e9ba673ea51
2016-03-29 08:10:15 +00:00
Csaba Henk ec5d9ca466 glusterfs volume layout: take care of deletion of DOA shares
In delete_share, if the private_storage entry of the share is
missing, recognize this as an indication of a botched
creation and return immediately so that the runtime
can go on with evicting the dangling share entry.

Change-Id: I76dabe0acc0b67ea2b03e77eb0743772ef25579d
Closes-bug: #1554290
2016-03-15 11:43:51 +05:30
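A sketch of the early-return described above; `private_storage` stands in for the driver's private-storage API and the key name is illustrative:

    def delete_share_sketch(private_storage, share, teardown_volume):
        vol = private_storage.get(share['id'], 'volume')
        if vol is None:
            # Creation never got far enough to record a backing volume, so
            # there is nothing to tear down; returning lets the runtime
            # evict the dangling share entry.
            return
        teardown_volume(vol)
        private_storage.delete(share['id'])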
Csaba Henk 6bbd199b85 glusterfs.common: move the numreduct function to toplevel
This function is not specific to GlusterFS interaction.

Partially implements bp gluster-code-cleanup

Change-Id: I96ef68f13287d6654b65744df67880ab9deccb3f
2016-03-01 10:56:53 +01:00
Csaba Henk 768b02bdb7 gluster*: clean up volume option querying
GlusterFS has two kinds of options:
- regular ones, which form a hardcoded set whose names
  are verified by "gluster volume get"
- user ones, whose names match user.* -- these are
  arbitrarily named, are ignored by "gluster volume get" and
  are listed in "gluster volume info" output

So far we used "gluster volume info" universally, but that,
apart from being cumbersome for regular options, is also
incorrect, as it can't distinguish an unset option name from
an undefined one (querying the former should be treated as OK,
querying the latter as an error).

- implement querying of regular options with "gluster volume
  get" (accepting empty response)
- implement querying of user options with searching "gluster vol
  info" data
- verify operations on the XML tree, make tacit XML layout
  assumptions explicit
- implement optional Boolean coercion of values

Partially implements bp gluster-code-cleanup

Change-Id: I9e0843b88cd1a1668fe48c6979029c012dcbaa13
2016-02-27 02:11:43 +01:00
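A sketch of the two query paths described above, with illustrative helper names (volume_info_options, gluster_call): regular options go through "gluster volume get" (an empty answer is fine), while user.* options are looked up in "gluster volume info" data:

    def get_vol_option_sketch(gluster_mgr, volume, option, boolean=False):
        if option.startswith('user.'):
            # User options only show up in "gluster volume info" output.
            value = gluster_mgr.volume_info_options(volume).get(option)
        else:
            # Regular options have a fixed, validated name set; "volume get"
            # rejects unknown names, which distinguishes an unset option
            # (empty answer, OK) from an undefined one (error).
            out, _err = gluster_mgr.gluster_call('volume', 'get',
                                                 volume, option)
            lines = out.strip().splitlines()
            fields = lines[-1].split(None, 1) if lines else []
            value = fields[1].strip() if len(fields) == 2 else None
        if boolean and value is not None:
            # Optional Boolean coercion of the returned string.
            value = value.lower() in ('on', 'true', 'yes', '1', 'enable')
        return value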
Csaba Henk 5847a12260 gluster*: add proper getter/setters for volume options
GlusterManager:
- add various error policies to gluster_call
- add set_vol_option method, with optional error tolerance
  (making use of the respective gluster call error policy),
  with support for Boolean options
- rename get_gluster_vol_option method to get_vol_option
  for uniform nomenclature and simplicity (the "gluster" in
  the method name was redundant as the classname already
  hints about the Gluster scope)

Partially implements bp gluster-code-cleanup

Change-Id: I02a1d591d36c6a64eea55ed64cf715f94c1fd1c8
2016-02-16 05:18:40 +01:00
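A sketch of the setter side with illustrative names: Boolean values are translated to the strings GlusterFS expects, and failures can optionally be tolerated, mirroring the error-policy idea above:

    def set_vol_option_sketch(gluster_mgr, volume, option, value,
                              ignore_failure=False):
        if value is True:
            value = 'on'
        elif value is False:
            value = 'off'
        try:
            gluster_mgr.gluster_call('volume', 'set', volume,
                                     option, str(value))
        except Exception:
            # With a tolerant error policy the failure is swallowed;
            # otherwise it propagates to the caller.
            if not ignore_failure:
                raise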
Jenkins 78f06327e6 Merge "Using dict.items() is better than six.iteritems(dict)" 2016-01-26 11:26:59 +00:00
ting.wang 2bc625399f Using dict.items() is better than six.iteritems(dict)
Replacing dict.iteritems()/.itervalues() with
six.iteritems(dict)/six.itervalues(dict) was preferred in the past,
but there was a discussion suggesting to avoid six for this[1].
The overhead of creating a temporary list on Python 2 is negligible.

[1]http://lists.openstack.org/pipermail/openstack-dev/2015-June/066391.html
Partially-implements blueprint py3-compatibility

Change-Id: Ia2298733188b3d964d43a547504ede2ebeaba9bd
2016-01-19 22:22:48 +08:00
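A trivial illustration of the change: on Python 3, dict.items() is already a view, so the six wrapper adds nothing (example data only):

    options = {'nfs.export-volumes': 'off', 'user.manila-share': 'share-id'}

    # Before: for key, value in six.iteritems(options): ...
    for key, value in options.items():
        print(key, value)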
Csaba Henk c1f2234dd2 gluster*: refactor gluster_call
So far, GlusterManager.gluster_call was directly calling
into the given execution function, whose expected
error type is ProcessExecutionError, while in practically
all use cases we wanted to raise a GlusterfsException upwards,
so every use case individually coerced the
ProcessExecutionError into a GlusterfsException. This produced
a huge amount of excess boilerplate code.

Here we include the coercion in the definition of gluster_call
and clean out the excess code.

Partially implements bp gluster-code-cleanup

Change-Id: I0ad0478393df2cbb6d077363ebd6b91ceed21679
2016-01-19 00:54:23 +01:00
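A sketch of the coercion pattern (the exception classes and the `execute` callable here are stand-ins): the ProcessExecutionError-to-GlusterfsException translation lives in one place instead of at every call site:

    class GlusterfsException(Exception):
        pass


    class ProcessExecutionError(Exception):
        pass


    def gluster_call_sketch(execute, *args, error_policy='coerce'):
        try:
            return execute('gluster', *args)
        except ProcessExecutionError as exc:
            if error_policy == 'raw':
                # Selected call sites may still want the original error.
                raise
            raise GlusterfsException(
                'gluster %s failed: %s' % (' '.join(args), exc))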
Jenkins 5a4a029e44 Merge "Replace deprecated [logger/LOG].warn with warning" 2016-01-09 05:32:59 +00:00
Jenkins 5b2b6f33d7 Merge "glusterfs/vol layout: remove manila-created vols upon delete_share" 2016-01-09 01:11:52 +00:00
huayue 44fc3021d4 Replace deprecated [logger/LOG].warn with warning
Python 3 deprecated the logger.warn method, see:
https://docs.python.org/3/library/logging.html#logging.warning, so we
prefer to use warning to avoid DeprecationWarning.

Change-Id: I6b09f67bb63fbdf31903ec175db012fc50e87f16
Closes-Bug: 1508442
2016-01-08 10:32:47 +08:00
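A trivial illustration of the replacement (example message only):

    import logging

    LOG = logging.getLogger(__name__)

    # Before (deprecated alias): LOG.warn("mount of %s failed", vol)
    LOG.warning("mount of %s failed", "example-volume")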
Csaba Henk 52bfb25a51 glusterfs/vol layout: remove manila-created vols upon delete_share
With the volume layout, the volume we use to back a share can
be pre-created (part of the volume pool provided for Manila),
or can be created by Manila (that happens if the share is created
from a snapshot, in which case the volume is obtained by performing
a 'snapshot clone' gluster operation).

In terms of resource management, pre-created volumes are owned
by the pool, and Manila-cloned ones are owned by Manila. So
far we kept all the volumes upon giving up their use (i.e. deleting
the shares they belonged to) -- we only ran a cleanup routine on them.
However, that is the appropriate action only for the pool-owned ones;
the ones we own should rather be removed to avoid
a resource leak. This patch implements this practice by marking
Manila-owned volumes with a gluster user option.

Closes-Bug: #1506298
Change-Id: I165cc225cb7aca44785ed9ef60f459b8d46af564
2016-01-06 09:02:53 +01:00
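A sketch of the ownership marking; the option name and helper call shapes below are illustrative (the real marker is a gluster user.* option, per the message above). Clones created by Manila are tagged, and delete_share destroys only tagged volumes, handing pool-owned ones back after cleanup:

    MANILA_OWNED_OPT = 'user.manila-cloned-from'   # illustrative name


    def clone_for_share(gluster_mgr, snapshot_name, new_vol):
        gluster_mgr.gluster_call('snapshot', 'clone', new_vol, snapshot_name)
        gluster_mgr.gluster_call('volume', 'set', new_vol,
                                 MANILA_OWNED_OPT, snapshot_name)


    def release_backing_volume(gluster_mgr, vol):
        if gluster_mgr.get_vol_option(vol, MANILA_OWNED_OPT):
            # Manila created this volume from a snapshot: remove it entirely.
            gluster_mgr.gluster_call('volume', 'stop', vol)
            gluster_mgr.gluster_call('volume', 'delete', vol)
        else:
            # Pool-owned volume: run the usual cleanup and hand it back.
            wipe_volume_contents(gluster_mgr, vol)


    def wipe_volume_contents(gluster_mgr, vol):
        # Placeholder for the pre-existing cleanup routine.
        pass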
Jenkins c3fe1f0ec7 Merge "glusterfs/volume layout: indicate volume usage on volumes themselves" 2015-12-20 15:34:40 +00:00
Jenkins 14865090e2 Merge "glusterfs/volume layout: fix incorrect usage of export_location" 2015-11-25 14:10:03 +00:00
Csaba Henk 3537be2516 glusterfs/volume layout: indicate volume usage on volumes themselves
With the volume layout, the share-volume association was kept solely
in the manila DB. That is not robust enough, first and
foremost because upon starting the service, the manager
will indicate existing share associations only for those
volumes whose shares are in the 'available' state.

We need to know, though, whether a volume is in a pristine state or
not, regardless of the state of its share. To this end,
we introduce the 'user.manila-share' GlusterFS volume option
to indicate manila share association -- made possible by
GlusterFS allowing any user defined option to exist in the
'user' option name space --, and this indicator remains in place
until we explicitly drop it in `delete_share`. (The value
of 'user.manila-share' is the id of the share owning the
volume.)

As a beneficial side effect, this change will also provide
insight to the Gluster storage admin about usage of the
Manila volume pool.

Change-Id: Icb388fd31fb6a992bee7e731f5e84403d5fd1a85
Partial-Bug: #1501670
2015-11-20 07:03:46 +00:00
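A sketch of the marker handling; the 'user.manila-share' option name comes from the message above, while the helper shape is illustrative. The share id is recorded on the volume when it is taken into use and the marker is dropped again in delete_share:

    def mark_volume_used(gluster_mgr, vol, share_id):
        gluster_mgr.gluster_call('volume', 'set', vol,
                                 'user.manila-share', share_id)


    def drop_volume_marker(gluster_mgr, vol):
        # `gluster volume reset <vol> <option>` clears a user.* option.
        gluster_mgr.gluster_call('volume', 'reset', vol, 'user.manila-share')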
Csaba Henk e91632b6e2 glusterfs/volume layout: fix incorrect usage of export_location
Actually, all uses of export_location were incorrect -- from the
layout code's point of view, export_location is an arbitrary
opaque value, obtained from self.driver._setup_via_manager, with
which the only legitimate action is to return it from create_share*.

Using export_location as a dict key in ensure_share is just
a pre-layout relic that survived by virtue of remaining
unnoticed. Now the referenced bug forced it out of the dark.

Change-Id: I965dae99486002f00145daff0cd2a848777b5b81
Partial-Bug: #1501670
2015-11-20 01:04:31 +01:00
Shuquan Huang 8247f350af remove default=None for config options
In the cfg module, default=None is already the default value, so
specifying it explicitly is redundant.

Change-Id: I9303e2ae4b8f301757744efc09136868db29472a
Closes-bug: #1323975
2015-11-08 20:12:25 +08:00
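A tiny illustration with oslo.config (the option name here is made up): omitting `default` already yields None, so spelling it out is redundant:

    from oslo_config import cfg

    example_opts = [
        # Before: cfg.StrOpt('glusterfs_example_opt', default=None, help=...)
        cfg.StrOpt('glusterfs_example_opt',
                   help='Example option; the implicit default is None.'),
    ]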
Csaba Henk 4e4c8759a2 glusterfs vol layout: start volume cloned from snapshot
When handling create_share_from_snapshot with the glusterfs
volume layout, we perform a 'snapshot clone' gluster operation
that gives us a new volume (which will be used to back
the new share). 'snapshot clone' does not start the
resultant volume; we have to start it explicitly from Manila.
So far the volume layout code did not take care of this;
rather, the 'vol start' call was made from the glusterfs-native
driver. That however broke all other volume-layout based
configs (i.e. the glusterfs driver with the vol layout).

Fix this now by doing the 'vol start' call in the vol
layout code.

Change-Id: I63c13ce468a3227f09e381814f55e8c914fbef95
Closes-Bug: #1499347
2015-09-29 09:49:19 +02:00
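A sketch of where the fix moves the call (the helper shape is illustrative): the layout that performs the 'snapshot clone' now also starts the resulting volume, so every volume-layout based config gets a usable volume, not just the native driver:

    def create_share_from_snapshot_sketch(gluster_mgr, snapshot_name, new_vol):
        gluster_mgr.gluster_call('snapshot', 'clone', new_vol, snapshot_name)
        # 'snapshot clone' leaves the new volume stopped; start it here in
        # the volume layout rather than in the glusterfs-native driver.
        gluster_mgr.gluster_call('volume', 'start', new_vol)
        return new_vol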
Csaba Henk 42f6a55c0a glusterfs*: fix ssh credential options
glusterfs and glusterfs_native had distinct
sets of options to specify ssh credentials
(glusterfs_server_password vs glusterfs_native_server_password
and glusterfs_path_to_private_key vs glusterfs_native_path_to_private_key).

There is no reason to keep these separate; worse,
these options had been moved to layouts in an ad-hoc manner,
breaking certain driver/layout combos whereby the credential
option used by the driver is not provided by the chosen layout
and thus remains undefined.

Fix this mess by defining glusterfs_server_password and
glusterfs_path_to_private_key in glusterfs.common, and
providing the native variants as deprecated aliases.

Change-Id: I48f8673858d2bff95e66bb7e72911e87030fdc0e
Closes-Bug: #1497212
2015-09-22 12:36:47 +02:00
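A sketch of the consolidation using oslo.config's deprecated-name support; the option names come from the message above, while the help strings and exact attributes are illustrative:

    from oslo_config import cfg

    glusterfs_common_opts = [
        cfg.StrOpt('glusterfs_server_password',
                   secret=True,
                   deprecated_name='glusterfs_native_server_password',
                   help='Remote GlusterFS server node root password.'),
        cfg.StrOpt('glusterfs_path_to_private_key',
                   deprecated_name='glusterfs_native_path_to_private_key',
                   help='Path of Manila host SSH private key file.'),
    ]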
Csaba Henk bd146a841d glusterfs*: amend export location
The basic problem is that determining the
export location of a share should happen in
driver scope (as it depends on the available
export mechanisms, which are implemented by
the driver), while the code did it in layout
scope in ad-hoc ways. Also, in the native driver
the export location was abused to store the
address of the backing GlusterFS resource.

Fix these by

-  layout:
   - GlusterfsShareDriverBase._setup_via_manager
     (the layer -> driver reverse callback) will
     provide the export location as return value
   - the share object is also passed to
     GlusterfsShareDriverBase._setup_via_manager
     (besides the gluster manager), because some
     driver configs will need it to specify the
     export location
-  glusterfs-native:
   - free the code from using export location
     (apart from composing it in _setup_via_manager);
     store the address of backing resource in
     private storage instead of the export location
     field
-  glusterfs:
   - define the `get_export` method for the export
     helpers that provide the export location
   - _setup_via_manager determines export location
     by calling the helper's get_export

Change-Id: Id02e4908a3e8e435c4c51ecacb6576785ac8afb6
Closes-Bug: #1476774
Closes-Bug: #1493080
2015-09-18 14:06:54 +02:00
Ramana Raja cf7bc890fc glusterfs: Fix use of ShareSnapshotInstance object
Previously, a 'ShareSnapshot' object was passed to the driver's API
method by the share manager.py during a create_share_from_snapshot call.
Now, a 'ShareSnapshotInstance' object is passed to the driver during
the same call. The object no longer has the attribute 'share' used by
the driver code, and in its place has the attribute 'share_instance'.
So replace use of the 'share' attribute with 'share_instance'.

Change-Id: Ibea11b33772f24609f9cd3180d61ab7f6307c1b8
Closes-Bug: #1495382
2015-09-15 18:33:44 +05:30
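A trivial illustration of the attribute change: the object handed to the driver now exposes its parent via 'share_instance' instead of 'share':

    def parent_of(snapshot_instance):
        # Before: return snapshot_instance['share']
        return snapshot_instance['share_instance']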
Csaba Henk bf3c40e439 glusterfs: volume mapped share layout
The volume management done by gluster_native has
been isolated and captured in a separate layout class.
gluster_native implements only {allow,deny}_access;
for the rest it uses the layout code.

Semantics is preserved with one difference:
the Manila host is now assumed to be set up so that
it can mount the GlusterFS volumes without any
complications. Earlier we assumed that there is no cert-based
access from the Manila host, and therefore turned
SSL off and on on the GlusterFS side. This does not
make sense for the separate, layout-agnostic logic.
(Nb. we already wanted to make the move to set this
assumption, regardless of the layout work.)

Partially implements bp modular-glusterfs-share-layouts

Change-Id: I3cbc55eed0f61fe4808873f78811b6c3fd1c66aa
2015-09-06 14:02:10 +00:00