Initial Commit

commit 25e93d0f2a

@@ -0,0 +1,202 @@
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "{}"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright {yyyy} {name of copyright owner}

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
@@ -0,0 +1,4 @@

fuel-plugin-cinder-kaminario
============================

Plugin description
@@ -0,0 +1,15 @@

- name: 'storage:block:backend:kaminario'
  label: 'Kaminario'
  description: 'Cinder with Kaminario backend'
  compatible:
    - name: storage:block:lvm
    - name: storage:block:ceph
    - name: storage:object:ceph
    - name: storage:ephemeral:ceph
    - name: storage:image:ceph
    - name: hypervisor:qemu
    - name: network:neutron:core:ml2
    - name: network:neutron:ml2:vlan
    - name: network:neutron:ml2:tun
  incompatible:
    - name: hypervisor:vmware
@@ -0,0 +1,9 @@

notice('MODULAR: cinder_kaminario')

class { 'kaminario::driver': } ->
class { 'kaminario::krest': } ->
class { 'kaminario::config': } ~> Exec[cinder_volume]

exec { 'cinder_volume':
  command => '/usr/sbin/service cinder-volume restart',
}
@@ -0,0 +1,8 @@

ini_setting { 'parser':
  ensure  => present,
  path    => '/etc/puppet/puppet.conf',
  section => 'main',
  setting => 'parser',
  value   => 'future',
}
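The `ini_setting` resource above enables Puppet's future parser by ensuring `parser = future` appears in the `[main]` section of `/etc/puppet/puppet.conf`. As a minimal sketch (not part of the plugin), the same edit can be expressed with Python's stdlib `configparser`; the temporary file here stands in for the real path:

```python
import configparser
import tempfile

# Build the [main] section with the same setting the Puppet resource manages.
config = configparser.ConfigParser()
config["main"] = {"parser": "future"}

# Write to a throwaway file (illustrative stand-in for /etc/puppet/puppet.conf).
with tempfile.NamedTemporaryFile("w", suffix=".conf", delete=False) as f:
    config.write(f)
    path = f.name

# Read it back and confirm the setting took effect.
check = configparser.ConfigParser()
check.read(path)
print(check.get("main", "parser"))  # -> future
```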
@@ -0,0 +1 @@

include kaminario::type

(File diff suppressed because it is too large.)
@@ -0,0 +1,893 @@

# Copyright (c) 2016 by Kaminario Technologies, Ltd.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Volume driver for Kaminario K2 all-flash arrays."""

import math
import re
import threading

import eventlet
from oslo_config import cfg
from oslo_log import log as logging
from oslo_utils import importutils
from oslo_utils import units
from oslo_utils import versionutils
import requests
import six

import cinder
from cinder import exception
from cinder.i18n import _, _LE, _LW, _LI
from cinder.objects import fields
from cinder import utils
from cinder.volume.drivers.san import san
from cinder.volume import utils as vol_utils

krest = importutils.try_import("krest")

K2_MIN_VERSION = '2.2.0'
K2_LOCK_PREFIX = 'Kaminario'
MAX_K2_RETRY = 5
LOG = logging.getLogger(__name__)

kaminario1_opts = [
    cfg.StrOpt('kaminario_nodedup_substring',
               default='K2-nodedup',
               help="If volume-type name contains this substring "
                    "nodedup volume will be created, otherwise "
                    "dedup volume will be created.",
               deprecated_for_removal=True,
               deprecated_reason="This option is deprecated in favour of "
                                 "'kaminario:thin_prov_type' in extra-specs "
                                 "and will be removed in the next release.")]
kaminario2_opts = [
    cfg.BoolOpt('auto_calc_max_oversubscription_ratio',
                default=False,
                help="K2 driver will calculate max_oversubscription_ratio "
                     "on setting this option as True.")]

CONF = cfg.CONF
CONF.register_opts(kaminario1_opts)

K2HTTPError = requests.exceptions.HTTPError
K2_RETRY_ERRORS = ("MC_ERR_BUSY", "MC_ERR_BUSY_SPECIFIC",
                   "MC_ERR_INPROGRESS", "MC_ERR_START_TIMEOUT")

if krest:
    class KrestWrap(krest.EndPoint):
        def __init__(self, *args, **kwargs):
            self.krestlock = threading.Lock()
            super(KrestWrap, self).__init__(*args, **kwargs)

        def _should_retry(self, err_code, err_msg):
            if err_code == 400:
                for er in K2_RETRY_ERRORS:
                    if er in err_msg:
                        LOG.debug("Retry ERROR: %d with status %s",
                                  err_code, err_msg)
                        return True
            return False

        @utils.retry(exception.KaminarioRetryableException,
                     retries=MAX_K2_RETRY)
        def _request(self, method, *args, **kwargs):
            try:
                LOG.debug("running through the _request wrapper...")
                self.krestlock.acquire()
                return super(KrestWrap, self)._request(method,
                                                       *args, **kwargs)
            except K2HTTPError as err:
                err_code = err.response.status_code
                err_msg = err.response.text
                if self._should_retry(err_code, err_msg):
                    raise exception.KaminarioRetryableException(
                        reason=six.text_type(err_msg))
                raise
            finally:
                self.krestlock.release()
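The `KrestWrap._request` wrapper above combines a lock, inspection of the HTTP error body for K2 transient-error codes, and cinder's `utils.retry` decorator. A self-contained sketch of that retry-on-transient-error pattern, with stand-in names (`RetryableError`, `retry`, `flaky_request`) in place of the cinder and krest machinery:

```python
import functools
import time

MAX_RETRY = 5  # mirrors MAX_K2_RETRY above
RETRY_ERRORS = ("MC_ERR_BUSY", "MC_ERR_INPROGRESS")  # subset, for illustration


class RetryableError(Exception):
    """Stand-in for exception.KaminarioRetryableException."""


def retry(exc_type, retries, delay=0):
    """Re-invoke the wrapped call while it raises exc_type, up to `retries`."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(retries):
                try:
                    return func(*args, **kwargs)
                except exc_type:
                    if attempt == retries - 1:
                        raise  # give up after the final attempt
                    time.sleep(delay)
        return wrapper
    return decorator


calls = {"n": 0}


@retry(RetryableError, retries=MAX_RETRY)
def flaky_request():
    # Fail twice with a retryable K2-style error message, then succeed.
    calls["n"] += 1
    if calls["n"] < 3:
        err_msg = "MC_ERR_BUSY: system busy"
        if any(er in err_msg for er in RETRY_ERRORS):
            raise RetryableError(err_msg)
    return "ok"


print(flaky_request())  # -> ok
```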
def kaminario_logger(func):
    """Return a function wrapper.

    The wrapper adds log for entry and exit to the function.
    """
    def func_wrapper(*args, **kwargs):
        LOG.debug('Entering %(function)s of %(class)s with arguments: '
                  ' %(args)s, %(kwargs)s',
                  {'class': args[0].__class__.__name__,
                   'function': func.__name__,
                   'args': args[1:],
                   'kwargs': kwargs})
        ret = func(*args, **kwargs)
        LOG.debug('Exiting %(function)s of %(class)s '
                  'having return value: %(ret)s',
                  {'class': args[0].__class__.__name__,
                   'function': func.__name__,
                   'ret': ret})
        return ret
    return func_wrapper
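Because `kaminario_logger` reads `args[0].__class__.__name__`, it is meant to wrap methods (so `args[0]` is `self`), not plain functions. A runnable sketch of its use, reproducing the decorator standalone and applying it to a hypothetical `Demo` class:

```python
import logging

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger(__name__)


def kaminario_logger(func):
    """Same entry/exit wrapper as above, reproduced so the sketch runs alone."""
    def func_wrapper(*args, **kwargs):
        LOG.debug('Entering %(function)s of %(class)s with arguments: '
                  ' %(args)s, %(kwargs)s',
                  {'class': args[0].__class__.__name__,
                   'function': func.__name__,
                   'args': args[1:],
                   'kwargs': kwargs})
        ret = func(*args, **kwargs)
        LOG.debug('Exiting %(function)s of %(class)s '
                  'having return value: %(ret)s',
                  {'class': args[0].__class__.__name__,
                   'function': func.__name__,
                   'ret': ret})
        return ret
    return func_wrapper


class Demo(object):  # hypothetical class, only for illustration
    @kaminario_logger
    def add(self, a, b):
        return a + b


print(Demo().add(2, 3))  # -> 5
```

Each call emits one "Entering" and one "Exiting" debug record naming the class and method, which is how the driver below traces its own entry points.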


class Replication(object):
    def __init__(self, config, *args, **kwargs):
        self.backend_id = config.get('backend_id')
        self.login = config.get('login')
        self.password = config.get('password')
        self.rpo = config.get('rpo')


class KaminarioCinderDriver(cinder.volume.driver.ISCSIDriver):
    VENDOR = "Kaminario"
    stats = {}

    def __init__(self, *args, **kwargs):
        super(KaminarioCinderDriver, self).__init__(*args, **kwargs)
        self.configuration.append_config_values(san.san_opts)
        self.configuration.append_config_values(kaminario2_opts)
        self.replica = None
        self._protocol = None
        k2_lock_sfx = self.configuration.safe_get('volume_backend_name') or ''
        self.k2_lock_name = "%s-%s" % (K2_LOCK_PREFIX, k2_lock_sfx)

    def check_for_setup_error(self):
        if krest is None:
            msg = _("Unable to import 'krest' python module.")
            LOG.error(msg)
            raise exception.KaminarioCinderDriverException(reason=msg)
        else:
            conf = self.configuration
            self.client = KrestWrap(conf.san_ip,
                                    conf.san_login,
                                    conf.san_password,
                                    ssl_validate=False)
            if self.replica:
                self.target = KrestWrap(self.replica.backend_id,
                                        self.replica.login,
                                        self.replica.password,
                                        ssl_validate=False)
            v_rs = self.client.search("system/state")
            if hasattr(v_rs, 'hits') and v_rs.total != 0:
                ver = v_rs.hits[0].rest_api_version
                ver_exist = versionutils.convert_version_to_int(ver)
                ver_min = versionutils.convert_version_to_int(K2_MIN_VERSION)
                if ver_exist < ver_min:
                    msg = _("K2 rest api version should be "
                            ">= %s.") % K2_MIN_VERSION
                    LOG.error(msg)
                    raise exception.KaminarioCinderDriverException(reason=msg)
            else:
                msg = _("K2 rest api version search failed.")
                LOG.error(msg)
                raise exception.KaminarioCinderDriverException(reason=msg)

    @kaminario_logger
    def _check_ops(self):
        """Ensure that the options we care about are set."""
        required_ops = ['san_ip', 'san_login', 'san_password']
        for attr in required_ops:
            if not getattr(self.configuration, attr, None):
                raise exception.InvalidInput(reason=_('%s is not set.') % attr)

        replica = self.configuration.safe_get('replication_device')
        if replica and isinstance(replica, list):
            replica_ops = ['backend_id', 'login', 'password', 'rpo']
            for attr in replica_ops:
                if attr not in replica[0]:
                    msg = _('replication_device %s is not set.') % attr
                    raise exception.InvalidInput(reason=msg)
            self.replica = Replication(replica[0])

    @kaminario_logger
    def do_setup(self, context):
        super(KaminarioCinderDriver, self).do_setup(context)
        self._check_ops()

    @kaminario_logger
    def create_volume(self, volume):
        """Volume creation in K2 needs a volume group.

        - create a volume group
        - create a volume in the volume group
        """
        vg_name = self.get_volume_group_name(volume.id)
        vol_name = self.get_volume_name(volume.id)
        prov_type = self._get_is_dedup(volume.get('volume_type'))
        try:
            LOG.debug("Creating volume group with name: %(name)s, "
                      "quota: unlimited and dedup_support: %(dedup)s",
                      {'name': vg_name, 'dedup': prov_type})

            vg = self.client.new("volume_groups", name=vg_name, quota=0,
                                 is_dedup=prov_type).save()
            LOG.debug("Creating volume with name: %(name)s, size: %(size)s "
                      "GB, volume_group: %(vg)s",
                      {'name': vol_name, 'size': volume.size, 'vg': vg_name})
            vol = self.client.new("volumes", name=vol_name,
                                  size=volume.size * units.Mi,
                                  volume_group=vg).save()
        except Exception as ex:
            vg_rs = self.client.search("volume_groups", name=vg_name)
            if vg_rs.total != 0:
                LOG.debug("Deleting vg: %s for failed volume in K2.", vg_name)
                vg_rs.hits[0].delete()
            LOG.exception(_LE("Creation of volume %s failed."), vol_name)
            raise exception.KaminarioCinderDriverException(
                reason=six.text_type(ex.message))

        if self._get_is_replica(volume.volume_type) and self.replica:
            self._create_volume_replica(volume, vg, vol, self.replica.rpo)

    @kaminario_logger
    def _create_volume_replica(self, volume, vg, vol, rpo):
        """Volume replica creation in K2 needs session and remote volume.

        - create a session
        - create a volume in the volume group
        """
        session_name = self.get_session_name(volume.id)
        rsession_name = self.get_rep_name(session_name)

        rvg_name = self.get_rep_name(vg.name)
        rvol_name = self.get_rep_name(vol.name)

        k2peer_rs = self.client.search("replication/peer_k2arrays",
                                       mgmt_host=self.replica.backend_id)
        if hasattr(k2peer_rs, 'hits') and k2peer_rs.total != 0:
            k2peer = k2peer_rs.hits[0]
        else:
            msg = _("Unable to find K2peer in source K2:")
            LOG.error(msg)
            raise exception.KaminarioCinderDriverException(reason=msg)
        try:
            LOG.debug("Creating source session with name: %(sname)s and "
                      " target session name: %(tname)s",
                      {'sname': session_name, 'tname': rsession_name})
            src_ssn = self.client.new("replication/sessions")
            src_ssn.replication_peer_k2array = k2peer
            src_ssn.auto_configure_peer_volumes = "False"
            src_ssn.local_volume_group = vg
            src_ssn.replication_peer_volume_group_name = rvg_name
            src_ssn.remote_replication_session_name = rsession_name
            src_ssn.name = session_name
            src_ssn.rpo = rpo
            src_ssn.save()
            LOG.debug("Creating remote volume with name: %s",
                      rvol_name)
            self.client.new("replication/peer_volumes",
                            local_volume=vol,
                            name=rvol_name,
                            replication_session=src_ssn).save()
            src_ssn.state = "in_sync"
            src_ssn.save()
        except Exception as ex:
            LOG.exception(_LE("Replication for the volume %s has "
                              "failed."), vol.name)
            self._delete_by_ref(self.client, "replication/sessions",
                                session_name, 'session')
            self._delete_by_ref(self.target, "replication/sessions",
                                rsession_name, 'remote session')
            self._delete_by_ref(self.target, "volumes",
                                rvol_name, 'remote volume')
            self._delete_by_ref(self.client, "volumes", vol.name, "volume")
            self._delete_by_ref(self.target, "volume_groups",
                                rvg_name, "remote vg")
            self._delete_by_ref(self.client, "volume_groups", vg.name, "vg")
            raise exception.KaminarioCinderDriverException(
                reason=six.text_type(ex.message))

    def _delete_by_ref(self, device, url, name, msg):
        rs = device.search(url, name=name)
        for result in rs.hits:
            result.delete()
            LOG.debug("Deleting %(msg)s: %(name)s", {'msg': msg, 'name': name})

    @kaminario_logger
    def _failover_volume(self, volume):
        """Promoting a secondary volume to primary volume."""
        session_name = self.get_session_name(volume.id)
        rsession_name = self.get_rep_name(session_name)
        tgt_ssn = self.target.search("replication/sessions",
                                     name=rsession_name).hits[0]
        if tgt_ssn.state == 'in_sync':
            tgt_ssn.state = 'failed_over'
            tgt_ssn.save()
            LOG.debug("The target session: %s state is "
                      "changed to failed_over ", rsession_name)

    @kaminario_logger
    def failover_host(self, context, volumes, secondary_id=None):
        """Failover to replication target."""
        volume_updates = []
        if secondary_id and secondary_id != self.replica.backend_id:
            LOG.error(_LE("Kaminario driver received failover_host "
                          "request, but the backend is a non-replicated "
                          "device."))
            raise exception.UnableToFailOver(reason=_("Failover requested "
                                                      "on non replicated "
                                                      "backend."))
        for v in volumes:
            vol_name = self.get_volume_name(v['id'])
            rv = self.get_rep_name(vol_name)
            if self.target.search("volumes", name=rv).total:
                self._failover_volume(v)
                volume_updates.append(
                    {'volume_id': v['id'],
                     'updates':
                     {'replication_status':
                      fields.ReplicationStatus.FAILED_OVER}})
            else:
                volume_updates.append({'volume_id': v['id'],
                                       'updates': {'status': 'error', }})

        return self.replica.backend_id, volume_updates

    @kaminario_logger
    def create_volume_from_snapshot(self, volume, snapshot):
        """Create volume from snapshot.

        - search for snapshot and retention_policy
        - create a view from snapshot and attach view
        - create a volume and attach volume
        - copy data from attached view to attached volume
        - detach volume and view and finally delete view
        """
        snap_name = self.get_snap_name(snapshot.id)
        view_name = self.get_view_name(volume.id)
        vol_name = self.get_volume_name(volume.id)
        cview = src_attach_info = dest_attach_info = None
        rpolicy = self.get_policy()
        properties = utils.brick_get_connector_properties()
        LOG.debug("Searching for snapshot: %s in K2.", snap_name)
        snap_rs = self.client.search("snapshots", short_name=snap_name)
        if hasattr(snap_rs, 'hits') and snap_rs.total != 0:
            snap = snap_rs.hits[0]
            LOG.debug("Creating a view: %(view)s from snapshot: %(snap)s",
                      {'view': view_name, 'snap': snap_name})
            try:
                cview = self.client.new("snapshots",
                                        short_name=view_name,
                                        source=snap, retention_policy=rpolicy,
                                        is_exposable=True).save()
            except Exception as ex:
                LOG.exception(_LE("Creating a view: %(view)s from snapshot: "
                                  "%(snap)s failed"), {"view": view_name,
                                                       "snap": snap_name})
                raise exception.KaminarioCinderDriverException(
                    reason=six.text_type(ex.message))
        else:
            msg = _("Snapshot: %s search failed in K2.") % snap_name
            LOG.error(msg)
            raise exception.KaminarioCinderDriverException(reason=msg)

        try:
            conn = self.initialize_connection(cview, properties)
            src_attach_info = self._connect_device(conn)
            self.create_volume(volume)
            conn = self.initialize_connection(volume, properties)
            dest_attach_info = self._connect_device(conn)
            vol_utils.copy_volume(src_attach_info['device']['path'],
                                  dest_attach_info['device']['path'],
                                  snapshot.volume.size * units.Ki,
                                  self.configuration.volume_dd_blocksize,
                                  sparse=True)
            self.terminate_connection(volume, properties)
            self.terminate_connection(cview, properties)
        except Exception as ex:
            self.terminate_connection(cview, properties)
            self.terminate_connection(volume, properties)
            cview.delete()
            self.delete_volume(volume)
            LOG.exception(_LE("Copy to volume: %(vol)s from view: %(view)s "
                              "failed"), {"vol": vol_name, "view": view_name})
            raise exception.KaminarioCinderDriverException(
                reason=six.text_type(ex.message))

    @kaminario_logger
    def create_cloned_volume(self, volume, src_vref):
        """Create a clone from source volume.

        - attach source volume
        - create and attach new volume
        - copy data from attached source volume to attached new volume
        - detach both volumes
        """
        clone_name = self.get_volume_name(volume.id)
        src_name = self.get_volume_name(src_vref.id)
        src_vol = self.client.search("volumes", name=src_name)
        src_map = self.client.search("mappings", volume=src_vol)
        if src_map.total != 0:
            msg = _("K2 driver does not support clone of an attached volume. "
                    "To get this done, create a snapshot from the attached "
                    "volume and then create a volume from the snapshot.")
            LOG.error(msg)
            raise exception.KaminarioCinderDriverException(reason=msg)
        try:
            properties = utils.brick_get_connector_properties()
            conn = self.initialize_connection(src_vref, properties)
            src_attach_info = self._connect_device(conn)
            self.create_volume(volume)
            conn = self.initialize_connection(volume, properties)
            dest_attach_info = self._connect_device(conn)
            vol_utils.copy_volume(src_attach_info['device']['path'],
                                  dest_attach_info['device']['path'],
                                  src_vref.size * units.Ki,
                                  self.configuration.volume_dd_blocksize,
                                  sparse=True)

            self.terminate_connection(volume, properties)
            self.terminate_connection(src_vref, properties)
        except Exception as ex:
            self.terminate_connection(src_vref, properties)
            self.terminate_connection(volume, properties)
            self.delete_volume(volume)
            LOG.exception(_LE("Create a clone: %s failed."), clone_name)
            raise exception.KaminarioCinderDriverException(
                reason=six.text_type(ex.message))

    @kaminario_logger
    def delete_volume(self, volume):
        """Volume in K2 exists in a volume group.

        - delete the volume
        - delete the corresponding volume group
        """
        vg_name = self.get_volume_group_name(volume.id)
        vol_name = self.get_volume_name(volume.id)
        try:
            if self._get_is_replica(volume.volume_type) and self.replica:
                self._delete_volume_replica(volume, vg_name, vol_name)

            LOG.debug("Searching and deleting volume: %s in K2.", vol_name)
            vol_rs = self.client.search("volumes", name=vol_name)
            if vol_rs.total != 0:
                vol_rs.hits[0].delete()
            LOG.debug("Searching and deleting vg: %s in K2.", vg_name)
            vg_rs = self.client.search("volume_groups", name=vg_name)
            if vg_rs.total != 0:
                vg_rs.hits[0].delete()
        except Exception as ex:
            LOG.exception(_LE("Deletion of volume %s failed."), vol_name)
            raise exception.KaminarioCinderDriverException(
                reason=six.text_type(ex.message))

    @kaminario_logger
    def _delete_volume_replica(self, volume, vg_name, vol_name):
        rvg_name = self.get_rep_name(vg_name)
        rvol_name = self.get_rep_name(vol_name)
        session_name = self.get_session_name(volume.id)
        rsession_name = self.get_rep_name(session_name)
        src_ssn = self.client.search('replication/sessions',
                                     name=session_name).hits[0]
        tgt_ssn = self.target.search('replication/sessions',
                                     name=rsession_name).hits[0]
        src_ssn.state = 'suspended'
        src_ssn.save()
        self._check_for_status(tgt_ssn, 'suspended')
        src_ssn.state = 'idle'
        src_ssn.save()
        self._check_for_status(tgt_ssn, 'idle')
        tgt_ssn.delete()
        src_ssn.delete()

        LOG.debug("Searching and deleting snapshots for volume groups:"
                  "%(vg1)s, %(vg2)s in K2.", {'vg1': vg_name, 'vg2': rvg_name})
        vg = self.client.search('volume_groups', name=vg_name).hits
        rvg = self.target.search('volume_groups', name=rvg_name).hits
        snaps = self.client.search('snapshots', volume_group=vg).hits
        for s in snaps:
            s.delete()
        rsnaps = self.target.search('snapshots', volume_group=rvg).hits
        for s in rsnaps:
            s.delete()

        self._delete_by_ref(self.target, "volumes", rvol_name, 'remote volume')
        self._delete_by_ref(self.target, "volume_groups",
                            rvg_name, "remote vg")

    @kaminario_logger
    def _check_for_status(self, obj, status):
        while obj.state != status:
            obj.refresh()
            eventlet.sleep(1)
@kaminario_logger
|
||||
def get_volume_stats(self, refresh=False):
|
||||
if refresh:
|
||||
self.update_volume_stats()
|
||||
stats = self.stats
|
||||
stats['storage_protocol'] = self._protocol
|
||||
stats['driver_version'] = self.VERSION
|
||||
stats['vendor_name'] = self.VENDOR
|
||||
backend_name = self.configuration.safe_get('volume_backend_name')
|
||||
stats['volume_backend_name'] = (backend_name or
|
||||
self.__class__.__name__)
|
||||
return stats
|
||||
|
||||
def create_export(self, context, volume, connector):
|
||||
pass
|
||||
|
||||
def ensure_export(self, context, volume):
|
||||
pass
|
||||
|
||||
def remove_export(self, context, volume):
|
||||
pass
|
||||
|
||||
@kaminario_logger
|
||||
def create_snapshot(self, snapshot):
|
||||
"""Create a snapshot from a volume_group."""
|
||||
vg_name = self.get_volume_group_name(snapshot.volume_id)
|
||||
snap_name = self.get_snap_name(snapshot.id)
|
||||
rpolicy = self.get_policy()
|
||||
try:
|
||||
LOG.debug("Searching volume_group: %s in K2.", vg_name)
|
||||
vg = self.client.search("volume_groups", name=vg_name).hits[0]
|
||||
LOG.debug("Creating a snapshot: %(snap)s from vg: %(vg)s",
|
||||
{'snap': snap_name, 'vg': vg_name})
|
||||
self.client.new("snapshots", short_name=snap_name,
|
||||
source=vg, retention_policy=rpolicy,
|
||||
is_auto_deleteable=False).save()
|
||||
except Exception as ex:
|
||||
LOG.exception(_LE("Creation of snapshot: %s failed."), snap_name)
|
||||
raise exception.KaminarioCinderDriverException(
|
||||
reason=six.text_type(ex.message))
|
||||
|
||||
@kaminario_logger
|
||||
def delete_snapshot(self, snapshot):
|
||||
"""Delete a snapshot."""
|
||||
snap_name = self.get_snap_name(snapshot.id)
|
||||
try:
|
||||
LOG.debug("Searching and deleting snapshot: %s in K2.", snap_name)
|
||||
snap_rs = self.client.search("snapshots", short_name=snap_name)
|
||||
if snap_rs.total != 0:
|
||||
snap_rs.hits[0].delete()
|
||||
except Exception as ex:
|
||||
LOG.exception(_LE("Deletion of snapshot: %s failed."), snap_name)
|
||||
raise exception.KaminarioCinderDriverException(
|
||||
reason=six.text_type(ex.message))
|
||||
|
||||
@kaminario_logger
|
||||
def extend_volume(self, volume, new_size):
|
||||
"""Extend volume."""
|
||||
vol_name = self.get_volume_name(volume.id)
|
||||
try:
|
||||
LOG.debug("Searching volume: %s in K2.", vol_name)
|
||||
vol = self.client.search("volumes", name=vol_name).hits[0]
|
||||
vol.size = new_size * units.Mi
|
||||
LOG.debug("Extending volume: %s in K2.", vol_name)
|
||||
vol.save()
|
||||
except Exception as ex:
|
||||
LOG.exception(_LE("Extending volume: %s failed."), vol_name)
|
||||
raise exception.KaminarioCinderDriverException(
|
||||
reason=six.text_type(ex.message))
|
||||
|
||||
@kaminario_logger
|
||||
def update_volume_stats(self):
|
||||
conf = self.configuration
|
||||
LOG.debug("Searching system capacity in K2.")
|
||||
cap = self.client.search("system/capacity").hits[0]
|
||||
LOG.debug("Searching total volumes in K2 for updating stats.")
|
||||
total_volumes = self.client.search("volumes").total - 1
|
||||
provisioned_vol = cap.provisioned_volumes
|
||||
if (conf.auto_calc_max_oversubscription_ratio and cap.provisioned
|
||||
and (cap.total - cap.free) != 0):
|
||||
ratio = provisioned_vol / float(cap.total - cap.free)
|
||||
else:
|
||||
ratio = conf.max_over_subscription_ratio
|
||||
self.stats = {'QoS_support': False,
|
||||
'free_capacity_gb': cap.free / units.Mi,
|
||||
'total_capacity_gb': cap.total / units.Mi,
|
||||
'thin_provisioning_support': True,
|
||||
'sparse_copy_volume': True,
|
||||
'total_volumes': total_volumes,
|
||||
'thick_provisioning_support': False,
|
||||
'provisioned_capacity_gb': provisioned_vol / units.Mi,
|
||||
'max_oversubscription_ratio': ratio,
|
||||
'kaminario:thin_prov_type': 'dedup/nodedup',
|
||||
'replication_enabled': True,
|
||||
'kaminario:replication': True}
|
||||
|
||||
@kaminario_logger
|
||||
def get_initiator_host_name(self, connector):
|
||||
"""Return the initiator host name.
|
||||
|
||||
Valid characters: 0-9, a-z, A-Z, '-', '_'
|
||||
All other characters are replaced with '_'.
|
||||
Total characters in initiator host name: 32
|
||||
"""
|
||||
return re.sub('[^0-9a-zA-Z-_]', '_', connector.get('host', ''))[:32]
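    # Illustrative sketch (hypothetical connector, not part of the driver):
    # a connector {'host': 'compute-01.example.com'} yields the host name
    # 'compute-01_example_com' -- '.' is outside the allowed set and becomes
    # '_', and the result is capped at 32 characters:
    #
    #   re.sub('[^0-9a-zA-Z-_]', '_', 'compute-01.example.com')[:32]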

    @kaminario_logger
    def get_volume_group_name(self, vid):
        """Return the volume group name."""
        return "cvg-{0}".format(vid)

    @kaminario_logger
    def get_volume_name(self, vid):
        """Return the volume name."""
        return "cv-{0}".format(vid)

    @kaminario_logger
    def get_session_name(self, vid):
        """Return the replication session name."""
        return "ssn-{0}".format(vid)

    @kaminario_logger
    def get_snap_name(self, sid):
        """Return the snapshot name."""
        return "cs-{0}".format(sid)

    @kaminario_logger
    def get_view_name(self, vid):
        """Return the view name."""
        return "cview-{0}".format(vid)

    @kaminario_logger
    def get_rep_name(self, name):
        """Return the corresponding replication name."""
        return "r{0}".format(name)

    @kaminario_logger
    def _delete_host_by_name(self, name):
        """Delete host by name."""
        host_rs = self.client.search("hosts", name=name)
        if hasattr(host_rs, "hits") and host_rs.total != 0:
            host = host_rs.hits[0]
            host.delete()

    @kaminario_logger
    def get_policy(self):
        """Return the retention policy."""
        try:
            LOG.debug("Searching for retention_policy in K2.")
            return self.client.search("retention_policies",
                                      name="Best_Effort_Retention").hits[0]
        except Exception as ex:
            LOG.exception(_LE("Retention policy search failed in K2."))
            raise exception.KaminarioCinderDriverException(
                reason=six.text_type(ex.message))

    @kaminario_logger
    def _get_volume_object(self, volume):
        vol_name = self.get_volume_name(volume.id)
        if volume.replication_status == 'failed-over':
            vol_name = self.get_rep_name(vol_name)
            self.client = self.target
        LOG.debug("Searching volume: %s in K2.", vol_name)
        vol_rs = self.client.search("volumes", name=vol_name)
        if not hasattr(vol_rs, 'hits') or vol_rs.total == 0:
            msg = _("Unable to find volume: %s from K2.") % vol_name
            LOG.error(msg)
            raise exception.KaminarioCinderDriverException(reason=msg)
        return vol_rs.hits[0]

    @kaminario_logger
    def _get_lun_number(self, vol, host):
        volsnap = None
        LOG.debug("Searching volsnaps in K2.")
        volsnap_rs = self.client.search("volsnaps", snapshot=vol)
        if hasattr(volsnap_rs, 'hits') and volsnap_rs.total != 0:
            volsnap = volsnap_rs.hits[0]

        LOG.debug("Searching mapping of volsnap in K2.")
        map_rs = self.client.search("mappings", volume=volsnap, host=host)
        return map_rs.hits[0].lun

    def initialize_connection(self, volume, connector):
        pass

    @kaminario_logger
    def terminate_connection(self, volume, connector):
        """Terminate connection of volume from host."""
        # Get volume object
        if type(volume).__name__ != 'RestObject':
            vol_name = self.get_volume_name(volume.id)
            if volume.replication_status == 'failed-over':
                vol_name = self.get_rep_name(vol_name)
                self.client = self.target
            LOG.debug("Searching volume: %s in K2.", vol_name)
            volume_rs = self.client.search("volumes", name=vol_name)
            if hasattr(volume_rs, "hits") and volume_rs.total != 0:
                volume = volume_rs.hits[0]
        else:
            vol_name = volume.name

        # Get host object.
        host_name = self.get_initiator_host_name(connector)
        host_rs = self.client.search("hosts", name=host_name)
        if hasattr(host_rs, "hits") and host_rs.total != 0 and volume:
            host = host_rs.hits[0]
            LOG.debug("Searching and deleting mapping of volume: %(name)s to "
                      "host: %(host)s", {'host': host_name, 'name': vol_name})
            map_rs = self.client.search("mappings", volume=volume, host=host)
            if hasattr(map_rs, "hits") and map_rs.total != 0:
                map_rs.hits[0].delete()
            if self.client.search("mappings", host=host).total == 0:
                LOG.debug("Deleting initiator hostname: %s in K2.", host_name)
                host.delete()
        else:
            LOG.warning(_LW("Host: %s not found on K2."), host_name)

    def k2_initialize_connection(self, volume, connector):
        # Get volume object.
        if type(volume).__name__ != 'RestObject':
            vol = self._get_volume_object(volume)
        else:
            vol = volume
        # Get host object.
        host, host_rs, host_name = self._get_host_object(connector)
        try:
            # Map volume object to host object.
            LOG.debug("Mapping volume: %(vol)s to host: %(host)s",
                      {'host': host_name, 'vol': vol.name})
            mapping = self.client.new("mappings", volume=vol, host=host).save()
        except Exception as ex:
            if host_rs.total == 0:
                self._delete_host_by_name(host_name)
            LOG.exception(_LE("Unable to map volume: %(vol)s to host: "
                              "%(host)s"), {'host': host_name,
                                            'vol': vol.name})
            raise exception.KaminarioCinderDriverException(
                reason=six.text_type(ex.message))
        # Get lun number.
        if type(volume).__name__ == 'RestObject':
            return self._get_lun_number(vol, host)
        else:
            return mapping.lun

    def _get_host_object(self, connector):
        pass

    def _get_is_dedup(self, vol_type):
        if vol_type:
            specs_val = vol_type.get('extra_specs', {}).get(
                'kaminario:thin_prov_type')
            if specs_val == 'nodedup':
                return False
            elif CONF.kaminario_nodedup_substring in vol_type.get('name'):
                LOG.info(_LI("'kaminario_nodedup_substring' option is "
                             "deprecated in favour of 'kaminario:thin_prov_"
                             "type' in extra-specs and will be removed in "
                             "the 10.0.0 release."))
                return False
            else:
                return True
        else:
            return True

    def _get_is_replica(self, vol_type):
        replica = False
        if vol_type and vol_type.get('extra_specs'):
            specs = vol_type.get('extra_specs')
            if (specs.get('kaminario:replication') == 'enabled' and
                    self.replica):
                replica = True
        return replica

    def _get_replica_status(self, vg_name):
        vg = self.client.search("volume_groups", name=vg_name).hits[0]
        if self.client.search("replication/sessions",
                              local_volume_group=vg).total != 0:
            return True
        else:
            return False

    def manage_existing(self, volume, existing_ref):
        vol_name = existing_ref['source-name']
        new_name = self.get_volume_name(volume.id)
        vg_new_name = self.get_volume_group_name(volume.id)
        vg_name = None
        is_dedup = self._get_is_dedup(volume.get('volume_type'))
        try:
            LOG.debug("Searching volume: %s in K2.", vol_name)
            vol = self.client.search("volumes", name=vol_name).hits[0]
            vg = vol.volume_group
            vg_replica = self._get_replica_status(vg.name)
            vol_map = False
            if self.client.search("mappings", volume=vol).total != 0:
                vol_map = True
            if is_dedup != vg.is_dedup or vg_replica or vol_map:
                raise exception.ManageExistingInvalidReference(
                    existing_ref=existing_ref,
                    reason=_('Manage volume type invalid.'))
            vol.name = new_name
            vg_name = vg.name
            LOG.debug("Manage new volume name: %s", new_name)
            vg.name = vg_new_name
            LOG.debug("Manage volume group name: %s", vg_new_name)
            vg.save()
            LOG.debug("Manage volume: %s in K2.", vol_name)
            vol.save()
        except Exception as ex:
            vg_rs = self.client.search("volume_groups", name=vg_new_name)
            if hasattr(vg_rs, 'hits') and vg_rs.total != 0:
                vg = vg_rs.hits[0]
                if vg_name and vg.name == vg_new_name:
                    vg.name = vg_name
                    LOG.debug("Updating vg new name to old name: %s ", vg_name)
                    vg.save()
            LOG.exception(_LE("manage volume: %s failed."), vol_name)
            raise exception.ManageExistingInvalidReference(
                existing_ref=existing_ref,
                reason=six.text_type(ex.message))

    def manage_existing_get_size(self, volume, existing_ref):
        vol_name = existing_ref['source-name']
        v_rs = self.client.search("volumes", name=vol_name)
        if hasattr(v_rs, 'hits') and v_rs.total != 0:
            vol = v_rs.hits[0]
            size = vol.size / units.Mi
            return math.ceil(size)
        else:
            raise exception.ManageExistingInvalidReference(
                existing_ref=existing_ref,
                reason=_('Unable to get size of manage volume.'))

    def after_volume_copy(self, ctxt, volume, new_volume, remote=None):
        self.delete_volume(volume)
        vg_name_old = self.get_volume_group_name(volume.id)
        vol_name_old = self.get_volume_name(volume.id)
        vg_name_new = self.get_volume_group_name(new_volume.id)
        vol_name_new = self.get_volume_name(new_volume.id)
        vg_new = self.client.search("volume_groups", name=vg_name_new).hits[0]
        vg_new.name = vg_name_old
        vg_new.save()
        vol_new = self.client.search("volumes", name=vol_name_new).hits[0]
        vol_new.name = vol_name_old
        vol_new.save()

    def retype(self, ctxt, volume, new_type, diff, host):
        old_type = volume.get('volume_type')
        vg_name = self.get_volume_group_name(volume.id)
        old_rep_type = self._get_replica_status(vg_name)
        new_rep_type = self._get_is_replica(new_type)
        new_prov_type = self._get_is_dedup(new_type)
        old_prov_type = self._get_is_dedup(old_type)
        # Change dedup<->nodedup with add/remove replication is complex in K2
        # since K2 does not have api to change dedup<->nodedup.
        if new_prov_type == old_prov_type:
            if not old_rep_type and new_rep_type:
                self._add_replication(volume)
                return True
            elif old_rep_type and not new_rep_type:
                self._delete_replication(volume)
                return True
            elif not new_rep_type and not old_rep_type:
                LOG.debug("Use '--migration-policy on-demand' to change "
                          "'dedup without replication'<->'nodedup without "
                          "replication'.")
                return False
        else:
            LOG.error(_LE('Change from type1: %(type1)s to type2: %(type2)s '
                          'is not supported directly in K2.'),
                      {'type1': old_type, 'type2': new_type})
            return False

    def _add_replication(self, volume):
        vg_name = self.get_volume_group_name(volume.id)
        vol_name = self.get_volume_name(volume.id)
        LOG.debug("Searching volume group with name: %(name)s",
                  {'name': vg_name})
        vg = self.client.search("volume_groups", name=vg_name).hits[0]
        LOG.debug("Searching volume with name: %(name)s",
                  {'name': vol_name})
        vol = self.client.search("volumes", name=vol_name).hits[0]
        self._create_volume_replica(volume, vg, vol, self.replica.rpo)

    def _delete_replication(self, volume):
        vg_name = self.get_volume_group_name(volume.id)
        vol_name = self.get_volume_name(volume.id)
        self._delete_volume_replica(volume, vg_name, vol_name)

@ -0,0 +1,184 @@
# Copyright (c) 2016 by Kaminario Technologies, Ltd.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Volume driver for Kaminario K2 all-flash arrays."""
import six

from oslo_log import log as logging

from cinder import coordination
from cinder import exception
from cinder.i18n import _, _LE
from cinder.objects import fields
from cinder.volume.drivers.kaminario import kaminario_common as common
from cinder.zonemanager import utils as fczm_utils

LOG = logging.getLogger(__name__)
kaminario_logger = common.kaminario_logger


class KaminarioFCDriver(common.KaminarioCinderDriver):
    """Kaminario K2 FC Volume Driver.

    Version history:
        1.0 - Initial driver
        1.1 - Added manage/unmanage and extra-specs support for nodedup
        1.2 - Added replication support
        1.3 - Added retype support
    """

    VERSION = '1.3'

    # ThirdPartySystems wiki page name
    CI_WIKI_NAME = "Kaminario_K2_CI"

    @kaminario_logger
    def __init__(self, *args, **kwargs):
        super(KaminarioFCDriver, self).__init__(*args, **kwargs)
        self._protocol = 'FC'
        self.lookup_service = fczm_utils.create_lookup_service()

    @fczm_utils.AddFCZone
    @kaminario_logger
    @coordination.synchronized('{self.k2_lock_name}')
    def initialize_connection(self, volume, connector):
        """Attach K2 volume to host."""
        # Check wwpns in host connector.
        if not connector.get('wwpns'):
            msg = _("No wwpns found in host connector.")
            LOG.error(msg)
            raise exception.KaminarioCinderDriverException(reason=msg)
        # Get target wwpns.
        target_wwpns = self.get_target_info(volume)
        # Map volume.
        lun = self.k2_initialize_connection(volume, connector)
        # Create initiator-target mapping.
        target_wwpns, init_target_map = self._build_initiator_target_map(
            connector, target_wwpns)
        # Return target volume information.
        return {'driver_volume_type': 'fibre_channel',
                'data': {"target_discovered": True,
                         "target_lun": lun,
                         "target_wwn": target_wwpns,
                         "initiator_target_map": init_target_map}}

    @fczm_utils.RemoveFCZone
    @kaminario_logger
    @coordination.synchronized('{self.k2_lock_name}')
    def terminate_connection(self, volume, connector, **kwargs):
        super(KaminarioFCDriver, self).terminate_connection(volume, connector)
        properties = {"driver_volume_type": "fibre_channel", "data": {}}
        host_name = self.get_initiator_host_name(connector)
        host_rs = self.client.search("hosts", name=host_name)
        # In terminate_connection, host_entry is deleted if host
        # is not attached to any volume
        if host_rs.total == 0:
            # Get target wwpns.
            target_wwpns = self.get_target_info(volume)
            target_wwpns, init_target_map = self._build_initiator_target_map(
                connector, target_wwpns)
            properties["data"] = {"target_wwn": target_wwpns,
                                  "initiator_target_map": init_target_map}
        return properties

    @kaminario_logger
    def get_target_info(self, volume):
        rep_status = fields.ReplicationStatus.FAILED_OVER
        if (hasattr(volume, 'replication_status') and
                volume.replication_status == rep_status):
            self.client = self.target
        LOG.debug("Searching target wwpns in K2.")
        fc_ports_rs = self.client.search("system/fc_ports")
        target_wwpns = []
        if hasattr(fc_ports_rs, 'hits') and fc_ports_rs.total != 0:
            for port in fc_ports_rs.hits:
                if port.pwwn:
                    target_wwpns.append((port.pwwn).replace(':', ''))
        if not target_wwpns:
            msg = _("Unable to get FC target wwpns from K2.")
            LOG.error(msg)
            raise exception.KaminarioCinderDriverException(reason=msg)
        return target_wwpns

    @kaminario_logger
    def _get_host_object(self, connector):
        host_name = self.get_initiator_host_name(connector)
        LOG.debug("Searching initiator hostname: %s in K2.", host_name)
        host_rs = self.client.search("hosts", name=host_name)
        host_wwpns = connector['wwpns']
        if host_rs.total == 0:
            try:
                LOG.debug("Creating initiator hostname: %s in K2.", host_name)
                host = self.client.new("hosts", name=host_name,
                                       type="Linux").save()
            except Exception as ex:
                LOG.exception(_LE("Unable to create host : %s in K2."),
                              host_name)
                raise exception.KaminarioCinderDriverException(
                    reason=six.text_type(ex.message))
        else:
            # Use existing host.
            LOG.debug("Use existing initiator hostname: %s in K2.", host_name)
            host = host_rs.hits[0]
        # Adding host wwpn.
        for wwpn in host_wwpns:
            wwpn = ":".join([wwpn[i:i + 2] for i in range(0, len(wwpn), 2)])
            if self.client.search("host_fc_ports", pwwn=wwpn,
                                  host=host).total == 0:
                LOG.debug("Adding wwpn: %(wwpn)s to host: "
                          "%(host)s in K2.", {'wwpn': wwpn,
                                              'host': host_name})
                try:
                    self.client.new("host_fc_ports", pwwn=wwpn,
                                    host=host).save()
                except Exception as ex:
                    if host_rs.total == 0:
                        self._delete_host_by_name(host_name)
                    LOG.exception(_LE("Unable to add wwpn : %(wwpn)s to "
                                      "host: %(host)s in K2."),
                                  {'wwpn': wwpn, 'host': host_name})
                    raise exception.KaminarioCinderDriverException(
                        reason=six.text_type(ex.message))
        return host, host_rs, host_name

    @kaminario_logger
    def _build_initiator_target_map(self, connector, all_target_wwns):
        """Build the target_wwns and the initiator target map."""
        target_wwns = []
        init_targ_map = {}

        if self.lookup_service is not None:
            # use FC san lookup.
            dev_map = self.lookup_service.get_device_mapping_from_network(
                connector.get('wwpns'),
                all_target_wwns)

            for fabric_name in dev_map:
                fabric = dev_map[fabric_name]
                target_wwns += fabric['target_port_wwn_list']
                for initiator in fabric['initiator_port_wwn_list']:
                    if initiator not in init_targ_map:
                        init_targ_map[initiator] = []
                    init_targ_map[initiator] += fabric['target_port_wwn_list']
                    init_targ_map[initiator] = list(set(
                        init_targ_map[initiator]))
            target_wwns = list(set(target_wwns))
        else:
            initiator_wwns = connector.get('wwpns', [])
            target_wwns = all_target_wwns

            for initiator in initiator_wwns:
                init_targ_map[initiator] = target_wwns

        return target_wwns, init_targ_map
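
    # Illustrative sketch (hypothetical WWPNs, not from the driver): without
    # a FC san lookup service, every initiator is mapped to all target
    # ports, so connector wwpns ['10000090fa000001'] and targets
    # ['50014380029f0001'] produce:
    #
    #   (['50014380029f0001'],
    #    {'10000090fa000001': ['50014380029f0001']})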
@ -0,0 +1,127 @@
# Copyright (c) 2016 by Kaminario Technologies, Ltd.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Volume driver for Kaminario K2 all-flash arrays."""
import six

from oslo_log import log as logging

from cinder import coordination
from cinder import exception
from cinder.i18n import _, _LE
from cinder import interface
from cinder.objects import fields
from cinder.volume.drivers.kaminario import kaminario_common as common

ISCSI_TCP_PORT = "3260"
LOG = logging.getLogger(__name__)
kaminario_logger = common.kaminario_logger


@interface.volumedriver
class KaminarioISCSIDriver(common.KaminarioCinderDriver):
    """Kaminario K2 iSCSI Volume Driver.

    Version history:
        1.0 - Initial driver
        1.1 - Added manage/unmanage and extra-specs support for nodedup
        1.2 - Added replication support
        1.3 - Added retype support
    """

    VERSION = '1.3'

    # ThirdPartySystems wiki page name
    CI_WIKI_NAME = "Kaminario_K2_CI"

    @kaminario_logger
    def __init__(self, *args, **kwargs):
        super(KaminarioISCSIDriver, self).__init__(*args, **kwargs)
        self._protocol = 'iSCSI'

    @kaminario_logger
    @coordination.synchronized('{self.k2_lock_name}')
    def initialize_connection(self, volume, connector):
        """Attach K2 volume to host."""
        # Get target_portal and target iqn.
        iscsi_portal, target_iqn = self.get_target_info(volume)
        # Map volume.
        lun = self.k2_initialize_connection(volume, connector)
        # Return target volume information.
        return {"driver_volume_type": "iscsi",
                "data": {"target_iqn": target_iqn,
                         "target_portal": iscsi_portal,
                         "target_lun": lun,
                         "target_discovered": True}}

    @kaminario_logger
    @coordination.synchronized('{self.k2_lock_name}')
    def terminate_connection(self, volume, connector, **kwargs):
        super(KaminarioISCSIDriver, self).terminate_connection(volume,
                                                               connector)

    @kaminario_logger
    def get_target_info(self, volume):
        rep_status = fields.ReplicationStatus.FAILED_OVER
        if (hasattr(volume, 'replication_status') and
                volume.replication_status == rep_status):
            self.client = self.target
        LOG.debug("Searching first iscsi port ip without wan in K2.")
        iscsi_ip_rs = self.client.search("system/net_ips", wan_port="")
        iscsi_ip = target_iqn = None
        if hasattr(iscsi_ip_rs, 'hits') and iscsi_ip_rs.total != 0:
            iscsi_ip = iscsi_ip_rs.hits[0].ip_address
        if not iscsi_ip:
            msg = _("Unable to get ISCSI IP address from K2.")
            LOG.error(msg)
            raise exception.KaminarioCinderDriverException(reason=msg)
        iscsi_portal = "{0}:{1}".format(iscsi_ip, ISCSI_TCP_PORT)
        LOG.debug("Searching system state for target iqn in K2.")
        sys_state_rs = self.client.search("system/state")

        if hasattr(sys_state_rs, 'hits') and sys_state_rs.total != 0:
            target_iqn = sys_state_rs.hits[0].iscsi_qualified_target_name

        if not target_iqn:
            msg = _("Unable to get target iqn from K2.")
            LOG.error(msg)
            raise exception.KaminarioCinderDriverException(reason=msg)
        return iscsi_portal, target_iqn
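
    # Illustrative sketch (hypothetical address, not from the driver): with
    # a K2 iSCSI port at 10.0.0.5, the portal returned above is
    # '10.0.0.5:3260' (the fixed ISCSI_TCP_PORT), paired with the array's
    # target iqn from system/state.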

    @kaminario_logger
    def _get_host_object(self, connector):
        host_name = self.get_initiator_host_name(connector)
        LOG.debug("Searching initiator hostname: %s in K2.", host_name)
        host_rs = self.client.search("hosts", name=host_name)
        # Create a host if it does not exist.
        if host_rs.total == 0:
            try:
                LOG.debug("Creating initiator hostname: %s in K2.", host_name)
                host = self.client.new("hosts", name=host_name,
                                       type="Linux").save()
                LOG.debug("Adding iqn: %(iqn)s to host: %(host)s in K2.",
                          {'iqn': connector['initiator'], 'host': host_name})
                iqn = self.client.new("host_iqns", iqn=connector['initiator'],
                                      host=host)
                iqn.save()
            except Exception as ex:
                self._delete_host_by_name(host_name)
                LOG.exception(_LE("Unable to create host: %s in K2."),
                              host_name)
                raise exception.KaminarioCinderDriverException(
                    reason=six.text_type(ex.message))
        else:
            LOG.debug("Use existing initiator hostname: %s in K2.", host_name)
            host = host_rs.hits[0]
        return host, host_rs, host_name
@ -0,0 +1,11 @@
module Puppet::Parser::Functions
  newfunction(:get_replication_device, :type => :rvalue) do |args|
    ip = args[0].to_s
    login = args[1]
    password = args[2]
    rpo = args[3]
    # Build the replication_device option value for cinder.conf.
    "backend_id:#{ip},login:#{login},password:#{password},rpo:#{rpo}"
  end
end
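
# Illustrative sketch (hypothetical values, not part of the module): for
# args ['10.10.10.10', 'admin', 'secret', '60'] the function returns
#   "backend_id:10.10.10.10,login:admin,password:secret,rpo:60"
# which is the replication_device string written into cinder.conf.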
@ -0,0 +1,9 @@
module Puppet::Parser::Functions
  newfunction(:section_name, :type => :rvalue) do |args|
    ip = args[0]
    str = args[1]
    # Build the cinder.conf section name, "<backend_name>_<ip>".
    "#{str}_#{ip}"
  end
end
@ -0,0 +1,39 @@
class kaminario::driver {

  file { '/usr/lib/python2.7/dist-packages/cinder/volume/drivers/kaminario':
    ensure => 'directory',
    owner  => 'root',
    group  => 'root',
    mode   => '0755',
  }

  file { '/usr/lib/python2.7/dist-packages/cinder/volume/drivers/kaminario/__init__.py':
    mode   => '0644',
    owner  => 'root',
    group  => 'root',
    source => 'puppet:///modules/kaminario/__init__.py',
  }

  file { '/usr/lib/python2.7/dist-packages/cinder/volume/drivers/kaminario/kaminario_common.py':
    mode   => '0644',
    owner  => 'root',
    group  => 'root',
    source => 'puppet:///modules/kaminario/kaminario_common.py',
  }

  file { '/usr/lib/python2.7/dist-packages/cinder/volume/drivers/kaminario/kaminario_fc.py':
    mode   => '0644',
    owner  => 'root',
    group  => 'root',
    source => 'puppet:///modules/kaminario/kaminario_fc.py',
  }

  file { '/usr/lib/python2.7/dist-packages/cinder/volume/drivers/kaminario/kaminario_iscsi.py':
    mode   => '0644',
    owner  => 'root',
    group  => 'root',
    source => 'puppet:///modules/kaminario/kaminario_iscsi.py',
  }

  file { '/usr/lib/python2.7/dist-packages/cinder/exception.py':
    mode   => '0644',
    owner  => 'root',
    group  => 'root',
    source => 'puppet:///modules/kaminario/exception.py',
  }

}
@@ -0,0 +1,82 @@
class kaminario::config {
  $num = [ '0', '1', '2', '3', '4', '5' ]
  $plugin_settings = hiera('cinder_kaminario')
  each($num) |$value| {
    config { "plugin_${value}":
      cinder_node          => $plugin_settings["cinder_node_${value}"],
      storage_protocol     => $plugin_settings["storage_protocol_${value}"],
      backend_name         => $plugin_settings["backend_name_${value}"],
      storage_user         => $plugin_settings["storage_user_${value}"],
      storage_password     => $plugin_settings["storage_password_${value}"],
      storage_ip           => $plugin_settings["storage_ip_${value}"],
      enable_replication   => $plugin_settings["enable_replication_${value}"],
      replication_ip       => $plugin_settings["replication_ip_${value}"],
      replication_login    => $plugin_settings["replication_login_${value}"],
      replication_rpo      => $plugin_settings["replication_rpo_${value}"],
      replication_password => $plugin_settings["replication_password_${value}"],
      num                  => $value,
    }
  }
}

define config (
  $storage_protocol,
  $backend_name,
  $storage_user,
  $storage_password,
  $storage_ip,
  $num,
  $cinder_node,
  $enable_replication,
  $replication_ip,
  $replication_login,
  $replication_rpo,
  $replication_password,
) {

  $sec_name    = section_name($storage_ip, $backend_name)
  $config_file = '/etc/cinder/cinder.conf'

  if $cinder_node == hiera('user_node_name') {

    if $storage_protocol == 'FC' {
      ini_subsetting { "enable_backend_${num}":
        ensure               => present,
        section              => 'DEFAULT',
        key_val_separator    => '=',
        path                 => $config_file,
        setting              => 'enabled_backends',
        subsetting           => $backend_name,
        subsetting_separator => ',',
      } ->
      cinder_config {
        "${sec_name}/volume_driver"       : value => 'cinder.volume.drivers.kaminario.kaminario_fc.KaminarioFCDriver';
        "${sec_name}/volume_backend_name" : value => $backend_name;
        "${sec_name}/san_ip"              : value => $storage_ip;
        "${sec_name}/san_login"           : value => $storage_user;
        "${sec_name}/san_password"        : value => $storage_password;
      }

      if $enable_replication == true {
        $replication_device = get_replication_device($replication_ip, $replication_login, $replication_password, $replication_rpo)
        cinder_config {
          "${sec_name}/replication_device" : value => $replication_device;
        }
      }
    }

    if $storage_protocol == 'ISCSI' {
      ini_subsetting { "enable_backend_${num}":
        ensure               => present,
        section              => 'DEFAULT',
        key_val_separator    => '=',
        path                 => $config_file,
        setting              => 'enabled_backends',
        subsetting           => $backend_name,
        subsetting_separator => ',',
      } ->
      cinder_config {
        "${sec_name}/volume_driver"       : value => 'cinder.volume.drivers.kaminario.kaminario_iscsi.KaminarioISCSIDriver';
        "${sec_name}/volume_backend_name" : value => $backend_name;
        "${sec_name}/san_ip"              : value => $storage_ip;
        "${sec_name}/san_login"           : value => $storage_user;
        "${sec_name}/san_password"        : value => $storage_password;
      }

      if $enable_replication == true {
        $replication_device = get_replication_device($replication_ip, $replication_login, $replication_password, $replication_rpo)
        cinder_config {
          "${sec_name}/replication_device" : value => $replication_device;
        }
      }
    }
  }
}
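For a single backend, the net effect of the `ini_subsetting` and `cinder_config` resources above is one extra section in `/etc/cinder/cinder.conf` plus the backend name appended to `enabled_backends` in `[DEFAULT]`. A minimal Python sketch of both pieces (the backend name and credentials are made-up example values; the real section name comes from the plugin's custom `section_name()` function, which is not shown here):

```python
def append_backend(enabled, backend, sep=','):
    """Mimic the ini_subsetting resource: append `backend` to the
    comma-separated enabled_backends value only if it is not already there."""
    parts = [p for p in enabled.split(sep) if p]
    if backend not in parts:
        parts.append(backend)
    return sep.join(parts)


def backend_section(backend_name, ip, user, password, protocol='FC'):
    """Key/value pairs the cinder_config resources write for one backend."""
    driver = ('cinder.volume.drivers.kaminario.kaminario_fc.KaminarioFCDriver'
              if protocol == 'FC' else
              'cinder.volume.drivers.kaminario.kaminario_iscsi.KaminarioISCSIDriver')
    return {
        'volume_driver': driver,
        'volume_backend_name': backend_name,
        'san_ip': ip,
        'san_login': user,
        'san_password': password,
    }
```

Because `ini_subsetting` only adds the subsetting when absent, re-running the manifest does not duplicate the backend in `enabled_backends`.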
@@ -0,0 +1,8 @@
class kaminario::krest {
  package { 'python-pip':
    ensure => installed,
  }
  package { 'krest':
    ensure   => installed,
    provider => pip,
    require  => Package['python-pip'],
  }
}
@@ -0,0 +1,45 @@
class kaminario::type {
  $num = [ '0', '1', '2', '3', '4', '5' ]
  $plugin_settings = hiera('cinder_kaminario')
  each($num) |$value| {
    kaminario_type { "plugin_${value}":
      create_type  => $plugin_settings["create_type_${value}"],
      options      => $plugin_settings["options_${value}"],
      backend_name => $plugin_settings["backend_name_${value}"],
    }
  }
}

define kaminario_type ($create_type, $options, $backend_name) {
  if $create_type == true {
    case $options {
      'enable_replication_type': {
        cinder_type { $backend_name:
          ensure     => present,
          properties => ["volume_backend_name=${backend_name}", 'kaminario:replication=enabled'],
        }
      }
      'enable_dedup': {
        cinder_type { $backend_name:
          ensure     => present,
          properties => ["volume_backend_name=${backend_name}", 'kaminario:thin_prov_type=nodedup'],
        }
      }
      'replication_dedup': {
        # Replication plus no-dedup extra specs combined.
        cinder_type { $backend_name:
          ensure     => present,
          properties => ["volume_backend_name=${backend_name}", 'kaminario:replication=enabled', 'kaminario:thin_prov_type=nodedup'],
        }
      }
      default: {
        cinder_type { $backend_name:
          ensure     => present,
          properties => ["volume_backend_name=${backend_name}"],
        }
      }
    }
  }
}
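The case statement above maps a per-backend `options_N` hiera key to the extra specs of the volume type it creates. The mapping can be sketched in Python (the `replication_dedup` branch is assumed to combine the replication and no-dedup specs, and any unrecognized value falls back to the backend name alone):

```python
def type_properties(backend_name, options):
    """Extra specs for the cinder_type resource, keyed by the options value."""
    base = ['volume_backend_name=%s' % backend_name]
    extra = {
        'enable_replication_type': ['kaminario:replication=enabled'],
        'enable_dedup': ['kaminario:thin_prov_type=nodedup'],
        'replication_dedup': ['kaminario:replication=enabled',
                              'kaminario:thin_prov_type=nodedup'],
    }
    return base + extra.get(options, [])
```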
@@ -0,0 +1,39 @@
- id: kaminario_parser
  type: puppet
  version: 2.1.0
  groups: [cinder, primary-controller, controller]
  requires: [top-role-cinder]
  required_for: [kaminario_cinder]
  condition:
    yaql_exp: "changedAny($.storage, $.cinder_kaminario)"
  parameters:
    puppet_manifest: puppet/manifests/cinder_parser.pp
    puppet_modules: puppet/modules:/etc/puppet/modules
    timeout: 360

- id: kaminario_cinder
  type: puppet
  version: 2.1.0
  groups: [cinder]
  requires: [kaminario_parser]
  required_for: [deploy_end]
  condition:
    yaql_exp: "changedAny($.storage, $.cinder_kaminario)"
  parameters:
    puppet_manifest: puppet/manifests/cinder_kaminario.pp
    puppet_modules: puppet/modules:/etc/puppet/modules
    timeout: 360

- id: kaminario_types
  type: puppet
  version: 2.1.0
  groups: [primary-controller]
  requires: [openstack-cinder]
  required_for: [deploy_end]
  condition:
    yaql_exp: "changedAny($.storage, $.cinder_kaminario)"
  parameters:
    puppet_manifest: puppet/manifests/cinder_type.pp
    puppet_modules: puppet/modules:/etc/puppet/modules
    timeout: 360
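The `requires`/`required_for` keys above give Fuel a dependency graph that it resolves before running the tasks, so `kaminario_parser` always runs after `top-role-cinder` and before `kaminario_cinder`. A toy topological sort over just those edges (the task graph is reproduced from the YAML; the sort itself is illustrative, not Fuel's actual scheduler):

```python
def topo_order(deps):
    """Kahn-style topological sort; deps maps task -> set of prerequisites."""
    deps = {t: set(r) for t, r in deps.items()}
    order = []
    while deps:
        # Tasks with no unmet prerequisites can run now (sorted for determinism).
        ready = sorted(t for t, r in deps.items() if not r)
        if not ready:
            raise ValueError('dependency cycle')
        for t in ready:
            order.append(t)
            del deps[t]
        for r in deps.values():
            r.difference_update(ready)
    return order


# Edges taken from the deployment task definitions above.
tasks = {
    'top-role-cinder': set(),
    'kaminario_parser': {'top-role-cinder'},
    'kaminario_cinder': {'kaminario_parser'},
}
```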
File diff suppressed because it is too large
@@ -0,0 +1,34 @@
# Plugin name
name: cinder_kaminario
# Human-readable name for your plugin
title: Kaminario For Cinder
# Plugin version
version: '1.0.0'
# Description
description: Enable Kaminario Storage Array as a Cinder backend
# Required fuel version
fuel_version: ['9.0']
# Specify license of your plugin
licenses: ['Apache License Version 2.0']
# Specify author or company name
authors: ['Biarca']
# A link to the plugin's page
homepage: 'https://github.com/openstack/fuel-plugin-cinder-kaminario'
# Specify a group which your plugin implements, possible options:
# network, storage, storage::cinder, storage::glance, hypervisor,
# equipment
groups: ['storage::cinder']
# Change `false` to `true` if the plugin can be installed in the environment
# after the deployment.
is_hotpluggable: true

# The plugin is compatible with releases in the list
releases:
  - os: ubuntu
    version: mitaka-9.0
    mode: ['ha']
    deployment_scripts_path: deployment_scripts/
    repository_path: repositories/ubuntu

# Version of plugin package
package_version: '4.0.0'
@@ -0,0 +1,5 @@
#!/bin/bash

# Add here any of the actions which are required before plugin build,
# like packages building, packages downloading from mirrors and so on.
# The script should return 0 if there were no errors.
@@ -0,0 +1,7 @@
volumes_roles_mapping:
  # Default role mapping
  fuel-plugin-cinder-kaminario_role:
    - {allocate_size: "min", id: "os"}

# Set here new volumes for your role
volumes: []