Retire repository

All Fuel repositories in the openstack namespace are already retired; retire
the remaining Fuel repositories in the x namespace as well, since they are
unused now.

This change removes all content from the repository and adds the usual
README file to point out that the repository is retired following the
process from
https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project

See also
http://lists.openstack.org/pipermail/openstack-discuss/2019-December/011675.html

A related change is: https://review.opendev.org/699752 .

Change-Id: I632d24d4c4d26ba397c72eccf777b97000d0c26e
Author: Andreas Jaeger, 2019-12-18 19:45:35 +01:00
Parent: efdec4e004
Commit: e3d0e74a76
95 changed files with 10 additions and 13647 deletions
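For readers unfamiliar with the linked retirement process, the content-removal step amounts to roughly the following. This is a simplified sketch assuming a standard local clone, not the authoritative procedure from the infra manual:

```
$ git checkout -b retire-repo
$ git rm -r -- .          # drop every tracked file
# create the 10-line README.rst retirement notice (see the README.rst diff below)
$ git add README.rst
$ git commit              # message references the retirement process and the mailing list thread
```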

.gitignore

@@ -1,59 +0,0 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
# C extensions
*.so
# Distribution / packaging
.Python
env/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
*.egg-info/
.installed.cfg
*.egg
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*,cover
# Translations
*.mo
*.pot
# Django stuff:
*.log
# Sphinx documentation
docs/_build/
# PyBuilder
target/
*.db
*.~vsd

@@ -1,53 +0,0 @@
# Contributions
The Fuel plugin for ScaleIO project is licensed under the [Apache 2.0](http://www.apache.org/licenses/LICENSE-2.0) License. In order to contribute to the project, you will need to do two things:
1. License your contribution under the [DCO](http://elinux.org/Developer_Certificate_Of_Origin "Developer Certificate of Origin") + [Apache 2.0](http://www.apache.org/licenses/LICENSE-2.0)
2. Identify the type of contribution in the commit message
### 1. Licensing your Contribution:
As part of the contribution, the code comments (or license file) associated with the contribution must include the following:
Copyright (c) 2015, EMC Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"This code is provided under the Developer Certificate of Origin - [Insert Name], [Date (e.g., 1/1/15)]"
**For example:**
A contribution from **Joe Developer**, an **independent developer**, submitted on **May 15th, 2015**, should have an associated license (as a file and/or code comments) like this:
Copyright (c) 2015, Joe Developer
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"This code is provided under the Developer Certificate of Origin - Joe Developer, May 15th, 2015"
### 2. Identifying the Type of Contribution
In addition to identifying an open source license in the documentation, **all Git Commit messages** associated with a contribution must identify the type of contribution (i.e., Bug Fix, Patch, Script, Enhancement, Tool Creation, or Other).
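For illustration, a hypothetical commit for a bug fix could be created like this (the subject line is invented; `-s` adds the standard DCO `Signed-off-by` trailer):

```
$ git commit -s -m "Bug Fix: handle expired gateway tokens in the libvirt volume driver"
```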

LICENSE

@@ -1,201 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright (c) 2015, EMC Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

README.md

@@ -1,112 +0,0 @@
# ScaleIO-Cinder Plugin for Fuel
## Overview
The `ScaleIO-Cinder` plugin leverages an existing ScaleIO cluster by configuring Cinder to use ScaleIO as the block storage backend.
If you are looking to deploy a new ScaleIO cluster on Fuel slave nodes and replace the default OpenStack volume backend with ScaleIO, please take a look at the [ScaleIO](https://github.com/openstack/fuel-plugin-scaleio) plugin.
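Once an environment is deployed with the plugin enabled, the resulting backend can be exercised from a controller node, for example (a sketch only: the `scaleio-thin` volume type name comes from the plugin's Puppet manifests later in this change, while the volume name and the 1 GB size are arbitrary):

```
$ source /root/openrc
$ cinder type-list          # the plugin registers a 'scaleio-thin' volume type
$ cinder create --volume-type scaleio-thin --display-name sio-test 1
```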
## Requirements
| Requirement | Version/Comment |
|----------------------------------|-----------------|
| Mirantis OpenStack compatibility | 6.1/7.0 |
## Recommendations
None.
## Limitations
The following table shows the current limitations:
![ScaleIOSupport](https://github.com/openstack/fuel-plugin-scaleio-cinder/blob/master/doc/source/images/SIO_Support.png)
# Installation Guide
## ScaleIO-Cinder Plugin install from RPM file
To install the ScaleIO-Cinder plugin, follow these steps:
1. Download the plugin from the [Fuel Plugins Catalog](https://software.mirantis.com/download-mirantis-openstack-fuel-plug-ins/).
2. Copy the plugin file to the Fuel Master node. Follow the [Quick start guide](https://software.mirantis.com/quick-start/) if you don't have a running Fuel Master node yet.
```
$ scp scaleio-cinder-1.5-1.5.0-1.noarch.rpm root@<Fuel Master node IP address>:/tmp/
```
3. Log into the Fuel Master node and install the plugin using the fuel command line.
```
$ fuel plugins --install /tmp/scaleio-cinder-1.5-1.5.0-1.noarch.rpm
```
4. Verify that the plugin is installed correctly.
```
$ fuel plugins
```
## ScaleIO-Cinder Plugin install from source code
To install the ScaleIO-Cinder Plugin from source code, you first need to prepare an environment to build the RPM file of the plugin. The recommended approach is to build the RPM file directly on the Fuel Master node so that you won't have to copy that file later.
Prepare an environment for building the plugin on the **Fuel Master node**.
1. Install the standard Linux development tools:
```
$ yum install createrepo rpm rpm-build dpkg-devel
```
2. Install the Fuel Plugin Builder. To do that, you should first get pip:
```
$ easy_install pip
```
3. Then install the Fuel Plugin Builder (the `fpb` command line) with `pip`:
```
$ pip install fuel-plugin-builder
```
*Note: You may also have to build the Fuel Plugin Builder if the package version of the
plugin is higher than the package version supported by the Fuel Plugin Builder you get from `pypi`.
In this case, please refer to the section "Preparing an environment for plugin development"
of the [Fuel Plugins wiki](https://wiki.openstack.org/wiki/Fuel/Plugins) if you
need further instructions about how to build the Fuel Plugin Builder.*
4. Clone the ScaleIO Plugin git repository:
```
$ git clone --recursive git@github.com:openstack/fuel-plugin-scaleio-cinder.git
```
5. Check that the plugin is valid:
```
$ fpb --check ./fuel-plugin-scaleio-cinder
```
6. Build the plugin:
```
$ fpb --build ./fuel-plugin-scaleio-cinder
```
7. Now you have created an RPM file that you can install using the steps described above. The RPM file will be located at:
```
./fuel-plugin-scaleio-cinder/scaleio-cinder-1.5-1.5.0-1.noarch.rpm
```
# User Guide
Please read the [ScaleIO Plugin User Guide](doc).
# Contributions
Please read the [CONTRIBUTING.md](CONTRIBUTING.md) document for the latest information about contributions.
# Bugs, requests, questions
Please use the [Launchpad project site](https://launchpad.net/fuel-plugin-scaleio-cinder) to report bugs, request features, ask questions, etc.
# License
Please read the [LICENSE](LICENSE) document for the latest licensing information.

README.rst (new file)

@@ -0,0 +1,10 @@
This project is no longer maintained.
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".
For any further questions, please email
openstack-discuss@lists.openstack.org or join #openstack-dev on
Freenode.
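The notice above refers to "git checkout HEAD^1"; as a minimal sketch, assuming the repository is cloned from opendev.org under the x namespace (the exact clone URL is an assumption), the pre-retirement tree can be recovered with:

```
$ git clone https://opendev.org/x/fuel-plugin-scaleio-cinder
$ cd fuel-plugin-scaleio-cinder
$ git checkout HEAD^1
```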

@@ -1,35 +0,0 @@
c700a8b9312d24bdc57570f7d6a131cf63d89016 LICENSE
f15e1db1b640c3ae73cc129934f9d914440b0250 README.md
bffb5460de132beba188914cb0dcc14e9dc4e36b deployment_scripts/deploy.sh
7eb8ef1f761b13f04c8beb1596bdb95bc281b54a deployment_scripts/install_scaleio_compute.pp
2f62d18be6c2a612f8513815c95d2ca3a20600e6 deployment_scripts/install_scaleio_controller.pp
76f95bec5ebca8a29423a51be08241f890cccc24 deployment_scripts/puppet/install_scaleio_compute/files/scaleio.filters
c9efb58603cc6e448c2974c4a095cda14f13b431 deployment_scripts/puppet/install_scaleio_compute/files/scaleiolibvirtdriver.py
056e1427d651c8991c7caa3d82e62a3f0a5959d3 deployment_scripts/puppet/install_scaleio_compute/manifests/init.pp
76f95bec5ebca8a29423a51be08241f890cccc24 deployment_scripts/puppet/install_scaleio_controller/files/scaleio.filters
8dbfc34be2c4b4348736306de01769f3daf78b11 deployment_scripts/puppet/install_scaleio_controller/files/scaleio.py
97151193ca7cafc39bb99e11175c2bd8e07410e1 deployment_scripts/puppet/install_scaleio_controller/lib/puppet/parser/functions/get_fault_set.rb
ea0175506182a5d1265060ebca7c3c439e446042 deployment_scripts/puppet/install_scaleio_controller/lib/puppet/parser/functions/get_sds_device_pairs.rb
fe5b7322b0f4d1b18959d6d399b20f98568e30eb deployment_scripts/puppet/install_scaleio_controller/lib/puppet/provider/scaleio_cluster_create/init_cluster.rb
015351dfe07ca666e4c50186b09b89abcaab5959 deployment_scripts/puppet/install_scaleio_controller/lib/puppet/type/scaleio_cluster_create.rb
f46c0cf37a4e7c3a9f0d8a4d1d5c9b0fdd567692 deployment_scripts/puppet/install_scaleio_controller/manifests/init.pp
2cf472314221fbc1520f9ec76c0eb47570a2f444 deployment_scripts/puppet/install_scaleio_controller/templates/CentOS-Base.repo
d280a227cfb05d67795d1a03bacfd781900b6134 deployment_scripts/puppet/install_scaleio_controller/templates/cinder_scaleio.config.erb
29162123f2ad50753d7f5cb3be9d5af5687de10b deployment_scripts/puppet/install_scaleio_controller/templates/epel.repo
d501258114fecc5e677b42486f91436ce24bf912 deployment_scripts/puppet/install_scaleio_controller/templates/gatewayUser.properties.erb
e644fa23da337234dfa78e03fd2f2be8162e5617 deployment_scripts/puppet/remove_scaleio_repo/manifests/init.pp
73f9232026d4bd9e74f8e97afadfc972044c64cf deployment_scripts/remove_scaleio_repo.pp
0b552e1a89b852857efe0b6fe1c368feb3870dd9 environment_config.yaml
ede2ec1bf0bdb1455f3a0b56901ef27ed214645d metadata.yaml
83c3d6d1526da89aed2fc1e24ec8bfacb4a3ea1e pre_build_hook
da39a3ee5e6b4b0d3255bfef95601890afd80709 repositories/centos/.gitkeep
7246778c54a204f01064348b067abeb4a766a24b repositories/centos/repodata/2daa2f7a904d6ae04d81abc07d2ecb3bc3d8244a1e78afced2c94994f1b5f3ee-filelists.sqlite.bz2
ad0107628e9b6dd7a82553ba5cb447388e50900a repositories/centos/repodata/401dc19bda88c82c403423fb835844d64345f7e95f5b9835888189c03834cc93-filelists.xml.gz
1eb13a25318339d9e8157f0bf80419c019fa5000 repositories/centos/repodata/6bf9672d0862e8ef8b8ff05a2fd0208a922b1f5978e6589d87944c88259cb670-other.xml.gz
ebe841ac4c94ae950cfc8f5f80bc6707eb39e456 repositories/centos/repodata/ad36b2b9cd3689c29dcf84226b0b4db80633c57d91f50997558ce7121056e331-primary.sqlite.bz2
f7affeb9ed7e353556e43caf162660cae95d8d19 repositories/centos/repodata/d5630fb9d7f956c42ff3962f2e6e64824e5df7edff9e08adf423d4c353505d69-other.sqlite.bz2
84b124bc4de1c04613859bdb7af8d5fef021e3bb repositories/centos/repodata/dabe2ce5481d23de1f4f52bdcfee0f9af98316c9e0de2ce8123adeefa0dd08b9-primary.xml.gz
e50be018e61c5d5479cd6734fc748a821440daf8 repositories/centos/repodata/repomd.xml
da39a3ee5e6b4b0d3255bfef95601890afd80709 repositories/ubuntu/.gitkeep
cbecb9edd9e08fbebf280633bc72c69ff735b8c7 repositories/ubuntu/Packages.gz
25e9290ad1ca50f8346c3beb59c6cdcdef7ecca2 tasks.yaml

@@ -1,4 +0,0 @@
#!/bin/bash
# It's a script which deploys your plugin
echo scaleio > /tmp/scaleio

@@ -1,2 +0,0 @@
$plugin_settings = hiera('scaleio-cinder')
class {'install_scaleio_compute': }

@@ -1,2 +0,0 @@
$plugin_settings = hiera('scaleio-cinder')
class {'install_scaleio_controller': }

@@ -1,329 +0,0 @@
# Copyright (c) 2013 EMC Corporation
# All Rights Reserved
# This software contains the intellectual property of EMC Corporation
# or is licensed to EMC Corporation from third parties. Use of this
# software and the intellectual property contained therein is expressly
# limited to the terms and conditions of the License Agreement under which
# it is provided by or on behalf of EMC.
import glob
import hashlib
import os
import time
import urllib2
import urlparse
import requests
import json
import re
import sys
import urllib
from oslo.config import cfg
from nova import exception
from nova.openstack.common.gettextutils import _
from nova.openstack.common import log as logging
from nova.openstack.common import loopingcall
from nova.openstack.common import processutils
from nova import paths
from nova.storage import linuxscsi
from nova import utils
from nova.virt.libvirt import config as vconfig
from nova.virt.libvirt import utils as virtutils
from nova.virt.libvirt.volume import LibvirtBaseVolumeDriver
LOG = logging.getLogger(__name__)
volume_opts = [
cfg.IntOpt('num_iscsi_scan_tries',
default=3,
help='number of times to rescan iSCSI target to find volume'),
cfg.IntOpt('num_iser_scan_tries',
default=3,
help='number of times to rescan iSER target to find volume'),
cfg.StrOpt('rbd_user',
help='the RADOS client name for accessing rbd volumes'),
cfg.StrOpt('rbd_secret_uuid',
help='the libvirt uuid of the secret for the rbd_user'
'volumes'),
cfg.StrOpt('nfs_mount_point_base',
default=paths.state_path_def('mnt'),
help='Dir where the nfs volume is mounted on the compute node'),
cfg.StrOpt('nfs_mount_options',
help='Mount options passed to the nfs client. See section '
'of the nfs man page for details'),
cfg.IntOpt('num_aoe_discover_tries',
default=3,
help='number of times to rediscover AoE target to find volume'),
cfg.StrOpt('glusterfs_mount_point_base',
default=paths.state_path_def('mnt'),
help='Dir where the glusterfs volume is mounted on the '
'compute node'),
cfg.BoolOpt('libvirt_iscsi_use_multipath',
default=False,
help='use multipath connection of the iSCSI volume'),
cfg.BoolOpt('libvirt_iser_use_multipath',
default=False,
help='use multipath connection of the iSER volume'),
cfg.StrOpt('scality_sofs_config',
help='Path or URL to Scality SOFS configuration file'),
cfg.StrOpt('scality_sofs_mount_point',
default='$state_path/scality',
help='Base dir where Scality SOFS shall be mounted'),
cfg.ListOpt('qemu_allowed_storage_drivers',
default=[],
help='Protocols listed here will be accessed directly '
'from QEMU. Currently supported protocols: [gluster]')
]
CONF = cfg.CONF
CONF.register_opts(volume_opts)
OK_STATUS_CODE=200
VOLUME_NOT_MAPPED_ERROR=84
VOLUME_ALREADY_MAPPED_ERROR=81
class LibvirtScaleIOVolumeDriver(LibvirtBaseVolumeDriver):
"""Class implements libvirt part of volume driver for ScaleIO cinder driver."""
local_sdc_id = None
mdm_id = None
pattern3 = None
def __init__(self, connection):
"""Create back-end to nfs."""
LOG.warning("ScaleIO libvirt volume driver INIT")
super(LibvirtScaleIOVolumeDriver,
self).__init__(connection, is_block_dev=False)
def find_volume_path(self, volume_id):
LOG.info("looking for volume %s" % volume_id)
#look for the volume in /dev/disk/by-id directory
disk_filename = ""
tries = 0
while not disk_filename:
if (tries > 15):
raise exception.NovaException("scaleIO volume {0} not found at expected path ".format(volume_id))
by_id_path = "/dev/disk/by-id"
if not os.path.isdir(by_id_path):
LOG.warn("scaleIO volume {0} not yet found (no directory /dev/disk/by-id yet). Try number: {1} ".format(volume_id, tries))
tries = tries + 1
time.sleep(1)
continue
filenames = os.listdir(by_id_path)
LOG.warning("Files found in {0} path: {1} ".format(by_id_path, filenames))
for filename in filenames:
if (filename.startswith("emc-vol") and filename.endswith(volume_id)):
disk_filename = filename
if not disk_filename:
LOG.warn("scaleIO volume {0} not yet found. Try number: {1} ".format(volume_id, tries))
tries = tries + 1
time.sleep(1)
if (tries != 0):
LOG.warning("Found scaleIO device {0} after {1} retries ".format(disk_filename, tries))
full_disk_name = by_id_path + "/" + disk_filename
LOG.warning("Full disk name is " + full_disk_name)
return full_disk_name
# path = os.path.realpath(full_disk_name)
# LOG.warning("Path is " + path)
# return path
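# Query the ScaleIO gateway REST API (getByIp) for the id of the SDC registered at the given IP.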
def _get_client_id(self, server_ip, server_port, server_username, server_password, server_token, sdc_ip):
request = "https://" + server_ip + ":" + server_port + "/api/types/Client/instances/getByIp::" + sdc_ip + "/"
LOG.info("ScaleIO get client id by ip request: %s" % request)
r = requests.get(request, auth=(server_username, server_token), verify=False)
r = self._check_response(r, request, server_ip, server_port, server_username, server_password, server_token)
sdc_id = r.json()
if (sdc_id == '' or sdc_id is None):
msg = ("Client with ip %s wasn't found " % (sdc_ip))
LOG.error(msg)
raise exception.NovaException(data=msg)
if (r.status_code != 200 and "errorCode" in sdc_id):
msg = ("Error getting sdc id from ip %s: %s " % (sdc_ip, sdc_id['message']))
LOG.error(msg)
raise exception.NovaException(data=msg)
LOG.info("ScaleIO sdc id is %s" % sdc_id)
return sdc_id
def _get_volume_id(self, server_ip, server_port, server_username, server_password, server_token, volname):
volname_encoded = urllib.quote(volname, '')
volname_double_encoded = urllib.quote(volname_encoded, '')
# volname = volname.replace('/', '%252F')
LOG.info("volume name after double encoding is %s " % volname_double_encoded)
request = "https://" + server_ip + ":" + server_port + "/api/types/Volume/instances/getByName::" + volname_double_encoded
LOG.info("ScaleIO get volume id by name request: %s" % request)
r = requests.get(request, auth=(server_username, server_token), verify=False)
r = self._check_response(r, request, server_ip, server_port, server_username, server_password, server_token)
volume_id = r.json()
if (volume_id == '' or volume_id is None):
msg = ("Volume with name %s wasn't found " % (volname))
LOG.error(msg)
raise exception.NovaException(data=msg)
if (r.status_code != OK_STATUS_CODE and "errorCode" in volume_id):
msg = ("Error getting volume id from name %s: %s " % (volname, volume_id['message']))
LOG.error(msg)
raise exception.NovaException(data=msg)
LOG.info("ScaleIO volume id is %s" % volume_id)
return volume_id
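# If the gateway rejects the cached token (HTTP 401 or 403), log in again with the admin
# password to obtain a fresh token and replay the original request once.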
def _check_response(self, response, request, server_ip, server_port, server_username, server_password, server_token):
if (response.status_code == 401 or response.status_code == 403):
LOG.info("Token is invalid, going to re-login and get a new one")
login_request = "https://" + server_ip + ":" + server_port + "/api/login"
r = requests.get(login_request, auth=(server_username, server_password), verify=False)
token = r.json()
#repeat request with valid token
LOG.debug("going to perform request again {0} with valid token".format(request))
res = requests.get(request, auth=(server_username, token), verify=False)
return res
return response
def connect_volume(self, connection_info, disk_info):
"""Connect the volume. Returns xml for libvirt."""
conf = super(LibvirtScaleIOVolumeDriver,
self).connect_volume(connection_info,
disk_info)
LOG.info("scaleIO connect volume in scaleio libvirt volume driver")
data = connection_info
LOG.info("scaleIO connect to stuff "+str(data))
data = connection_info['data']
LOG.info("scaleIO connect to joined "+str(data))
LOG.info("scaleIO Dsk info "+str(disk_info))
volname = connection_info['data']['scaleIO_volname']
#sdc ip here is wrong, probably not retrieved properly in cinder driver. Currently not used.
sdc_ip = connection_info['data']['hostIP']
server_ip = connection_info['data']['serverIP']
server_port = connection_info['data']['serverPort']
server_username = connection_info['data']['serverUsername']
server_password = connection_info['data']['serverPassword']
server_token = connection_info['data']['serverToken']
iops_limit = connection_info['data']['iopsLimit']
bandwidth_limit = connection_info['data']['bandwidthLimit']
LOG.debug("scaleIO Volume name: {0}, SDC IP: {1}, REST Server IP: {2}, REST Server username: {3}, REST Server password: {4}, iops limit: {5}, bandwidth limit: {6}".format(volname, sdc_ip, server_ip, server_username, server_password, iops_limit, bandwidth_limit))
cmd = ['drv_cfg']
cmd += ["--query_guid"]
LOG.info("ScaleIO sdc query guid command: "+str(cmd))
try:
(out, err) = utils.execute(*cmd, run_as_root=True)
LOG.info("map volume %s: stdout=%s stderr=%s" % (cmd, out, err))
except processutils.ProcessExecutionError as e:
msg = ("Error querying sdc guid: %s" % (e.stderr))
LOG.error(msg)
raise exception.NovaException(data=msg)
guid = out
msg = ("Current sdc guid: %s" % (guid))
LOG.info(msg)
# sdc_id = self._get_client_id(server_ip, server_port, server_username, server_password, server_token, sdc_ip)
# params = {'sdcId' : sdc_id}
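# Map the volume to this host's SDC by GUID; 'allowMultipleMappings' is set so that a volume
# already mapped to another SDC can be mapped here as well.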
params = {'guid' : guid, 'allowMultipleMappings' : 'TRUE'}
volume_id = self._get_volume_id(server_ip, server_port, server_username, server_password, server_token, volname)
headers = {'content-type': 'application/json'}
request = "https://" + server_ip + ":" + server_port + "/api/instances/Volume::" + str(volume_id) + "/action/addMappedSdc"
LOG.info("map volume request: %s" % request)
r = requests.post(request, data=json.dumps(params), headers=headers, auth=(server_username, server_token), verify=False)
r = self._check_response(r, request, server_ip, server_port, server_username, server_password, server_token)
# LOG.info("map volume response: %s" % r.text)
if (r.status_code != OK_STATUS_CODE):
response = r.json()
error_code = response['errorCode']
if (error_code == VOLUME_ALREADY_MAPPED_ERROR):
msg = ("Ignoring error mapping volume %s: volume already mapped" % (volname))
LOG.warning(msg)
else:
msg = ("Error mapping volume %s: %s" % (volname, response['message']))
LOG.error(msg)
raise exception.NovaException(data=msg)
# convert id to hex
# val = int(volume_id)
# id_in_hex = hex((val + (1 << 64)) % (1 << 64))
# formated_id = id_in_hex.rstrip("L").lstrip("0x") or "0"
formated_id = volume_id
conf.source_path = self.find_volume_path(formated_id)
conf.source_type = 'block'
# set QoS settings after map was performed
if (iops_limit != None and bandwidth_limit != None):
params = {'sdcId' : sdc_id, 'iopsLimit': iops_limit, 'bandwidthLimitInKbps': bandwidth_limit}
request = "https://" + server_ip + ":" + server_port + "/api/instances/Volume::" + str(volume_id) + "/action/setMappedSdcLimits"
LOG.info("set client limit request: %s" % request)
r = requests.post(request, data=json.dumps(params), headers=headers, auth=(server_username, server_token), verify=False)
r = self._check_response(r, request, server_ip, server_port, server_username, server_password, server_token)
if (r.status_code != OK_STATUS_CODE):
response = r.json()
LOG.info("set client limit response: %s" % response)
msg = ("Error setting client limits for volume %s: %s" % (volname, response['message']))
LOG.error(msg)
return conf
def disconnect_volume(self, connection_info, disk_info):
conf = super(LibvirtScaleIOVolumeDriver,
self).disconnect_volume(connection_info,
disk_info)
LOG.info("scaleIO disconnect volume in scaleio libvirt volume driver")
volname = connection_info['data']['scaleIO_volname']
sdc_ip = connection_info['data']['hostIP']
server_ip = connection_info['data']['serverIP']
server_port = connection_info['data']['serverPort']
server_username = connection_info['data']['serverUsername']
server_password = connection_info['data']['serverPassword']
server_token = connection_info['data']['serverToken']
LOG.debug("scaleIO Volume name: {0}, SDC IP: {1}, REST Server IP: {2}".format(volname, sdc_ip, server_ip))
cmd = ['drv_cfg']
cmd += ["--query_guid"]
LOG.info("ScaleIO sdc query guid command: "+str(cmd))
try:
(out, err) = utils.execute(*cmd, run_as_root=True)
LOG.info("map volume %s: stdout=%s stderr=%s" % (cmd, out, err))
except processutils.ProcessExecutionError as e:
msg = ("Error querying sdc guid: %s" % (e.stderr))
LOG.error(msg)
raise exception.NovaException(data=msg)
guid = out
msg = ("Current sdc guid: %s" % (guid))
LOG.info(msg)
params = {'guid' : guid}
headers = {'content-type': 'application/json'}
volume_id = self._get_volume_id(server_ip, server_port, server_username, server_password, server_token, volname)
request = "https://" + server_ip + ":" + server_port + "/api/instances/Volume::" + str(volume_id) + "/action/removeMappedSdc"
LOG.info("unmap volume request: %s" % request)
r = requests.post(request, data=json.dumps(params), headers=headers, auth=(server_username, server_token), verify=False)
r = self._check_response(r, request, server_ip, server_port, server_username, server_password, server_token)
if (r.status_code != OK_STATUS_CODE):
response = r.json()
error_code = response['errorCode']
if (error_code == VOLUME_NOT_MAPPED_ERROR):
msg = ("Ignoring error unmapping volume %s: volume not mapped" % (volname))
LOG.warning(msg)
else:
msg = ("Error unmapping volume %s: %s" % (volname, response['message']))
LOG.error(msg)
raise exception.NovaException(data=msg)

@@ -1,3 +0,0 @@
[Filters]
drv_cfg: CommandFilter, /opt/emc/scaleio/sdc/bin/drv_cfg, root

@@ -1,50 +0,0 @@
# ScaleIO Puppet Manifest for Compute Nodes for Centos
class install_scaleio_compute::centos
{
$nova_service = 'openstack-nova-compute'
$mdm_ip_1 = $plugin_settings['scaleio_mdm1']
$mdm_ip_2 = $plugin_settings['scaleio_mdm2']
#install ScaleIO SDC package
exec { "install_sdc":
command => "/bin/bash -c \"MDM_IP=$mdm_ip_1,$mdm_ip_2 yum install -y EMC-ScaleIO-sdc\"",
path => "/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin",
}
#Configure nova-compute
ini_subsetting { 'nova-volume_driver':
ensure => present,
path => '/etc/nova/nova.conf',
subsetting_separator => ',',
section => 'libvirt',
setting => 'volume_drivers',
subsetting => 'scaleio=nova.virt.libvirt.scaleiolibvirtdriver.LibvirtScaleIOVolumeDriver',
notify => Service[$nova_service],
}
file { 'scaleiolibvirtdriver.py':
path => '/usr/lib/python2.6/site-packages/nova/virt/libvirt/scaleiolibvirtdriver.py',
source => 'puppet:///modules/install_scaleio_compute/6.1/scaleiolibvirtdriver.py',
mode => '644',
owner => 'root',
group => 'root',
notify => Service[$nova_service],
}
file { 'scaleio.filters':
path => '/usr/share/nova/rootwrap/scaleio.filters',
source => 'puppet:///modules/install_scaleio_compute/scaleio.filters',
mode => '644',
owner => 'root',
group => 'root',
notify => Service[$nova_service],
}
service { $nova_service:
ensure => 'running',
}
}

@@ -1,11 +0,0 @@
# ScaleIO Puppet Manifest for Compute Nodes
class install_scaleio_compute
{
if($::operatingsystem == 'Ubuntu') {
include install_scaleio_compute::ubuntu
}else {
include install_scaleio_compute::centos
}
}

@@ -1,84 +0,0 @@
# ScaleIO Puppet Manifest for Compute Nodes Ubuntu
class install_scaleio_compute::ubuntu
{
$nova_service = 'nova-compute'
$mdm_ip_1 = $plugin_settings['scaleio_mdm1']
$mdm_ip_2 = $plugin_settings['scaleio_mdm2']
$version = hiera('fuel_version')
#install ScaleIO SDC package
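# On Fuel 6.1 only the standalone ScaleIO libvirt volume driver file is dropped in; on newer
# releases the patched driver.py and volume.py are installed instead.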
if($version == '6.1') {
file { 'scaleiolibvirtdriver.py':
path => '/usr/lib/python2.7/dist-packages/nova/virt/libvirt/scaleiolibvirtdriver.py',
source => 'puppet:///modules/install_scaleio_compute/6.1/scaleiolibvirtdriver.py',
mode => '644',
owner => 'root',
group => 'root',
notify => Service[$nova_service],
}
}
else
{
file { 'driver.py':
path => '/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py',
source => 'puppet:///modules/install_scaleio_compute/7.0/driver.py',
mode => '644',
owner => 'root',
group => 'root',
notify => Service[$nova_service],
} ->
file { 'volume.py':
path => '/usr/lib/python2.7/dist-packages/nova/virt/libvirt/volume.py',
source => 'puppet:///modules/install_scaleio_compute/7.0/volume.py',
mode => '644',
owner => 'root',
group => 'root',
notify => Service[$nova_service],
}
}
package{'emc-scaleio-sdc':
ensure => installed,
}->
exec {"Add MDM to drv-cfg":
command => "bash -c 'echo mdm ${mdm_ip_1},${mdm_ip_2} >> /bin/emc/scaleio/drv_cfg.txt'",
path => ['/usr/bin', '/bin','/usr/local/sbin','/usr/sbin','/sbin' ],
}->
exec {"Start SDC":
command => "bash -c '/etc/init.d/scini restart'",
path => ['/usr/bin', '/bin','/usr/local/sbin','/usr/sbin','/sbin' ],
}->
#Configure nova-compute
ini_subsetting { 'nova-volume_driver':
ensure => present,
path => '/etc/nova/nova.conf',
subsetting_separator => ',',
section => 'libvirt',
setting => 'volume_drivers',
subsetting => 'scaleio=nova.virt.libvirt.scaleiolibvirtdriver.LibvirtScaleIOVolumeDriver',
notify => Service[$nova_service],
}->
file { 'scaleio.filters':
path => '/etc/nova/rootwrap.d/scaleio.filters',
source => 'puppet:///modules/install_scaleio_compute/scaleio.filters',
mode => '644',
owner => 'root',
group => 'root',
notify => Service[$nova_service],
}~>
service { $nova_service:
ensure => 'running',
}
}

@@ -1,3 +0,0 @@
[Filters]
drv_cfg: CommandFilter, /opt/emc/scaleio/sdc/bin/drv_cfg, root

@@ -1,104 +0,0 @@
class install_scaleio_controller::centos
{
$services = ['openstack-cinder-volume', 'openstack-cinder-api', 'openstack-cinder-scheduler', 'openstack-nova-scheduler']
$gw_ip = $plugin_settings['scaleio_GW']
$mdm_ip_1 = $plugin_settings['scaleio_mdm1']
$mdm_ip_2 = $plugin_settings['scaleio_mdm2']
$admin = $plugin_settings['scaleio_Admin']
$password = $plugin_settings['scaleio_Password']
$volume_type = "scaleio-thin"
#1. Install SDC package
exec { "install_sdc1":
command => "/bin/bash -c \"MDM_IP=$mdm_ip_1,$mdm_ip_2 yum install -y EMC-ScaleIO-sdc\"",
path => "/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin",
} ->
#2. Copy ScaleIO Files
file { 'scaleio.py':
path => '/usr/lib/python2.6/site-packages/cinder/volume/drivers/emc/scaleio.py',
source => 'puppet:///modules/install_scaleio_controller/6.1/scaleio.py',
mode => '644',
owner => 'root',
group => 'root',
} ->
file { 'scaleio.filters':
path => '/usr/share/cinder/rootwrap/scaleio.filters',
source => 'puppet:///modules/install_scaleio_controller/scaleio.filters',
mode => '644',
owner => 'root',
group => 'root',
before => File['cinder_scaleio.config'],
}
# 3. Create config for ScaleIO
$cinder_scaleio_config = "[scaleio]
rest_server_ip=$gw_ip
rest_server_username=$admin
rest_server_password=$password
protection_domain_name=${plugin_settings['protection_domain']}
storage_pools=${plugin_settings['protection_domain']}:${plugin_settings['storage_pool_1']}
storage_pool_name=${plugin_settings['storage_pool_1']}
round_volume_capacity=True
force_delete=True
verify_server_certificate=False
"
file { 'cinder_scaleio.config':
ensure => present,
path => '/etc/cinder/cinder_scaleio.config',
content => $cinder_scaleio_config,
mode => 0644,
owner => root,
group => root,
before => Ini_setting['cinder_conf_enabled_backeds'],
} ->
# 4. To /etc/cinder/cinder.conf add
ini_setting { 'cinder_conf_enabled_backeds':
ensure => present,
path => '/etc/cinder/cinder.conf',
section => 'DEFAULT',
setting => 'enabled_backends',
value => 'ScaleIO',
before => Ini_setting['cinder_conf_volume_driver'],
} ->
ini_setting { 'cinder_conf_volume_driver':
ensure => present,
path => '/etc/cinder/cinder.conf',
section => 'ScaleIO',
setting => 'volume_driver',
value => 'cinder.volume.drivers.emc.scaleio.ScaleIODriver',
before => Ini_setting['cinder_conf_scio_config'],
} ->
ini_setting { 'cinder_conf_scio_config':
ensure => present,
path => '/etc/cinder/cinder.conf',
section => 'ScaleIO',
setting => 'cinder_scaleio_config_file',
value => '/etc/cinder/cinder_scaleio.config',
before => Ini_setting['cinder_conf_volume_backend_name'],
} ->
ini_setting { 'cinder_conf_volume_backend_name':
ensure => present,
path => '/etc/cinder/cinder.conf',
section => 'ScaleIO',
setting => 'volume_backend_name',
value => 'ScaleIO',
}~>
service { $services:
ensure => running,
}->
exec { "Create Cinder volume type \'${volume_type}\'":
command => "bash -c 'source /root/openrc; cinder type-create ${volume_type}'",
path => ['/usr/bin', '/bin'],
unless => "bash -c 'source /root/openrc; cinder type-list |grep -q \" ${volume_type} \"'",
} ->
exec { "Create Cinder volume type extra specs for \'${volume_type}\'":
command => "bash -c 'source /root/openrc; cinder type-key ${volume_type} set sio:pd_name=${plugin_settings['protection_domain']} sio:provisioning=thin sio:sp_name=${plugin_settings['storage_pool_1']}'",
path => ['/usr/bin', '/bin'],
onlyif => "bash -c 'source /root/openrc; cinder type-list |grep -q \" ${volume_type} \"'",
}
}

@@ -1,10 +0,0 @@
class install_scaleio_controller
{
if($::operatingsystem == 'Ubuntu') {
include install_scaleio_controller::ubuntu
}else {
include install_scaleio_controller::centos
}
}

@@ -1,132 +0,0 @@
class install_scaleio_controller::ubuntu
{
$services = ['cinder-volume', 'cinder-api', 'cinder-scheduler', 'nova-scheduler']
$gw_ip = $plugin_settings['scaleio_GW']
$mdm_ip_1 = $plugin_settings['scaleio_mdm1']
$mdm_ip_2 = $plugin_settings['scaleio_mdm2']
$admin = $plugin_settings['scaleio_Admin']
$password = $plugin_settings['scaleio_Password']
$volume_type = "scaleio-thin"
$version = hiera('fuel_version')
# 3. Create config for ScaleIO
$cinder_scaleio_config = "[scaleio]
rest_server_ip=$gw_ip
rest_server_username=$admin
rest_server_password=$password
protection_domain_name=${plugin_settings['protection_domain']}
storage_pools=${plugin_settings['protection_domain']}:${plugin_settings['storage_pool_1']}
storage_pool_name=${plugin_settings['storage_pool_1']}
round_volume_capacity=True
force_delete=True
verify_server_certificate=False
"
if($version == '6.1') {
file { 'scaleio_6.py':
path => '/usr/lib/python2.7/dist-packages/cinder/volume/drivers/emc/scaleio.py',
source => 'puppet:///modules/install_scaleio_controller/6.1/scaleio.py',
mode => '644',
owner => 'root',
group => 'root',
}
}
else
{
file { 'scaleio_7.py':
path => '/usr/lib/python2.7/dist-packages/cinder/volume/drivers/emc/scaleio.py',
source => 'puppet:///modules/install_scaleio_controller/7.0/scaleio.py',
mode => '644',
owner => 'root',
group => 'root',
}
}
package {'emc-scaleio-sdc':
ensure => installed ,
}->
exec {"Add MDM to drv-cfg":
command => "bash -c 'echo mdm ${mdm_ip_1} >>/bin/emc/scaleio/drv_cfg.txt'",
path => ['/usr/bin', '/bin','/usr/local/sbin','/usr/sbin','/sbin' ],
}->
exec {"Start SDC":
command => "bash -c '/etc/init.d/scini restart'",
path => ['/usr/bin', '/bin','/usr/local/sbin','/usr/sbin','/sbin' ],
}->
file { 'scaleio.filters':
path => '/etc/cinder/rootwrap.d/scaleio.filters',
source => 'puppet:///modules/install_scaleio_controller/scaleio.filters',
mode => '644',
owner => 'root',
group => 'root',
before => File['cinder_scaleio.config'],
}->
# 3. Create config for ScaleIO
file { 'cinder_scaleio.config':
ensure => present,
path => '/etc/cinder/cinder_scaleio.config',
content => $cinder_scaleio_config,
mode => 0644,
owner => root,
group => root,
before => Ini_setting['cinder_conf_enabled_backeds'],
} ->
# 4. To /etc/cinder/cinder.conf add
ini_setting { 'cinder_conf_enabled_backeds':
ensure => present,
path => '/etc/cinder/cinder.conf',
section => 'DEFAULT',
setting => 'enabled_backends',
value => 'ScaleIO',
before => Ini_setting['cinder_conf_volume_driver'],
} ->
ini_setting { 'cinder_conf_volume_driver':
ensure => present,
path => '/etc/cinder/cinder.conf',
section => 'ScaleIO',
setting => 'volume_driver',
value => 'cinder.volume.drivers.emc.scaleio.ScaleIODriver',
before => Ini_setting['cinder_conf_scio_config'],
} ->
ini_setting { 'cinder_conf_scio_config':
ensure => present,
path => '/etc/cinder/cinder.conf',
section => 'ScaleIO',
setting => 'cinder_scaleio_config_file',
value => '/etc/cinder/cinder_scaleio.config',
before => Ini_setting['cinder_conf_volume_backend_name'],
} ->
ini_setting { 'cinder_conf_volume_backend_name':
ensure => present,
path => '/etc/cinder/cinder.conf',
section => 'ScaleIO',
setting => 'volume_backend_name',
value => 'ScaleIO',
}~>
service { $services:
ensure => running,
}->
exec { "Create Cinder volume type \'${volume_type}\'":
command => "bash -c 'source /root/openrc; cinder type-create ${volume_type}'",
path => ['/usr/bin', '/bin'],
unless => "bash -c 'source /root/openrc; cinder type-list |grep -q \" ${volume_type} \"'",
} ->
exec { "Create Cinder volume type extra specs for \'${volume_type}\'":
command => "bash -c 'source /root/openrc; cinder type-key ${volume_type} set sio:pd_name=${plugin_settings['protection_domain']} sio:provisioning=thin sio:sp_name=${plugin_settings['storage_pool_1']}'",
path => ['/usr/bin', '/bin'],
onlyif => "bash -c 'source /root/openrc; cinder type-list |grep -q \" ${volume_type} \"'",
}
}

@@ -1,53 +0,0 @@
# CentOS-Base.repo
#
# The mirror system uses the connecting IP address of the client and the
# update status of each mirror to pick mirrors that are updated to and
# geographically close to the client. You should use this for CentOS updates
# unless you are manually picking other mirrors.
#
# If the mirrorlist= does not work for you, as a fall back you can try the
# remarked out baseurl= line instead.
#
#
[base]
name=CentOS-$releasever - Base
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
#baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
#released updates
[updates]
name=CentOS-$releasever - Updates
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates
#baseurl=http://mirror.centos.org/centos/$releasever/updates/$basearch/
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras
#baseurl=http://mirror.centos.org/centos/$releasever/extras/$basearch/
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=centosplus
#baseurl=http://mirror.centos.org/centos/$releasever/centosplus/$basearch/
gpgcheck=0
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
#contrib - packages by Centos Users
[contrib]
name=CentOS-$releasever - Contrib
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=contrib
#baseurl=http://mirror.centos.org/centos/$releasever/contrib/$basearch/
gpgcheck=0
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6

@@ -1,10 +0,0 @@
[scaleio]
rest_server_ip=10.225.25.200
rest_server_username=admin
rest_server_password=Scaleio123
protection_domain_name=default
#storage_pools=use-ash1-pd1:use-ash1-sp-tier1,use-ash1-pd1:use-ash1-sp-tier2
storage_pool_name=default
round_volume_capacity=True
force_delete=True
verify_server_certificate=False

@@ -1,26 +0,0 @@
[epel]
name=Extra Packages for Enterprise Linux 6 - $basearch
#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
[epel-debuginfo]
name=Extra Packages for Enterprise Linux 6 - $basearch - Debug
#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch/debug
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-6&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
gpgcheck=0
[epel-source]
name=Extra Packages for Enterprise Linux 6 - $basearch - Source
#baseurl=http://download.fedoraproject.org/pub/epel/6/SRPMS
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-source-6&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
gpgcheck=0

@@ -1,16 +0,0 @@
mdm.ip.addresses=<%= @mdm_ip_1 %>,<%= @mdm_ip_2 %>
system.id=
mdm.port=6611
gateway-admin.password=Scaleio123
vmware-mode=false
im.parallelism=100
do.not.update.user.properties.with.values.before.upgrade=false
features.enable_gateway_and_IM=true
features.enable_snmp=false
snmp.mdm.username=
snmp.mdm.password=
snmp.sampling_frequency=30
snmp.traps_receiver_ip=
snmp.port=162
im.ip.ignore.list=
lia.password=Scaleio123

Binary file not shown.

@@ -1,177 +0,0 @@
# Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = _build
# User-friendly check for sphinx-build
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
endif
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext
help:
@echo "Please use \`make <target>' where <target> is one of"
@echo " html to make standalone HTML files"
@echo " dirhtml to make HTML files named index.html in directories"
@echo " singlehtml to make a single large HTML file"
@echo " pickle to make pickle files"
@echo " json to make JSON files"
@echo " htmlhelp to make HTML files and a HTML help project"
@echo " qthelp to make HTML files and a qthelp project"
@echo " devhelp to make HTML files and a Devhelp project"
@echo " epub to make an epub"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
@echo " latexpdf to make LaTeX files and run them through pdflatex"
@echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
@echo " text to make text files"
@echo " man to make manual pages"
@echo " texinfo to make Texinfo files"
@echo " info to make Texinfo files and run them through makeinfo"
@echo " gettext to make PO message catalogs"
@echo " changes to make an overview of all changed/added/deprecated items"
@echo " xml to make Docutils-native XML files"
@echo " pseudoxml to make pseudoxml-XML files for display purposes"
@echo " linkcheck to check all external links for integrity"
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
clean:
rm -rf $(BUILDDIR)/*
html:
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
dirhtml:
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
singlehtml:
$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
@echo
@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
pickle:
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
@echo
@echo "Build finished; now you can process the pickle files."
json:
$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
@echo
@echo "Build finished; now you can process the JSON files."
htmlhelp:
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
@echo
@echo "Build finished; now you can run HTML Help Workshop with the" \
".hhp project file in $(BUILDDIR)/htmlhelp."
qthelp:
$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
@echo
@echo "Build finished; now you can run "qcollectiongenerator" with the" \
".qhcp project file in $(BUILDDIR)/qthelp, like this:"
@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/FuelNSXvplugin.qhcp"
@echo "To view the help file:"
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/FuelNSXvplugin.qhc"
devhelp:
$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
@echo
@echo "Build finished."
@echo "To view the help file:"
@echo "# mkdir -p $$HOME/.local/share/devhelp/FuelNSXvplugin"
@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/FuelNSXvplugin"
@echo "# devhelp"
epub:
$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
@echo
@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
latex:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo
@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
@echo "Run \`make' in that directory to run these through (pdf)latex" \
"(use \`make latexpdf' here to do that automatically)."
latexpdf:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through pdflatex..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
latexpdfja:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through platex and dvipdfmx..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
text:
$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
@echo
@echo "Build finished. The text files are in $(BUILDDIR)/text."
man:
$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
@echo
@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
texinfo:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo
@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
@echo "Run \`make' in that directory to run these through makeinfo" \
"(use \`make info' here to do that automatically)."
info:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo "Running Texinfo files through makeinfo..."
make -C $(BUILDDIR)/texinfo info
@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
gettext:
$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
@echo
@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
changes:
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
@echo
@echo "The overview file is in $(BUILDDIR)/changes."
linkcheck:
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
@echo
@echo "Link check complete; look for any errors in the above output " \
"or in $(BUILDDIR)/linkcheck/output.txt."
doctest:
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
@echo "Testing of doctests in the sources finished, look at the " \
"results in $(BUILDDIR)/doctest/output.txt."
xml:
$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
@echo
@echo "Build finished. The XML files are in $(BUILDDIR)/xml."
pseudoxml:
$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
@echo
@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."

View File

@ -1,8 +0,0 @@
Appendix
========
- `ScaleIO Web Site <http://www.emc.com/storage/scaleio/index.htm>`_
- `ScaleIO Documentation <http://www.emc.com/collateral/technical-documentation/scaleio-user-guide.pdf>`_
- `ScaleIO Download <http://www.emc.com/products-solutions/trial-software-download/scaleio.htm>`_
- `Fuel Enable Experimental Features <https://docs.mirantis.com/openstack/fuel/fuel-6.1/operations.html#enable-experimental-features>`_
- `Fuel Plugins Catalog <https://www.mirantis.com/products/openstack-drivers-and-plugins/fuel-plugins/>`_

View File

@ -1,340 +0,0 @@
# -*- coding: utf-8 -*-
#
# fuel-plugin-scaleio-cinder documentation build configuration file, created by
# sphinx-quickstart on Wed Oct 7 12:48:35 2015.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys
import os
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
# 'sphinx.ext.todo',
# 'sphinx.ext.coverage',
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'The ScaleIO Cinder plugin for Fuel'
copyright = u'2015, EMC'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '1.0'
# The full version, including alpha/beta/rc tags.
release = '1.0-1.0.0-1'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []
# The reST default role (used for this markup: `text`) to use for all
# documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
#keep_warnings = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'default'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
#html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'fuel-plugin-scaleio-cinderdoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
('index', 'fuel-plugin-scaleio-cinder.tex', u'The ScaleIO Cinder Plugin for Fuel Documentation',
u'EMC', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
# make latex stop printing blank pages between sections
# http://stackoverflow.com/questions/5422997/sphinx-docs-remove-blank-pages-from-generated-pdfs
latex_elements = { 'classoptions': ',openany,oneside', 'babel' : '\\usepackage[english]{babel}' }
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'fuel-plugin-scaleio-cinder', u'Guide to the ScaleIO Cinder Plugin ver. 1.0-1.0.0-1 for Fuel',
[u'EMC'], 1)
]
# If true, show URL addresses after external links.
#man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'fuel-plugin-scaleio-cinder', u'The ScaleIO Cinder Plugin for Fuel Documentation',
u'EMC', 'fuel-plugin-scaleio-cinder', 'The ScaleIO Cinder Plugin for Fuel Documentation',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
#texinfo_no_detailmenu = False
# Insert footnotes where they are defined instead of
# at the end.
pdf_inline_footnotes = True
# -- Options for Epub output ----------------------------------------------
# Bibliographic Dublin Core info.
epub_title = u'The ScaleIO Cinder Plugin for Fuel'
epub_author = u'EMC'
epub_publisher = u'EMC'
epub_copyright = u'2015, EMC'
# The basename for the epub file. It defaults to the project name.
#epub_basename = u'fuel-plugin-scaleio-cinder'
# The HTML theme for the epub output. Since the default themes are not optimized
# for small screen space, using the same theme for HTML and epub output is
# usually not wise. This defaults to 'epub', a theme designed to save visual
# space.
#epub_theme = 'epub'
# The language of the text. It defaults to the language option
# or en if the language is not set.
#epub_language = ''
# The scheme of the identifier. Typical schemes are ISBN or URL.
#epub_scheme = ''
# The unique identifier of the text. This can be a ISBN number
# or the project homepage.
#epub_identifier = ''
# A unique identification for the text.
#epub_uid = ''
# A tuple containing the cover image and cover page html template filenames.
#epub_cover = ()
# A sequence of (type, uri, title) tuples for the guide element of content.opf.
#epub_guide = ()
# HTML files that should be inserted before the pages created by sphinx.
# The format is a list of tuples containing the path and title.
#epub_pre_files = []
# HTML files that should be inserted after the pages created by sphinx.
# The format is a list of tuples containing the path and title.
#epub_post_files = []
# A list of files that should not be packed into the epub file.
epub_exclude_files = ['search.html']
# The depth of the table of contents in toc.ncx.
#epub_tocdepth = 3
# Allow duplicate toc entries.
#epub_tocdup = True
# Choose between 'default' and 'includehidden'.
#epub_tocscope = 'default'
# Fix unsupported image types using the PIL.
#epub_fix_images = False
# Scale large images.
#epub_max_image_width = 0
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#epub_show_urls = 'inline'
# If false, no index is generated.
#epub_use_index = True

(35 binary image files deleted: documentation screenshots and diagrams, 2.7 KiB to 342 KiB; contents not shown.)
View File

@ -1,22 +0,0 @@
.. fuel-plugin-scaleio-cinder documentation master file, created by
   sphinx-quickstart on Wed Oct 7 12:48:35 2015.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

============================================================
Guide to the ScaleIO Cinder Plugin ver. 1.0-1.0.0-1 for Fuel
============================================================

User documentation
==================

.. toctree::
   :maxdepth: 2

   introduction.rst
   installation.rst
   user-guide.rst
   appendix.rst

View File

@ -1,92 +0,0 @@
Install ScaleIO Cinder Plugin
=============================
To install the ScaleIO-Cinder Fuel plugin:
#. Download it from the
   `Fuel Plugins Catalog <https://www.mirantis.com/products/openstack-drivers-and-plugins/fuel-plugins/>`_.

#. Copy the *rpm* file to the Fuel Master node:

   ::

     [root@home ~]# scp scaleio-cinder-1.5-1.5.0-1.noarch.rpm root@fuel:/tmp

#. Log into Fuel Master node and install the plugin using the
   `Fuel CLI <https://docs.mirantis.com/openstack/fuel/fuel-7.0/user-guide.html#using-fuel-cli>`_:

   ::

     [root@fuel ~]# fuel plugins --install scaleio-cinder-1.5-1.5.0-1.noarch.rpm

#. Verify that the plugin is installed correctly:

   ::

     [root@fuel-master ~]# fuel plugins
     id | name           | version | package_version
     ---|----------------|---------|----------------
     1  | scaleio-cinder | 1.5.0   | 2.0.0
.. raw:: pdf

   PageBreak

Configure ScaleIO plugin
------------------------

Once the plugin has been copied to and installed on the Fuel Master node,
you can configure the nodes and set the plugin parameters:

#. Start by creating a new OpenStack environment following the
   `Mirantis OpenStack User Guide <https://docs.mirantis.com/openstack/fuel/fuel-7.0/user-guide.html#create-a-new-openstack-environment>`_.
#. `Configure your environment <https://docs.mirantis.com/openstack/fuel/fuel-7.0/user-guide.html#configure-your-environment>`_.

   .. image:: images/scaleio-cinder-install-2.png

#. Open the **Settings** tab of the Fuel web UI and scroll down the page.
   Select the plugin checkbox to enable the ScaleIO Cinder plugin for Fuel:
.. image:: images/scaleio-cinder-install-4.PNG
+----------------------------+----------------------------------------------------+
| Parameter name | Parameter description |
| | |
+============================+====================================================+
| userName | The ScaleIO User name |
+----------------------------+----------------------------------------------------+
| Password | The ScaleIO password for the selected user name |
+----------------------------+----------------------------------------------------+
| ScaleIO GW IP | The IP address of the ScaleIO Gateway service |
+----------------------------+----------------------------------------------------+
| ScaleIO Primary IP | The ScaleIO cluster's primary IP address |
+----------------------------+----------------------------------------------------+
| ScaleIO Secondary IP | The ScaleIO cluster's secondary IP address |
+----------------------------+----------------------------------------------------+
| ScaleIO protection domain | Name of the ScaleIO's protection domain |
+----------------------------+----------------------------------------------------+
| ScaleIO storage pool 1 | Name of the first storage pool |
+----------------------------+----------------------------------------------------+
.. note:: Please refer to the ScaleIO documentation for more information on these parameters.
This is an example of the ScaleIO configuration parameters populated (a sketch of how these values map onto the Cinder configuration follows at the end of this section):
.. image:: images/scaleio-cinder-install-5.PNG
#. After the configuration is done, you can add the nodes to the OpenStack
   deployment.

   .. image:: images/scaleio-cinder-install-3.png

#. Run the network verification check and then
   `deploy changes <https://docs.mirantis.com/openstack/fuel/fuel-7.0/user-guide.html#deploy-changes>`_.
#. After the deployment is completed, you should see a success message:

   .. image:: images/scaleio-cinder-install-complete.jpg

.. note:: It may take an hour or more for the OpenStack deployment
   to complete, depending on your hardware configuration.
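For reference, the values collected in the **Settings** tab above end up in the Cinder
configuration on the controllers. The snippet below is only an illustrative sketch:
the backend section name, the driver path, and the option names (``san_ip``,
``san_login``, ``san_password``, ``sio_protection_domain_name``,
``sio_storage_pool_name``) are assumptions, so check the driver documentation shipped
with your plugin version for the exact keys::

    import configparser

    # Values gathered from the plugin settings form (hypothetical example data).
    settings = {
        "san_ip": "192.168.0.10",                 # ScaleIO GW IP
        "san_login": "admin",                     # userName
        "san_password": "Scaleio123",             # Password
        "sio_protection_domain_name": "default",  # ScaleIO protection domain
        "sio_storage_pool_name": "pool1",         # ScaleIO storage pool 1
    }

    conf = configparser.ConfigParser()
    conf["scaleio"] = {
        # Driver path is an assumption; your plugin version may ship a different one.
        "volume_driver": "cinder.volume.drivers.emc.scaleio.ScaleIODriver",
        **settings,
    }
    conf["DEFAULT"] = {"enabled_backends": "scaleio"}

    with open("cinder.conf.sample", "w") as handle:
        conf.write(handle)   # writes an illustrative cinder.conf fragment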

View File

@ -1,104 +0,0 @@
Overview
=========
The following diagram shows the plugin's high level architecture:
.. image:: images/fuel-plugin-scaleio-cinder-2.jpg
   :width: 100%

The figure shows that the following OpenStack roles and services are
required:

.. csv-table:: OpenStack roles and services
   :header: "Service Role/Name", "Description", "Installed in"
   :widths: 50, 50, 50

   "Controller Node + Cinder Host", "A node that runs network, volume, API, scheduler, and image services. Each service may be broken out into separate nodes for scalability or availability. In addition, this node is a Cinder host that contains the Cinder volume manager.", "OpenStack Cluster"
   "Compute Node", "A node that runs the nova-compute daemon that manages Virtual Machine (VM) instances that provide a wide range of services, such as web applications and analytics.", "OpenStack Cluster"

In the **external ScaleIO cluster**, the following roles and services
are installed:

.. csv-table:: ScaleIO cluster roles and services
   :header: "Service Role", "Description", "Installed in"
   :widths: 50, 50, 50

   "ScaleIO Gateway (REST API)", "The ScaleIO Gateway service includes the REST API used to send storage commands to the ScaleIO cluster; in addition, this service is used for authentication and certificate management.", "ScaleIO Cluster"
   "Meta-data Manager (MDM)", "Configures and monitors the ScaleIO system. The MDM can be configured in redundant Cluster Mode, with three members on three servers, or in Single Mode on a single server.", "ScaleIO Cluster"
   "Tie Breaker (TB)", "The Tie Breaker service helps determine which MDM runs as the master and which as the slave.", "ScaleIO Cluster"
   "Storage Data Server (SDS)", "Manages the capacity of a single server and acts as a back-end for data access. The SDS is installed on all servers contributing storage devices to the ScaleIO system. These devices are accessed through the SDS.", "ScaleIO Cluster"
   "Storage Data Client (SDC)", "A lightweight device driver that exposes ScaleIO volumes as block devices to the application that resides on the same server on which the SDC is installed.", "OpenStack Cluster"

.. note:: For more information on how to deploy a ScaleIO cluster,
   please refer to the ScaleIO manuals located in the
   `download packages <http://www.emc.com/products-solutions/trial-software-download/scaleio.htm>`_ for
   your platform and `watch the demo <https://community.emc.com/docs/DOC-45019>`__.
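Because everything the plugin does goes through the ScaleIO Gateway's REST API, a feel
for that API can help when troubleshooting. The snippet below is a minimal sketch only:
the gateway address and credentials are placeholders, and the ``/api/login`` token
exchange and ``verify=False`` (self-signed certificate) reflect common ScaleIO
deployments rather than anything this guide mandates::

    import requests

    GATEWAY = "https://192.168.0.10"           # placeholder ScaleIO Gateway address
    USER, PASSWORD = "admin", "Scaleio123"     # placeholder admin credentials

    # Step 1: authenticate; the gateway is assumed to return a session token.
    resp = requests.get(f"{GATEWAY}/api/login", auth=(USER, PASSWORD), verify=False)
    resp.raise_for_status()
    token = resp.json()

    # Step 2: reuse the token as the password for subsequent calls, for example to
    # list the storage pools the Cinder driver could consume (path is an assumption).
    pools = requests.get(f"{GATEWAY}/api/types/StoragePool/instances",
                         auth=(USER, token), verify=False)
    pools.raise_for_status()
    print([pool.get("name") for pool in pools.json()])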
Requirements
============
These are the plugin requirements:
+--------------------------------------------------------------------------------+--------------------------------+
| Requirement | Version/Comment |
+================================================================================+================================+
| Mirantis OpenStack compatibility | 6.1 / 7.0 |
+--------------------------------------------------------------------------------+--------------------------------+
| ScaleIO Version | >= 1.32 |
+--------------------------------------------------------------------------------+--------------------------------+
| Controller and Compute Nodes' Operating System | CentOS 6.5/Ubuntu 14.04 LTS |
+--------------------------------------------------------------------------------+--------------------------------+
| OpenStack Cluster (Controller/cinder-volume node) can access ScaleIO Cluster | via a TCP/IP Network |
+--------------------------------------------------------------------------------+--------------------------------+
| OpenStack Cluster (Compute nodes) can access ScaleIO Cluster | via a TCP/IP Network |
+--------------------------------------------------------------------------------+--------------------------------+
| Install ScaleIO Storage Data Client (SDC) in Controller and Compute Nodes | Plugin takes care of install |
+--------------------------------------------------------------------------------+--------------------------------+
Limitations
===========
Currently, Fuel does not support multi-backend storage. The following
table shows the currently supported versions and limitations:

.. image:: images/SIO_Support.png
   :width: 100%
Configuration
=============
Plugin files and directories:
+------------------------------+--------------------------------------------------------------------------------------------------------------+
| File/Directory | Description |
+==============================+==============================================================================================================+
| Deployment\_scripts | Folder that includes the bash/puppet manifests for deploying the services and roles required by the plugin |
+------------------------------+--------------------------------------------------------------------------------------------------------------+
| Deployment\_scripts/puppet | |
+------------------------------+--------------------------------------------------------------------------------------------------------------+
| environment\_config.yaml | Contains the ScaleIO plugin parameters/fields for the Fuel web UI |
+------------------------------+--------------------------------------------------------------------------------------------------------------+
| metadata.yaml | Contains the name, version and compatibility information for the ScaleIO plugin |
+------------------------------+--------------------------------------------------------------------------------------------------------------+
| pre\_build\_hook | Mandatory file - blank for the ScaleIO plugin |
+------------------------------+--------------------------------------------------------------------------------------------------------------+
| repositories/centos | Empty Directory, the plugin scripts will download the required CentOS packages |
+------------------------------+--------------------------------------------------------------------------------------------------------------+
| repositories/ubuntu | Empty Directory, not used |
+------------------------------+--------------------------------------------------------------------------------------------------------------+
| tasks.yaml | Contains the information about what scripts to run and how to run them |
+------------------------------+--------------------------------------------------------------------------------------------------------------+
Before starting a deployment, there are some things that you should
verify (a small connectivity-check sketch follows this list):

#. Your ScaleIO cluster can route the 10G storage network to all Compute
   nodes as well as the Cinder control/manager node.
#. An account on the ScaleIO cluster has been created to use as the
   OpenStack administrator account (use the login/password for this
   account as the san\_login/password settings).
#. The IP address of the ScaleIO cluster has been obtained.
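A minimal pre-flight check along these lines is sketched below. The addresses and
ports are placeholders (the actual MDM and gateway ports depend on your ScaleIO
installation); it only verifies basic TCP reachability from a controller or compute
node::

    import socket

    # Placeholder endpoints: ScaleIO Gateway (REST) and the two MDM IPs from the plugin settings.
    ENDPOINTS = {
        "gateway": ("192.168.0.10", 443),
        "primary MDM": ("192.168.0.11", 6611),
        "secondary MDM": ("192.168.0.12", 6611),
    }

    for name, (host, port) in ENDPOINTS.items():
        try:
            with socket.create_connection((host, port), timeout=5):
                print(f"{name}: reachable on {host}:{port}")
        except OSError as exc:
            print(f"{name}: NOT reachable on {host}:{port} ({exc})")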

View File

@ -1,121 +0,0 @@
.. raw:: pdf

   PageBreak

==========
User Guide
==========

#. Install the ScaleIO-Cinder plugin using the `Installation Guide <./installation.rst>`_.
#. Create an environment with the plugin enabled in the Fuel web UI, then
   open the Settings section and make sure the ScaleIO-Cinder section exists.
#. Add 3 nodes with the Controller role and 1 node with the Compute and
   another role:

   .. image:: images/installation/image006.png

#. Picture of the external ScaleIO cluster running:

   .. image:: images/installation/image007.png

#. Retrieve the external ScaleIO cluster information. For
   our example these are the configuration settings:

   .. image:: images/installation/image007.png

#. Use the ScaleIO cluster information to update the ScaleIO plugin
   settings:

   .. image:: images/installation/image009.png

#. Apply the network settings.
#. Use the networking settings that are appropriate for your
   environment. For our example we used the default settings provided
   by Fuel:

   .. image:: images/installation/image010.png

#. Run the network verification check:

   .. image:: images/installation/image011.png

#. Deploy the cluster:

   .. image:: images/installation/image012.png

#. Once the deployment has finished successfully, open the OpenStack Dashboard (Horizon):

   .. image:: images/installation/image013.png

#. Check the Storage tab under System Information and make sure the
   ScaleIO service is up and running:

   .. image:: images/installation/image014.png
ScaleIO Cinder plugin OpenStack operations
==========================================
Once the OpenStack cluster is set up, we can create and use ScaleIO volumes. This
is an example of how to attach a volume to a running VM (a scripted sketch of
the volume-creation step follows this list):

#. Log into the OpenStack cluster:

   .. image:: images/scaleio-cinder-install-6.PNG
      :alt: OpenStack Login

#. Review the Block Storage services by navigating to Admin -> System ->
   System Information. You should see the ScaleIO Cinder volume service.

   .. image:: images/scaleio-cinder-install-7.PNG
      :alt: Block Storage Services Verification

#. Review the system volumes by navigating to Admin -> System ->
   Volumes. You should see the ScaleIO volume type:

   .. image:: images/scaleio-cinder-install-8.PNG
      :alt: Volume Type Verification

#. Create a new OpenStack volume:

   .. image:: images/scaleio-cinder-install-9.PNG
      :alt: Volume Creation

#. View the newly created volume:

   .. image:: images/scaleio-cinder-install-10.PNG
      :alt: Volume Listing

#. In the ScaleIO Control Panel, you will see that no volumes have been
   mapped yet:

   .. image:: images/scaleio-cinder-install-11.PNG
      :alt: ScaleIO UI No Mapped Volumes

#. Once the volume is attached to a VM, the ScaleIO UI will reflect the
   mapping:

   .. image:: images/scaleio-cinder-install-12.png
      :alt: ScaleIO UI Mapped Volume
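The volume-creation step can also be driven through the API. This is a minimal sketch
using ``openstacksdk``; the cloud name ``fuel-scaleio`` and the volume type ``scaleio``
are placeholders for whatever your environment actually defines::

    import openstack

    # Connect using credentials from clouds.yaml (the cloud name is a placeholder).
    conn = openstack.connect(cloud="fuel-scaleio")

    # Create an 8 GB volume; the volume type name depends on how Cinder was configured.
    volume = conn.block_storage.create_volume(
        name="scaleio-demo-volume",
        size=8,
        volume_type="scaleio",
    )

    # Wait until Cinder reports the volume as available, then show the result.
    volume = conn.block_storage.wait_for_status(volume, status="available", wait=300)
    print(volume.id, volume.status)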

View File

@ -1,64 +0,0 @@
attributes:
  scaleio_Admin:
    value: ''
    label: 'Admin username'
    description: 'Type ScaleIO Admin username'
    weight: 5
    type: "text"
    regex:
      source: '^[\S]{4,}$'
      error: "You must provide a username with at least 4 characters"
  scaleio_Password:
    value: ''
    label: 'Admin Password'
    description: 'Type ScaleIO Admin password'
    weight: 10
    type: "password"
    regex:
      source: '^[\S]{4,}$'
      error: "You must provide a password with at least 4 characters"
  scaleio_GW:
    value: ''
    label: 'Gateway IP'
    description: 'Type the ScaleIO Gateway IP or hostname'
    weight: 15
    type: "text"
    regex:
      source: '\S'
      error: "Gateway IP cannot be empty"
  scaleio_mdm1:
    value: ''
    label: 'Primary MDM IP'
    description: 'Type the primary MDM IP or hostname'
    weight: 16
    type: "text"
    regex:
      source: '\S'
      error: "Primary MDM IP cannot be empty"
  scaleio_mdm2:
    value: ''
    label: 'Secondary MDM IP'
    description: 'Type the secondary MDM IP or hostname'
    weight: 17
    type: "text"
    regex:
      source: '\S'
      error: "Secondary MDM IP cannot be empty"
  protection_domain:
    value: ''
    label: 'Protection Domain'
    description: 'Type the Protection Domain you want to use for OpenStack'
    weight: 35
    type: "text"
    regex:
      source: '\S'
      error: "Protection Domain cannot be empty"
  storage_pool_1:
    value: ''
    label: 'Storage Pool'
    description: 'Type the Storage Pool you want to use for OpenStack'
    weight: 45
    type: "text"
    regex:
      source: '\S'
      error: "Storage Pool cannot be empty"

View File

@ -1,39 +0,0 @@
# Plugin name
name: scaleio-cinder
# Human-readable name for your plugin
title: ScaleIO Cinder plugin
# Plugin version
version: '1.5.0'
# Description
description: Enable EMC ScaleIO as the block storage backend
# Required fuel version
fuel_version: ['6.1','7.0']
# Specify license of your plugin
licenses: ['Apache License Version 2.0']
# Specify author or company name
authors: ['Magdy Salem, EMC', 'Adrian Moreno Martinez, EMC']
# A link to the plugin's page
homepage: 'https://github.com/stackforge/fuel-scaleio-cinder'
# Specify a group which your plugin implements, possible options:
# network, storage, storage::cinder, storage::glance, hypervisor
groups: []
# The plugin is compatible with releases in the list
releases:
  - os: centos
    version: 2014.2.2-6.1
    mode: ['ha', 'multinode']
    deployment_scripts_path: deployment_scripts/
    repository_path: repositories/centos
  - os: ubuntu
    version: 2014.2.2-6.1
    mode: ['ha', 'multinode']
    deployment_scripts_path: deployment_scripts/
    repository_path: repositories/ubuntu
  - os: ubuntu
    version: 2015.1.0-7.0
    mode: ['ha', 'multinode']
    deployment_scripts_path: deployment_scripts/
    repository_path: repositories/ubuntu
#Version of plugin package
package_version: '2.0.0'

View File

@ -1,5 +0,0 @@
#!/bin/bash
# Add here any actions that are required before the plugin build,
# such as building packages or downloading packages from mirrors.
# The script should return 0 if there were no errors.

View File

@ -1,55 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<repomd xmlns="http://linux.duke.edu/metadata/repo" xmlns:rpm="http://linux.duke.edu/metadata/rpm">
<revision>1450754058</revision>
<data type="filelists">
<checksum type="sha256">1c08af82e8ad2d64539ab49b62c54e5e6c6a0f0756a943ba6c0f89d4ece5136c</checksum>
<open-checksum type="sha256">35f31c37eeaf596efa22987d57d227a7bb369fc5d47ecd1d640b0c7f5835fa28</open-checksum>
<location href="repodata/1c08af82e8ad2d64539ab49b62c54e5e6c6a0f0756a943ba6c0f89d4ece5136c-filelists.xml.gz"/>
<timestamp>1450754059</timestamp>
<size>378</size>
<open-size>1003</open-size>
</data>
<data type="primary">
<checksum type="sha256">3f2418a8019919b6eb1b825da000b90abb7a26ba1741e4f3b64fab9a4c76802e</checksum>
<open-checksum type="sha256">96a50c0d1b14905953cc6c59710879768b723cb4d1ef1b6c65e101685ac72f65</open-checksum>
<location href="repodata/3f2418a8019919b6eb1b825da000b90abb7a26ba1741e4f3b64fab9a4c76802e-primary.xml.gz"/>
<timestamp>1450754059</timestamp>
<size>1069</size>
<open-size>3553</open-size>
</data>
<data type="primary_db">
<checksum type="sha256">735377c76ec7cb64030af7e0fd56137693a250d78d9be9f46c80cd75ebda41a9</checksum>
<open-checksum type="sha256">fb5dd5c72a77eb670c83b9bd3564787553525157a8eb55a6a5120590fc3582e4</open-checksum>
<location href="repodata/735377c76ec7cb64030af7e0fd56137693a250d78d9be9f46c80cd75ebda41a9-primary.sqlite.bz2"/>
<timestamp>1450754059.26</timestamp>
<database_version>10</database_version>
<size>3169</size>
<open-size>24576</open-size>
</data>
<data type="other_db">
<checksum type="sha256">49e88ece5d5d1e36f7c3ce9f6e94e2bf7b0cf942f33cf114a0558a41e70be6d3</checksum>
<open-checksum type="sha256">af551d95cb3e21338f3aef8538595842a3615537cf43e12000ec0b7ffbb6eaac</open-checksum>
<location href="repodata/49e88ece5d5d1e36f7c3ce9f6e94e2bf7b0cf942f33cf114a0558a41e70be6d3-other.sqlite.bz2"/>
<timestamp>1450754059.14</timestamp>
<database_version>10</database_version>
<size>652</size>
<open-size>6144</open-size>
</data>
<data type="other">
<checksum type="sha256">a9f0c021ced8ac9ee2233b6f88f204e72db1bbe17103fbb048ad9fbb5e5095b1</checksum>
<open-checksum type="sha256">2b428c4c2cd6e583c89d0613aaaf4de4fac1c6216e35fd43ca260915be748572</open-checksum>
<location href="repodata/a9f0c021ced8ac9ee2233b6f88f204e72db1bbe17103fbb048ad9fbb5e5095b1-other.xml.gz"/>
<timestamp>1450754059</timestamp>
<size>249</size>
<open-size>306</open-size>
</data>
<data type="filelists_db">
<checksum type="sha256">54d2af84584243256c5c31c0aaa1c99d826d5ec305f362b0842bc56ed0a31ac8</checksum>
<open-checksum type="sha256">ae7b880e69c74359a00a090a2ebdf21dc7e27d73c495a9f15ecbf643d810b78e</open-checksum>
<location href="repodata/54d2af84584243256c5c31c0aaa1c99d826d5ec305f362b0842bc56ed0a31ac8-filelists.sqlite.bz2"/>
<timestamp>1450754059.17</timestamp>
<database_version>10</database_version>
<size>1050</size>
<open-size>7168</open-size>
</data>
</repomd>

Binary file not shown.

View File

@ -1,3 +0,0 @@
Origin: Magdy Salem, EMC; Adrian Moreno Martinez, EMC
Label: scaleio-cinder
Version: 1.0

View File

@ -1,120 +0,0 @@
===========================================================
Fuel plugin for ScaleIO-Cinder clusters as a Cinder backend
===========================================================
The ScaleIO-Cinder plugin for Fuel extends Mirantis OpenStack functionality by adding
support for ScaleIO clusters as a Cinder backend.
Problem description
===================
Currently, Fuel has no support for ScaleIO clusters as block storage for
OpenStack environments. The ScaleIO-Cinder plugin aims to provide that support:
it configures OpenStack environments to use an existing ScaleIO cluster.
Proposed change
===============
Implement a Fuel plugin that will configure the ScaleIO-Cinder driver for
Cinder on all Controller nodes and Compute nodes. All Cinder services run
on the controllers, so no additional Cinder node is required in the environment.
Alternatives
------------
None
Data model impact
-----------------
None
REST API impact
---------------
None
Upgrade impact
--------------
None
Security impact
---------------
None
Notifications impact
--------------------
None
Other end user impact
---------------------
None
Performance Impact
------------------
ScaleIO storage clusters provide high-performance block storage for
OpenStack environments, so enabling the ScaleIO-Cinder driver in OpenStack
greatly improves OpenStack storage performance.
Other deployer impact
---------------------
None
Developer impact
----------------
None
Implementation
==============
The plugin generates the appropriate cinder.conf to enable the ScaleIO-Cinder
cluster within OpenStack. The ScaleIO driver and filter files are required; the
plugin ships these files and copies them to the correct locations.
The cinder-volume service is installed on all Controller nodes and Compute nodes.
All instances of cinder-volume use the same “host” parameter in the cinder.conf
file. This is required so that any cinder-volume instance can manage all volumes
in the environment.
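As a rough illustration of the shared ``host`` requirement, a check like the following
could be run on each node. This is only a sketch: ``host`` in ``[DEFAULT]`` is the
standard Cinder option, but the expected value and the configuration path shown here
are placeholders for your deployment::

    import configparser

    EXPECTED_HOST = "scaleio-cinder"      # the value all nodes are expected to share (example)

    conf = configparser.ConfigParser()
    conf.read("/etc/cinder/cinder.conf")  # usual location of the Cinder configuration

    actual = conf.get("DEFAULT", "host", fallback=None)
    if actual != EXPECTED_HOST:
        raise SystemExit(f"host is {actual!r}, expected {EXPECTED_HOST!r}; "
                         "cinder-volume instances must share the same host value")
    print("cinder.conf host matches the shared value")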
Assignee(s)
-----------
| Patrick Butler Monterde <Patrick.ButlerMonterde@emc.com>
| Magdy Salem <magdy.salem@emc.com>
| Adrián Moreno <Adrian.Moreno@emc.com>
Work Items
----------
* Implement the Fuel plugin.
* Implement the Puppet manifests.
* Testing.
* Write the documentation.
Dependencies
============
* Fuel 6.1 and higher.
Testing
=======
* Prepare a test plan.
* Test the plugin by deploying environments with all Fuel deployment modes.
Documentation Impact
====================
* Deployment Guide (how to install the storage backends, how to prepare an
environment for installation, how to install the plugin, how to deploy an
OpenStack environment with the plugin).
* User Guide (which features the plugin provides, how to use them in the
deployed OpenStack environment).
* Test Plan.
* Test Report.

View File

@ -1,28 +0,0 @@
# These tasks will be applied on controller nodes;
# here you can also specify several roles, for example
# ['cinder', 'compute'] will be applied only on
# cinder and compute nodes

# Install ScaleIO on compute nodes
- role: ['compute']
  stage: post_deployment/2010
  type: puppet
  parameters:
    puppet_manifest: install_scaleio_compute.pp
    puppet_modules: puppet/:/etc/puppet/modules
    timeout: 600

# Install ScaleIO on controllers
- role: ['controller']
  stage: post_deployment/2110
  type: puppet
  parameters:
    puppet_manifest: install_scaleio_controller.pp
    puppet_modules: puppet/:/etc/puppet/modules
    timeout: 600

# Install ScaleIO on the primary controller
- role: ['primary-controller']
  stage: post_deployment/2120
  type: puppet
  parameters:
    puppet_manifest: install_scaleio_controller.pp
    puppet_modules: puppet/:/etc/puppet/modules
    timeout: 600
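Fuel applies these post_deployment tasks in ascending stage priority (2010, then 2110,
then 2120). A minimal sketch of inspecting that ordering from the file, assuming it is
saved as ``tasks.yaml`` and that PyYAML is available::

    import yaml

    with open("tasks.yaml") as handle:
        tasks = yaml.safe_load(handle)

    # Sort by the numeric priority embedded in the stage name, e.g. "post_deployment/2010".
    def priority(task):
        return int(task["stage"].rsplit("/", 1)[1])

    for task in sorted(tasks, key=priority):
        print(priority(task), task["role"], task["parameters"]["puppet_manifest"])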