Uploading code from temporary repo

Change-Id: Ice66f017779334aa6c9ff8bdb0fe3163d374eaf1
Moreno, Adrian 2015-10-21 16:25:15 +02:00
parent df8426b053
commit ca62bcfeb6
56 changed files with 2403 additions and 0 deletions

59
.gitignore vendored Normal file

@ -0,0 +1,59 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
# C extensions
*.so
# Distribution / packaging
.Python
env/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
*.egg-info/
.installed.cfg
*.egg
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*,cover
# Translations
*.mo
*.pot
# Django stuff:
*.log
# Sphinx documentation
docs/_build/
# PyBuilder
target/
*.db
*.~vsd

202
LICENSE Normal file

@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

289
README.md Normal file

@ -0,0 +1,289 @@
# Fuel Plugin ScaleIO on Cinder for OpenStack
**Build:** <a href="http://buildserver.emccode.com/viewType.html?-buildTypeId=FuelPluginsForScaleIO_FuelPluginScaleioCinder&guest=1"><img src="http://buildserver.emccode.com/app/rest/builds/buildType:(id:FuelPluginsForScaleIO_FuelPluginScaleioCinder)/statusIcon"/></a>
### Contents
- [Introduction](#introduction)
- [Requirements](#requirements)
- [Limitations](#limitations)
- [Configuration](#configuration)
- [ScaleIO Cinder plugin installation](#scaleio-cinder-plugin-installation)
- [ScaleIO Cinder plugin configuration](#scaleio-cinder-plugin-configuration)
- [ScaleIO Cinder plugin OpenStack operations](#scaleio-cinder-plugin-openstack-operations)
- [Contributions](#contributions)
- [License](#licensing)
## Introduction
The Fuel plugin for ScaleIO enables OpenStack to work with an **external** ScaleIO deployment. It extends Mirantis OpenStack by adding support for ScaleIO block storage.
ScaleIO is a software-only solution that uses existing servers' local disks and LAN to create a virtual SAN with all the benefits of external storage, but at a fraction of the cost and complexity. ScaleIO turns the servers' existing local internal storage into shared block storage.
The following diagram shows the plugin's high level architecture:
![ScaleIO Fuel plugin high level architecture](https://github.com/emccode/fuel-plugin-scaleio-cinder-test/blob/master/doc/images/fuel-plugin-scaleio-cinder-1.jpg)
The figure shows the OpenStack roles and services we need:
|Service/Role Name | Description | Installed in |
|------------|-------------|--------------|
|Controller Node + Cinder Host |A node that runs network, volume, API, scheduler, and image services. Each service may be broken out into separate nodes for scalability or availability. In addition, this node is a Cinder host that runs the Cinder volume manager.|OpenStack Cluster|
|Compute Node |A node that runs the nova-compute daemon that manages Virtual Machine (VM) instances that provide a wide range of services, such as web applications and analytics.|OpenStack Cluster|
The **external ScaleIO cluster** runs the following roles and services:
|Service Name | Description | Installed in |
|------------|-------------|--------------|
|ScaleIO Gateway (REST API)|The ScaleIO Gateway service includes the REST API used to send storage commands to the ScaleIO cluster; in addition, this service handles authentication and certificate management.|ScaleIO Cluster|
|Meta-data Manager (MDM)|Configures and monitors the ScaleIO system. The MDM can be configured in redundant Cluster Mode, with three members on three servers, or in Single Mode on a single server.|ScaleIO Cluster|
|Tie Breaker (TB)|The Tie Breaker service helps determine which MDM acts as the master and which as the slave.|ScaleIO Cluster|
|Storage Data Server (SDS)|Manages the capacity of a single server and acts as a back-end for data access. The SDS is installed on all servers contributing storage devices to the ScaleIO system. These devices are accessed through the SDS.|ScaleIO Cluster|
|Storage Data Client (SDC)|A lightweight device driver that exposes ScaleIO volumes as block devices to the application that resides on the same server on which the SDC is installed.|OpenStack Cluster|
**Note:** for more information on how to deploy a ScaleIO cluster, please refer to the ScaleIO manuals located in the download packages for your platform: [http://www.emc.com/products-solutions/trial-software-download/scaleio.htm](http://www.emc.com/products-solutions/trial-software-download/scaleio.htm "Download ScaleIO") and/or [watch the demo](https://community.emc.com/docs/DOC-45019 "Watch our demo to learn how to download, install, and configure ScaleIO")
## Requirements
These are the plugin requirements:
| Requirement | Version/Comment |
|----------------------------------------------------------|-----------------|
| Mirantis OpenStack compatibility | >= 6.1 |
| ScaleIO Version | >= 1.32 |
| Controller and Compute Nodes' Operating System | CentOS/RHEL 6.5 |
| OpenStack Cluster (Controller/cinder-volume node) can access ScaleIO Cluster | via a TCP/IP Network |
| OpenStack Cluster (Compute nodes) can access ScaleIO Cluster| via a TCP/IP Network |
| Install ScaleIO Storage Data Client (SDC) on Controller and Compute nodes | The plugin takes care of the installation |
## Limitations
Currently Fuel doesn't support multi-backend storage.
## Configuration
Plugin files and directories:
|File/Directory|Description|
|--------------|-----------|
|deployment_scripts|Folder that includes the bash/Puppet manifests for deploying the services and roles required by the plugin|
|deployment_scripts/puppet|Puppet modules used by the deployment scripts|
|environment_config.yaml|Contains the ScaleIO plugin parameters/fields for the Fuel web UI|
|metadata.yaml|Contains the name, version, and compatibility information for the ScaleIO plugin|
|pre_build_hook|Mandatory file - blank for the ScaleIO plugin|
|repositories/centos|Empty directory; the plugin scripts download the required CentOS packages|
|repositories/ubuntu|Empty directory, not used|
|tasks.yaml|Contains information about which scripts to run and how to run them|
This Fuel plugin installs the ScaleIO Storage Data Client (SDC) service on each Controller node and Compute node in the cluster. This is necessary for the VMs on each Compute node to use ScaleIO storage:
![Plugin Architecture ](https://github.com/emccode/fuel-plugin-scaleio-cinder-test/blob/master/doc/images/fuel-plugin-scaleio-cinder-2.jpg)
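For reference, the plugin's Puppet manifests (included later in this commit) install the SDC package on each Controller and Compute node roughly as sketched below; the MDM IP addresses are placeholders and come from the plugin settings in a real deployment:

```bash
# Rough sketch of the SDC installation the plugin performs on each node.
# The two MDM IPs below are placeholders for the primary and secondary MDM addresses.
MDM_IP=192.168.0.10,192.168.0.11 yum install -y EMC-ScaleIO-sdc
```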
Before starting a deployment, there are some things you should verify:
1. Your ScaleIO cluster can route its 10G storage network to all Compute nodes
as well as to the Cinder control/manager node.
2. Create an account on the ScaleIO cluster to use as the OpenStack administrator
account (use this account's login/password as the san_login/password settings).
3. Obtain the Gateway and MDM IP addresses from the ScaleIO cluster (a quick connectivity check is sketched below).
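A quick way to confirm that the OpenStack nodes can reach the ScaleIO Gateway is to request a REST API login token, which is the same call the plugin's drivers perform. This is only a sketch; the gateway address and credentials below are placeholders:

```bash
# Placeholder gateway address and credentials -- substitute your own values.
# -k skips certificate verification, matching the drivers' verify_server_certificate=False setting.
curl -k -u admin:ScaleIOAdminPassword https://192.168.0.20/api/login
# A quoted token string in the response means the gateway is reachable and the credentials are valid.
```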
### ScaleIO Cinder plugin installation
The first step is to install the ScaleIO Cinder plugin on the Fuel Master node:
1. Download the plugin from the [releases section](https://github.com/emccode/fuel-plugin-scaleio-cinder-test/releases "Releases Page") or from the [Fuel plugins catalog](https://www.mirantis.com/products/openstack-drivers-and-plugins/fuel-plugins/ "Fuel Plugins Catalog").
2. Copy the plugin to an already installed Fuel Master node. If you do not have a Fuel Master node yet, follow the instructions in the official Mirantis OpenStack documentation:
`scp fuel-plugin-scaleio-cinder-1.0.noarch.rpm root@<the_Fuel_Master_node_IP>:/tmp`
3. Log in to the Fuel Master node and install the plugin (assuming it was downloaded to the `/tmp` directory):
`cd /tmp`
`fuel plugins --install /tmp/fuel-plugin-scaleio-cinder-1.0.noarch.rpm`
4. Verify that the plugin has been installed successfully:
![Plugin Installation](https://github.com/emccode/fuel-plugin-scaleio-cinder-test/blob/master/doc/images/scaleio-cinder-install-1.png)
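If you prefer the command line, you can also confirm step 4 with the Fuel CLI on the Master node; assuming the standard Fuel 6.1 client, the check might look like this:

```bash
# List installed plugins on the Fuel Master and confirm scaleio-cinder 1.0.0 appears.
fuel plugins --list
```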
### ScaleIO Cinder plugin configuration
Once the plugin has been installed on the Fuel Master, configure the nodes and set the plugin parameters:
1. Start by creating a new OpenStack environment following the [Mirantis instructions](https://docs.mirantis.com/openstack/fuel/fuel-6.1/user-guide.html#create-a-new-openstack-environment "Creating a new OpenStack environment")
2. Configure your environment following the [Mirantis OpenStack configuration documentation](https://docs.mirantis.com/openstack/fuel/fuel-6.1/user-guide.html#configure-your-environment)
![OpenStack Node configuration](https://github.com/emccode/fuel-plugin-scaleio-cinder-test/blob/master/doc/images/scaleio-cinder-install-2.png)
3. Open the **Settings** tab of the Fuel web UI and scroll down the page. Select the plugin checkbox to enable the ScaleIO Cinder plugin for Fuel:
![ScaleIO Cluster Parameters](https://github.com/emccode/fuel-plugin-scaleio-cinder-test/blob/master/doc/images/scaleio-cinder-install-4.PNG)
**Plugin's parameters explanation:**
|Parameter Name |Parameter Description|
|---------------------|---------------------|
|ScaleIO Repo URL| The URL of the ScaleIO sources repository. This is the URL of the ScaleIO zip file that contains the ScaleIO product. **The URL can point to an external repository (requires external network access to that repository) or to an internal server on the local network (a local webserver)**. For our example we are using the URL of the [ScaleIO Linux download](http://downloads.emc.com/emc-com/usa/ScaleIO/ScaleIO_Linux_SW_Download.zip "ScaleIO Linux Download") located in the ScaleIO trial download at [EMC.com](http://www.emc.com/products-solutions/trial-software-download/scaleio.htm "ScaleIO Trial Download"). |
|userName|The ScaleIO User Name|
|Password|The ScaleIO password for the selected user name|
|ScaleIO GW IP|The IP address of the ScaleIO Gateway service|
|ScaleIO Primary IP|The ScaleIO cluster's primary IP address|
|ScaleIO Secondary IP|The ScaleIO cluster's secondary IP address|
|ScaleIO protection domain|Name of the ScaleIO protection domain|
|ScaleIO storage pool 1|Name of the first storage pool|
|Fault sets list|List of the fault sets (comma separated)|
**Note:** Please refer to the ScaleIO documentation for more information on these parameters.
This is an example of the ScaleIO configuration parameters populated (a sketch of the resulting configuration file appears at the end of this section):
![ScaleIO Cluster Parameters](https://github.com/emccode/fuel-plugin-scaleio-cinder-test/blob/master/doc/images/scaleio-cinder-install-5.PNG)
4. After the configuration is done, you can add the nodes to the OpenStack deployment. This configuration requires a minimum of two nodes:
|Service/Role Name | Description |
|------------|-------------|
|Controller Node + Cinder Host |A node that runs network, volume, API, scheduler, and image services. Each service may be broken out into separate nodes for scalability or availability. In addition, this node is a Cinder host that runs the Cinder volume manager.|
|Compute Node |A node that runs the nova-compute daemon that manages Virtual Machine (VM) instances that provide a wide range of services, such as web applications and analytics.|
![OpenStack Node Deployment](https://github.com/emccode/fuel-plugin-scaleio-cinder-test/blob/master/doc/images/scaleio-cinder-install-3.PNG)
**Note:** you can run the [network verification](https://docs.mirantis.com/openstack/fuel/fuel-6.1/user-guide.html#verify-networks) check and [deploy the environment](https://docs.mirantis.com/openstack/fuel/fuel-6.1/user-guide.html#deploy-changes).
After this is complete you should see a success message:
![OpenStack Deployment Successful](https://github.com/emccode/fuel-plugin-scaleio-cinder-test/blob/master/doc/images/scaleio-cinder-install-complete.jpg)
**Note:** It may take an hour or more for the OpenStack deployment to complete, depending on your hardware configuration.
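For reference, the plugin's Puppet manifests render the settings above into `/etc/cinder/cinder_scaleio.config` on the controller node. A sketch of the generated file, with placeholder values, is shown below:

```bash
# Inspect the configuration generated by the plugin on the controller node.
# The commented lines illustrate its expected shape; actual values come from the plugin settings.
cat /etc/cinder/cinder_scaleio.config
# [scaleio]
# rest_server_ip=<ScaleIO GW IP>
# rest_server_username=<userName>
# rest_server_password=<Password>
# protection_domain_name=<ScaleIO protection domain>
# storage_pools=<protection domain>:<storage pool 1>
# storage_pool_name=<ScaleIO storage pool 1>
# round_volume_capacity=True
# force_delete=True
# verify_server_certificate=False
```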
### ScaleIO Cinder plugin OpenStack operations
Once the OpenStack cluster is set up, we can create ScaleIO volumes. This is an example of how to attach a volume to a running VM:
1. Log in to the OpenStack cluster:
![OpenStack Login](https://github.com/emccode/fuel-plugin-scaleio-cinder-test/blob/master/doc/images/scaleio-cinder-install-6.PNG)
2. Review the block storage services by navigating to Admin -> System -> System Information. You should see the ScaleIO Cinder volume service.
![Block Storage Services Verification](https://github.com/emccode/fuel-plugin-scaleio-cinder-test/blob/master/doc/images/scaleio-cinder-install-7.PNG)
3. Review the system volumes by navigating to Admin -> System -> Volumes. You should see the ScaleIO volume type:
![Volume Type Verification](https://github.com/emccode/fuel-plugin-scaleio-cinder-test/blob/master/doc/images/scaleio-cinder-install-8.PNG)
4. Create a new OpenStack Volume:
![Volume Creation](https://github.com/emccode/fuel-plugin-scaleio-cinder-test/blob/master/doc/images/scaleio-cinder-install-9.PNG)
5. View the newly created Volume:
![Volume Listing](https://github.com/emccode/fuel-plugin-scaleio-cinder-test/blob/master/doc/images/scaleio-cinder-install-10.PNG)
6. In the ScaleIO Control Panel, you will see that no Volumes have been mapped yet:
![ScaleIO UI No mapped Volumes](https://github.com/emccode/fuel-plugin-scaleio-cinder-test/blob/master/doc/images/scaleio-cinder-install-11.PNG)
7. Once the Volume is attached to a VM, the ScaleIO UI will reflect the mapping:
![ScaleIO UI Mapped Volume](https://github.com/emccode/fuel-plugin-scaleio-cinder-test/blob/master/doc/images/scaleio-cinder-install-12.png)
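The same workflow can also be driven from the command line on a controller node. A rough equivalent of steps 4 through 7, with placeholder names and IDs, might look like this:

```bash
# Source the OpenStack credentials (the path may differ in your environment).
source /root/openrc

# Create a 10 GB volume backed by the ScaleIO Cinder driver.
cinder create --display-name scaleio-vol-01 10

# Wait for the volume to reach the "available" state.
cinder list

# Attach the volume to a running instance; the ScaleIO UI should then show the mapping.
nova volume-attach <instance-id> <volume-id> /dev/vdb
```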
## Contributions
The Fuel plugin for ScaleIO project is licensed under the [Apache 2.0](http://www.apache.org/licenses/LICENSE-2.0) License. In order to contribute to the project, you will need to do two things:
1. License your contribution under the [DCO](http://elinux.org/Developer_Certificate_Of_Origin "Developer Certificate of Origin") + [Apache 2.0](http://www.apache.org/licenses/LICENSE-2.0)
2. Identify the type of contribution in the commit message
### 1. Licensing your Contribution:
As part of the contribution, the code comments (or license file) associated with the contribution must include the following:
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
This code is provided under the Developer Certificate of Origin - [Insert Name], [Date (e.g., 1/1/15)]
**For example:**
A contribution from **Joe Developer**, an **independent developer**, submitted on **May 15th, 2015**, should have an associated license (as a file and/or code comments) like this:
Copyright (c) 2015, Joe Developer
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
This code is provided under the Developer Certificate of Origin - Joe Developer, May 15th 2015
### 2. Identifying the Type of Contribution
In addition to identifying an open source license in the documentation, **all Git Commit messages** associated with a contribution must identify the type of contribution (i.e., Bug Fix, Patch, Script, Enhancement, Tool Creation, or Other).
## Licensing
The Fuel plugin for ScaleIO is licensed under the [Apache 2.0](http://www.apache.org/licenses/LICENSE-2.0) license.
Copyright (c) 2015, EMC Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
## Support
Please file bugs and issues on the GitHub issues page. For more general discussions, you can contact the EMC Code team on <a href="https://groups.google.com/forum/#!forum/emccode-users">Google Groups</a> or ask questions tagged with **EMC** on <a href="https://stackoverflow.com">Stack Overflow</a>. The code and documentation are released with no warranties or SLAs and are intended to be supported through a community-driven process.

35
checksums.sha1 Normal file

@ -0,0 +1,35 @@
c700a8b9312d24bdc57570f7d6a131cf63d89016 LICENSE
f15e1db1b640c3ae73cc129934f9d914440b0250 README.md
bffb5460de132beba188914cb0dcc14e9dc4e36b deployment_scripts/deploy.sh
7eb8ef1f761b13f04c8beb1596bdb95bc281b54a deployment_scripts/install_scaleio_compute.pp
2f62d18be6c2a612f8513815c95d2ca3a20600e6 deployment_scripts/install_scaleio_controller.pp
76f95bec5ebca8a29423a51be08241f890cccc24 deployment_scripts/puppet/install_scaleio_compute/files/scaleio.filters
c9efb58603cc6e448c2974c4a095cda14f13b431 deployment_scripts/puppet/install_scaleio_compute/files/scaleiolibvirtdriver.py
056e1427d651c8991c7caa3d82e62a3f0a5959d3 deployment_scripts/puppet/install_scaleio_compute/manifests/init.pp
76f95bec5ebca8a29423a51be08241f890cccc24 deployment_scripts/puppet/install_scaleio_controller/files/scaleio.filters
8dbfc34be2c4b4348736306de01769f3daf78b11 deployment_scripts/puppet/install_scaleio_controller/files/scaleio.py
97151193ca7cafc39bb99e11175c2bd8e07410e1 deployment_scripts/puppet/install_scaleio_controller/lib/puppet/parser/functions/get_fault_set.rb
ea0175506182a5d1265060ebca7c3c439e446042 deployment_scripts/puppet/install_scaleio_controller/lib/puppet/parser/functions/get_sds_device_pairs.rb
fe5b7322b0f4d1b18959d6d399b20f98568e30eb deployment_scripts/puppet/install_scaleio_controller/lib/puppet/provider/scaleio_cluster_create/init_cluster.rb
015351dfe07ca666e4c50186b09b89abcaab5959 deployment_scripts/puppet/install_scaleio_controller/lib/puppet/type/scaleio_cluster_create.rb
f46c0cf37a4e7c3a9f0d8a4d1d5c9b0fdd567692 deployment_scripts/puppet/install_scaleio_controller/manifests/init.pp
2cf472314221fbc1520f9ec76c0eb47570a2f444 deployment_scripts/puppet/install_scaleio_controller/templates/CentOS-Base.repo
d280a227cfb05d67795d1a03bacfd781900b6134 deployment_scripts/puppet/install_scaleio_controller/templates/cinder_scaleio.config.erb
29162123f2ad50753d7f5cb3be9d5af5687de10b deployment_scripts/puppet/install_scaleio_controller/templates/epel.repo
d501258114fecc5e677b42486f91436ce24bf912 deployment_scripts/puppet/install_scaleio_controller/templates/gatewayUser.properties.erb
e644fa23da337234dfa78e03fd2f2be8162e5617 deployment_scripts/puppet/remove_scaleio_repo/manifests/init.pp
73f9232026d4bd9e74f8e97afadfc972044c64cf deployment_scripts/remove_scaleio_repo.pp
0b552e1a89b852857efe0b6fe1c368feb3870dd9 environment_config.yaml
ede2ec1bf0bdb1455f3a0b56901ef27ed214645d metadata.yaml
83c3d6d1526da89aed2fc1e24ec8bfacb4a3ea1e pre_build_hook
da39a3ee5e6b4b0d3255bfef95601890afd80709 repositories/centos/.gitkeep
7246778c54a204f01064348b067abeb4a766a24b repositories/centos/repodata/2daa2f7a904d6ae04d81abc07d2ecb3bc3d8244a1e78afced2c94994f1b5f3ee-filelists.sqlite.bz2
ad0107628e9b6dd7a82553ba5cb447388e50900a repositories/centos/repodata/401dc19bda88c82c403423fb835844d64345f7e95f5b9835888189c03834cc93-filelists.xml.gz
1eb13a25318339d9e8157f0bf80419c019fa5000 repositories/centos/repodata/6bf9672d0862e8ef8b8ff05a2fd0208a922b1f5978e6589d87944c88259cb670-other.xml.gz
ebe841ac4c94ae950cfc8f5f80bc6707eb39e456 repositories/centos/repodata/ad36b2b9cd3689c29dcf84226b0b4db80633c57d91f50997558ce7121056e331-primary.sqlite.bz2
f7affeb9ed7e353556e43caf162660cae95d8d19 repositories/centos/repodata/d5630fb9d7f956c42ff3962f2e6e64824e5df7edff9e08adf423d4c353505d69-other.sqlite.bz2
84b124bc4de1c04613859bdb7af8d5fef021e3bb repositories/centos/repodata/dabe2ce5481d23de1f4f52bdcfee0f9af98316c9e0de2ce8123adeefa0dd08b9-primary.xml.gz
e50be018e61c5d5479cd6734fc748a821440daf8 repositories/centos/repodata/repomd.xml
da39a3ee5e6b4b0d3255bfef95601890afd80709 repositories/ubuntu/.gitkeep
cbecb9edd9e08fbebf280633bc72c69ff735b8c7 repositories/ubuntu/Packages.gz
25e9290ad1ca50f8346c3beb59c6cdcdef7ecca2 tasks.yaml

deployment_scripts/deploy.sh Normal file

@ -0,0 +1,4 @@
#!/bin/bash
# It's a script which deploys your plugin
echo scaleio > /tmp/scaleio

deployment_scripts/install_scaleio_compute.pp Normal file

@ -0,0 +1,2 @@
$plugin_settings = hiera('scaleio-cinder')
class {'install_scaleio_compute': }

deployment_scripts/install_scaleio_controller.pp Normal file

@ -0,0 +1,2 @@
$plugin_settings = hiera('scaleio-cinder')
class {'install_scaleio_controller': }

deployment_scripts/puppet/install_scaleio_compute/files/scaleio.filters Normal file

@ -0,0 +1,3 @@
[Filters]
drv_cfg: CommandFilter, /opt/emc/scaleio/sdc/bin/drv_cfg, root

deployment_scripts/puppet/install_scaleio_compute/files/scaleiolibvirtdriver.py Normal file

@ -0,0 +1,329 @@
# Copyright (c) 2013 EMC Corporation
# All Rights Reserved
# This software contains the intellectual property of EMC Corporation
# or is licensed to EMC Corporation from third parties. Use of this
# software and the intellectual property contained therein is expressly
# limited to the terms and conditions of the License Agreement under which
# it is provided by or on behalf of EMC.
import glob
import hashlib
import os
import time
import urllib2
import urlparse
import requests
import json
import re
import sys
import urllib
from oslo.config import cfg
from nova import exception
from nova.openstack.common.gettextutils import _
from nova.openstack.common import log as logging
from nova.openstack.common import loopingcall
from nova.openstack.common import processutils
from nova import paths
from nova.storage import linuxscsi
from nova import utils
from nova.virt.libvirt import config as vconfig
from nova.virt.libvirt import utils as virtutils
from nova.virt.libvirt.volume import LibvirtBaseVolumeDriver
LOG = logging.getLogger(__name__)
volume_opts = [
cfg.IntOpt('num_iscsi_scan_tries',
default=3,
help='number of times to rescan iSCSI target to find volume'),
cfg.IntOpt('num_iser_scan_tries',
default=3,
help='number of times to rescan iSER target to find volume'),
cfg.StrOpt('rbd_user',
help='the RADOS client name for accessing rbd volumes'),
cfg.StrOpt('rbd_secret_uuid',
help='the libvirt uuid of the secret for the rbd_user'
'volumes'),
cfg.StrOpt('nfs_mount_point_base',
default=paths.state_path_def('mnt'),
help='Dir where the nfs volume is mounted on the compute node'),
cfg.StrOpt('nfs_mount_options',
help='Mount options passed to the nfs client. See section '
'of the nfs man page for details'),
cfg.IntOpt('num_aoe_discover_tries',
default=3,
help='number of times to rediscover AoE target to find volume'),
cfg.StrOpt('glusterfs_mount_point_base',
default=paths.state_path_def('mnt'),
help='Dir where the glusterfs volume is mounted on the '
'compute node'),
cfg.BoolOpt('libvirt_iscsi_use_multipath',
default=False,
help='use multipath connection of the iSCSI volume'),
cfg.BoolOpt('libvirt_iser_use_multipath',
default=False,
help='use multipath connection of the iSER volume'),
cfg.StrOpt('scality_sofs_config',
help='Path or URL to Scality SOFS configuration file'),
cfg.StrOpt('scality_sofs_mount_point',
default='$state_path/scality',
help='Base dir where Scality SOFS shall be mounted'),
cfg.ListOpt('qemu_allowed_storage_drivers',
default=[],
help='Protocols listed here will be accessed directly '
'from QEMU. Currently supported protocols: [gluster]')
]
CONF = cfg.CONF
CONF.register_opts(volume_opts)
OK_STATUS_CODE=200
VOLUME_NOT_MAPPED_ERROR=84
VOLUME_ALREADY_MAPPED_ERROR=81
class LibvirtScaleIOVolumeDriver(LibvirtBaseVolumeDriver):
"""Class implements libvirt part of volume driver for ScaleIO cinder driver."""
local_sdc_id = None
mdm_id = None
pattern3 = None
def __init__(self, connection):
"""Create back-end to nfs."""
LOG.warning("ScaleIO libvirt volume driver INIT")
super(LibvirtScaleIOVolumeDriver,
self).__init__(connection, is_block_dev=False)
def find_volume_path(self, volume_id):
LOG.info("looking for volume %s" % volume_id)
#look for the volume in /dev/disk/by-id directory
disk_filename = ""
tries = 0
while not disk_filename:
if (tries > 15):
raise exception.NovaException("scaleIO volume {0} not found at expected path ".format(volume_id))
by_id_path = "/dev/disk/by-id"
if not os.path.isdir(by_id_path):
LOG.warn("scaleIO volume {0} not yet found (no directory /dev/disk/by-id yet). Try number: {1} ".format(volume_id, tries))
tries = tries + 1
time.sleep(1)
continue
filenames = os.listdir(by_id_path)
LOG.warning("Files found in {0} path: {1} ".format(by_id_path, filenames))
for filename in filenames:
if (filename.startswith("emc-vol") and filename.endswith(volume_id)):
disk_filename = filename
if not disk_filename:
LOG.warn("scaleIO volume {0} not yet found. Try number: {1} ".format(volume_id, tries))
tries = tries + 1
time.sleep(1)
if (tries != 0):
LOG.warning("Found scaleIO device {0} after {1} retries ".format(disk_filename, tries))
full_disk_name = by_id_path + "/" + disk_filename
LOG.warning("Full disk name is " + full_disk_name)
return full_disk_name
# path = os.path.realpath(full_disk_name)
# LOG.warning("Path is " + path)
# return path
def _get_client_id(self, server_ip, server_port, server_username, server_password, server_token, sdc_ip):
request = "https://" + server_ip + ":" + server_port + "/api/types/Client/instances/getByIp::" + sdc_ip + "/"
LOG.info("ScaleIO get client id by ip request: %s" % request)
r = requests.get(request, auth=(server_username, server_token), verify=False)
r = self._check_response(r, request, server_ip, server_port, server_username, server_password, server_token)
sdc_id = r.json()
if (sdc_id == '' or sdc_id is None):
msg = ("Client with ip %s wasn't found " % (sdc_ip))
LOG.error(msg)
raise exception.NovaException(data=msg)
if (r.status_code != 200 and "errorCode" in sdc_id):
msg = ("Error getting sdc id from ip %s: %s " % (sdc_ip, sdc_id['message']))
LOG.error(msg)
raise exception.NovaException(data=msg)
LOG.info("ScaleIO sdc id is %s" % sdc_id)
return sdc_id
def _get_volume_id(self, server_ip, server_port, server_username, server_password, server_token, volname):
volname_encoded = urllib.quote(volname, '')
volname_double_encoded = urllib.quote(volname_encoded, '')
# volname = volname.replace('/', '%252F')
LOG.info("volume name after double encoding is %s " % volname_double_encoded)
request = "https://" + server_ip + ":" + server_port + "/api/types/Volume/instances/getByName::" + volname_double_encoded
LOG.info("ScaleIO get volume id by name request: %s" % request)
r = requests.get(request, auth=(server_username, server_token), verify=False)
r = self._check_response(r, request, server_ip, server_port, server_username, server_password, server_token)
volume_id = r.json()
if (volume_id == '' or volume_id is None):
msg = ("Volume with name %s wasn't found " % (volname))
LOG.error(msg)
raise exception.NovaException(data=msg)
if (r.status_code != OK_STATUS_CODE and "errorCode" in volume_id):
msg = ("Error getting volume id from name %s: %s " % (volname, volume_id['message']))
LOG.error(msg)
raise exception.NovaException(data=msg)
LOG.info("ScaleIO volume id is %s" % volume_id)
return volume_id
def _check_response(self, response, request, server_ip, server_port, server_username, server_password, server_token):
if (response.status_code == 401 or response.status_code == 403):
LOG.info("Token is invalid, going to re-login and get a new one")
login_request = "https://" + server_ip + ":" + server_port + "/api/login"
r = requests.get(login_request, auth=(server_username, server_password), verify=False)
token = r.json()
#repeat request with valid token
LOG.debug("going to perform request again {0} with valid token".format(request))
res = requests.get(request, auth=(server_username, token), verify=False)
return res
return response
def connect_volume(self, connection_info, disk_info):
"""Connect the volume. Returns xml for libvirt."""
conf = super(LibvirtScaleIOVolumeDriver,
self).connect_volume(connection_info,
disk_info)
LOG.info("scaleIO connect volume in scaleio libvirt volume driver")
data = connection_info
LOG.info("scaleIO connect to stuff "+str(data))
data = connection_info['data']
LOG.info("scaleIO connect to joined "+str(data))
LOG.info("scaleIO Dsk info "+str(disk_info))
volname = connection_info['data']['scaleIO_volname']
#sdc ip here is wrong, probably not retrieved properly in cinder driver. Currently not used.
sdc_ip = connection_info['data']['hostIP']
server_ip = connection_info['data']['serverIP']
server_port = connection_info['data']['serverPort']
server_username = connection_info['data']['serverUsername']
server_password = connection_info['data']['serverPassword']
server_token = connection_info['data']['serverToken']
iops_limit = connection_info['data']['iopsLimit']
bandwidth_limit = connection_info['data']['bandwidthLimit']
LOG.debug("scaleIO Volume name: {0}, SDC IP: {1}, REST Server IP: {2}, REST Server username: {3}, REST Server password: {4}, iops limit: {5}, bandwidth limit: {6}".format(volname, sdc_ip, server_ip, server_username, server_password, iops_limit, bandwidth_limit))
cmd = ['drv_cfg']
cmd += ["--query_guid"]
LOG.info("ScaleIO sdc query guid command: "+str(cmd))
try:
(out, err) = utils.execute(*cmd, run_as_root=True)
LOG.info("map volume %s: stdout=%s stderr=%s" % (cmd, out, err))
except processutils.ProcessExecutionError as e:
msg = ("Error querying sdc guid: %s" % (e.stderr))
LOG.error(msg)
raise exception.NovaException(data=msg)
guid = out
msg = ("Current sdc guid: %s" % (guid))
LOG.info(msg)
# sdc_id = self._get_client_id(server_ip, server_port, server_username, server_password, server_token, sdc_ip)
# params = {'sdcId' : sdc_id}
params = {'guid' : guid, 'allowMultipleMappings' : 'TRUE'}
volume_id = self._get_volume_id(server_ip, server_port, server_username, server_password, server_token, volname)
headers = {'content-type': 'application/json'}
request = "https://" + server_ip + ":" + server_port + "/api/instances/Volume::" + str(volume_id) + "/action/addMappedSdc"
LOG.info("map volume request: %s" % request)
r = requests.post(request, data=json.dumps(params), headers=headers, auth=(server_username, server_token), verify=False)
r = self._check_response(r, request, server_ip, server_port, server_username, server_password, server_token)
# LOG.info("map volume response: %s" % r.text)
if (r.status_code != OK_STATUS_CODE):
response = r.json()
error_code = response['errorCode']
if (error_code == VOLUME_ALREADY_MAPPED_ERROR):
msg = ("Ignoring error mapping volume %s: volume already mapped" % (volname))
LOG.warning(msg)
else:
msg = ("Error mapping volume %s: %s" % (volname, response['message']))
LOG.error(msg)
raise exception.NovaException(data=msg)
# convert id to hex
# val = int(volume_id)
# id_in_hex = hex((val + (1 << 64)) % (1 << 64))
# formated_id = id_in_hex.rstrip("L").lstrip("0x") or "0"
formated_id = volume_id
conf.source_path = self.find_volume_path(formated_id)
conf.source_type = 'block'
# set QoS settings after map was performed
if (iops_limit != None and bandwidth_limit != None):
params = {'sdcId' : sdc_id, 'iopsLimit': iops_limit, 'bandwidthLimitInKbps': bandwidth_limit}
request = "https://" + server_ip + ":" + server_port + "/api/instances/Volume::" + str(volume_id) + "/action/setMappedSdcLimits"
LOG.info("set client limit request: %s" % request)
r = requests.post(request, data=json.dumps(params), headers=headers, auth=(server_username, server_token), verify=False)
r = self._check_response(r, request, server_ip, server_port, server_username, server_password, server_token)
if (r.status_code != OK_STATUS_CODE):
response = r.json()
LOG.info("set client limit response: %s" % response)
msg = ("Error setting client limits for volume %s: %s" % (volname, response['message']))
LOG.error(msg)
return conf
def disconnect_volume(self, connection_info, disk_info):
conf = super(LibvirtScaleIOVolumeDriver,
self).disconnect_volume(connection_info,
disk_info)
LOG.info("scaleIO disconnect volume in scaleio libvirt volume driver")
volname = connection_info['data']['scaleIO_volname']
sdc_ip = connection_info['data']['hostIP']
server_ip = connection_info['data']['serverIP']
server_port = connection_info['data']['serverPort']
server_username = connection_info['data']['serverUsername']
server_password = connection_info['data']['serverPassword']
server_token = connection_info['data']['serverToken']
LOG.debug("scaleIO Volume name: {0}, SDC IP: {1}, REST Server IP: {2}".format(volname, sdc_ip, server_ip))
cmd = ['drv_cfg']
cmd += ["--query_guid"]
LOG.info("ScaleIO sdc query guid command: "+str(cmd))
try:
(out, err) = utils.execute(*cmd, run_as_root=True)
LOG.info("map volume %s: stdout=%s stderr=%s" % (cmd, out, err))
except processutils.ProcessExecutionError as e:
msg = ("Error querying sdc guid: %s" % (e.stderr))
LOG.error(msg)
raise exception.NovaException(data=msg)
guid = out
msg = ("Current sdc guid: %s" % (guid))
LOG.info(msg)
params = {'guid' : guid}
headers = {'content-type': 'application/json'}
volume_id = self._get_volume_id(server_ip, server_port, server_username, server_password, server_token, volname)
request = "https://" + server_ip + ":" + server_port + "/api/instances/Volume::" + str(volume_id) + "/action/removeMappedSdc"
LOG.info("unmap volume request: %s" % request)
r = requests.post(request, data=json.dumps(params), headers=headers, auth=(server_username, server_token), verify=False)
r = self._check_response(r, request, server_ip, server_port, server_username, server_password, server_token)
if (r.status_code != OK_STATUS_CODE):
response = r.json()
error_code = response['errorCode']
if (error_code == VOLUME_NOT_MAPPED_ERROR):
msg = ("Ignoring error unmapping volume %s: volume not mapped" % (volname))
LOG.warning(msg)
else:
msg = ("Error unmapping volume %s: %s" % (volname, response['message']))
LOG.error(msg)
raise exception.NovaException(data=msg)

deployment_scripts/puppet/install_scaleio_compute/manifests/init.pp Normal file

@ -0,0 +1,50 @@
# ScaleIO Puppet Manifest for Compute Nodes
class install_scaleio_compute
{
$nova_service = 'openstack-nova-compute'
$mdm_ip_1 = $plugin_settings['scaleio_mdm1']
$mdm_ip_2 = $plugin_settings['scaleio_mdm2']
$scaleio_repo=$plugin_settings['scaleio_repo']
#install ScaleIO SDC package
exec { "install_sdc":
command => "/bin/bash -c \"MDM_IP=$mdm_ip_1,$mdm_ip_2 yum install -y EMC-ScaleIO-sdc\"",
path => "/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin",
}
#Configure nova-compute
ini_subsetting { 'nova-volume_driver':
ensure => present,
path => '/etc/nova/nova.conf',
subsetting_separator => ',',
section => 'libvirt',
setting => 'volume_drivers',
subsetting => 'scaleio=nova.virt.libvirt.scaleiolibvirtdriver.LibvirtScaleIOVolumeDriver',
notify => Service[$nova_service],
}
file { 'scaleiolibvirtdriver.py':
path => '/usr/lib/python2.6/site-packages/nova/virt/libvirt/scaleiolibvirtdriver.py',
source => 'puppet:///modules/install_scaleio_compute/scaleiolibvirtdriver.py',
mode => '644',
owner => 'root',
group => 'root',
notify => Service[$nova_service],
}
file { 'scaleio.filters':
path => '/usr/share/nova/rootwrap/scaleio.filters',
source => 'puppet:///modules/install_scaleio_compute/scaleio.filters',
mode => '644',
owner => 'root',
group => 'root',
notify => Service[$nova_service],
}
service { $nova_service:
ensure => 'running',
}
}

deployment_scripts/puppet/install_scaleio_controller/files/scaleio.filters Normal file

@ -0,0 +1,3 @@
[Filters]
drv_cfg: CommandFilter, /opt/emc/scaleio/sdc/bin/drv_cfg, root

deployment_scripts/puppet/install_scaleio_controller/files/scaleio.py (file diff suppressed because it is too large)

deployment_scripts/puppet/install_scaleio_controller/manifests/init.pp Normal file

@ -0,0 +1,92 @@
class install_scaleio_controller
{
$services = ['openstack-cinder-volume', 'openstack-cinder-api', 'openstack-cinder-scheduler', 'openstack-nova-scheduler']
$gw_ip = $plugin_settings['scaleio_GW']
$mdm_ip_1 = $plugin_settings['scaleio_mdm1']
$mdm_ip_2 = $plugin_settings['scaleio_mdm2']
$admin = $plugin_settings['scaleio_Admin']
$password = $plugin_settings['scaleio_Password']
$scaleio_repo=$plugin_settings['scaleio_repo']
#1. Install SDC package
exec { "install_sdc1":
command => "/bin/bash -c \"MDM_IP=$mdm_ip_1,$mdm_ip_2 yum install -y EMC-ScaleIO-sdc\"",
path => "/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin",
} ->
#2. Copy ScaleIO Files
file { 'scaleio.py':
path => '/usr/lib/python2.6/site-packages/cinder/volume/drivers/emc/scaleio.py',
source => 'puppet:///modules/install_scaleio_controller/scaleio.py',
mode => '644',
owner => 'root',
group => 'root',
} ->
file { 'scaleio.filters':
path => '/usr/share/cinder/rootwrap/scaleio.filters',
source => 'puppet:///modules/install_scaleio_controller/scaleio.filters',
mode => '644',
owner => 'root',
group => 'root',
before => File['cinder_scaleio.config'],
}
# 3. Create config for ScaleIO
$cinder_scaleio_config = "[scaleio]
rest_server_ip=$gw_ip
rest_server_username=$admin
rest_server_password=$password
protection_domain_name=${plugin_settings['protection_domain']}
storage_pools=${plugin_settings['protection_domain']}:${plugin_settings['storage_pool_1']}
storage_pool_name=${plugin_settings['storage_pool_1']}
round_volume_capacity=True
force_delete=True
verify_server_certificate=False
"
file { 'cinder_scaleio.config':
ensure => present,
path => '/etc/cinder/cinder_scaleio.config',
content => $cinder_scaleio_config,
mode => 0644,
owner => root,
group => root,
before => Ini_setting['cinder_conf_enabled_backeds'],
} ->
# 4. To /etc/cinder/cinder.conf add
ini_setting { 'cinder_conf_enabled_backeds':
ensure => present,
path => '/etc/cinder/cinder.conf',
section => 'DEFAULT',
setting => 'enabled_backends',
value => 'ScaleIO',
before => Ini_setting['cinder_conf_volume_driver'],
} ->
ini_setting { 'cinder_conf_volume_driver':
ensure => present,
path => '/etc/cinder/cinder.conf',
section => 'ScaleIO',
setting => 'volume_driver',
value => 'cinder.volume.drivers.emc.scaleio.ScaleIODriver',
before => Ini_setting['cinder_conf_scio_config'],
} ->
ini_setting { 'cinder_conf_scio_config':
ensure => present,
path => '/etc/cinder/cinder.conf',
section => 'ScaleIO',
setting => 'cinder_scaleio_config_file',
value => '/etc/cinder/cinder_scaleio.config',
before => Ini_setting['cinder_conf_volume_backend_name'],
} ->
ini_setting { 'cinder_conf_volume_backend_name':
ensure => present,
path => '/etc/cinder/cinder.conf',
section => 'ScaleIO',
setting => 'volume_backend_name',
value => 'ScaleIO',
}~>
service { $services:
ensure => running,
}
}

deployment_scripts/puppet/install_scaleio_controller/templates/CentOS-Base.repo Normal file

@ -0,0 +1,53 @@
# CentOS-Base.repo
#
# The mirror system uses the connecting IP address of the client and the
# update status of each mirror to pick mirrors that are updated to and
# geographically close to the client. You should use this for CentOS updates
# unless you are manually picking other mirrors.
#
# If the mirrorlist= does not work for you, as a fall back you can try the
# remarked out baseurl= line instead.
#
#
[base]
name=CentOS-$releasever - Base
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
#baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
#released updates
[updates]
name=CentOS-$releasever - Updates
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates
#baseurl=http://mirror.centos.org/centos/$releasever/updates/$basearch/
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras
#baseurl=http://mirror.centos.org/centos/$releasever/extras/$basearch/
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=centosplus
#baseurl=http://mirror.centos.org/centos/$releasever/centosplus/$basearch/
gpgcheck=0
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
#contrib - packages by Centos Users
[contrib]
name=CentOS-$releasever - Contrib
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=contrib
#baseurl=http://mirror.centos.org/centos/$releasever/contrib/$basearch/
gpgcheck=0
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6

deployment_scripts/puppet/install_scaleio_controller/templates/cinder_scaleio.config.erb Normal file

@ -0,0 +1,10 @@
[scaleio]
rest_server_ip=10.225.25.200
rest_server_username=admin
rest_server_password=Scaleio123
protection_domain_name=default
#storage_pools=use-ash1-pd1:use-ash1-sp-tier1,use-ash1-pd1:use-ash1-sp-tier2
storage_pool_name=default
round_volume_capacity=True
force_delete=True
verify_server_certificate=False

deployment_scripts/puppet/install_scaleio_controller/templates/epel.repo Normal file

@ -0,0 +1,26 @@
[epel]
name=Extra Packages for Enterprise Linux 6 - $basearch
#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
[epel-debuginfo]
name=Extra Packages for Enterprise Linux 6 - $basearch - Debug
#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch/debug
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-6&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
gpgcheck=0
[epel-source]
name=Extra Packages for Enterprise Linux 6 - $basearch - Source
#baseurl=http://download.fedoraproject.org/pub/epel/6/SRPMS
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-source-6&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
gpgcheck=0

deployment_scripts/puppet/install_scaleio_controller/templates/gatewayUser.properties.erb Normal file

@ -0,0 +1,16 @@
mdm.ip.addresses=<%= @mdm_ip_1 %>,<%= @mdm_ip_2 %>
system.id=
mdm.port=6611
gateway-admin.password=Scaleio123
vmware-mode=false
im.parallelism=100
do.not.update.user.properties.with.values.before.upgrade=false
features.enable_gateway_and_IM=true
features.enable_snmp=false
snmp.mdm.username=
snmp.mdm.password=
snmp.sampling_frequency=30
snmp.traps_receiver_ip=
snmp.port=162
im.ip.ignore.list=
lia.password=Scaleio123

deployment_scripts/puppet/remove_scaleio_repo/manifests/init.pp Normal file

@ -0,0 +1,16 @@
# Class: remove_scaleio_repo
#
#
class remove_scaleio_repo {
# resources
$files = ['/etc/yum.repos.d/epel.repo','/etc/yum.repos.d/CentOS-Base.repo']
define remove_repo {
file { $name:
ensure => absent,
}
}
remove_repo { $files: }
}

deployment_scripts/remove_scaleio_repo.pp Normal file

@ -0,0 +1,2 @@
$fuel_settings = parseyaml($astute_settings_yaml)
class {'remove_scaleio_repo': }

Binary file not shown.

0
doc/content/appendix.rst Normal file

(Binary image files not shown; sizes range from roughly 7 KiB to 150 KiB.)

49
environment_config.yaml Normal file

@ -0,0 +1,49 @@
attributes:
scaleio_Admin:
value: ''
label: 'UserName'
description: 'ScaleIO Admin User Name'
weight: 5
type: "text"
scaleio_Password:
value: ''
label: 'Password'
description: 'ScaleIO Admin Password'
weight: 10
type: "text"
scaleio_GW:
value: ''
label: 'ScaleIO GW IP'
description: 'ScaleIO Gateway IP'
weight: 15
type: "text"
scaleio_mdm1:
value: ''
label: 'ScaleIO Primary IP'
description: 'ScaleIO Primary MDM IP'
weight: 16
type: "text"
scaleio_mdm2:
value: ''
label: 'ScaleIO Secondary IP'
description: 'ScaleIO Secondary MDM IP'
weight: 17
type: "text"
protection_domain:
value: ''
label: 'ScaleIO protection domain'
description: 'Protection domain for ScaleIO'
weight: 35
type: "text"
storage_pool_1:
value: ''
label: 'ScaleIO storage pool 1'
description: 'First storage pool for ScaleIO'
weight: 45
type: "text"
fault_sets:
value: ''
label: 'Fault sets list'
description: 'Comma separated list of fault sets'
weight: 60
type: "text"

28
metadata.yaml Normal file

@ -0,0 +1,28 @@
# Plugin name
name: scaleio-cinder
# Human-readable name for your plugin
title: Fuel plugin for ScaleIO
# Plugin version
version: '1.0.0'
# Description
description: Enable ScaleIO as a storage backend
# Required fuel version
fuel_version: ['6.1']
# Specify license of your plugin
licenses: ['Apache License Version 2.0']
# Specify author or company name
authors: ['EMC Code']
# A link to the plugin's page
homepage: 'https://github.com/stackforge/fuel-scaleio-cinder'
# Specify a group which your plugin implements, possible options:
# network, storage, storage::cinder, storage::glance, hypervisor
groups: []
# The plugin is compatible with releases in the list
releases:
- os: centos
version: 2014.2.2-6.1
mode: ['ha', 'multinode']
deployment_scripts_path: deployment_scripts/
repository_path: repositories/centos
# Version of plugin package
package_version: '2.0.0'

5
pre_build_hook Normal file

@ -0,0 +1,5 @@
#!/bin/bash
# Add here any actions which are required before plugin build,
# like packages building, packages downloading from mirrors and so on.
# The script should return 0 if there were no errors.


Binary file not shown.

repositories/centos/repodata/repomd.xml Normal file

@ -0,0 +1,55 @@
<?xml version="1.0" encoding="UTF-8"?>
<repomd xmlns="http://linux.duke.edu/metadata/repo" xmlns:rpm="http://linux.duke.edu/metadata/rpm">
<revision>1441268891</revision>
<data type="filelists">
<checksum type="sha256">401dc19bda88c82c403423fb835844d64345f7e95f5b9835888189c03834cc93</checksum>
<open-checksum type="sha256">bf9808b81cb2dbc54b4b8e35adc584ddcaa73bd81f7088d73bf7dbbada961310</open-checksum>
<location href="repodata/401dc19bda88c82c403423fb835844d64345f7e95f5b9835888189c03834cc93-filelists.xml.gz"/>
<timestamp>1441268891</timestamp>
<size>123</size>
<open-size>125</open-size>
</data>
<data type="primary">
<checksum type="sha256">dabe2ce5481d23de1f4f52bdcfee0f9af98316c9e0de2ce8123adeefa0dd08b9</checksum>
<open-checksum type="sha256">e1e2ffd2fb1ee76f87b70750d00ca5677a252b397ab6c2389137a0c33e7b359f</open-checksum>
<location href="repodata/dabe2ce5481d23de1f4f52bdcfee0f9af98316c9e0de2ce8123adeefa0dd08b9-primary.xml.gz"/>
<timestamp>1441268891</timestamp>
<size>134</size>
<open-size>167</open-size>
</data>
<data type="primary_db">
<checksum type="sha256">ad36b2b9cd3689c29dcf84226b0b4db80633c57d91f50997558ce7121056e331</checksum>
<open-checksum type="sha256">960e2acb75b3414dd377efbe0277342d8a911139e8100357c83177a9351ddd6f</open-checksum>
<location href="repodata/ad36b2b9cd3689c29dcf84226b0b4db80633c57d91f50997558ce7121056e331-primary.sqlite.bz2"/>
<timestamp>1441268891</timestamp>
<database_version>10</database_version>
<size>1130</size>
<open-size>21504</open-size>
</data>
<data type="other_db">
<checksum type="sha256">d5630fb9d7f956c42ff3962f2e6e64824e5df7edff9e08adf423d4c353505d69</checksum>
<open-checksum type="sha256">257af9e36ea0f10e4fc9e6053bf7f4cd9f0919b8857e93ec36b11b4ae8103440</open-checksum>
<location href="repodata/d5630fb9d7f956c42ff3962f2e6e64824e5df7edff9e08adf423d4c353505d69-other.sqlite.bz2"/>
<timestamp>1441268891</timestamp>
<database_version>10</database_version>
<size>570</size>
<open-size>6144</open-size>
</data>
<data type="other">
<checksum type="sha256">6bf9672d0862e8ef8b8ff05a2fd0208a922b1f5978e6589d87944c88259cb670</checksum>
<open-checksum type="sha256">e0ed5e0054194df036cf09c1a911e15bf2a4e7f26f2a788b6f47d53e80717ccc</open-checksum>
<location href="repodata/6bf9672d0862e8ef8b8ff05a2fd0208a922b1f5978e6589d87944c88259cb670-other.xml.gz"/>
<timestamp>1441268891</timestamp>
<size>123</size>
<open-size>121</open-size>
</data>
<data type="filelists_db">
<checksum type="sha256">2daa2f7a904d6ae04d81abc07d2ecb3bc3d8244a1e78afced2c94994f1b5f3ee</checksum>
<open-checksum type="sha256">de1e4e1a56e70198865fdff487472070da92d535f5419bd25ff700caf5ceeb92</open-checksum>
<location href="repodata/2daa2f7a904d6ae04d81abc07d2ecb3bc3d8244a1e78afced2c94994f1b5f3ee-filelists.sqlite.bz2"/>
<timestamp>1441268891</timestamp>
<database_version>10</database_version>
<size>591</size>
<open-size>7168</open-size>
</data>
</repomd>


Binary file not shown.


36
tasks.yaml Normal file

@ -0,0 +1,36 @@
# These tasks will be applied on controller nodes,
# here you can also specify several roles, for example
# ['cinder', 'compute'] will be applied only on
# cinder and compute nodes
#Install ScaleIO cluster
- role: ['compute']
stage: post_deployment/2010
type: puppet
parameters:
puppet_manifest: install_scaleio_compute.pp
puppet_modules: puppet/:/etc/puppet/modules
timeout: 600
#Install ScaleIO on controller
- role: ['controller']
stage: post_deployment/2110
type: puppet
parameters:
puppet_manifest: install_scaleio_controller.pp
puppet_modules: puppet/:/etc/puppet/modules
timeout: 600
#Install ScaleIO on controller
- role: ['primary-controller']
stage: post_deployment/2120
type: puppet
parameters:
puppet_manifest: install_scaleio_controller.pp
puppet_modules: puppet/:/etc/puppet/modules
timeout: 600
#Remove ScaleIO repo from all servers
- role: '*'
stage: post_deployment
type: puppet
parameters:
puppet_manifest: remove_scaleio_repo.pp
puppet_modules: puppet/:/etc/puppet/modules
timeout: 600