Retire repository

All Fuel repositories in the openstack namespace have already been
retired; retire the remaining Fuel repositories in the x namespace as
well, since they are unused now.

This change removes all content from the repository and adds the usual
README file to note that the repository is retired, following the
process from
https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project

See also
http://lists.openstack.org/pipermail/openstack-discuss/2019-December/011675.html

A related change is https://review.opendev.org/699752.
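The mechanical steps follow the linked infra manual; a minimal sketch of
what such a retirement change looks like locally (assuming a checkout of
the repository and the git-review tool) is:

```
$ git checkout master
$ git rm -r *            # remove all repository content
$ # write the standard retirement notice, then:
$ git add README.rst
$ git commit             # the commit message carries the Change-Id footer
$ git review             # submit the change to Gerrit for approval
```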

Change-Id: I4d787ba03e9168bc593365675e3124c650546b33
Andreas Jaeger 2019-12-18 19:45:22 +01:00
parent 9cb2bc17ac
commit fb15aca1b0
76 changed files with 10 additions and 3921 deletions

.gitignore

@@ -1,23 +0,0 @@
# Mac
.DS_Store
# Vagrant
.vagrant/
# Fuel Plugin Builder
.build/
*.rpm
# Python virtualenv
.venv/
# Sphinx
_build/
# PDF
*.pdf
.project
.doctrees
.buildpath
.pydevproject

CONTRIBUTING.md

@@ -1,53 +0,0 @@
# Contributions
The Fuel plugin for ScaleIO project is licensed under the [Apache 2.0](http://www.apache.org/licenses/LICENSE-2.0) License. In order to contribute to the project you will need to do two things:
1. License your contribution under the [DCO](http://elinux.org/Developer_Certificate_Of_Origin "Developer Certificate of Origin") + [Apache 2.0](http://www.apache.org/licenses/LICENSE-2.0)
2. Identify the type of contribution in the commit message
### 1. Licensing your Contribution:
As part of the contribution, the code comments (or license file) associated with the contribution must include the following:
Copyright (c) 2015, EMC Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
This code is provided under the Developer Certificate of Origin - [Insert Name], [Date (e.g., 1/1/15)]
**For example:**
A contribution from **Joe Developer**, an **independent developer**, submitted on **May 15th, 2015** should have an associated license (as a file and/or code comments) like this:
Copyright (c) 2015, Joe Developer
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
This code is provided under the Developer Certificate of Origin - Joe Developer, May 15th 2015
### 2. Identifying the Type of Contribution
In addition to identifying an open source license in the documentation, **all Git Commit messages** associated with a contribution must identify the type of contribution (i.e., Bug Fix, Patch, Script, Enhancement, Tool Creation, or Other).

LICENSE

@@ -1,201 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright (c) 2015, EMC Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

README.md

@@ -1,180 +0,0 @@
# ScaleIO Plugin for Fuel
## Overview
The `ScaleIO` plugin allows you to:
* Deploy an EMC ScaleIO v2.0 cluster together with OpenStack and configure OpenStack to use ScaleIO
as the storage for persistent and ephemeral volumes
* Configure OpenStack to use an existing ScaleIO cluster as a volume backend
* Support the following ScaleIO cluster modes: 1_node, 3_node and 5_node;
the mode is chosen automatically depending on the number of controller nodes
## Requirements
| Requirement | Version/Comment |
|----------------------------------|-----------------|
| Mirantis OpenStack | 8.0 |
| Mirantis OpenStack | 9.0 |
## Recommendations
1. Use a configuration with 3 or 5 controllers.
Although the 1-controller mode is supported, it is suitable for testing purposes only.
2. Assign the Cinder role to all controllers, allocating minimal disk space for this role.
Some space is needed because of a Fuel framework limitation (this space will not be used).
Keep the rest of the space for images.
3. Use nodes with a similar hardware configuration within one group of roles.
4. Deploy SDS components only on compute nodes.
Deploying SDSes on controllers is supported, but it is more suitable for testing than for a production environment.
5. On compute nodes, keep minimal space for virtual storage on the first disk and use the remaining disks for ScaleIO.
Some space is needed because of Fuel framework limitations.
The other disks should be left unallocated and can be used for ScaleIO.
6. When extending the cluster with new compute nodes, do not forget to run the update_hosts task on the controller nodes via the Fuel CLI (see the example below).
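For example, a hedged sketch of step 6 (the exact granular-deployment syntax and the node IDs depend on the Fuel release and on your environment):
```
[root@fuel-master ~]# fuel node --node-id 1,2,3 --tasks update_hosts
```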
## Limitations
1. The plugin supports Ubuntu environments only.
2. Multiple storage backends are not supported.
3. It is not possible to use different backends for persistent and ephemeral volumes.
4. Disks for SDSes should be unallocated before deployment via the Fuel UI or CLI.
5. MDMs and Gateways are deployed together and only onto controller nodes.
6. Adding and removing node(s) to/from the OpenStack cluster won't re-configure ScaleIO.
# Installation Guide
## ScaleIO Plugin install from source code
To install the ScaleIO Plugin from source code, you first need to prepare an environment to build the RPM file of the plugin. The recommended approach is to build the RPM file directly on the Fuel Master node so that you won't have to copy the file later.
Prepare an environment for building the plugin on the **Fuel Master node**.
0. You might want to make sure that the kernel on the nodes targeted for ScaleIO SDC installation (compute and cinder nodes) is suitable for the drivers present here: ``` ftp://QNzgdxXix:Aw3wFAwAq3@ftp.emc.com/ ```. Look for something like ``` Ubuntu/2.0.5014.0/4.2.0-30-generic ```. The local kernel version can be found with the ``` uname -a ``` command.
1. Install the standard Linux development tools:
```
$ yum install createrepo rpm rpm-build dpkg-devel dpkg-dev git
```
2. Install the Fuel Plugin Builder. To do that, you should first get pip:
```
$ easy_install pip
```
3. Then install the Fuel Plugin Builder (the `fpb` command line) with `pip`:
```
$ pip install fuel-plugin-builder
```
*Note: You may also have to build the Fuel Plugin Builder if the package version of the
plugin is higher than the package version supported by the Fuel Plugin Builder you get from `pypi`.
In this case, please refer to the section "Preparing an environment for plugin development"
of the [Fuel Plugins wiki](https://wiki.openstack.org/wiki/Fuel/Plugins) if you
need further instructions about how to build the Fuel Plugin Builder.*
4. Clone the ScaleIO Plugin git repository:
```
$ git clone https://github.com/openstack/fuel-plugin-scaleio.git
$ cd fuel-plugin-scaleio
$ git checkout "tags/v2.1.3"
```
5. Check that the plugin is valid:
```
$ fpb --check .
```
6. Build the plugin:
```
$ fpb --build .
```
7. Install the plugin:
```
$ fuel plugins --install ./scaleio-2.1-2.1.3-1.noarch.rpm
```
## ScaleIO Plugin install from Fuel Plugins Catalog
To install the ScaleIOv2.0 Fuel plugin:
1. Download it from the [Fuel Plugins Catalog](https://www.mirantis.com/products/openstack-drivers-and-plugins/fuel-plugins/)
2. Copy the rpm file to the Fuel Master node
```
[root@home ~]# scp scaleio-2.1-2.1.3-1.noarch.rpm root@fuel-master:/tmp
```
3. Log into the Fuel Master node and install the plugin using the Fuel CLI
```
$ fuel plugins --install ./scaleio-2.1-2.1.3-1.noarch.rpm
```
4. Verify that the plugin is installed correctly
```
[root@fuel-master ~]# fuel plugins
id | name | version | package_version
---|-----------------------|---------|----------------
1 | scaleio | 2.1.3 | 3.0.0
```
# User Guide
Please read the [ScaleIO Plugin User Guide](doc/source/builddir/ScaleIO-Plugin_Guide.pdf) for full description.
First of all, the ScaleIOv2.0 plugin functionality should be enabled by switching on ScaleIO in the Settings.
The ScaleIO section contains the following fields to fill in:
1. Existing ScaleIO Cluster.
Set "Use existing ScaleIO" checkbox.
The following parameters should be specified:
* Gateway IP address - IP address of ScaleIO gateway
* Gateway port - Port of ScaleIO gateway
* Gateway user - User to access ScaleIO gateway
* Admin password - Password to access ScaleIO gateway
* Protection domain - The protection domain to use
* Storage pools - Comma-separated list of storage pools
2. New ScaleIO deployment
The following parameters should be specified:
* Admin password - Administrator password to set for ScaleIO MDM
* Protection domain - The protection domain to create for ScaleIO cluster
* Storage pools - Comma-separated list of storage pools to create for ScaleIO cluster
* Storage devices - Paths to storage devices, comma-separated (e.g. /dev/sdb,/dev/sdd)
The following parameters are optional and have default values suitable for most cases:
* Controller as Storage - Use controller nodes for ScaleIO SDS (by default only compute nodes are used for ScaleIO SDS deployment)
* Provisioning type - Thin/Thick provisioning for ephemeral and persistent volumes
* Checksum mode - Checksum protection. ScaleIO protects data in-flight by calculating and validating the checksum value for the payload at both ends.
Note, the checksum feature may have a minor effect on performance. ScaleIO utilizes hardware capabilities for this feature, where possible.
* Spare policy - % out of total space to be reserved for rebalance and redundancy recovery cases.
* Enable Zero Padding for Storage Pools - New volumes will be zeroed if the option is enabled.
* Background device scanner - This option enables the background device scanner on the devices in device-only mode.
* XtremCache devices - List of SDS devices for SSD caching. Caching is disabled if the list is empty.
* XtremCache storage pools - List of storage pools which should be cached with XtremCache.
* Capacity high priority alert - Threshold of the non-spare capacity of the Storage Pool that will trigger a high-priority alert, in percentage format.
* Capacity critical priority alert - Threshold of the non-spare capacity of the Storage Pool that will trigger a critical-priority alert, in percentage format.
* Use RAM cache (RMCache) - Enable/Disable use of SDS Servers RAM for caching storage devices in a Storage Pool.
* Passthrough RMCache storage pools - List of Storage Pools to be cached in SDS Servers RAM in passthrough mode (if the 'Use RAM cache (RMCache)' option is enabled)
* Cached RMCache storage pools - List of Storage Pools to be cached in SDS Servers RAM in cached mode (if the 'Use RAM cache (RMCache)' option is enabled)
* Glance images on ScaleIO - Enable/Disable ScaleIO backend for Glance images (It uses cinder backend in Glance to store images on ScaleIO).
This option is available since MOS9.0.
Configuration of disks for allocated nodes:
The devices listed in the "Storage devices" and "XtremCache devices" should be left unallocated for ScaleIO SDS to work.
# Contributions
Please read the [CONTRIBUTING.md](CONTRIBUTING.md) document for the latest information about contributions.
# Bugs, requests, questions
Please use the [Launchpad project site](https://launchpad.net/fuel-plugin-scaleio) to report bugs, request features, ask questions, etc.
# License
Please read the [LICENSE](LICENSE) document for the latest licensing information.

README.rst

@@ -0,0 +1,10 @@
This project is no longer maintained.
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".
For any further questions, please email
openstack-discuss@lists.openstack.org or join #openstack-dev on
Freenode.

Vagrantfile

@@ -1,31 +0,0 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
$script = <<SCRIPT
sudo yum update -y
sudo yum install python-devel git createrepo rpm rpm-build dpkg-devel python-pip -y
git clone https://github.com/openstack/fuel-plugins.git /tmp/fpb
cd /tmp/fpb && sudo python setup.py develop
SCRIPT
Vagrant.configure(2) do |config|
config.vm.box = "puphpet/centos65-x64"
config.vm.hostname = "plugin-builder"
config.ssh.pty = true
config.vm.synced_folder ".", "/home/vagrant/src"
config.vm.provider "virtualbox" do |vb|
vb.memory = "1024"
end
config.vm.provider "vmware_appcatalyst" do |v|
v.memory = "1024"
end
config.vm.provision "shell", inline: $script
end
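Assuming Vagrant with one of the configured providers is installed, a typical workflow with this Vagrantfile might be:
```
$ vagrant up     # provision the CentOS VM and install the Fuel Plugin Builder
$ vagrant ssh    # the plugin sources are synced to /home/vagrant/src
$ cd src && fpb --build .
```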

@@ -1,38 +0,0 @@
# The puppet configures OpenStack cinder to use ScaleIO.
notice('MODULAR: scaleio: cinder')
$scaleio = hiera('scaleio')
if $scaleio['metadata']['enabled'] {
$all_nodes = hiera('nodes')
$nodes = filter_nodes($all_nodes, 'name', $::hostname)
if empty(filter_nodes($nodes, 'role', 'cinder')) {
fail("Cinder Role is not found on the host ${::hostname}")
}
if $scaleio['provisioning_type'] and $scaleio['provisioning_type'] != '' {
$provisioning_type = $scaleio['provisioning_type']
} else {
$provisioning_type = undef
}
$gateway_ip = $scaleio['existing_cluster'] ? {
true => $scaleio['gateway_ip'],
default => hiera('management_vip')
}
$password = $scaleio['password']
if $scaleio['existing_cluster'] {
$client_password = $password
} else {
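# Derive a deterministic client password from the admin password: take the
# base64 of its SHA-512 crypt hash and embed two 8-character slices of it
# into a 'Sio-XXXXXXXX-XXXXXXXX' string (see the inline_template below).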
$client_password_str = base64('encode', pw_hash($password, 'SHA-512', 'scaleio.client.access'))
$client_password = inline_template('Sio-<%= @client_password_str[33..40] %>-<%= @client_password_str[41..48] %>')
}
class {'::scaleio_openstack::cinder':
ensure => present,
gateway_user => $::gateway_user,
gateway_password => $client_password,
gateway_ip => $gateway_ip,
gateway_port => $::gateway_port,
protection_domains => $scaleio['protection_domain'],
storage_pools => $::storage_pools,
provisioning_type => $provisioning_type,
}
}

@@ -1,450 +0,0 @@
# The puppet configures the ScaleIO cluster - adds MDMs and SDSes, sets up
# protection domains and storage pools.
# Helpers for array processing
define mdm_standby() {
$ip = $title
notify {"Configure Standby MDM ${ip}": } ->
scaleio::mdm {"Standby MDM ${ip}":
ensure => 'present',
ensure_properties => 'present',
sio_name => $ip,
role => 'manager',
ips => $ip,
management_ips => $ip,
}
}
define mdm_tb() {
$ip = $title
notify {"Configure Tie-Breaker MDM ${ip}": } ->
scaleio::mdm {"Tie-Breaker MDM ${ip}":
ensure => 'present',
ensure_properties => 'present',
sio_name => $ip,
role => 'tb',
ips => $ip,
management_ips => undef,
}
}
define storage_pool_ensure(
$zero_padding,
$scanner_mode,
$checksum_mode,
$spare_policy,
$rfcache_storage_pools_array,
$rmcache_passthrough_pools,
$rmcache_cached_pools,
) {
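# The resource title is expected in the form '<protection_domain>:<storage_pool>';
# protection_domain_ensure below builds such titles by prefixing pool names.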
$parsed_pool_name = split($title, ':')
$protection_domain = $parsed_pool_name[0]
$sp_name = $parsed_pool_name[1]
if $::scaleio_storage_pools and $::scaleio_storage_pools != '' {
$current_pools = split($::scaleio_storage_pools, ',')
} else {
$current_pools = []
}
if ! ("${protection_domain}:${sp_name}" in $current_pools) {
if $sp_name in $rmcache_passthrough_pools or $sp_name in $rmcache_cached_pools {
$rmcache_usage = 'use'
if $sp_name in $rmcache_passthrough_pools {
$rmcache_write_handling_mode = 'passthrough'
} else {
$rmcache_write_handling_mode = 'cached'
}
} else {
$rmcache_usage = 'dont_use'
$rmcache_write_handling_mode = undef
}
if $sp_name in $rfcache_storage_pools_array {
$rfcache_usage = 'use'
} else {
$rfcache_usage = 'dont_use'
}
notify {"storage_pool_ensure ${protection_domain}:${sp_name}: zero_padding=${zero_padding}, checksum=${checksum_mode}, scanner=${scanner_mode}, spare=${spare_policy}, rfcache=${rfcache_usage}":
} ->
scaleio::storage_pool {"Storage Pool ${protection_domain}:${sp_name}":
sio_name => $sp_name,
protection_domain => $protection_domain,
zero_padding_policy => $zero_padding,
checksum_mode => $checksum_mode,
scanner_mode => $scanner_mode,
spare_percentage => $spare_policy,
rfcache_usage => $rfcache_usage,
rmcache_usage => $rmcache_usage,
rmcache_write_handling_mode => $rmcache_write_handling_mode,
}
} else {
notify {"Skip storage pool ${sp_name} because it is already exists in ${::scaleio_storage_pools}": }
}
}
define protection_domain_ensure(
$pools_array,
$zero_padding,
$scanner_mode,
$checksum_mode,
$spare_policy,
$rfcache_storage_pools_array,
$rmcache_passthrough_pools,
$rmcache_cached_pools,
) {
$protection_domain = $title
$full_name_pools_array = prefix($pools_array, "${protection_domain}:")
scaleio::protection_domain {"Ensure protection domain ${protection_domain}":
sio_name => $protection_domain,
} ->
storage_pool_ensure {$full_name_pools_array:
zero_padding => $zero_padding,
scanner_mode => $scanner_mode,
checksum_mode => $checksum_mode,
spare_policy => $spare_policy,
rfcache_storage_pools_array => $rfcache_storage_pools_array,
rmcache_passthrough_pools => $rmcache_passthrough_pools,
rmcache_cached_pools => $rmcache_cached_pools,
}
}
define sds_ensure(
$sds_nodes,
$sds_to_pd_map, # map of SDSes to Protection domains
$storage_pools, # if sds_devices_config==undef then storage_pools and device_paths are used;
$device_paths, # this is Fuel without plugin roles support, so all SDSes have the same config
$rfcache_devices,
$sds_devices_config, # for Fuel with plugin roles support, the config can differ between SDSes
) {
$sds_name = $title
$protection_domain = $sds_to_pd_map[$sds_name]
$sds_node_ = filter_nodes($sds_nodes, 'name', $sds_name)
$sds_node = $sds_node_[0]
#ips for data path traffic
$storage_ips = $sds_node['storage_address']
$storage_ip_roles = 'sdc_only'
#ips for communication with MDM and SDS<=>SDS
$mgmt_ips = $sds_node['internal_address']
$mgmt_ip_roles = 'sds_only'
if count(split($storage_ips, ',')) != 1 or count(split($mgmt_ips, ',')) != 1 {
fail("TODO: behaviour changed: address becomes comma-separated list ${storage_ips} or ${mgmt_ips}, so it is needed to add the generation of ip roles")
}
if $mgmt_ips == $storage_ips {
$sds_ips = $storage_ips
$sds_ip_roles = 'all'
}
else {
$sds_ips = "${storage_ips},${mgmt_ips}"
$sds_ip_roles = "${storage_ip_roles},${mgmt_ip_roles}"
}
if $sds_devices_config {
$cfg = $sds_devices_config[$sds_name]
if $cfg {
notify{"sds ${sds_name} config: ${cfg}": }
$pool_devices = $cfg ? { false => undef, default => convert_sds_config($cfg) }
if $pool_devices {
$sds_pools = $pool_devices[0]
$sds_device = $pool_devices[1]
} else {
warn("sds ${sds_name} there is empty pools and devices in configuration")
$sds_pools = undef
$sds_device = undef
}
if $cfg['rfcache_devices'] and $cfg['rfcache_devices'] != '' {
$sds_rfcache_devices = $cfg['rfcache_devices']
} else {
$sds_rfcache_devices = undef
}
} else {
warn("sds ${sds_name} there is no sds config in DB")
$sds_pools = undef
$sds_device = undef
$sds_rfcache_devices = undef
}
} else {
$sds_pools = $storage_pools
$sds_device = $device_paths
$sds_rfcache_devices = $rfcache_devices
}
notify { "sds ${sds_name}: pools:devices:rfcache: '${sds_pools}': '${sds_device}': '${sds_rfcache_devices}'": } ->
scaleio::sds {$sds_name:
ensure => 'present',
sio_name => $sds_name,
protection_domain => $protection_domain,
ips => $sds_ips,
ip_roles => $sds_ip_roles,
storage_pools => $sds_pools,
device_paths => $sds_device,
rfcache_devices => $sds_rfcache_devices,
}
}
define cleanup_sdc () {
$sdc_ip = $title
scaleio::sdc {"Remove SDC ${sdc_ip}":
ensure => 'absent',
ip => $sdc_ip,
}
}
define cleanup_sds () {
$sds_name = $title
scaleio::sds {"Remove SDS ${sds_name}":
ensure => 'absent',
sio_name => $sds_name,
}
}
notice('MODULAR: scaleio: cluster')
$scaleio = hiera('scaleio')
if $scaleio['metadata']['enabled'] {
if ! $scaleio['existing_cluster'] {
if $::managers_ips {
$all_nodes = hiera('nodes')
# primary controller configures cluster
if ! empty(filter_nodes(filter_nodes($all_nodes, 'name', $::hostname), 'role', 'primary-controller')) {
$use_plugin_roles = $scaleio['enable_sds_role']
if ! $use_plugin_roles {
if $scaleio['hyper_converged_deployment'] {
$storage_nodes = filter_nodes($all_nodes, 'role', 'compute')
if $scaleio['sds_on_controller'] {
$controller_nodes = filter_nodes($all_nodes, 'role', 'controller')
$pr_controller_nodes = filter_nodes($all_nodes, 'role', 'primary-controller')
$sds_nodes = concat(concat($pr_controller_nodes, $controller_nodes), $storage_nodes)
} else {
$sds_nodes = $storage_nodes
}
} else {
$sds_nodes = filter_nodes($all_nodes, 'role', 'scaleio')
}
} else {
$sds_nodes = concat(filter_nodes($all_nodes, 'role', 'scaleio-storage-tier1'), filter_nodes($all_nodes, 'role', 'scaleio-storage-tier2'))
}
$sds_nodes_names = keys(nodes_to_hash($sds_nodes, 'name', 'internal_address'))
$sds_nodes_count = count($sds_nodes_names)
$sdc_nodes = concat(filter_nodes($all_nodes, 'role', 'compute'), filter_nodes($all_nodes, 'role', 'cinder'))
$sdc_nodes_ips = values(nodes_to_hash($sdc_nodes, 'name', 'internal_address'))
$mdm_ip_array = split($::managers_ips, ',')
$tb_ip_array = split($::tb_ips, ',')
$mdm_count = count($mdm_ip_array)
$tb_count = count($tb_ip_array)
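# Choose the 1-, 3- or 5-node cluster mode from the number of available
# managers and tie-breakers; remaining candidates are added as standbys.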
if $mdm_count < 2 or $tb_count == 0 {
$cluster_mode = 1
$standby_ips = []
$slave_names = undef
$tb_names = undef
} else {
# the primary controller IP is first in the list in case of a first deploy, and it creates the cluster.
# This is guaranteed by the tasks environment.pp and resize_cluster.pp.
# In case of a re-deploy, the first IP is the current master IP.
$standby_ips = delete($mdm_ip_array, $mdm_ip_array[0])
if $mdm_count < 3 or $tb_count == 1 {
$cluster_mode = 3
$slave_names = join(values_at($standby_ips, '0-0'), ',')
$tb_names = join(values_at($tb_ip_array, '0-0'), ',')
} else {
$cluster_mode = 5
# in case of switching from 3 to 5 nodes, add only standby MDMs/TBs
$to_add_slaves = difference(values_at($standby_ips, '0-1'), intersection(values_at($standby_ips, '0-1'), split($::scaleio_mdm_ips, ',')))
$to_add_tb = difference(values_at($tb_ip_array, '0-1'), intersection(values_at($tb_ip_array, '0-1'), split($::scaleio_tb_ips, ',')))
$slave_names = join($to_add_slaves, ',')
$tb_names = join($to_add_tb, ',')
}
}
$password = $scaleio['password']
# parse the config from the centralized DB if it exists
if $::scaleio_sds_config and $::scaleio_sds_config != '' {
$sds_devices_config = parsejson($::scaleio_sds_config)
}
else {
$sds_devices_config = undef
}
if $scaleio['device_paths'] and $scaleio['device_paths'] != '' {
# if devices come from settings, remove probable trailing commas
$paths = join(split($scaleio['device_paths'], ','), ',')
} else {
$paths = undef
}
if $scaleio['storage_pools'] and $scaleio['storage_pools'] != '' {
# if storage pools come from settings remove probable trailing commas
$pools_array = split($scaleio['storage_pools'], ',')
$pools = join($pools_array, ',')
} else {
$pools_array = get_pools_from_sds_config($sds_devices_config)
$pools = undef
}
$zero_padding = $scaleio['zero_padding'] ? {
false => 'disable',
default => 'enable'
}
$scanner_mode = $scaleio['scanner_mode'] ? {
false => 'disable',
default => 'enable'
}
$checksum_mode = $scaleio['checksum_mode'] ? {
false => 'disable',
default => 'enable'
}
$spare_policy = $scaleio['spare_policy'] ? {
false => undef,
default => $scaleio['spare_policy']
}
if $scaleio['rmcache_usage'] {
if $scaleio['rmcache_passthrough_pools'] and $scaleio['rmcache_passthrough_pools'] != '' {
$rmcache_passthrough_pools = split($scaleio['rmcache_passthrough_pools'], ',')
} else {
$rmcache_passthrough_pools = []
}
if $scaleio['rmcache_cached_pools'] and $scaleio['rmcache_cached_pools'] != '' {
$rmcache_cached_pools = split($scaleio['rmcache_cached_pools'], ',')
} else {
$rmcache_cached_pools = []
}
} else {
$rmcache_passthrough_pools = []
$rmcache_cached_pools = []
}
if $scaleio['rfcache_devices'] and $scaleio['rfcache_devices'] != '' {
$rfcache_devices = $scaleio['rfcache_devices']
} else {
$rfcache_devices = undef
}
if $scaleio['cached_storage_pools'] and $scaleio['cached_storage_pools'] != '' {
$rfcache_storage_pools_array = split($scaleio['cached_storage_pools'], ',')
} else {
$rfcache_storage_pools_array = []
}
if $scaleio['capacity_high_alert_threshold'] and $scaleio['capacity_high_alert_threshold'] != '' {
$capacity_high_alert_threshold = $scaleio['capacity_high_alert_threshold']
} else {
$capacity_high_alert_threshold = undef
}
if $scaleio['capacity_critical_alert_threshold'] and $scaleio['capacity_critical_alert_threshold'] != '' {
$capacity_critical_alert_threshold = $scaleio['capacity_critical_alert_threshold']
} else {
$capacity_critical_alert_threshold = undef
}
$client_password_str = base64('encode', pw_hash($password, 'SHA-512', 'scaleio.client.access'))
$client_password = inline_template('Sio-<%= @client_password_str[33..40] %>-<%= @client_password_str[41..48] %>')
notify {"Configure cluster MDM: ${master_mdm}": } ->
scaleio::login {'Normal':
password => $password,
require => File_line['SCALEIO_discovery_allowed']
}
if $::scaleio_sdc_ips {
$current_sdc_ips = split($::scaleio_sdc_ips, ',')
$to_keep_sdc = intersection($current_sdc_ips, $sdc_nodes_ips)
$to_remove_sdc = difference($current_sdc_ips, $to_keep_sdc)
# TODO: it is not clear whether this is safe: the sdc task actually runs before the cluster task,
# so to_add_sdc_ips is always empty here, because all SDCs
# are already registered in the cluster and are returned by the facter scaleio_current_sdc_list
notify {"SDC change current='${::scaleio_current_sdc_list}', to_add='${to_add_sdc_ips}', to_remove='${to_remove_sdc}'": } ->
cleanup_sdc {$to_remove_sdc:
require => Scaleio::Login['Normal'],
}
}
if $::scaleio_sds_names {
$current_sds_names = split($::scaleio_sds_names, ',')
$to_keep_sds = intersection($current_sds_names, $sds_nodes_names)
$to_add_sds_names = difference($sds_nodes_names, $to_keep_sds)
$to_remove_sds = difference($current_sds_names, $to_keep_sds)
notify {"SDS change current='${::scaleio_sds_names}' new='${new_sds_names}' to_remove='${to_remove_sds}'": } ->
cleanup_sds {$to_remove_sds:
require => Scaleio::Login['Normal'],
}
} else {
$to_add_sds_names = $sds_nodes_names
}
if $::scaleio_sds_with_protection_domain_list and $::scaleio_sds_with_protection_domain_list != '' {
$scaleio_sds_to_pd_map = hash(split($::scaleio_sds_with_protection_domain_list, ','))
} else {
$scaleio_sds_to_pd_map = {}
}
$sds_pd_limit = $scaleio['protection_domain_nodes'] ? {
undef => 0, # unlimited
default => $scaleio['protection_domain_nodes']
}
$sds_to_pd_map = update_sds_to_pd_map($scaleio_sds_to_pd_map, $scaleio['protection_domain'], $sds_pd_limit, $to_add_sds_names)
$protection_domain_array = unique(values($sds_to_pd_map))
if $cluster_mode != 1 {
mdm_standby {$standby_ips:
require => Scaleio::Login['Normal'],
} ->
mdm_tb{$tb_ip_array:} ->
scaleio::cluster {'Configure cluster mode':
ensure => 'present',
cluster_mode => $cluster_mode,
slave_names => $slave_names,
tb_names => $tb_names,
require => Scaleio::Login['Normal'],
}
}
protection_domain_ensure {$protection_domain_array:
pools_array => $pools_array,
zero_padding => $zero_padding,
scanner_mode => $scanner_mode,
checksum_mode => $checksum_mode,
spare_policy => $spare_policy,
rfcache_storage_pools_array => $rfcache_storage_pools_array,
rmcache_passthrough_pools => $rmcache_passthrough_pools,
rmcache_cached_pools => $rmcache_cached_pools,
require => Scaleio::Login['Normal'],
} ->
sds_ensure {$to_add_sds_names:
sds_nodes => $sds_nodes,
sds_to_pd_map => $sds_to_pd_map,
storage_pools => $pools,
device_paths => $paths,
rfcache_devices => $rfcache_devices,
sds_devices_config => $sds_devices_config,
require => Protection_domain_ensure[$protection_domain_array],
before => Scaleio::Cluster['Create scaleio client user'],
}
if $capacity_high_alert_threshold and $capacity_critical_alert_threshold {
scaleio::cluster {'Configure alerts':
ensure => 'present',
capacity_high_alert_threshold => $capacity_high_alert_threshold,
capacity_critical_alert_threshold => $capacity_critical_alert_threshold,
require => Protection_domain_ensure[$protection_domain_array],
before => Scaleio::Cluster['Create scaleio client user'],
}
}
# Apply the high performance profile to SDCs.
# Use the first SDC IP because the underlying puppet uses the all_sdc parameters
if ! empty($sdc_nodes_ips) {
scaleio::sdc {'Set performance settings for all available SDCs':
ip => $sdc_nodes_ips[0],
require => Protection_domain_ensure[$protection_domain_array],
before => Scaleio::Cluster['Create scaleio client user'],
}
}
scaleio::cluster {'Create scaleio client user':
ensure => 'present',
client_password => $client_password,
require => [Protection_domain_ensure[$protection_domain_array], Sds_ensure[$to_add_sds_names]],
}
} else {
notify {"Not Master MDM IP ${master_mdm}": }
}
file_line {'SCALEIO_mdm_ips':
ensure => present,
path => '/etc/environment',
match => '^SCALEIO_mdm_ips=',
line => "SCALEIO_mdm_ips=${::managers_ips}",
} ->
# forbid requesting sdc/sds from discovery facters;
# this is a workaround for a ScaleIO problem -
# these requests hang for some reason if the cluster is in a degraded state
file_line {'SCALEIO_discovery_allowed':
ensure => present,
path => '/etc/environment',
match => '^SCALEIO_discovery_allowed=',
line => 'SCALEIO_discovery_allowed=no',
}
} else {
fail('Empty MDM IPs configuration')
}
} else {
notify{'Skip configuring cluster because of using existing cluster': }
}
}

@@ -1,37 +0,0 @@
# The puppet discovers the cluster and updates the mdm_ips and tb_ips values for the next cluster task.
notice('MODULAR: scaleio: discovery_cluster')
$scaleio = hiera('scaleio')
if $scaleio['metadata']['enabled'] {
if ! $scaleio['existing_cluster'] {
# names of mdm and tb are IPs in fuel
$current_mdms = split($::scaleio_mdm_ips, ',')
$current_managers = concat(split($::scaleio_mdm_ips, ','), split($::scaleio_standby_mdm_ips, ','))
$current_tbs = concat(split($::scaleio_tb_ips, ','), split($::scaleio_standby_tb_ips, ','))
$discovered_mdms_ips = join($current_mdms, ',')
$discovered_managers_ips = join($current_managers, ',')
$discovered_tbs_ips = join($current_tbs, ',')
notify {"ScaleIO cluster: discovery: discovered_managers_ips='${discovered_managers_ips}', discovered_tbs_ips='${discovered_tbs_ips}'": } ->
file_line {'SCALEIO_mdm_ips':
ensure => present,
path => '/etc/environment',
match => '^SCALEIO_mdm_ips=',
line => "SCALEIO_mdm_ips=${discovered_mdms_ips}",
} ->
file_line {'SCALEIO_managers_ips':
ensure => present,
path => '/etc/environment',
match => '^SCALEIO_managers_ips=',
line => "SCALEIO_managers_ips=${discovered_managers_ips}",
} ->
file_line {'SCALEIO_tb_ips':
ensure => present,
path => '/etc/environment',
match => '^SCALEIO_tb_ips=',
line => "SCALEIO_tb_ips=${discovered_tbs_ips}",
}
} else {
notify{'Skip configuring cluster because of using existing cluster': }
}
}

@@ -1,180 +0,0 @@
# The puppet resets MDM IPs to the initial state for the next cluster detection on controllers.
# On client nodes all controllers are simply used as MDM IPs because there is no way to detect the cluster there.
define env_fact($role, $fact, $value) {
file_line { "Append a SCALEIO_${role}_${fact} line to /etc/environment":
ensure => present,
path => '/etc/environment',
match => "^SCALEIO_${role}_${fact}=",
line => "SCALEIO_${role}_${fact}=${value}",
}
}
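# env_fact persists a SCALEIO_<role>_<fact>=<value> line in /etc/environment
# so that later deployment tasks and facters can pick the value up.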
notice('MODULAR: scaleio: environment')
$scaleio = hiera('scaleio')
if $scaleio['metadata']['enabled'] {
notify{'ScaleIO plugin enabled': }
# The following exec allows an interrupt for debugging at the very beginning of the plugin deployment,
# because Fuel doesn't provide any tools for this and deployment can last for more than two hours.
# Timeouts in tasks.yaml and in deployment_tasks.yaml (which in 6.1 is not user-exposed and
# can be found, for example, in the astute docker container during deployment) should be set to high values.
# It'll be invoked only if the /tmp/scaleio_debug file exists on a particular node, and you can use
# "touch /tmp/go" when you're ready to resume.
exec { 'Wait on debug interrupt: use touch /tmp/go to resume':
command => "bash -c 'while [ ! -f /tmp/go ]; do :; done'",
path => [ '/bin/' ],
onlyif => 'ls /tmp/scaleio_debug',
}
case $::osfamily {
'RedHat': {
fail('This is a temporary limitation. ScaleIO supports only Ubuntu for now.')
}
'Debian': {
# nothing to do
}
default: {
fail("Unsupported osfamily: ${::osfamily} operatingsystem: ${::operatingsystem}, module ${module_name} only support osfamily RedHat and Debian")
}
}
$all_nodes = hiera('nodes')
if ! $scaleio['skip_checks'] and empty(filter_nodes($all_nodes, 'role', 'cinder')) {
fail('At least one Node with Cinder role is required')
}
if $scaleio['existing_cluster'] {
# Existing ScaleIO cluster attaching
notify{'Use existing ScaleIO cluster': }
env_fact{"Environment fact: role gateway, ips: ${scaleio['gateway_ip']}":
role => 'gateway',
fact => 'ips',
value => $scaleio['gateway_ip']
} ->
env_fact{"Environment fact: role gateway, user: ${scaleio['gateway_user']}":
role => 'gateway',
fact => 'user',
value => $scaleio['gateway_user']
} ->
env_fact{"Environment fact: role gateway, password: ${scaleio['password']}":
role => 'gateway',
fact => 'password',
value => $scaleio['password']
} ->
env_fact{"Environment fact: role gateway, port: ${scaleio['gateway_port']}":
role => 'gateway',
fact => 'port',
value => $scaleio['gateway_port']
} ->
env_fact{"Environment fact: role storage, pools: ${scaleio['existing_storage_pools']}":
role => 'storage',
fact => 'pools',
value => $scaleio['existing_storage_pools']
}
# mdm_ips are requested from gateways in a separate manifest because there is no way to pass args to facter
}
else {
# New ScaleIO cluster deployment
notify{'Deploy ScaleIO cluster': }
$controllers_nodes = filter_nodes($all_nodes, 'role', 'controller')
$primary_controller_nodes = filter_nodes($all_nodes, 'role', 'primary-controller')
# Use the management network for ScaleIO component communication.
# The order of IPs should be equal on all nodes:
# - the first IP must be the primary controller; the others should be sorted to have a defined order
$controllers_ips_ = ipsort(values(nodes_to_hash($controllers_nodes, 'name', 'internal_address')))
$controller_ips_array = concat(values(nodes_to_hash($primary_controller_nodes, 'name', 'internal_address')), $controllers_ips_)
$ctrl_ips = join($controller_ips_array, ',')
notify{"ScaleIO cluster: ctrl_ips=${ctrl_ips}": }
# Check SDS count
$use_plugin_roles = $scaleio['enable_sds_role']
if ! $use_plugin_roles {
if $scaleio['hyper_converged_deployment'] {
$controller_sds_count = $scaleio['sds_on_controller'] ? {
true => count($controller_ips_array),
default => 0
}
$total_sds_count = count(filter_nodes($all_nodes, 'role', 'compute')) + $controller_sds_count
} else {
$total_sds_count = count(filter_nodes($all_nodes, 'role', 'scaleio'))
}
if $total_sds_count < 3 {
$sds_check_msg = 'There should be at least 3 nodes with SDSes; either add a Compute node or use Controllers as SDS.'
}
} else {
$tier1_sds_count = count(filter_nodes($all_nodes, 'role', 'scaleio-storage-tier1'))
$tier2_sds_count = count(filter_nodes($all_nodes, 'role', 'scaleio-storage-tier2'))
if $tier1_sds_count != 0 and $tier1_sds_count < 3 {
$sds_check_msg = 'There are less than 3 nodes with Scaleio Storage Tier1 role.'
}
if $tier2_sds_count != 0 and $tier2_sds_count < 3 {
$sds_check_msg = 'There are less than 3 nodes with Scaleio Storage Tier2 role.'
}
}
if $sds_check_msg {
if ! $scaleio['skip_checks'] {
fail($sds_check_msg)
} else{
warning($sds_check_msg)
}
}
$nodes = filter_nodes($all_nodes, 'name', $::hostname)
if ! empty(concat(filter_nodes($nodes, 'role', 'controller'), filter_nodes($nodes, 'role', 'primary-controller'))) {
if $::memorysize_mb < 2900 {
if ! $scaleio['skip_checks'] {
fail("Controller node requires at least 3000MB but there is ${::memorysize_mb}")
} else {
warning("Controller node requires at least 3000MB but there is ${::memorysize_mb}")
}
}
}
if $::sds_storage_small_devices {
if ! $scaleio['skip_checks'] {
fail("Storage devices minimal size is 100GB. The following devices do not meet this requirement ${::sds_storage_small_devices}")
} else {
warning("Storage devices minimal size is 100GB. The following devices do not meet this requirement ${::sds_storage_small_devices}")
}
}
# mdm ips and tb ips must be empty to avoid queries from ScaleIO about SDC/SDS;
# the next task (cluster discovery) will set them to the correct values.
env_fact{'Environment fact: mdm ips':
role => 'mdm',
fact => 'ips',
value => ''
} ->
env_fact{'Environment fact: managers ips':
role => 'managers',
fact => 'ips',
value => ''
} ->
env_fact{'Environment fact: tb ips':
role => 'tb',
fact => 'ips',
value => ''
} ->
env_fact{'Environment fact: gateway ips':
role => 'gateway',
fact => 'ips',
value => $ctrl_ips
} ->
env_fact{'Environment fact: controller ips':
role => 'controller',
fact => 'ips',
value => $ctrl_ips
} ->
env_fact{'Environment fact: role gateway, user: scaleio_client':
role => 'gateway',
fact => 'user',
value => 'scaleio_client'
} ->
env_fact{'Environment fact: role gateway, port: 4443':
role => 'gateway',
fact => 'port',
value => 4443
} ->
env_fact{"Environment fact: role storage, pools: ${scaleio['storage_pools']}":
role => 'storage',
fact => 'pools',
value => $scaleio['storage_pools']
}
}
} else {
notify{'ScaleIO plugin disabled': }
}

@@ -1,30 +0,0 @@
# The puppet configures ScaleIO MDM IPs in the environment for an existing ScaleIO cluster.
#TODO: move it from this file and from environment.pp into modules
define env_fact($role, $fact, $value) {
file_line { "Append a SCALEIO_${role}_${fact} line to /etc/environment":
ensure => present,
path => '/etc/environment',
match => "^SCALEIO_${role}_${fact}=",
line => "SCALEIO_${role}_${fact}=${value}",
}
}
notice('MODULAR: scaleio: environment_existing_mdm_ips')
$scaleio = hiera('scaleio')
if $scaleio['metadata']['enabled'] {
if $scaleio['existing_cluster'] {
$ips = $::scaleio_mdm_ips_from_gateway
if ! $ips or $ips == '' {
fail('Cannot request MDM IPs from the existing cluster. Check the Gateway address/port and the user name and password.')
}
env_fact{"Environment fact: role mdm, ips from existing cluster ${ips}":
role => 'controller',
fact => 'ips',
value => $ips
}
}
} else {
notify{'ScaleIO plugin disabled': }
}

@@ -1,43 +0,0 @@
# The puppet configures the ScaleIO Gateway: sets the password and connects to MDMs.
notice('MODULAR: scaleio: gateway_server')
$scaleio = hiera('scaleio')
if $scaleio['metadata']['enabled'] {
if ! $scaleio['existing_cluster'] {
if $::managers_ips {
$gw_ips = split($::gateway_ips, ',')
$haproxy_config_options = {
'balance' => 'roundrobin',
'mode' => 'tcp',
'option' => ['tcplog'],
}
Haproxy::Service { use_include => true }
Haproxy::Balancermember { use_include => true }
class {'::scaleio::gateway_server':
ensure => 'present',
mdm_ips => $::managers_ips,
password => $scaleio['password'],
pkg_ftp => $scaleio['pkg_ftp'],
} ->
notify { "Configure Haproxy for Gateway nodes: ${gw_ips}": } ->
openstack::ha::haproxy_service { 'scaleio-gateway':
order => 201,
server_names => $gw_ips,
ipaddresses => $gw_ips,
listen_port => $::gateway_port,
public_virtual_ip => hiera('public_vip'),
internal_virtual_ip => hiera('management_vip'),
define_backups => true,
public => true,
haproxy_config_options => $haproxy_config_options,
balancermember_options => 'check inter 10s fastinter 2s downinter 3s rise 3 fall 3',
}
} else {
fail('Empty MDM IPs configuration')
}
} else {
notify{'Skip deploying gateway server because of using existing cluster': }
}
}

@@ -1,19 +0,0 @@
# The puppet configures OpenStack glance to use ScaleIO via Cinder.
notice('MODULAR: scaleio: glance')
$scaleio = hiera('scaleio')
if $scaleio['metadata']['enabled'] {
$all_nodes = hiera('nodes')
$nodes = filter_nodes($all_nodes, 'name', $::hostname)
if empty(filter_nodes($nodes, 'role', 'primary-controller')) and empty(filter_nodes($nodes, 'role', 'controller')) {
fail("glance task should be run only on controllers, but node ${::hostname} is not controller")
}
if $scaleio['use_scaleio_for_glance'] {
class {'scaleio_openstack::glance':
}
} else {
notify { 'Skip glance configuration': }
}
}

@@ -1,22 +0,0 @@
# The puppet installs ScaleIO MDM packages.
notice('MODULAR: scaleio: mdm_package')
$scaleio = hiera('scaleio')
if $scaleio['metadata']['enabled'] {
if ! $scaleio['existing_cluster'] {
$node_ips = split($::ip_address_array, ',')
if ! empty(intersection(split($::controller_ips, ','), $node_ips))
{
notify {'Mdm server installation': } ->
class {'::scaleio::mdm_server':
ensure => 'present',
pkg_ftp => $scaleio['pkg_ftp'],
}
} else {
notify{'Skip deploying mdm server because it is not a controller': }
}
} else {
notify{'Skip deploying mdm server because of using existing cluster': }
}
}

@@ -1,68 +0,0 @@
# The puppet installs ScaleIO MDM packages.
notice('MODULAR: scaleio: mdm_server')
$scaleio = hiera('scaleio')
if $scaleio['metadata']['enabled'] {
if ! $scaleio['existing_cluster'] {
$all_nodes = hiera('nodes')
$node_ips = split($::ip_address_array, ',')
$new_mdm_ips = split($::managers_ips, ',')
$is_tb = ! empty(intersection(split($::tb_ips, ','), $node_ips))
$is_mdm = ! empty(intersection($new_mdm_ips, $node_ips))
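# A node becomes a tie-breaker or a manager MDM depending on which of the
# discovered IP lists contains one of its addresses.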
if $is_tb or $is_mdm {
if $is_tb {
$is_manager = 0
$master_mdm_name = undef
$master_ip = undef
} else {
$is_manager = 1
$is_new_cluster = ! $::scaleio_mdm_ips or $::scaleio_mdm_ips == ''
$is_primary_controller = ! empty(filter_nodes(filter_nodes($all_nodes, 'name', $::hostname), 'role', 'primary-controller'))
if $is_new_cluster and $is_primary_controller {
$master_ip = $new_mdm_ips[0]
$master_mdm_name = $new_mdm_ips[0]
} else {
$master_ip = undef
$master_mdm_name = undef
}
}
$old_password = $::mdm_password ? {
undef => 'admin',
default => $::mdm_password
}
$password = $scaleio['password']
notify {"Controller server is_manager=${is_manager} master_mdm_name=${master_mdm_name} master_ip=${master_ip}": } ->
class {'::scaleio::mdm_server':
ensure => 'present',
is_manager => $is_manager,
master_mdm_name => $master_mdm_name,
mdm_ips => $master_ip,
}
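# If the admin password differs from the stored one, log in with the old
# password, change it on the cluster, and persist the new value in
# /etc/environment.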
if $old_password != $password {
if $master_mdm_name {
scaleio::login {'First':
password => $old_password,
require => Class['scaleio::mdm_server']
} ->
scaleio::cluster {'Set password':
password => $old_password,
new_password => $password,
before => File_line['Append a SCALEIO_mdm_password line to /etc/environment']
}
}
file_line {'Append a SCALEIO_mdm_password line to /etc/environment':
ensure => present,
path => '/etc/environment',
match => '^SCALEIO_mdm_password=',
line => "SCALEIO_mdm_password=${password}",
}
}
} else {
notify{'Skip deploying mdm server because it is neither mdm nor tb': }
}
} else {
notify{'Skip deploying mdm server because of using existing cluster': }
}
}

@@ -1,48 +0,0 @@
# The puppet configures OpenStack nova to use ScaleIO.
notice('MODULAR: scaleio: nova')
$scaleio = hiera('scaleio')
if $scaleio['metadata']['enabled'] {
$all_nodes = hiera('nodes')
$nodes = filter_nodes($all_nodes, 'name', $::hostname)
if empty(filter_nodes($nodes, 'role', 'compute')) {
fail("Compute Role is not found on the host ${::hostname}")
}
if $scaleio['provisioning_type'] and $scaleio['provisioning_type'] != '' {
$provisioning_type = $scaleio['provisioning_type']
} else {
$provisioning_type = undef
}
$gateway_ip = $scaleio['existing_cluster'] ? {
true => $scaleio['gateway_ip'],
default => hiera('management_vip')
}
$password = $scaleio['password']
if $scaleio['existing_cluster'] {
$client_password = $password
} else {
$client_password_str = base64('encode', pw_hash($password, 'SHA-512', 'scaleio.client.access'))
$client_password = inline_template('Sio-<%= @client_password_str[33..40] %>-<%= @client_password_str[41..48] %>')
}
if $::scaleio_sds_with_protection_domain_list and $::scaleio_sds_with_protection_domain_list != '' {
$scaleio_sds_to_pd_map = hash(split($::scaleio_sds_with_protection_domain_list, ','))
} else {
$scaleio_sds_to_pd_map = {}
}
if $::hostname in $scaleio_sds_to_pd_map {
$pd_name = $scaleio_sds_to_pd_map[$::hostname]
} else {
$pd_name = $scaleio['protection_domain']
}
class {'::scaleio_openstack::nova':
ensure => present,
gateway_user => $::gateway_user,
gateway_password => $client_password,
gateway_ip => $gateway_ip,
gateway_port => $::gateway_port,
protection_domains => $pd_name,
storage_pools => $::storage_pools,
provisioning_type => $provisioning_type,
}
}

@@ -1,127 +0,0 @@
# The puppet configures OpenStack entities like flavor, volume_types, etc.
define apply_flavor(
$flavors_hash = undef, # hash of flavors
) {
$resource_name = $title # 'flavor:action'
$parsed_name = split($title, ':')
$action = $parsed_name[1]
if $action == 'add' {
$flavor_name = $parsed_name[0]
$flavor = $flavors_hash[$flavor_name]
scaleio_openstack::flavor {$resource_name:
ensure => present,
name => $resource_name,
storage_pool => $flavor['storage_pool'],
id => $flavor['id'],
ram_size => $flavor['ram_size'],
vcpus => $flavor['vcpus'],
disk_size => $flavor['disk_size'],
ephemeral_size => $flavor['ephemeral_size'],
swap_size => $flavor['swap_size'],
rxtx_factor => $flavor['rxtx_factor'],
is_public => $flavor['is_public'],
provisioning => $flavor['provisioning'],
}
} else {
scaleio_openstack::flavor {$resource_name:
ensure => absent,
name => $resource_name,
}
}
}
notice('MODULAR: scaleio: os')
$scaleio = hiera('scaleio')
if $scaleio['metadata']['enabled'] {
$all_nodes = hiera('nodes')
if ! empty(filter_nodes(filter_nodes($all_nodes, 'name', $::hostname), 'role', 'primary-controller')) {
if $scaleio['storage_pools'] and $scaleio['storage_pools'] != '' {
# if storage pools come from settings remove probable trailing commas
$pools_array = split($scaleio['storage_pools'], ',')
} else {
$pools_array = get_pools_from_sds_config($sds_devices_config)
}
$storage_pool = $pools_array ? {
undef => undef,
default => $pools_array[0] # use first pool for flavors
}
$flavors = {
'm1.micro' => {
'id' => undef,
'ram_size' => 64,
'vcpus' => 1,
'disk_size' => 8, # because ScaleIO requires the size to be a multiple of 8
'ephemeral_size' => 0,
'rxtx_factor' => 1,
'is_public' => 'True',
'provisioning' => 'thin',
'storage_pool' => $storage_pool,
},
'm1.tiny' => {
'id' => 1,
'ram_size' => 512,
'vcpus' => 1,
'disk_size' => 8,
'ephemeral_size' => 0,
'rxtx_factor' => 1,
'is_public' => 'True',
'provisioning' => 'thin',
'storage_pool' => $storage_pool,
},
'm1.small' => {
'id' => 2,
'ram_size' => 2048,
'vcpus' => 1,
'disk_size' => 24,
'ephemeral_size' => 0,
'rxtx_factor' => 1,
'is_public' => 'True',
'provisioning' => 'thin',
'storage_pool' => $storage_pool,
},
'm1.medium' => {
'id' => 3,
'ram_size' => 4096,
'vcpus' => 2,
'disk_size' => 48,
'ephemeral_size' => 0,
'rxtx_factor' => 1,
'is_public' => 'True',
'provisioning' => 'thin',
'storage_pool' => $storage_pool,
},
'm1.large' => {
'id' => 4,
'ram_size' => 8192,
'vcpus' => 4,
'disk_size' => 80,
'ephemeral_size' => 0,
'rxtx_factor' => 1,
'is_public' => 'True',
'provisioning' => 'thin',
'storage_pool' => $storage_pool,
},
'm1.xlarge' => {
'id' => 5,
'ram_size' => 16384,
'vcpus' => 8,
'disk_size' => 160,
'ephemeral_size' => 0,
'rxtx_factor' => 1,
'is_public' => 'True',
'provisioning' => 'thin',
'storage_pool' => $storage_pool,
},
}
$to_remove_flavors = suffix(keys($flavors), ':remove')
$to_add_flavors = suffix(keys($flavors), ':add')
apply_flavor {$to_remove_flavors:
flavors_hash => undef,
} ->
apply_flavor {$to_add_flavors:
flavors_hash => $flavors,
}
}
}
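The flavor disk sizes above (8, 24, 48, 80, 160) follow ScaleIO's 8GB volume granularity. A tiny Ruby sketch of rounding a requested size up to that granularity (a hypothetical helper, not plugin code):

    # Round a size in GB up to the next multiple of 8 (ScaleIO's allocation unit)
    def scaleio_size(gb)
      ((gb + 7) / 8) * 8
    end

    p [1, 8, 20, 160].map { |g| scaleio_size(g) }   # => [8, 8, 24, 160]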

View File

@ -1,123 +0,0 @@
# The puppet sets 1_node mode and removes absent nodes, if there are any.
# It expects the facts mdm_ips and tb_ips to be correctly set to the current cluster state
define cleanup_mdm () {
$mdm_name = $title
scaleio::mdm {"Remove MDM ${mdm_name}":
ensure => 'absent',
sio_name => $mdm_name,
}
}
notice('MODULAR: scaleio: resize_cluster')
# Only the MDM with the minimal IP among the current MDMs does the cleanup
$scaleio = hiera('scaleio')
if $scaleio['metadata']['enabled'] {
if ! $scaleio['existing_cluster'] {
$all_nodes = hiera('nodes')
$controller_ips_array = split($::controller_ips, ',')
# names of MDMs and TBs are IPs in Fuel
$current_mdms = split($::managers_ips, ',')
$current_tbs = split($::tb_ips, ',')
$mdms_present = intersection($current_mdms, $controller_ips_array)
$mdms_present_str = join($mdms_present, ',')
$mdms_absent = difference($current_mdms, $mdms_present)
$tbs_present = intersection($current_tbs, $controller_ips_array)
$tbs_absent = difference($current_tbs, $tbs_present)
$controllers_count = count($controller_ips_array)
if $controllers_count < 3 {
# 1 node mode
$to_add_mdm_count = 1 - count($mdms_present)
$to_add_tb_count = 0
} else {
# 3 node mode
if $controllers_count < 5 {
$to_add_mdm_count = 2 - count($mdms_present)
$to_add_tb_count = 1 - count($tbs_present)
} else {
# 5 node mode
$to_add_mdm_count = 3 - count($mdms_present)
$to_add_tb_count = 2 - count($tbs_present)
}
}
$nodes_present = concat(intersection($current_mdms, $controller_ips_array), $tbs_present)
$available_nodes = difference($controller_ips_array, intersection($nodes_present, $controller_ips_array))
if $to_add_tb_count > 0 and count($available_nodes) >= $to_add_tb_count {
$last_tb_index = count($available_nodes) - 1
$first_tb_index = $last_tb_index - $to_add_tb_count + 1
$tbs_present_tmp = intersection($current_tbs, $controller_ips_array) # use tmp because concat modifies its first param
$new_tb_ips = join(concat($tbs_present_tmp, values_at($available_nodes, "${first_tb_index}-${last_tb_index}")), ',')
} else {
$new_tb_ips = join($tbs_present, ',')
}
if $to_add_mdm_count > 0 and count($available_nodes) >= $to_add_mdm_count {
$last_mdm_index = $to_add_mdm_count - 1
$mdms_present_tmp = intersection($current_mdms, $controller_ips_array) # use tmp because concat modifies its first param
$new_mdms_ips = join(concat($mdms_present_tmp, values_at($available_nodes, "0-${last_mdm_index}")), ',')
} else {
$new_mdms_ips = join($mdms_present, ',')
}
$is_primary_controller = !empty(filter_nodes(filter_nodes($all_nodes, 'name', $::hostname), 'role', 'primary-controller'))
notify {"ScaleIO cluster: resize: controller_ips_array='${controller_ips_array}', current_mdms='${current_mdms}', current_tbs='${current_tbs}'": }
if !empty($mdms_absent) or !empty($tbs_absent) {
notify {"ScaleIO cluster: change: mdms_present='${mdms_present}', mdms_absent='${mdms_absent}', tbs_present='${tbs_present}', tbs_absent='${tbs_absent}'": }
# primary-controller will do cleanup
if $is_primary_controller {
$active_mdms = split($::scaleio_mdm_ips, ',')
$slaves_names = join(delete($active_mdms, $active_mdms[0]), ',') # first is current master
$to_remove_mdms = concat(split(join($mdms_absent, ','), ','), $tbs_absent) # join/split because concat affects first argument
scaleio::login {'Normal':
password => $scaleio['password']
} ->
scaleio::cluster {'Resize cluster mode to 1_node and remove other MDMs':
ensure => 'absent',
cluster_mode => 1,
slave_names => $slaves_names,
tb_names => $::scaleio_tb_ips,
require => Scaleio::Login['Normal'],
before => File_line['SCALEIO_mdm_ips']
} ->
cleanup_mdm {$to_remove_mdms:
before => File_line['SCALEIO_mdm_ips']
}
} else {
notify {"ScaleIO cluster: resize: Not primary controller ${::hostname}": }
}
} else {
notify {'ScaleIO cluster: resize: nothing to resize': }
}
file_line {'SCALEIO_mdm_ips':
ensure => present,
path => '/etc/environment',
match => '^SCALEIO_mdm_ips=',
line => "SCALEIO_mdm_ips=${mdms_present_str}",
} ->
file_line {'SCALEIO_managers_ips':
ensure => present,
path => '/etc/environment',
match => '^SCALEIO_managers_ips=',
line => "SCALEIO_managers_ips=${new_mdms_ips}",
} ->
file_line {'SCALEIO_tb_ips':
ensure => present,
path => '/etc/environment',
match => '^SCALEIO_tb_ips=',
line => "SCALEIO_tb_ips=${new_tb_ips}",
}
# only primary-controller needs discovery of sds/sdc
if $is_primary_controller {
file_line {'SCALEIO_discovery_allowed':
ensure => present,
path => '/etc/environment',
match => '^SCALEIO_discovery_allowed=',
line => 'SCALEIO_discovery_allowed=yes',
require => File_line['SCALEIO_tb_ips']
}
}
} else {
notify{'Skip configuring cluster because of using existing cluster': }
}
}
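The mode arithmetic above targets 1, 3 or 5 cluster nodes depending on controller count. A standalone Ruby sketch with hypothetical IPs (1_node: 1 manager + 0 tie-breakers; 3_node: 2 + 1; 5_node: 3 + 2):

    def resize_targets(controllers, mdms_present, tbs_present)
      managers, tie_breakers =
        if    controllers.length < 3 then [1, 0]
        elsif controllers.length < 5 then [2, 1]
        else                              [3, 2]
        end
      { to_add_mdm: managers     - mdms_present.length,
        to_add_tb:  tie_breakers - tbs_present.length }
    end

    p resize_targets(%w[10.0.0.4 10.0.0.5 10.0.0.6], %w[10.0.0.4], %w[10.0.0.6])
    # => {:to_add_mdm=>1, :to_add_tb=>0}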

View File

@ -1,26 +0,0 @@
# The puppet installs ScaleIO SDC packages and connects to MDMs.
# It expects that any controller could be an MDM
notice('MODULAR: scaleio: sdc')
$scaleio = hiera('scaleio')
if $scaleio['metadata']['enabled'] {
if ! $::controller_ips {
fail('Empty Controller IPs configuration')
}
$all_nodes = hiera('nodes')
$nodes = filter_nodes($all_nodes, 'name', $::hostname)
$is_compute = !empty(filter_nodes($nodes, 'role', 'compute'))
$is_cinder = !empty(filter_nodes($nodes, 'role', 'cinder'))
$is_glance = (!empty(filter_nodes($nodes, 'role', 'primary-controller')) or !empty(filter_nodes($nodes, 'role', 'controller'))) and $scaleio['use_scaleio_for_glance']
$need_sdc = $is_compute or $is_cinder or $is_glance
if $need_sdc {
class {'::scaleio::sdc_server':
ensure => 'present',
mdm_ip => $::controller_ips,
}
} else{
notify {"Skip SDC server task on the node ${::hostname}": }
}
}

View File

@ -1,22 +0,0 @@
# The puppet installs ScaleIO SDC packages.
notice('MODULAR: scaleio: sdc_server')
$scaleio = hiera('scaleio')
if $scaleio['metadata']['enabled'] {
$all_nodes = hiera('nodes')
$nodes = filter_nodes($all_nodes, 'name', $::hostname)
$is_compute = !empty(filter_nodes($nodes, 'role', 'compute'))
$is_cinder = !empty(filter_nodes($nodes, 'role', 'cinder'))
$is_glance = (!empty(filter_nodes($nodes, 'role', 'primary-controller')) or !empty(filter_nodes($nodes, 'role', 'controller'))) and $scaleio['use_scaleio_for_glance']
$need_sdc = $is_compute or $is_cinder or $is_glance
if $need_sdc {
class {'::scaleio::sdc_server':
ensure => 'present',
mdm_ip => undef,
pkg_ftp => $scaleio['pkg_ftp'],
}
} else{
notify {"Skip SDC server task on the node ${::hostname}": }
}
}

View File

@ -1,122 +0,0 @@
# The puppet installs ScaleIO SDS packages
# helper define for array processing
define sds_device_cleanup() {
$device = $title
exec { "device ${device} cleanup":
command => "bash -c 'for i in \$(parted ${device} print | awk \"/^ [0-9]+/ {print(\\\$1)}\"); do parted ${device} rm \$i; done'",
path => [ '/bin/', '/sbin/' , '/usr/bin/', '/usr/sbin/' ],
}
}
notice('MODULAR: scaleio: sds_server')
# Just install packages
$scaleio = hiera('scaleio')
if $scaleio['metadata']['enabled'] {
if ! $scaleio['existing_cluster'] {
$all_nodes = hiera('nodes')
$nodes = filter_nodes($all_nodes, 'name', $::hostname)
$use_plugin_roles = $scaleio['enable_sds_role']
if ! $use_plugin_roles {
if $scaleio['hyper_converged_deployment'] {
$is_controller = !empty(concat(filter_nodes($nodes, 'role', 'primary-controller'), filter_nodes($nodes, 'role', 'controller')))
$is_sds_on_controller = $is_controller and $scaleio['sds_on_controller']
$is_sds_on_compute = !empty(filter_nodes($nodes, 'role', 'compute'))
$is_sds_server = $is_sds_on_controller or $is_sds_on_compute
} else {
$is_sds_server = !empty(filter_nodes($nodes, 'role', 'scaleio'))
}
} else {
$is_sds_server = ! empty(concat(
concat(filter_nodes($nodes, 'role', 'scaleio-storage-tier1'), filter_nodes($nodes, 'role', 'scaleio-storage-tier2')),
filter_nodes($nodes, 'role', 'scaleio-storage-tier3')))
}
if $is_sds_server {
if ! $use_plugin_roles {
if $scaleio['device_paths'] and $scaleio['device_paths'] != '' {
$device_paths = split($scaleio['device_paths'], ',')
} else {
$device_paths = []
}
if $scaleio['rfcache_devices'] and $scaleio['rfcache_devices'] != '' {
$rfcache_devices = split($scaleio['rfcache_devices'], ',')
} else {
$rfcache_devices = []
}
if ! empty($rfcache_devices) {
$use_xcache = 'present'
} else {
$use_xcache = 'absent'
}
$devices = concat(flatten($device_paths), $rfcache_devices)
sds_device_cleanup {$devices:
before => Class['Scaleio::Sds_server']
} ->
class {'::scaleio::sds_server':
ensure => 'present',
xcache => $use_xcache,
pkg_ftp => $scaleio['pkg_ftp'],
}
} else {
# save devices in shared DB
$tier1_devices = $::sds_storage_devices_tier1 ? {
undef => '',
default => join(split($::sds_storage_devices_tier1, ','), ',')
}
$tier2_devices = $::sds_storage_devices_tier2 ? {
undef => '',
default => join(split($::sds_storage_devices_tier2, ','), ',')
}
$tier3_devices = $::sds_storage_devices_tier3 ? {
undef => '',
default => join(split($::sds_storage_devices_tier3, ','), ',')
}
$rfcache_devices = $::sds_storage_devices_rfcache ? {
undef => '',
default => join(split($::sds_storage_devices_rfcache, ','), ',')
}
if $rfcache_devices and $rfcache_devices != '' {
$use_xcache = 'present'
} else {
$use_xcache = 'absent'
}
$sds_name = $::hostname
$sds_config = {
"${sds_name}" => {
'devices' => {
'tier1' => $tier1_devices,
'tier2' => $tier2_devices,
'tier3' => $tier3_devices,
},
'rfcache_devices' => $rfcache_devices,
}
}
# convert the hash to a string and escape quotes
$sds_config_str = regsubst(regsubst(inline_template('<%= @sds_config.to_s %>'), '=>', ':', 'G'), '"', '\"', 'G')
$galera_host = hiera('management_vip')
$mysql_opts = hiera('mysql')
$mysql_password = $mysql_opts['root_password']
$sql_connect = "mysql -h ${galera_host} -uroot -p${mysql_password}"
$db_query = 'CREATE DATABASE IF NOT EXISTS scaleio; USE scaleio'
$table_query = 'CREATE TABLE IF NOT EXISTS sds (name VARCHAR(64), PRIMARY KEY(name), value TEXT(1024))'
$update_query = "INSERT INTO sds (name, value) VALUES ('${sds_name}', '${sds_config_str}') ON DUPLICATE KEY UPDATE value='${sds_config_str}'"
$sql_query = "${sql_connect} -e \"${db_query}; ${table_query}; ${update_query};\""
class {'::scaleio::sds_server':
ensure => 'present',
xcache => $use_xcache,
pkg_ftp => $scaleio['pkg_ftp'],
} ->
package {'mysql-client':
ensure => present,
} ->
exec {'sds_devices_config':
command => $sql_query,
path => '/bin:/usr/bin:/usr/local/bin',
}
}
}
} else {
notify{'Skip sds server because of using existing cluster': }
}
}
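The sds_config serialization above is Ruby's Hash#to_s with two substitutions so the result can be embedded into the SQL statement. A minimal sketch with a hypothetical node:

    sds_config = {
      'node-1' => {
        'devices' => { 'tier1' => '/dev/sdb', 'tier2' => '', 'tier3' => '' },
        'rfcache_devices' => '',
      }
    }
    # '=>' becomes ':' and double quotes are escaped, as in the manifest
    sds_config_str = sds_config.to_s.gsub('=>', ':').gsub('"', '\"')
    puts sds_config_str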

View File

@ -1,243 +0,0 @@
# The set of facts about ScaleIO cluster.
# All facts expect that MDM IPs are available via the fact 'mdm_ips'.
# If mdm_ips is absent then the facts are skipped.
# The facts about SDS/SDC and getting IPs from Gateway additionally expect that
# MDM password is available via the fact 'mdm_password'.
#
# Facts about MDM:
# (they go over the MDM IPs one by one and request information from the MDM cluster
# via SCLI query_cluster command)
# ---------------------------------------------------------------------------------
# | Name | Description
# |--------------------------------------------------------------------------------
# | scaleio_mdm_ips | Comma separated list of MDM IPs (excluding standby)
# | scaleio_mdm_names | Comma separated list of MDM names (excluding standby)
# | scaleio_tb_ips | Comma separated list of Tie-Breaker IPs (excluding standby)
# | scaleio_tb_names | Comma separated list of Tie-Breaker names (excluding standby)
# | scaleio_standby_mdm_ips | Comma separated list of standby managers IPs
# | scaleio_standby_tb_ips | Comma separated list of standby tie-breaker IPs
#
# Facts about SDS and SDC:
# (they use MDM IPs as a single list and request information from a cluster via
# SCLI query_all_sds and query_all_sdc commands)
# ---------------------------------------------------------------------------------
# | Name | Description
# |--------------------------------------------------------------------------------
# | scaleio_sds_ips | Comma separated list of SDS IPs.
# | scaleio_sds_names | Comma separated list of SDS names.
# | scaleio_sdc_ips | Comma separated list of SDC IPs,
# | | it is the list of management IPs, not storage IPs.
# | scaleio_sds_with_protection_domain_list | Comma separated list of SDS
# | | with its Protection domains: sds1,pd1,sds2,pd2,..
# Facts about MDM from Gateway:
# (It requests them from Gateway via curl and requires the fact 'gateway_ips'.
# The user is 'admin' by default, or the fact 'gateway_user' if it exists.
# A port is 4443 or the fact 'gateway_port' if it exists.)
# ---------------------------------------------------------------------------------
# | Name | Description
# |--------------------------------------------------------------------------------
# | scaleio_mdm_ips_from_gateway | Comma separated list of MDM IP.
require 'date'
require 'facter'
require 'json'
$scaleio_log_file = "/var/log/fuel-plugin-scaleio.log"
def debug_log(msg)
File.open($scaleio_log_file, 'a') {|f| f.write("%s: %s\n" % [Time.now.strftime("%Y-%m-%d %H:%M:%S"), msg]) }
end
# Facter to scan existing cluster
# Controller IPs to scan
$controller_ips = Facter.value(:controller_ips)
if $controller_ips and $controller_ips != ''
# Register all facts for MDMs
# Example of output that facters below parse:
# Cluster:
# Mode: 3_node, State: Normal, Active: 3/3, Replicas: 2/2
# Master MDM:
# Name: 192.168.0.4, ID: 0x0ecb483853835e00
# IPs: 192.168.0.4, Management IPs: 192.168.0.4, Port: 9011
# Version: 2.0.5014
# Slave MDMs:
# Name: 192.168.0.5, ID: 0x3175fbe7695bbac1
# IPs: 192.168.0.5, Management IPs: 192.168.0.5, Port: 9011
# Status: Normal, Version: 2.0.5014
# Tie-Breakers:
# Name: 192.168.0.6, ID: 0x74ccbc567622b992
# IPs: 192.168.0.6, Port: 9011
# Status: Normal, Version: 2.0.5014
# Standby MDMs:
# Name: 192.168.0.5, ID: 0x0ce414fa06a17491, Manager
# IPs: 192.168.0.5, Management IPs: 192.168.0.5, Port: 9011
# Name: 192.168.0.6, ID: 0x74ccbc567622b992, Tie Breaker
# IPs: 192.168.0.6, Port: 9011
mdm_components = {
'scaleio_mdm_ips' => ['/Master MDM/,/\(Tie-Breakers\)\|\(Standby MDMs\)/p', '/./,//p', 'IPs:'],
'scaleio_tb_ips' => ['/Tie-Breakers/,/Standby MDMs/p', '/./,//p', 'IPs:'],
'scaleio_mdm_names' => ['/Master MDM/,/\(Tie-Breakers\)\|\(Standby MDMs\)/p', '/./,//p', 'Name:'],
'scaleio_tb_names' => ['/Tie-Breakers/,/Standby MDMs/p', '/./,//p', 'Name:'],
'scaleio_standby_mdm_ips' => ['/Standby MDMs/,//p', '/Manager/,/Tie Breaker/p', 'IPs:'],
'scaleio_standby_tb_ips' => ['/Standby MDMs/,//p', '/Tie Breaker/,/Manager/p', 'IPs:'],
}
# Define mdm opts for SCLI tool to connect to ScaleIO cluster.
# If no mdm_ips are available, it is expected to run on the node with the MDM Master.
mdm_opts = []
$controller_ips.split(',').each do |ip|
mdm_opts.push("--mdm_ip %s" % ip)
end
# iterate over the MDM IPs because SCLI's query_cluster behaviour is strange:
# it works for one IP but doesn't for the list.
query_result = nil
mdm_opts.detect do |opts|
query_cmd = "scli %s --query_cluster --approve_certificate 2>>%s && echo success" % [opts, $scaleio_log_file]
res = Facter::Util::Resolution.exec(query_cmd)
debug_log("%s returns:\n'%s'" % [query_cmd, res])
query_result = res unless !res or !res.include?('success')
end
if query_result
mdm_components.each do |name, selector|
Facter.add(name) do
setcode do
ip = nil
cmd = "echo '%s' | sed -n '%s' | sed -n '%s' | awk '/%s/ {print($2)}' | tr -d ','" % [query_result, selector[0], selector[1], selector[2]]
res = Facter::Util::Resolution.exec(cmd)
ip = res.split(' ').join(',') unless !res
debug_log("%s='%s'" % [name, ip])
ip
end
end
end
end
end
# Facter to scan existing cluster
# MDM IPs to scan
$discovery_allowed = Facter.value(:discovery_allowed)
$mdm_ips = Facter.value(:mdm_ips)
$mdm_password = Facter.value(:mdm_password)
if $discovery_allowed == 'yes' and $mdm_ips and $mdm_ips != '' and $mdm_password and $mdm_password != ''
sds_sdc_components = {
'scaleio_sdc_ips' => ['sdc', 'IP: [^ ]*', nil],
'scaleio_sds_ips' => ['sds', 'IP: [^ ]*', 'Protection Domain'],
'scaleio_sds_names' => ['sds', 'Name: [^ ]*', 'Protection Domain'],
}
sds_sdc_components.each do |name, selector|
Facter.add(name) do
setcode do
mdm_opts = "--mdm_ip %s" % $mdm_ips
login_cmd = "scli %s --approve_certificate --login --username admin --password %s 1>/dev/null 2>>%s" % [mdm_opts, $mdm_password, $scaleio_log_file]
query_cmd = "scli %s --approve_certificate --query_all_%s 2>>%s" % [mdm_opts, selector[0], $scaleio_log_file]
cmd = "%s && %s" % [login_cmd, query_cmd]
debug_log(cmd)
result = Facter::Util::Resolution.exec(cmd)
if result
skip_cmd = ''
if selector[2]
skip_cmd = "grep -v '%s' | " % selector[2]
end
select_cmd = "%s grep -o '%s' | awk '{print($2)}'" % [skip_cmd, selector[1]]
cmd = "echo '%s' | %s" % [result, select_cmd]
debug_log(cmd)
result = Facter::Util::Resolution.exec(cmd)
if result
result = result.split(' ')
if result.count() > 0
result = result.join(',')
end
end
end
debug_log("%s='%s'" % [name, result])
result
end
end
end
Facter.add(:scaleio_storage_pools) do
setcode do
mdm_opts = "--mdm_ip %s" % $mdm_ips
login_cmd = "scli %s --approve_certificate --login --username admin --password %s 1>/dev/null 2>>%s" % [mdm_opts, $mdm_password, $scaleio_log_file]
query_cmd = "scli %s --approve_certificate --query_all 2>>%s" % [mdm_opts, $scaleio_log_file]
fiter_cmd = "awk '/Protection Domain|Storage Pool/ {if($2==\"Domain\"){pd=$3}else{if($2==\"Pool\"){print(pd\":\"$3)}}}'"
cmd = "%s && %s | %s" % [login_cmd, query_cmd, fiter_cmd]
debug_log(cmd)
result = Facter::Util::Resolution.exec(cmd)
if result
result = result.split(' ')
if result.count() > 0
result = result.join(',')
end
end
debug_log("%s='%s'" % ['scaleio_storage_pools', result])
result
end
end
Facter.add(:scaleio_sds_with_protection_domain_list) do
setcode do
mdm_opts = "--mdm_ip %s" % $mdm_ips
login_cmd = "scli %s --approve_certificate --login --username admin --password %s 1>/dev/null 2>>%s" % [mdm_opts, $mdm_password, $scaleio_log_file]
query_cmd = "scli %s --approve_certificate --query_all_sds 2>>%s" % [mdm_opts, $scaleio_log_file]
fiter_cmd = "awk '/Protection Domain/ {domain=$5} /SDS ID:/ {print($5\",\"domain)}'"
cmd = "%s && %s | %s" % [login_cmd, query_cmd, fiter_cmd]
debug_log(cmd)
result = Facter::Util::Resolution.exec(cmd)
if result
result = result.split(' ')
if result.count() > 0
result = result.join(',')
end
end
debug_log("%s='%s'" % ['scaleio_sds_with_protection_domain_list', result])
result
end
end
end
#The fact about MDM IPs.
#It requests them from Gateway.
$gw_ips = Facter.value(:gateway_ips)
$gw_passw = Facter.value(:gateway_password)
if $gw_passw && $gw_passw != '' and $gw_ips and $gw_ips != ''
Facter.add('scaleio_mdm_ips_from_gateway') do
setcode do
result = nil
if Facter.value('gateway_user')
gw_user = Facter.value('gateway_user')
else
gw_user = 'admin'
end
host = $gw_ips.split(',')[0]
if Facter.value('gateway_port')
port = Facter.value('gateway_port')
else
port = 4443
end
base_url = "https://%s:%s/api/%s"
login_url = base_url % [host, port, 'login']
config_url = base_url % [host, port, 'Configuration']
login_req = "curl -k --basic --connect-timeout 5 --user #{gw_user}:#{$gw_passw} #{login_url} 2>>%s | sed 's/\"//g'" % $scaleio_log_file
debug_log(login_req)
token = Facter::Util::Resolution.exec(login_req)
if token && token != ''
req_url = "curl -k --basic --connect-timeout 5 --user #{gw_user}:#{token} #{config_url} 2>>%s" % $scaleio_log_file
debug_log(req_url)
request_result = Facter::Util::Resolution.exec(req_url)
if request_result
config = JSON.parse(request_result)
if config and config['mdmAddresses']
result = config['mdmAddresses'].join(',')
end
end
end
debug_log("%s='%s'" % ['scaleio_mdm_ips_from_gateway', result])
result
end
end
end
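The sed/awk selector pipelines above can be approximated in pure Ruby; a sketch against the sample query_cluster output quoted in the comments, with a regex standing in for the sed line ranges:

    output = <<~EOS
      Master MDM:
          Name: 192.168.0.4, ID: 0x0ecb483853835e00
              IPs: 192.168.0.4, Management IPs: 192.168.0.4, Port: 9011
      Slave MDMs:
          Name: 192.168.0.5, ID: 0x3175fbe7695bbac1
              IPs: 192.168.0.5, Management IPs: 192.168.0.5, Port: 9011
      Tie-Breakers:
          Name: 192.168.0.6, ID: 0x74ccbc567622b992
              IPs: 192.168.0.6, Port: 9011
    EOS

    # scaleio_mdm_ips: everything from 'Master MDM' up to 'Tie-Breakers'/'Standby MDMs'
    section = output[/Master MDM:.*?(?=Tie-Breakers:|Standby MDMs:|\z)/m]
    ips = section.scan(/^\s*IPs:\s*([^,\s]+)/).flatten
    puts ips.join(',')   # => "192.168.0.4,192.168.0.5"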

View File

@ -1,78 +0,0 @@
require 'date'
require 'facter'
$scaleio_storage_guid = '5e9bd278-9919-4db3-9756-4b82c7e9df52'
## Experimental:
#$scaleio_tier1_guid = 'f2e81bdc-99b3-4bf6-a68f-dc794da6cd8e'
#$scaleio_tier2_guid = 'd5321bb3-1098-433e-b4f5-216712fcd06f'
#$scaleio_tier3_guid = '97987bfc-a9ba-40f3-afea-13e1a228e492'
#$scaleio_rfcache_guid = '163ddeea-95dd-4af0-a329-140623590c47'
$scaleio_tiers = {
'all' => $scaleio_storage_guid,
# 'tier1' => $scaleio_tier1_guid,
# 'tier2' => $scaleio_tier2_guid,
# 'tier3' => $scaleio_tier3_guid,
# 'rfcache' => $scaleio_rfcache_guid,
}
$scaleio_log_file = "/var/log/fuel-plugin-scaleio.log"
def debug_log(msg)
File.open($scaleio_log_file, 'a') {|f| f.write("%s: %s\n" % [Time.now.strftime("%Y-%m-%d %H:%M:%S"), msg]) }
end
$scaleio_tiers.each do |tier, part_guid|
facter_name = "sds_storage_devices_%s" % tier
Facter.add(facter_name) do
setcode do
devices = nil
res = Facter::Util::Resolution.exec("lsblk -nr -o KNAME,TYPE 2>%s | awk '/disk/ {print($1)}'" % $scaleio_log_file)
if res and res != ''
parts = []
disks = res.split(' ')
disks.each do |d|
disk_path = "/dev/%s" % d
part_number = Facter::Util::Resolution.exec("partx -s %s -oTYPE,NR 2>%s | awk '/%s/ {print($2)}'" % [disk_path, $scaleio_log_file, part_guid])
parts.push("%s%s" % [disk_path, part_number]) unless !part_number or part_number == ''
end
if parts.count() > 0
devices = parts.join(',')
end
end
debug_log("%s='%s'" % [facter_name, devices])
devices
end
end
end
# fact to report storage devices that are smaller than 96GB
Facter.add('sds_storage_small_devices') do
setcode do
result = nil
disks0 = Facter.value('sds_storage_devices_all')
disks1 = Facter.value('sds_storage_devices_tier1')
disks2 = Facter.value('sds_storage_devices_tier2')
disks3 = Facter.value('sds_storage_devices_tier3')
if disks1 or disks2 or disks3
disks = [disks0, disks1, disks2, disks3].join(',')
end
if disks
devices = disks.split(',')
if devices.count() > 0
devices.each do |d|
size = Facter::Util::Resolution.exec("partx -r -b -o SIZE %s 2>%s | grep -v SIZE" % [d, $scaleio_log_file])
if size and size != '' and size.to_i < 96*1024*1024*1024
if not result
result = {}
end
result[d] = "%s MB" % (size.to_i / 1024 / 1024)
end
end
result = result.to_s unless !result
end
end
debug_log("%s='%s'" % ['sds_storage_small_devices', result])
result
end
end

View File

@ -1,15 +0,0 @@
# set of facts about the deployment environment
require 'facter'
base_cmd = "bash -c 'source /etc/environment; echo $SCALEIO_%s'"
facters = ['controller_ips', 'tb_ips', 'mdm_ips', 'managers_ips',
'gateway_user', 'gateway_port', 'gateway_ips', 'gateway_password', 'mdm_password',
'storage_pools', 'discovery_allowed']
facters.each { |f|
if ! Facter.value(f)
Facter.add(f) do
setcode base_cmd % f
end
end
}
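What base_cmd shells out to can be sketched directly in Ruby, assuming /etc/environment holds lines like SCALEIO_mdm_ips=10.0.0.4 (as written by the resize_cluster manifest above):

    env = File.readlines('/etc/environment').map(&:chomp)
              .map    { |line| line.split('=', 2) }
              .select { |key, _| key && key.start_with?('SCALEIO_') }
              .to_h
    puts env['SCALEIO_mdm_ips']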

View File

@ -1,17 +0,0 @@
require 'facter'
Facter.add("ip_address_array") do
setcode do
interfaces = Facter.value(:interfaces)
interfaces_array = interfaces.split(',')
ip_address_array = []
interfaces_array.each do |interface|
ipaddress = Facter.value("ipaddress_#{interface}")
ip_address_array.push(ipaddress) unless !ipaddress
end
ssh_ip = Facter.value(:ssh_ip)
ip_address_array.push(ssh_ip) unless !ssh_ip
ip_address_array.join(',')
end
end

View File

@ -1,22 +0,0 @@
require 'facter'
base_cmd = "bash -c 'source /root/openrc; echo $%s'"
if File.exist?("/root/openrc")
Facter.add(:os_password) do
setcode base_cmd % 'OS_PASSWORD'
end
Facter.add(:os_tenant_name) do
setcode base_cmd % 'OS_TENANT_NAME'
end
Facter.add(:os_username) do
setcode base_cmd % 'OS_USERNAME'
end
Facter.add(:os_auth_url) do
setcode base_cmd % 'OS_AUTH_URL'
end
end

View File

@ -1,43 +0,0 @@
require 'date'
require 'facter'
require 'json'
$scaleio_log_file = "/var/log/fuel-plugin-scaleio.log"
def debug_log(msg)
File.open($scaleio_log_file, 'a') {|f| f.write("%s: %s\n" % [Time.now.strftime("%Y-%m-%d %H:%M:%S"), msg]) }
end
$astute_config = '/etc/astute.yaml'
if File.exists?($astute_config)
Facter.add(:scaleio_sds_config) do
setcode do
result = nil
config = YAML.load_file($astute_config)
if config and config.key?('scaleio') and config['scaleio'].key?('enable_sds_role')
galera_host = config['management_vip']
mysql_opts = config['mysql']
password = mysql_opts['root_password']
sql_query = "mysql -h %s -uroot -p%s -e 'USE scaleio; SELECT * FROM sds \\G;' 2>>%s | awk '/value:/ {sub($1 FS,\"\" );print}'" % [galera_host, password, $scaleio_log_file]
debug_log(sql_query)
query_result = Facter::Util::Resolution.exec(sql_query)
debug_log(query_result)
if query_result
query_result.each_line do |r|
if r
if not result
result = '{'
else
result += ', '
end
result += r.strip.slice(1..-2)
end
end
result += '}' unless !result
end
end
debug_log("scaleio_sds_config='%s'" % result)
result
end
end
end

View File

@ -1,38 +0,0 @@
# Convert sds_config from the centralized DB into two equal-length lists: pools and devices
module Puppet::Parser::Functions
newfunction(:convert_sds_config, :type => :rvalue, :doc => <<-EOS
Takes an SDS config as a hash and returns an array: the first element is the pools list, the second the devices list
EOS
) do |args|
sds_config = args[0]
result = nil
if sds_config
pools_devices = sds_config['devices']
if pools_devices
pools = nil
devices = nil
pools_devices.each do |pool, devs|
devs.split(',').each do |d|
if d and d != ""
if ! pools
pools = pool.strip
else
pools += ",%s" % pool.strip
end
if ! devices
devices = d.strip
else
devices += ",%s" % d.strip
end
end
end
end
if pools and devices
result = [pools, devices]
end
end
end
return result
end
end
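A standalone sketch of the same flattening, with an assumed device layout, showing the equal-length pools/devices result:

    config = { 'devices' => { 'tier1' => '/dev/sdb,/dev/sdc', 'tier2' => '/dev/sdd' } }
    pools, devices = [], []
    config['devices'].each do |pool, devs|
      devs.split(',').reject(&:empty?).each do |d|
        pools   << pool.strip
        devices << d.strip
      end
    end
    p [pools.join(','), devices.join(',')]
    # => ["tier1,tier1,tier2", "/dev/sdb,/dev/sdc,/dev/sdd"]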

View File

@ -1,9 +0,0 @@
module Puppet::Parser::Functions
newfunction(:filter_nodes, :type => :rvalue) do |args|
name = args[1]
value = args[2]
args[0].select do |it|
it[name] == value
end
end
end

View File

@ -1,23 +0,0 @@
# Extract the unique list of storage pools from the SDS configs stored in the centralized DB
require File.join([File.expand_path(File.dirname(__FILE__)), 'convert_sds_config.rb'])
module Puppet::Parser::Functions
newfunction(:get_pools_from_sds_config, :type => :rvalue, :doc => <<-EOS
Takes a hash of SDS configs and returns the unique array of pool names
EOS
) do |args|
config = args[0]
result = []
if config
config.each do |sds, cfg|
pools_devices = function_convert_sds_config([cfg]) # the function_ prefix is required to call other Puppet parser functions
# and the arguments are passed as an array
if pools_devices and pools_devices[0]
result += pools_devices[0].split(',')
end
end
end
return result.uniq
end
end
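Equivalently in plain Ruby, the unique pool list across several SDSes (hypothetical nodes):

    configs = {
      'node-1' => { 'devices' => { 'tier1' => '/dev/sdb', 'tier2' => '/dev/sdd' } },
      'node-2' => { 'devices' => { 'tier1' => '/dev/sde' } },
    }
    pools = configs.values.flat_map do |cfg|
      cfg['devices'].reject { |_pool, devs| devs.empty? }.keys
    end
    p pools.uniq   # => ["tier1", "tier2"]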

View File

@ -1,24 +0,0 @@
module Puppet::Parser::Functions
newfunction(:pw_hash, :type => :rvalue) do |args|
require 'digest/sha2'
raise(Puppet::ParseError, "pw_hash(): Wrong number of arguments " +
"passed (#{args.size} but we require 3)") if args.size != 3
password = args[0]
alg = args[1]
salt = args[2]
return case alg
when 'SHA-512'
Digest::SHA512.digest(salt + password)
when 'SHA-256'
Digest::SHA256.digest(salt + password)
when 'MD5'
Digest::MD5.digest(salt + password)
else
raise(Puppet::ParseError, "pw_hash(): Invalid algorithm " +
"passed (%s but it supports only SHA-512, SHA-256, MD5)" % alg)
end
end
end

View File

@ -1,43 +0,0 @@
# Update the map of SDSes to protection domains according to SDS number limit in one domain
module Puppet::Parser::Functions
newfunction(:update_sds_to_pd_map, :type => :rvalue, :doc => <<-EOS
Updates map of SDS to PD with new SDSes according to SDS number limit in one PD
EOS
) do |args|
sds_to_pd_map = {}
sds_to_pd_map.update(args[0])
pd_name_template = args[1]
pd_number = 1
sds_limit = args[2].to_i
new_sds_array = args[3]
# prepare map of PDs to SDSes
pd_to_sds_map = {}
sds_to_pd_map.each do |sds, pd|
if not pd_to_sds_map.has_key?(pd)
pd_to_sds_map[pd] = []
end
pd_to_sds_map[pd] << sds
end
# map new SDSes to PDs
new_sds_array.select{|sds| not sds_to_pd_map.has_key?(sds)}.each do |sds|
suitable_pd_array = pd_to_sds_map.select{|pd, sds_es| sds_limit == 0 or sds_es.length < sds_limit}.keys
if suitable_pd_array.length == 0
pd_name = nil
while not pd_name
proposed_name = pd_number == 1 ? pd_name_template : "%s_%s" % [pd_name_template, pd_number]
pd_number += 1
if not pd_to_sds_map.has_key?(proposed_name)
pd_name = proposed_name
end
end
pd_to_sds_map[pd_name] = []
suitable_pd_array = [pd_name]
end
suitable_pd = suitable_pd_array[0]
pd_to_sds_map[suitable_pd] << sds
sds_to_pd_map[sds] = suitable_pd
end
return sds_to_pd_map
end
end
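A simplified standalone version of the same packing algorithm (naming of new protection domains slightly simplified), showing the expected mapping:

    def update_map(map, template, limit, new_sds)
      pd_to_sds = Hash.new { |h, k| h[k] = [] }
      map.each { |sds, pd| pd_to_sds[pd] << sds }
      new_sds.reject { |s| map.key?(s) }.each do |sds|
        # first PD with free capacity, else a freshly named one
        pd = pd_to_sds.keys.find { |p| limit == 0 || pd_to_sds[p].length < limit }
        unless pd
          n  = pd_to_sds.keys.length + 1
          pd = n == 1 ? template : "#{template}_#{n}"
        end
        pd_to_sds[pd] << sds
        map[sds] = pd
      end
      map
    end

    p update_map({ 'sds-1' => 'default' }, 'default', 2, %w[sds-2 sds-3])
    # => {"sds-1"=>"default", "sds-2"=>"default", "sds-3"=>"default_2"}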

View File

@ -1,229 +0,0 @@
##############################################################################
# ScaleIO task groups
##############################################################################
# for next version:
# - id: scaleio-storage-tier1
# type: group
# role: [scaleio-storage-tier1]
# tasks: [hiera, globals, tools, logging, netconfig, hosts, firewall, deploy_start]
# required_for: [deploy_end]
# requires: [deploy_start]
# parameters:
# strategy:
# type: parallel
#
# - id: scaleio-storage-tier2
# type: group
# role: [scaleio-storage-tier2]
# tasks: [hiera, globals, tools, logging, netconfig, hosts, firewall, deploy_start]
# required_for: [deploy_end]
# requires: [deploy_start]
# parameters:
# strategy:
# type: parallel
- id: scaleio
type: group
role: [scaleio]
tasks: [hiera, globals, setup_repositories, tools, logging, netconfig, hosts, deploy_start]
required_for: [deploy_end]
requires: [deploy_start]
parameters:
strategy:
type: one_by_one
##############################################################################
# ScaleIO environment check
##############################################################################
- id: scaleio-environment-check
# groups: [scaleio-storage-tier1, scaleio-storage-tier2, primary-controller, controller, compute, cinder]
groups: [primary-controller, controller, compute, cinder, scaleio]
required_for: [deploy_end, hosts]
requires: [deploy_start] #, netconfig]
type: puppet
version: 2.0.0
parameters:
puppet_manifest: puppet/manifests/environment.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 600
##############################################################################
# ScaleIO prerequisites tasks
##############################################################################
- id: scaleio-environment
# role: [scaleio-storage-tier1, scaleio-storage-tier2, primary-controller, controller, compute, cinder]
role: [primary-controller, controller, compute, cinder, scaleio]
required_for: [post_deployment_end]
requires: [post_deployment_start]
type: puppet
version: 2.0.0
parameters:
puppet_manifest: puppet/manifests/environment.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 600
- id: scaleio-environment-existing-mdm-ips
# role: [scaleio-storage-tier1, scaleio-storage-tier2, primary-controller, controller, compute, cinder]
role: [primary-controller, controller, compute, cinder]
required_for: [post_deployment_end]
requires: [scaleio-environment]
type: puppet
version: 2.0.0
parameters:
puppet_manifest: puppet/manifests/environment_existing_mdm_ips.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 600
##############################################################################
# ScaleIO cluster tasks
##############################################################################
- id: scaleio-mdm-packages
role: [primary-controller, controller]
required_for: [post_deployment_end]
requires: [scaleio-environment-existing-mdm-ips]
type: puppet
version: 2.0.0
parameters:
puppet_manifest: puppet/manifests/mdm_package.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 600
- id: scaleio-discover-cluster
role: [primary-controller, controller]
required_for: [post_deployment_end]
requires: [scaleio-mdm-packages]
type: puppet
version: 2.0.0
parameters:
puppet_manifest: puppet/manifests/discovery_cluster.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 600
- id: scaleio-resize-cluster
role: [primary-controller, controller]
required_for: [post_deployment_end]
requires: [scaleio-discover-cluster]
type: puppet
version: 2.0.0
parameters:
puppet_manifest: puppet/manifests/resize_cluster.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 600
- id: scaleio-mdm-server
role: [primary-controller, controller]
required_for: [post_deployment_end]
requires: [scaleio-resize-cluster]
type: puppet
version: 2.0.0
parameters:
puppet_manifest: puppet/manifests/mdm_server.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 1800
- id: scaleio-gateway-server
role: [primary-controller, controller]
required_for: [post_deployment_end]
requires: [scaleio-mdm-server]
type: puppet
version: 2.0.0
parameters:
puppet_manifest: puppet/manifests/gateway_server.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 1800
- id: scaleio-sds-server
# role: [scaleio-storage-tier1, scaleio-storage-tier2]
role: [primary-controller, controller, compute, scaleio]
required_for: [post_deployment_end]
requires: [post_deployment_start, scaleio-environment, scaleio-environment-existing-mdm-ips, scaleio-gateway-server]
type: puppet
version: 2.0.0
parameters:
puppet_manifest: puppet/manifests/sds_server.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 1800
- id: scaleio-sdc-server
required_for: [post_deployment_end]
requires: [post_deployment_start, scaleio-environment-existing-mdm-ips, scaleio-sds-server]
role: [primary-controller, controller, compute, cinder]
type: puppet
version: 2.0.0
parameters:
puppet_manifest: puppet/manifests/sdc_server.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 1800
- id: scaleio-sdc
required_for: [post_deployment_end]
requires: [scaleio-sdc-server]
role: [primary-controller, controller, compute, cinder]
type: puppet
version: 2.0.0
parameters:
puppet_manifest: puppet/manifests/sdc.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 1800
- id: scaleio-configure-cluster
role: [primary-controller, controller]
required_for: [post_deployment_end]
requires: [scaleio-sds-server, scaleio-sdc]
cross-depends:
- name: scaleio-sds-server
- name: scaleio-sdc
type: puppet
version: 2.0.0
parameters:
puppet_manifest: puppet/manifests/cluster.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 1800
##############################################################################
# ScaleIO OS tasks
##############################################################################
- id: scaleio-compute
required_for: [post_deployment_end]
requires: [scaleio-configure-cluster]
role: [compute]
cross-depends:
- name: scaleio-configure-cluster
type: puppet
version: 2.0.0
parameters:
puppet_manifest: puppet/manifests/nova.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 600
- id: scaleio-cinder
required_for: [post_deployment_end]
requires: [scaleio-configure-cluster]
cross-depends:
- name: scaleio-configure-cluster
role: [cinder]
type: puppet
version: 2.0.0
parameters:
puppet_manifest: puppet/manifests/cinder.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 600
- id: scaleio-glance
required_for: [upload_cirros, post_deployment_end]
requires: [scaleio-cinder]
cross-depends:
- name: scaleio-cinder
cross-depended-by:
- name: upload_cirros
role: [primary-controller, controller]
type: puppet
version: 2.0.0
parameters:
puppet_manifest: puppet/manifests/glance.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 600

View File

@ -1,177 +0,0 @@
# Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = _build
# User-friendly check for sphinx-build
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
endif
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext
help:
@echo "Please use \`make <target>' where <target> is one of"
@echo " html to make standalone HTML files"
@echo " dirhtml to make HTML files named index.html in directories"
@echo " singlehtml to make a single large HTML file"
@echo " pickle to make pickle files"
@echo " json to make JSON files"
@echo " htmlhelp to make HTML files and a HTML help project"
@echo " qthelp to make HTML files and a qthelp project"
@echo " devhelp to make HTML files and a Devhelp project"
@echo " epub to make an epub"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
@echo " latexpdf to make LaTeX files and run them through pdflatex"
@echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
@echo " text to make text files"
@echo " man to make manual pages"
@echo " texinfo to make Texinfo files"
@echo " info to make Texinfo files and run them through makeinfo"
@echo " gettext to make PO message catalogs"
@echo " changes to make an overview of all changed/added/deprecated items"
@echo " xml to make Docutils-native XML files"
@echo " pseudoxml to make pseudoxml-XML files for display purposes"
@echo " linkcheck to check all external links for integrity"
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
clean:
rm -rf $(BUILDDIR)/*
html:
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
dirhtml:
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
singlehtml:
$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
@echo
@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
pickle:
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
@echo
@echo "Build finished; now you can process the pickle files."
json:
$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
@echo
@echo "Build finished; now you can process the JSON files."
htmlhelp:
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
@echo
@echo "Build finished; now you can run HTML Help Workshop with the" \
".hhp project file in $(BUILDDIR)/htmlhelp."
qthelp:
$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
@echo
@echo "Build finished; now you can run "qcollectiongenerator" with the" \
".qhcp project file in $(BUILDDIR)/qthelp, like this:"
@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/FuelNSXvplugin.qhcp"
@echo "To view the help file:"
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/FuelNSXvplugin.qhc"
devhelp:
$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
@echo
@echo "Build finished."
@echo "To view the help file:"
@echo "# mkdir -p $$HOME/.local/share/devhelp/FuelNSXvplugin"
@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/FuelNSXvplugin"
@echo "# devhelp"
epub:
$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
@echo
@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
latex:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo
@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
@echo "Run \`make' in that directory to run these through (pdf)latex" \
"(use \`make latexpdf' here to do that automatically)."
latexpdf:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through pdflatex..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
latexpdfja:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through platex and dvipdfmx..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
text:
$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
@echo
@echo "Build finished. The text files are in $(BUILDDIR)/text."
man:
$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
@echo
@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
texinfo:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo
@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
@echo "Run \`make' in that directory to run these through makeinfo" \
"(use \`make info' here to do that automatically)."
info:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo "Running Texinfo files through makeinfo..."
make -C $(BUILDDIR)/texinfo info
@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
gettext:
$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
@echo
@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
changes:
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
@echo
@echo "The overview file is in $(BUILDDIR)/changes."
linkcheck:
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
@echo
@echo "Link check complete; look for any errors in the above output " \
"or in $(BUILDDIR)/linkcheck/output.txt."
doctest:
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
@echo "Testing of doctests in the sources finished, look at the " \
"results in $(BUILDDIR)/doctest/output.txt."
xml:
$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
@echo
@echo "Build finished. The XML files are in $(BUILDDIR)/xml."
pseudoxml:
$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
@echo
@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."

View File

@ -1,6 +0,0 @@
Appendix
========
#. `ScaleIO OpenStack information <https://community.emc.com/docs/DOC-44337>`_
#. `Reference Architecture: EMC Storage Solutions With Mirantis OpenStack <https://community.emc.com/docs/DOC-44819>`_
#. `OpenStack @EMC Cheat Sheet <https://community.emc.com/docs/DOC-46246>`_

View File

@ -1,7 +0,0 @@
#!/bin/bash
export LC_ALL=en_US.UTF-8
export LANG=en_US.UTF-8
sphinx-build -b pdf . builddir
#sphinx-build -b html . builddir

View File

@ -1,39 +0,0 @@
# Always use the default theme for Readthedocs
RTD_NEW_THEME = True
extensions = ['rst2pdf.pdfbuilder']
templates_path = ['_templates']
source_suffix = '.rst'
master_doc = 'index'
project = u'The ScaleIO plugin for Fuel'
copyright = u'2016, EMC Corporation'
version = '2.1-2.1.3-1'
release = '2.1-2.1.3-1'
exclude_patterns = []
pygments_style = 'sphinx'
html_theme = 'classic'
html_static_path = ['_static']
latex_documents = [
('index', 'ScaleIO-Plugin_Guide.tex', u'The ScaleIO plugin for Fuel Documentation',
u'EMC Corporation', 'manual'),
]
latex_elements = {
'fncychap': '\\usepackage[Conny]{fncychap}',
'classoptions': ',openany,oneside',
'babel' : '\\usepackage[english]{babel}',
}
pdf_documents = [
('index', 'ScaleIO-Plugin_Guide', u'ScaleIO plugin for Fuel Documentation',
u'EMC Corporation', 'manual'),
]

View File

@ -1,219 +0,0 @@
User Guide
==========
Once the Fuel ScaleIOv2.0 plugin has been installed (following the
:ref:`Installation Guide <installation>`), you can create an *OpenStack* environment that
uses ScaleIO as the block storage backend.
Prepare infrastructure
----------------------
At least 5 nodes are required to successfully deploy Mirantis OpenStack with ScaleIO (for a cluster in 3-controller mode).
#. Fuel master node (w/ 50GB Disk, 2 Network interfaces [Mgmt, PXE] )
#. OpenStack Controller #1 node
#. OpenStack Controller #2 node
#. OpenStack Controller #3 node
#. OpenStack Compute node
Each node shall have at least 2 CPUs, 4GB RAM, a 200GB disk, and 3 network interfaces. Each node that is supposed to host ScaleIO SDS should have at least one empty disk of at least 100GB.
The 3 interfaces will be used for the following purposes:
#. Admin (PXE) network: Mirantis OpenStack uses PXE booting to install the operating system, and then loads the OpenStack packages for you.
#. Public, Management and Storage networks: All of the OpenStack management traffic will flow over this network (“Management” and “Storage” will be separated by VLANs), and to re-use the network it will also host the public network used by OpenStack service nodes and the floating IP address range.
#. Private network: This network will be added to Virtual Machines when they boot. It will therefore be the route where traffic flows in and out of the VM.
In case of a new ScaleIO cluster deployment, Controllers 1, 2, and 3 will host the ScaleIO MDM and ScaleIO Gateway services.
Cinder role should be deployed if ScaleIO volume functionality is required.
All Compute nodes are used as ScaleIO SDS. It is possible to enable ScaleIO SDS on Controller nodes. Keep in mind that 3 SDSes are the minimal required configuration, so if you have fewer than 3 compute nodes you have to deploy ScaleIO SDS on controllers as well. All nodes that will be used as ScaleIO SDS should have an equal disk configuration. All disks that will be used as SDS devices should be unallocated in Fuel.
In case of an existing ScaleIO cluster deployment, the plugin deploys the ScaleIO SDC component onto Compute and Cinder nodes and configures OpenStack Cinder and Nova to use ScaleIO as the block storage backend.
The ScaleIO cluster will use the storage network for all volume and cluster maintenance operations.
.. _scaleiogui:
Install ScaleIO GUI
-------------------
It is recommended to install the ScaleIO GUI to easily access and manage the ScaleIO cluster.
#. Make sure the machine in which you will install the ScaleIO GUI has access to the Controller nodes.
#. Download ScaleIO for your operating system from the following link: http://www.emc.com/products-solutions/trial-software-download/scaleio.htm
#. Unzip the file and install the ScaleIO GUI component.
#. Once installed, run the application and you will be prompted with the following login window. We will use it once the deployment is completed.
.. image:: images/scaleio-login.png
:width: 50%
Select Environment
------------------
#. Create a new environment with the Fuel UI wizard.
From the OpenStack Release dropdown list select "Liberty on Ubuntu 14.04" and continue until you finish with the wizard.
.. image:: images/wizard.png
:width: 80%
#. Add VMs to the new environment according to `Fuel User Guide <https://docs.mirantis.com/openstack/fuel/fuel-8.0/operations.html#adding-redeploying-and-replacing-nodes>`_ and configure them properly.
Plugin configuration
--------------------
\1. Go to the Settings tab and then go to the Storage section. You need to fill in all fields with your preferred ScaleIO configuration. If you do not know the purpose of a field you can leave it with its default value.
.. image:: images/settings1.png
:width: 80%
\2. In order to deploy a new ScaleIO cluster together with OpenStack
\a. Disable the checkbox 'Use existing ScaleIO'
\b. Provide admin passwords for the ScaleIO MDM and Gateway, and a list of storage devices to be used as ScaleIO SDS storage devices. Optionally you can provide a protection domain name and storage pool names.
.. image:: images/settings2.png
:width: 80%
.. image:: images/settings3.png
:width: 80%
\c. In order to use separate ScaleIO storage nodes, disable the checkbox 'Hyper-converged deployment'.
In such a deployment the ScaleIO SDS component is deployed only on the nodes with the ScaleIO role.
Although there is a role for ScaleIO storage, the user still has to specify devices in the 'Storage devices' settings.
The role frees the user from having to leave ScaleIO disks unallocated. Devices on nodes with the ScaleIO
role can be used as 'Storage devices' (with mapping to different storage pools as described below) as well as
'XtremCache devices' (it is expected that the user knows which devices are actually SSDs;
the plugin does not perform such a check).
.. image:: images/settings2_nh.png
:width: 80%
.. image:: images/role_scaleio.png
:width: 80%
.. image:: images/devices_scaleio.png
:width: 80%
\d. In case hyper-converged deployment is enabled, leave the disks intended for ScaleIO SDS devices unallocated.
These disks will be cleaned up and added to the SDSes as storage devices.
Note that, due to a limitation of the current Fuel framework, some space must be kept for the Cinder and Nova roles.
.. image:: images/devices_compute.png
:width: 80%
.. image:: images/devices_controller.png
:width: 80%
\e. In case you want to specify different storage pools for different devices, provide a list of pools corresponding to the device paths, e.g. 'pool1,pool2' and '/dev/sdb,/dev/sdc' will assign /dev/sdb to pool1 and /dev/sdc to pool2.
\3. In order to use an existing ScaleIO cluster
\a. Enable the checkbox 'Use existing ScaleIO'
\b. Provide the IP address and password for the ScaleIO Gateway, the protection domain name, and the storage pool names that will be allowed to be used in OpenStack. The first storage pool name will become the default storage pool for volumes.
.. image:: images/settings_existing_cluster.png
:width: 80%
\4. Take the time to review and configure other environment settings such as the DNS and NTP servers, URLs for the repositories, etc.
Finish environment configuration
--------------------------------
#. Go to the Network tab and configure the network according to your environment.
#. Run `network verification check <https://docs.mirantis.com/openstack/fuel/fuel-8.0/pdf/Fuel-8.0-UserGuide.pdf#Verify network configuration>`_
.. image:: images/network.png
:width: 90%
#. Press `Deploy button <https://docs.mirantis.com/openstack/fuel/fuel-8.0/pdf/Fuel-8.0-UserGuide.pdf#Deploy changes>`_ once you have finished reviewing the environment configuration.
.. image:: images/deploy.png
:width: 60%
#. After deployment is done, you will see a message indicating the result of the deployment.
.. image:: images/deploy-result.png
:width: 80%
ScaleIO verification
--------------------
Once the OpenStack cluster is set up, you can make use of ScaleIO volumes. This is an example of how to attach a volume to a running VM.
#. Perform the OpenStack Health Check via the Fuel UI. Note that tests related to running instances must be left unselected, because they use a default instance flavor while ScaleIO requires a flavor with volume sizes that are multiples of 8GB. Fuel does not allow these tests to be configured from the plugin.
#. Log in to the OpenStack cluster:
#. Review the block storage services by navigating to the "Admin -> System -> System Information" section. You should see "@ScaleIO" appended to all cinder-volume hosts.
.. image:: images/block-storage-services.png
:width: 90%
#. Connect to the ScaleIO cluster in the ScaleIO GUI (see :ref:`Install ScaleIO GUI section <scaleiogui>`). In case of a new ScaleIO cluster deployment, use the IP address of the master ScaleIO MDM (initially it is the controller node with the minimal IP address, but the master MDM can switch to another controller), username `admin`, and the password you entered in the Fuel UI.
#. Once logged in, verify that it successfully reflects the ScaleIO resources:
.. image:: images/scaleio-cp.png
:width: 80%
#. For the case of new ScaleIO cluster deployment click on the "Backend" tab and verify all SDS nodes:
.. image:: images/scaleio-sds.png
:width: 90%
#. Create a new OpenStack volume (ScaleIO backend is used by default).
#. In the ScaleIO GUI, you will see that there is one volume defined but none have been mapped yet.
.. image:: images/sio-volume-defined.png
:width: 20%
#. Once the volume is attached to a VM, the ScaleIO GUI will reflect the mapping.
.. image:: images/sio-volume-mapped.png
:width: 20%
Troubleshooting
---------------
1. Cluster deployment fails.
* Verify network settings.
* Ensure that the nodes have internet access.
* Ensure that there are at least 3 nodes with SDS in the cluster. All Compute nodes play the SDS role; Controller nodes play the SDS role if the option 'Controller as Storage' is enabled in the plugin's settings.
* For the nodes that play the SDS role, ensure that the disks listed in the plugin's settings 'Storage devices' and 'XtremCache devices' are unallocated and their sizes are greater than 100GB.
* Ensure that controller nodes have at least 3GB RAM.
2. Deploying changes fails with timeout errors after removing a controller node (only if there were 3 controllers in the cluster).
* Connect via ssh to one of the controller nodes
* Get MDM IPs:
::
cat /etc/environment | grep SCALEIO_mdm_ips
* Request ScaleIO cluster state
::
scli --mdm_ip <ip_of_alive_mdm> --query_cluster
* If the cluster is in Degraded mode and one of the Slave MDMs is disconnected, then switch the cluster into the '1_node' mode:
::
scli --switch_cluster_mode --cluster_mode 1_node
--remove_slave_mdm_ip <ips_of_slave_mdms>
--remove_tb_ip <ips_of_tie_breakers>
Where ips_of_slave_mdms and ips_of_tie_breakers are comma separated lists
of slave MDMs and Tie Breakers respectively (IPs should be taken from
query_cluster command above).
3. ScaleIO cluster does not see a new SDS after deploying a new Compute node.
Run the update_hosts task on the controller nodes manually from the Fuel master node, e.g. 'fuel --env 5 node --node-id 1,2,3 --task update_hosts'. This is needed because Fuel does not trigger the plugin's tasks after a Compute node deployment.
4. ScaleIO cluster has SDS/SDC components in a disconnected state after node deletion.
See the previous point.
5. Other issues.
Ensure that the ScaleIO cluster is operational and that a storage pool and protection domain are available. For more details see the ScaleIO user guide.

20 binary image files (documentation screenshots) not shown.

View File

@ -1,15 +0,0 @@
===================================================
Guide to the ScaleIOv2.0 Plugin for Fuel 8.0, 9.0
===================================================
Plugin Guide
============
.. toctree::
:maxdepth: 2
release_notes
introduction
installation
guide
appendix

View File

@ -1,34 +0,0 @@
.. _installation:
Installation Guide
==================
Install from `Fuel Plugins Catalog`_
------------------------------------
To install the ScaleIOv2.0 Fuel plugin:
#. Download it from the `Fuel Plugins Catalog`_
#. Copy the *rpm* file to the Fuel Master node:
::
[root@home ~]# scp scaleio-2.1-2.1.3-1.noarch.rpm
root@fuel-master:/tmp
#. Log into Fuel Master node and install the plugin using the
`Fuel CLI <https://docs.mirantis.com/openstack/fuel/fuel-8.0/pdf/Fuel-8.0-UserGuide.pdf#Fuel Plugins CLI>`_:
::
[root@fuel-master ~]# fuel plugins --install
/tmp/scaleio-2.1-2.1.3-1.noarch.rpm
#. Verify that the plugin is installed correctly:
::
[root@fuel-master ~]# fuel plugins
id | name | version | package_version
---|-----------------------|---------|----------------
1 | scaleio | 2.1.3 | 3.0.0
.. _Fuel Plugins Catalog: https://www.mirantis.com/products/openstack-drivers-and-plugins/fuel-plugins/

View File

@ -1,63 +0,0 @@
Introduction
============
Purpose
-------
This document will guide you through the steps of installing, configuring and using the **ScaleIOv2.0 Plugin** for Fuel.
The ScaleIO Plugin is used to:
* deploy and configure a ScaleIO cluster as a volume backend for an OpenStack environment
* configure an OpenStack environment to use an existing ScaleIO cluster as a volume backend
ScaleIO Overview
----------------
EMC ScaleIO is a software-only server-based storage area network (SAN) that converges storage and compute resources to form a single-layer, enterprise-grade storage product. ScaleIO storage is elastic and delivers linearly scalable performance. Its scale-out server SAN architecture can grow from a few to thousands of servers.
ScaleIO uses servers' direct-attached storage (DAS) and aggregates all disks into global, shared, block storage. ScaleIO features a single-layer compute and storage architecture without requiring additional hardware or cooling/power/space.
Breaking traditional barriers of storage scalability, ScaleIO scales out to hundreds and thousands of nodes and multiple petabytes of storage. Its parallel architecture and distributed volume layout deliver a massively parallel system that drives I/O operations through a distributed system. As a result, performance can scale linearly with the number of application servers and disks, leveraging fast parallel rebuild and rebalance without interruption to I/O. ScaleIO has been carefully designed and implemented to consume minimal computing resources.
With ScaleIO, any administrator can add, move, or remove servers and capacity on demand during I/O operations. The software responds automatically to any infrastructure change and rebalances data across the grid nondisruptively. ScaleIO can add capacity on demand, without capacity planning or data migration, growing in small or large increments on a pay-as-you-grow basis, running on any server and with any storage media.
ScaleIO natively supports all leading Linux distributions and hypervisors. It works agnostically with any solid-state drive (SSD) or hard disk drive (HDD) regardless of type, model, or speed.
ScaleIO Components
------------------
**ScaleIO Data Client (SDC)** is a lightweight block device driver that exposes ScaleIO shared block volumes to applications. The SDC runs on the same server as the application, which lets the application issue an I/O request that the SDC fulfills regardless of where the particular blocks physically reside. The SDC communicates with other nodes over a TCP/IP-based protocol, so it is fully routable.
**ScaleIO Data Service (SDS)** owns local storage that contributes to the ScaleIO storage pools. An instance of the SDS runs on every node that contributes some, or all, of its storage space (HDDs, SSDs) to the aggregated pool of storage within the ScaleIO virtual SAN. The role of the SDS is to actually perform the back-end I/O operations as requested by an SDC.
**ScaleIO Metadata Manager (MDM)** manages the metadata: SDCs, SDSes, device mappings, volumes, snapshots, system capacity (including device allocation and release of capacity), errors and failures, and system rebuild tasks including rebalancing. The MDM uses an Active/Passive configuration with a tiebreaker component, where the primary node is Active and the secondary is Passive. The data repository is stored on both the Active and Passive nodes. Currently, an MDM can manage up to 1024 servers. When several MDMs are present, an SDC may be managed by several MDMs, whereas an SDS can belong to only one MDM. If the MDM does not detect the heartbeat from an SDS, it initiates a forward-rebuild.
**ScaleIO Gateway** is the HTTP/HTTPS REST endpoint. It is the primary endpoint used by OpenStack to actuate commands against ScaleIO. Due to its stateless nature, multiple instances can be deployed and the load easily balanced.
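Since the Gateway is a plain REST endpoint, its availability can be verified with any HTTP client. A sketch of the login/query flow (endpoints per the ScaleIO 2.0 REST API; the token returned by /api/login serves as the password for subsequent requests):
::
TOKEN=$(curl -sk --user admin:<password> https://<gateway-ip>:4443/api/login | tr -d '"')
curl -sk --user admin:$TOKEN https://<gateway-ip>:4443/api/types/System/instances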
**XtremCache (RFCache)** is the component that enables caching on PCI flash cards and/or SSDs, accelerating reads from the SDS's HDD devices. It is deployed together with the SDS component.
ScaleIO Cinder and Nova Drivers
-------------------------------
ScaleIO includes a Cinder driver, which interfaces between ScaleIO and OpenStack and presents volumes to OpenStack as block devices available for block storage. It also includes an OpenStack Nova driver for handling compute and instance volume-related operations. The ScaleIO driver executes the volume operations by communicating with the back-end ScaleIO MDM through the ScaleIO Gateway.
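For reference, the Cinder side of this configuration reduces to a backend section in cinder.conf similar to the hand-written sketch below. Option names follow the Mitaka-era ScaleIO driver and the values are illustrative; the plugin generates the real settings from its configuration:
::
[DEFAULT]
enabled_backends = scaleio

[scaleio]
volume_driver = cinder.volume.drivers.emc.scaleio.ScaleIODriver
volume_backend_name = scaleio
san_ip = <gateway-ip>
sio_rest_server_port = 4443
san_login = <frontend-user>
san_password = <password>
sio_protection_domain_name = default
sio_storage_pool_name = default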
Requirements
------------
========================= ===============
Requirement Version/Comment
========================= ===============
Mirantis OpenStack 8.0
Mirantis OpenStack 9.0
========================= ===============
Limitations
-----------
1. The plugin is compatible with Mirantis Fuel 8.0 and 9.0.
2. The plugin supports only Ubuntu environments.
3. Multiple storage backends are not supported.
4. It is not possible to use different backends for persistent and ephemeral volumes.
5. In a hyper-converged deployment, disks for SDS-es should be unallocated before deployment via the Fuel UI or CLI.
6. MDMs and Gateways are deployed together, and only onto controller nodes.
7. Adding or removing node(s) to/from the OpenStack cluster will not re-configure the ScaleIO cluster.

View File

@ -1,42 +0,0 @@
Release Notes v2.1.3
====================
New Features
----------------
1. Use an FTP server to download ScaleIO packages. Added an appropriate option to the plugin settings to enable the use of a custom FTP server.
Release Notes v2.1.2
====================
New Features
----------------
1. Non-hyper-converged deployment support: a separate ScaleIO role for ScaleIO Storage nodes.
To enable this feature there is an appropriate check-box in the plugin's settings.
Note that although there is a role for ScaleIO Storage, the user still has to list devices
in the 'Storage devices' settings. The role only frees the user from making
ScaleIO disks unassigned. Devices on nodes with the ScaleIO role can be used as 'Storage devices' (with
mapping to different storage pools) as well as 'XtremCache devices' (it is expected that the user
knows which devices are actually SSDs; the plugin does not perform such a check).
Release Notes v2.1.1
====================
New Features
----------------
1. Mirantis Fuel 9.0 support.
2. RAM Cache (RMCache) support.
3. Use of a dedicated FrontEnd ScaleIO user in Cinder and Nova to access the ScaleIO cluster instead of the 'admin' user.
4. Ability to keep Glance images on ScaleIO.
Fixed Bugs
----------------
1. Fixed the protection domain auto-creation algorithm for the case when the number of SDS-es exceeds the threshold set in the plugin settings.

View File

@ -1,331 +0,0 @@
attributes:
metadata:
# Settings group can be one of "general", "security", "compute", "network",
# "storage", "logging", "openstack_services" and "other".
group: 'storage'
existing_cluster:
type: "checkbox"
value: false
label: "Use existing ScaleIO."
description: "Do not deploy ScaleIO cluster, just use existing cluster."
weight: 10
gateway_ip:
type: "text"
value: ""
label: "Gateway IP address"
description: "Cinder and Nova use it for requests to ScaleIO."
weight: 20
regex:
source: '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$'
error: "Gateway address is requried parameter"
restrictions:
- condition: "settings:scaleio.existing_cluster.value == false"
action: hide
gateway_port:
type: "text"
value: "4443"
label: "Gateway port"
description: "Cinder and Nova use it for requests to ScaleIO."
weight: 25
regex:
source: '^[0-9]+$'
error: "Gateway port is required parameter"
restrictions:
- condition: "settings:scaleio.existing_cluster.value == false"
action: hide
gateway_user:
type: "text"
value: "admin"
label: "Gateway user"
description: "Type a user name for the gateway"
weight: 30
regex:
source: '^\w+$'
restrictions:
- condition: "settings:scaleio.existing_cluster.value == false"
action: hide
password:
type: "password"
weight: 40
value: ""
label: "Admin password"
description: "Type ScaleIO Admin password"
regex:
source: '^(?=.*[a-z])(?=.*[A-Z])(?=.*\d).{8,15}$'
error: "You must provide a password with between 8 and 15 characters, one uppercase, and one number"
protection_domain:
type: "text"
value: "default"
label: "Protection domain"
description:
Name of the first protection domain. In case of auto-scaling, subsequent domains get names like default_2, default_3.
Auto-scaling works if the 'Use existing ScaleIO' option is disabled. The next domain is created
when the number of SDS-es reaches the limit set in 'Maximum number of nodes in one protection domain'.
weight: 70
regex:
source: '^(\w+){1}((,){1}(?=\w+))*'
error: "Can contain characters, numbers and underlines"
protection_domain_nodes:
type: "text"
value: "100"
label: "Maximum number of nodes in one protection domain"
description:
If the number of nodes exceeds this threshold, a new protection domain will be created.
Note that in this case at least 3 new nodes must be added to make the new domain operational.
In a hyper-converged deployment these should be compute nodes, otherwise ScaleIO nodes.
weight: 75
regex:
source: '^[1-9]{1}[0-9]*$'
error: "Should be number that equal or larger than 1"
restrictions:
- condition: "settings:scaleio.existing_cluster.value == true"
action: hide
enable_sds_role:
type: "checkbox"
value: false
label: "Experimental role-based deployment"
description: "Hidden option to disable experimental feature"
weight: 5
restrictions:
- condition: "true"
action: hide
storage_pools:
type: "text"
value: "default"
label: "Storage pools"
description:
Comma-separated list of pools for splitting devices between them.
It can be just one element if all devices belong to one pool.
weight: 80
regex:
source: '^(\w+){1}((,){1}(?=\w+))*'
error: "Can contain characters, numbers and underlines"
restrictions:
- condition: "settings:scaleio.existing_cluster.value == true or settings:scaleio.enable_sds_role.value == true"
action: hide
existing_storage_pools:
type: "text"
value: "default"
label: "Storage pools"
description: "Storage pools which are allowed to be used in new Cloud."
weight: 90
regex:
source: '^(\w+){1}((,){1}(?=\w+))*'
error: "Can contain characters, numbers and underlines"
restrictions:
- condition: "settings:scaleio.existing_cluster.value == false"
action: hide
device_paths:
type: "text"
value: ""
label: "Storage devices"
description: "Comma separated list of devices, e.g. /dev/sdb,/dev/sdc."
weight: 100
regex:
source: '^(/[a-zA-Z0-9:-_]+)+(,(/[a-zA-Z0-9:-_]+)+)*$'
error: 'List of paths is incorrect. It is a comma-separated list, e.g. /dev/sdb,/dev/sdc'
restrictions:
- condition: "settings:scaleio.existing_cluster.value == true or settings:scaleio.enable_sds_role.value == true"
action: hide
hyper_converged_deployment:
type: "checkbox"
value: true
label: "Hyper-converged deployment"
description:
Deploy the SDS component on all compute nodes automatically and optionally on controller nodes.
If the option is disabled, SDS will be deployed only on the nodes with the ScaleIO role.
weight: 103
restrictions:
- condition: "settings:scaleio.existing_cluster.value == true"
action: hide
sds_on_controller:
type: "checkbox"
value: true
label: "Controller as Storage"
description: "Setup SDS-es on controller nodes."
weight: 105
restrictions:
- condition: "settings:scaleio.existing_cluster.value == true or settings:scaleio.hyper_converged_deployment.value == false or settings:scaleio.enable_sds_role.value == true"
action: hide
provisioning_type:
type: "radio"
value: "thin"
label: "Provisioning type"
description: "Thin/Thick provisioning for ephemeral and persistent volumes."
weight: 110
values:
- data: 'thin'
label: 'Thin provisioning'
description: "Thin provisioning for ephemeral and persistent volumes."
- data: 'thick'
label: 'Thick provisioning'
description: "Thick provisioning for ephemeral and persistent volumes."
restrictions:
- condition: "settings:scaleio.existing_cluster.value == true"
action: hide
checksum_mode:
type: "checkbox"
value: false
label: "Checksum mode"
description:
Checksum protection. ScaleIO protects data in-flight by calculating and validating the checksum value for the payload at both ends.
Note that the checksum feature may have a minor effect on performance.
ScaleIO utilizes hardware capabilities for this feature, where possible.
weight: 120
restrictions:
- condition: "settings:scaleio.existing_cluster.value == true"
action: hide
spare_policy:
type: "text"
value: '10'
label: "Spare policy"
description: "% out of total space"
weight: 130
regex:
source: '^[0-9]{1,2}$'
error: "Value could be between 0 and 99"
restrictions:
- condition: "settings:scaleio.existing_cluster.value == true"
action: hide
zero_padding:
type: "checkbox"
value: true
label: "Enable Zero Padding for Storage Pools"
description: "New volumes will be zeroed if the option enabled."
weight: 140
restrictions:
- condition: "settings:scaleio.existing_cluster.value == true"
action: hide
scanner_mode:
type: "checkbox"
value: false
label: "Background device scanner"
description: "This options enables the background device scanner on the devices in device only mode."
weight: 150
restrictions:
- condition: "settings:scaleio.existing_cluster.value == true"
action: hide
rmcache_usage:
type: "checkbox"
value: false
label: "Use RAM cache (RMCache)"
description: "SDS Server RAM is reserved for caching storage devices in a Storage Pool."
weight: 155
restrictions:
- condition: "settings:scaleio.existing_cluster.value == true"
action: hide
rmcache_passthrough_pools:
type: "text"
value: ""
label: "Passthrough RMCache storage pools"
description: "List of Storage pools which should be cached in RAM in passthrough mode (writes to storage only)."
weight: 157
regex:
source: '^(\w+)*((,){1}(?=\w+))*'
error: 'List of storage pools is incorrect. It can be either empty or a comma-separated list, e.g. pool1,pool2'
restrictions:
- condition: "settings:scaleio.existing_cluster.value == true or settings:scaleio.rmcache_usage.value == false"
action: hide
rmcache_cached_pools:
type: "text"
value: ""
label: "Cached RMCache storage pools"
description: "List of Storage pools which should be cached in RAM in cached mode (writes both to cache and to storage)."
weight: 158
regex:
source: '^(\w+)*((,){1}(?=\w+))*'
error: 'List of storage pools is incorrect. It can be either empty or a comma-separated list, e.g. pool1,pool2'
restrictions:
- condition: "settings:scaleio.existing_cluster.value == true or settings:scaleio.rmcache_usage.value == false"
action: hide
rfcache_devices:
type: "text"
value: ""
label: "XtremCache devices"
description: "List of SDS devices for SSD caching. Cache is disabled if list empty."
weight: 160
regex:
source: '^(/[a-zA-Z0-9:-_]+)*(,(/[a-zA-Z0-9:-_]+)+)*$'
error: 'List of paths is incorrect. It can be either empty or a comma-separated list, e.g. /dev/sdb,/dev/sdc'
restrictions:
- condition: "settings:scaleio.existing_cluster.value == true or settings:scaleio.enable_sds_role.value == true"
action: hide
cached_storage_pools:
type: "text"
value: ""
label: "XtremCache storage pools"
description: "List of storage pools which should be cached with XtremCache."
weight: 170
regex:
source: '^(\w+)*((,){1}(?=\w+))*'
error: 'List of storage pools is incorrect. It can be either empty or a comma-separated list, e.g. pool1,pool2'
restrictions:
- condition: "settings:scaleio.existing_cluster.value == true or settings:scaleio.enable_sds_role.value == true"
action: hide
capacity_high_alert_threshold:
type: "text"
value: '80'
label: "Capacity high priority alert"
description: "Threshold of the non-spare capacity of the Storage Pool that will trigger a high-priority alert, in percentage format"
weight: 180
regex:
source: '^[0-9]{1,2}$'
error: "Value could be between 0 and 99"
restrictions:
- condition: "settings:scaleio.existing_cluster.value == true"
action: hide
capacity_critical_alert_threshold:
type: "text"
value: '90'
label: "Capacity critical priority alert"
description: "Threshold of the non-spare capacity of the Storage Pool that will trigger a critical-priority alert, in percentage format"
weight: 190
regex:
source: '^[0-9]{1,2}$'
error: "Value could be between 0 and 99"
restrictions:
- condition: "settings:scaleio.existing_cluster.value == true"
action: hide
use_scaleio_for_glance:
type: "checkbox"
value: false
label: "Glance images on ScaleIO"
description: "Glance uses ScaleIO as a backend for images if the option enabled. It uses cinder backend in Glance to store images on ScaleIO."
weight: 195
restrictions:
- condition: "settings:scaleio.existing_cluster.value == true or cluster:fuel_version == '6.1' or cluster:fuel_version == '7.0' or cluster:fuel_version == '8.0'"
action: hide
pkg_ftp:
type: "text"
value: "ftp://QNzgdxXix:Aw3wFAwAq3@ftp.emc.com/Ubuntu/2.0.7536.0"
label: "FTP server with ScaleIO packages"
description: "In case of no internet connection set this option to a local FTP server with appropriate folder structure."
weight: 200

View File

@ -1,69 +0,0 @@
# Plugin name
name: scaleio
# Human-readable name for your plugin
title: ScaleIOv2.0 plugin
# Plugin version
version: '2.1.3'
# Description
description: This plugin deploys and enables EMC ScaleIO ver. 2.0 as the block storage backend
# Required fuel version
fuel_version: ['7.0', '8.0', '9.0', '10.0.0']
# Specify license of your plugin
licenses: ['Apache License Version 2.0']
# Specify author or company name
authors: ['EMC']
# A link to the plugin's page
homepage: 'https://github.com/openstack/fuel-plugin-scaleio'
# Specify a group which your plugin implements, possible options:
# network, storage, storage::cinder, storage::glance, hypervisor
groups: ['storage']
is_hotpluggable: false
# The plugin is compatible with releases in the list
releases:
- os: ubuntu
version: 2014.2.2-6.1
mode: ['ha', 'multinode']
deployment_scripts_path: deployment_scripts/
repository_path: repositories/ubuntu
- os: ubuntu
version: 2015.1.0-7.0
mode: ['ha']
deployment_scripts_path: deployment_scripts/
repository_path: repositories/ubuntu
- os: ubuntu
version: liberty-8.0
mode: ['ha']
deployment_scripts_path: deployment_scripts/
repository_path: repositories/ubuntu
- os: ubuntu
version: mitaka-9.0
mode: ['ha']
deployment_scripts_path: deployment_scripts/
repository_path: repositories/ubuntu
- os: centos
version: mitaka-9.0
mode: ['ha']
deployment_scripts_path: deployment_scripts/
repository_path: repositories/centos
- os: ubuntu
version: newton-10.0
mode: ['ha']
deployment_scripts_path: deployment_scripts/
repository_path: repositories/ubuntu
- os: centos
version: newton-10.0
mode: ['ha']
deployment_scripts_path: deployment_scripts/
repository_path: repositories/centos
# Version of plugin package
package_version: "3.0.0"
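Note: if metadata like this is edited in place on the Fuel Master node (under /var/www/nailgun/plugins), the change must be re-registered with Nailgun. A sketch, assuming the standard Fuel CLI:

[root@fuel-master ~]# fuel plugins --sync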

View File

@ -1,29 +0,0 @@
scaleio:
name: "ScaleIO"
description: "ScaleIO"
update_required:
- primary-controller
restrictions:
- condition: "settings:scaleio.hyper_converged_deployment.value == true"
message:
The ScaleIO role is available only in the non-hyper-converged deployment mode.
To select non-hyper-converged mode, navigate to the ScaleIO plugin settings and un-check the appropriate checkbox.
group: "storage"
## Experimental features: disabled in production
# scaleio-storage-tier1:
# name: "ScaleIO Storage Tier1"
# description: "Devices of a node with this role will be assigned to the storage poll tier1. If both tier roles are assigned, devices will be slit according to plugin settings."
# limits:
# min: 3
# update_required:
# - controller
# - primary-controller
#
# scaleio-storage-tier2:
# name: "ScaleIO Storage Tier2"
# description: "Devices of a node with this role will be assigned to the storage poll tier2. If both tier roles are assigned, devices will be slit according to plugin settings."
# limits:
# min: 3
# update_required:
# - controller
# - primary-controller
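For reference, once the plugin is installed and an environment is created, the role defined above can be assigned from the Fuel CLI. A sketch with illustrative node and environment IDs (flag spellings such as --node vs. --node-id vary between Fuel client versions; verify against your release):

fuel node set --node 5 --role scaleio --env 1
fuel node --env 1    # verify the role assignment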

View File

@ -1,55 +0,0 @@
#!/bin/bash
# Add here any actions which are required before the plugin build,
# like package building, package downloading from mirrors and so on.
# The script should return 0 if there were no errors.
#
# Define environment variable:
# - FORCE_DOWNLOAD - to force package downloading
# - FORCE_CLONE - to force re-cloning of puppet git repositories
set -eux
RELEASE=${RELEASE_TAG:-"v1.2.0"}
BASE_PUPPET_URL="https://github.com/codedellemc"
##############################################################################
# Download required puppet modules
##############################################################################
GIT_REPOS=(puppet-scaleio puppet-scaleio-openstack)
DESTINATIONS=(scaleio scaleio_openstack)
for r in {0..1}
do
puppet_url="$BASE_PUPPET_URL/${GIT_REPOS[$r]}"
destination="./deployment_scripts/puppet/modules/${DESTINATIONS[$r]}"
if [[ ! -d "$destination" || ! -z "${FORCE_CLONE+x}" ]]
then
if [ ! -z "${FORCE_CLONE+x}" ]
then
rm -rf "$destination"
fi
git clone "$puppet_url" "$destination"
pushd "$destination"
if git tag -l | grep -q "$RELEASE" ; then
git checkout "tags/$RELEASE"
else
git checkout "$RELEASE"
fi
popd
else
if [ -z "${SKIP_PULL+x}" ]
then
pushd "$destination"
if git tag -l | grep -q "$RELEASE" ; then
git checkout "tags/$RELEASE"
else
git checkout "$RELEASE"
fi
popd
fi
fi
done
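This hook runs automatically at the start of a plugin build; a sketch of the full build flow, assuming fuel-plugin-builder (the fpb command) is installed and run from the repository root:

pip install fuel-plugin-builder   # provides the fpb command
fpb --check .                     # validate plugin metadata and layout
fpb --build .                     # invokes pre_build_hook, then packages the RPM
ls scaleio-*.noarch.rpm           # the resulting plugin package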

View File

@ -1,124 +0,0 @@
===================
ScaleIO Fuel plugin
===================
ScaleIO plugin for Fuel extends Mirantis OpenStack functionality by adding
support for deploying and configuring ScaleIO clusters as a block storage backend.
Problem description
===================
Currently, Fuel has no support for ScaleIO clusters as a block storage backend for
OpenStack environments. The ScaleIO plugin aims to provide this support.
The plugin will deploy a ScaleIO cluster and configure the OpenStack environment
to consume the block storage services from ScaleIO.
Proposed change
===============
Implement a Fuel plugin that will deploy a ScaleIO cluster and configure the
ScaleIO Cinder driver on all Controller and Compute nodes.
Alternatives
------------
None
Data model impact
-----------------
None
REST API impact
---------------
None
Upgrade impact
--------------
None
Security impact
---------------
None
Notifications impact
--------------------
None
Other end user impact
---------------------
None
Performance Impact
------------------
ScaleIO storage clusters provide high-performance block storage for
OpenStack environments. Enabling the ScaleIO plugin can therefore
significantly improve the performance of OpenStack block storage operations.
Other deployer impact
---------------------
None
Developer impact
----------------
None
Implementation
==============
This plugin contains several tasks (a schematic task definition is sketched after this list):
* The first task installs the ScaleIO cluster. All nodes will contribute to the
storage pool by the amount specified by the user in the configuration process.
Controllers 1, 2, and 3 will contain the MDM and the Gateway in HA mode.
* The second task configures the ScaleIO gateway to avoid interference with the
Horizon dashboard.
* The third task enables HA in the ScaleIO gateway instances.
* The fourth task configures all Compute nodes to use ScaleIO backend.
* The fifth task configures all Controller nodes to use ScaleIO backend.
* The sixth task creates and configures a Cinder volume type with the parameters
from the ScaleIO cluster.
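Each task is wired into deployment through the plugin's deployment_tasks.yaml. A schematic sketch of one entry (field names follow the Fuel plugin framework's task schema; the id, groups, and manifest path are illustrative):

- id: scaleio-cluster
  type: puppet
  groups: [primary-controller, controller, compute]
  requires: [deploy_start]
  required_for: [deploy_end]
  parameters:
    puppet_manifest: puppet/manifests/cluster.pp
    puppet_modules: puppet/modules
    timeout: 600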
Assignee(s)
-----------
| Adrian Moreno Martinez <adrian.moreno@emc.com>
| Magdy Salem <magdy.salem@emc.com>
| Patrick Butler Monterde <patrick.butlermonterde@emc.com>
Work Items
----------
* Implement the Fuel plugin.
* Implement the Puppet manifests.
* Testing.
* Write the documentation.
Dependencies
============
* Fuel 6.1.
Testing
=======
* Prepare a test plan.
* Test the plugin by deploying environments with all Fuel deployment modes.
Documentation Impact
====================
* Deployment Guide (how to install the storage backends, how to prepare an
environment for installation, how to install the plugin, how to deploy an
OpenStack environment with the plugin).
* User Guide (which features the plugin provides, how to use them in the
deployed OpenStack environment).
* Test Plan.
* Test Report.

View File

@ -1,84 +0,0 @@
volumes:
- id: "scaleio"
type: "partition"
min_size:
generator: "calc_gb_to_mb"
generator_args: [0]
label: "ScaleIO"
name: "ScaleIO"
mount: "none"
file_system: "none"
partition_guid: "5e9bd278-9919-4db3-9756-4b82c7e9df52"
## Experimental features: disabled in production
# - id: "scaleio-storage-tier1"
# type: "partition"
# min_size:
# generator: "calc_gb_to_mb"
# generator_args: [0]
# label: "ScaleIO Tier1"
# name: "ScaleIOTier1"
# mount: "none"
# file_system: "none"
# partition_guid: "f2e81bdc-99b3-4bf6-a68f-dc794da6cd8e"
# - id: "scaleio-storage-tier2"
# type: "partition"
# min_size:
# generator: "calc_gb_to_mb"
# generator_args: [0]
# label: "ScaleIO Tier2"
# name: "ScaleIOTier2"
# mount: "none"
# file_system: "none"
# partition_guid: "d5321bb3-1098-433e-b4f5-216712fcd06f"
# - id: "scaleio-storage-tier3"
# type: "partition"
# min_size:
# generator: "calc_gb_to_mb"
# generator_args: [0]
# label: "ScaleIO Tier3"
# name: "ScaleIOTier3"
# mount: "none"
# file_system: "none"
# partition_guid: "97987bfc-a9ba-40f3-afea-13e1a228e492"
# - id: "scaleio-storage-rfcache"
# type: "partition"
# min_size:
# generator: "calc_gb_to_mb"
# generator_args: [0]
# label: "ScaleIO RFCahe"
# name: "ScaleIORFCache"
# mount: "none"
# file_system: "none"
# partition_guid: "163ddeea-95dd-4af0-a329-140623590c47"
volumes_roles_mapping:
scaleio:
- {allocate_size: "min", id: "os"}
- {allocate_size: "full-disk", id: "scaleio"}
## Experimental features: disabled in production
# scaleio-storage-tier1:
# - {allocate_size: "min", id: "os"}
# - {allocate_size: "full-disk", id: "scaleio-storage-tier1"}
# - {allocate_size: "full-disk", id: "scaleio-storage-tier2"}
# - {allocate_size: "full-disk", id: "scaleio-storage-tier3"}
# - {allocate_size: "full-disk", id: "scaleio-storage-rfcache"}
# scaleio-storage-tier2:
# - {allocate_size: "min", id: "os"}
# - {allocate_size: "full-disk", id: "scaleio-storage-tier1"}
# - {allocate_size: "full-disk", id: "scaleio-storage-tier2"}
# - {allocate_size: "full-disk", id: "scaleio-storage-tier3"}
# - {allocate_size: "full-disk", id: "scaleio-storage-rfcache"}
# scaleio-storage-tier3:
# - {allocate_size: "min", id: "os"}
# - {allocate_size: "full-disk", id: "scaleio-storage-tier1"}
# - {allocate_size: "full-disk", id: "scaleio-storage-tier2"}
# - {allocate_size: "full-disk", id: "scaleio-storage-tier3"}
# - {allocate_size: "full-disk", id: "scaleio-storage-rfcache"}
# scaleio-storage-rfcache:
# - {allocate_size: "min", id: "os"}
# - {allocate_size: "full-disk", id: "scaleio-storage-tier1"}
# - {allocate_size: "full-disk", id: "scaleio-storage-tier2"}
# - {allocate_size: "full-disk", id: "scaleio-storage-tier3"}
# - {allocate_size: "full-disk", id: "scaleio-storage-rfcache"}