This version supports ScaleIO 2.0.
Tested with Fuel 6.1, 7, 8.

The old version is available in the scaleio_1.32 branch.

Change-Id: I40b9db5882a57010e5e1ddd766b49bcd0e8db6c8
Andrey Pavlov 2016-06-23 17:38:53 +03:00
parent c850346d4e
commit bb0d4b8000
99 changed files with 2436 additions and 2427 deletions

5
.gitignore vendored

@ -16,3 +16,8 @@ _build/
# PDF
*.pdf
.project
.doctrees
.buildpath
.pydevproject

9
.gitmodules vendored

@ -1,9 +0,0 @@
[submodule "deployment_scripts/puppet/modules/wait_for"]
path = deployment_scripts/puppet/modules/wait_for
url = https://github.com/basti1302/puppet-wait-for
[submodule "deployment_scripts/puppet/modules/scaleio"]
path = deployment_scripts/puppet/modules/scaleio
url = https://github.com/adrianmo/puppet-scaleio
[submodule "deployment_scripts/puppet/modules/java"]
path = deployment_scripts/puppet/modules/java
url = https://github.com/puppetlabs/puppetlabs-java

175
README.md

@ -2,63 +2,62 @@
## Overview
The `ScaleIO` plugin deploys an EMC ScaleIO cluster on the available nodes and replaces the default OpenStack volume backend by ScaleIO.
If you want to leverage an existing ScaleIO cluster and deploy an OpenStack cluster to use that ScaleIO cluster, please take a look at the [ScaleIO Cinder](https://github.com/openstack/fuel-plugin-scaleio-cinder) plugin.
Disclaimer: Current version is RC1.
The `ScaleIO` plugin allows you to:
* Deploy an EMC ScaleIO v.2.0 cluster together with OpenStack and configure OpenStack to use ScaleIO
as the storage for persistent and ephemeral volumes
* Configure OpenStack to use an existing ScaleIO cluster as a volume backend
* Support the following ScaleIO cluster modes: 1_node, 3_node and 5_node;
the mode is chosen automatically depending on the number of controller nodes
## Requirements
| Requirement | Version/Comment |
|----------------------------------|-----------------|
| Mirantis OpenStack compatibility | 6.1 |
| Mirantis OpenStack | 6.1 |
| Mirantis OpenStack | 7.0 |
| Mirantis OpenStack | 8.0 |
## Recommendations
None.
1. Use a configuration with 3 or 5 controllers.
Although 1-controller mode is supported, it is suitable for testing purposes only.
2. Assign the Cinder role to all controllers and allocate minimal disk space for this role.
Some space is required because of a Fuel framework limitation (this space will not be used).
Keep the rest of the space for images.
3. Use nodes with a similar hardware configuration within one group of roles.
4. Deploy SDS components only on compute nodes.
Deploying SDSes on controllers is supported, but it is more suitable for testing than for production environments.
5. On compute nodes keep minimal space for virtual storage on the first disk and use the remaining disks for ScaleIO.
Some space is needed because of Fuel framework limitations.
The other disks should be left unallocated and can be used for ScaleIO.
6. In case of extending the cluster with new compute nodes, do not forget to run the update_hosts task on the controller nodes via the Fuel CLI.
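A minimal sketch, assuming the controller node IDs are 1, 2 and 3 (adjust to your environment):
```
$ fuel node --node-id 1,2,3 --tasks update_hosts
```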
## Limitations
Due to some software limitations, this plugin is currently only compatible with Mirantis 6.1 and CentOS. Though, future releases of the plugin will support newer versions of Mirantis OpenStack and Ubuntu as base OS.
Below is the support matrix of this plugin and the [ScaleIO-Cinder](https://github.com/openstack/fuel-plugin-scaleio-cinder) plugin.
![ScaleIOSupport](doc/source/images/SIO_Support.png)
1. The plugin supports Ubuntu environments only.
2. Only a hyper-converged environment is supported - there are no separate ScaleIO storage nodes.
3. Multi storage backend is not supported.
4. It is not possible to use different backends for persistent and ephemeral volumes.
5. Disks for SDSes should be left unallocated before deployment via the Fuel UI or CLI.
6. MDMs and Gateways are deployed together and only onto controller nodes.
7. Adding and removing node(s) to/from the OpenStack cluster won't re-configure ScaleIO.
# Installation Guide
## ScaleIO Plugin install from RPM file
To install the ScaleIO plugin, follow these steps:
1. Download the plugin from the [Fuel Plugins Catalog](https://software.mirantis.com/download-mirantis-openstack-fuel-plug-ins/).
2. Copy the plugin file to the Fuel Master node. Follow the [Quick start guide](https://software.mirantis.com/quick-start/) if you don't have a running Fuel Master node yet.
```
$ scp scaleio-1.0-1.0.1-1.noarch.rpm root@<Fuel Master node IP address>:/tmp/
```
3. Log into the Fuel Master node and install the plugin using the fuel command line.
```
$ fuel plugins --install /tmp/scaleio-1.0-1.0.1-1.noarch.rpm
```
4. Verify that the plugin is installed correctly.
```
$ fuel plugins
```
## ScaleIO Plugin install from source code
To install the ScaleIO Plugin from source code, you first need to prepare an environment to build the RPM file of the plugin. The recommended approach is to build the RPM file directly on the Fuel Master node so that you won't have to copy that file later.
Prepare an environment for building the plugin on the **Fuel Master node**.
0. You might want to make sure that the kernel installed on the nodes targeted for ScaleIO SDC installation (compute and cinder nodes) is suitable for the drivers available here: ``` ftp://QNzgdxXix:Aw3wFAwAq3@ftp.emc.com/ ```. Look for something like ``` Ubuntu/2.0.5014.0/4.2.0-30-generic ```. The local kernel version can be found with the ``` uname -a ``` command.
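For example, to print just the kernel release and compare it with the driver directory names (output shown is illustrative):
```
$ uname -r
4.2.0-30-generic
```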
1. Install the standard Linux development tools:
```
$ yum install createrepo rpm rpm-build dpkg-devel
$ yum install createrepo rpm rpm-build dpkg-devel git
```
2. Install the Fuel Plugin Builder. To do that, you should first get pip:
@ -77,29 +76,124 @@ In this case, please refer to the section "Preparing an environment for plugin d
of the [Fuel Plugins wiki](https://wiki.openstack.org/wiki/Fuel/Plugins) if you
need further instructions about how to build the Fuel Plugin Builder.*
4. Clone the ScaleIO Plugin git repository (note the `--recursive` option):
4. Clone the ScaleIO Plugin git repository:
```
$ git clone --recursive git@github.com:openstack/fuel-plugin-scaleio.git
FUEL6.1/7.0:
$ git clone https://github.com/cloudscaling/fuel-plugin-scaleio.git
$ cd fuel-plugin-scaleio
$ git checkout "tags/v0.3.1"
```
```
FUEL8.0:
$ git clone https://github.com/cloudscaling/fuel-plugin-scaleio.git
$ cd fuel-plugin-scaleio
$ git checkout "tags/v0.3.2"
```
5. Check that the plugin is valid:
```
$ fpb --check ./fuel-plugin-scaleio
$ fpb --check .
```
6. Build the plugin:
```
$ fpb --build ./fuel-plugin-scaleio
$ fpb --build .
```
7. Now you have created an RPM file that you can install using the steps described above. The RPM file will be located in:
7. Install the plugin:
```
$ ./fuel-plugin-scaleio/scaleio-1.0-1.0.1-1.noarch.rpm
FUEL6.1/7.0:
$ fuel plugins --install ./scaleio-2.0-2.0.0-1.noarch.rpm
```
```
FUEL8.0:
$ fuel plugins --install ./scaleio-2.1-2.1.0-1.noarch.rpm
```
## ScaleIO Plugin install from Fuel Plugins Catalog
To install the ScaleIOv2.0 Fuel plugin:
1. Download it from the [Fuel Plugins Catalog](https://www.mirantis.com/products/openstack-drivers-and-plugins/fuel-plugins/)
2. Copy the rpm file to the Fuel Master node
```
FUEL6.1/7.0
[root@home ~]# scp scaleio-2.0-2.0.0-1.noarch.rpm root@fuel-master:/tmp
```
```
FUEL8.0
[root@home ~]# scp scaleio-2.1-2.1.0-1.noarch.rpm root@fuel-master:/tmp
```
3. Log into the Fuel Master node and install the plugin using the Fuel CLI
```
FUEL6.1/7.0:
$ fuel plugins --install ./scaleio-2.0-2.0.0-1.noarch.rpm
```
```
FUEL8.0:
$ fuel plugins --install ./scaleio-2.1-2.1.0-1.noarch.rpm
```
4. Verify that the plugin is installed correctly
```
FUEL6.1/7.0
[root@fuel-master ~]# fuel plugins
id | name | version | package_version
---|-----------------------|---------|----------------
1 | scaleio | 2.0.0 | 2.0.0
```
```
FUEL8.0
[root@fuel-master ~]# fuel plugins
id | name | version | package_version
---|-----------------------|---------|----------------
1 | scaleio | 2.1.0 | 3.0.0
```
# User Guide
Please read the [ScaleIO Plugin User Guide](doc/source).
Please read the [ScaleIO Plugin User Guide](doc/source/builddir/ScaleIO-Plugin_Guide.pdf) for a full description.
First of all, the ScaleIOv2.0 plugin functionality should be enabled by switching ScaleIO on in the Settings.
The ScaleIO section contains the following settings to fill in:
1. Existing ScaleIO Cluster.
Set "Use existing ScaleIO" checkbox.
The following parameters should be specified:
* Gateway IP address - IP address of ScaleIO gateway
* Gateway port - Port of ScaleIO gateway
* Gateway user - User to access ScaleIO gateway
* Gateway password - Password to access ScaleIO gateway
* Protection domain - The protection domain to use
* Storage pools - Comma-separated list of storage pools
2. New ScaleIO deployment
The following parameters should be specified:
* Admin password - Administrator password to set for ScaleIO MDM
* Protection domain - The protection domain to create for ScaleIO cluster
* Storage pools - Comma-separated list of storage pools to create for ScaleIO cluster
* Storage devices - Paths to storage devices, comma-separated (e.g. /dev/sdb,/dev/sdd)
The following parameters are optional and have default values suitable for most cases:
* Controller as Storage - Use controller nodes for ScaleIO SDS (by default only compute nodes are used for ScaleIO SDS deployment)
* Provisioning type - Thin/Thick provisioning for ephemeral and persistent volumes
* Checksum mode - Checksum protection. ScaleIO protects data in-flight by calculating and validating the checksum value for the payload at both ends.
Note, the checksum feature may have a minor effect on performance. ScaleIO utilizes hardware capabilities for this feature, where possible.
* Spare policy - % out of total space to be reserved for rebalance and redundancy recovery cases.
* Enable Zero Padding for Storage Pools - New volumes will be zeroed if the option is enabled.
* Background device scanner - This option enables the background device scanner on the devices in device-only mode.
* XtremCache devices - List of SDS devices for SSD caching. Cache is disabled if the list is empty.
* XtremCache storage pools - List of storage pools which should be cached with XtremCache.
* Capacity high priority alert - Threshold of the non-spare capacity of the Storage Pool that will trigger a high-priority alert, in percentage format.
* Capacity critical priority alert - Threshold of the non-spare capacity of the Storage Pool that will trigger a critical-priority alert, in percentage format.
Configuration of disks for allocated nodes:
The devices listed in the "Storage devices" and "XtremCache devices" should be left unallocated for ScaleIO SDS to work.
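A quick way to confirm that a device is still unallocated before deployment (the device name below is an example only):
```
$ lsblk -f /dev/sdb
```
The device should show no partitions and no filesystem before it is listed in "Storage devices" or "XtremCache devices".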
# Contributions
@ -112,3 +206,4 @@ Please use the [Launchpad project site](https://launchpad.net/fuel-plugin-scalei
# License
Please read the [LICENSE](LICENSE) document for the latest licensing information.


@ -1,14 +0,0 @@
$scaleio = hiera('scaleio')
if $scaleio['metadata']['enabled'] {
notify{'ScaleIO plugin enabled': }
if $::osfamily != 'RedHat' {
fail("Unsupported osfamily: ${::osfamily} operatingsystem: ${::operatingsystem}, module ${module_name} currently only supports osfamily RedHat")
}
#TODO: Check that Storage pool has enough space
} else {
notify{'ScaleIO plugin disabled': }
}


@ -0,0 +1,29 @@
# The puppet configures OpenStack cinder to use ScaleIO.
$scaleio = hiera('scaleio')
if $scaleio['metadata']['enabled'] {
$all_nodes = hiera('nodes')
$nodes = filter_nodes($all_nodes, 'name', $::hostname)
if empty(filter_nodes($nodes, 'role', 'cinder')) {
fail("Cinder Role is not found on the host ${::hostname}")
}
if $scaleio['provisioning_type'] and $scaleio['provisioning_type'] != '' {
$provisioning_type = $scaleio['provisioning_type']
} else {
$provisioning_type = undef
}
$gateway_ip = $scaleio['existing_cluster'] ? {
true => $scaleio['gateway_ip'],
default => hiera('management_vip')
}
class {'scaleio_openstack::cinder':
ensure => present,
gateway_user => $::gateway_user,
gateway_password => $scaleio['password'],
gateway_ip => $gateway_ip,
gateway_port => $::gateway_port,
protection_domains => $scaleio['protection_domain'],
storage_pools => $::storage_pools,
provisioning_type => $provisioning_type,
}
}


@ -0,0 +1,379 @@
# The puppet configures ScaleIO cluster - adds MDMs, SDSs, sets up
# protection domains and storage pools.
#Helpers for array processing
define mdm_standby() {
$ip = $title
notify {"Configure Standby MDM ${ip}": } ->
scaleio::mdm {"Standby MDM ${ip}":
ensure => 'present',
ensure_properties => 'present',
name => $ip,
role => 'manager',
ips => $ip,
management_ips => $ip,
}
}
define mdm_tb() {
$ip = $title
notify {"Configure Tie-Breaker MDM ${ip}": } ->
scaleio::mdm {"Tie-Breaker MDM ${ip}":
ensure => 'present',
ensure_properties => 'present',
name => $ip,
role => 'tb',
ips => $ip,
management_ips => undef,
}
}
define storage_pool_ensure(
$protection_domain,
$zero_padding,
$scanner_mode,
$checksum_mode,
$spare_policy,
$cached_storage_pools_array,
) {
$sp_name = $title
if $::scaleio_storage_pools and $::scaleio_storage_pools != '' {
$current_pools = split($::scaleio_storage_pools, ',')
} else {
$current_pools = []
}
if ! ("${protection_domain}:${sp_name}" in $current_pools) {
if $sp_name in $cached_storage_pools_array {
$rfcache_usage = 'use'
} else {
$rfcache_usage = 'dont_use'
}
notify {"storage_pool_ensure ${protection_domain}:${sp_name}: zero_padding=${zero_padding}, checksum=${checksum_mode}, scanner=${scanner_mode}, spare=${spare_policy}, rfcache=${rfcache_usage}":
} ->
scaleio::storage_pool {"Storage Pool ${protection_domain}:${sp_name}":
name => $sp_name,
protection_domain => $protection_domain,
zero_padding_policy => $zero_padding,
checksum_mode => $checksum_mode,
scanner_mode => $scanner_mode,
spare_percentage => $spare_policy,
rfcache_usage => $rfcache_usage,
}
} else {
notify {"Skip storage pool ${sp_name} because it is already exists in ${::scaleio_storage_pools}": }
}
}
define sds_ensure(
$sds_nodes,
$protection_domain,
$storage_pools, # if sds_devices_config==undef then storage_pools and device_paths are used
$device_paths, # (Fuel without plugin roles support), so all SDSes have the same config
$rfcache_devices,
$sds_devices_config, # for Fuel with plugin roles support; the config can differ between SDSes
) {
$sds_name = $title
$sds_node_ = filter_nodes($sds_nodes, 'name', $sds_name)
$sds_node = $sds_node_[0]
#ips for data path traffic
$storage_ips = $sds_node['storage_address']
$storage_ip_roles = 'sdc_only'
#ips for communication with MDM and SDS<=>SDS
$mgmt_ips = $sds_node['internal_address']
$mgmt_ip_roles = 'sds_only'
if count(split($storage_ips, ',')) != 1 or count(split($mgmt_ips, ',')) != 1 {
fail("TODO: behaviour changed: address becomes comma-separated list ${storage_ips} or ${mgmt_ips}, so it is needed to add the generation of ip roles")
}
if $mgmt_ips == $storage_ips {
$sds_ips = "${storage_ips}"
$sds_ip_roles = 'all'
}
else {
$sds_ips = "${storage_ips},${mgmt_ips}"
$sds_ip_roles = "${storage_ip_roles},${mgmt_ip_roles}"
}
if $sds_devices_config {
$cfg = $sds_devices_config[$sds_name]
if $cfg {
notify{"sds ${sds_name} config: ${cfg}": }
$pool_devices = $cfg ? { false => undef, default => convert_sds_config($cfg) }
if $pool_devices {
$sds_pools = $pool_devices[0]
$sds_device = $pool_devices[1]
} else {
warn("sds ${sds_name} there is empty pools and devices in configuration")
$sds_pools = undef
$sds_device = undef
}
if $cfg['rfcache_devices'] and $cfg['rfcache_devices'] != '' {
$sds_rfcache_devices = $cfg['rfcache_devices']
} else {
$sds_rfcache_devices = undef
}
} else {
warn("sds ${sds_name} there is no sds config in DB")
$sds_pools = undef
$sds_device = undef
$sds_rfcache_devices = undef
}
} else {
$sds_pools = $storage_pools
$sds_device = $device_paths
$sds_rfcache_devices = $rfcache_devices
}
notify { "sds ${sds_name}: pools:devices:rfcache: '${sds_pools}': '${sds_device}': '${sds_rfcache_devices}'": } ->
scaleio::sds {$sds_name:
ensure => 'present',
name => $sds_name,
protection_domain => $protection_domain,
ips => $sds_ips,
ip_roles => $sds_ip_roles,
storage_pools => $sds_pools,
device_paths => $sds_device,
rfcache_devices => $sds_rfcache_devices,
}
}
define cleanup_sdc () {
$sdc_ip = $title
scaleio::sdc {"Remove SDC ${sdc_ip}":
ensure => 'absent',
ip => $sdc_ip,
}
}
define cleanup_sds () {
$sds_name = $title
scaleio::sds {"Remove SDS ${sds_name}":
ensure => 'absent',
name => $sds_name,
}
}
# Only the first mdm, which is proposed to be the first master, does the cluster configuration
$scaleio = hiera('scaleio')
if $scaleio['metadata']['enabled'] {
if ! $scaleio['existing_cluster'] {
if $::managers_ips {
$all_nodes = hiera('nodes')
# primary controller configures cluster
if ! empty(filter_nodes(filter_nodes($all_nodes, 'name', $::hostname), 'role', 'primary-controller')) {
$fuel_version = hiera('fuel_version')
if $fuel_version <= '8.0' {
$storage_nodes = filter_nodes($all_nodes, 'role', 'compute')
if $scaleio['sds_on_controller'] {
$controller_nodes = filter_nodes($all_nodes, 'role', 'controller')
$pr_controller_nodes = filter_nodes($all_nodes, 'role', 'primary-controller')
$sds_nodes = concat(concat($pr_controller_nodes, $controller_nodes), $storage_nodes)
} else {
$sds_nodes = $storage_nodes
}
} else {
$sds_nodes = concat(filter_nodes($all_nodes, 'role', 'scaleio-storage-tier1'), filter_nodes($all_nodes, 'role', 'scaleio-storage-tier2'))
}
$sds_nodes_names = keys(nodes_to_hash($sds_nodes, 'name', 'internal_address'))
$sds_nodes_count = count($sds_nodes_names)
$sdc_nodes = concat(filter_nodes($all_nodes, 'role', 'compute'), filter_nodes($all_nodes, 'role', 'cinder'))
$sdc_nodes_ips = values(nodes_to_hash($sdc_nodes, 'name', 'internal_address'))
$mdm_ip_array = split($::managers_ips, ',')
$tb_ip_array = split($::tb_ips, ',')
$mdm_count = count($mdm_ip_array)
$tb_count = count($tb_ip_array)
if $mdm_count < 2 or $tb_count == 0 {
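# not enough manager MDMs or no tie-breakers discovered yet - fall back to single-node cluster mode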
$cluster_mode = 1
$standby_ips = []
$slave_names = undef
$tb_names = undef
} else {
# primary controller IP is first in the list in case of first deploy and it creates cluster.
# it's guaranteed by the tasks environment.pp and resize_cluster.pp
# in case of re-deploy the first ip is current master ip
$standby_ips = delete($mdm_ip_array, $mdm_ip_array[0])
if $mdm_count < 3 or $tb_count == 1 {
$cluster_mode = 3
$slave_names = join(values_at($standby_ips, "0-0"), ',')
$tb_names = join(values_at($tb_ip_array, "0-0"), ',')
} else {
$cluster_mode = 5
# in case of switching from 3 to 5 nodes add only standby mdm/tb
$to_add_slaves = difference(values_at($standby_ips, "0-1"), intersection(values_at($standby_ips, "0-1"), split($::scaleio_mdm_ips, ',')))
$to_add_tb = difference(values_at($tb_ip_array, "0-1"), intersection(values_at($tb_ip_array, "0-1"), split($::scaleio_tb_ips, ',')))
$slave_names = join($to_add_slaves, ',')
$tb_names = join($to_add_tb, ',')
}
}
$password = $scaleio['password']
if $scaleio['protection_domain_nodes'] {
$protection_domain_number = ($sds_nodes_count + $scaleio['protection_domain_nodes'] - 1) / $scaleio['protection_domain_nodes']
} else {
$protection_domain_number = 1
}
$protection_domain = $protection_domain_number ? {
0 => $scaleio['protection_domain'],
1 => $scaleio['protection_domain'],
default => "${scaleio['protection_domain']}_${protection_domain_number}"
}
# parse config from centralized DB if exists
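# the JSON is assumed to roughly mirror what the sds task stores in the shared DB, e.g.
# {"<sds name>": {"devices": {"tier1": "/dev/sdb", "tier2": "", "tier3": ""}, "rfcache_devices": ""}}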
if $::scaleio_sds_config and $::scaleio_sds_config != '' {
$sds_devices_config = parsejson($::scaleio_sds_config)
}
else {
$sds_devices_config = undef
}
if $scaleio['device_paths'] and $scaleio['device_paths'] != '' {
# if devices come from settings, remove probable trailing commas
$paths = join(split($scaleio['device_paths'], ','), ',')
} else {
$paths = undef
}
if $scaleio['storage_pools'] and $scaleio['storage_pools'] != '' {
# if storage pools come from settings remove probable trailing commas
$pools_array = split($scaleio['storage_pools'], ',')
$pools = join($pools_array, ',')
} else {
$pools_array = get_pools_from_sds_config($sds_devices_config)
$pools = undef
}
$zero_padding = $scaleio['zero_padding'] ? {
false => 'disable',
default => 'enable'
}
$scanner_mode = $scaleio['scanner_mode'] ? {
false => 'disable',
default => 'enable'
}
$checksum_mode = $scaleio['checksum_mode'] ? {
false => 'disable',
default => 'enable'
}
$spare_policy = $scaleio['spare_policy'] ? {
false => undef,
default => $scaleio['spare_policy']
}
if $scaleio['rfcache_devices'] and $scaleio['rfcache_devices'] != '' {
$rfcache_devices = $scaleio['rfcache_devices']
} else {
$rfcache_devices = undef
}
if $scaleio['cached_storage_pools'] and $scaleio['cached_storage_pools'] != '' {
$cached_storage_pools_array = split($scaleio['cached_storage_pools'], ',')
} else {
$cached_storage_pools_array = []
}
if $scaleio['capacity_high_alert_threshold'] and $scaleio['capacity_high_alert_threshold'] != '' {
$capacity_high_alert_threshold = $scaleio['capacity_high_alert_threshold']
} else {
$capacity_high_alert_threshold = undef
}
if $scaleio['capacity_critical_alert_threshold'] and $scaleio['capacity_critical_alert_threshold'] != '' {
$capacity_critical_alert_threshold = $scaleio['capacity_critical_alert_threshold']
} else {
$capacity_critical_alert_threshold = undef
}
notify {"Configure cluster MDM: ${master_mdm}": } ->
scaleio::login {'Normal':
password => $password,
require => File_line['SCALEIO_discovery_allowed']
}
if $::scaleio_sdc_ips {
$current_sdc_ips = split($::scaleio_sdc_ips, ',')
$to_keep_sdc = intersection($current_sdc_ips, $sdc_nodes_ips)
$to_remove_sdc = difference($current_sdc_ips, $to_keep_sdc)
# todo: it is not clear whether this is safe: the sdc task actually runs before the cluster task,
# so to_add_sdc_ips here is always empty, because all SDCs
# are already registered in the cluster and are returned by the facter scaleio_current_sdc_list
notify {"SDC change current='${::scaleio_current_sdc_list}', to_add='${to_add_sdc_ips}', to_remove='${to_remove_sdc}'": } ->
cleanup_sdc {$to_remove_sdc:
require => Scaleio::Login['Normal'],
}
}
if $::scaleio_sds_names {
$current_sds_names = split($::scaleio_sds_names, ',')
$to_keep_sds = intersection($current_sds_names, $sds_nodes_names)
$to_add_sds_names = difference($sds_nodes_names, $to_keep_sds)
$to_remove_sds = difference($current_sds_names, $to_keep_sds)
notify {"SDS change current='${::scaleio_sds_names}' new='${new_sds_names}' to_remove='${to_remove_sds}'": } ->
cleanup_sds {$to_remove_sds:
require => Scaleio::Login['Normal'],
}
} else {
$to_add_sds_names = $sds_nodes_names
}
if $cluster_mode != 1 {
mdm_standby {$standby_ips:
require => Scaleio::Login['Normal'],
} ->
mdm_tb{$tb_ip_array:} ->
scaleio::cluster {'Configure cluster mode':
ensure => 'present',
cluster_mode => $cluster_mode,
slave_names => $slave_names,
tb_names => $tb_names,
require => Scaleio::Login['Normal'],
}
}
$protection_domain_resource_name = "Ensure protection domain ${protection_domain}"
scaleio::protection_domain {$protection_domain_resource_name:
name => $protection_domain,
require => Scaleio::Login['Normal'],
} ->
storage_pool_ensure {$pools_array:
protection_domain => $protection_domain,
zero_padding => $zero_padding,
scanner_mode => $scanner_mode,
checksum_mode => $checksum_mode,
spare_policy => $spare_policy,
cached_storage_pools_array => $cached_storage_pools_array,
} ->
sds_ensure {$to_add_sds_names:
sds_nodes => $sds_nodes,
protection_domain => $protection_domain,
storage_pools => $pools,
device_paths => $paths,
rfcache_devices => $rfcache_devices,
sds_devices_config => $sds_devices_config,
require => Scaleio::Protection_domain[$protection_domain_resource_name],
}
if $capacity_high_alert_threshold and $capacity_critical_alert_threshold {
scaleio::cluster {'Configure alerts':
ensure => 'present',
capacity_high_alert_threshold => $capacity_high_alert_threshold,
capacity_critical_alert_threshold => $capacity_critical_alert_threshold,
require => Scaleio::Protection_domain[$protection_domain_resource_name],
}
}
# Apply high performance profile to SDC-es
# Use the first sdc ip because the underlying puppet uses the all_sdc parameters
if ! empty($sdc_nodes_ips) {
scaleio::sdc {'Set performance settings for all available SDCs':
ip => $sdc_nodes_ips[0],
require => Scaleio::Protection_domain[$protection_domain_resource_name],
}
}
} else {
notify {"Not Master MDM IP ${master_mdm}": }
}
file_line {'SCALEIO_mdm_ips':
ensure => present,
path => '/etc/environment',
match => "^SCALEIO_mdm_ips=",
line => "SCALEIO_mdm_ips=${::managers_ips}",
} ->
# forbid requesting sdc/sds from the discovery facters,
# this is a workaround for a ScaleIO problem -
# these requests hang for some reason if the cluster is in a degraded state
file_line {'SCALEIO_discovery_allowed':
ensure => present,
path => '/etc/environment',
match => "^SCALEIO_discovery_allowed=",
line => "SCALEIO_discovery_allowed=no",
}
} else {
fail('Empty MDM IPs configuration')
}
} else {
notify{'Skip configuring cluster because of using existing cluster': }
}
}


@ -1,2 +0,0 @@
$fuel_settings = parseyaml(file('/etc/astute.yaml'))
class {'scaleio_fuel::configure_cinder': }


@ -1,2 +0,0 @@
$fuel_settings = parseyaml(file('/etc/astute.yaml'))
class {'scaleio_fuel::configure_gateway': }


@ -1,2 +0,0 @@
$fuel_settings = parseyaml(file('/etc/astute.yaml'))
class {'scaleio_fuel::configure_nova': }


@ -1,2 +0,0 @@
$fuel_settings = parseyaml(file('/etc/astute.yaml'))
class {'scaleio_fuel::create_volume_type': }


@ -0,0 +1,35 @@
# The puppet discovers cluster and updates mdm_ips and tb_ips values for next cluster task.
$scaleio = hiera('scaleio')
if $scaleio['metadata']['enabled'] {
if ! $scaleio['existing_cluster'] {
# names of mdm and tb are IPs in fuel
$current_mdms = split($::scaleio_mdm_ips, ',')
$current_managers = concat(split($::scaleio_mdm_ips, ','), split($::scaleio_standby_mdm_ips, ','))
$current_tbs = concat(split($::scaleio_tb_ips, ','), split($::scaleio_standby_tb_ips, ','))
$discovered_mdms_ips = join($current_mdms, ',')
$discovered_managers_ips = join($current_managers, ',')
$discovered_tbs_ips = join($current_tbs, ',')
notify {"ScaleIO cluster: discovery: discovered_managers_ips='${discovered_managers_ips}', discovered_tbs_ips='${discovered_tbs_ips}'": } ->
file_line {'SCALEIO_mdm_ips':
ensure => present,
path => '/etc/environment',
match => "^SCALEIO_mdm_ips=",
line => "SCALEIO_mdm_ips=${discovered_mdms_ips}",
} ->
file_line {'SCALEIO_managers_ips':
ensure => present,
path => '/etc/environment',
match => "^SCALEIO_managers_ips=",
line => "SCALEIO_managers_ips=${discovered_managers_ips}",
} ->
file_line {'SCALEIO_tb_ips':
ensure => present,
path => '/etc/environment',
match => "^SCALEIO_tb_ips=",
line => "SCALEIO_tb_ips=${discovered_tbs_ips}",
}
} else {
notify{'Skip configuring cluster because of using existing cluster': }
}
}


@ -1,2 +0,0 @@
$fuel_settings = parseyaml(file('/etc/astute.yaml'))
class {'scaleio_fuel::enable_ha': }


@ -0,0 +1,174 @@
# The puppet resets mdm ips to the initial state for the next cluster detection on controllers.
# On client nodes all controllers are used as mdm ips because there is no way to detect the cluster there.
define env_fact($role, $fact, $value) {
file_line { "Append a SCALEIO_${role}_${fact} line to /etc/environment":
ensure => present,
path => '/etc/environment',
match => "^SCALEIO_${role}_${fact}=",
line => "SCALEIO_${role}_${fact}=${value}",
}
}
$scaleio = hiera('scaleio')
if $scaleio['metadata']['enabled'] {
notify{'ScaleIO plugin enabled': }
# The following exec allows interrupt for debugging at the very beginning of the plugin deployment
# because Fuel doesn't provide any tools for this and deployment can last for more than two hours.
# Timeouts in tasks.yaml and in the deployment_tasks.yaml (which in 6.1 is not user-exposed and
can be found for example in the astute docker container during deployment) should be set to high values.
# It'll be invoked only if /tmp/scaleio_debug file exists on particular node and you can use
# "touch /tmp/go" when you're ready to resume.
exec { "Wait on debug interrupt: use touch /tmp/go to resume":
command => "bash -c 'while [ ! -f /tmp/go ]; do :; done'",
path => [ '/bin/' ],
onlyif => "ls /tmp/scaleio_debug",
}
case $::osfamily {
'RedHat': {
fail('This is a temporary limitation. ScaleIO supports only Ubuntu for now.')
}
'Debian': {
# nothing to do
}
default: {
fail("Unsupported osfamily: ${::osfamily} operatingsystem: ${::operatingsystem}, module ${module_name} only support osfamily RedHat and Debian")
}
}
$all_nodes = hiera('nodes')
if ! $scaleio['skip_checks'] and empty(filter_nodes($all_nodes, 'role', 'cinder')) {
fail('At least one Node with Cinder role is required')
}
if $scaleio['existing_cluster'] {
# Existing ScaleIO cluster attaching
notify{'Use existing ScaleIO cluster': }
env_fact{"Environment fact: role gateway, ips: ${scaleio['gateway_ip']}":
role => 'gateway',
fact => 'ips',
value => $scaleio['gateway_ip']
} ->
env_fact{"Environment fact: role gateway, user: ${scaleio['gateway_user']}":
role => 'gateway',
fact => 'user',
value => $scaleio['gateway_user']
} ->
env_fact{"Environment fact: role gateway, password: ${scaleio['password']}":
role => 'gateway',
fact => 'password',
value => $scaleio['password']
} ->
env_fact{"Environment fact: role gateway, port: ${scaleio['gateway_port']}":
role => 'gateway',
fact => 'port',
value => $scaleio['gateway_port']
} ->
env_fact{"Environment fact: role storage, pools: ${scaleio['existing_storage_pools']}":
role => 'storage',
fact => 'pools',
value => $scaleio['existing_storage_pools']
}
# mdm_ips are requested from gateways in a separate manifest because there is no way to pass args to facter
}
else {
# New ScaleIO cluster deployment
notify{'Deploy ScaleIO cluster': }
$controllers_nodes = filter_nodes($all_nodes, 'role', 'controller')
$primary_controller_nodes = filter_nodes($all_nodes, 'role', 'primary-controller')
# use the management network for ScaleIO components communication
# the order of ips should be the same on all nodes:
# - the first ip must be the primary controller, the others should be sorted to have a defined order
$controllers_ips_ = ipsort(values(nodes_to_hash($controllers_nodes, 'name', 'internal_address')))
$controller_ips_array = concat(values(nodes_to_hash($primary_controller_nodes, 'name', 'internal_address')), $controllers_ips_)
$ctrl_ips = join($controller_ips_array, ',')
notify{"ScaleIO cluster: ctrl_ips=${ctrl_ips}": }
# Check SDS count
$fuel_version = hiera('fuel_version')
if $fuel_version <= '8.0' {
$controller_sds_count = $scaleio['sds_on_controller'] ? {
true => count($controller_ips_array),
default => 0
}
$total_sds_count = count(filter_nodes($all_nodes, 'role', 'compute')) + $controller_sds_count
if $total_sds_count < 3 {
$sds_check_msg = 'There should be at least 3 nodes with SDSs, either add Compute node or use Controllers as SDS.'
}
} else {
$tier1_sds_count = count(filter_nodes($all_nodes, 'role', 'scaleio-storage-tier1'))
$tier2_sds_count = count(filter_nodes($all_nodes, 'role', 'scaleio-storage-tier2'))
if $tier1_sds_count != 0 and $tier1_sds_count < 3 {
$sds_check_msg = 'There are less than 3 nodes with Scaleio Storage Tier1 role.'
}
if $tier2_sds_count != 0 and $tier2_sds_count < 3 {
$sds_check_msg = 'There are less than 3 nodes with Scaleio Storage Tier2 role.'
}
}
if $sds_check_msg {
if ! $scaleio['skip_checks'] {
fail($sds_check_msg)
} else{
warning($sds_check_msg)
}
}
$nodes = filter_nodes($all_nodes, 'name', $::hostname)
if ! empty(concat(filter_nodes($nodes, 'role', 'controller'), filter_nodes($nodes, 'role', 'primary-controller'))) {
if $::memorysize_mb < 2900 {
if ! $scaleio['skip_checks'] {
fail("Controller node requires at least 3000MB but there is ${::memorysize_mb}")
} else {
warning("Controller node requires at least 3000MB but there is ${::memorysize_mb}")
}
}
}
if $::sds_storage_small_devices {
if ! $scaleio['skip_checks'] {
fail("Storage devices minimal size is 100GB. The following devices do not meet this requirement ${::sds_storage_small_devices}")
} else {
warning("Storage devices minimal size is 100GB. The following devices do not meet this requirement ${::sds_storage_small_devices}")
}
}
# mdm ips and tb ips must be empty to avoid queries from ScaleIO about SDC/SDS,
# the next task (cluster discovery) will set them to the correct values.
env_fact{'Environment fact: mdm ips':
role => 'mdm',
fact => 'ips',
value => ''
} ->
env_fact{'Environment fact: managers ips':
role => 'managers',
fact => 'ips',
value => ''
} ->
env_fact{'Environment fact: tb ips':
role => 'tb',
fact => 'ips',
value => ''
} ->
env_fact{'Environment fact: gateway ips':
role => 'gateway',
fact => 'ips',
value => $ctrl_ips
} ->
env_fact{'Environment fact: controller ips':
role => 'controller',
fact => 'ips',
value => $ctrl_ips
} ->
env_fact{'Environment fact: role gateway, user: admin':
role => 'gateway',
fact => 'user',
value => 'admin'
} ->
env_fact{'Environment fact: role gateway, port: 4443':
role => 'gateway',
fact => 'port',
value => 4443
} ->
env_fact{"Environment fact: role storage, pools: ${scaleio['storage_pools']}":
role => 'storage',
fact => 'pools',
value => $scaleio['storage_pools']
}
}
} else {
notify{'ScaleIO plugin disabled': }
}


@ -0,0 +1,28 @@
# The puppet configures ScaleIO MDM IPs in the environment for an existing ScaleIO cluster.
#TODO: move it from this file and from environment.pp into modules
define env_fact($role, $fact, $value) {
file_line { "Append a SCALEIO_${role}_${fact} line to /etc/environment":
ensure => present,
path => '/etc/environment',
match => "^SCALEIO_${role}_${fact}=",
line => "SCALEIO_${role}_${fact}=${value}",
}
}
$scaleio = hiera('scaleio')
if $scaleio['metadata']['enabled'] {
if $scaleio['existing_cluster'] {
$ips = $::scaleio_mdm_ips_from_gateway
if ! $ips or $ips == '' {
fail('Cannot request MDM IPs from existing cluster. Check Gateway address/port and user name with password.')
}
env_fact{"Environment fact: role mdm, ips from existing cluster ${ips}":
role => 'controller',
fact => 'ips',
value => $ips
}
}
} else {
notify{'ScaleIO plugin disabled': }
}


@ -0,0 +1,40 @@
# The puppet configures ScaleIO Gateway. Sets the password and connects to MDMs.
$scaleio = hiera('scaleio')
if $scaleio['metadata']['enabled'] {
if ! $scaleio['existing_cluster'] {
if $::managers_ips {
$gw_ips = split($::gateway_ips, ',')
$haproxy_config_options = {
'balance' => 'roundrobin',
'mode' => 'tcp',
'option' => ['tcplog'],
}
Haproxy::Service { use_include => true }
Haproxy::Balancermember { use_include => true }
class {'scaleio::gateway_server':
ensure => 'present',
mdm_ips => $::managers_ips,
password => $scaleio['password'],
} ->
notify { "Configure Haproxy for Gateway nodes: ${gw_ips}": } ->
openstack::ha::haproxy_service { 'scaleio-gateway':
order => 201,
server_names => $gw_ips,
ipaddresses => $gw_ips,
listen_port => $::gateway_port,
public_virtual_ip => hiera('public_vip'),
internal_virtual_ip => hiera('management_vip'),
define_backups => true,
public => true,
haproxy_config_options => $haproxy_config_options,
balancermember_options => 'check inter 10s fastinter 2s downinter 3s rise 3 fall 3',
}
} else {
fail('Empty MDM IPs configuration')
}
} else {
notify{'Skip deploying gateway server because of using existing cluster': }
}
}


@ -1,2 +0,0 @@
$fuel_settings = parseyaml(file('/etc/astute.yaml'))
class {'scaleio_fuel': }


@ -0,0 +1,19 @@
# The puppet installs ScaleIO MDM packages.
$scaleio = hiera('scaleio')
if $scaleio['metadata']['enabled'] {
if ! $scaleio['existing_cluster'] {
$node_ips = split($::ip_address_array, ',')
if ! empty(intersection(split($::controller_ips, ','), $node_ips))
{
notify {"Mdm server installation": } ->
class {'scaleio::mdm_server':
ensure => 'present',
}
} else {
notify{'Skip deploying mdm server because it is not a controller': }
}
} else {
notify{'Skip deploying mdm server because of using existing cluster': }
}
}


@ -0,0 +1,66 @@
# The puppet installs ScaleIO MDM packages.
$scaleio = hiera('scaleio')
if $scaleio['metadata']['enabled'] {
if ! $scaleio['existing_cluster'] {
$all_nodes = hiera('nodes')
$node_ips = split($::ip_address_array, ',')
$new_mdm_ips = split($::managers_ips, ',')
$is_tb = ! empty(intersection(split($::tb_ips, ','), $node_ips))
$is_mdm = ! empty(intersection($new_mdm_ips, $node_ips))
if $is_tb or $is_mdm {
if $is_tb {
$is_manager = 0
$master_mdm_name = undef
$master_ip = undef
} else {
$is_manager = 1
$is_new_cluster = ! $::scaleio_mdm_ips or $::scaleio_mdm_ips == ''
$is_primary_controller = ! empty(filter_nodes(filter_nodes($all_nodes, 'name', $::hostname), 'role', 'primary-controller'))
if $is_new_cluster and $is_primary_controller {
$master_ip = $new_mdm_ips[0]
$master_mdm_name = $new_mdm_ips[0]
} else {
$master_ip = undef
$master_mdm_name = undef
}
}
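# if no MDM password has been stored in /etc/environment yet, fall back to 'admin' as the initial one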
$old_password = $::mdm_password ? {
undef => 'admin',
default => $::mdm_password
}
$password = $scaleio['password']
notify {"Controller server is_manager=${is_manager} master_mdm_name=${master_mdm_name} master_ip=${master_ip}": } ->
class {'scaleio::mdm_server':
ensure => 'present',
is_manager => $is_manager,
master_mdm_name => $master_mdm_name,
mdm_ips => $master_ip,
}
if $old_password != $password {
if $master_mdm_name {
scaleio::login {'First':
password => $old_password,
require => Class['scaleio::mdm_server']
} ->
scaleio::cluster {'Set password':
password => $old_password,
new_password => $password,
before => File_line['Append a SCALEIO_mdm_password line to /etc/environment']
}
}
file_line {'Append a SCALEIO_mdm_password line to /etc/environment':
ensure => present,
path => '/etc/environment',
match => "^SCALEIO_mdm_password=",
line => "SCALEIO_mdm_password=${password}",
}
}
} else {
notify{'Skip deploying mdm server because it is neither mdm nor tb': }
}
} else {
notify{'Skip deploying mdm server because of using existing cluster': }
}
}


@ -0,0 +1,29 @@
# The puppet configures OpenStack nova to use ScaleIO.
$scaleio = hiera('scaleio')
if $scaleio['metadata']['enabled'] {
$all_nodes = hiera('nodes')
$nodes = filter_nodes($all_nodes, 'name', $::hostname)
if empty(filter_nodes($nodes, 'role', 'compute')) {
fail("Compute Role is not found on the host ${::hostname}")
}
if $scaleio['provisioning_type'] and $scaleio['provisioning_type'] != '' {
$provisioning_type = $scaleio['provisioning_type']
} else {
$provisioning_type = undef
}
$gateway_ip = $scaleio['existing_cluster'] ? {
true => $scaleio['gateway_ip'],
default => hiera('management_vip')
}
class {'scaleio_openstack::nova':
ensure => present,
gateway_user => $::gateway_user,
gateway_password => $scaleio['password'],
gateway_ip => $gateway_ip,
gateway_port => $::gateway_port,
protection_domains => $scaleio['protection_domain'],
storage_pools => $::storage_pools,
provisioning_type => $provisioning_type,
}
}


@ -0,0 +1,125 @@
# The puppet configures OpenStack entities like flavor, volume_types, etc.
define apply_flavor(
$flavors_hash = undef, # hash of flavors
) {
$resource_name = $title # 'flavor:action'
$parsed_name = split($title, ':')
$action = $parsed_name[1]
if $action == 'add' {
$flavor_name = $parsed_name[0]
$flavor = $flavors_hash[$flavor_name]
scaleio_openstack::flavor {$resource_name:
ensure => present,
name => $resource_name,
storage_pool => $flavor['storage_pool'],
id => $flavor['id'],
ram_size => $flavor['ram_size'],
vcpus => $flavor['vcpus'],
disk_size => $flavor['disk_size'],
ephemeral_size => $flavor['ephemeral_size'],
swap_size => $flavor['swap_size'],
rxtx_factor => $flavor['rxtx_factor'],
is_public => $flavor['is_public'],
provisioning => $flavor['provisioning'],
}
} else {
scaleio_openstack::flavor {$resource_name:
ensure => absent,
name => $resource_name,
}
}
}
$scaleio = hiera('scaleio')
if $scaleio['metadata']['enabled'] {
$all_nodes = hiera('nodes')
if ! empty(filter_nodes(filter_nodes($all_nodes, 'name', $::hostname), 'role', 'primary-controller')) {
if $scaleio['storage_pools'] and $scaleio['storage_pools'] != '' {
# if storage pools come from settings remove probable trailing commas
$pools_array = split($scaleio['storage_pools'], ',')
} else {
$pools_array = get_pools_from_sds_config($sds_devices_config)
}
$storage_pool = $pools_array ? {
undef => undef,
default => $pools_array[0] # use first pool for flavors
}
$flavors = {
'm1.micro' => {
'id' => undef,
'ram_size' => 64,
'vcpus' => 1,
'disk_size' => 8, # because ScaleIO requires the size to be a multiple of 8
'ephemeral_size' => 0,
'rxtx_factor' => 1,
'is_public' => 'True',
'provisioning' => 'thin',
'storage_pool' => $storage_pool,
},
'm1.tiny' => {
'id' => 1,
'ram_size' => 512,
'vcpus' => 1,
'disk_size' => 8,
'ephemeral_size' => 0,
'rxtx_factor' => 1,
'is_public' => 'True',
'provisioning' => 'thin',
'storage_pool' => $storage_pool,
},
'm1.small' => {
'id' => 2,
'ram_size' => 2048,
'vcpus' => 1,
'disk_size' => 24,
'ephemeral_size' => 0,
'rxtx_factor' => 1,
'is_public' => 'True',
'provisioning' => 'thin',
'storage_pool' => $storage_pool,
},
'm1.medium' => {
'id' => 3,
'ram_size' => 4096,
'vcpus' => 2,
'disk_size' => 48,
'ephemeral_size' => 0,
'rxtx_factor' => 1,
'is_public' => 'True',
'provisioning' => 'thin',
'storage_pool' => $storage_pool,
},
'm1.large' => {
'id' => 4,
'ram_size' => 8192,
'vcpus' => 4,
'disk_size' => 80,
'ephemeral_size' => 0,
'rxtx_factor' => 1,
'is_public' => 'True',
'provisioning' => 'thin',
'storage_pool' => $storage_pool,
},
'm1.xlarge' => {
'id' => 5,
'ram_size' => 16384,
'vcpus' => 8,
'disk_size' => 160,
'ephemeral_size' => 0,
'rxtx_factor' => 1,
'is_public' => 'True',
'provisioning' => 'thin',
'storage_pool' => $storage_pool,
},
}
$to_remove_flavors = suffix(keys($flavors), ':remove')
$to_add_flavors = suffix(keys($flavors), ':add')
apply_flavor {$to_remove_flavors:
flavors_hash => undef,
} ->
apply_flavor {$to_add_flavors:
flavors_hash => $flavors,
}
}
}


@ -0,0 +1,120 @@
# The puppet sets 1_node mode and removes absent nodes if there are any.
# It expects that the facters mdm_ips and tb_ips are correctly set to the current cluster state
define cleanup_mdm () {
$mdm_name = $title
scaleio::mdm {"Remove MDM ${mdm_name}":
ensure => 'absent',
name => $mdm_name,
}
}
# Only the mdm with the minimal IP among the current MDMs does the cleanup
$scaleio = hiera('scaleio')
if $scaleio['metadata']['enabled'] {
if ! $scaleio['existing_cluster'] {
$all_nodes = hiera('nodes')
$controller_ips_array = split($::controller_ips, ',')
# names of mdm and tb are IPs in fuel
$current_mdms = split($::managers_ips, ',')
$current_tbs = split($::tb_ips, ',')
$mdms_present = intersection($current_mdms, $controller_ips_array)
$mdms_present_str = join($mdms_present, ',')
$mdms_absent = difference($current_mdms, $mdms_present)
$tbs_present = intersection($current_tbs, $controller_ips_array)
$tbs_absent = difference($current_tbs, $tbs_present)
$controllers_count = count($controller_ips_array)
if $controllers_count < 3 {
# 1 node mode
$to_add_mdm_count = 1 - count($mdms_present)
$to_add_tb_count = 0
} else {
# 3 node mode
if $controllers_count < 5 {
$to_add_mdm_count = 2 - count($mdms_present)
$to_add_tb_count = 1 - count($tbs_present)
} else {
# 5 node mode
$to_add_mdm_count = 3 - count($mdms_present)
$to_add_tb_count = 2 - count($tbs_present)
}
}
$nodes_present = concat(intersection($current_mdms, $controller_ips_array), $tbs_present)
$available_nodes = difference($controller_ips_array, intersection($nodes_present, $controller_ips_array))
if $to_add_tb_count > 0 and count($available_nodes) >= $to_add_tb_count {
$last_tb_index = count($available_nodes) - 1
$first_tb_index = $last_tb_index - $to_add_tb_count + 1
$tbs_present_tmp = intersection($current_tbs, $controller_ips_array) # use tmp because concat modifies its first param
$new_tb_ips = join(concat($tbs_present_tmp, values_at($available_nodes, "${first_tb_index}-${last_tb_index}")), ',')
} else {
$new_tb_ips = join($tbs_present, ',')
}
if $to_add_mdm_count > 0 and count($available_nodes) >= $to_add_mdm_count {
$last_mdm_index = $to_add_mdm_count - 1
$mdms_present_tmp = intersection($current_mdms, $controller_ips_array) # use tmp because concat modifies its first param
$new_mdms_ips = join(concat($mdms_present_tmp, values_at($available_nodes, "0-${last_mdm_index}")), ',')
} else {
$new_mdms_ips = join($mdms_present, ',')
}
$is_primary_controller = !empty(filter_nodes(filter_nodes($all_nodes, 'name', $::hostname), 'role', 'primary-controller'))
notify {"ScaleIO cluster: resize: controller_ips_array='${controller_ips_array}', current_mdms='${current_mdms}', current_tbs='${current_tbs}'": }
if !empty($mdms_absent) or !empty($tbs_absent) {
notify {"ScaleIO cluster: change: mdms_present='${mdms_present}', mdms_absent='${mdms_absent}', tbs_present='${tbs_present}', tbs_absent='${tbs_absent}'": }
# primary-controller will do cleanup
if $is_primary_controller {
$active_mdms = split($::scaleio_mdm_ips, ',')
$slaves_names = join(delete($active_mdms, $active_mdms[0]), ',') # first is current master
$to_remove_mdms = concat(split(join($mdms_absent, ','), ','), $tbs_absent) # join/split because concat affects first argument
scaleio::login {'Normal':
password => $scaleio['password']
} ->
scaleio::cluster {'Resize cluster mode to 1_node and remove other MDMs':
ensure => 'absent',
cluster_mode => 1,
slave_names => $slaves_names,
tb_names => $::scaleio_tb_ips,
require => Scaleio::Login['Normal'],
before => File_line['SCALEIO_mdm_ips']
} ->
cleanup_mdm {$to_remove_mdms:
before => File_line['SCALEIO_mdm_ips']
}
} else {
notify {"ScaleIO cluster: resize: Not primary controller ${::hostname}": }
}
} else {
notify {'ScaleIO cluster: resize: nothing to resize': }
}
file_line {'SCALEIO_mdm_ips':
ensure => present,
path => '/etc/environment',
match => "^SCALEIO_mdm_ips=",
line => "SCALEIO_mdm_ips=${mdms_present_str}",
} ->
file_line {'SCALEIO_managers_ips':
ensure => present,
path => '/etc/environment',
match => "^SCALEIO_managers_ips=",
line => "SCALEIO_managers_ips=${new_mdms_ips}",
} ->
file_line {'SCALEIO_tb_ips':
ensure => present,
path => '/etc/environment',
match => "^SCALEIO_tb_ips=",
line => "SCALEIO_tb_ips=${new_tb_ips}",
}
# only primary-controller needs discovery of sds/sdc
if $is_primary_controller {
file_line {'SCALEIO_discovery_allowed':
ensure => present,
path => '/etc/environment',
match => "^SCALEIO_discovery_allowed=",
line => "SCALEIO_discovery_allowed=yes",
require => File_line['SCALEIO_tb_ips']
}
}
} else {
notify{'Skip configuring cluster because of using existing cluster': }
}
}


@ -0,0 +1,14 @@
# The puppet installs ScaleIO SDC packages and connects to MDMs.
# It expects that any controller could be an MDM
$scaleio = hiera('scaleio')
if $scaleio['metadata']['enabled'] {
if ! $::controller_ips {
fail('Empty Controller IPs configuration')
}
class {'scaleio::sdc_server':
ensure => 'present',
mdm_ip => $::controller_ips,
}
}


@ -0,0 +1,9 @@
# The puppet installs ScaleIO SDC packages (without connecting them to MDMs).
$scaleio = hiera('scaleio')
if $scaleio['metadata']['enabled'] {
class {'scaleio::sdc_server':
ensure => 'present',
mdm_ip => undef,
}
}


@ -0,0 +1,114 @@
# The puppet installs ScaleIO SDS packages
# helper define for array processing
define sds_device_cleanup() {
$device = $title
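# remove every existing partition from the device so ScaleIO can consume it as a raw SDS device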
exec { "device ${device} cleaup":
command => "bash -c 'for i in \$(parted ${device} print | awk \"/^ [0-9]+/ {print(\\\$1)}\"); do parted ${device} rm \$i; done'",
path => [ '/bin/', '/sbin/' , '/usr/bin/', '/usr/sbin/' ],
}
}
# Just install packages
$scaleio = hiera('scaleio')
if $scaleio['metadata']['enabled'] {
if ! $scaleio['existing_cluster'] {
$fuel_version = hiera('fuel_version')
$use_plugin_roles = $fuel_version > '8.0'
if ! $use_plugin_roles {
# it is supposed that this task runs on a compute or controller node
$node_ips = split($::ip_address_array, ',')
$is_sds_server = empty(intersection(split($::controller_ips, ','), $node_ips)) or $scaleio['sds_on_controller']
} else {
$all_nodes = hiera('nodes')
$nodes = filter_nodes($all_nodes, 'name', $::hostname)
$is_sds_server = ! empty(concat(
concat(filter_nodes($nodes, 'role', 'scaleio-storage-tier1'), filter_nodes($nodes, 'role', 'scaleio-storage-tier2')),
filter_nodes($nodes, 'role', 'scaleio-storage-tier3')))
}
if $is_sds_server {
if ! $use_plugin_roles {
if $scaleio['device_paths'] and $scaleio['device_paths'] != '' {
$device_paths = split($scaleio['device_paths'], ',')
} else {
$device_paths = []
}
if $scaleio['rfcache_devices'] and $scaleio['rfcache_devices'] != '' {
$rfcache_devices = split($scaleio['rfcache_devices'], ',')
} else {
$rfcache_devices = []
}
if ! empty($rfcache_devices) {
$use_xcache = 'present'
} else {
$use_xcache = 'absent'
}
$devices = concat(flatten($device_paths), $rfcache_devices)
sds_device_cleanup {$devices:
before => Class['Scaleio::Sds_server']
} ->
class {'scaleio::sds_server':
ensure => 'present',
xcache => $use_xcache,
}
} else {
# save devices in shared DB
$tier1_devices = $::sds_storage_devices_tier1 ? {
undef => '',
default => join(split($::sds_storage_devices_tier1, ','), ',')
}
$tier2_devices = $::sds_storage_devices_tier2 ? {
undef => '',
default => join(split($::sds_storage_devices_tier2, ','), ',')
}
$tier3_devices = $::sds_storage_devices_tier3 ? {
undef => '',
default => join(split($::sds_storage_devices_tier3, ','), ',')
}
$rfcache_devices = $::sds_storage_devices_rfcache ? {
undef => '',
default => join(split($::sds_storage_devices_rfcache, ','), ',')
}
if $rfcache_devices and $rfcache_devices != '' {
$use_xcache = 'present'
} else {
$use_xcache = 'absent'
}
$sds_name = $::hostname
$sds_config = {
"${sds_name}" => {
'devices' => {
'tier1' => "${tier1_devices}",
'tier2' => "${tier2_devices}",
'tier3' => "${tier3_devices}",
},
'rfcache_devices' => "${rfcache_devices}",
}
}
# convert hash to string and add escaping of quotes
$sds_config_str = regsubst(regsubst(inline_template('<%= @sds_config.to_s %>'), '=>', ":", 'G'), '"', '\"', 'G')
$galera_host = hiera('management_vip')
$mysql_opts = hiera('mysql')
$mysql_password = $mysql_opts['root_password']
$sql_connect = "mysql -h ${galera_host} -uroot -p${mysql_password}"
$db_query = 'CREATE DATABASE IF NOT EXISTS scaleio; USE scaleio'
$table_query = 'CREATE TABLE IF NOT EXISTS sds (name VARCHAR(64), PRIMARY KEY(name), value TEXT(1024))'
$update_query = "INSERT INTO sds (name, value) VALUES ('${sds_name}', '${sds_config_str}') ON DUPLICATE KEY UPDATE value='${sds_config_str}'"
$sql_query = "${sql_connect} -e \"${db_query}; ${table_query}; ${update_query};\""
class {'scaleio::sds_server':
ensure => 'present',
xcache => $use_xcache,
} ->
package {'mysql-client':
ensure => present,
} ->
exec {'sds_devices_config':
command => $sql_query,
path => '/bin:/usr/bin:/usr/local/bin',
}
}
}
} else {
notify{'Skip sds server because of using existing cluster': }
}
}

@ -1 +0,0 @@
Subproject commit 1236d9ee474da4ef342fae64e243998cf50678da

@ -1 +0,0 @@
Subproject commit c7394e224a422f7cdb8d3062964c51e48feda27b


@ -1,2 +0,0 @@
443
8081


@ -1,2 +0,0 @@
[Filters]
drv_cfg: CommandFilter, /opt/emc/scaleio/sdc/bin/drv_cfg, root

File diff suppressed because it is too large


@ -1,504 +0,0 @@
# Copyright (c) 2013 EMC Corporation
# All Rights Reserved
# This software contains the intellectual property of EMC Corporation
# or is licensed to EMC Corporation from third parties. Use of this
# software and the intellectual property contained therein is expressly
# limited to the terms and conditions of the License Agreement under which
# it is provided by or on behalf of EMC.
import glob
import hashlib
import os
import time
import urllib2
import urlparse
import requests
import json
import re
import sys
import urllib
from oslo.config import cfg
from nova import exception
from nova.openstack.common.gettextutils import _
from nova.openstack.common import log as logging
from nova.openstack.common import loopingcall
from nova.openstack.common import processutils
from nova import paths
from nova.storage import linuxscsi
from nova import utils
from nova.virt.libvirt import config as vconfig
from nova.virt.libvirt import utils as virtutils
from nova.virt.libvirt.volume import LibvirtBaseVolumeDriver
LOG = logging.getLogger(__name__)
volume_opts = [
cfg.IntOpt('num_iscsi_scan_tries',
default=3,
help='number of times to rescan iSCSI target to find volume'),
cfg.IntOpt('num_iser_scan_tries',
default=3,
help='number of times to rescan iSER target to find volume'),
cfg.StrOpt('rbd_user',
help='the RADOS client name for accessing rbd volumes'),
cfg.StrOpt('rbd_secret_uuid',
help='the libvirt uuid of the secret for the rbd_user'
'volumes'),
cfg.StrOpt('nfs_mount_point_base',
default=paths.state_path_def('mnt'),
help='Dir where the nfs volume is mounted on the compute node'),
cfg.StrOpt('nfs_mount_options',
help='Mount options passed to the nfs client. See section '
'of the nfs man page for details'),
cfg.IntOpt('num_aoe_discover_tries',
default=3,
help='number of times to rediscover AoE target to find volume'),
cfg.StrOpt('glusterfs_mount_point_base',
default=paths.state_path_def('mnt'),
help='Dir where the glusterfs volume is mounted on the '
'compute node'),
cfg.BoolOpt('libvirt_iscsi_use_multipath',
default=False,
help='use multipath connection of the iSCSI volume'),
cfg.BoolOpt('libvirt_iser_use_multipath',
default=False,
help='use multipath connection of the iSER volume'),
cfg.StrOpt('scality_sofs_config',
help='Path or URL to Scality SOFS configuration file'),
cfg.StrOpt('scality_sofs_mount_point',
default='$state_path/scality',
help='Base dir where Scality SOFS shall be mounted'),
cfg.ListOpt('qemu_allowed_storage_drivers',
default=[],
help='Protocols listed here will be accessed directly '
'from QEMU. Currently supported protocols: [gluster]')
]
CONF = cfg.CONF
CONF.register_opts(volume_opts)
OK_STATUS_CODE = 200
VOLUME_NOT_MAPPED_ERROR = 84
VOLUME_ALREADY_MAPPED_ERROR = 81
class LibvirtScaleIOVolumeDriver(LibvirtBaseVolumeDriver):
"""Class implements libvirt part of volume driver
for ScaleIO cinder driver."""
local_sdc_id = None
mdm_id = None
pattern3 = None
def __init__(self, connection):
"""Create back-end to nfs."""
LOG.warning("ScaleIO libvirt volume driver INIT")
super(LibvirtScaleIOVolumeDriver,
self).__init__(connection, is_block_dev=False)
def find_volume_path(self, volume_id):
LOG.info("looking for volume %s" % volume_id)
# look for the volume in /dev/disk/by-id directory
disk_filename = ""
tries = 0
while not disk_filename:
if (tries > 15):
raise exception.NovaException(
"scaleIO volume {0} not found at expected \
path ".format(volume_id))
by_id_path = "/dev/disk/by-id"
if not os.path.isdir(by_id_path):
LOG.warn(
"scaleIO volume {0} not yet found (no directory \
/dev/disk/by-id yet). Try number: {1} ".format(
volume_id,
tries))
tries = tries + 1
time.sleep(1)
continue
filenames = os.listdir(by_id_path)
LOG.warning(
"Files found in {0} path: {1} ".format(
by_id_path,
filenames))
for filename in filenames:
if (filename.startswith("emc-vol") and
filename.endswith(volume_id)):
disk_filename = filename
if not disk_filename:
LOG.warn(
"scaleIO volume {0} not yet found. \
Try number: {1} ".format(
volume_id,
tries))
tries = tries + 1
time.sleep(1)
if (tries != 0):
LOG.warning(
"Found scaleIO device {0} after {1} retries ".format(
disk_filename,
tries))
full_disk_name = by_id_path + "/" + disk_filename
LOG.warning("Full disk name is " + full_disk_name)
return full_disk_name
# path = os.path.realpath(full_disk_name)
# LOG.warning("Path is " + path)
# return path
def _get_client_id(self, server_ip, server_port,
server_username, server_password, server_token, sdc_ip):
request = "https://" + server_ip + ":" + server_port + \
"/api/types/Client/instances/getByIp::" + sdc_ip + "/"
LOG.info("ScaleIO get client id by ip request: %s" % request)
r = requests.get(
request,
auth=(
server_username,
server_token),
verify=False)
r = self._check_response(
r,
request,
server_ip,
server_port,
server_username,
server_password,
server_token)
sdc_id = r.json()
if (sdc_id == '' or sdc_id is None):
msg = ("Client with ip %s wasn't found " % (sdc_ip))
LOG.error(msg)
raise exception.NovaException(data=msg)
if (r.status_code != 200 and "errorCode" in sdc_id):
msg = (
"Error getting sdc id from ip %s: %s " %
(sdc_ip, sdc_id['message']))
LOG.error(msg)
raise exception.NovaException(data=msg)
LOG.info("ScaleIO sdc id is %s" % sdc_id)
return sdc_id
def _get_volume_id(self, server_ip, server_port,
server_username, server_password,
server_token, volname):
volname_encoded = urllib.quote(volname, '')
volname_double_encoded = urllib.quote(volname_encoded, '')
# volname = volname.replace('/', '%252F')
LOG.info(
"volume name after double encoding is %s " %
volname_double_encoded)
request = "https://" + server_ip + ":" + server_port + \
"/api/types/Volume/instances/getByName::" + volname_double_encoded
LOG.info("ScaleIO get volume id by name request: %s" % request)
r = requests.get(
request,
auth=(
server_username,
server_token),
verify=False)
r = self._check_response(
r,
request,
server_ip,
server_port,
server_username,
server_password,
server_token)
volume_id = r.json()
if (volume_id == '' or volume_id is None):
msg = ("Volume with name %s wasn't found " % (volname))
LOG.error(msg)
raise exception.NovaException(data=msg)
if (r.status_code != OK_STATUS_CODE and "errorCode" in volume_id):
msg = (
"Error getting volume id from name %s: %s " %
(volname, volume_id['message']))
LOG.error(msg)
raise exception.NovaException(data=msg)
LOG.info("ScaleIO volume id is %s" % volume_id)
return volume_id
def _check_response(self, response, request, server_ip,
server_port, server_username,
server_password, server_token, isGetRequest=True, params=None):
if (response.status_code == 401 or response.status_code == 403):
LOG.info("Token is invalid, going to re-login and get a new one")
login_request = "https://" + server_ip + \
":" + server_port + "/api/login"
r = requests.get(
login_request,
auth=(
server_username,
server_password),
verify=False)
token = r.json()
# repeat request with valid token
LOG.debug(
"going to perform request again {0} \
with valid token".format(request))
if isGetRequest:
res = requests.get(
request,
auth=(
server_username,
token),
verify=False)
else:
headers = {'content-type': 'application/json'}
res = requests.post(
request,
data=json.dumps(params),
headers=headers,
auth=(
server_username,
token),
verify=False)
return res
return response
def connect_volume(self, connection_info, disk_info):
"""Connect the volume. Returns xml for libvirt."""
conf = super(LibvirtScaleIOVolumeDriver,
self).connect_volume(connection_info,
disk_info)
LOG.info("scaleIO connect volume in scaleio libvirt volume driver")
data = connection_info
LOG.info("scaleIO connect to stuff " + str(data))
data = connection_info['data']
# LOG.info("scaleIO connect to joined "+str(data))
# LOG.info("scaleIO Dsk info "+str(disk_info))
volname = connection_info['data']['scaleIO_volname']
# sdc ip here is wrong, probably not retrieved properly in cinder
# driver. Currently not used.
sdc_ip = connection_info['data']['hostIP']
server_ip = connection_info['data']['serverIP']
server_port = connection_info['data']['serverPort']
server_username = connection_info['data']['serverUsername']
server_password = connection_info['data']['serverPassword']
server_token = connection_info['data']['serverToken']
iops_limit = connection_info['data']['iopsLimit']
bandwidth_limit = connection_info['data']['bandwidthLimit']
LOG.debug(
"scaleIO Volume name: {0}, SDC IP: {1}, REST Server IP: {2}, \
REST Server username: {3}, REST Server password: {4}, iops limit: \
{5}, bandwidth limit: {6}".format(
volname,
sdc_ip,
server_ip,
server_username,
server_password,
iops_limit,
bandwidth_limit))
cmd = ['drv_cfg']
cmd += ["--query_guid"]
LOG.info("ScaleIO sdc query guid command: " + str(cmd))
try:
(out, err) = utils.execute(*cmd, run_as_root=True)
LOG.info("map volume %s: stdout=%s stderr=%s" % (cmd, out, err))
except processutils.ProcessExecutionError as e:
msg = ("Error querying sdc guid: %s" % (e.stderr))
LOG.error(msg)
raise exception.NovaException(data=msg)
guid = out
msg = ("Current sdc guid: %s" % (guid))
LOG.info(msg)
# sdc_id = self._get_client_id(server_ip, server_port, \
# server_username, server_password, server_token, sdc_ip)
# params = {'sdcId' : sdc_id}
params = {'guid': guid, 'allowMultipleMappings': 'TRUE'}
volume_id = self._get_volume_id(
server_ip,
server_port,
server_username,
server_password,
server_token,
volname)
headers = {'content-type': 'application/json'}
request = "https://" + server_ip + ":" + server_port + \
"/api/instances/Volume::" + str(volume_id) + "/action/addMappedSdc"
LOG.info("map volume request: %s" % request)
r = requests.post(
request,
data=json.dumps(params),
headers=headers,
auth=(
server_username,
server_token),
verify=False)
r = self._check_response(
r,
request,
server_ip,
server_port,
server_username,
server_password,
server_token,
False,
params)
# LOG.info("map volume response: %s" % r.text)
if (r.status_code != OK_STATUS_CODE):
response = r.json()
error_code = response['errorCode']
if (error_code == VOLUME_ALREADY_MAPPED_ERROR):
msg = (
"Ignoring error mapping volume %s: volume already mapped" %
(volname))
LOG.warning(msg)
else:
msg = (
"Error mapping volume %s: %s" %
(volname, response['message']))
LOG.error(msg)
raise exception.NovaException(data=msg)
# convert id to hex
# val = int(volume_id)
# id_in_hex = hex((val + (1 << 64)) % (1 << 64))
# formated_id = id_in_hex.rstrip("L").lstrip("0x") or "0"
formated_id = volume_id
conf.source_path = self.find_volume_path(formated_id)
conf.source_type = 'block'
# set QoS settings after map was performed
if (iops_limit is not None or bandwidth_limit is not None):
params = {
'guid': guid}
if (bandwidth_limit is not None):
params['bandwidthLimitInKbps'] = bandwidth_limit
if (iops_limit is not None):
params['iopsLimit'] = iops_limit
request = "https://" + server_ip + ":" + server_port + \
"/api/instances/Volume::" + \
str(volume_id) + "/action/setMappedSdcLimits"
LOG.info("set client limit request: %s" % request)
r = requests.post(
request,
data=json.dumps(params),
headers=headers,
auth=(
server_username,
server_token),
verify=False)
r = self._check_response(
r,
request,
server_ip,
server_port,
server_username,
server_password,
server_token,
False,
params)
if (r.status_code != OK_STATUS_CODE):
response = r.json()
LOG.info("set client limit response: %s" % response)
msg = (
"Error setting client limits for volume %s: %s" %
(volname, response['message']))
LOG.error(msg)
return conf
def disconnect_volume(self, connection_info, disk_info):
conf = super(LibvirtScaleIOVolumeDriver,
self).disconnect_volume(connection_info,
disk_info)
LOG.info("scaleIO disconnect volume in scaleio libvirt volume driver")
volname = connection_info['data']['scaleIO_volname']
sdc_ip = connection_info['data']['hostIP']
server_ip = connection_info['data']['serverIP']
server_port = connection_info['data']['serverPort']
server_username = connection_info['data']['serverUsername']
server_password = connection_info['data']['serverPassword']
server_token = connection_info['data']['serverToken']
LOG.debug(
"scaleIO Volume name: {0}, SDC IP: {1}, REST Server IP: \
{2}".format(
volname,
sdc_ip,
server_ip))
cmd = ['drv_cfg']
cmd += ["--query_guid"]
LOG.info("ScaleIO sdc query guid command: " + str(cmd))
try:
(out, err) = utils.execute(*cmd, run_as_root=True)
LOG.info("map volume %s: stdout=%s stderr=%s" % (cmd, out, err))
except processutils.ProcessExecutionError as e:
msg = ("Error querying sdc guid: %s" % (e.stderr))
LOG.error(msg)
raise exception.NovaException(data=msg)
guid = out
msg = ("Current sdc guid: %s" % (guid))
LOG.info(msg)
params = {'guid': guid}
headers = {'content-type': 'application/json'}
volume_id = self._get_volume_id(
server_ip,
server_port,
server_username,
server_password,
server_token,
volname)
request = "https://" + server_ip + ":" + server_port + \
"/api/instances/Volume::" + \
str(volume_id) + "/action/removeMappedSdc"
LOG.info("unmap volume request: %s" % request)
r = requests.post(
request,
data=json.dumps(params),
headers=headers,
auth=(
server_username,
server_token),
verify=False)
r = self._check_response(
r,
request,
server_ip,
server_port,
server_username,
server_password,
server_token,
False,
params)
if (r.status_code != OK_STATUS_CODE):
response = r.json()
error_code = response['errorCode']
if (error_code == VOLUME_NOT_MAPPED_ERROR):
msg = (
"Ignoring error unmapping volume %s: volume not mapped" %
(volname))
LOG.warning(msg)
else:
msg = (
"Error unmapping volume %s: %s" %
(volname, response['message']))
LOG.error(msg)
raise exception.NovaException(data=msg)

View File

@ -0,0 +1,221 @@
# The set of facts about ScaleIO cluster.
# All facts expect that MDM IPs are available via the fact 'mdm_ips'.
# If mdm_ips is absent then the facts are skipped.
# The facts about SDS/SDC and getting IPs from Gateway additionally expect that
# MDM password is available via the fact 'mdm_password'.
#
# Facts about MDM:
# (they go over the MDM IPs one by one and request information from the MDM cluster
# via SCLI query_cluster command)
# ---------------------------------------------------------------------------------
# | Name | Description
# |--------------------------------------------------------------------------------
# | scaleio_mdm_ips | Comma separated list of MDM IPs (excluding standby)
# | scaleio_mdm_names | Comma separated list of MDM names (excluding standby)
# | scaleio_tb_ips | Comma separated list of Tie-Breaker IPs (excluding standby)
# | scaleio_tb_names | Comma separated list of Tie-Breaker names (excluding standby)
# | scaleio_standby_mdm_ips | Comma separated list of standby manager IPs
# | scaleio_standby_tb_ips | Comma separated list of standby tie breaker IPs
#
# Facts about SDS and SDC:
# (they use MDM IPs as a single list and request information from a cluster via
# SCLI query_all_sds and query_all_sdc commands)
# ---------------------------------------------------------------------------------
# | Name | Description
# |--------------------------------------------------------------------------------
# | scaleio_sds_ips | Comma separated list of SDS IPs.
# | scaleio_sds_names | Comma separated list of SDS names.
# | scaleio_sdc_ips | Comma separated list of SDC IPs,
# | | it is list of management IPs, not storage IPs.
# Facts about MDM from Gateway:
# (It requests them from Gateway via curl and requires the fact 'gateway_ips'.
# The user is 'admin' by default, or the fact 'gateway_user' if it exists.
# A port is 4443 or the fact 'gateway_port' if it exists.)
# ---------------------------------------------------------------------------------
# | Name | Description
# |--------------------------------------------------------------------------------
# | scaleio_mdm_ips_from_gateway | Comma separated list of MDM IP.
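#
# Illustration (hypothetical values, consistent with the sample query_cluster output further below):
#   Facter.value('scaleio_mdm_ips')   # => "192.168.0.4,192.168.0.5"
#   Facter.value('scaleio_tb_ips')    # => "192.168.0.6"
#   Facter.value('scaleio_sds_ips')   # => comma separated SDS IPs reported by query_all_sds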
require 'date'
require 'facter'
require 'json'
$scaleio_log_file = "/var/log/fuel-plugin-scaleio.log"
def debug_log(msg)
File.open($scaleio_log_file, 'a') {|f| f.write("%s: %s\n" % [Time.now.strftime("%Y-%m-%d %H:%M:%S"), msg]) }
end
# Facter to scan existing cluster
# Controller IPs to scan
$controller_ips = Facter.value(:controller_ips)
if $controller_ips and $controller_ips != ''
# Register all facts for MDMs
# Example of output that facters below parse:
# Cluster:
# Mode: 3_node, State: Normal, Active: 3/3, Replicas: 2/2
# Master MDM:
# Name: 192.168.0.4, ID: 0x0ecb483853835e00
# IPs: 192.168.0.4, Management IPs: 192.168.0.4, Port: 9011
# Version: 2.0.5014
# Slave MDMs:
# Name: 192.168.0.5, ID: 0x3175fbe7695bbac1
# IPs: 192.168.0.5, Management IPs: 192.168.0.5, Port: 9011
# Status: Normal, Version: 2.0.5014
# Tie-Breakers:
# Name: 192.168.0.6, ID: 0x74ccbc567622b992
# IPs: 192.168.0.6, Port: 9011
# Status: Normal, Version: 2.0.5014
# Standby MDMs:
# Name: 192.168.0.5, ID: 0x0ce414fa06a17491, Manager
# IPs: 192.168.0.5, Management IPs: 192.168.0.5, Port: 9011
# Name: 192.168.0.6, ID: 0x74ccbc567622b992, Tie Breaker
# IPs: 192.168.0.6, Port: 9011
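# Each mdm_components entry below is a triple [outer sed range, inner sed range, awk match key]
# applied in that order to the query_cluster output above to extract the corresponding IPs or names.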
mdm_components = {
'scaleio_mdm_ips' => ['/Master MDM/,/\(Tie-Breakers\)\|\(Standby MDMs\)/p', '/./,//p', 'IPs:'],
'scaleio_tb_ips' => ['/Tie-Breakers/,/Standby MDMs/p', '/./,//p', 'IPs:'],
'scaleio_mdm_names' => ['/Master MDM/,/\(Tie-Breakers\)\|\(Standby MDMs\)/p', '/./,//p', 'Name:'],
'scaleio_tb_names' => ['/Tie-Breakers/,/Standby MDMs/p', '/./,//p', 'Name:'],
'scaleio_standby_mdm_ips' => ['/Standby MDMs/,//p', '/Manager/,/Tie Breaker/p', 'IPs:'],
'scaleio_standby_tb_ips' => ['/Standby MDMs/,//p', '/Tie Breaker/,/Manager/p', 'IPs:'],
}
# Define mdm opts for SCLI tool to connect to ScaleIO cluster.
# If there is no mdm_ips available it is expected to be run on a node with MDM Master.
mdm_opts = []
$controller_ips.split(',').each do |ip|
mdm_opts.push("--mdm_ip %s" % ip)
end
# Cycle over the MDM IPs one by one because SCLI's query_cluster behaviour is inconsistent:
# it works with a single IP but not with a list.
query_result = nil
mdm_opts.detect do |opts|
query_cmd = "scli %s --query_cluster --approve_certificate 2>>%s && echo success" % [opts, $scaleio_log_file]
res = Facter::Util::Resolution.exec(query_cmd)
debug_log("%s returns:\n'%s'" % [query_cmd, res])
query_result = res unless !res or !res.include?('success')
end
if query_result
mdm_components.each do |name, selector|
Facter.add(name) do
setcode do
ip = nil
cmd = "echo '%s' | sed -n '%s' | sed -n '%s' | awk '/%s/ {print($2)}' | tr -d ','" % [query_result, selector[0], selector[1], selector[2]]
res = Facter::Util::Resolution.exec(cmd)
ip = res.split(' ').join(',') unless !res
debug_log("%s='%s'" % [name, ip])
ip
end
end
end
end
end
# Facter to scan existing cluster
# MDM IPs to scan
$discovery_allowed = Facter.value(:discovery_allowed)
$mdm_ips = Facter.value(:mdm_ips)
$mdm_password = Facter.value(:mdm_password)
if $discovery_allowed == 'yes' and $mdm_ips and $mdm_ips != '' and $mdm_password and $mdm_password != ''
sds_sdc_components = {
'scaleio_sdc_ips' => ['sdc', 'IP: [^ ]*', nil],
'scaleio_sds_ips' => ['sds', 'IP: [^ ]*', 'Protection Domain'],
'scaleio_sds_names' => ['sds', 'Name: [^ ]*', 'Protection Domain'],
}
sds_sdc_components.each do |name, selector|
Facter.add(name) do
setcode do
mdm_opts = "--mdm_ip %s" % $mdm_ips
login_cmd = "scli %s --approve_certificate --login --username admin --password %s 1>/dev/null 2>>%s" % [mdm_opts, $mdm_password, $scaleio_log_file]
query_cmd = "scli %s --approve_certificate --query_all_%s 2>>%s" % [mdm_opts, selector[0], $scaleio_log_file]
cmd = "%s && %s" % [login_cmd, query_cmd]
debug_log(cmd)
result = Facter::Util::Resolution.exec(cmd)
if result
skip_cmd = ''
if selector[2]
skip_cmd = "grep -v '%s' | " % selector[2]
end
select_cmd = "%s grep -o '%s' | awk '{print($2)}'" % [skip_cmd, selector[1]]
cmd = "echo '%s' | %s" % [result, select_cmd]
debug_log(cmd)
result = Facter::Util::Resolution.exec(cmd)
if result
result = result.split(' ')
if result.count() > 0
result = result.join(',')
end
end
end
debug_log("%s='%s'" % [name, result])
result
end
end
end
Facter.add(:scaleio_storage_pools) do
setcode do
mdm_opts = "--mdm_ip %s" % $mdm_ips
login_cmd = "scli %s --approve_certificate --login --username admin --password %s 1>/dev/null 2>>%s" % [mdm_opts, $mdm_password, $scaleio_log_file]
query_cmd = "scli %s --approve_certificate --query_all 2>>%s" % [mdm_opts, $scaleio_log_file]
fiter_cmd = "awk '/Protection Domain|Storage Pool/ {if($2==\"Domain\"){pd=$3}else{if($2==\"Pool\"){print(pd\":\"$3)}}}'"
cmd = "%s && %s | %s" % [login_cmd, query_cmd, fiter_cmd]
debug_log(cmd)
result = Facter::Util::Resolution.exec(cmd)
if result
result = result.split(' ')
if result.count() > 0
result = result.join(',')
end
end
debug_log("%s='%s'" % ['scaleio_storage_pools', result])
result
end
end
end
#The fact about MDM IPs.
#It requests them from Gateway.
$gw_ips = Facter.value(:gateway_ips)
$gw_passw = Facter.value(:gateway_password)
if $gw_passw && $gw_passw != '' and $gw_ips and $gw_ips != ''
Facter.add('scaleio_mdm_ips_from_gateway') do
setcode do
result = nil
if Facter.value('gateway_user')
gw_user = Facter.value('gateway_user')
else
gw_user = 'admin'
end
host = $gw_ips.split(',')[0]
if Facter.value('gateway_port')
port = Facter.value('gateway_port')
else
port = 4443
end
base_url = "https://%s:%s/api/%s"
login_url = base_url % [host, port, 'login']
config_url = base_url % [host, port, 'Configuration']
login_req = "curl -k --basic --connect-timeout 5 --user #{gw_user}:#{$gw_passw} #{login_url} 2>>%s | sed 's/\"//g'" % $scaleio_log_file
debug_log(login_req)
token = Facter::Util::Resolution.exec(login_req)
if token && token != ''
req_url = "curl -k --basic --connect-timeout 5 --user #{gw_user}:#{token} #{config_url} 2>>%s" % $scaleio_log_file
debug_log(req_url)
request_result = Facter::Util::Resolution.exec(req_url)
if request_result
config = JSON.parse(request_result)
if config and config['mdmAddresses']
result = config['mdmAddresses'].join(',')
end
end
end
debug_log("%s='%s'" % ['scaleio_mdm_ips_from_gateway', result])
result
end
end
end

View File

@ -0,0 +1,74 @@
require 'date'
require 'facter'
$scaleio_tier1_guid = 'f2e81bdc-99b3-4bf6-a68f-dc794da6cd8e'
$scaleio_tier2_guid = 'd5321bb3-1098-433e-b4f5-216712fcd06f'
$scaleio_tier3_guid = '97987bfc-a9ba-40f3-afea-13e1a228e492'
$scaleio_rfcache_guid = '163ddeea-95dd-4af0-a329-140623590c47'
$scaleio_tiers = {
'tier1' => $scaleio_tier1_guid,
'tier2' => $scaleio_tier2_guid,
'tier3' => $scaleio_tier3_guid,
'rfcache' => $scaleio_rfcache_guid,
}
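# The GUIDs above appear to be used as partition type markers: the facts below scan local disks
# with lsblk/partx and collect, per tier, the partitions whose TYPE matches the corresponding GUID.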
$scaleio_log_file = "/var/log/fuel-plugin-scaleio.log"
def debug_log(msg)
File.open($scaleio_log_file, 'a') {|f| f.write("%s: %s\n" % [Time.now.strftime("%Y-%m-%d %H:%M:%S"), msg]) }
end
$scaleio_tiers.each do |tier, part_guid|
facter_name = "sds_storage_devices_%s" % tier
Facter.add(facter_name) do
setcode do
devices = nil
res = Facter::Util::Resolution.exec("lsblk -nr -o KNAME,TYPE 2>%s | awk '/disk/ {print($1)}'" % $scaleio_log_file)
if res and res != ''
parts = []
disks = res.split(' ')
disks.each do |d|
disk_path = "/dev/%s" % d
part_number = Facter::Util::Resolution.exec("partx -s %s -oTYPE,NR 2>%s | awk '/%s/ {print($2)}'" % [disk_path, $scaleio_log_file, part_guid])
parts.push("%s%s" % [disk_path, part_number]) unless !part_number or part_number == ''
end
if parts.count() > 0
devices = parts.join(',')
end
end
debug_log("%s='%s'" % [facter_name, devices])
devices
end
end
end
# Facter that reports storage devices smaller than 96GB (likely too small to be used as ScaleIO SDS devices)
Facter.add('sds_storage_small_devices') do
setcode do
result = nil
disks1 = Facter.value('sds_storage_devices_tier1')
disks2 = Facter.value('sds_storage_devices_tier2')
disks3 = Facter.value('sds_storage_devices_tier3')
if disks1 or disks2 or disks3
disks = [disks1, disks2, disks3].join(',')
end
if disks
devices = disks.split(',')
if devices.count() > 0
devices.each do |d|
size = Facter::Util::Resolution.exec("partx -r -b -o SIZE %s 2>%s | grep -v SIZE" % [d, $scaleio_log_file])
if size and size != '' and size.to_i < 96*1024*1024*1024
if not result
result = {}
end
result[d] = "%s MB" % (size.to_i / 1024 / 1024)
end
end
result = result.to_s unless !result
end
end
debug_log("%s='%s'" % ['sds_storage_small_devices', result])
result
end
end

View File

@ -0,0 +1,15 @@
# set of facts about deploying environment
require 'facter'
base_cmd = "bash -c 'source /etc/environment; echo $SCALEIO_%s'"
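# Each fact mirrors a SCALEIO_<name> variable exported in /etc/environment on the node.
# Hypothetical example: a line "SCALEIO_mdm_ips=192.168.0.4,192.168.0.5" in /etc/environment
# makes Facter.value('mdm_ips') return "192.168.0.4,192.168.0.5".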
facters = ['controller_ips', 'tb_ips', 'mdm_ips', 'managers_ips',
'gateway_user', 'gateway_port', 'gateway_ips', 'gateway_password', 'mdm_password',
'storage_pools', 'discovery_allowed']
facters.each { |f|
if ! Facter.value(f)
Facter.add(f) do
setcode base_cmd % f
end
end
}

View File

@ -0,0 +1,17 @@
require 'facter'
Facter.add("ip_address_array") do
setcode do
interfaces = Facter.value(:interfaces)
interfaces_array = interfaces.split(',')
ip_address_array = []
interfaces_array.each do |interface|
ipaddress = Facter.value("ipaddress_#{interface}")
ip_address_array.push(ipaddress) unless !ipaddress
end
ssh_ip = Facter.value(:ssh_ip)
ip_address_array.push(ssh_ip) unless !ssh_ip
ip_address_array.join(',')
end
end

View File

@ -0,0 +1,22 @@
require 'facter'
base_cmd = "bash -c 'source /root/openrc; echo $%s'"
if File.exist?("/root/openrc")
Facter.add(:os_password) do
setcode base_cmd % 'OS_PASSWORD'
end
Facter.add(:os_tenant_name) do
setcode base_cmd % 'OS_TENANT_NAME'
end
Facter.add(:os_username) do
setcode base_cmd % 'OS_USERNAME'
end
Facter.add(:os_auth_url) do
setcode base_cmd % 'OS_AUTH_URL'
end
end

View File

@ -0,0 +1,43 @@
require 'date'
require 'facter'
require 'json'
$scaleio_log_file = "/var/log/fuel-plugin-scaleio.log"
def debug_log(msg)
File.open($scaleio_log_file, 'a') {|f| f.write("%s: %s\n" % [Time.now.strftime("%Y-%m-%d %H:%M:%S"), msg]) }
end
$astute_config = '/etc/astute.yaml'
if File.exists?($astute_config)
Facter.add(:scaleio_sds_config) do
setcode do
result = nil
config = YAML.load_file($astute_config)
if config and config.key?('fuel_version') and config['fuel_version'].to_s > '8.0'
galera_host = config['management_vip']
mysql_opts = config['mysql']
password = mysql_opts['root_password']
sql_query = "mysql -h %s -uroot -p%s -e 'USE scaleio; SELECT * FROM sds \\G;' 2>>%s | awk '/value:/ {sub($1 FS,\"\" );print}'" % [galera_host, password, $scaleio_log_file]
debug_log(sql_query)
query_result = Facter::Util::Resolution.exec(sql_query)
debug_log(query_result)
if query_result
query_result.each_line do |r|
if r
if not result
result = '{'
else
result += ', '
end
result += r.strip.slice(1..-2)
end
end
result += '}' unless !result
end
end
debug_log("scaleio_sds_config='%s'" % result)
result
end
end
end

View File

@ -0,0 +1,38 @@
# Convert sds_config from the centralized DB into two lists (pools and devices) of equal length
module Puppet::Parser::Functions
newfunction(:convert_sds_config, :type => :rvalue, :doc => <<-EOS
Takes sds config as a hash and returns an array - first element is pools, second is devices
EOS
) do |args|
sds_config = args[0]
result = nil
if sds_config
pools_devices = sds_config['devices']
if pools_devices
pools = nil
devices = nil
pools_devices.each do |pool, devs|
devs.split(',').each do |d|
if d and d != ""
if ! pools
pools = pool.strip
else
pools += ",%s" % pool.strip
end
if ! devices
devices = d.strip
else
devices += ",%s" % d.strip
end
end
end
end
if pools and devices
result = [pools, devices]
end
end
end
return result
end
end
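# Illustrative (hypothetical) input and output when called from Puppet:
#   convert_sds_config({'devices' => {'sp1' => '/dev/sdb,/dev/sdc', 'sp2' => '/dev/sdd'}})
#   # => ["sp1,sp1,sp2", "/dev/sdb,/dev/sdc,/dev/sdd"]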

View File

@ -0,0 +1,23 @@
# Extract the unique list of storage pool names from per-SDS configs (uses convert_sds_config.rb)
require File.join([File.expand_path(File.dirname(__FILE__)), 'convert_sds_config.rb'])
module Puppet::Parser::Functions
newfunction(:get_pools_from_sds_config, :type => :rvalue, :doc => <<-EOS
Takes a hash of per-SDS configs and returns the unique list of storage pool names
EOS
) do |args|
config = args[0]
result = []
if config
config.each do |sds, cfg|
pools_devices = function_convert_sds_config([cfg]) # prefix function_ is required for puppet functions
# args - is arrays with required options
if pools_devices and pools_devices[0]
result += pools_devices[0].split(',')
end
end
end
return result.uniq
end
end

View File

@ -1,24 +0,0 @@
module Puppet::Parser::Functions
newfunction(:get_sds_devices, :type => :rvalue) do |args|
result = {}
nodes = args[0]
device = args[1]
protection_domain = args[2]
pool_size = args[3]
storage_pool = args[4]
nodes.each do |node|
result[node["fqdn"]] = {
"ip" => node["storage_address"],
"protection_domain" => protection_domain,
"devices" => {
device => {
"size" => pool_size,
"storage_pool" => storage_pool
}
}
}
end
return result
end
end

View File

@ -1,95 +0,0 @@
class scaleio_fuel::configure_cinder
inherits scaleio_fuel::params {
$mdm_ip = $scaleio_fuel::params::mdm_ip
$gw_password = $scaleio_fuel::params::gw_password
$protection_domain = $scaleio_fuel::params::protection_domain
$storage_pool = $scaleio_fuel::params::storage_pool
notice('Configuring Controller node for ScaleIO integration')
$services = ['openstack-cinder-volume', 'openstack-cinder-api', 'openstack-cinder-scheduler', 'openstack-nova-scheduler']
#2. Copy ScaleIO Files
file { 'scaleio.py':
path => '/usr/lib/python2.6/site-packages/cinder/volume/drivers/emc/scaleio.py',
source => 'puppet:///modules/scaleio_fuel/scaleio.py',
mode => '0644',
owner => 'root',
group => 'root',
} ->
file { 'scaleio.filters':
path => '/usr/share/cinder/rootwrap/scaleio.filters',
source => 'puppet:///modules/scaleio_fuel/scaleio.filters',
mode => '0644',
owner => 'root',
group => 'root',
before => File['cinder_scaleio.config'],
}
# 3. Create config for ScaleIO
$cinder_scaleio_config = "[scaleio]
rest_server_ip=${::fuel_settings['management_vip']}
rest_server_port=4443
rest_server_username=admin
rest_server_password=${gw_password}
protection_domain_name=${protection_domain}
storage_pools=${protection_domain}:${storage_pool}
storage_pool_name=${storage_pool}
round_volume_capacity=True
force_delete=True
verify_server_certificate=False
"
file { 'cinder_scaleio.config':
ensure => present,
path => '/etc/cinder/cinder_scaleio.config',
content => $cinder_scaleio_config,
mode => '0644',
owner => 'root',
group => 'root',
before => Ini_setting['cinder_conf_enabled_backeds'],
} ->
# 4. To /etc/cinder/cinder.conf add
ini_setting { 'cinder_conf_enabled_backeds':
ensure => present,
path => '/etc/cinder/cinder.conf',
section => 'DEFAULT',
setting => 'enabled_backends',
value => 'ScaleIO',
before => Ini_setting['cinder_conf_volume_driver'],
} ->
ini_setting { 'cinder_conf_volume_driver':
ensure => present,
path => '/etc/cinder/cinder.conf',
section => 'ScaleIO',
setting => 'volume_driver',
value => 'cinder.volume.drivers.emc.scaleio.ScaleIODriver',
before => Ini_setting['cinder_conf_scio_config'],
} ->
ini_setting { 'cinder_conf_scio_config':
ensure => present,
path => '/etc/cinder/cinder.conf',
section => 'ScaleIO',
setting => 'cinder_scaleio_config_file',
value => '/etc/cinder/cinder_scaleio.config',
before => Ini_setting['cinder_conf_volume_backend_name'],
} ->
ini_setting { 'cinder_conf_volume_backend_name':
ensure => present,
path => '/etc/cinder/cinder.conf',
section => 'ScaleIO',
setting => 'volume_backend_name',
value => 'ScaleIO',
} ~>
service { $services:
ensure => running,
}
}

View File

@ -1,42 +0,0 @@
class scaleio_fuel::configure_gateway
inherits scaleio_fuel::params {
$role = $scaleio_fuel::params::role
if ($role == 'mdm') or ($role == 'tb') {
exec { 'Change ScaleIO gateway HTTP port to 8081 (server.xml)':
command => "sed -i 's|<Connector port=\"80\"|<Connector port=\"8081\"|g' /opt/emc/scaleio/gateway/conf/server.xml",
path => ['/bin'],
onlyif => '/usr/bin/test -f /opt/emc/scaleio/gateway/conf/server.xml',
} ->
exec { 'Change ScaleIO gateway HTTP port to 8081 (server_clientcert.xml)':
command => "sed -i 's|<Connector port=\"80\"|<Connector port=\"8081\"|g' /opt/emc/scaleio/gateway/conf/server_clientcert.xml",
path => ['/bin'],
onlyif => '/usr/bin/test -f /opt/emc/scaleio/gateway/conf/server_clientcert.xml',
} ->
exec { 'Check port file presence':
command => '/bin/true',
onlyif => '/usr/bin/test -f /opt/emc/scaleio/gateway/conf/port',
} ->
file { 'Update ScaleIO port file':
path => '/opt/emc/scaleio/gateway/conf/port',
source => 'puppet:///modules/scaleio_fuel/port',
mode => '0644',
owner => 'root',
group => 'root',
} ~>
service { 'scaleio-gateway':
ensure => running,
enable => true,
hasrestart => true,
}
} else {
notify {'Gateway not installed. Not doing anything.':}
}
}

View File

@ -1,39 +0,0 @@
class scaleio_fuel::configure_nova {
notice("Configuring Compute node for ScaleIO integration")
$nova_service = 'openstack-nova-compute'
#Configure nova-compute
ini_subsetting { 'nova-volume_driver':
ensure => present,
path => '/etc/nova/nova.conf',
subsetting_separator => ',',
section => 'libvirt',
setting => 'volume_drivers',
subsetting => 'scaleio=nova.virt.libvirt.scaleiolibvirtdriver.LibvirtScaleIOVolumeDriver',
notify => Service[$nova_service],
}
file { 'scaleiolibvirtdriver.py':
path => '/usr/lib/python2.6/site-packages/nova/virt/libvirt/scaleiolibvirtdriver.py',
source => 'puppet:///modules/scaleio_fuel/scaleiolibvirtdriver.py',
mode => '0644',
owner => 'root',
group => 'root',
notify => Service[$nova_service],
}
file { 'scaleio.filters':
path => '/usr/share/nova/rootwrap/scaleio.filters',
source => 'puppet:///modules/scaleio_fuel/scaleio.filters',
mode => '0644',
owner => 'root',
group => 'root',
notify => Service[$nova_service],
}
service { $nova_service:
ensure => 'running',
}
}

View File

@ -1,20 +0,0 @@
class scaleio_fuel::create_volume_type
inherits scaleio_fuel::params {
$scaleio = $::fuel_settings['scaleio']
$protection_domain = $scaleio_fuel::params::protection_domain
$storage_pool = $scaleio_fuel::params::storage_pool
$volume_type = $scaleio_fuel::params::volume_type
exec { "Create Cinder volume type \'${volume_type}\'":
command => "bash -c 'source /root/openrc; cinder type-create ${volume_type}'",
path => ['/usr/bin', '/bin'],
unless => "bash -c 'source /root/openrc; cinder type-list |grep -q \" ${volume_type} \"'",
} ->
exec { "Create Cinder volume type extra specs for \'${volume_type}\'":
command => "bash -c 'source /root/openrc; cinder type-key ${volume_type} set sio:pd_name=${protection_domain} sio:provisioning=thin sio:sp_name=${storage_pool}'",
path => ['/usr/bin', '/bin'],
onlyif => "bash -c 'source /root/openrc; cinder type-list |grep -q \" ${volume_type} \"'",
}
}

View File

@ -1,53 +0,0 @@
class scaleio_fuel::enable_ha
inherits scaleio_fuel::params {
$gw1_ip = $scaleio_fuel::params::mdm_ip[0]
$gw2_ip = $scaleio_fuel::params::mdm_ip[1]
$gw3_ip = $scaleio_fuel::params::tb_ip
$nodes_hash = $::fuel_settings['nodes']
$gw1 = filter_nodes($nodes_hash, 'storage_address', $gw1_ip)
$gw2 = filter_nodes($nodes_hash, 'storage_address', $gw2_ip)
$gw3 = filter_nodes($nodes_hash, 'storage_address', $gw3_ip)
$gw_nodes = concat(concat($gw1, $gw2), $gw3)
notify { "Gateway nodes: ${gw_nodes}": }
Haproxy::Service { use_include => true }
Haproxy::Balancermember { use_include => true }
Openstack::Ha::Haproxy_service {
server_names => filter_hash($gw_nodes, 'name'),
ipaddresses => filter_hash($gw_nodes, 'storage_address'),
public_virtual_ip => $::fuel_settings['public_vip'],
internal_virtual_ip => $::fuel_settings['management_vip'],
}
openstack::ha::haproxy_service { 'scaleio-gateway':
order => 201,
listen_port => 4443,
balancermember_port => 443,
define_backups => true,
before_start => true,
public => true,
haproxy_config_options => {
'balance' => 'roundrobin',
'mode' => 'tcp',
'option' => ['tcplog'],
},
balancermember_options => 'check',
}
exec { 'haproxy reload':
command => 'export OCF_ROOT="/usr/lib/ocf"; (ip netns list | grep haproxy) && ip netns exec haproxy /usr/lib/ocf/resource.d/fuel/ns_haproxy reload',
path => '/usr/bin:/usr/sbin:/bin:/sbin',
logoutput => true,
provider => 'shell',
tries => 10,
try_sleep => 10,
returns => [0, ''],
}
Haproxy::Listen <||> -> Exec['haproxy reload']
Haproxy::Balancermember <||> -> Exec['haproxy reload']
}

View File

@ -1,11 +0,0 @@
class scaleio_fuel
inherits scaleio_fuel::params {
$role = $scaleio_fuel::params::role
case $role {
'mdm': { include scaleio_fuel::mdm }
'tb': { include scaleio_fuel::tb }
default: { include scaleio_fuel::sds }
}
}

View File

@ -1,21 +0,0 @@
class scaleio_fuel::mdm {
$admin_password = $scaleio_fuel::params::admin_password
$gw_password = $scaleio_fuel::params::gw_password
$version = $scaleio_fuel::params::version
$mdm_ip = $scaleio_fuel::params::mdm_ip
$tb_ip = $scaleio_fuel::params::tb_ip
$cluster_name = $scaleio_fuel::params::cluster_name
$sio_sds_device = $scaleio_fuel::params::sio_sds_device
class {'::scaleio':
password => $admin_password,
gw_password => $gw_password,
version => $version,
mdm_ip => $mdm_ip,
tb_ip => $tb_ip,
cluster_name => $cluster_name,
sio_sds_device => $sio_sds_device,
components => ['mdm','gw','sds','sdc'],
}
}

View File

@ -1,55 +0,0 @@
class scaleio_fuel::params
{
# ScaleIO config parameters
$scaleio = $::fuel_settings['scaleio']
$admin_password = $scaleio['password']
$gw_password = $scaleio['gw_password']
$version = $scaleio['version']
$cluster_name = 'cluster1'
$protection_domain = 'pd1'
$storage_pool = 'sp1'
$pool_size = "${scaleio['pool_size']}GB"
$device = '/var/sio_device1'
$volume_type = 'sio_thin'
$nodes_hash = $::fuel_settings['nodes']
$controller_nodes = concat(filter_nodes($nodes_hash, 'role', 'primary-controller'), filter_nodes($nodes_hash, 'role', 'controller'))
$controller_hashes = nodes_to_hash($controller_nodes, 'name', 'storage_address')
$controller_ips = ipsort(values($controller_hashes))
notify {"Controller Nodes: ${controller_nodes}": }
notify {"Controller IPs: ${controller_ips}": }
if size($controller_nodes) < 3 {
fail('ScaleIO plugin needs at least 3 controller nodes')
}
if $version != '1.32' and $version != '2.0' {
fail("Invalid ScaleIO version '${version}'")
}
$mdm_ip = [$controller_ips[0], $controller_ips[1]]
$tb_ip = $controller_ips[2]
$current_node = filter_nodes($nodes_hash,'uid', $::fuel_settings['uid'])
$node_ip = join(values(
nodes_to_hash($current_node,'name','storage_address')))
notify {"Current Node: ${current_node}": }
case $node_ip {
$mdm_ip[0]: { $role = 'mdm' }
$mdm_ip[1]: { $role = 'mdm' }
$tb_ip: { $role = 'tb' }
default: { $role = 'sds' }
}
notify {"Node role: ${role}, IP: ${node_ip}, FQDN: ${::fqdn}": }
$sio_sds_device = get_sds_devices(
$nodes_hash, $device, $protection_domain,
$pool_size, $storage_pool)
notify {"SDS devices: ${sio_sds_device}": }
}

View File

@ -1,16 +0,0 @@
class scaleio_fuel::sds {
$admin_password = $scaleio_fuel::params::admin_password
$version = $scaleio_fuel::params::version
$mdm_ip = $scaleio_fuel::params::mdm_ip
$sio_sds_device = $scaleio_fuel::params::sio_sds_device
class {'::scaleio':
password => $admin_password,
version => $version,
mdm_ip => $mdm_ip,
sio_sds_device => $sio_sds_device,
sds_ssd_env_flag => true,
components => ['sds','sdc','lia'],
}
}

View File

@ -1,20 +0,0 @@
class scaleio_fuel::tb {
$admin_password = $scaleio_fuel::params::admin_password
$gw_password = $scaleio_fuel::params::gw_password
$version = $scaleio_fuel::params::version
$mdm_ip = $scaleio_fuel::params::mdm_ip
$tb_ip = $scaleio_fuel::params::tb_ip
$sio_sds_device = $scaleio_fuel::params::sio_sds_device
class {'::scaleio':
password => $admin_password,
gw_password => $gw_password,
version => $version,
mdm_ip => $mdm_ip,
tb_ip => $tb_ip,
sio_sds_device => $sio_sds_device,
sds_ssd_env_flag => true,
components => ['tb','gw','sds','sdc'],
}
}

@ -1 +0,0 @@
Subproject commit 7da9594cece8ae4f9d3c9c76e898de9a679a87af

183
deployment_tasks.yaml Normal file
View File

@ -0,0 +1,183 @@
##############################################################################
# ScaleIO task groups
##############################################################################
# for next version:
# - id: scaleio-storage-tier1
# type: group
# role: [scaleio-storage-tier1]
# tasks: [hiera, globals, tools, logging, netconfig, hosts, firewall, deploy_start]
# required_for: [deploy_end]
# requires: [deploy_start]
# parameters:
# strategy:
# type: parallel
#
# - id: scaleio-storage-tier2
# type: group
# role: [scaleio-storage-tier2]
# tasks: [hiera, globals, tools, logging, netconfig, hosts, firewall, deploy_start]
# required_for: [deploy_end]
# requires: [deploy_start]
# parameters:
# strategy:
# type: parallel
##############################################################################
# ScaleIO environment check
##############################################################################
- id: scaleio-environment-check
# groups: [scaleio-storage-tier1, scaleio-storage-tier2, primary-controller, controller, compute, cinder]
groups: [primary-controller, controller, compute, cinder]
required_for: [deploy_end, hosts]
requires: [deploy_start] #, netconfig]
type: puppet
parameters:
puppet_manifest: puppet/manifests/environment.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 600
##############################################################################
# ScaleIO prerequisites tasks
##############################################################################
- id: scaleio-environment
# role: [scaleio-storage-tier1, scaleio-storage-tier2, primary-controller, controller, compute, cinder]
role: [primary-controller, controller, compute, cinder]
required_for: [post_deployment_end]
requires: [post_deployment_start]
type: puppet
parameters:
puppet_manifest: puppet/manifests/environment.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 600
- id: scaleio-environment-existing-mdm-ips
# role: [scaleio-storage-tier1, scaleio-storage-tier2, primary-controller, controller, compute, cinder]
role: [primary-controller, controller, compute, cinder]
required_for: [post_deployment_end]
requires: [scaleio-environment]
type: puppet
parameters:
puppet_manifest: puppet/manifests/environment_existing_mdm_ips.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 600
##############################################################################
# ScaleIO cluster tasks
##############################################################################
- id: scaleio-mdm-packages
role: [primary-controller, controller]
required_for: [post_deployment_end]
requires: [scaleio-environment-existing-mdm-ips]
type: puppet
parameters:
puppet_manifest: puppet/manifests/mdm_package.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 600
- id: scaleio-discover-cluster
role: [primary-controller, controller]
required_for: [post_deployment_end]
requires: [scaleio-mdm-packages]
type: puppet
parameters:
puppet_manifest: puppet/manifests/discovery_cluster.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 600
- id: scaleio-resize-cluster
role: [primary-controller, controller]
required_for: [post_deployment_end]
requires: [scaleio-discover-cluster]
type: puppet
parameters:
puppet_manifest: puppet/manifests/resize_cluster.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 600
- id: scaleio-mdm-server
role: [primary-controller, controller]
required_for: [post_deployment_end]
requires: [scaleio-resize-cluster]
type: puppet
parameters:
puppet_manifest: puppet/manifests/mdm_server.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 1800
- id: scaleio-gateway-server
role: [primary-controller, controller]
required_for: [post_deployment_end]
requires: [scaleio-mdm-server]
type: puppet
parameters:
puppet_manifest: puppet/manifests/gateway_server.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 1800
- id: scaleio-sds-server
# role: [scaleio-storage-tier1, scaleio-storage-tier2]
role: [primary-controller, controller, compute]
required_for: [post_deployment_end]
requires: [scaleio-gateway-server]
type: puppet
parameters:
puppet_manifest: puppet/manifests/sds_server.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 1800
- id: scaleio-sdc-server
required_for: [post_deployment_end]
requires: [scaleio-sds-server]
role: [compute, cinder]
type: puppet
parameters:
puppet_manifest: puppet/manifests/sdc_server.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 1800
- id: scaleio-sdc
required_for: [post_deployment_end]
requires: [scaleio-sdc-server]
role: [compute, cinder]
type: puppet
parameters:
puppet_manifest: puppet/manifests/sdc.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 1800
- id: scaleio-configure-cluster
role: [primary-controller, controller]
required_for: [post_deployment_end]
requires: [scaleio-sdc]
type: puppet
parameters:
puppet_manifest: puppet/manifests/cluster.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 1800
##############################################################################
# ScaleIO OS tasks
##############################################################################
- id: scaleio-compute
required_for: [post_deployment_end]
requires: [scaleio-configure-cluster]
role: [compute]
type: puppet
parameters:
puppet_manifest: puppet/manifests/nova.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 600
- id: scaleio-cinder
required_for: [post_deployment_end]
requires: [scaleio-configure-cluster]
role: [cinder]
type: puppet
parameters:
puppet_manifest: puppet/manifests/cinder.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 600

Binary file not shown.

7
doc/source/build Executable file
View File

@ -0,0 +1,7 @@
#!/bin/bash
export LC_ALL=en_US.UTF-8
export LANG=en_US.UTF-8
sphinx-build -b pdf . builddir
#sphinx-build -b html . builddir

View File

@ -1,7 +1,7 @@
# Always use the default theme for Readthedocs
RTD_NEW_THEME = True
extensions = []
extensions = ['rst2pdf.pdfbuilder']
templates_path = ['_templates']
source_suffix = '.rst'
@ -9,10 +9,10 @@ source_suffix = '.rst'
master_doc = 'index'
project = u'The ScaleIO plugin for Fuel'
copyright = u'2015, EMC Corporation'
copyright = u'2016, EMC Corporation'
version = '1.0'
release = '1.0-1.0.1-1'
version = '2.1-2.1.0-1'
release = '2.1-2.1.0-1'
exclude_patterns = []
@ -31,3 +31,9 @@ latex_elements = {
'classoptions': ',openany,oneside',
'babel' : '\\usepackage[english]{babel}',
}
pdf_documents = [
('index', 'ScaleIO-Plugin_Guide', u'ScaleIO plugin for Fuel Documentation',
u'EMC Corporation', 'manual'),
]

View File

@ -1,14 +1,14 @@
User Guide
==========
Once the Fuel ScaleIO plugin has been installed (following the
Once the Fuel ScaleIOv2.0 plugin has been installed (following the
:ref:`Installation Guide <installation>`), you can create an *OpenStack* environments that
uses ScaleIO as the block storage backend.
Prepare infrastructure
----------------------
At least 5 nodes are required to successfully deploy Mirantis OpenStack with ScaleIO.
At least 5 nodes are required to successfully deploy Mirantis OpenStack with ScaleIO (for a cluster with 3 controllers).
#. Fuel master node (w/ 50GB Disk, 2 Network interfaces [Mgmt, PXE] )
#. OpenStack Controller #1 node
@ -16,15 +16,18 @@ At least 5 nodes are required to successfully deploy Mirantis OpenStack with Sca
#. OpenStack Controller #3 node
#. OpenStack Compute node
Each node shall have at least 2 CPUs, 4GB RAM, 200GB disk, 3 Network interfaces. The 3 interfaces will be used for the following purposes:
Each node shall have at least 2 CPUs, 4GB RAM, a 200GB disk, and 3 network interfaces. Each node that is supposed to host ScaleIO SDS should have at least one empty disk of at least 100GB.
The 3 interfaces will be used for the following purposes:
#. Admin (PXE) network: Mirantis OpenStack uses PXE booting to install the operating system, and then loads the OpenStack packages for you.
#. Public, Management and Storage networks: All of the OpenStack management traffic will flow over this network (“Management” and “Storage” will be separated by VLANs), and to re-use the network it will also host the public network used by OpenStack service nodes and the floating IP address range.
#. Private network: This network will be added to Virtual Machines when they boot. It will therefore be the route where traffic flows in and out of the VM.
Controllers 1, 2, and 3 will be used as ScaleIO MDMs, being the primary, secondary, and tie-breaker, respectively. Moreover, they will also host the ScaleIO Gateway in HA mode.
In case of a new ScaleIO cluster deployment, Controllers 1, 2, and 3 will host the ScaleIO MDM and ScaleIO Gateway services.
The Cinder role should be deployed if ScaleIO volume functionality is required.
All Compute nodes are used as ScaleIO SDS. It is possible to enable ScaleIO SDS on the Controller nodes as well. Keep in mind that 3 SDSs are the minimum required configuration, so if you have fewer than 3 compute nodes you have to deploy ScaleIO SDS on the controllers too. All nodes that will be used as ScaleIO SDS should have an equal disk configuration. All disks that will be used as SDS devices should be unallocated in Fuel.
All nodes are used as ScaleIO SDS and, therefore, contribute to the default storage pool.
In case of an existing ScaleIO cluster deployment, the plugin deploys the ScaleIO SDC component onto the Compute and Cinder nodes and configures OpenStack Cinder and Nova to use ScaleIO as the block storage backend.
The ScaleIO cluster will use the storage network for all volume and cluster maintenance operations.
@ -44,27 +47,58 @@ It is recommended to install the ScaleIO GUI to easily access and manage the Sca
:width: 50%
Select Environment
------------------
#. Create a new environment with the Fuel UI wizard. Select "Juno on CentOS 6.5" from OpenStack Release dropdown list and continue until you finish with the wizard.
#. Create a new environment with the Fuel UI wizard.
From the OpenStack Release dropdown list select "Liberty on Ubuntu 14.04" and continue until you finish with the wizard.
.. image:: images/wizard.png
:width: 80%
#. Add VMs to the new environment according to `Fuel User Guide <https://docs.mirantis.com/openstack/fuel/fuel-6.1/user-guide.html#add-nodes-to-the-environment>`_ and configure them properly.
#. Add VMs to the new environment according to `Fuel User Guide <https://docs.mirantis.com/openstack/fuel/fuel-8.0/operations.html#adding-redeploying-and-replacing-nodes>`_ and configure them properly.
Plugin configuration
--------------------
#. Go to the Settings tab and scroll down to "ScaleIO plugin" section. You need to fill all fields with your preferred ScaleIO configuration. If you do not know the purpose of a field you can leave it with its default value.
\1. Go to the Settings tab and then to the Storage section. Fill all fields with your preferred ScaleIO configuration. If you do not know the purpose of a field, you can leave it with its default value.
.. image:: images/settings.png
:width: 70%
.. image:: images/settings1.png
:width: 80%
#. Take the time to review and configure other environment settings such as the DNS and NTP servers, URLs for the repositories, etc.
\2. In order to deploy new ScaleIO cluster together with OpenStack
\a. Disable the checkbox 'Use existing ScaleIO'
\b. Provide admin passwords for the ScaleIO MDM and Gateway, and a list of storage devices to be used as ScaleIO SDS storage devices. Optionally you can provide a protection domain name and storage pool names.
.. image:: images/settings2.png
:width: 80%
.. image:: images/settings3.png
:width: 80%
\c. In case you want to specify different storage pools for different devices, provide a list of pools corresponding to the device paths, e.g. 'pool1,pool2' and '/dev/sdb,/dev/sdc' will assign /dev/sdb to pool1 and /dev/sdc to pool2.
\d. Make the disks intended as ScaleIO SDS devices unallocated. These disks will be cleaned up and added to the SDSs as storage devices. Note that, because of a current Fuel framework limitation, some space must be kept for the Cinder and Nova roles.
.. image:: images/devices_compute.png
:width: 80%
.. image:: images/devices_controller.png
:width: 80%
\3. In order to use existing ScaleIO cluster
\a. Enable checkbox 'Use existing ScaleIO'
\b. Provide IP address and password for ScaleIO Gateway, protection domain name and storage pool names that will be allowed to be used in OpenStack. The first storage pool name will become the default storage pool for volumes.
.. image:: images/settings_existing_cluster.png
:width: 80%
\4. Take the time to review and configure other environment settings such as the DNS and NTP servers, URLs for the repositories, etc.
Finish environment configuration
@ -72,12 +106,12 @@ Finish environment configuration
#. Go to the Network tab and configure the network according to your environment.
#. Run `network verification check <https://docs.mirantis.com/openstack/fuel/fuel-6.1/user-guide.html#verify-networks>`_
#. Run `network verification check <https://docs.mirantis.com/openstack/fuel/fuel-8.0/pdf/Fuel-8.0-UserGuide.pdf#Verify network configuration>`_
.. image:: images/network.png
:width: 90%
#. Press `Deploy button <https://docs.mirantis.com/openstack/fuel/fuel-6.1/user-guide.html#deploy-changes>`_ once you have finished reviewing the environment configuration.
#. Press `Deploy button <https://docs.mirantis.com/openstack/fuel/fuel-8.0/pdf/Fuel-8.0-UserGuide.pdf#Deploy changes>`_ once you have finished reviewing the environment configuration.
.. image:: images/deploy.png
:width: 60%
@ -91,7 +125,9 @@ Finish environment configuration
ScaleIO verification
--------------------
Once the OpenStack cluster is setup, we can make use of ScaleIO volumes. This is an example about how to attach a volume to a running VM.
Once the OpenStack cluster is set up, you can make use of ScaleIO volumes. This is an example about how to attach a volume to a running VM.
#. Perform the OpenStack Health Check via the Fuel UI. Note that tests related to running instances must stay unselected, because they use the default instance flavour while ScaleIO requires a flavour with volume sizes that are multiples of 8GB. Fuel does not allow the plugin to configure these tests. See the example below.
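   For example, a flavour with an 8GB root disk that satisfies this requirement could be created manually (the flavour name and sizes below are only illustrative):

   ::

     openstack flavor create --vcpus 1 --ram 2048 --disk 8 m1.scaleio.small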
#. Login into the OpenStack cluster:
@ -100,24 +136,19 @@ Once the OpenStack cluster is setup, we can make use of ScaleIO volumes. This is
.. image:: images/block-storage-services.png
:width: 90%
#. Review the System Volumes by navigating to "Admin -> System -> Volumes". You should see a volume type called "sio_thin" with the following extra specs.
.. image:: images/volume-type.png
:width: 70%
#. In the ScaleIO GUI (see :ref:`Install ScaleIO GUI section <scaleiogui>`), enter the IP address of the primary controller node, username `admin`, and the password you entered in the Fuel UI.
#. Connect to the ScaleIO cluster in the ScaleIO GUI (see :ref:`Install ScaleIO GUI section <scaleiogui>`). In case of a new ScaleIO cluster deployment, use the IP address of the master ScaleIO MDM (initially it is the controller node with the lowest IP address, but the master MDM can later switch to another controller), the username `admin`, and the password you entered in the Fuel UI.
#. Once logged in, verify that it successfully reflects the ScaleIO resources:
.. image:: images/scaleio-cp.png
:width: 80%
#. Click on the "Backend" tab and verify all SDS nodes:
#. In case of a new ScaleIO cluster deployment, click on the "Backend" tab and verify all SDS nodes:
.. image:: images/scaleio-sds.png
:width: 90%
#. Create a new OpenStack volume using the "sio_thin" volume type.
#. Create a new OpenStack volume (ScaleIO backend is used by default).
#. In the ScaleIO GUI, you will see that there is one volume defined but none have been mapped yet.
@ -128,3 +159,42 @@ Once the OpenStack cluster is setup, we can make use of ScaleIO volumes. This is
.. image:: images/sio-volume-mapped.png
:width: 20%
Troubleshooting
---------------
1. Cluster deployment fails.
* Verify network settings.
* Ensure that the nodes have internet access.
* Ensure that there are at least 3 nodes with SDS in the cluster. All Compute nodes play the SDS role; Controller nodes play the SDS role if the option 'Controller as Storage' is enabled in the Plugin's settings.
* For the nodes that play the SDS role, ensure that the disks listed in the Plugin's settings 'Storage devices' and 'XtremCache devices' are unallocated and that their sizes are greater than 100GB.
* Ensure that controller nodes have at least 3GB RAM.
2. Deploying changes fails with timeout errors after removing a controller node (only if there were 3 controllers in the cluster).
* Connect via SSH to one of the controller nodes
* Get MDM IPs:
::
cat /etc/environment | grep SCALEIO_mdm_ips
* Request ScaleIO cluster state
::
scli --mdm_ip <ip_of_alive_mdm> --query_cluster
* If the cluster is in Degraded mode and one of the Slave MDMs is disconnected, switch the cluster into '1_node' mode:
::
scli --switch_cluster_mode --cluster_mode 1_node
--remove_slave_mdm_ip <ips_of_slave_mdms>
--remove_tb_ip <ips_of_tie_breakers>
Where ips_of_slave_mdms and ips_of_tie_breakers are comma separated lists
of slave MDM and Tie Breaker IPs respectively (the IPs should be taken from
the query_cluster command above).
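For example, with hypothetical addresses for one disconnected slave MDM and one tie breaker:

::

  scli --switch_cluster_mode --cluster_mode 1_node --remove_slave_mdm_ip 192.168.0.5 --remove_tb_ip 192.168.0.6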
3. ScaleIO cluster does not see a new SDS after deploying a new Compute node.
Run the update_hosts task on the controller nodes manually from the Fuel master node, e.g. 'fuel --env 5 node --node-id 1,2,3 --task update_hosts'. This is needed because Fuel does not trigger the plugin's tasks after Compute node deployment.
4. ScaleIO cluster has SDS/SDC components in a disconnected state after node deletion.
See the previous point.
5. Other issues.
Ensure that the ScaleIO cluster is operational and that a storage pool and a protection domain are available. For more details see the ScaleIO user guide.

Binary file not shown.

Before

Width:  |  Height:  |  Size: 18 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 257 KiB

After

Width:  |  Height:  |  Size: 153 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 24 KiB

After

Width:  |  Height:  |  Size: 200 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 174 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 191 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 46 KiB

After

Width:  |  Height:  |  Size: 268 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 49 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 330 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 261 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 241 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 202 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 55 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 132 KiB

After

Width:  |  Height:  |  Size: 204 KiB

View File

@ -1,6 +1,6 @@
========================================
Guide to the ScaleIO Plugin for Fuel 6.1
========================================
===================================================
Guide to the ScaleIOv2.0 Plugin for Fuel 8.0
===================================================
Plugin Guide
============

View File

@ -3,33 +3,32 @@
Installation Guide
==================
Install the Plugin
------------------
To install the ScaleIO Fuel plugin:
Install from `Fuel Plugins Catalog`_
------------------------------------
To install the ScaleIOv2.0 Fuel plugin:
#. Download it from the `Fuel Plugins Catalog`_
#. Copy the *rpm* file to the Fuel Master node:
::
[root@home ~]# scp fuel-plugin-scaleio-1.0-1.0.1-1.noarch.rpm
[root@home ~]# scp scaleio-2.1-2.1.0-1.noarch.rpm
root@fuel-master:/tmp
#. Log into Fuel Master node and install the plugin using the
`Fuel CLI <https://docs.mirantis.com/openstack/fuel/fuel-6.1/user-guide.html#using-fuel-cli>`_:
`Fuel CLI <https://docs.mirantis.com/openstack/fuel/fuel-8.0/pdf/Fuel-8.0-UserGuide.pdf#Fuel Plugins CLI>`_:
::
[root@fuel-master ~]# fuel plugins --install
/tmp/fuel-plugin-scaleio-1.0-1.0.1-1.noarch.rpm
/tmp/scaleio-2.1-2.1.0-1.noarch.rpm
#. Verify that the plugin is installed correctly:
::
[root@fuel-master ~]# fuel plugins
id | name | version | package_version
---|-----------------------|---------|----------------
9 | fuel-plugin-scaleio | 1.0.1 | 2.0.0
1 | scaleio | 2.1.0 | 3.0.0
.. _Fuel Plugins Catalog: https://www.mirantis.com/products/openstack-drivers-and-plugins/fuel-plugins/

View File

@ -3,7 +3,11 @@ Introduction
Purpose
-------
This document will guide you through the steps of install, configure and use of the **ScaleIO Plugin** for Fuel. The ScaleIO Plugin is used to deploy and configure a ScaleIO cluster as a backend for an OpenStack environment.
This document will guide you through the steps to install, configure, and use the **ScaleIOv2.0 Plugin** for Fuel.
The ScaleIO Plugin is used to:
* deploy and configure a ScaleIO cluster as a volume backend for an OpenStack environment
* configure an OpenStack environment to use an existing ScaleIO cluster as a volume backend
ScaleIO Overview
----------------
@ -17,6 +21,7 @@ With ScaleIO, any administrator can add, move, or remove servers and capacity on
ScaleIO natively supports all leading Linux distributions and hypervisors. It works agnostically with any solid-state drive (SSD) or hard disk drive (HDD) regardless of type, model, or speed.
ScaleIO Components
------------------
**ScaleIO Data Client (SDC)** is a lightweight block device driver that exposes ScaleIO shared block volumes to applications. The SDC runs on the same server as the application. This enables the application to issue an IO request and the SDC fulfills it regardless of where the particular blocks physically reside. The SDC communicates with other nodes over a TCP/IP-based protocol, so it is fully routable.
@ -27,11 +32,11 @@ ScaleIO Components
**ScaleIO Gateway** is the HTTP/HTTPS REST endpoint. It is the primary endpoint used by OpenStack to actuate commands against ScaleIO. Due to its stateless nature, we can have multiples instances and easily balance the load.
**Xtrem Cache (RFCache)** is the component enabling caching on PCI flash cards and/or SSDs, thus accelerating reads from the SDS's HDD devices. It is deployed together with the SDS component.
ScaleIO Cinder Driver
---------------------
ScaleIO includes a Cinder driver, which interfaces between ScaleIO and OpenStack, and presents volumes to OpenStack as block devices which are available for block storage. It also includes an OpenStack Nova driver, for handling compute and instance volume related operations. The ScaleIO driver executes the volume operations by communicating with the backend ScaleIO MDM through the ScaleIO Gateway.
ScaleIO Cinder and Nova Drivers
-------------------------------
ScaleIO includes a Cinder driver, which interfaces between ScaleIO and OpenStack and presents volumes to OpenStack as block devices available for block storage. It also includes an OpenStack Nova driver for handling compute and instance volume related operations. The ScaleIO driver executes the volume operations by communicating with the backend ScaleIO MDM through the ScaleIO Gateway.
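For illustration only, the legacy Fuel 6.1 manifests removed in this change generated a minimal ``/etc/cinder/cinder_scaleio.config`` for the Cinder driver along the following lines (all values are placeholders, not a reference configuration for the current plugin):

::

  [scaleio]
  rest_server_ip=<management VIP>
  rest_server_port=4443
  rest_server_username=admin
  rest_server_password=<gateway password>
  protection_domain_name=<protection domain>
  storage_pools=<protection domain>:<storage pool>
  storage_pool_name=<storage pool>
  round_volume_capacity=True
  force_delete=True
  verify_server_certificate=False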
Requirements
@ -40,13 +45,19 @@ Requirements
========================= ===============
Requirement Version/Comment
========================= ===============
Mirantis OpenStack 6.1
Mirantis OpenStack 8.0
========================= ===============
* This plugin will deploy an EMC ScaleIO 1.32 cluster on the available nodes and replace the default OpenStack volume backend by ScaleIO.
Limitations
-----------
Currently, this plugin is **only** compatible with Mirantis OpenStack 6.1 and CentOS 6.5 as the base OS.
1. The plugin is only compatible with Mirantis Fuel 8.0.
2. The plugin supports Ubuntu environments only.
3. Only a hyper-converged environment is supported - there are no separate ScaleIO Storage nodes.
4. Multi storage backend is not supported.
5. It is not possible to use different backends for persistent and ephemeral volumes.
6. Disks for SDS-es should be unallocated before deployment via the Fuel UI or CLI.
7. MDMs and Gateways are deployed together and only onto controller nodes.
8. Adding and removing node(s) to/from the OpenStack cluster won't re-configure ScaleIO.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

View File

@ -1,8 +1,57 @@
attributes:
metadata:
# Settings group can be one of "general", "security", "compute", "network",
# "storage", "logging", "openstack_services" and "other".
group: 'storage'
existing_cluster:
type: "checkbox"
value: false
label: "Use existing ScaleIO."
description: "Do not deploy ScaleIO cluster, just use existing cluster."
weight: 10
gateway_ip:
type: "text"
value: ""
label: "Gateway IP address"
description: "Cinder and Nova use it for requests to ScaleIO."
weight: 20
regex:
source: '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$'
error: "Gateway address is requried parameter"
restrictions:
- condition: "settings:scaleio.existing_cluster.value == false"
action: hide
gateway_port:
type: "text"
value: "4443"
label: "Gateway port"
description: "Cinder and Nova use it for requests to ScaleIO."
weight: 25
regex:
source: '^[0-9]+$'
error: "Gateway port is required parameter"
restrictions:
- condition: "settings:scaleio.existing_cluster.value == false"
action: hide
gateway_user:
type: "text"
value: "admin"
label: "Gateway user"
description: "Type a user name for the gateway"
weight: 30
regex:
source: '^\w+$'
restrictions:
- condition: "settings:scaleio.existing_cluster.value == false"
action: hide
password:
type: "password"
weight: 10
weight: 40
value: ""
label: "Admin password"
description: "Type ScaleIO Admin password"
@ -10,34 +59,204 @@ attributes:
source: '^(?=.*[a-z])(?=.*[A-Z])(?=.*\d).{8,15}$'
error: "You must provide a password with between 8 and 15 characters, one uppercase, and one number"
gw_password:
type: "password"
weight: 30
value: ""
label: "Gateway password"
description: "Type a password for the gateway"
protection_domain:
type: "text"
value: "default"
label: "Protection domain"
description: "Name of first protection domain. Next domains will get names like default_2, default_3."
weight: 70
regex:
source: '^(?=.*[a-z])(?=.*[A-Z])(?=.*\d).{8,15}$'
error: "You must provide a password with between 8 and 15 characters, one uppercase, and one number"
source: '^(\w+){1}((,){1}(?=\w+))*'
error: "Can contain characters, numbers and underlines"
pool_size:
protection_domain_nodes:
type: "text"
value: "100"
label: "Storage pool size (in GB)"
description: "Amount in GB that each node will contribute to the storage pool. Please make sure all nodes have enough disk space to allocate the provided amount."
weight: 55
label: "Maximum number of nodes in one protection domain"
description:
If the number of nodes gets larger than this threshold, a new protection domain will be created.
Note, in that case at least 3 new nodes with the Storage role need to be added to make the new domain operational.
weight: 75
regex:
source: '^([1-9]\d\d|[1-9]\d{3,})$'
error: 'You must provide a number greater than 100'
source: '^[1-9]{1}[0-9]*$'
error: "Should be number that equal or larger than 1"
restrictions:
- condition: "settings:scaleio.existing_cluster.value == true"
action: hide
storage_pools:
type: "text"
value: "default"
label: "Storage pools"
description:
Comma separated list of pools for splitting devices between them.
It can be just one element if all devices belong to the same pool.
weight: 80
regex:
source: '^(\w+){1}((,){1}(?=\w+))*'
error: "Can contain characters, numbers and underlines"
restrictions:
- condition: "settings:scaleio.existing_cluster.value == true or cluster:fuel_version == '9.0'"
action: hide
existing_storage_pools:
type: "text"
value: "default"
label: "Storage pools"
description: "Storage pools which are allowed to be used in new Cloud."
weight: 90
regex:
source: '^(\w+){1}((,){1}(?=\w+))*'
error: "Can contain characters, numbers and underlines"
restrictions:
- condition: "settings:scaleio.existing_cluster.value == false"
action: hide
device_paths:
type: "text"
value: ""
label: "Storage devices"
description: "Comma separated list of devices, e.g. /dev/sdb,/dev/sdc."
weight: 100
regex:
source: '^(/[a-zA-Z0-9:-_]+)+(,(/[a-zA-Z0-9:-_]+)+)*$'
error: 'List of paths is incorrect. It is a comma separated list, e.g. /dev/sdb,/dev/sdc'
restrictions:
- condition: "settings:scaleio.existing_cluster.value == true or cluster:fuel_version == '9.0'"
action: hide
sds_on_controller:
type: "checkbox"
value: true
label: "Controller as Storage"
description: "Setup SDS-es on controller nodes."
weight: 105
restrictions:
- condition: "settings:scaleio.existing_cluster.value == true or cluster:fuel_version == '9.0'"
action: hide
provisioning_type:
type: "radio"
value: "thin"
label: "Provisioning type"
description: "Thin/Thick provisioning for ephemeral and persistent volumes."
weight: 110
values:
- data: 'thin'
label: 'Thin provisioning'
description: "Thin provisioning for ephemeral and persistent volumes."
- data: 'thick'
label: 'Thick provisioning'
description: "Thick provisioning for ephemeral and persistent volumes."
restrictions:
- condition: "settings:scaleio.existing_cluster.value == true"
action: hide
checksum_mode:
type: "checkbox"
value: false
label: "Checksum mode"
description:
Checksum protection. ScaleIO protects data in-flight by calculating and validating the checksum value for the payload at both ends.
Note, the checksum feature may have a minor effect on performance.
ScaleIO utilizes hardware capabilities for this feature, where possible.
weight: 120
restrictions:
- condition: "settings:scaleio.existing_cluster.value == true"
action: hide
spare_policy:
type: "text"
value: '10'
label: "Spare policy"
description: "% out of total space"
weight: 130
regex:
source: '^[0-9]{1,2}$'
error: "Value could be between 0 and 99"
restrictions:
- condition: "settings:scaleio.existing_cluster.value == true"
action: hide
zero_padding:
type: "checkbox"
value: true
label: "Enable Zero Padding for Storage Pools"
description: "New volumes will be zeroed if the option enabled."
weight: 140
restrictions:
- condition: "settings:scaleio.existing_cluster.value == true"
action: hide
scanner_mode:
type: "checkbox"
value: false
label: "Background device scanner"
description: "This options enables the background device scanner on the devices in device only mode."
weight: 150
restrictions:
- condition: "settings:scaleio.existing_cluster.value == true"
action: hide
rfcache_devices:
type: "text"
value: ""
label: "XtremCache devices"
description: "List of SDS devices for SSD caching. Cache is disabled if list empty."
weight: 160
regex:
source: '^(/[a-zA-Z0-9:-_]+)*(,(/[a-zA-Z0-9:-_]+)+)*$'
error: 'List of paths is incorrect. It can be either empty or a comma separated list, e.g. /dev/sdb,/dev/sdc'
restrictions:
- condition: "settings:scaleio.existing_cluster.value == true or cluster:fuel_version == '9.0'"
action: hide
cached_storage_pools:
type: "text"
value: ""
label: "XtremCache storage pools"
description: "List of storage pools which should be cached with XtremCache."
weight: 170
regex:
source: '^(\w+)*((,){1}(?=\w+))*'
error: 'List of storage pools is incorrect. It can be either empty or a comma separated list, e.g. pool1,pool2'
restrictions:
- condition: "settings:scaleio.existing_cluster.value == true or cluster:fuel_version == '9.0'"
action: hide
capacity_high_alert_threshold:
type: "text"
value: '80'
label: "Capacity high priority alert"
description: "Threshold of the non-spare capacity of the Storage Pool that will trigger a high-priority alert, in percentage format"
weight: 180
regex:
source: '^[0-9]{1,2}$'
error: "Value could be between 0 and 99"
restrictions:
- condition: "settings:scaleio.existing_cluster.value == true"
action: hide
capacity_critical_alert_threshold:
type: "text"
value: '90'
label: "Capacity critical priority alert"
description: "Threshold of the non-spare capacity of the Storage Pool that will trigger a critical-priority alert, in percentage format"
weight: 190
regex:
source: '^[0-9]{1,2}$'
error: "Value could be between 0 and 99"
restrictions:
- condition: "settings:scaleio.existing_cluster.value == true"
action: hide
version:
type: "select"
weight: 60
value: "1.32"
weight: 200
value: "2.0"
label: "Version"
description: "Select the ScaleIO version you wish to install"
description: "Select the ScaleIO version you wish to install. The only version 2.0 is supported for now."
values:
- data: "1.32"
label: "1.32"
- data: "2.0"
label: "2.0 (Beta)"
label: "2.0"

View File

@ -1,30 +1,48 @@
# Plugin name
name: scaleio
# Human-readable name for your plugin
title: ScaleIO plugin
title: ScaleIOv2.0 plugin
# Plugin version
version: '1.0.2'
version: '2.1.0'
# Description
description: This plugin deploys and enables EMC ScaleIO as the block storage backend
description: This plugin deploys and enables EMC ScaleIO ver. 2.0 as the block storage backend
# Required fuel version
fuel_version: ['6.1']
fuel_version: ['7.0', '8.0']
# Specify license of your plugin
licenses: ['Apache License Version 2.0']
# Specify author or company name
authors: ['Adrian Moreno Martinez, EMC', 'Magdy Salem, EMC']
authors: ['EMC']
# A link to the plugin's page
homepage: 'https://github.com/openstack/fuel-plugin-scaleio'
homepage: 'https://github.com/cloudscaling/fuel-plugin-scaleio'
# Specify a group which your plugin implements, possible options:
# network, storage, storage::cinder, storage::glance, hypervisor
groups: ['storage']
# The plugin is compatible with releases in the list
releases:
- os: centos
- os: ubuntu
version: 2014.2.2-6.1
mode: ['ha', 'multinode']
deployment_scripts_path: deployment_scripts/
repository_path: repositories/centos
repository_path: repositories/ubuntu
- os: ubuntu
version: 2015.1.0-7.0
mode: ['ha']
deployment_scripts_path: deployment_scripts/
repository_path: repositories/ubuntu
- os: ubuntu
version: liberty-8.0
mode: ['ha']
deployment_scripts_path: deployment_scripts/
repository_path: repositories/ubuntu
# Version of plugin package
package_version: '2.0.0'
package_version: "3.0.0"

View File

@ -3,15 +3,78 @@
# Add here any actions which are required before the plugin build
# like packages building, packages downloading from mirrors and so on.
# The script should return 0 if there were no errors.
#
# Define environment variable:
# - FORCE_DOWNLOAD - to force package downloading
# - FORCE_CLONE - to force re-cloning of puppet git repositories
set -eux
ROOT="$(dirname `readlink -f $0`)"
RPM_REPO="${ROOT}"/repositories/centos/
RELEASE=${RELEASE_TAG:-"v0.3"}
# Download ScaleIO for RHEL6 (zip file)
wget -nv ftp://ftp.emc.com/Downloads/ScaleIO/ScaleIO_RHEL6_Download.zip -O /tmp/ScaleIO_RHEL6_Download.zip
# Unzip files
unzip -o /tmp/ScaleIO_RHEL6_Download.zip -d /tmp/scaleio/
# Copy RPM files to the repository
cp /tmp/scaleio/ScaleIO_*_RHEL6*/*.rpm /tmp/scaleio/ScaleIO_*_Gateway_for_Linux*/*.rpm /tmp/scaleio/ScaleIO_*_GUI_for_Linux*/*.rpm "${RPM_REPO}"
#TODO: use ftp.emc.com
BASE_REPO_URL="http://ec2-52-37-140-129.us-west-2.compute.amazonaws.com"
BASE_PUPPET_URL="https://github.com/cloudscaling"
##############################################################################
# Download packages for plugin
##############################################################################
PLATFORMS=(ubuntu centos)
PLATFORMS_PKG_SUFFIX=(deb rpm)
PLATFORMS_REPO_URL_SUFFIX=("pool/main/e" "centos/x86_64/RPMS")
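# The three arrays above are parallel: index r selects a platform together with
# its package extension and its repository path layout.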
for r in {0..1}
do
platform=${PLATFORMS[$r]}
repo_suffix=${PLATFORMS_REPO_URL_SUFFIX[$r]}
pkg_suffix=${PLATFORMS_PKG_SUFFIX[$r]}
repo_url="$BASE_REPO_URL/$platform/$repo_suffix/"
destination="./repositories/$platform"
components=`curl --silent "$repo_url" | grep -o 'emc-scaleio-\w\+' | sort| uniq`
for i in $components;
do
packages=`curl --silent "$repo_url$i/" | grep -o "$i[a-zA-Z0-9_.-]\+\.$pkg_suffix" | sort | uniq`
for p in $packages
do
if [[ ! -f "$destination/$p" || ! -z "${FORCE_DOWNLOAD+x}" ]]
then
wget -P "$destination/" "$repo_url/$i/$p"
fi
done
done
done
##############################################################################
# Download required puppet modules
##############################################################################
GIT_REPOS=(puppet-scaleio puppet-scaleio-openstack)
DESTINATIONS=(scaleio scaleio_openstack)
for r in {0..1}
do
puppet_url="$BASE_PUPPET_URL/${GIT_REPOS[$r]}"
destination="./deployment_scripts/puppet/modules/${DESTINATIONS[$r]}"
if [[ ! -d "$destination" || ! -z "${FORCE_CLONE+x}" ]]
then
if [ ! -z "${FORCE_CLONE+x}" ]
then
rm -rf "$destination"
fi
git clone "$puppet_url" "$destination"
pushd "$destination"
git checkout "tags/$RELEASE"
popd
else
if [ -z "${SKIP_PULL+x}" ]
then
pushd "$destination"
git checkout "tags/$RELEASE"
popd
fi
fi
done
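# Note (illustrative): this hook is normally invoked by the Fuel Plugin Builder
# during packaging; assuming fpb is installed, a build would typically be run as:
#   RELEASE_TAG=v0.3 fpb --build .
# Export FORCE_DOWNLOAD=1 and/or FORCE_CLONE=1 beforehand to refresh the
# downloaded packages and the cloned puppet modules.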

View File

@ -1,55 +0,0 @@
- role: ['primary-controller']
stage: post_deployment/2000
type: puppet
parameters:
puppet_manifest: puppet/manifests/check_environment_configuration.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 360
- role: '*'
stage: post_deployment/2050
type: puppet
parameters:
puppet_manifest: puppet/manifests/install_scaleio.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 1800
- role: '*'
stage: post_deployment/2100
type: puppet
parameters:
puppet_manifest: puppet/manifests/configure_gateway.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 600
- role: ['primary-controller', 'controller']
stage: post_deployment/2150
type: puppet
parameters:
puppet_manifest: puppet/manifests/enable_ha.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 600
- role: ['compute']
stage: post_deployment/2200
type: puppet
parameters:
puppet_manifest: puppet/manifests/configure_nova.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 600
- role: ['primary-controller', 'controller']
stage: post_deployment/2250
type: puppet
parameters:
puppet_manifest: puppet/manifests/configure_cinder.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 600
- role: ['primary-controller']
stage: post_deployment/2300
type: puppet
parameters:
puppet_manifest: puppet/manifests/create_volume_type.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 600