add 7mode support and update documentation

Change-Id: Ia8c4831a37a82ec8a4503dc6e2de74e1067ea1cb
Partial-Bug: #1405186
This commit is contained in:
sbartel 2015-05-12 14:43:49 +02:00
parent bbac284d6e
commit a08978e176
8 changed files with 448 additions and 51 deletions

README.md

@ -1,45 +1,329 @@
Cinder NetApp plugin for Fuel
=============================
Overview
--------
The NetApp plugin replaces the Cinder LVM backend with the Cinder NetApp backend (the multi-backend feature is not yet implemented). LVM is the default volume backend and uses local volumes managed by LVM.
The plugin supports the following storage family modes:
- 7-Mode
- Cluster Mode
This repo contains all necessary files to build Cinder NetApp Fuel plugin.
Currently the only supported Fuel version is 6.0.
Building the plugin
-------------------
1. Clone the fuel-plugins repo from https://github.com/stackforge/fuel-plugins
2. Install Fuel Plugin Builder using the documentation from the fuel-plugins repo
3. Clone the fuel-plugin-cinder-netapp repo from https://github.com/stackforge/fuel-plugin-cinder-netapp
4. Execute ``fpb --build <path>``, where <path> is the path to the plugin's main
folder (fuel-plugin-cinder-netapp)
5. The ``cinder_netapp-1.0.0.fp`` plugin file will be created
6. Move this file to the Fuel Master node and install it using
the following command: ``fuel plugins --install cinder_netapp-1.0.0.fp``
7. The plugin is ready to use and can be enabled via the Fuel web UI ('Settings' tab)
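The steps above can be consolidated into one session. This is a sketch, not a verified transcript: it assumes network access, git and pip are available, and `<fuel_master_ip>` is a placeholder for your Fuel Master node address.

```shell
# Fetch the builder tooling and the plugin source (repo URLs from this README).
git clone https://github.com/stackforge/fuel-plugins
git clone https://github.com/stackforge/fuel-plugin-cinder-netapp
pip install fuel-plugin-builder                  # provides the fpb command

# Build the plugin package; this emits cinder_netapp-<x.x.x>.fp.
fpb --build fuel-plugin-cinder-netapp/

# Copy the package to the Fuel Master node, then install it there:
scp fuel-plugin-cinder-netapp/cinder_netapp-*.fp root@<fuel_master_ip>:/tmp
# (on the Fuel Master node)
# fuel plugins --install /tmp/cinder_netapp-<x.x.x>.fp
```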
Requirements
------------
| Requirement                                                                       | Version/Comment                        |
|-----------------------------------------------------------------------------------|----------------------------------------|
| Mirantis OpenStack compatibility                                                  | 6.0                                    |
| NetApp filer or appliance is reachable via one of the Mirantis OpenStack networks | Cluster Mode or 7-Mode storage family  |
Recommendations
---------------
None.
Limitations
-----------
Since only one NetApp backend can be set up, all Cinder nodes will share the same backend.
Installation Guide
==================
CMODE appliance
---------------
Pre setup
~~~~~~~~~
1. Using VMware ESX or VMware player, create 2 networks called VM Network and Cluster Network.
2. Untar the vsim and add it to your VMware ESX inventory/VMware Player inventory.
NOTE: The VM will have 4 NICs. The first 2 (e0a and e0b) are connected to Cluster
Network and the second 2 (e0c and e0d) are connected to the VM Network. The VM
Network should be the regular VMware vSwitch that is bridged onto the lab network.
The Cluster Network is a vSwitch that's connected to nothing. The purpose of the
Cluster Network is the following: when you have multiple vsims you want to cluster
together, they use this private network to talk to each other. The point is not in
clustering vsims (this will not be done), so this network will be unused, but you should
still create it. You should only take into consideration that e0a and e0b are connected to
a fake network so you should not use them; use e0c and e0d exclusively.
OS setup
~~~~~~~~
1. Start up the VM with the console open.
2. Press Ctrl-C when the message about the boot menu appears (you only get about 15 seconds to do this, so do not miss it).
3. Select option 4 (Clean configuration and initialize all disks).
4. Answer Yes to the next 2 questions.
5. The VM will reboot and do some work.
Cluster setup
~~~~~~~~~~~~~
1. When it asks if you want to join or create a cluster, select Create.
2. Answer Yes when it asks about a single node cluster.
3. Enter the cluster name: <cluster_name>-cluster.
4. Enter cluster base license key. Do not enter any more license keys.
5. Enter the admin password twice.
6. Cluster management interface.
Port: e0c
IP address: 192.168.4.10
Netmask: 255.255.255.128
Default gateway 192.168.4.1
DNS domain name: <name>.netapp.com
Nameserver IP: 192.168.4.1
Location: <location_name>
7. Node management interface.
Port: e0c
IP address: 192.168.4.12
Netmask: 255.255.255.128
Default gateway 192.168.4.1
8. Press enter to acknowledge the autosupport notification
Cluster configuration
~~~~~~~~~~~~~~~~~~~~~
1. You can either continue through the VMware console, or switch to SSH at this point. If you SSH, connect to the cluster management interface (in our case, that is 192.168.4.10).
2. Login at the prompt using <admin_name> and <password>.
3. Add the unassigned disks to the node:
``storage disk assign -all true -node <cluster_name>-cluster-01``
4. Create an aggregate using 10 disks:
``storage aggregate create -aggregate aggr1 -diskcount 10``
5. Create a vserver:
``vserver create -vserver <server_name>-vserver -rootvolume vol1 -aggregate aggr1 -ns-switch file -rootvolume-security-style unix``
6. Create a data LIF:
``network interface create -vserver <server_name>-vserver -lif <server_name>-data -role data -home-node <cluster_name>-cluster-01 -home-port e0d -address <192.168.4.15> -netmask <255.255.255.128>``
7. Add a rule to the default export policy:
``vserver export-policy rule create -vserver <server_name>-vserver -policyname default -clientmatch 0.0.0.0/0 -rorule any -rwrule any -superuser any -anon 0``
8. Enable NFS on the vServer:
``vserver nfs create -vserver <server_name>-vserver -access true``
9. Create a volume with some free space:
``volume create -vserver <server_name>-vserver -volume vol<volume_number> -aggregate aggr1 -size 5g -junction-path /vol<volume_number>``
7Mode appliance
---------------
Pre setup
~~~~~~~~~
1. Using VMware ESX or VMware player, create 2 networks called VM Network and Cluster Network.
2. Untar the vsim and add it to your VMware ESX inventory/VMware Player inventory.
NOTE: The VM will have 4 NICs. The first 2 (e0a and e0b) are connected to Cluster
Network and the second 2 (e0c and e0d) are connected to the VM Network. The VM
Network should be the regular VMware vSwitch that is bridged onto the lab network.
The Cluster Network is a vSwitch that's connected to nothing. The purpose of the
Cluster Network is the following: when you have multiple vsims you want to cluster
together, they use this private network to talk to each other. The point is not in
clustering vsims (this will not be done), so this network will be unused, but you should
still create it. You should only take into consideration that e0a and e0b are connected to
a fake network so you should not use them; use e0c and e0d exclusively.
OS setup
~~~~~~~~
1. Start up the VM with the console open.
2. Press Ctrl-C when the message about the boot menu appears (you only get about 15 seconds to do this, so do not miss it).
3. Select option 4 (Clean configuration and initialize all disks).
4. Answer Yes to the next 2 questions.
5. The VM will reboot and do some work.
7Mode setup
~~~~~~~~~~~
1. Enter the hostname.
2. When asked to enable IPv6, select no.
3. When asked to configure interface groups, select no.
4. Configure the 4 NICs.
IP address: 192.168.4.10
Netmask: 255.255.255.128
5. Configure the IPv4 default gateway.
6. When asked to enter the name or IP address of the administration host, leave it empty and press enter.
7. Select your time zone.
8. Enter the root directory for HTTP files [/home/http].
9. When asked whether you want to run the DNS resolver, the NIS client, or configure SAS shelves, answer no.
10. Configure the password for the admin user.
Storage configuration
~~~~~~~~~~~~~~~~~~~~~
1. You can either continue through the VMware console, or switch to SSH at this point. If you SSH, connect to the management interface (in our case, that is 192.168.4.10).
2. Login at the prompt using <admin_name> and <password>.
3. Add the unassigned disks to the node:
``disk assign all``
4. Create an aggregate using 10 disks:
``aggr create aggr<aggregate_number> 10``
5. Create a volume with some free space:
``vol create vol<volume_number> aggr<aggregate_number> 5g``
6. Add the license for NFS:
``license add <xxx>``
7. Enable NFS:
``nfs on``
8. Unexport the volume (which gets automatically exported):
``exportfs -z /vol/vol<volume_number>``
9. Export /vol/vol<volume_number> for NFS read/write and root access for servers on the Mirantis OpenStack networks:
``exportfs -p rw=<192.168.168.0/24>,root=<192.168.168.0/24>,rw=<10.194.167.0/24>,root=<10.194.167.0/24> /vol/vol<volume_number>``
10. enable httpd (needed to manage your appliance using the OnCommand System Manager):
``options httpd.admin.enable true``
``options httpd.admin.ssl.enable true``
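As an optional sanity check (not part of the original guide), the export can be verified from any host on the Mirantis OpenStack network. The filer address 192.168.4.10 and the /vol/vol1 path are the example values used above; substitute your own.

```shell
# List the filer's NFS exports; the volume exported in step 9 should appear.
showmount -e 192.168.4.10

# Optionally mount it read/write to confirm the export policy works:
mkdir -p /mnt/netapp-check
mount -t nfs 192.168.4.10:/vol/vol1 /mnt/netapp-check
touch /mnt/netapp-check/write-test && rm /mnt/netapp-check/write-test
umount /mnt/netapp-check
```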
cinder-netapp plugin installation
---------------------------------
1. Clone the fuel-plugin-cinder-netapp repo:
``git clone https://github.com/stackforge/fuel-plugin-cinder-netapp.git``
2. Install the Fuel Plugin Builder:
``pip install fuel-plugin-builder``
3. Build the cinder-netapp Fuel plugin:
``fpb --build fuel-plugin-cinder-netapp/``
4. The cinder_netapp-<x.x.x>.fp file will be created in the plugin folder (fuel-plugin-cinder-netapp)
5. Move this file to the Fuel Master node with secure copy (scp):
``scp cinder_netapp-<x.x.x>.fp root@<the_Fuel_Master_node_IP_address>:/tmp``
``cd /tmp``
6. Install the cinder_netapp plugin:
``fuel plugins --install cinder_netapp-<x.x.x>.fp``
7. The plugin is ready to use and can be enabled on the Settings tab of the Fuel web UI.
User Guide
==========
Cinder-netapp plugin configuration
----------------------------------
1) Enable the plugin on the Settings tab of the Fuel web UI
2) Enter the NetApp credentials in the 'Cinder and NetApp integration' section. NetApp parameters vary depending on the storage family mode and storage protocol selected:
a) Cluster Mode and nfs
Here is a screenshot of the fields
![Cinder-netapp Cluster Mode nfs fields](./figures/cinder-netapp-cmode-nfs-plugin.png "Cinder-netapp Cluster Mode nfs fields")
b) Cluster Mode and iscsi
Here is a screenshot of the fields
![Cinder-netapp Cluster Mode iscsi fields](./figures/cinder-netapp-cmode-iscsi-plugin.png "Cinder-netapp Cluster Mode iscsi fields")
c) 7 Mode and nfs
Here is a screenshot of the fields
![Cinder-netapp 7 Mode nfs fields](./figures/cinder-netapp-7mode-nfs-plugin.png "Cinder-netapp 7 Mode nfs fields")
d) 7 Mode and iscsi
Here is a screenshot of the fields
![Cinder-netapp 7 Mode iscsi fields](./figures/cinder-netapp-7mode-iscsi-plugin.png "Cinder-netapp 7 Mode iscsi fields")
3) Assign the Cinder role to one of the nodes
4) For more information on NetApp integration into Cinder, configuration and API issues, see [the Official Netapp Guide for Openstack](http://docs.openstack.org/juno/config-reference/content/netapp-volume-driver.html).
Deployment details
------------------
The Cinder NetApp driver replaces the LVM Cinder driver. The plugin:
- creates the nfs_shares config file
- edits the Cinder config file to use the NetApp common driver
- restarts the Cinder services
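The files the plugin manages can be sketched as follows. The share address and volume path are placeholder assumptions (the real values come from the plugin settings), and the files are written to a demo directory here rather than /etc/cinder:

```shell
# Illustrative contents of the two files the plugin manages (placeholder values).
mkdir -p /tmp/cinder-netapp-demo

# nfs_shares config file: one <filer>:<export> share per line.
cat > /tmp/cinder-netapp-demo/shares.conf <<'EOF'
192.168.4.15:/vol1
EOF

# cinder.conf fragment pointing Cinder at the NetApp common driver.
cat > /tmp/cinder-netapp-demo/cinder.conf <<'EOF'
[DEFAULT]
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = nfs
nfs_shares_config = /etc/cinder/shares.conf
EOF

# Finally the plugin restarts the volume service, e.g.:
# service cinder-volume restart
cat /tmp/cinder-netapp-demo/cinder.conf
```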
#### Prerequisites
1. NetApp appliance is deployed and configured
2. NetApp appliance is reachable via one of the MOS networks
#### Limitations
1. Cinder volume is not highly available
2. Only one Cinder node should be deployed
#### Deployment procedure
1. Create an environment with the default Cinder backend (LVM)
2. Enable the Cinder NetApp plugin on the Settings tab
3. Configure the plugin
4. Assign the Cinder role to one of the nodes
Known issues
------------
None.
Release Notes
-------------
**1.1.0**
* Add 7-Mode storage family support

**1.0.0**
* Initial release of the plugin

Accessing Cinder NetApp functionality
-------------------------------------
Please use the official OpenStack documentation to obtain more information:
- http://docs.openstack.org/juno/config-reference/content/netapp-volume-driver.html


@ -44,27 +44,24 @@ class plugin_cinder_netapp
netapp_password => $netapp_password,
netapp_server_hostname => $::fuel_settings['cinder_netapp']['netapp_server_hostname'],
volume_backend_name => $section,
netapp_server_port => $::fuel_settings['cinder_netapp']['netapp_server_port'],
netapp_size_multiplier => $::fuel_settings['cinder_netapp']['netapp_size_multiplier'],
netapp_storage_family => $::fuel_settings['cinder_netapp']['netapp_storage_family'],
netapp_storage_protocol => $::fuel_settings['cinder_netapp']['netapp_storage_protocol'],
netapp_transport_type => $::fuel_settings['cinder_netapp']['netapp_transport_type'],
netapp_vfiler => $::fuel_settings['cinder_netapp']['netapp_vfiler'],
netapp_volume_list => $::fuel_settings['cinder_netapp']['netapp_volume_list'],
netapp_vserver => $::fuel_settings['cinder_netapp']['netapp_vserver'],
expiry_thres_minutes => $::fuel_settings['cinder_netapp']['expiry_thres_minutes'],
thres_avl_size_perc_start => $::fuel_settings['cinder_netapp']['thres_avl_size_perc_start'],
thres_avl_size_perc_stop => $::fuel_settings['cinder_netapp']['thres_avl_size_perc_stop'],
nfs_shares_config => '/etc/cinder/shares.conf',
netapp_copyoffload_tool_path => $::fuel_settings['cinder_netapp']['netapp_copyoffload_tool_path'],
#netapp_controller_ips => '',
#netapp_sa_password => '',
#netapp_storage_pools => '',
#netapp_webservice_path => '/devmgr/v2',
} ~>
service { $::cinder::params::volume_service:
}
}


@ -2,9 +2,21 @@ attributes:
multibackend:
value: false
label: 'Multibackend enabled'
description: 'NetApp driver will be used as a Cinder Multibackend feature (not implemented)'
weight: 35
type: "checkbox"
netapp_storage_family:
value: "ontap_cluster"
values:
- data: "ontap_cluster"
label: "Ontap cluster"
description: "Clustered data ONTAP storage family"
- data: "ontap_7mode"
label: "Ontap 7 mode"
description: "Data ONTAP operating in 7-Mode storage family"
label: "Netapp storage family"
weight: 40
type: "radio"
netapp_login:
value: ''
label: 'Username'
@ -35,9 +47,113 @@ attributes:
description: 'The NFS share path (e.g. /vol2)'
weight: 75
type: "text"
netapp_transport_type:
value: "http"
values:
- data: "http"
label: "http"
description: ""
- data: "https"
label: "https"
description: ""
label: "Netapp transport type"
description: 'The transport protocol used for communication with the storage system or proxy server'
weight: 76
type: "radio"
netapp_server_port:
value: '80'
label: 'NetApp server port'
description: 'The TCP port to use for communication with the storage system or proxy server'
weight: 77
type: "text"
netapp_storage_protocol:
value: "nfs"
values:
- data: "nfs"
label: "nfs"
description: ""
- data: "iscsi"
label: "iscsi"
description: ""
label: "Netapp storage protocol"
description: 'The storage protocol to be used on the data path with the storage system'
weight: 78
type: "radio"
netapp_vserver:
value: ''
label: 'Vserver'
description: 'This option specifies the virtual storage server (Vserver) name on the storage cluster on which provisioning of block storage volumes should occur. (Cluster-Mode only and mandatory for NFS Storage protocol)'
weight: 75
type: "text"
restrictions:
- condition: "settings:cinder_netapp.netapp_storage_family.value != 'ontap_cluster'"
action: "hide"
expiry_thres_minutes:
value: '720'
label: 'Expiry thres minutes'
description: 'This option specifies the threshold for last access time for images in the NFS image cache (NFS protocol only)'
weight: 85
type: "text"
restrictions:
- condition: "settings:cinder_netapp.netapp_storage_protocol.value != 'nfs'"
action: "hide"
netapp_copyoffload_tool_path:
value: ''
label: 'NetApp copyoffload tool path'
description: '(Optional) This option specifies the path of the NetApp copy offload tool binary (NFS protocol only)'
weight: 85
type: "text"
restrictions:
- condition: "settings:cinder_netapp.netapp_storage_protocol.value != 'nfs'"
action: "hide"
thres_avl_size_perc_start:
value: '20'
label: 'Thres avl size perc start'
description: 'The percentage of available space from which the NFS image cache will be cleaned (NFS protocol only)'
weight: 85
type: "text"
restrictions:
- condition: "settings:cinder_netapp.netapp_storage_protocol.value != 'nfs'"
action: "hide"
thres_avl_size_perc_stop:
value: '60'
label: 'Thres avl size perc stop'
description: 'The percentage of available space from which the driver will stop cleaning the NFS image cache (NFS protocol only)'
weight: 85
type: "text"
restrictions:
- condition: "settings:cinder_netapp.netapp_storage_protocol.value != 'nfs'"
action: "hide"
netapp_size_multiplier:
value: '1.2'
label: 'NetApp size multiplier'
description: 'Multiplication factor used to check available space on the virtual storage server (iSCSI configuration only)'
weight: 90
type: "text"
restrictions:
- condition: "settings:cinder_netapp.netapp_storage_protocol.value != 'iscsi'"
action: "hide"
netapp_vfiler:
value: ''
label: 'Netapp vfiler'
description: '(Optional) The vFiler unit on which provisioning of block storage volumes will be done (iSCSI configuration in 7-Mode only)'
weight: 90
type: "text"
restrictions:
- condition: "settings:cinder_netapp.netapp_storage_protocol.value != 'iscsi' or settings:cinder_netapp.netapp_storage_family.value != 'ontap_7mode'"
action: "hide"
netapp_volume_list:
value: ''
label: 'Netapp volume list'
description: '(Optional) This option is used to restrict provisioning to the specified controller volumes (iSCSI configuration in 7-Mode only)'
weight: 90
type: "text"
restrictions:
- condition: "settings:cinder_netapp.netapp_storage_protocol.value != 'iscsi' or settings:cinder_netapp.netapp_storage_family.value != 'ontap_7mode'"
action: "hide"



@ -3,7 +3,7 @@ name: cinder_netapp
# Human-readable name for your plugin
title: Cinder and NetApp integration
# Plugin version
version: 1.0.1
version: 1.1.0
# Description
description: Enables the use of the NetApp driver as a Cinder backend
# Required fuel version