Documentation: update and conversion into rst format
Change-Id: I257b2da25908f1938e5bf078a21a1ce17ddeb593
README.md

@@ -14,7 +14,7 @@ Requirements

 | Requirement                      | Version/Comment |
 |:---------------------------------|:----------------|
-| Mirantis OpenStack compatibility | >= 6.1          |
+| Mirantis OpenStack compatibility | >= 7.0          |

 Limitations
 -----------
@@ -54,19 +54,19 @@ To install EMC VNX plugin, follow these steps:

    that. If you do not have the Fuel Master node yet, see
    [Quick Start Guide](https://software.mirantis.com/quick-start/):

-       # scp emc_vnx-1.0-1.0.0-0.noarch.rpm root@<Fuel_master_ip>:/tmp
+       # scp emc_vnx-2.0-2.0.0-0.noarch.rpm root@<Fuel_master_ip>:/tmp

 3. Log into the Fuel Master node. Install the plugin:

        # cd /tmp
-       # fuel plugins --install emc_vnx-1.0-1.0.0-0.noarch.rpm
+       # fuel plugins --install emc_vnx-2.0-2.0.0-0.noarch.rpm

 4. Check if the plugin was installed successfully:

        # fuel plugins
        id | name    | version | package_version
        ---|---------|---------|----------------
-       1  | emc_vnx | 1.0.0   | 2.0.0
+       1  | emc_vnx | 2.0.0   | 2.0.0

 EMC VNX plugin configuration
 ----------------------------
@@ -95,7 +95,8 @@ This is the first release of the plugin.

 Contributors
 ------------

 Dmitry Klenov <dklenov@mirantis.com> (PM)
 Szymon Bańka <sbanka@mirantis.com> (developer)
 Piotr Misiak <pmisiak@mirantis.com> (developer)
 Dmitry Kalashnik <dkalashnik@mirantis.com> (QA engineer)
+Maciej Relewicz <mrelewicz@mirantis.com> (developer)
@@ -0,0 +1,27 @@

========
Appendix
========

Links
=====

- `Multiple pools support <https://github.com/emc-openstack/vnx-direct-driver
  /blob/master/README_ISCSI.md#multiple-pools-support>`_
- `OpenStack CLI <http://docs.openstack.org/cli-reference/content/>`_
- `Fuel Plugins CLI guide <https://docs.mirantis.com/openstack/fuel/fuel-7.0
  /user-guide.html#fuel-plugins-cli>`_

Components licenses
===================

deb packages::

    multipath-tools: GPL-2.0
    navicli-linux-64-x86-en-us: EMC Freeware Software License

rpm packages::

    kpartx: GPL+
    device-mapper-multipath: GPL+
    device-mapper-multipath-libs: GPL+
    NaviCLI-Linux-64-x86-en_US: EMC Freeware Software License
@@ -0,0 +1,40 @@

============================
EMC VNX plugin configuration
============================

1. Create an environment with the default backend for Cinder. Do not add the
   Cinder role to any node, because all Cinder services will run on the
   Controller nodes. For more information about environment creation, see
   `Mirantis OpenStack User Guide - create a new environment
   <http://docs.mirantis.com/openstack/fuel/fuel-7.0/user-guide.html#create-a-new-openstack-environment>`_.

2. Open the Settings tab of the Fuel web UI and scroll the page down. Select
   the plugin checkbox and fill in the form fields:

   .. image:: images/settings.png
      :width: 50%

   ====================  ======================================================
   Field                 Comment
   ====================  ======================================================
   Username/password     Access credentials configured on EMC VNX.
   SP A/B IP             IP addresses of the EMC VNX Service Processors.
   Pool name (optional)  The name of the EMC VNX storage pool on which all
                         Cinder volumes will be created. The provided storage
                         pool must be available on EMC VNX. If the pool name is
                         not provided, the EMC VNX driver will use a random
                         storage pool available on EMC VNX. You can also use
                         the Volume Type OpenStack feature to create a volume
                         on a specific storage pool. For more information, see
                         `Multiple Pools Support <https://github.com/emc-openstack/vnx-direct-driver/blob/master/README_ISCSI.md#multiple-pools-support>`_.
   ====================  ======================================================

3. Adjust other environment settings to your requirements and deploy the
   environment. For more information, see `Mirantis OpenStack User Guide -
   deploy changes
   <http://docs.mirantis.com/openstack/fuel/fuel-7.0/user-guide.html#deploy-changes>`_.
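Before deploying, it can help to sanity-check the values destined for the form fields above. The following is a minimal sketch, not part of the plugin; the helper name and error messages are invented for illustration:

```python
import ipaddress

def validate_settings(username, password, sp_a_ip, sp_b_ip, pool_name=None):
    """Return a list of problems with the plugin settings; empty means OK."""
    problems = []
    if not username or not password:
        problems.append("username and password are required")
    # Both Service Processor addresses must be valid IPs.
    for label, ip in (("SP A IP", sp_a_ip), ("SP B IP", sp_b_ip)):
        try:
            ipaddress.ip_address(ip)
        except ValueError:
            problems.append("%s is not a valid IP address: %r" % (label, ip))
    # Pool name is optional; when omitted the driver picks a random pool.
    if pool_name is not None and not pool_name.strip():
        problems.append("pool name, if given, must not be blank")
    return problems
```

Running the checks locally before filling in the form avoids a failed deployment over a typo in an SP address.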
@@ -0,0 +1,32 @@

====================================
Guide to the EMC VNX Plugin for Fuel
====================================

The EMC VNX plugin for Fuel extends Mirantis OpenStack functionality by adding
support for EMC VNX arrays in Cinder using the iSCSI protocol. It replaces the
Cinder LVM driver, which is the default volume backend that uses local volumes
managed by LVM. Enabling the EMC VNX plugin in Mirantis OpenStack means that
all Cinder services run on the Controller nodes.

Requirements
============

+------------------------------------+----------------------------------------+
| Requirement                        | Version/Comment                        |
+====================================+========================================+
| Fuel                               | 7.0 and higher                         |
+------------------------------------+----------------------------------------+
| EMC VNX array                      | It should be configured and deployed.  |
|                                    | It should be reachable via one         |
|                                    | of the Mirantis OpenStack networks.    |
+------------------------------------+----------------------------------------+

Limitations
===========

#. Since only one storage network is available in Fuel 7.x on OpenStack nodes,
   multipath will bind all storage paths from EMC on one network interface.
   If this NIC fails, the communication with storage is lost.

#. The Fibre Channel driver is not supported.
@@ -0,0 +1,98 @@

==========
User Guide
==========

Creating a Cinder volume
========================

To verify that the EMC VNX plugin is properly installed, create a Cinder
volume and attach it to a newly created VM using, for example, the
`OpenStack CLI <http://docs.openstack.org/cli-reference/content/>`_ tools.

#. Create a Cinder volume. In this example, a 10GB volume was created using
   the *cinder create <volume size>* command:

   .. image:: images/create.png
      :width: 50%

#. Using the *cinder list* command (see the screenshot above), check that the
   volume was created. The output provides the ID, Status (it is available),
   Size (10), and some other parameters.

#. Now you can see how it looks on the EMC VNX. In the example environment,
   the EMC VNX SP has the IP address 192.168.200.30. Before you do this, add
   the */opt/Navisphere/bin* directory to the PATH environment variable using
   the *export PATH=$PATH:/opt/Navisphere/bin* command, and save your EMC
   credentials using the *naviseccli -addusersecurity -password <password>
   -scope 0 -user <username>* command to simplify the syntax of subsequent
   *naviseccli* commands.

   Use the *naviseccli -h <SP IP> lun -list* command to list the LUNs created
   on the EMC:

   .. image:: images/lunid.png
      :width: 50%

   In the given example, there is one LUN with ID 0 and name
   *volume-e1626d9e-82e8-4279-808e-5fcd18016720* (the naming schema is
   "volume-<Cinder volume id>"), and it is in the "Ready" state, so everything
   is fine.
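The "volume-<Cinder volume id>" naming schema makes it easy to locate a volume's LUN in scripted checks. Below is a minimal sketch; it assumes the record layout visible in the screenshot ("LOGICAL UNIT NUMBER <n>" followed by a "Name:" line), and both helper names are hypothetical:

```python
def expected_lun_name(volume_id):
    """EMC VNX LUN name for a Cinder volume, per the "volume-<id>" schema."""
    return "volume-%s" % volume_id

def find_lun_id(naviseccli_output, volume_id):
    """Scan `naviseccli ... lun -list` output for the volume's LUN ID.

    Assumes records start with a "LOGICAL UNIT NUMBER <n>" line followed
    by a "Name:  <lun name>" line, as in the screenshot above.
    """
    current_id = None
    wanted = expected_lun_name(volume_id)
    for line in naviseccli_output.splitlines():
        line = line.strip()
        if line.startswith("LOGICAL UNIT NUMBER"):
            current_id = int(line.split()[-1])
        elif line.startswith("Name:") and line.split(":", 1)[1].strip() == wanted:
            return current_id
    return None
```

With the sample output from the screenshot, `find_lun_id(output, "e1626d9e-82e8-4279-808e-5fcd18016720")` would return 0.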
#. Now create a new VM. To do this, you have to know the IDs of a Glance image
   (use the *glance image-list* command) and a network (use the *nova net-list*
   command):

   .. image:: images/glance.png
      :width: 50%

   Note the VM's ID, which is *48e70690-2590-45c7-b01d-6d69322991c3* in the
   given example.

#. Show the details of the new VM to check its state and to see on which node
   it has been created (use the *nova show <id>* command). In the output, we
   see that the VM is running on node-3 and is active:

   .. image:: images/novaShow.png
      :width: 50%

#. Attach the Cinder volume to the VM (use *nova volume-attach <VM id>
   <volume id>*) and verify using the *cinder list* command:

   .. image:: images/volumeAttach.png
      :width: 50%

#. To list the storage groups configured on EMC VNX, use the
   *naviseccli -h <SP IP> storagegroup -list* command:

   .. image:: images/storagegroup.png
      :width: 50%

   There is one "node-3" storage group with one LUN attached. The LUN has the
   local ID 0 (ALU Number) and is available as LUN 133 (HLU Number) for
   node-3. There are four iSCSI HBA/SP Pairs - one per SP-Port pair.

#. You can also check whether iSCSI sessions are active using the
   *naviseccli -h <SP IP> port -list -hba* command:

   .. image:: images/hba.png
      :width: 50%

   Look at the "Logged In" parameter of each port. In the given example, all
   four sessions are active (in the output, it looks like "Logged In: YES").

#. When you log into node-3, you can verify the following: whether iSCSI
   sessions are active, using the *iscsiadm -m session* command; whether a
   multipath device has been created by the multipath daemon, using the
   *multipath -ll* command; and whether the VM is using the multipath device,
   using the *lsof -n -p `pgrep -f <VM id>` | grep /dev/<DM device name>*
   command:

   .. image:: images/iscsiadmin.png
      :width: 50%

   In the example, there are four active sessions (the same as on the EMC) and
   the multipath device dm-2 has been created. The multipath device has four
   paths and all are running (one per iSCSI session). In the output of the
   third command, you can see that qemu is using the */dev/dm-2* multipath
   device, so everything is fine.
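The "Logged In" check above can likewise be scripted. The following is a minimal sketch that counts active sessions in the *port -list -hba* output; the helper name and the exact "Logged In:" label layout are assumptions based on the screenshot:

```python
def count_logged_in(hba_output):
    """Count ports reporting an active login in `port -list -hba` output.

    Returns (active, total), where total is the number of "Logged In:"
    lines seen and active is how many of them say YES.
    """
    total = active = 0
    for line in hba_output.splitlines():
        line = line.strip()
        if line.startswith("Logged In:"):
            total += 1
            if line.split(":", 1)[1].strip().upper() == "YES":
                active += 1
    return active, total
```

In the example environment, `count_logged_in` on the screenshot's output would report four active sessions out of four.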
@@ -0,0 +1,67 @@

==================
Installation Guide
==================

EMC VNX backend configuration
=============================

Before starting a deployment, you have to preconfigure the EMC VNX array and
connect it properly to the environment. Both EMC SP IPs and all iSCSI ports
should be available over the storage interface from the OpenStack nodes. To
learn more about EMC VNX configuration, see `the official EMC VNX series
documentation
<https://mydocuments.emc.com/DynDispatcher?prod=VNX&page=ConfigGroups_VNX>`_.

EMC VNX configuration checklist:

+------------------------------------+-------------------------+
| Item to confirm                    | Status (tick if done)   |
+====================================+=========================+
| Create username/password           |                         |
+------------------------------------+-------------------------+
| Create at least one storage pool   |                         |
+------------------------------------+-------------------------+
| Configure network:                 |                         |
|                                    |                         |
| - for A and B Service Processors   |                         |
| - for all iSCSI ports              |                         |
+------------------------------------+-------------------------+
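The network rows of the checklist can be smoke-tested before deployment. The following is a minimal sketch, assuming Python is available on a node that sits on the storage network; `reachable` and `check_portals` are hypothetical helpers, and 3260 is the standard iSCSI portal port:

```python
import socket

ISCSI_PORT = 3260  # standard iSCSI portal TCP port

def reachable(host, port=ISCSI_PORT, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_portals(portals):
    """Map each portal IP to its reachability status."""
    return {ip: reachable(ip) for ip in portals}
```

For example, `check_portals(["192.168.200.30", "192.168.200.31"])` from an OpenStack node should report both SPs as reachable before you tick the network item.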
EMC VNX plugin installation
===========================

#. Download the plugin from the `Fuel Plugins Catalog
   <https://www.mirantis.com/products/openstack-drivers-and-plugins/fuel-plugins/>`_.

#. Copy the plugin to an already installed Fuel Master node. If you do not
   have the Fuel Master node yet, see the
   `Quick Start Guide <https://software.mirantis.com/quick-start/>`_::

       # scp emc_vnx-2.0-2.0.0-1.noarch.rpm root@<the_Fuel_Master_node_IP>:/tmp

#. Log into the Fuel Master node. Install the plugin::

       # cd /tmp
       # fuel plugins --install emc_vnx-2.0-2.0.0-1.noarch.rpm

#. Check if the plugin was installed successfully::

       # fuel plugins
       id | name    | version | package_version
       ---|---------|---------|----------------
       1  | emc_vnx | 2.0.0   | 2.0.0


EMC VNX plugin removal
======================

#. Delete all environments in which the EMC VNX plugin has been enabled.

#. Uninstall the plugin::

       # fuel plugins --remove emc_vnx==2.0.0

#. Check if the plugin was uninstalled successfully::

       # fuel plugins
       id | name | version | package_version
       ---|------|---------|----------------
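When scripting the install or removal check, the `fuel plugins` table can be parsed rather than eyeballed. The following is a minimal sketch, assuming the tabular output shown above; `parse_fuel_plugins` and `plugin_installed` are hypothetical helper names, not part of the Fuel CLI:

```python
def parse_fuel_plugins(output):
    """Parse the tabular output of `fuel plugins` into a list of dicts.

    Expects a header row, a separator row of dashes, then one row per
    installed plugin, with columns separated by `|`.
    """
    lines = [l for l in output.strip().splitlines() if l.strip()]
    if len(lines) < 2:
        return []
    header = [c.strip() for c in lines[0].split("|")]
    rows = []
    for line in lines[2:]:  # skip the header and the dashed separator
        cells = [c.strip() for c in line.split("|")]
        rows.append(dict(zip(header, cells)))
    return rows

def plugin_installed(output, name, version):
    """Return True if a plugin with the given name and version is listed."""
    return any(r.get("name") == name and r.get("version") == version
               for r in parse_fuel_plugins(output))
```

Feeding it the captured output of `fuel plugins` gives an unambiguous pass/fail for automation (an empty result after removal means the uninstall succeeded).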
@@ -0,0 +1,19 @@

=============
Removal Guide
=============

EMC VNX plugin removal
======================

To uninstall the EMC VNX plugin, follow these steps:

1. Delete all environments in which the EMC VNX plugin has been enabled.

2. Uninstall the plugin::

       # fuel plugins --remove emc_vnx==2.0.0

3. Check if the plugin was uninstalled successfully::

       # fuel plugins
       id | name | version | package_version
       ---|------|---------|----------------
@@ -0,0 +1,26 @@

=====================================
Key terms, acronyms and abbreviations
=====================================

EMC VNX
    Unified, hybrid-flash storage used for virtual applications and
    cloud environments.

Cinder
    OpenStack Block Storage.

iSCSI
    Internet Small Computer System Interface. An Internet Protocol (IP)-based
    storage networking standard for linking data storage facilities. By
    carrying SCSI commands over IP networks, iSCSI is used to facilitate data
    transfers over intranets and to manage storage over long distances. iSCSI
    can be used to transmit data over local area networks (LANs), wide area
    networks (WANs), or the Internet, and can enable location-independent
    data storage and retrieval.

LVM
    LVM is a logical volume manager for the Linux kernel that manages disk
    drives and similar mass-storage devices.

LUN
    Logical Unit Number.
@@ -0,0 +1,14 @@

**************************************************
Guide to the EMC VNX Plugin version 2.0.0 for Fuel
**************************************************

This document provides instructions for installing, configuring and using
the EMC VNX plugin for Fuel.

.. contents::

.. include:: content/terms.rst
.. include:: content/description.rst
.. include:: content/installation.rst
.. include:: content/configuration.rst
.. include:: content/guide.rst
.. include:: content/appendix.rst