Address observations from plugin validation team

- explicitly specify vCenter version
- specify limitation that only clusters on 1st level of hierarchy are
  supported
- group plugins limitations into separate section
- correct spelling of fuel-plugin-builder and turn it into hyperlink
- explain how to check that plugin works as expected
- add output into test plan for install/uninstall plugin
- add step for distributed/exclusive router tests to attach networks
- edit tests accordingly to new requirements

Change-Id: I201d712a1b2e8bdefe813ff394ab8dd6d88d6b26
(cherry picked from commit c64aa2ebc8)
This commit is contained in:
Igor Zinovik 2016-02-29 18:14:23 +03:00 committed by Artem Savinov
parent fbc2256282
commit 56662178f0
10 changed files with 161 additions and 50 deletions


@@ -35,6 +35,43 @@ Steps
Expected result
###############
Output::
[root@nailgun ~]# fuel plugins --install nsxv-2.0-2.0.0-1.noarch.rpm
Loaded plugins: fastestmirror, priorities
Examining nsxv-2.0-2.0.0-1.noarch.rpm: nsxv-2.0-2.0.0-1.noarch
Marking nsxv-2.0-2.0.0-1.noarch.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package nsxv-2.0.noarch 0:2.0.0-1 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
Package Arch Version Repository Size
Installing:
nsxv-2.0 noarch 2.0.0-1 /nsxv-2.0-2.0.0-1.noarch 20 M
Transaction Summary
Install 1 Package
Total size: 20 M
Installed size: 20 M
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : nsxv-2.0-2.0.0-1.noarch 1/1
Ssh key file exists, skip generation
Verifying : nsxv-2.0-2.0.0-1.noarch 1/1
Installed:
nsxv-2.0.noarch 0:2.0.0-1
Complete!
Plugin nsxv-2.0-2.0.0-1.noarch.rpm was successfully installed.
Ensure that the plugin is installed successfully using the CLI: run the ``fuel plugins`` command and check the plugin's name, version, and package version.
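A scripted version of this check can look like the following sketch; the column layout of the ``fuel plugins`` output shown here is illustrative, not authoritative:

```shell
# Hypothetical check of `fuel plugins` output for the nsxv row.
# On a real Fuel master node you would use: plugins_list=$(fuel plugins)
plugins_list='id | name | version | package_version
---|------|---------|----------------
1  | nsxv | 2.0.0   | 4.0.0'

if echo "$plugins_list" | grep -q 'nsxv | 2.0.0'; then
    echo "nsxv 2.0.0 is registered"
else
    echo "nsxv 2.0.0 is missing" >&2
    exit 1
fi
```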
@@ -70,6 +107,38 @@ Steps
Expected result
###############
Output::
[root@nailgun ~]# fuel plugins --remove nsxv==2.0.0
Loaded plugins: fastestmirror, priorities
Resolving Dependencies
--> Running transaction check
---> Package nsxv-2.0.noarch 0:2.0.0-1 will be erased
--> Finished Dependency Resolution
Dependencies Resolved
Package Arch Version Repository Size
Removing:
nsxv-2.0 noarch 2.0.0-1 @/nsxv-2.0-2.0.0-1.noarch 20 M
Transaction Summary
Remove 1 Package
Installed size: 20 M
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Erasing : nsxv-2.0-2.0.0-1.noarch 1/1
Verifying : nsxv-2.0-2.0.0-1.noarch 1/1
Removed:
nsxv-2.0.noarch 0:2.0.0-1
Complete!
Plugin nsxv==2.0.0 was successfully removed.
Verify that the plugin is removed by running the ``fuel plugins`` command.
@@ -100,12 +169,11 @@ Steps
 #####
 1. Login to the Fuel web UI.
-2. Click on the Settings tab.
+2. Click on the Networks tab.
 3. Verify that section of NSXv plugin is present under the Other menu option.
-4. Verify that check box 'NSXv plugin' is disabled by default.
-5. Enable NSXv plugin by setting check box 'NSXv plugin' checked.
-6. Verify that all labels of 'NSXv plugin' section have the same font style and colour.
-7. Verify that all elements of NSXv plugin section are vertical aligned.
+4. Verify that check box 'NSXv plugin' is enabled by default.
+5. Verify that all labels of 'NSXv plugin' section have the same font style and colour.
+6. Verify that all elements of NSXv plugin section are vertical aligned.
Expected result
@@ -192,7 +260,7 @@ Steps
 2. Create a new environment with following parameters:
    * Compute: KVM/QEMU with vCenter
    * Networking: Neutron with tunnel segmentation
-   * Storage: Ceph RBD for volumes (Cinder)
+   * Storage: Ceph RBD for images (Glance)
    * Additional services: default
 3. Add nodes with following roles:
    * Controller


@@ -300,10 +300,11 @@ Steps
 3. Create distributed router and use it for routing between instances. Only available via CLI::

       neutron router-create rdistributed --distributed True

-4. Navigate to Project -> Compute -> Instances
-5. Launch instance VM_1 in the provider network with image TestVM-VMDK and flavor m1.tiny in the vcenter1 az.
-6. Launch instance VM_2 in the provider network with image TestVM-VMDK and flavor m1.tiny in the vcenter2 az.
-7. Verify that VMs of same provider network should communicate between each other. Send icmp ping from VM_1 to VM_2 and vice versa.
+4. Disconnect default networks private and floating from default router and connect to distributed router.
+5. Navigate to Project -> Compute -> Instances
+6. Launch instance VM_1 in the provider network with image TestVM-VMDK and flavor m1.tiny in the vcenter1 az.
+7. Launch instance VM_2 in the provider network with image TestVM-VMDK and flavor m1.tiny in the vcenter2 az.
+8. Verify that VMs of same provider network should communicate between each other. Send icmp ping from VM_1 to VM_2 and vice versa.
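Step 4 can be sketched with the ``neutron`` client roughly as follows. The names ``router04``, ``private__subnet`` and ``floating`` are assumptions about a default Fuel deployment — substitute the names from your environment; the stub function keeps this a dry run:

```shell
# Dry-run sketch: detach the default networks from the default router
# and attach them to the distributed router. Remove the stub function
# to run the real client on a controller node.
# router04, private__subnet and floating are assumed names.
neutron() { echo "neutron $*"; }   # dry-run stub

neutron router-interface-delete router04 private__subnet
neutron router-gateway-clear router04
neutron router-create rdistributed --distributed True
neutron router-gateway-set rdistributed floating
neutron router-interface-add rdistributed private__subnet
```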
Expected result
@@ -342,10 +343,11 @@ Steps
 3. Create exclusive router and use it for routing between instances. Only available via CLI::

       neutron router-create rexclusive --router_type exclusive

-4. Navigate to Project -> Compute -> Instances
-5. Launch instance VM_1 in the provider network with image TestVM-VMDK and flavor m1.tiny in the vcenter1 az.
-6. Launch instance VM_2 in the provider network with image TestVM-VMDK and flavor m1.tiny in the vcenter2 az.
-7. Verify that VMs of same provider network should communicate between each other. Send icmp ping from VM_1 to VM_2 and vice versa.
+4. Disconnect default networks private and floating from default router and connect to exclusive router.
+5. Navigate to Project -> Compute -> Instances
+6. Launch instance VM_1 in the provider network with image TestVM-VMDK and flavor m1.tiny in the vcenter1 az.
+7. Launch instance VM_2 in the provider network with image TestVM-VMDK and flavor m1.tiny in the vcenter2 az.
+8. Verify that VMs of same provider network should communicate between each other. Send icmp ping from VM_1 to VM_2 and vice versa.
Expected result


@@ -13,11 +13,11 @@ Plugin can work with VMware NSX 6.1.3, 6.1.4, 6.2.1.
 Plugin versions:

-* 2.x.x series is compatible with Fuel 8.0. Tests were done on plugin v2.0 with
-  VMware NSX 6.2.
+* 2.x.x series is compatible with Fuel 8.0. Tests were performed on plugin v2.0 with
+  VMware NSX 6.2 and vCenter 5.5.

-* 1.x.x series is compatible with Fuel 7.0. Tests were done on plugin v1.2 with
-  VMware NSX 6.1.4.
+* 1.x.x series is compatible with Fuel 7.0. Tests were performed on plugin v1.2 with
+  VMware NSX 6.1.4 and vCenter 5.5.
Throughout the documentation we use the terms "NSX" and "NSXv" interchangeably; both
terms refer to `VMware NSX virtualized network platform
@@ -36,6 +36,7 @@ Documentation contents
source/installation
source/environment
source/configuration
source/limitations
source/usage
source/release-notes
source/troubleshooting


@@ -1,7 +1,7 @@
 How to build the plugin
 =======================

-To build the plugin you first need to install fuel-plugin-build 4.0.0[1_]
+To build the plugin you first need to install fuel-plugin-builder_ 4.0.0
.. code-block:: bash
@@ -15,13 +15,13 @@ After that you can build the plugin:
 $ cd fuel-plugin-nsxv/

-puppet-librarian_ ruby package is required to installed. It is used to fetch
+librarian-puppet_ ruby package is required to be installed. It is used to fetch
 upstream fuel-library_ puppet modules that plugin use. It can be installed via
-gem package manager:
+*gem* package manager:

 .. code-block:: bash

-   $ gem install puppet-librarian
+   $ gem install librarian-puppet
.. code-block:: bash
@@ -36,6 +36,6 @@ upload to Fuel master node:
    nsxv-2.0-2.0.0-1.noarch.rpm

-.. [1] https://pypi.python.org/pypi/fuel-plugin-builder/4.0.0
-.. _puppet-librarian: https://librarian-puppet.com
+.. _fuel-plugin-builder: https://pypi.python.org/pypi/fuel-plugin-builder/4.0.0
+.. _librarian-puppet: http://librarian-puppet.com
 .. _fuel-library: https://github.com/openstack/fuel-library
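Put together, the build sequence described in this file looks roughly like the dry run below. The stub functions stand in for the real ``pip``, ``gem`` and ``fpb`` tools, and installing fuel-plugin-builder via pip is an assumption:

```shell
# Dry-run sketch of the whole build sequence; remove the stub
# functions to run the real tools.
pip() { echo "pip $*"; }
gem() { echo "gem $*"; }
fpb() { echo "fpb $*"; }

pip install fuel-plugin-builder==4.0.0   # provides the fpb command
gem install librarian-puppet             # used to fetch fuel-library modules
fpb --build fuel-plugin-nsxv/            # builds the plugin RPM
```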


@@ -99,7 +99,7 @@ Plugin contains the following settings:
    metadata proxy service.

 #. Floating IP ranges -- dash separated IP addresses allocation pool from
-   external network, e.g. "start_ip_address-end_ip_address".
+   external network, e.g. "192.168.30.1-192.168.30.200".

 #. External network CIDR -- network in CIDR notation that includes floating IP ranges.
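The relationship between the two settings can be validated mechanically; below is a rough IPv4-only sketch, where the external network CIDR ``192.168.30.0/24`` is an assumed value consistent with the example range:

```shell
# Rough check that a "start-end" floating IP range fits inside the
# external network CIDR. IPv4 only; the CIDR is an assumed example.
range="192.168.30.1-192.168.30.200"
cidr="192.168.30.0/24"

# convert a dotted-quad address to a 32-bit integer
ip2int() { local IFS=.; set -- $1; echo $(( ($1<<24) + ($2<<16) + ($3<<8) + $4 )); }

start=$(ip2int "${range%-*}")
end=$(ip2int "${range#*-}")
net=$(ip2int "${cidr%/*}")
bits=${cidr#*/}
size=$(( 1 << (32 - bits) ))

# usable host addresses exclude the network and broadcast addresses
if [ "$start" -ge $(( net + 1 )) ] && [ "$end" -le $(( net + size - 2 )) ]; then
    echo "range fits inside $cidr"
else
    echo "range is outside $cidr" >&2
fi
```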


@@ -29,23 +29,6 @@ Pay attention on which interface you assign *Public* network, OpenStack
 controllers must have connectivity with NSX Manager host through *Public*
 network since it is used as default route for packets.

-Is is worth to mention that it is not possible to use compute nodes in this
-type of cluster, because NSX switch is available only for ESXi, so it is not
-possible to pass traffic inside compute node that runs Linux and KVM. Also it
-does not matter on which network interface you assign *Private* traffic,
-because it does not flow through controllers.
-
-*Floating IP range* settings on Networks are not used by the plugin, because it
-user interface restricts specifying IP range is not within *Public* network
-range. Plugin has its own *Floating IP range* setting.
-
-.. image:: /image/floating-ip.png
-   :scale: 70 %
-
-Pay attention that Neutron L2/L3 configuration on Settings tab does not have
-effect in OpenStack cluster that uses NSXv. These settings contain settings
-for GRE tunneling which does not have an effect with NSXv.

 During deployment process plugin creates simple network topology for admin
 tenant. It creates provider network which connects tenants with transport
 (physical) network, one internal network and router that is connected to both


@@ -5,7 +5,7 @@ Installation
 #. Upload package to Fuel master node.

-#. Install the plugin with *fuel* command line tool:
+#. Install the plugin with ``fuel`` command line tool:

    .. code-block:: bash


@@ -1,6 +1,52 @@
Limitations
===========
Nested clusters are not supported
---------------------------------
vCenter inventory allows the user to form a hierarchy by organizing vSphere
entities into folders. By default clusters are created on the first level of
the hierarchy; they can then be moved into folders. The plugin supports only
clusters that are located on the first level of the hierarchy. If a cluster
that you want to use for OpenStack resides inside a folder, you must move it
to the first level of the hierarchy.
Compute node is not supported
-----------------------------
It is worth mentioning that it is not possible to use compute nodes in a
vCenter/NSX cluster: the NSX v6.x switch is available only for ESXi, so it is
not possible to pass traffic inside a compute node that runs Linux and KVM.
Public floating IP range is ignored
-----------------------------------
Fuel requires that the floating IP range lie within the *Public* IP range.
This requirement does not make sense with the NSXv plugin, because edge nodes,
not controllers, provide connectivity for virtual machines. Nevertheless a
floating IP range for the *Public* network must be assigned. The plugin
provides its own field for the floating IP range.
.. image:: /image/floating-ip.png
:scale: 70 %
Pay attention that the Neutron L2/L3 configuration on the Settings tab has no
effect in an OpenStack cluster that uses NSXv. It contains settings for GRE
tunneling, which does not apply to NSXv.
Private network is not used
---------------------------
It does not matter to which network interface you assign *Private* network
traffic, because it does not flow through the controllers. Nevertheless an IP
range for the *Private* network must be assigned.
OpenStack environment reset/deletion
------------------------------------
The Fuel NSXv plugin does not provide a cleanup mechanism when an OpenStack
environment gets reset or deleted. All logical switches and edge virtual
machines remain intact; it is up to the operator to delete them and free
resources.
Ceph block storage is not supported
-----------------------------------


@@ -1,3 +1,6 @@
.. _troubleshooting:
Troubleshooting
===============


@@ -1,6 +1,21 @@
Usage
=====
The easiest way to check that the plugin works as expected is to try to create
a network or a router with the ``neutron`` command line client:
::
[root@nailgun ~]# ssh node-4 # node-4 is a controller node
root@node-4:~# . openrc
root@node-4:~# neutron router-create r1
You can monitor plugin actions in ``/var/log/neutron/server.log`` and watch
edges appear in the ``Networking & Security -> NSX Edges`` pane of the vSphere
Web Client. If you see error messages, check the :ref:`Troubleshooting
<troubleshooting>` section.
VXLAN MTU considerations
------------------------
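As background, the VXLAN encapsulation overhead that drives the MTU recommendation can be added up from the standard header sizes:

```shell
# VXLAN encapsulation adds four outer headers to every frame.
outer_eth=14   # outer Ethernet header
outer_ip=20    # outer IPv4 header
outer_udp=8    # outer UDP header
vxlan=8        # VXLAN header
overhead=$(( outer_eth + outer_ip + outer_udp + vxlan ))

echo "VXLAN overhead: $overhead bytes"
echo "minimum transport MTU for a 1500-byte guest MTU: $(( 1500 + overhead ))"
```

NSX documentation commonly recommends at least 1600 bytes on the transport network to leave headroom for VLAN tags; treat the figure given in this section as authoritative.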
@@ -121,10 +136,3 @@ Create a healthmonitor and associate it with the pool.
 $ neutron lb-healthmonitor-create --delay 3 --type HTTP --max-retries 3
   --timeout 5 --pool http-pool
 $ neutron lb-healthmonitor-associate <healthmonitor_name> http-pool

-OpenStack environment reset/deletion
-------------------------------------
-
-Fuel NSXv plugin does not provide cleanup mechanism when OpenStack environment
-gets reset or deleted. All logical switches and edge virtual machines remain
-intact, it is up to operator to delete them and free resources.