diff --git a/doc/test/source/test_suite_smoke.rst b/doc/test/source/test_suite_smoke.rst index b896eef..7fb28a5 100644 --- a/doc/test/source/test_suite_smoke.rst +++ b/doc/test/source/test_suite_smoke.rst @@ -35,6 +35,43 @@ Steps Expected result ############### +Output:: + + [root@nailgun ~]# fuel plugins --install nsxv-2.0-2.0.0-1.noarch.rpm + Loaded plugins: fastestmirror, priorities + Examining nsxv-2.0-2.0.0-1.noarch.rpm: nsxv-2.0-2.0.0-1.noarch + Marking nsxv-2.0-2.0.0-1.noarch.rpm to be installed + Resolving Dependencies + --> Running transaction check + ---> Package nsxv-2.0.noarch 0:2.0.0-1 will be installed + --> Finished Dependency Resolution + + Dependencies Resolved + + + Package Arch Version Repository Size + Installing: + nsxv-2.0 noarch 2.0.0-1 /nsxv-2.0-2.0.0-1.noarch 20 M + + Transaction Summary + Install 1 Package + + Total size: 20 M + Installed size: 20 M + Downloading packages: + Running transaction check + Running transaction test + Transaction test succeeded + Running transaction + Installing : nsxv-2.0-2.0.0-1.noarch 1/1 + Ssh key file exists, skip generation + Verifying : nsxv-2.0-2.0.0-1.noarch 1/1 + + Installed: + nsxv-2.0.noarch 0:2.0.0-1 + + Complete! + Plugin nsxv-2.0-2.0.0-1.noarch.rpm was successfully installed. Ensure that plugin is installed successfully using cli, run command 'fuel plugins'. Check name, version and package version of plugin. 
@@ -70,6 +107,38 @@ Steps Expected result ############### +Output:: + + [root@nailgun ~]# fuel plugins --remove nsxv==2.0.0 + Loaded plugins: fastestmirror, priorities + Resolving Dependencies + --> Running transaction check + ---> Package nsxv-2.0.noarch 0:2.0.0-1 will be erased + --> Finished Dependency Resolution + + Dependencies Resolved + + Package Arch Version Repository Size + Removing: + nsxv-2.0 noarch 2.0.0-1 @/nsxv-2.0-2.0.0-1.noarch 20 M + + Transaction Summary + Remove 1 Package + + Installed size: 20 M + Downloading packages: + Running transaction check + Running transaction test + Transaction test succeeded + Running transaction + Erasing : nsxv-2.0-2.0.0-1.noarch 1/1 + Verifying : nsxv-2.0-2.0.0-1.noarch 1/1 + + Removed: + nsxv-2.0.noarch 0:2.0.0-1 + + Complete! + Plugin nsxv==2.0.0 was successfully removed. Verify that plugin is removed, run command 'fuel plugins'. @@ -100,12 +169,11 @@ Steps ##### 1. Login to the Fuel web UI. - 2. Click on the Settings tab. + 2. Click on the Networks tab. 3. Verify that section of NSXv plugin is present under the Other menu option. - 4. Verify that check box 'NSXv plugin' is disabled by default. - 5. Enable NSXv plugin by setting check box 'NSXv plugin' checked. - 6. Verify that all labels of 'NSXv plugin' section have the same font style and colour. - 7. Verify that all elements of NSXv plugin section are vertical aligned. + 4. Verify that check box 'NSXv plugin' is enabled by default. + 5. Verify that all labels of 'NSXv plugin' section have the same font style and colour. + 6. Verify that all elements of NSXv plugin section are vertical aligned. Expected result @@ -192,7 +260,7 @@ Steps 2. Create a new environment with following parameters: * Compute: KVM/QEMU with vCenter * Networking: Neutron with tunnel segmentation - * Storage: Ceph RBD for volumes (Cinder) + * Storage: Ceph RBD for images (Glance) * Additional services: default 3. 
Add nodes with following roles: * Controller diff --git a/doc/test/source/test_suite_system.rst b/doc/test/source/test_suite_system.rst index dbc2752..f3bcee4 100644 --- a/doc/test/source/test_suite_system.rst +++ b/doc/test/source/test_suite_system.rst @@ -300,10 +300,11 @@ Steps 3. Create distributed router and use it for routing between instances. Only available via CLI:: neutron router-create rdistributed --distributed True - 4. Navigate to Project -> Compute -> Instances - 5. Launch instance VM_1 in the provider network with image TestVM-VMDK and flavor m1.tiny in the vcenter1 az. - 6. Launch instance VM_2 in the provider network with image TestVM-VMDK and flavor m1.tiny in the vcenter2 az. - 7. Verify that VMs of same provider network should communicate between each other. Send icmp ping from VM_1 to VM_2 and vice versa. + 4. Disconnect default networks private and floating from default router and connect to distributed router. + 5. Navigate to Project -> Compute -> Instances + 6. Launch instance VM_1 in the provider network with image TestVM-VMDK and flavor m1.tiny in the vcenter1 az. + 7. Launch instance VM_2 in the provider network with image TestVM-VMDK and flavor m1.tiny in the vcenter2 az. + 8. Verify that VMs of same provider network should communicate between each other. Send icmp ping from VM_1 to VM_2 and vice versa. Expected result @@ -342,10 +343,11 @@ Steps 3. Create exclusive router and use it for routing between instances. Only available via CLI:: neutron router-create rexclusive --router_type exclusive - 4. Navigate to Project -> Compute -> Instances - 5. Launch instance VM_1 in the provider network with image TestVM-VMDK and flavor m1.tiny in the vcenter1 az. - 6. Launch instance VM_2 in the provider network with image TestVM-VMDK and flavor m1.tiny in the vcenter2 az. - 7. Verify that VMs of same provider network should communicate between each other. Send icmp ping from VM _1 to VM_2 and vice versa. + 4. 
Disconnect default networks private and floating from default router and connect them to the exclusive router. + 5. Navigate to Project -> Compute -> Instances + 6. Launch instance VM_1 in the provider network with image TestVM-VMDK and flavor m1.tiny in the vcenter1 az. + 7. Launch instance VM_2 in the provider network with image TestVM-VMDK and flavor m1.tiny in the vcenter2 az. + 8. Verify that VMs of same provider network should communicate between each other. Send icmp ping from VM_1 to VM_2 and vice versa. Expected result diff --git a/doc/user/index.rst b/doc/user/index.rst index fffe76c..abf9555 100644 --- a/doc/user/index.rst +++ b/doc/user/index.rst @@ -13,11 +13,11 @@ Plugin can work with VMware NSX 6.1.3, 6.1.4, 6.2.1. Plugin versions: -* 2.x.x series is compatible with Fuel 8.0. Tests were done on plugin v2.0 with - VMware NSX 6.2. +* 2.x.x series is compatible with Fuel 8.0. Tests were performed on plugin v2.0 with + VMware NSX 6.2 and vCenter 5.5. -* 1.x.x series is compatible with Fuel 7.0. Tests were done on plugin v1.2 with - VMware NSX 6.1.4. +* 1.x.x series is compatible with Fuel 7.0. Tests were performed on plugin v1.2 with + VMware NSX 6.1.4 and vCenter 5.5. Through documentation we use terms "NSX" and "NSXv" interchangeably, both of these terms refer to `VMware NSX virtualized network platform @@ -36,6 +36,7 @@ Documentation contents source/installation source/environment source/configuration + source/limitations source/usage source/release-notes source/troubleshooting diff --git a/doc/user/source/build.rst b/doc/user/source/build.rst index 998f7de..2a64330 100644 --- a/doc/user/source/build.rst +++ b/doc/user/source/build.rst @@ -1,7 +1,7 @@ How to build the plugin ======================= -To build the plugin you first need to install fuel-plugin-build 4.0.0[1_] +To build the plugin you first need to install fuel-plugin-builder_ 4.0.0 ..
code-block:: bash @@ -15,13 +15,13 @@ After that you can build the plugin: $ cd fuel-plugin-nsxv/ -puppet-librarian_ ruby package is required to installed. It is used to fetch +librarian-puppet_ ruby package is required to be installed. It is used to fetch upstream fuel-library_ puppet modules that plugin use. It can be installed via -gem package manager: +*gem* package manager: .. code-block:: bash - $ gem install puppet-librarian + $ gem install librarian-puppet .. code-block:: bash @@ -36,6 +36,6 @@ upload to Fuel master node: nsxv-2.0-2.0.0-1.noarch.rpm -.. [1] https://pypi.python.org/pypi/fuel-plugin-builder/4.0.0 -.. _puppet-librarian: https://librarian-puppet.com +.. _fuel-plugin-builder: https://pypi.python.org/pypi/fuel-plugin-builder/4.0.0 +.. _librarian-puppet: http://librarian-puppet.com .. _fuel-library: https://github.com/openstack/fuel-library diff --git a/doc/user/source/configuration.rst b/doc/user/source/configuration.rst index 2ed3510..c737e18 100644 --- a/doc/user/source/configuration.rst +++ b/doc/user/source/configuration.rst @@ -99,7 +99,7 @@ Plugin contains the following settings: metadata proxy service. #. Floating IP ranges -- dash separated IP addresses allocation pool from - external network, e.g. "start_ip_address-end_ip_address". + external network, e.g. "192.168.30.1-192.168.30.200". #. External network CIDR -- network in CIDR notation that includes floating IP ranges. diff --git a/doc/user/source/environment.rst b/doc/user/source/environment.rst index ee9d514..1f7fa45 100644 --- a/doc/user/source/environment.rst +++ b/doc/user/source/environment.rst @@ -29,23 +29,6 @@ Pay attention on which interface you assign *Public* network, OpenStack controllers must have connectivity with NSX Manager host through *Public* network since it is used as default route for packets. 
-Is is worth to mention that it is not possible to use compute nodes in this -type of cluster, because NSX switch is available only for ESXi, so it is not -possible to pass traffic inside compute node that runs Linux and KVM. Also it -does not matter on which network interface you assign *Private* traffic, -because it does not flow through controllers. - -*Floating IP range* settings on Networks are not used by the plugin, because it -user interface restricts specifying IP range is not within *Public* network -range. Plugin has its own *Floating IP range* setting. - -.. image:: /image/floating-ip.png - :scale: 70 % - -Pay attention that Neutron L2/L3 configuration on Settings tab does not have -effect in OpenStack cluster that uses NSXv. These settings contain settings -for GRE tunneling which does not have an effect with NSXv. - During deployment process plugin creates simple network topology for admin tenant. It creates provider network which connects tenants with transport (physical) network, one internal network and router that is connected to both diff --git a/doc/user/source/installation.rst b/doc/user/source/installation.rst index 34738a1..d9a3c56 100644 --- a/doc/user/source/installation.rst +++ b/doc/user/source/installation.rst @@ -5,7 +5,7 @@ Installation #. Upload package to Fuel master node. -#. Install the plugin with *fuel* command line tool: +#. Install the plugin with ``fuel`` command line tool: .. code-block:: bash diff --git a/doc/user/source/limitations.rst b/doc/user/source/limitations.rst index b73ee0e..bc181da 100644 --- a/doc/user/source/limitations.rst +++ b/doc/user/source/limitations.rst @@ -1,6 +1,52 @@ Limitations =========== +Nested clusters are not supported +--------------------------------- + +vCenter inventory allows the user to form a hierarchy by organizing vSphere entities +into folders. By default, clusters are created on the first level of the hierarchy; +they can then be moved into folders. 
The plugin supports only clusters located on +the first level of the hierarchy; if a cluster that you want to use for +OpenStack is inside a folder, you must move it to the first level of the +hierarchy. + +Compute node is not supported +----------------------------- + +It is not possible to use compute nodes in a +vCenter/NSX cluster, because the NSX v6.x switch is available only for ESXi, so it +is not possible to pass traffic inside a compute node that runs Linux and KVM. + +Public floating IP range is ignored +----------------------------------- + +Fuel requires that the floating IP range be within the *Public* IP range. This +requirement does not make sense with the NSXv plugin, because edge nodes, not +controllers, provide connectivity for virtual machines. Nevertheless, a floating IP +range for the *Public* network must be assigned. The plugin provides its own field for +the floating IP range. + +.. image:: /image/floating-ip.png + :scale: 70 % + +Note that the Neutron L2/L3 configuration on the Settings tab has no +effect in an OpenStack cluster that uses NSXv. It contains settings +for GRE tunneling, which has no effect with NSXv. + +Private network is not used +--------------------------- + +It does not matter to which network interface you assign *Private* network +traffic, because it does not flow through the controllers. Nevertheless, an IP range +for the *Private* network must be assigned. + +OpenStack environment reset/deletion +------------------------------------ + +The Fuel NSXv plugin does not provide a cleanup mechanism when an OpenStack environment +gets reset or deleted. All logical switches and edge virtual machines remain +intact; it is up to the operator to delete them and free resources. 
+ Ceph block storage is not supported ----------------------------------- diff --git a/doc/user/source/troubleshooting.rst b/doc/user/source/troubleshooting.rst index 3f2a68c..ff8beee 100644 --- a/doc/user/source/troubleshooting.rst +++ b/doc/user/source/troubleshooting.rst @@ -1,3 +1,6 @@ + +.. _troubleshooting: + Troubleshooting =============== diff --git a/doc/user/source/usage.rst b/doc/user/source/usage.rst index 61422bf..551e094 100644 --- a/doc/user/source/usage.rst +++ b/doc/user/source/usage.rst @@ -1,6 +1,21 @@ Usage ===== +The easiest way to check that the plugin works as expected is to try creating a +network or router using the ``neutron`` command line client: + +:: + + [root@nailgun ~]# ssh node-4 # node-4 is a controller node + root@node-4:~# . openrc + root@node-4:~# neutron router-create r1 + +You can monitor plugin actions in ``/var/log/neutron/server.log`` and watch +edges appear in the ``Networking & Security -> NSX Edges`` pane in vSphere +Web Client. If you see error messages, check the :ref:`Troubleshooting +<troubleshooting>` section. + + VXLAN MTU considerations ------------------------ @@ -121,10 +136,3 @@ Create a healthmonitor and associate it with the pool. $ neutron lb-healthmonitor-create --delay 3 --type HTTP --max-retries 3 --timeout 5 --pool http-pool $ neutron lb-healthmonitor-associate http-pool - -OpenStack environment reset/deletion ------------------------------------- - -Fuel NSXv plugin does not provide cleanup mechanism when OpenStack environment -gets reset or deleted. All logical switches and edge virtual machines remain -intact, it is up to operator to delete them and free resources.
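
A note on the configuration settings touched by this patch: the plugin requires that the "External network CIDR" include the "Floating IP ranges" value, which is given as a dash-separated pair such as "192.168.30.1-192.168.30.200". As a rough illustration of that constraint (a hypothetical helper using only the Python standard library, not part of the plugin or Fuel), the check can be sketched as:

```python
import ipaddress


def range_within_cidr(range_str, cidr):
    """Return True if a dash-separated floating IP range lies inside a CIDR,
    mirroring the plugin's requirement that the external network CIDR
    include the floating IP ranges."""
    start_s, end_s = range_str.split("-")
    net = ipaddress.ip_network(cidr)
    # Both endpoints must fall inside the network for the range to be valid.
    return (ipaddress.ip_address(start_s) in net
            and ipaddress.ip_address(end_s) in net)


print(range_within_cidr("192.168.30.1-192.168.30.200", "192.168.30.0/24"))  # True
```

This is only a sketch of the validation rule the settings describe; the plugin's actual validation logic may differ.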