User Guide RST Format Documentation

Added new user guide documentation in RST format

Change-Id: Ie2e02c890c49712f8b2122082d28b2a05eb86037
Rawan Herzallah 2016-03-20 13:29:36 +00:00 committed by Aviram Bar-Haim
parent 67f4ea041c
commit 1a52c1fd5f
25 changed files with 277 additions and 129 deletions

BIN
doc/source/_static/iser.png Normal file


BIN
doc/source/_static/neo.png Normal file


BIN
doc/source/_static/qos.png Normal file


@@ -1,10 +1,8 @@
.. _appendix:
Appendix
========
Mellanox site where users can read about possible configurations:
- `Mellanox ConnectX-3 pro <http://www.mellanox.com/page/products_dyn?product_family=119&mtag=connectx_3_vpi>`_
- `HowTo Install Mirantis Fuel OpenStack with Mellanox <https://community.mellanox.com/docs/DOC-1474>`_
- `HowTo Install Mirantis Fuel OpenStack with Mellanox <https://community.mellanox.com/docs/DOC-2435>`_
- `Mellanox InfiniBand Switches <https://community.mellanox.com/docs/DOC-1164>`_

@@ -0,0 +1,5 @@
Contact Support
===============
| To report a problem or to suggest an enhancement or new feature, please contact
Mellanox support for help (support@mellanox.com).

@@ -1,5 +1,3 @@
.. _def:
Definitions, Acronyms and abbreviations
=======================================
@@ -17,3 +15,18 @@ ConnectX-3 Pro
Infiniband
A computer-networking communications standard used in high-performance computing, features very high throughput and very low latency. It is used for data interconnect both among and within computers. InfiniBand is also utilized as either a direct, or switched interconnect between servers and storage systems, as well as an interconnect between storage systems.
VXLAN offload
Virtual Extensible LAN (VXLAN) is a network virtualization technology that attempts to improve the scalability problems associated with large cloud computing deployments.
QoS
QoS is defined as the ability to guarantee certain network requirements like bandwidth, latency, jitter and reliability in order to satisfy a Service Level Agreement (SLA) between an application provider and end users.
VF
A VF (Virtual Function) is a virtual NIC that is made available to VMs on Compute nodes.
OpenSM
OpenSM is an InfiniBand-compliant Subnet Manager and Administration tool that runs on top of OpenIB. Such a software entity is required in order to initialize the InfiniBand hardware (at least one instance per InfiniBand subnet).
PKey
PKEY stands for partition key. It is a 16 bit field within the InfiniBand header called BTH (Base Transport Header). A collection of endnodes with the same PKey in their PKey Tables are referred to as being members of a partition.
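As a small illustration of the 16-bit layout described above (a sketch, not part of the plugin): the most significant bit of a PKey denotes full membership, so for example the default key 0x7fff with full membership becomes 0xffff.

```shell
# Sketch: the top bit of a 16-bit PKey marks full membership; the low
# 15 bits are the partition number itself.
pkey_full_member() {
    printf '0x%04x\n' $(( 0x8000 | $1 ))   # set the membership bit
}

pkey_full_member "$((0x7fff))"   # the default partition: prints 0xffff
```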

@@ -1,58 +1,70 @@
.. _configuration:
Mellanox plugin configuration
=============================
To configure Mellanox backend, follow these steps:
If you plan to enable VM-to-VM RDMA and to use the iSER storage transport, you need to configure the switching fabric to support these features.
**Ethernet network:**
* Configure the required VLANs and enable flow control on the Ethernet switch ports.
All related VLANs should be enabled on the 40/56GbE switch (Private, Management, Storage networks).
#. Configure the required VLANs and enable flow control on the Ethernet switch ports.
#. All related VLANs should be enabled on the Mellanox switch ports (for relevant Fuel logical networks).
#. Log in to the Mellanox switch over SSH and execute the following commands:
* On Mellanox switches, use the commands in Mellanox `reference configuration <https://community.mellanox.com/docs/DOC-1460>`_
flow to enable VLANs (e.g. VLAN 1-100 on all ports).
.. note:: When using NEO auto-provisioning, private network VLANs can be considered dynamically configured.
* An example of configuring the switch for Ethernet deployment can be found in
the `Mirantis Planning Guide <https://docs.mirantis.com/openstack/fuel/fuel-7.0/planning-guide.html#planning-guide>`_.
::
switch > enable
switch # configure terminal
switch (config) # vlan 1-100
switch (config vlan 1-100) # exit
switch (config) # interface ethernet 1/1 switchport mode hybrid
switch (config) # interface ethernet 1/1 switchport hybrid allowed-vlan all
switch (config) # interface ethernet 1/2 switchport mode hybrid
switch (config) # interface ethernet 1/2 switchport hybrid allowed-vlan all
...
switch (config) # interface ethernet 1/36 switchport mode hybrid
switch (config) # interface ethernet 1/36 switchport hybrid allowed-vlan all
Environment creation and configuration
--------------------------------------
Flow control is required when running iSER over RoCE (RDMA over Converged Ethernet). On Mellanox switches, run the following commands to enable flow control (on all ports in this example)::
#. `Create an environment <https://docs.mirantis.com/openstack/fuel/fuel-7.0/user-guide.html#create-a-new-openstack-environment>`_.
switch (config) # interface ethernet 1/1-1/36 flowcontrol receive on force
switch (config) # interface ethernet 1/1-1/36 flowcontrol send on force
#. Open the Settings tab of the Fuel web UI and scroll down the page.
In Mellanox OpenStack features section, select the required features:
To save the configuration permanently, run::
.. image:: ./_static/mellanox_features_section.png
.. :alt: A screenshot of mellanox features section
switch (config) # configuration write
* The SR-IOV feature supports only KVM hypervisor and Neutron with VLAN segmentation (the latter should be enabled
at `environment creation step <https://docs.mirantis.com/openstack/fuel/fuel-7.0/user-guide.html#create-a-new-openstack-environment>`_:
.. note:: When using an untagged storage network for iSER over Ethernet - please add the following commands for Mellanox switches:
.. image:: ./_static/hypervisor_type.png
.. :alt: A screenshot of hypervisors type
::
.. image:: ./_static/network_type.png
.. :alt: A screenshot network type
interface ethernet 1/1 switchport hybrid allowed-vlan add 1
interface ethernet 1/2 switchport hybrid allowed-vlan add 1
...
* The iSER feature requires “Cinder LVM over iSCSI for volumes” enabled in the “Storage” section:
or use trunk mode instead of hybrid.
.. image:: ./_static/storage_backends.png
.. :alt: A screenshot of storage backends
**Infiniband network:**
If you use OpenSM you need to enable virtualization and allow all PKeys:
#. Create a new opensm.conf file::
.. note:: When configuring Mellanox plugin, please mind the following:
opensm -c /etc/opensm/opensm.conf
#. You *cannot* install a plugin into an already existing environment.
That means the plugin will appear in an environment only if it was installed before the environment was created.
#. Enable virtualization by editing /etc/opensm/opensm.conf and changing the allow_both_pkeys value to TRUE::
#. Enabling the “Mellanox Openstack features” section enables Mellanox
hardware support on your environment, regardless of the iSER & SR-IOV features.
allow_both_pkeys TRUE
#. In Ethernet cloud, when using SR-IOV & iSER, one of the virtual NICs for SR-IOV will be reserved to the storage network.
#. Define the partition keys, which are the InfiniBand analog of Ethernet VLANs. Each VLAN will be mapped to one PKey. Add/change the following in the file::
#. When using SR-IOV you can set the number of virtual NICs (virtual functions) up to 64,
if your hardware and system capabilities (like memory and BIOS) support it.
In case of an SR-IOV hardware limitation, the installation will try to fall back to the default of 16 VFs.
vi /etc/opensm/partitions.conf
(Example)
management=0x7fff,ipoib, sl=0, defmember=full : ALL, ALL_SWITCHES=full,SELF=full;
vlan1=0x1, ipoib, sl=0, defmember=full : ALL;
vlan2=0x2, ipoib, sl=0, defmember=full : ALL;
. . .
vlan100=0x64, ipoib, sl=0, defmember=full : ALL;
#. Restart OpenSM::
/etc/init.d/opensmd restart
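The VLAN-to-PKey mapping in the example above (vlan1=0x1 through vlan100=0x64) is mechanical, so a short helper can generate it. This is only a sketch, not a plugin script:

```shell
# Emit partitions.conf entries mapping VLAN N to PKey 0xN, mirroring the
# example above. The output can be appended to /etc/opensm/partitions.conf.
gen_partitions() {
    echo 'management=0x7fff,ipoib, sl=0, defmember=full : ALL, ALL_SWITCHES=full,SELF=full;'
    n=1
    while [ "$n" -le "$1" ]; do
        printf 'vlan%d=0x%x, ipoib, sl=0, defmember=full : ALL;\n' "$n" "$n"
        n=$((n + 1))
    done
}

gen_partitions 100
```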

@@ -4,7 +4,7 @@
contain the root `toctree` directive.
======================================================
Guide to the Mellanox Plugin ver. 2.0-2.0.0-2 for Fuel
Guide to the Mellanox Plugin ver. 3.0-3.0.0-1 for Fuel
======================================================
User documentation
@@ -12,12 +12,14 @@ User documentation
.. toctree::
:maxdepth: 2
definitions
overview
supported_images
installation
guide
installation
post_deployment
known_issues
supported_images
contact_support
troubleshooting_notes
appendix

@@ -1,52 +1,134 @@
.. _installation:
Installation Guide
==================
Mellanox plugin installation
----------------------------
To install Mellanox plugin, follow these steps:
#. Download the plugin from `Fuel Plugins Catalog <https://www.mirantis.com/products/openstack-drivers-and-plugins/fuel-plugins/>`_.
#. Copy the plugin on already installed Fuel Master node.
If you do not have the Fuel Master node yet, see `Quick Start Guide <https://software.mirantis.com/quick-start/>`_ ::
# scp mellanox-plugin-2.0-2.0.0-1.noarch.rpm root@<Fuel_Master_ip>:/tmp
#. Install Fuel Master node. For more information on how to create a Fuel Master node, please see `Mirantis Fuel 8.0 documentation <https://docs.mirantis.com/openstack/fuel/fuel-8.0/>`_.
#. Download the plugin rpm file for MOS 8.0 from `Fuel Plugin Catalog <https://www.mirantis.com/products/openstack-drivers-and-plugins/fuel-plugins>`_.
#. Copy the plugin to an already installed Fuel Master node; scp can be used for that::
# scp mellanox-plugin-3.0-3.0.0-1.noarch.rpm root@<Fuel_Master_ip>:/tmp
#. Install the plugin::
# cd /tmp
# fuel plugins --install mellanox-plugin-2.0-2.0.0-1.noarch.rpm
.. note:: Mellanox plugin installation replaces your bootstrap image only in Fuel 6.1 at this stage.
The original image is backed up in `/opt/old_bootstrap_image/`.
# cd /tmp
# fuel plugins --install mellanox-plugin-2.0-2.0.0-1.noarch.rpm
#. Verify the plugin was installed successfully by listing it with the ``fuel plugins`` command::
# fuel plugins
# id | name | version | package_version
# ---|-------------------|---------|----------------
# 1 | mellanox-plugin | 3.0.0 | 3.0.0
# fuel plugins
id | name | version | package_version
---|-------------------|---------|----------------
1 | mellanox-plugin | 2.0.0 | 2.0.0
#. Create a new bootstrap image to support InfiniBand networks (``create_mellanox_vpi_bootstrap`` can be used)::
#. You must boot your target nodes with the new bootstrap image (installed by the plugin)
**after** the plugin is installed. (In Fuel 7.0, the plugin doesn't replace bootstrap images and uses the Mirantis bootstrap images.)
Check your Fuel nodes' status by running the ``fuel node`` command:
[root@fuel ~]# create_mellanox_vpi_bootstrap
* If you already have nodes in `discover` status (with the original bootstrap image) like in the screenshot below:
.. image:: ./_static/list_fuel_nodes.png
:alt: A screenshot of the nodes list
:scale: 90%
::
use the ``reboot_bootstrap_nodes`` script to reboot your nodes with the new image.
Try to build image with data:
bootstrap:
certs: null
container: {format: tar.gz, meta_file: metadata.yaml}
. . .
. . .
. . .
Bootstrap image f790e9f8-5bc5-4e61-9935-0640f2eed949 has been activated.
.. note:: For more info about using the script, run ``reboot_bootstrap_nodes --help``.
#. In case of using the customized bootstrap image, you must reboot your target nodes with the new bootstrap image you just created.
If you already have discovered nodes you can either reboot them manually or use the ``reboot_bootstrap_nodes`` command. Run ``reboot_bootstrap_nodes -h`` for help.
#. Create an environment - for more information please see `how to create an environment <https://docs.mirantis.com/openstack/fuel/fuel-8.0/user-guide.html>`_.
We support both main network configurations:
- `Neutron with VLAN segmentation`
- `Neutron with tunneling segmentation`
.. image:: ./_static/ml2_driver.png
.. :alt: Network Configuration Type
#. Enable KVM hypervisor type. KVM is required to enable Mellanox Openstack features.
Open the Settings tab, select Compute section and then choose KVM hypervisor type.
.. image:: ./_static/kvm_hypervisor.png
.. :alt: Hypervisor Type
#. Enable desired Mellanox Openstack features.
Open the Other tab.
Enable Mellanox features by selecting Mellanox Openstack features checkbox.
Select relevant plugin version if you have multiple versions installed.
.. image:: ./_static/mellanox_features.png
.. :alt: Enable Mellanox Openstack Features
* If the ``fuel node`` command doesn't show any nodes, boot your nodes only after the plugin is installed.
Now you can enable one or more features relevant for your deployment:
#. Support SR-IOV direct port creation in private VLAN networks
**Note**: Relevant for `VLAN segmentation` only
- This enables Neutron SR-IOV support.
- **Number of virtual NICs** is the number of virtual functions (VFs) that will be available on the Compute node.
**Note**: One VF will be utilized for the iSER storage transport if you choose to use iSER. In this case you will have one VF less available for Virtual Machines.
.. image:: ./_static/sriov.png
.. :alt: Enable SR-IOV
#. Support quality of service over VLAN networks with Mellanox SR-IOV direct ports (Neutron)
**Note**: Relevant for `VLAN segmentation` only
If selected, Neutron "Quality of service" (QoS) will be enabled for VLAN networks and ports over Mellanox HCAs.
**Note**: This feature is supported only if:
- Ethernet mode is used
- SR-IOV is enabled
.. image:: ./_static/qos.png
.. :alt: Enable QoS
#. Support NEO SDN controller auto VLAN Provisioning (Neutron)
**Note**: Relevant for `VLAN segmentation` only
If selected, the Mellanox NEO mechanism driver will be used to support switch VLAN auto-provisioning on Ethernet networks.
To use this feature, please provide the IP address, username and password of the NEO SDN controller.
.. image:: ./_static/neo.png
.. :alt: Enable NEO Driver mechanism support
Additional info about NEO can be found at: https://community.mellanox.com/docs/DOC-2155
#. Support VXLAN Offloading (Neutron)
**Note**: Relevant for `tunneling segmentation` only
If selected, Mellanox hardware will be used to achieve better performance and a significant CPU overhead reduction by offloading VXLAN traffic.
.. image:: ./_static/vxlan.png
.. :alt: Enable VXLAN offloading
#. iSER protocol for volumes (Cinder)
**Note**: Relevant for both `VLAN segmentation` and `tunneling segmentation` deployments
By enabling this feature you'll use the iSER block storage transport instead of iSCSI.
iSER stands for iSCSI Extensions for RDMA; it improves latency and bandwidth and reduces CPU overhead.
**Note**: In Ethernet mode, a dedicated Virtual Function will be reserved for a storage endpoint, and the priority flow control has to be enabled on the switch side port.
**Note**: In InfiniBand mode, the IPoIB parent interface of the network storage interface will be used as the storage endpoint.
.. image:: ./_static/iser.png
.. :alt: Enable iSER
.. note:: When configuring Mellanox plugin, please mind the following:
#. You *cannot* enable the plugin for an existing environment that was created without plugin support.
That means the plugin will appear in an environment only if it was installed before the environment was created. You can upgrade the plugin for existing non-deployed environments.
#. Enabling the “Mellanox Openstack features” section enables Mellanox hardware support on your environment, regardless of the chosen Mellanox features.
#. In Ethernet cloud, when using SR-IOV & iSER, one of the virtual NICs for SR-IOV will be reserved to the storage network.
#. When using SR-IOV you can set the number of virtual NICs (virtual functions) up to 62,
if your hardware and system capabilities (like memory and BIOS) support it.
In case of an SR-IOV hardware limitation, the installation will try to fall back to the default of 8 VFs.
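The fallback behaviour in the last point can be sketched as follows. This is a hypothetical illustration, not the installer's actual code, and ``pick_vf_count`` is an invented name; ``sriov_totalvfs`` is the standard kernel sysfs attribute reporting what the NIC supports.

```shell
# Clamp a requested VF count to what the hardware reports, falling back
# to the default of 8 VFs when the request cannot be satisfied.
pick_vf_count() {
    requested=$1
    supported=$(cat "$2")    # e.g. /sys/class/net/eth2/device/sriov_totalvfs
    if [ "$requested" -le "$supported" ]; then
        echo "$requested"
    else
        echo 8               # hardware limitation: fall back to the default
    fi
}
```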

@@ -1,32 +1,34 @@
.. _known_issues:
Known issues
============
Issue 1
- Description: This release supports Mellanox ConnectX®-3 family adapters only.
- Workaround: NA
Issue 2
- Description: For custom (OEM) adapter cards based on Mellanox ConnectX-3 / ConnectX-3 Pro ICs, the adapter firmware must be manually burnt with SR-IOV support prior to the installation.
- Workaround: See `the firmware installation instructions <http://www.mellanox.com/page/oem_firmware_download>`_.
Issue 2
- Description: The number of SR-IOV virtual functions supported by Mellanox adapters is up to 16 on ConnectX-3 adapters and up to 62 on ConnectX-3 Pro adapters (depends on your HW capabilities).
- Workaround: NA
Issue 3
- Description: The number of SR-IOV virtual functions supported by Mellanox adapters is 16 on ConnectX-3 adapters and 128 on ConnectX-3 Pro adapters.
- Description: When using a dual port physical NIC for SR-IOV over Ethernet, the Openstack private network has to be allocated on the first port.
- Workaround: NA
Issue 4
- Description: Deploying more than 10 nodes at a time over a slow PXE network can cause timeouts during the OFED installation.
- Workaround: Deploy chunks of up to 10 nodes or increase the delay-before-timeout in the plugin's tasks.yaml file on the Fuel master node. If a timeout occurs, click the **Deploy Changes** button again.
- Description: A single port HCA might not be supported for SR-IOV and iSER over an Ethernet network.
- Workaround: NA
Issue 5
- Description: Using an untagged storage network on the same interface with a private network over Ethernet is not supported when using iSER.
- Workaround: Use a separate interface for untagged storage networks for iSER over Ethernet or use a tagged storage network instead.
- Description: SR-IOV QoS is supported only by updating existing SR-IOV ports with a policy. QoS-policy detach may result in an inaccurate bandwidth limit. (https://bugs.launchpad.net/neutron/+bug/1504165).
- Workaround: Delete port / instance and attach a new port.
Issue 6
- Description: Recovering of a Cinder target might take more than 10 minutes in tagged storage network.
- Workaround: Ping from the Cinder target after the reboot to another machine in the cluster over the storage network. The VLAN storage network will be over vlan<vlan#> interface.
- Description: Starting a large number (>15) of IB VMs with normal ports at once may result in some VMs not getting DHCP over InfiniBand networks.
- Workaround: Reboot VMs that didn't get IP from DHCP on time or start VMs in smaller chunks (<10).
Issue 7
- Description: After a large InfiniBand deployment of more than ~20 nodes at once with Controller HA, it might take time for the controller services to stabilize.
- Workaround: Restart openibd service on controller nodes after the deployment, or deploy with phases.
Issue 8
- Description: Network verification for IB network is not supported over untagged networks or after deployment.
- Workaround: NA

@@ -1,47 +1,48 @@
.. _overview:
Mellanox plugin
===============
The Mellanox Fuel plugin is a bundle of scripts, packages and metadata that will extend Fuel and add Mellanox features such as SR-IOV for networking and iSER protocol for storage.
| The Mellanox Fuel plugin is a bundle of scripts, packages and metadata that will extend Fuel
and add Mellanox features such as SR-IOV for networking and iSER protocol for storage.
Beginning with version 5.1, Fuel can configure `Mellanox ConnectX-3 Pro <http://www.mellanox.com/page/products_dyn?product_family=119&mtag=connectx_3_vpi>`_ network adapters to accelerate the performance of compute and storage traffic. This implements the following performance enhancements:
| Fuel can configure `Mellanox ConnectX-3 Pro
<http://www.mellanox.com/page/products_dyn?product_family=161&mtag=connectx_3_pro_vpi_card>`_
network adapters to accelerate the performance of compute and storage traffic.
- Compute nodes use SR-IOV based networking.
- Cinder nodes use iSER block storage as the iSCSI transport rather than the default iSCSI over TCP.
This implements the following performance enhancements:
These features reduce CPU overhead, boost throughput, reduce latency, and enable network traffic to bypass the software switch layer (e.g. Open vSwitch).
- Compute nodes network enhancements:
- SR-IOV based networking
- QoS for VM traffic
- VXLAN traffic offload
- Cinder nodes use iSER block storage as the iSCSI transport rather than the default iSCSI over TCP.
Starting with version 6.1, Mellanox plugin can deploy those features over Infiniband network as well. However, for Fuel version 7.0, plugin uses Mirantis bootstrap images and therefore only Ethernet Network is supported.
| These features reduce CPU overhead, boost throughput, reduce latency, and enable network
traffic to bypass the software switch layer (e.g. Open vSwitch).
Developers specification
| Mellanox Plugin integration with Mellanox NEO SDN Controller enables switch VLAN auto
provisioning and port configuration for Ethernet and SM PKey auto-provisioning for InfiniBand
networks, over private VLAN networks.
Developer's specification
-------------------------
Please refer to: `HowTo Install Mellanox OpenStack Plugin for Mirantis Fuel 7.0 <https://community.mellanox.com/docs/DOC-2374>`_
| Please refer to: `HowTo Install Mellanox OpenStack Plugin for Mirantis Fuel 8.0
<https://community.mellanox.com/docs/DOC-2435>`_
Requirements
------------
+-----------------------------------+-----------------+
| Requirement | Version/Comment |
+===================================+=================+
| Mirantis OpenStack compatibility | 7.0 |
+-----------------------------------+-----------------+
The Mellanox ConnectX-3 Pro adapters family supports up to 40/56GbE.
To reach 56 GbE speed in your network with ConnectX-3 Pro adapters, you must use Mellanox Ethernet / Infiniband switches (e.g. SX1036)
with the additional 56GbE license.
The switch ports should be configured specifically to use 56GbE speed. No additional configuration is required on the adapter side.
For additional information about how to run in 56GbE speed, see
`HowTo Configure 56GbE Link on Mellanox Adapters and Switches <http://community.mellanox.com/docs/DOC-1460>`_.
For detailed setup configuration and BOM (Bill of Material) requirements please see
`Fuel Ethernet cloud details <https://community.mellanox.com/docs/DOC-2077>`_ or
`Fuel Infiniband cloud details <https://community.mellanox.com/docs/DOC-2304>`_.
| The Mellanox ConnectX-3 Pro adapters family supports up to 40/56 Gb. To reach 56 Gb speed in
your network with ConnectX-3 Pro adapters, you must use Mellanox Ethernet / Infiniband switches
supporting 56 Gb (e.g. SX1710, SX6710). The switch ports should be configured specifically to use
56 Gb speed. No additional configuration is required on the adapter side. For additional
information about how to run in 56GbE speed, see `HowTo Configure 56GbE Link on Mellanox Adapters
and Switches <http://community.mellanox.com/docs/DOC-1460>`_.
Limitations
-----------
- Mellanox SR-IOV and iSER are supported only when choosing Neutron with VLAN.
- OVS bonding and Mellanox SR-IOV based networking over the Mellanox ConnectX-3 Pro adapter family are not supported.
- In order to use the SR-IOV feature, one should choose KVM hypervisor and “Neutron with Vlan segmentation” in the Network settings tab.
- Mellanox SR-IOV and iSER are supported only when choosing Neutron with VLAN segmentation.
- ConnectX-3 Pro adapters are required in order to enable VXLAN HW offload over Ethernet networks.
- QoS feature is implemented only for Ethernet VLAN SR-IOV ports using ConnectX-3 Pro adapters.
- Infiniband is configured by using OpenSM only.

@@ -0,0 +1,22 @@
Post-deployment SR-IOV test scripts
===================================
In order to test that SR-IOV is working properly **after** deploying an OpenStack environment with SR-IOV support **successfully**, a couple of scripts have been added under /sbin/:
**Note**: Please use the last two commands with caution, since they can delete some of your environment's ports and image resources.
- **upload_sriov_cirros**
Uploads a pre-configured Mellanox Cirros image to glance images list.
- **start_sriov_vm**
Starts a VM with a direct port from the previously uploaded image. To test that SR-IOV is working properly, start two SR-IOV VMs and make sure they can ping each other. Assumes upload_sriov_cirros was executed beforehand.
- **delete_sriov_ports**
Deletes all SR-IOV ports created in previous scripts.
- **delete_all_glance_images**
Deletes all Glance images.
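The flow above can be strung together as in the sketch below. It assumes the /sbin scripts named in this section are on the PATH; only the ordering is illustrated, and the destructive cleanup steps are left commented out as a caution:

```shell
# Smoke-test sketch: upload the image, boot two SR-IOV VMs, then ping
# between them manually to confirm SR-IOV connectivity.
run_sriov_smoke_test() {
    upload_sriov_cirros || return 1   # image must be in Glance first
    start_sriov_vm || return 1        # first VM with a direct port
    start_sriov_vm || return 1        # second VM; now ping between the two
    # Cleanup (destructive -- see the note above):
    # delete_sriov_ports
    # delete_all_glance_images
}
```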

@@ -1,16 +1,14 @@
.. _supported_images:
Supported images
----------------
+-------+--------------------+---------------------------------+
| Issue | Supported OS | Tested kernel |
+=======+====================+=================================+
| 1 | CentOS6.5 | 2.6.32-431.el6.x86_64 |
| 1 | CentOS7 | 3.10.0-229.14.1.el7.x86_64 |
+-------+--------------------+---------------------------------+
| 2 | ubuntu14.04.3 | 3.19.0-25-generic |
| 2 | ubuntu14.04 | 3.13.0-67-generic |
+-------+--------------------+---------------------------------+
| 3 | Cirros Mellanox | 3.2.0-80-virtual Ubuntu 3_amd64 |
| 3 | Cirros Mellanox | 3.11.0-26-generic |
+-------+--------------------+---------------------------------+
This Fuel Mellanox plugin ver. 2.0-2.0.0-1 is using MLNX_OFED version 2.3-2.0.8.
This Fuel Mellanox plugin ver. 3.0-3.0.0-1 is using MLNX_OFED version 3.1-1.5.5.

@@ -0,0 +1,13 @@
Troubleshooting notes
=====================
- Please verify your network configurations prior to the deployment.
- Please make sure all your Health check Tests are passing.
- To make sure SR-IOV is working properly, please refer to user scripts mentioned previously.
- Mellanox Plugin log file is located on each slave node on the following path:
- /var/log/Mellanox-plugin.log
- For further information you can check the relevant logs too:
- /var/log/docker-logs/astute/astute.log (fuel-master)
- /var/log/dmesg (target nodes)
- /var/log/messages (target nodes)
- For debugging Ethernet or InfiniBand driver issues, please deploy with OpenStack debug logging enabled.
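When combing through the logs listed above, a quick per-file error count can help triage. This helper is only an illustration, not shipped with the plugin:

```shell
# Count case-insensitive 'error' lines in a log file, e.g.
#   count_log_errors /var/log/Mellanox-plugin.log
count_log_errors() {
    grep -ci 'error' "$1" 2>/dev/null || true   # prints 0 when no matches
}
```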

@@ -5,7 +5,7 @@ name: mellanox-plugin
title: Mellanox Openstack Features
# Plugin version
version: 2.0.43
version: 3.0.0
# Description
description: Enable features over Mellanox hardware