From 27abd341778d080f382d9859a895e7a0bb9ec0d4 Mon Sep 17 00:00:00 2001
From: Sam Stoelinga
Date: Tue, 18 Aug 2015 11:06:22 +0800
Subject: [PATCH] Add advanced SR-IOV section to networking guide

Co-Authored-By: Moshe Levi
Closes-bug: #1476242
Change-Id: Idf85af2153da2ce51383542f4c2616c62b56bafb
---
 doc/networking-guide/source/adv_config.rst |   1 +
 .../source/adv_config_sriov.rst            | 346 ++++++++++++++++++
 tox.ini                                    |   3 +-
 3 files changed, 349 insertions(+), 1 deletion(-)
 create mode 100644 doc/networking-guide/source/adv_config_sriov.rst

diff --git a/doc/networking-guide/source/adv_config.rst b/doc/networking-guide/source/adv_config.rst
index f2af08cf00..4674fbe034 100644
--- a/doc/networking-guide/source/adv_config.rst
+++ b/doc/networking-guide/source/adv_config.rst
@@ -13,3 +13,4 @@ Advanced configuration
    adv_config_group_policy.rst
    adv_config_debugging.rst
    adv_config_ipv6.rst
+   adv_config_sriov.rst
diff --git a/doc/networking-guide/source/adv_config_sriov.rst b/doc/networking-guide/source/adv_config_sriov.rst
new file mode 100644
index 0000000000..25df812d7d
--- /dev/null
+++ b/doc/networking-guide/source/adv_config_sriov.rst
@@ -0,0 +1,346 @@
+==========================
+Using SR-IOV functionality
+==========================
+
+The purpose of this page is to describe how to enable the SR-IOV
+functionality available in OpenStack (using OpenStack Networking) as of
+the Juno release. This page serves as a how-to guide on configuring
+OpenStack Networking and OpenStack Compute to create neutron SR-IOV ports.
+
+The basics
+~~~~~~~~~~
+
+The PCI-SIG Single Root I/O Virtualization and Sharing (SR-IOV)
+specification defines a standardized mechanism to virtualize PCIe devices.
+The mechanism can virtualize a single PCIe Ethernet controller to appear as
+multiple PCIe devices. You can directly assign each virtual PCIe device to
+a VM, bypassing the hypervisor and virtual switch layer. As a result, users
+are able to achieve low latency and near line-rate speed.
+
+The following terms are used throughout this document:
+
+.. list-table::
+   :header-rows: 1
+   :widths: 10 90
+
+   * - Term
+     - Definition
+   * - PF
+     - Physical Function. The physical Ethernet controller that
+       supports SR-IOV.
+   * - VF
+     - Virtual Function. A virtual PCIe device created from a
+       physical Ethernet controller.
+
+In order to enable SR-IOV, the following steps are required:
+
+#. Create Virtual Functions (Compute)
+#. Whitelist PCI devices in nova-compute (Compute)
+#. Configure neutron-server (Controller)
+#. Configure nova-scheduler (Controller)
+#. Enable Neutron sriov-agent (Compute)
+
+Neutron sriov-agent
+-------------------
+
+There are two ways of configuring SR-IOV:
+
+#. Without the sriov-agent running on each compute node
+#. With the sriov-agent running on each compute node
+
+The sriov-agent allows you to set the admin state of ports and, starting
+from Liberty, to control port security (enable and disable spoof checking)
+and QoS rate limit settings.
+
+When would you decide not to use the sriov-agent?
+
+- **Hardware is not supported:** Currently Intel cards are known not to
+  work with the sriov-agent.
+- **Extra configuration burden:** Using the sriov-agent requires you to
+  run an extra agent on each compute node.
+
+Known limitations
+~~~~~~~~~~~~~~~~~
+
+* No OpenStack Dashboard integration. Users need to use the CLI or API to
+  create neutron SR-IOV ports.
+* The following functionalities are not implemented for SR-IOV ports:
+  security groups, QoS, and ARP spoofing filtering.
+* When using Intel SR-IOV cards, the sriov-agent should be disabled.
+* Live migration is not supported for instances with SR-IOV ports.
+
+.. note::
+   QoS and ARP spoofing filtering are supported since Liberty when using
+   the sriov-agent.
+
+Environment example
+~~~~~~~~~~~~~~~~~~~
+
+We recommend using Open vSwitch with VLAN as the segmentation type. This
+way you can combine normal VMs without SR-IOV ports and instances with
+SR-IOV ports on a single neutron network.
+
+.. note::
+   Throughout this guide, eth3 is used as the PF and physnet2 is used as
+   the provider network, configured as a VLAN range. Change these values
+   to match your actual environment.
+
+Create Virtual Functions (Compute)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In this step, create the VFs for the network interface that will be used
+for SR-IOV. Use eth3 as the PF, which is also used as the interface for
+Open vSwitch VLAN and has access to the private networks of all machines.
+
+The steps to create VFs differ between SR-IOV Ethernet controller
+manufacturers. Currently the following manufacturers are known to work:
+
+- Intel
+- Mellanox
+
+For **Mellanox SR-IOV Ethernet cards** see:
+`Mellanox: HowTo Configure SR-IOV VFs
+`_
+
+To create the VFs on Ubuntu for **Intel SR-IOV Ethernet cards**, do the
+following:
+
+#. Make sure SR-IOV is enabled in the BIOS: check for VT-d and make sure
+   it is enabled. After enabling VT-d, enable IOMMU on Linux by adding
+   ``intel_iommu=on`` to the kernel parameters. Edit the file
+   :file:`/etc/default/grub`:
+
+   .. code-block:: ini
+
+      GRUB_CMDLINE_LINUX_DEFAULT="nomdmonddf nomdmonisw intel_iommu=on"
+
+#. Run the following if you have added new parameters:
+
+   .. code-block:: console
+
+      # update-grub
+      # reboot
+
+#. On each compute node, create the VFs via the PCI SYS interface:
+
+   .. code-block:: console
+
+      # echo '7' > /sys/class/net/eth3/device/sriov_numvfs
+
+   Alternatively, VFs can be created by passing the ``max_vfs`` parameter
+   to the kernel module of your network interface. However, ``max_vfs``
+   has been deprecated, so the PCI SYS interface is the preferred method.
+
+#. Now verify that the VFs have been created (you should see Virtual
+   Function devices; a more detailed check is shown after this list):
+
+   .. code-block:: console
+
+      # lspci | grep Ethernet
+
+#. Persist the created VFs on reboot:
+
+   .. code-block:: console
+
+      # echo "echo '7' > /sys/class/net/eth3/device/sriov_numvfs" >> /etc/rc.local
+
+   .. note::
+      The suggested way of making PCI SYS settings persistent is through
+      :file:`sysfs.conf`, but for unknown reasons changing
+      :file:`sysfs.conf` does not have any effect on Ubuntu 14.04.
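+
+To double-check the result, you can ask the controller how many VFs it
+supports and list the VFs that the PF now reports. For example, assuming
+eth3 is the PF as above (the exact output depends on your card and
+driver):
+
+.. code-block:: console
+
+   # cat /sys/class/net/eth3/device/sriov_totalvfs
+   63
+   # ip link show eth3
+   8: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ...
+       vf 0 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
+       vf 1 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
+       ...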
+
+Whitelist PCI devices in nova-compute (Compute)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Tell nova-compute which PCI devices are allowed to be passed through.
+Edit the file :file:`/etc/nova/nova.conf`:
+
+.. code-block:: ini
+
+   [DEFAULT]
+   pci_passthrough_whitelist = { "devname": "eth3", "physical_network": "physnet2"}
+
+This tells nova that all VFs belonging to eth3 are allowed to be passed
+through to VMs and belong to the neutron provider network physnet2.
+Restart the nova-compute service with :command:`service nova-compute
+restart` for the changes to take effect.
+
+Alternatively, the ``pci_passthrough_whitelist`` parameter also supports
+whitelisting by:
+
+- PCI address: The address uses the same syntax as in ``lspci`` and an
+  asterisk (*) can be used to match anything.
+
+  .. code-block:: ini
+
+     pci_passthrough_whitelist = { "address": "[[[[<domain>]:]<bus>]:][<slot>][.[<function>]]", "physical_network": "physnet2" }
+
+     # Example: match any domain, bus 0a, slot 00, and all functions
+     pci_passthrough_whitelist = { "address": "*:0a:00.*", "physical_network": "physnet2" }
+
+- PCI ``vendor_id`` and ``product_id`` as displayed by the Linux utility
+  ``lspci``.
+
+  .. code-block:: ini
+
+     pci_passthrough_whitelist = { "vendor_id": "<id>", "product_id": "<id>", "physical_network": "physnet2"}
+
+If the device defined by the PCI address or devname corresponds to an
+SR-IOV PF, all VFs under the PF will match the entry. Multiple
+``pci_passthrough_whitelist`` entries per host are supported, as shown in
+the example below.
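+
+For example, to whitelist the VFs of two PFs on the same host, repeat the
+parameter (eth4 is a hypothetical second SR-IOV interface used here only
+for illustration; adjust the names to your environment):
+
+.. code-block:: ini
+
+   [DEFAULT]
+   pci_passthrough_whitelist = { "devname": "eth3", "physical_network": "physnet2"}
+   pci_passthrough_whitelist = { "devname": "eth4", "physical_network": "physnet2"}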
+
+Configure neutron-server (Controller)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+#. Add ``sriovnicswitch`` to the list of mechanism drivers. Edit the file
+   :file:`/etc/neutron/plugins/ml2/ml2_conf.ini`:
+
+   .. code-block:: ini
+
+      [ml2]
+      mechanism_drivers = openvswitch,sriovnicswitch
+
+#. Find out the ``vendor_id`` and ``product_id`` of your **VFs** by
+   logging in to a compute node where VFs have been created:
+
+   .. code-block:: console
+
+      # lspci -nn | grep -i ethernet
+      87:00.0 Ethernet controller [0200]: Intel Corporation 82599 10 Gigabit Dual Port Backplane Connection [8086:10f8] (rev 01)
+      87:10.1 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)
+      87:10.3 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)
+
+#. Tell neutron the ``vendor_id`` and ``product_id`` of the VFs that are
+   supported by updating
+   :file:`/etc/neutron/plugins/ml2/ml2_conf_sriov.ini` on each controller.
+   In this example, the vendor_id is 8086 and the product_id is 10ed:
+
+   .. code-block:: ini
+
+      [ml2_sriov]
+      supported_pci_vendor_devs = 8086:10ed
+
+#. Enable or disable the sriov-agent. See the section `Neutron
+   sriov-agent`_ if you want to decide whether to enable or disable it.
+   Edit the ``agent_required`` parameter under the ``[ml2_sriov]`` section
+   in :file:`/etc/neutron/plugins/ml2/ml2_conf_sriov.ini`:
+
+   .. code-block:: ini
+
+      [ml2_sriov]
+      agent_required = True
+
+   .. note::
+      If you set ``agent_required = True``, make sure that you run the
+      sriov-agent on each compute node.
+
+#. Add the newly configured :file:`ml2_conf_sriov.ini` as a parameter to
+   the neutron-server daemon. Edit the file
+   :file:`/etc/init/neutron-server.conf`:
+
+   .. code-block:: ini
+
+      --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini
+      --config-file /etc/neutron/plugins/ml2/ml2_conf_sriov.ini
+
+#. For the changes to take effect, restart the neutron-server service
+   with :command:`service neutron-server restart`.
+
+Configure nova-scheduler (Controller)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+#. On every controller node running nova-scheduler, add
+   ``PciPassthroughFilter`` to the ``scheduler_default_filters`` parameter
+   and add a new line for the ``scheduler_available_filters`` parameter
+   under the ``[DEFAULT]`` section in :file:`/etc/nova/nova.conf`:
+
+   .. code-block:: ini
+
+      [DEFAULT]
+      scheduler_default_filters = RetryFilter, AvailabilityZoneFilter, RamFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter, PciPassthroughFilter
+      scheduler_available_filters = nova.scheduler.filters.all_filters
+      scheduler_available_filters = nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter
+
+#. Now restart the nova-scheduler service with
+   :command:`service nova-scheduler restart`.
+
+Enable Neutron sriov-agent (Compute)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. note::
+   You only need to enable the sriov-agent if you set
+   ``agent_required = True`` in the step `Configure neutron-server
+   (Controller)`_. If you set ``agent_required = False``, you can safely
+   skip this step.
+
+#. On each compute node, edit the file
+   :file:`/etc/neutron/plugins/ml2/ml2_conf_sriov.ini`:
+
+   .. code-block:: ini
+      :linenos:
+
+      [securitygroup]
+      firewall_driver = neutron.agent.firewall.NoopFirewallDriver
+
+      [sriov_nic]
+      physical_device_mappings = physnet2:eth3
+      exclude_devices =
+
+   The ``exclude_devices`` parameter is empty, so all the VFs associated
+   with eth3 may be configured by the agent. If you want to exclude
+   specific VFs, add them to the ``exclude_devices`` parameter as follows:
+
+   .. code-block:: ini
+
+      exclude_devices = eth1:0000:07:00.2; 0000:07:00.3, eth2:0000:05:00.1; 0000:05:00.2
+
+#. Test whether the sriov-agent runs successfully (a registration check
+   is shown after this list):
+
+   .. code-block:: console
+
+      # neutron-sriov-nic-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf_sriov.ini
+
+#. Enable the neutron-sriov-agent to start automatically at system boot.
+   If your distribution does not come with a daemon file for your init
+   system, create a daemon configuration file. For example, on Ubuntu,
+   install the package:
+
+   .. code-block:: console
+
+      # apt-get install neutron-plugin-sriov-agent
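+
+Once the agent is running, you can verify that it registered itself with
+the neutron server. For example (the SR-IOV agent reports itself as a
+``NIC Switch agent``; the output below is abbreviated and the id and host
+name are example values):
+
+.. code-block:: console
+
+   $ neutron agent-list
+   +--------------+------------------+----------+-------+----------------+
+   | id           | agent_type       | host     | alive | admin_state_up |
+   +--------------+------------------+----------+-------+----------------+
+   | f5fe4c83-... | NIC Switch agent | compute1 | :-)   | True           |
+   +--------------+------------------+----------+-------+----------------+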
+
+Creating instances with SR-IOV ports
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+After the configuration is done, you can launch instances with neutron
+SR-IOV ports.
+
+#. Get the id of the neutron network where you want the SR-IOV port to be
+   created:
+
+   .. code-block:: console
+
+      $ net_id=`neutron net-show net04 | grep "\ id\ " | awk '{ print $4 }'`
+
+#. Create the SR-IOV port. We specify ``vnic_type=direct``, but other
+   options include ``macvtap``:
+
+   .. code-block:: console
+
+      $ port_id=`neutron port-create $net_id --name sriov_port --binding:vnic_type direct | grep "\ id\ " | awk '{ print $4 }'`
+
+#. Create the VM. For the NIC, we specify the SR-IOV port created in
+   step 2:
+
+   .. code-block:: console
+
+      $ nova boot --flavor m1.large --image ubuntu_14.04 --nic port-id=$port_id test-sriov
+
diff --git a/tox.ini b/tox.ini
index fb98cb36cf..1f88751fb7 100644
--- a/tox.ini
+++ b/tox.ini
@@ -94,7 +94,8 @@ commands = {toxinidir}/tools/generatepot-rst.sh {posargs}
 [doc8]
 # Settings for doc8:
 # Ignore target directories
-ignore-path = doc/*/target,doc/*/build*,doc/common-rst/glossary.rst
+# TODO(samos123): remove sriov from ignore when fix for #1487302 is in doc8
+ignore-path = doc/*/target,doc/*/build*,doc/common-rst/glossary.rst,doc/networking-guide/source/adv_config_sriov.rst
 # File extensions to use
 extensions = .rst,.txt
 # Maximal line length should be 79 but we have some overlong lines.