diff --git a/doc/source/configuration/index.rst b/doc/source/configuration/index.rst
index 293df825..438e3d88 100644
--- a/doc/source/configuration/index.rst
+++ b/doc/source/configuration/index.rst
@@ -81,7 +81,53 @@ Alternatively, CSVs can be used:
 Block Storage (Cinder) configuration
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-TODO
+This section describes Nova configuration options that handle the way in which
+Cinder volumes are consumed.
+
+When there are multiple paths connecting the host to the storage backend, make
+sure to enable the following config option:
+
+.. code-block:: ini
+
+    [hyperv]
+    use_multipath_io = True
+
+This will ensure that the available paths are actually leveraged. Also, before
+attempting any volume connection, it will ensure that the MPIO service is
+enabled and that passthrough block devices (iSCSI / FC) are claimed by MPIO.
+SMB-backed volumes are not affected by this option.
+
+In some cases, Nova may fail to attach volumes due to transient connectivity
+issues. The following options specify how many retries should be performed and
+how often.
+
+.. code-block:: ini
+
+    [hyperv]
+    # These are the default values.
+    volume_attach_retry_count = 10
+    volume_attach_retry_interval = 5
+
+    # The following options only apply to disk scan retries.
+    mounted_disk_query_retry_count = 10
+    mounted_disk_query_retry_interval = 5
+
+When one or more hardware iSCSI initiators are available, you may use the
+following config option to explicitly tell Nova which iSCSI initiators to use:
+
+.. code-block:: ini
+
+    [hyperv]
+    iscsi_initiator_list = PCI\VEN_1077&DEV_2031&SUBSYS_17E8103C&REV_02\4&257301f0&0&0010_0, PCI\VEN_1077&DEV_2031&SUBSYS_17E8103C&REV_02\4&257301f0&0&0010_1
+
+The list of available initiators may be retrieved using:
+
+.. code-block:: powershell
+
+    Get-InitiatorPort
+
+If no iSCSI initiator is specified, the MS iSCSI Initiator service will pick
+only one of the available ones when establishing iSCSI sessions.
 
 
 Live migration configuration
diff --git a/doc/source/install/prerequisites.rst b/doc/source/install/prerequisites.rst
index 525faf00..257f3277 100644
--- a/doc/source/install/prerequisites.rst
+++ b/doc/source/install/prerequisites.rst
@@ -52,10 +52,18 @@ not. If all the requirements are met, the host is Hyper-V capable.
 Storage considerations
 ----------------------
 
-The Hyper-V compute nodes needs to have ample storage for storing the virtual
-machine images running on the compute nodes (for boot-from-image instances).
+Instance files
+~~~~~~~~~~~~~~
 
-For Hyper-V compute nodes, the following storage options are available:
+Nova will use a pre-configured directory (see the example below) for storing
+instance files such as:
+
+* instance boot images and ``ephemeral`` disk images
+* instance config files (config drive image and Hyper-V files)
+* instance console log
+* cached Glance images
+* snapshot files
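+
+The location of this directory is typically controlled through the Nova
+``instances_path`` option. For example (the path below is purely
+illustrative):
+
+.. code-block:: ini
+
+    [DEFAULT]
+    # Example path; a local disk, SMB share or CSV mount point may be used.
+    instances_path = C:\OpenStack\Instances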
+
+The following options are available for the instance directory:
 
 * Local disk.
 * SMB shares. Make sure that they are persistent.
@@ -64,6 +72,11 @@ For Hyper-V compute nodes, the following storage options are available:
   * Storage Spaces Direct (``S2D``)
   * SAN LUNs as underlying CSV storage
 
+.. note::
+
+    Ample storage may be required when using Nova "local" storage for the
+    instance virtual disk images (as opposed to booting from Cinder volumes).
+
 Compute nodes can be configured to use the same storage option. Doing so will
 result in faster cold / live migration operations to other compute nodes using
 the same storage, but there's a risk of disk overcommitment. Nova is not aware
@@ -77,6 +90,99 @@ to spawn only one instance, but both will spawn on different hosts,
 overcommiting the disk by 60 GB.
 
 
+Cinder volumes
+~~~~~~~~~~~~~~
+
+The Nova Hyper-V driver can attach Cinder volumes exposed through the
+following protocols:
+
+* iSCSI
+* Fibre Channel
+* SMB - the volumes are stored as virtual disk images (e.g. VHD / VHDX)
+
+.. note::
+
+    The Nova Hyper-V Cluster driver only supports SMB-backed volumes. The
+    reason is that the volumes need to be available on the destination
+    host during an unexpected instance failover.
+
+Before configuring Nova, you should ensure that the Hyper-V compute nodes
+can properly access the storage backend used by Cinder.
+
+The MSI installer can enable the Microsoft Software iSCSI initiator for you.
+When using hardware iSCSI initiators or Fibre Channel, make sure that the HBAs
+are properly configured and the drivers are up to date.
+
+Please consult your storage vendor documentation to see if there are any other
+special requirements (e.g. additional software to be installed, such as iSCSI
+DSMs - Device Specific Modules).
+
+Some Cinder backends require pre-configured information about the hosts that
+are going to consume the volumes (e.g. the operating system type), specified
+through volume types or the Cinder Volume config file. The LUNs will then be
+created/exposed based on this information, as the supported SCSI command set
+may differ depending on the operating system. An incorrect LUN type may
+prevent Windows nodes from accessing the volumes (although generic LUN types
+should be fine in most cases).
+
+Multipath IO
+""""""""""""
+
+You may set up multiple paths between your Windows hosts and the storage
+backends in order to provide increased throughput and fault tolerance.
+
+When using iSCSI or Fibre Channel, make sure to enable and configure the
+MPIO service. MPIO is a service that manages available disk paths, performing
+failover and load balancing based on pre-configured policies. It is
+extensible, in the sense that vendor-provided Device Specific Modules (DSMs)
+may be imported.
+
+The MPIO service will ensure that LUNs accessible through multiple paths are
+exposed by the OS as a single disk drive.
+
+.. warning::
+
+    If multiple disk paths are available and the MPIO service is not
+    configured properly, the same LUN can be exposed as multiple disk drives
+    (one per available path). This must be addressed urgently as it can
+    potentially lead to data corruption.
+
+Run the following to enable the MPIO service:
+
+.. code-block:: powershell
+
+    Enable-WindowsOptionalFeature -Online -FeatureName MultiPathIO
+
+    # Ensure that the "mpio" service is running
+    Get-Service mpio
+
+Once you have enabled MPIO, make sure to configure it to automatically
+claim volumes exposed by the desired storage backend. If needed, import
+vendor-provided DSMs.
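+
+For example, MPIO may be told to automatically claim devices exposed over
+iSCSI (shown below as a sketch; the bus types to claim and any DSM
+configuration depend on your storage backend and vendor documentation):
+
+.. code-block:: powershell
+
+    # Review which bus types MPIO currently claims automatically.
+    Get-MSDSMAutomaticClaimSettings
+
+    # Automatically claim devices exposed over iSCSI.
+    Enable-MSDSMAutomaticClaim -BusType iSCSI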
+
+For more details about Windows MPIO, check the following `page`__.
+
+__ https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/ee619734(v=ws.10)
+
+SMB 3.0 and later versions also support using multiple paths to a share (the
+UNC path can be the same), leveraging ``SMB Direct`` and ``SMB Multichannel``.
+
+By default, all available paths will be used when accessing SMB shares.
+You can configure constraints in order to choose which adapters should
+be used when connecting to SMB shares (for example, to avoid using a
+management network for SMB traffic).
+
+.. note::
+
+    SMB does not require or interact in any way with the MPIO service.
+
+For best performance, ``SMB Direct`` (RDMA) should also be used if your
+network cards support it.
+
+For more details about ``SMB Multichannel``, check the following
+`blog post`__.
+
+__ https://blogs.technet.microsoft.com/josebda/2012/06/28/the-basics-of-smb-multichannel-a-feature-of-windows-server-2012-and-smb-3-0/
+
+
 
 NTP configuration
 -----------------