Adds documentation

This patch adds basic details about:

* Supported OS versions
* Requirements and host configurations
* MSI installer, its features, and arguments
* Further steps after installing and configuring the
  nova-compute service
* Troubleshooting tips
* Usage

Change-Id: Icf7ba18121547ba7ea37fdefdb3c9510aeb1654e
Claudiu Belu 2018-01-25 08:34:19 -08:00
parent 264efe76c1
commit efe5d8634b
14 changed files with 569 additions and 36 deletions


@ -23,6 +23,7 @@ sys.path.insert(0, os.path.abspath('../..'))
extensions = [
'sphinx.ext.autodoc',
'oslo_config.sphinxconfiggen',
'oslo_config.sphinxext',
#'sphinx.ext.intersphinx',
'openstackdocstheme'
]


@ -0,0 +1,10 @@
===============================
Configuration options reference
===============================

The following is an overview of all available configuration options in Nova
and compute-hyperv.

For a sample configuration file, refer to :ref:`config_sample`.

.. show-options::
   :config-file: etc/compute-hyperv-config-generator.conf


@ -0,0 +1,113 @@
.. _config_index:

=============
Configuration
=============

In addition to the Nova config options, compute-hyperv has a few extra
configuration options. For a sample configuration file, refer to
:ref:`config_sample`.

Driver configuration
--------------------

In order to use the compute-hyperv Nova driver, the following configuration
option must be set in the ``nova.conf`` file:

.. code-block:: ini

   [DEFAULT]
   compute_driver = compute_hyperv.driver.HyperVDriver

For Hyper-V Clusters, set the following instead:

.. code-block:: ini

   [DEFAULT]
   compute_driver = compute_hyperv.cluster.driver.HyperVClusterDriver
   instances_path = path\to\cluster\wide\storage\location
   sync_power_state_interval = -1

   [workarounds]
   handle_virt_lifecycle_events = False

By default, the OpenStack Hyper-V installer will configure the
``nova-compute`` service to use the ``compute_hyperv.driver.HyperVDriver``
driver.

Storage configuration
---------------------

When spawning instances, ``nova-compute`` will create the VM-related files
(VM configuration file, ephemerals, configdrive, console.log, etc.) in the
location specified by the ``instances_path`` configuration option, even if
the instance is volume-backed.

It is not recommended for Nova and Cinder to use the same storage location,
as that can create scheduling and disk overcommitment issues.

Nova instance files location
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default, the OpenStack Hyper-V installer will configure ``nova-compute``
to use the following path as the ``instances_path``:

.. code-block:: ini

   [DEFAULT]
   instances_path = C:\OpenStack\Instances

``instances_path`` can be set to an SMB share, mounted or unmounted:

.. code-block:: ini

   [DEFAULT]
   # in this case, X is a persistently mounted SMB share.
   instances_path = X:\OpenStack\Instances

   # or
   instances_path = \\SMB_SERVER\share_name\OpenStack\Instances

Alternatively, CSVs can be used:

.. code-block:: ini

   [DEFAULT]
   instances_path = C:\ClusterStorage\Volume1\OpenStack\Instances

Block Storage (Cinder) configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TODO

Live migration configuration
----------------------------

For live migrating virtual machines to hosts with different CPU features,
the following configuration option must be set in the compute node's
``nova.conf`` file:

.. code-block:: ini

   [hyperv]
   limit_cpu_features = True

Keep in mind that changing this configuration option will not affect
instances that are already spawned. Instances spawned with this flag set to
``False`` will not be able to live migrate to hosts with different CPU
features; they will have to be shut down and rebuilt, or have the setting
changed manually.

Configuration options
---------------------

.. toctree::
   :maxdepth: 1

   config
   sample_config


@ -1,15 +1,17 @@
.. _config_sample:

====================
Configuration sample
====================

The following is a sample compute-hyperv configuration for adaptation and
use.

The sample configuration can also be viewed in :download:`file form
</_static/compute-hyperv.conf.sample>`.

Config options that are specific to the Hyper-V Nova driver can be found in
the ``[hyperv]`` config group section.

.. important::


@ -3,23 +3,32 @@
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
==============================================
Welcome to the documentation of compute_hyperv
==============================================

Starting with Folsom, Hyper-V can be used as a compute node within OpenStack
deployments.

This documentation contains information on how to set up and configure
Hyper-V hosts as OpenStack compute nodes, more specifically:

* Supported OS versions
* Requirements and host configurations
* How to install the necessary OpenStack services
* ``nova-compute`` configuration options
* Troubleshooting and debugging tips & tricks

Contents:

.. toctree::
   :maxdepth: 2

   install/index
   troubleshooting/index
   configuration/index
   usage/index

* :ref:`search`


@ -0,0 +1,33 @@
==================
Installation guide
==================

The compute-hyperv project offers two Nova Hyper-V drivers, providing
additional features and bug fixes compared to the in-tree Nova Hyper-V
driver:

* ``compute_hyperv.driver.HyperVDriver``
* ``compute_hyperv.cluster.driver.HyperVClusterDriver``

These drivers receive the same degree of testing as the upstream driver (if
not more), being covered by a range of official OpenStack Continuous
Integration (CI) systems.

Most production Hyper-V based OpenStack deployments use the compute-hyperv
drivers.

The ``HyperVClusterDriver`` can be used on Hyper-V Cluster compute nodes and
will create and manage highly available clustered virtual machines.

This chapter assumes a working setup of OpenStack following the
`OpenStack Installation Tutorial
<https://docs.openstack.org/install-guide/>`_.

.. toctree::
   :maxdepth: 2

   prerequisites.rst
   install.rst
   next-steps.rst
   verify.rst


@ -0,0 +1,66 @@
.. _install:

Install
~~~~~~~

This section describes how to install a Hyper-V Nova compute node into an
OpenStack deployment. For details about configuration, refer to
:ref:`config_index`.

This section assumes that you already have a working OpenStack environment.

The easiest way to install and configure the ``nova-compute`` service is to
use an MSI, which can be freely downloaded from:
https://cloudbase.it/openstack-hyperv-driver/

The MSI can optionally include the installation and / or configuration of:

* Neutron L2 agents: Neutron Hyper-V Agent, Neutron OVS Agent (if OVS is
  installed on the compute node).
* Ceilometer Polling Agent.
* Windows services for the mentioned agents.
* Live migration feature (if the compute node is joined to an AD domain).
* OVS vSwitch extension, OVS bridge, OVS tunnel IP (if OVS is installed and
  the Neutron OVS Agent is used).
* FreeRDP
* iSCSI Initiator

MSIs can be installed normally through their GUI, or in unattended mode
(useful for automation) by executing the following command:

.. code-block:: bat

   msiexec /i \path\to\the\HyperVNovaCompute.msi /qn /l*v log.txt

The command above will install the given MSI in quiet, no-UI mode and will
output its verbose logs into the given ``log.txt`` file. Additional
key-value arguments can be given to the MSI for configuration. Some of the
configurations are:

* ``ADDLOCAL``: Comma-separated list of features to install. Acceptable
  values:
  ``HyperVNovaCompute,NeutronHyperVAgent,iSCSISWInitiator,FreeRDP``
* ``INSTALLDIR``: The location where the OpenStack services and their
  configuration files are installed. By default, they are installed in:
  ``%ProgramFiles%\Cloudbase Solutions\OpenStack\Nova``
* ``SKIPNOVACONF``: Installs the MSI without doing any of the other actions:
  creating configuration files, services, vSwitches, OVS bridges, etc.

Example:

.. code-block:: powershell

   msiexec /i HyperVNovaCompute.msi /qn /l*v log.txt `
       ADDLOCAL="HyperVNovaCompute,NeutronHyperVAgent,iSCSISWInitiator,FreeRDP"

After installing the OpenStack services on the Hyper-V compute node, check
that they are up and running:

.. code-block:: powershell

   Get-Service nova-compute
   Get-Service neutron-*
   Get-Service ceilometer-*   # if the Ceilometer Polling Agent has been installed

All the listed services must have the ``Running`` status. If not, refer to
:ref:`troubleshooting`.


@ -0,0 +1,62 @@
.. _next-steps:

Next steps
~~~~~~~~~~

Your OpenStack environment now includes the ``nova-compute`` service,
installed and configured with the compute-hyperv driver.

If the OpenStack services are running on the Hyper-V compute node, make sure
that they are reporting to the OpenStack controller and that they are alive
by running the following:

.. code-block:: bash

   neutron agent-list
   nova service-list

The output should contain the Hyper-V host's ``nova-compute`` service and
Neutron L2 agent (either a Neutron Hyper-V Agent or a Neutron OVS Agent) as
alive / running.

Starting with Ocata, Nova cells are mandatory. Make sure that the newly
added Hyper-V compute node is mapped into a Nova cell, otherwise Nova will
not build any instances on it.
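
As a sketch, the cell mapping can be refreshed from the controller node
using the standard ``nova-manage`` utility:

.. code-block:: bash

   # Map any newly started compute services into their cells.
   nova-manage cell_v2 discover_hosts --verbose

   # List the hosts currently mapped to cells; the Hyper-V node
   # should appear in the output.
   nova-manage cell_v2 list_hosts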

If the Neutron Hyper-V Agent has been chosen as an L2 agent, make sure that
the Neutron server meets the following requirements:

* ``networking-hyperv`` is installed. To check if ``networking-hyperv`` is
  installed, run the following:

  .. code-block:: bash

     pip freeze | grep networking-hyperv

  If there is no output, it can be installed by running the command:

  .. code-block:: bash

     pip install networking-hyperv==VERSION

  The ``VERSION`` depends on your OpenStack deployment version. For example,
  for Queens, the ``VERSION`` is 6.0.0. For other release names and
  versions, see:
  https://github.com/openstack/networking-hyperv/releases

* The Neutron server has been configured to use the ``hyperv`` mechanism
  driver. The configuration option can be found in
  ``/etc/neutron/plugins/ml2/ml2_conf.ini``:

  .. code-block:: ini

     [ml2]
     mechanism_drivers = openvswitch,hyperv

If the configuration file has been modified, or ``networking-hyperv`` has
been installed, the Neutron server service will have to be restarted.

Additionally, keep in mind that the Neutron Hyper-V Agent only supports the
following network types: local, flat, VLAN. Ports with any other network
type will result in a ``PortBindingFailed`` exception. If tunneling is
desired, the Neutron OVS Agent should be used instead.
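
As a sketch, the Neutron server's ML2 configuration can be restricted to the
network types the Hyper-V agent supports; the physical network name and VLAN
range below are placeholders for your environment:

.. code-block:: ini

   [ml2]
   type_drivers = local,flat,vlan
   tenant_network_types = vlan

   [ml2_type_vlan]
   # example physical network / VLAN range mapping; adjust as needed.
   network_vlan_ranges = physnet1:1000:2999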


@ -0,0 +1,128 @@
=============
Prerequisites
=============

Starting with Folsom, Hyper-V can be used as a compute node within OpenStack
deployments.

The Hyper-V versions that are currently supported are:

* (deprecated) Windows / Hyper-V Server 2012
* Windows / Hyper-V Server 2012 R2
* Windows / Hyper-V Server 2016

Newer Hyper-V versions come with an extended list of features and can offer
better overall performance. Thus, Windows / Hyper-V Server 2016 is
recommended for the best experience.

Hardware requirements
---------------------

Although this document does not provide a complete list of Hyper-V
compatible hardware, the following items are necessary:

* 64-bit processor with Second Level Address Translation (SLAT).
* CPU support for VM Monitor Mode Extension (VT-c on Intel CPUs).
* Minimum of 4 GB memory. As virtual machines share memory with the Hyper-V
  host, you will need to provide enough memory to handle the expected
  virtual workload.
* Minimum 16-20 GB of disk space for the OS itself and updates.
* At least one NIC, but optimally two NICs: one connected to the management
  network, and one connected to the guest data network. If a single NIC is
  used, when creating the Hyper-V vSwitch, make sure the
  ``-AllowManagementOS`` option is set to ``True``, otherwise you will lose
  connectivity to the host.
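
For example, a single-NIC external vSwitch that keeps host connectivity can
be created as follows (the switch and adapter names are placeholders):

.. code-block:: powershell

   # Create an external vSwitch bound to the "Ethernet0" physical adapter,
   # keeping a host vNIC so management connectivity is preserved.
   New-VMSwitch -Name external -NetAdapterName "Ethernet0" -AllowManagementOS $true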

The following items will need to be enabled in the system BIOS:

* Virtualization Technology - may have a different label depending on the
  motherboard manufacturer.
* Hardware Enforced Data Execution Prevention.

To check a host's Hyper-V compatibility, open up cmd or PowerShell and run:

.. code-block:: bat

   systeminfo

The output will include the Hyper-V requirements and whether or not the host
meets them. If all the requirements are met, the host is Hyper-V capable.

Storage considerations
----------------------

Hyper-V compute nodes need ample storage for the virtual machine images
running on them (for boot-from-image instances). The following storage
options are available for Hyper-V compute nodes:

* Local disk.
* SMB shares. Make sure that they are persistent.
* Cluster Shared Volumes (``CSV``)
* Storage Spaces
* Storage Spaces Direct (``S2D``)
* SAN LUNs as underlying CSV storage

Compute nodes can be configured to use the same storage option. Doing so
will result in faster cold / live migration operations to other compute
nodes using the same storage, but there is a risk of disk overcommitment.
Nova is not aware of compute nodes sharing the same storage and, because of
this, the Nova scheduler might pick a host it normally would not.

For example, hosts A and B are configured to use the same 100 GB SMB share.
Both compute nodes will report 100 GB of available storage. If Nova has to
spawn two instances requiring 80 GB of storage each, it would normally be
able to spawn only one, but both may spawn on the two different hosts,
overcommitting the disk by 60 GB.

NTP configuration
-----------------

Network time services must be configured to ensure proper operation of the
OpenStack nodes. To set network time on your Windows host, you must run the
following commands:

.. code-block:: bat

   net stop w32time
   w32tm /config /manualpeerlist:pool.ntp.org,0x8 /syncfromflags:MANUAL
   net start w32time

Keep in mind that the node will have to be time-synchronized with the other
nodes of your OpenStack environment, so it is important to use the same NTP
server. Note that in the case of an Active Directory environment, you may
have to do this only for the AD Domain Controller.

Live migration configuration
----------------------------

In order for the live migration feature to work on the Hyper-V compute
nodes, the following items are required:

* A Windows domain controller with the Hyper-V compute nodes as domain
  members.
* The ``nova-compute`` service must run with domain credentials. You can set
  the service credentials with:

  .. code-block:: bat

     sc.exe config openstack-compute obj="DOMAIN\username" password="password"

`This guide`__ contains information on how to set up and configure live
migration on your Hyper-V compute nodes (authentication options, constrained
delegation, migration performance options, etc.), along with a few
troubleshooting tips.

__ https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/manage/Use-live-migration-without-Failover-Clustering-to-move-a-virtual-machine

Hyper-V Cluster configuration
-----------------------------

compute-hyperv also offers a driver for Hyper-V Cluster nodes, which is able
to create and manage highly available virtual machines. For the Hyper-V
Cluster Driver to be usable, the Hyper-V Cluster nodes must be joined to an
Active Directory domain and a Microsoft Failover Cluster. The nodes in a
Hyper-V Cluster must be identical.


@ -0,0 +1,13 @@
.. _verify:

Verify operation
~~~~~~~~~~~~~~~~

Verify that instances can be created on the Hyper-V compute node through
Nova. If spawning fails, check the nova-compute log file on the Hyper-V
compute node for relevant information (by default, it can be found in
``C:\OpenStack\Log\``). Additionally, setting the ``debug`` configuration
option in ``nova.conf`` will help troubleshoot the issue.

If there is no relevant information in the compute node's logs, check the
Nova controller's logs.
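
For example, a test boot can be attempted from a node with the ``openstack``
client installed (the image, flavor, and network names below are
placeholders for resources existing in your deployment):

.. code-block:: bash

   openstack server create --image cirros-vhd --flavor m1.tiny \
       --network private test-hyperv-vm

   # Check the instance status and the host it was scheduled on.
   openstack server show test-hyperv-vm -c status -c OS-EXT-SRV-ATTR:host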


@ -1,12 +0,0 @@
============
Installation
============
At the command line::
$ pip install compute-hyperv
Or, if you have virtualenvwrapper installed::
$ mkvirtualenv compute-hyperv
$ pip install compute-hyperv


@ -0,0 +1,83 @@
.. _troubleshooting:

=====================
Troubleshooting guide
=====================

This section contains a few tips and tricks which can help you troubleshoot
and solve your Hyper-V compute node's potential issues.

OpenStack services not running
------------------------------

You can check if the OpenStack services are up by running:

.. code-block:: powershell

   Get-Service nova-compute
   Get-Service neutron-*

All the listed services must have the ``Running`` status. If not, check
their logs, which can typically be found in ``C:\OpenStack\Log\``. If there
are no logs, try to run the services manually. To see how to run
``nova-compute`` manually, run the following command:

.. code-block:: powershell

   sc.exe qc nova-compute

The output will contain ``BINARY_PATH_NAME`` with the service's command.
The command will contain the path to the ``nova-compute.exe`` executable and
the path to its configuration file. Edit the configuration file and add the
following:

.. code-block:: ini

   [DEFAULT]
   debug = True
   use_stderr = True

This will help troubleshoot the service's issues. Next, run ``nova-compute``
manually in PowerShell:

.. code-block:: powershell

   &"C:\Program Files\Cloudbase Solutions\OpenStack\Nova\Python27\Scripts\nova-compute.exe" `
       --config-file "C:\Program Files\Cloudbase Solutions\OpenStack\Nova\etc\nova.conf"

The reason why the service could not be started should be visible in the
output.

Live migration
--------------

`This guide`__ offers a few tips for troubleshooting live migration issues.

If live migration fails because the nodes have incompatible hardware, refer
to :ref:`config_index`.

__ https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/manage/Use-live-migration-without-Failover-Clustering-to-move-a-virtual-machine

How to restart a service on Hyper-V
-----------------------------------

Restarting an OpenStack service can easily be done through PowerShell:

.. code-block:: powershell

   Restart-Service service-name

or through cmd:

.. code-block:: bat

   net stop service_name && net start service_name

For example, the following command will restart the iSCSI initiator service:

.. code-block:: powershell

   Restart-Service msiscsi


@ -1,7 +0,0 @@
========
Usage
========
To use compute-hyperv in a project::
import compute_hyperv


@ -0,0 +1,32 @@
===========
Usage guide
===========

This section contains information on how to create Glance images for Hyper-V
compute nodes and how to use various Hyper-V features through image metadata
properties and Nova flavor extra specs.

Prepare images for use with Hyper-V
-----------------------------------

Hyper-V currently supports only the VHD and VHDx file formats for virtual
machines.
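
Images in other formats can usually be converted with the ``qemu-img`` tool
(a sketch; the file names are placeholders):

.. code-block:: bash

   # Convert a QCOW2 image to VHDx.
   qemu-img convert -f qcow2 -O vhdx ubuntu.qcow2 ubuntu.vhdx

   # Or to the older dynamic VHD format.
   qemu-img convert -f qcow2 -O vpc -o subformat=dynamic ubuntu.qcow2 ubuntu.vhd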

OpenStack Hyper-V images should have the following items installed:

* cloud-init (Linux) or cloudbase-init (Windows)
* Linux Integration Services (on Linux-type OSes)

Images can be uploaded to Glance using the ``openstack`` client:

.. code-block:: bash

   openstack image create --name "VM_IMAGE_NAME" --property hypervisor_type=hyperv \
       --public --container-format bare --disk-format vhd --file /path/to/image

.. note::

   VHD and VHDx file sizes can be bigger than their maximum internal size.
   As such, you need to boot instances using a flavor with a slightly bigger
   disk size than the internal size of the disk files.
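
The internal (virtual) size of a disk file can be checked with ``qemu-img``,
if available; the flavor's disk size must exceed the reported
``virtual size`` (the file name below is a placeholder):

.. code-block:: bash

   qemu-img info ubuntu.vhd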