Remove HyperV: cleanup doc/code ref

Clean up documentation and code references to the removed HyperV driver.

Change-Id: I6cd8fb90829e040bfd356ff6b1c41aa9a1c906d2
Ghanshyam Mann 2024-02-05 12:23:13 -08:00
parent b068b04372
commit c76c72cfe0
17 changed files with 15 additions and 750 deletions


@@ -2619,7 +2619,6 @@ driver_diagnostics:
- ``libvirt``
- ``xenapi``
- ``hyperv``
- ``vmwareapi``
- ``ironic``
in: body


@@ -76,9 +76,6 @@ availability zones. Nova supports the following hypervisors:
- :ironic-doc:`Baremetal <>`
- `Hyper-V
<https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/hyper-v-technology-overview>`__
- `Kernel-based Virtual Machine (KVM)
<https://www.linux-kvm.org/page/Main_Page>`__


@@ -1,451 +0,0 @@
===============================
Hyper-V virtualization platform
===============================

.. todo:: This is really installation guide material and should probably be
   moved.

It is possible to use Hyper-V as a compute node within an OpenStack Deployment.
The ``nova-compute`` service runs as ``openstack-compute``, a 32-bit service
directly upon the Windows platform with the Hyper-V role enabled. The necessary
Python components as well as the ``nova-compute`` service are installed
directly onto the Windows platform. Windows Clustering Services are not needed
for functionality within the OpenStack infrastructure. The use of the Windows
Server 2012 platform is recommended for the best experience and is the platform
for active development. The following Windows platforms have been tested as
compute nodes:

- Windows Server 2012
- Windows Server 2012 R2 Server and Core (with the Hyper-V role enabled)
- Hyper-V Server

Hyper-V configuration
---------------------
The only OpenStack services required on a Hyper-V node are ``nova-compute`` and
``neutron-hyperv-agent``. Regarding the resources needed for this host you have
to consider that Hyper-V will require 16 GB - 20 GB of disk space for the OS
itself, including updates. Two NICs are required, one connected to the
management network and one to the guest data network.
The following sections discuss how to prepare the Windows Hyper-V node for
operation as an OpenStack compute node. Unless stated otherwise, any
configuration information should work for the Windows 2012 and 2012 R2
platforms.
Local storage considerations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The Hyper-V compute node needs to have ample storage for storing the virtual
machine images running on the compute nodes. You may use a single volume for
all, or partition it into an OS volume and VM volume.
.. _configure-ntp-windows:
Configure NTP
~~~~~~~~~~~~~
Network time services must be configured to ensure proper operation of the
OpenStack nodes. To set network time on your Windows host you must run the
following commands:
.. code-block:: bat

   C:\>net stop w32time
   C:\>w32tm /config "/manualpeerlist:pool.ntp.org,0x8" /syncfromflags:MANUAL
   C:\>net start w32time

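To verify that the host is actually synchronizing after the change, you can
query the Windows Time service status (the exact output varies by Windows
version, so none is shown here):

.. code-block:: bat

   C:\>w32tm /query /status
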
Keep in mind that the node will have to be time synchronized with the other
nodes of your OpenStack environment, so it is important to use the same NTP
server. Note that in case of an Active Directory environment, you may do this
only for the AD Domain Controller.
Configure Hyper-V virtual switching
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Information regarding the Hyper-V virtual Switch can be found in the `Hyper-V
Virtual Switch Overview`__.
To quickly enable an interface to be used as a Virtual Interface the
following PowerShell may be used:
.. code-block:: none

   PS C:\> $if = Get-NetIPAddress -IPAddress 192* | Get-NetIPInterface
   PS C:\> New-VMSwitch -NetAdapterName $if.ifAlias -Name YOUR_BRIDGE_NAME -AllowManagementOS $false

.. note::

   It is very important to make sure that when you are using a Hyper-V node
   with only 1 NIC the ``-AllowManagementOS`` option is set to ``True``,
   otherwise you will lose connectivity to the Hyper-V node.

__ https://technet.microsoft.com/en-us/library/hh831823.aspx
Enable iSCSI initiator service
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To prepare the Hyper-V node to attach to volumes provided by cinder, you must
first make sure the Windows iSCSI initiator service is running and set to
start automatically.
.. code-block:: none

   PS C:\> Set-Service -Name MSiSCSI -StartupType Automatic
   PS C:\> Start-Service MSiSCSI

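To confirm the service state afterwards, a quick check can be run (this is a
standard PowerShell cmdlet, not an OpenStack-specific tool):

.. code-block:: none

   PS C:\> Get-Service MSiSCSI
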
Configure shared nothing live migration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Detailed information on the configuration of live migration can be found in
`this guide`__.
The following outlines the steps of shared nothing live migration.
#. The target host ensures that live migration is enabled and properly
configured in Hyper-V.
#. The target host checks if the image to be migrated requires a base VHD and
pulls it from the Image service if not already available on the target host.
#. The source host ensures that live migration is enabled and properly
configured in Hyper-V.
#. The source host initiates a Hyper-V live migration.
#. The source host communicates to the manager the outcome of the operation.
The following three configuration options are needed in order to support
Hyper-V live migration and must be added to your ``nova.conf`` on the Hyper-V
compute node:
* This is needed to support shared nothing Hyper-V live migrations. It is used
in ``nova/compute/manager.py``.
.. code-block:: ini

   instances_shared_storage = False

* This flag is needed to support live migration to hosts with different CPU
features. This flag is checked during instance creation in order to limit the
CPU features used by the VM.
.. code-block:: ini

   limit_cpu_features = True

* This option is used to specify where instances are stored on disk.
.. code-block:: ini

   instances_path = DRIVELETTER:\PATH\TO\YOUR\INSTANCES

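Taken together, a minimal live-migration-related fragment of ``nova.conf``
might look like the following sketch (the path is illustrative, and option
placement can vary between releases):

.. code-block:: ini

   [DEFAULT]
   # Needed for shared nothing Hyper-V live migrations.
   instances_shared_storage = False
   # Limit CPU features so instances can migrate between dissimilar hosts.
   limit_cpu_features = True
   # Must be the same path on all hosts.
   instances_path = C:\OpenStack\Instances
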
Additional Requirements:
* Hyper-V 2012 R2 or Windows Server 2012 R2 with Hyper-V role enabled
* A Windows domain controller with the Hyper-V compute nodes as domain members
* The ``instances_path`` configuration option needs to be the same on all hosts
* The ``openstack-compute`` service deployed with the setup must run with
domain credentials. You can set the service credentials with:
.. code-block:: bat

   C:\>sc config openstack-compute obj="DOMAIN\username" password="password"

__ https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/manage/Use-live-migration-without-Failover-Clustering-to-move-a-virtual-machine
How to setup live migration on Hyper-V
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To enable 'shared nothing' live migration, run the three commands below on
each Hyper-V host:
.. code-block:: none

   PS C:\> Enable-VMMigration
   PS C:\> Set-VMMigrationNetwork IP_ADDRESS
   PS C:\> Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos

.. note::

   Replace the ``IP_ADDRESS`` with the address of the interface which will
   provide live migration.

Additional Reading
~~~~~~~~~~~~~~~~~~
This article clarifies the various live migration options in Hyper-V:
`Hyper-V Live Migration of Yesterday
<https://ariessysadmin.blogspot.ro/2012/04/hyper-v-live-migration-of-windows.html>`_
Install nova-compute using OpenStack Hyper-V installer
------------------------------------------------------
In case you want to avoid all the manual setup, you can use Cloudbase
Solutions' installer. You can find it here:
`HyperVNovaCompute_Beta download
<https://www.cloudbase.it/downloads/HyperVNovaCompute_Beta.msi>`_
The tool installs an independent Python environment in order to avoid conflicts
with existing applications, and dynamically generates a ``nova.conf`` file
based on the parameters provided by you.
The tool can also be used in an automated and unattended mode for deployments
on a large number of servers. More details about how to use the installer and
its features can be found here:
`Cloudbase <https://www.cloudbase.it>`_
.. _windows-requirements:
Requirements
------------
Python
~~~~~~
**Setting up Python prerequisites**
#. Download and install Python 3.8 using the MSI installer from the `Python
website`__.
.. __: https://www.python.org/downloads/windows/
.. code-block:: none

   PS C:\> $src = "https://www.python.org/ftp/python/3.8.8/python-3.8.8.exe"
   PS C:\> $dest = "$env:temp\python-3.8.8.exe"
   PS C:\> Invoke-WebRequest -Uri $src -OutFile $dest
   PS C:\> Unblock-File $dest
   PS C:\> Start-Process $dest

#. Make sure that the ``Python`` and ``Python\Scripts`` paths are set up in the
``PATH`` environment variable.
.. code-block:: none

   PS C:\> $oldPath = [System.Environment]::GetEnvironmentVariable("Path")
   PS C:\> $newPath = $oldPath + ";C:\python38\;C:\python38\Scripts\"
   PS C:\> [System.Environment]::SetEnvironmentVariable("Path", $newPath, [System.EnvironmentVariableTarget]::User)

Python dependencies
~~~~~~~~~~~~~~~~~~~
The following packages must be installed with pip:
* ``pywin32``
* ``pymysql``
* ``greenlet``
* ``pycrypto``
* ``ecdsa``
* ``amqp``
* ``wmi``
.. code-block:: none

   PS C:\> pip install ecdsa
   PS C:\> pip install amqp
   PS C:\> pip install wmi

Other dependencies
~~~~~~~~~~~~~~~~~~
``qemu-img`` is required for some of the image related operations. You can get
it from here: http://qemu.weilnetz.de/. You must make sure that the
``qemu-img`` path is set in the PATH environment variable.
Some Python packages need to be compiled, so you may use MinGW or Visual
Studio. You can get MinGW from here: http://sourceforge.net/projects/mingw/.
You must configure which compiler is to be used for this purpose by using the
``distutils.cfg`` file in ``$Python38\Lib\distutils``, which can contain:
.. code-block:: ini

   [build]
   compiler = mingw32

As a last step for setting up MinGW, make sure that the MinGW binaries'
directories are set up in PATH.
Install nova-compute
--------------------
Download the nova code
~~~~~~~~~~~~~~~~~~~~~~
#. Use Git to download the necessary source code. The installer to run Git on
Windows can be downloaded here:
https://gitforwindows.org/
#. Download the installer. Once the download is complete, run the installer and
follow the prompts in the installation wizard. The default should be
acceptable for the purposes of this guide.
#. Run the following to clone the nova code.
.. code-block:: none

   PS C:\> git.exe clone https://opendev.org/openstack/nova

Install nova-compute service
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To install ``nova-compute``, run:
.. code-block:: none

   PS C:\> cd c:\nova
   PS C:\> python setup.py install

Configure nova-compute
~~~~~~~~~~~~~~~~~~~~~~
The ``nova.conf`` file must be placed in ``C:\nova\etc\nova`` for running OpenStack
on Hyper-V. Below is a sample ``nova.conf`` for Windows:
.. code-block:: ini

   [DEFAULT]
   auth_strategy = keystone
   image_service = nova.image.glance.GlanceImageService
   compute_driver = nova.virt.hyperv.driver.HyperVDriver
   volume_api_class = nova.volume.cinder.API
   fake_network = true
   instances_path = C:\Program Files (x86)\OpenStack\Instances
   use_cow_images = true
   force_config_drive = false
   injected_network_template = C:\Program Files (x86)\OpenStack\Nova\etc\interfaces.template
   policy_file = C:\Program Files (x86)\OpenStack\Nova\etc\policy.yaml
   mkisofs_cmd = C:\Program Files (x86)\OpenStack\Nova\bin\mkisofs.exe
   allow_resize_to_same_host = true
   running_deleted_instance_action = reap
   running_deleted_instance_poll_interval = 120
   resize_confirm_window = 5
   resume_guests_state_on_host_boot = true
   rpc_response_timeout = 1800
   lock_path = C:\Program Files (x86)\OpenStack\Log\
   rpc_backend = nova.openstack.common.rpc.impl_kombu
   rabbit_host = IP_ADDRESS
   rabbit_port = 5672
   rabbit_userid = guest
   rabbit_password = Passw0rd
   logdir = C:\Program Files (x86)\OpenStack\Log\
   logfile = nova-compute.log
   instance_usage_audit = true
   instance_usage_audit_period = hour

   [glance]
   api_servers = http://IP_ADDRESS:9292

   [neutron]
   endpoint_override = http://IP_ADDRESS:9696
   auth_strategy = keystone
   project_name = service
   username = neutron
   password = Passw0rd
   auth_url = http://IP_ADDRESS:5000/v3
   auth_type = password

   [hyperv]
   vswitch_name = newVSwitch0
   limit_cpu_features = false
   config_drive_inject_password = false
   qemu_img_cmd = C:\Program Files (x86)\OpenStack\Nova\bin\qemu-img.exe
   config_drive_cdrom = true
   dynamic_memory_ratio = 1
   enable_instance_metrics_collection = true

   [rdp]
   enabled = true
   html5_proxy_base_url = https://IP_ADDRESS:4430

Prepare images for use with Hyper-V
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Hyper-V currently supports only the VHD and VHDX file formats for virtual
machine instances. Detailed instructions for installing virtual machines on
Hyper-V can be found here:
`Create Virtual Machines
<http://technet.microsoft.com/en-us/library/cc772480.aspx>`_
Once you have successfully created a virtual machine, you can upload the
image to the Image service (glance) using the ``openstack`` client:
.. code-block:: none

   PS C:\> openstack image create \
       --name "VM_IMAGE_NAME" \
       --property hypervisor_type=hyperv \
       --public \
       --container-format bare \
       --disk-format vhd

.. note::

   VHD and VHDX file sizes can be bigger than their maximum internal size,
   so you need to boot instances using a flavor with a slightly bigger disk
   size than the internal size of the disk file.

To create VHDs, use the following PowerShell cmdlet:
.. code-block:: none

   PS C:\> New-VHD DISK_NAME.vhd -SizeBytes VHD_SIZE

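For example, a hypothetical 20 GB disk (the file name and size here are
placeholders, not defaults):

.. code-block:: none

   PS C:\> New-VHD disk0.vhd -SizeBytes 20GB
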
Inject interfaces and routes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The ``interfaces.template`` file describes the network interfaces and routes
available on your system and how to activate them. You can specify the location
of the file with the :oslo.config:option:`injected_network_template`
configuration option in ``nova.conf``.
.. code-block:: ini

   injected_network_template = PATH_TO_FILE

A default template exists in ``nova/virt/interfaces.template``.
Run Compute with Hyper-V
~~~~~~~~~~~~~~~~~~~~~~~~
To start the ``nova-compute`` service, run this command from a console in the
Windows server:
.. code-block:: none

   PS C:\> C:\Python38\python.exe c:\Python38\Scripts\nova-compute --config-file c:\nova\etc\nova\nova.conf

Troubleshooting
---------------
* I ran the :command:`nova-manage service list` command from my controller;
however, I'm not seeing smiley faces for Hyper-V compute nodes, what do I do?
Verify that you are synchronized with a network time source. For
instructions about how to configure NTP on your Hyper-V compute node, see
:ref:`configure-ntp-windows`.
* How do I restart the compute service?
.. code-block:: none

   PS C:\> net stop nova-compute && net start nova-compute

* How do I restart the iSCSI initiator service?
.. code-block:: none

   PS C:\> net stop msiscsi && net start msiscsi



@@ -816,7 +816,7 @@ Tag VMware images
In a mixed hypervisor environment, OpenStack Compute uses the
``hypervisor_type`` tag to match images to the correct hypervisor type. For
VMware images, set the hypervisor type to ``vmware``. Other valid hypervisor
types include: ``hyperv``, ``ironic``, ``lxc``, and ``qemu``.
types include: ``ironic``, ``lxc``, and ``qemu``.
Note that ``qemu`` is used for both QEMU and KVM hypervisor types.
.. code-block:: console


@@ -9,7 +9,6 @@ Hypervisors
hypervisor-qemu
hypervisor-lxc
hypervisor-vmware
hypervisor-hyper-v
hypervisor-virtuozzo
hypervisor-zvm
hypervisor-ironic
@@ -36,10 +35,6 @@ The following hypervisors are supported:
* `VMware vSphere`_ 5.1.0 and newer - Runs VMware-based Linux and Windows
images through a connection with a vCenter server.
* `Hyper-V`_ - Server virtualization with Microsoft Hyper-V, use to run
Windows, Linux, and FreeBSD virtual machines. Runs ``nova-compute`` natively
on the Windows virtualization platform.
* `Virtuozzo`_ 7.0.0 and newer - OS Containers and Kernel-based Virtual
Machines supported. The supported formats include ploop and qcow2 images.
@@ -62,8 +57,6 @@ virt drivers:
* :oslo.config:option:`compute_driver` = ``vmwareapi.VMwareVCDriver``
* :oslo.config:option:`compute_driver` = ``hyperv.HyperVDriver``
* :oslo.config:option:`compute_driver` = ``zvm.ZVMDriver``
* :oslo.config:option:`compute_driver` = ``fake.FakeDriver``
@@ -75,7 +68,6 @@ virt drivers:
.. _LXC: https://linuxcontainers.org
.. _QEMU: https://wiki.qemu.org/Manual
.. _VMware vSphere: https://www.vmware.com/support/vsphere-hypervisor.html
.. _Hyper-V: https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/hyper-v-technology-overview
.. _Virtuozzo: https://www.virtuozzo.com/products/vz7.html
.. _zVM: https://www.ibm.com/it-infrastructure/z/zvm
.. _Ironic: https://docs.openstack.org/ironic/latest/


@@ -64,8 +64,7 @@ Customizing instance NUMA placement policies
.. important::
The functionality described below is currently only supported by the
libvirt/KVM and Hyper-V driver. The Hyper-V driver may require :ref:`some
host configuration <configure-hyperv-numa>` for this to work.
libvirt/KVM driver.
When running workloads on NUMA hosts, it is important that the vCPUs executing
processes are on the same NUMA node as the memory used by these processes.
@@ -223,11 +222,6 @@ memory mapping between the two nodes, run:
are greater than the available number of CPUs or memory respectively, an
exception will be raised.
.. note::
Hyper-V does not support asymmetric NUMA topologies, and the Hyper-V
driver will not spawn instances with such topologies.
For more information about the syntax for ``hw:numa_nodes``, ``hw:numa_cpus.N``
and ``hw:num_mem.N``, refer to :doc:`/configuration/extra-specs`.
@@ -241,8 +235,7 @@ Customizing instance CPU pinning policies
The functionality described below is currently only supported by the
libvirt/KVM driver and requires :ref:`some host configuration
<configure-libvirt-pinning>` for this to work. Hyper-V does not support CPU
pinning.
<configure-libvirt-pinning>` for this to work.
.. note::
@@ -377,7 +370,6 @@ Customizing instance CPU thread pinning policies
The functionality described below requires the use of pinned instances and
is therefore currently only supported by the libvirt/KVM driver and requires
:ref:`some host configuration <configure-libvirt-pinning>` for this to work.
Hyper-V does not support CPU pinning.
When running pinned instances on SMT hosts, it may also be necessary to
consider the impact that thread siblings can have on the instance workload. The
@@ -493,7 +485,6 @@ Customizing instance emulator thread pinning policies
The functionality described below requires the use of pinned instances and
is therefore currently only supported by the libvirt/KVM driver and requires
:ref:`some host configuration <configure-libvirt-pinning>` for this to work.
Hyper-V does not support CPU pinning.
In addition to the work of the guest OS and applications running in an
instance, there is a small amount of overhead associated with the underlying
@@ -826,48 +817,6 @@ if those arbitrary rules aren't enforced :
- if they decide using ``governor``, then all dedicated CPU cores *MUST* be
online.
Configuring Hyper-V compute nodes for instance NUMA policies
------------------------------------------------------------
Hyper-V is configured by default to allow instances to span multiple NUMA
nodes, regardless of whether the instances have been configured to only span N
NUMA nodes. This behaviour allows Hyper-V instances to have up to 64 vCPUs and
1 TB of memory.
Checking NUMA spanning can easily be done by running the following PowerShell
command:
.. code-block:: console

   (Get-VMHost).NumaSpanningEnabled

In order to disable this behaviour, the host will have to be configured to
disable NUMA spanning. This can be done by executing the following PowerShell
commands:
.. code-block:: console

   Set-VMHost -NumaSpanningEnabled $false
   Restart-Service vmms

In order to restore this behaviour, execute the following PowerShell commands:
.. code-block:: console

   Set-VMHost -NumaSpanningEnabled $true
   Restart-Service vmms

The *Virtual Machine Management Service* (*vmms*) is responsible for managing
the Hyper-V VMs. The VMs will still run while the service is down or
restarting, but they will not be manageable by the ``nova-compute`` service. In
order for the effects of the host NUMA spanning configuration to take effect,
the VMs will have to be restarted.
Hyper-V does not allow instances with a NUMA topology to have dynamic
memory allocation turned on. The Hyper-V driver will ignore the configured
``dynamic_memory_ratio`` from the given ``nova.conf`` file when spawning
instances with a NUMA topology.
.. Links
.. _`Image metadata`: https://docs.openstack.org/image-guide/introduction.html#image-metadata
.. _`MTTCG project`: http://wiki.qemu.org/Features/tcg-multithread


@@ -83,28 +83,6 @@ are required:
``amd_iommu=on`` parameter to the kernel parameters
* Assignable PCIe devices
To enable PCI passthrough on a Hyper-V compute node, the following are
required:
* Windows 10 or Windows / Hyper-V Server 2016 or newer
* VT-d enabled on the host
* Assignable PCI devices
In order to check the requirements above and if there are any assignable PCI
devices, run the following Powershell commands:
.. code-block:: console

   Start-BitsTransfer https://raw.githubusercontent.com/Microsoft/Virtualization-Documentation/master/hyperv-samples/benarm-powershell/DDA/survey-dda.ps1
   .\survey-dda.ps1

If the compute node passes all the requirements, the desired assignable PCI
devices must be disabled and unmounted from the host in order to be
assignable by Hyper-V. The following can be read for more details: `Hyper-V
PCI passthrough`__.
.. __: https://devblogs.microsoft.com/scripting/passing-through-devices-to-hyper-v-vms-by-using-discrete-device-assignment/
Configure ``nova-compute``
~~~~~~~~~~~~~~~~~~~~~~~~~~


@@ -280,34 +280,3 @@ To configure a minimum bandwidth allocation of 1024 Mbits/sec and a maximum of
$ openstack flavor set $FLAVOR \
--quota:vif_reservation=1024 \
--quota:vif_limit=2048
Hyper-V
~~~~~~~
CPU limits
^^^^^^^^^^
The Hyper-V driver does not support CPU limits.
Memory limits
^^^^^^^^^^^^^
The Hyper-V driver does not support memory limits.
Disk I/O limits
^^^^^^^^^^^^^^^
Hyper-V enforces disk limits through maximum total bytes and total I/O
operations per second, using the :nova:extra-spec:`quota:disk_total_bytes_sec`
and :nova:extra-spec:`quota:disk_total_iops_sec` extra specs, respectively. For
example, to set a maximum disk read/write of 10 MB/sec for a flavor:
.. code-block:: console

   $ openstack flavor set $FLAVOR \
     --property quota:disk_total_bytes_sec=10485760

Network bandwidth limits
^^^^^^^^^^^^^^^^^^^^^^^^
The Hyper-V driver does not support network bandwidth limits.


@@ -487,8 +487,7 @@ The image properties that the filter checks for are:
This was previously called ``architecture``.
``img_hv_type``
Describes the hypervisor required by the image. Examples are ``qemu``
and ``hyperv``.
Describes the hypervisor required by the image. An example is ``qemu``.
.. note::
@@ -511,7 +510,7 @@ The image properties that the filter checks for are:
.. code-block:: console
$ openstack image set --property hypervisor_type=hyperv --property \
$ openstack image set --property hypervisor_type=qemu --property \
hypervisor_version_requires=">=6000" img-uuid
.. versionchanged:: 12.0.0 (Liberty)
@@ -1051,8 +1050,7 @@ hosts with different hypervisors.
For example, the ironic virt driver uses the ironic API micro-version as the hypervisor
version for a given node. The libvirt driver uses the libvirt version
i.e. Libvirt `7.1.123` becomes `700100123` vs Ironic `1.82` becomes `1`
Hyper-V `6.3` becomes `6003`.
i.e. Libvirt `7.1.123` becomes `700100123` vs Ironic `1.82` becomes `1`.
If you have a mixed virt driver deployment in the ironic vs non-ironic
case nothing special needs to be done. ironic nodes are scheduled using custom
@@ -1225,17 +1223,6 @@ resources are overcommitted or not:
Some virt drivers may benefit from the use of these options to account for
hypervisor-specific overhead.
HyperV
Hyper-V creates a VM memory file on the local disk when an instance starts.
The size of this file corresponds to the amount of RAM allocated to the
instance.
You should configure the
:oslo.config:option:`reserved_host_disk_mb` config option to
account for this overhead, based on the amount of memory available
to instances.
Cells considerations
--------------------


@@ -19,9 +19,8 @@ Enabling Secure Boot
Currently the configuration of UEFI guest bootloaders is only supported when
using the libvirt compute driver with a :oslo.config:option:`libvirt.virt_type`
of ``kvm`` or ``qemu`` or when using the Hyper-V compute driver with certain
machine types. In both cases, it requires the guests also be configured with a
:doc:`UEFI bootloader <uefi>`.
of ``kvm`` or ``qemu``. In both cases, it requires the guests also be configured
with a :doc:`UEFI bootloader <uefi>`.
With these requirements satisfied, you can verify UEFI Secure Boot support by
inspecting the traits on the compute node's resource provider:
@@ -89,48 +88,11 @@ configured, are met. For example:
If both the image metadata property and flavor extra spec are provided,
they must match. If they do not, an error will be raised.
.. rubric:: Hyper-V
Like libvirt, configuring a guest for UEFI Secure Boot support also requires
that it be configured with a UEFI bootloader: As noted in :doc:`uefi`, it is
not possible to do this explicitly in Hyper-V. Rather, you should configure the
guest to use the *Generation 2* machine type. In addition to this, the Hyper-V
compute driver also requires that the OS type be configured.
When both of these constraints are met, you can configure UEFI Secure Boot
support using the :nova:extra-spec:`os:secure_boot` extra spec or equivalent
image metadata property. For example, to configure an image that meets all the
above requirements:
.. code-block:: bash

   $ openstack image set \
       --property hw_machine_type=hyperv-gen2 \
       --property os_type=windows \
       --property os_secure_boot=required \
       $IMAGE

As with the libvirt driver, it is also possible to request that secure boot be
disabled. This is the default behavior, so this is typically useful when an
admin wishes to explicitly prevent a user requesting secure boot. For example,
to disable secure boot via the flavor:
.. code-block:: bash

   $ openstack flavor set --property os:secure_boot=disabled $FLAVOR

However, unlike the libvirt driver, the Hyper-V driver does not respect the
``optional`` value for the image metadata property. If this is configured, it
will be silently ignored.
References
----------
* `Allow Secure Boot (SB) for QEMU- and KVM-based guests (spec)`__
* `Securing Secure Boot with System Management Mode`__
* `Generation 2 virtual machine security settings for Hyper-V`__
.. __: https://specs.openstack.org/openstack/nova-specs/specs/wallaby/approved/allow-secure-boot-for-qemu-kvm-guests.html
.. __: http://events17.linuxfoundation.org/sites/events/files/slides/kvmforum15-smm.pdf
.. __: https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/learn-more/generation-2-virtual-machine-security-settings-for-hyper-v


@@ -16,8 +16,7 @@ Enabling UEFI
Currently the configuration of UEFI guest bootloaders is only supported when
using the libvirt compute driver with a :oslo.config:option:`libvirt.virt_type`
of ``kvm`` or ``qemu`` or when using the Hyper-V compute driver with certain
machine types. When using the libvirt compute driver with AArch64-based guests,
of ``kvm`` or ``qemu``. When using the libvirt compute driver with AArch64-based guests,
UEFI is automatically enabled as AArch64 does not support BIOS.
.. todo::
@@ -41,29 +40,11 @@ architectures, you can request UEFI support with libvirt by setting the
$ openstack image set --property hw_firmware_type=uefi $IMAGE
.. rubric:: Hyper-V
It is not possible to explicitly request UEFI support with Hyper-V. Rather, it
is enabled implicitly when using `Generation 2`__ guests. You can request a
Generation 2 guest by setting the ``hw_machine_type`` image metadata property
to ``hyperv-gen2``. For example:
.. code-block:: bash

   $ openstack image set --property hw_machine_type=hyperv-gen2 $IMAGE

.. __: https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/plan/should-i-create-a-generation-1-or-2-virtual-machine-in-hyper-v
References
----------
* `Hyper-V UEFI Secure Boot (spec)`__
* `Open Virtual Machine Firmware (OVMF) Status Report`__
* `Anatomy of a boot, a QEMU perspective`__
* `Should I create a generation 1 or 2 virtual machine in Hyper-V?`__
.. __: https://specs.openstack.org/openstack/nova-specs/specs/ocata/implemented/hyper-v-uefi-secureboot.html
.. __: http://www.linux-kvm.org/downloads/lersek/ovmf-whitepaper-c770f8c.txt
.. __: https://www.qemu.org/2020/07/03/anatomy-of-a-boot/
.. __: https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/plan/should-i-create-a-generation-1-or-2-virtual-machine-in-hyper-v


@@ -176,8 +176,8 @@ from the relevant third party test, on the latest patchset, before a +2 vote
can be applied.
Specifically, changes to nova/virt/driver/<NNNN> need a +1 vote from the
respective third party CI.
For example, if you change something in the Hyper-V virt driver, you must wait
for a +1 from the Hyper-V CI on the latest patchset, before you can give that
For example, if you change something in the VMware virt driver, you must wait
for a +1 from the VMware CI on the latest patchset, before you can give that
patch set a +2 vote.
This is important to ensure:


@@ -22,10 +22,6 @@ link=https://wiki.openstack.org/wiki/ThirdPartySystems/Virtuozzo_Storage_CI
title=VMware CI
link=https://wiki.openstack.org/wiki/NovaVMware/Minesweeper
[target.hyperv]
title=Hyper-V CI
link=https://wiki.openstack.org/wiki/ThirdPartySystems/Hyper-V_CI
[target.zvm]
title=IBM zVM CI
link=https://wiki.openstack.org/wiki/ThirdPartySystems/IBM_z/VM_CI
@@ -64,7 +60,6 @@ driver-notes-libvirt-virtuozzo-ct=This is not tested in a CI system, but it is i
libvirt-virtuozzo-vm=partial
driver-notes-libvirt-virtuozzo-vm=This is not tested in a CI system, but it is implemented.
vmware=complete
hyperv=complete
ironic=unknown
zvm=complete
@@ -83,7 +78,6 @@ driver-notes-libvirt-virtuozzo-ct=This is not tested in a CI system, but it is i
libvirt-virtuozzo-vm=partial
driver-notes-libvirt-virtuozzo-vm=This is not tested in a CI system, but it is implemented.
vmware=unknown
hyperv=unknown
ironic=unknown
zvm=complete
@@ -101,7 +95,6 @@ driver-notes-libvirt-virtuozzo-ct=This is not tested in a CI system, but it is i
libvirt-virtuozzo-vm=partial
driver-notes-libvirt-virtuozzo-vm=This is not tested in a CI system, but it is implemented.
vmware=complete
hyperv=complete
ironic=unknown
zvm=complete
@@ -119,7 +112,6 @@ driver-notes-libvirt-virtuozzo-ct=This is not tested in a CI system, but it is i
libvirt-virtuozzo-vm=partial
driver-notes-libvirt-virtuozzo-vm=This is not tested in a CI system, but it is implemented.
vmware=complete
hyperv=complete
ironic=unknown
zvm=missing
@@ -137,7 +129,6 @@ libvirt-virtuozzo-ct=complete
libvirt-virtuozzo-vm=partial
driver-notes-libvirt-virtuozzo-vm=This is not tested in a CI system, but it is implemented.
vmware=complete
hyperv=complete
ironic=unknown
zvm=missing
@@ -154,7 +145,6 @@ libvirt-kvm-s390=unknown
libvirt-virtuozzo-ct=complete
libvirt-virtuozzo-vm=complete
vmware=complete
hyperv=complete
ironic=missing
zvm=missing
@@ -175,7 +165,6 @@ libvirt-virtuozzo-ct=missing
libvirt-virtuozzo-vm=complete
vmware=partial
driver-notes-vmware=This is not tested in a CI system, but it is implemented.
hyperv=complete
ironic=missing
zvm=missing
@@ -195,8 +184,6 @@ libvirt-virtuozzo-ct=unknown
libvirt-virtuozzo-vm=unknown
vmware=partial
driver-notes-vmware=This is not tested in a CI system, but it is implemented.
hyperv=partial
driver-notes-hyperv=This is not tested in a CI system, but it is implemented.
ironic=missing
zvm=partial
driver-notes-zvm=This is not tested in a CI system, but it is implemented.
@@ -216,7 +203,6 @@ libvirt-virtuozzo-vm=partial
driver-notes-libvirt-virtuozzo-vm=This is not tested in a CI system, but it is implemented.
vmware=partial
driver-notes-vmware=This is not tested in a CI system, but it is implemented.
hyperv=complete
ironic=missing
zvm=complete
@@ -235,7 +221,6 @@ driver-notes-libvirt-virtuozzo-ct=This is not tested in a CI system, but it is i
libvirt-virtuozzo-vm=partial
driver-notes-libvirt-virtuozzo-vm=This is not tested in a CI system, but it is implemented.
vmware=complete
hyperv=complete
ironic=missing
zvm=missing
@@ -253,8 +238,6 @@ libvirt-virtuozzo-ct=unknown
libvirt-virtuozzo-vm=unknown
vmware=partial
driver-notes-vmware=This is not tested in a CI system, but it is implemented.
hyperv=partial
driver-notes-hyperv=This is not tested in a CI system, but it is implemented.
ironic=missing
zvm=complete
@@ -273,8 +256,6 @@ libvirt-virtuozzo-ct=partial
driver-notes-libvirt-virtuozzo-ct=This is not tested in a CI system, but it is implemented.
libvirt-virtuozzo-vm=complete
vmware=complete
hyperv=partial
driver-notes-hyperv=This is not tested in a CI system, but it is implemented.
ironic=missing
zvm=missing
@@ -293,7 +274,6 @@ libvirt-virtuozzo-ct=missing
libvirt-virtuozzo-vm=partial
driver-notes-libvirt-virtuozzo-vm=This is not tested in a CI system, but it is implemented.
vmware=complete
hyperv=complete
ironic=partial
driver-notes-ironic=This is not tested in a CI system, but it is implemented.
zvm=complete
@@ -312,8 +292,6 @@ libvirt-kvm-s390=unknown
libvirt-virtuozzo-ct=missing
libvirt-virtuozzo-vm=missing
vmware=missing
hyperv=partial
driver-notes-hyperv=This is not tested in a CI system, but it is implemented.
ironic=missing
zvm=missing
@@ -332,6 +310,5 @@ libvirt-kvm-s390=unknown
libvirt-virtuozzo-ct=missing
libvirt-virtuozzo-vm=complete
vmware=missing
hyperv=complete
ironic=missing
zvm=missing

View File

@@ -18,10 +18,6 @@ link=https://wiki.openstack.org/wiki/ThirdPartySystems/Virtuozzo_Storage_CI
title=VMware CI
link=https://wiki.openstack.org/wiki/NovaVMware/Minesweeper
[target.hyperv]
title=Hyper-V CI
link=https://wiki.openstack.org/wiki/ThirdPartySystems/Hyper-V_CI
[target.ironic]
title=Ironic
link=http://docs.openstack.org/infra/manual/developers.html#project-gating
@@ -45,7 +41,6 @@ driver-notes-libvirt-virtuozzo-ct=This is not tested in a CI system, but it is i
libvirt-virtuozzo-vm=partial
driver-notes-libvirt-virtuozzo-vm=This is not tested in a CI system, but it is implemented.
vmware=missing
hyperv=missing
ironic=unknown
@@ -60,5 +55,4 @@ libvirt-kvm-s390=unknown
libvirt-virtuozzo-ct=unknown
libvirt-virtuozzo-vm=unknown
vmware=missing
hyperv=missing
ironic=missing

View File

@@ -134,7 +134,7 @@ Secure Boot
.. note::
   Supported by the Hyper-V and libvirt drivers.
   Supported by the libvirt driver.
.. versionchanged:: 23.0.0 (Wallaby)

View File

@@ -98,9 +98,6 @@ title=Libvirt Virtuozzo CT
[driver.vmware]
title=VMware vCenter
[driver.hyperv]
title=Hyper-V
[driver.ironic]
title=Ironic
@@ -126,7 +123,6 @@ driver.libvirt-kvm-s390x=complete
driver.libvirt-qemu-x86=complete
driver.libvirt-lxc=missing
driver.vmware=complete
driver.hyperv=complete
driver.ironic=missing
driver.libvirt-vz-vm=complete
driver.libvirt-vz-ct=missing
@@ -145,7 +141,6 @@ driver.libvirt-kvm-s390x=complete
driver.libvirt-qemu-x86=complete
driver.libvirt-lxc=missing
driver.vmware=missing
driver.hyperv=missing
driver.ironic=missing
driver.libvirt-vz-vm=complete
driver.libvirt-vz-ct=missing
@@ -163,7 +158,6 @@ driver.libvirt-kvm-s390x=complete
driver.libvirt-qemu-x86=complete
driver.libvirt-lxc=missing
driver.vmware=complete
driver.hyperv=complete
driver.ironic=missing
driver.libvirt-vz-vm=complete
driver.libvirt-vz-ct=missing
@@ -188,7 +182,6 @@ driver.libvirt-kvm-s390x=unknown
driver.libvirt-qemu-x86=complete
driver.libvirt-lxc=missing
driver.vmware=missing
driver.hyperv=missing
driver.ironic=missing
driver.libvirt-vz-vm=unknown
driver.libvirt-vz-ct=missing
@@ -212,10 +205,6 @@ driver.libvirt-kvm-s390x=complete
driver.libvirt-qemu-x86=complete
driver.libvirt-lxc=missing
driver.vmware=complete
driver.hyperv=partial
driver-notes.hyperv=Works without issue if instance is off. When
  hotplugging, only works if using Windows/Hyper-V Server 2016 and
  the instance is a Generation 2 VM.
driver.ironic=complete
driver.libvirt-vz-vm=complete
driver.libvirt-vz-ct=complete
@@ -234,7 +223,6 @@ driver.libvirt-kvm-s390x=complete
driver.libvirt-qemu-x86=complete
driver.libvirt-lxc=missing
driver.vmware=missing
driver.hyperv=complete
driver.ironic=missing
driver.libvirt-vz-vm=complete
driver.libvirt-vz-ct=missing
@@ -252,10 +240,6 @@ driver.libvirt-kvm-s390x=complete
driver.libvirt-qemu-x86=complete
driver.libvirt-lxc=missing
driver.vmware=complete
driver.hyperv=complete
driver-notes.hyperv=Works without issue if instance is off. When
  hotplugging, only works if using Windows/Hyper-V Server 2016 and
  the instance is a Generation 2 VM.
driver.ironic=complete
driver.libvirt-vz-vm=complete
driver.libvirt-vz-ct=complete
@@ -279,7 +263,6 @@ driver.libvirt-kvm-s390x=missing
driver.libvirt-qemu-x86=missing
driver.libvirt-lxc=missing
driver.vmware=missing
driver.hyperv=missing
driver.ironic=missing
driver.libvirt-vz-vm=missing
driver.libvirt-vz-ct=missing
@@ -304,7 +287,6 @@ driver.libvirt-kvm-s390x=complete
driver.libvirt-qemu-x86=unknown
driver.libvirt-lxc=unknown
driver.vmware=unknown
driver.hyperv=unknown
driver.ironic=unknown
driver.libvirt-vz-vm=missing
driver.libvirt-vz-ct=missing
@@ -326,7 +308,6 @@ driver.libvirt-kvm-s390x=complete
driver.libvirt-qemu-x86=complete
driver.libvirt-lxc=complete
driver.vmware=complete
driver.hyperv=complete
driver.ironic=complete
driver.libvirt-vz-vm=complete
driver.libvirt-vz-ct=complete
@@ -346,7 +327,6 @@ driver.libvirt-kvm-s390x=complete
driver.libvirt-qemu-x86=complete
driver.libvirt-lxc=unknown
driver.vmware=missing
driver.hyperv=missing
driver.ironic=missing
driver.libvirt-vz-vm=missing
driver.libvirt-vz-ct=missing
@@ -367,7 +347,6 @@ driver.libvirt-kvm-s390x=complete
driver.libvirt-qemu-x86=complete
driver.libvirt-lxc=complete
driver.vmware=complete
driver.hyperv=complete
driver.ironic=complete
driver.libvirt-vz-vm=complete
driver.libvirt-vz-ct=complete
@@ -386,7 +365,6 @@ driver.libvirt-kvm-s390x=complete
driver.libvirt-qemu-x86=complete
driver.libvirt-lxc=complete
driver.vmware=missing
driver.hyperv=complete
driver.ironic=missing
driver.libvirt-vz-vm=complete
driver.libvirt-vz-ct=complete
@@ -405,7 +383,6 @@ driver.libvirt-kvm-s390x=complete
driver.libvirt-qemu-x86=complete
driver.libvirt-lxc=complete
driver.vmware=complete
driver.hyperv=complete
driver.ironic=missing
driver.libvirt-vz-vm=complete
driver.libvirt-vz-ct=complete
@@ -433,7 +410,6 @@ driver.libvirt-kvm-s390x=complete
driver.libvirt-qemu-x86=complete
driver.libvirt-lxc=missing
driver.vmware=complete
driver.hyperv=complete
driver.ironic=missing
driver.libvirt-vz-vm=complete
driver.libvirt-vz-ct=complete
@@ -464,7 +440,6 @@ driver.libvirt-qemu-x86=complete
driver-notes.libvirt-qemu-x86=Requires libvirt>=1.3.3, qemu>=2.5.0
driver.libvirt-lxc=missing
driver.vmware=missing
driver.hyperv=missing
driver.ironic=missing
driver.libvirt-vz-vm=missing
driver.libvirt-vz-ct=missing
@@ -492,7 +467,6 @@ driver.libvirt-kvm-s390x=complete
driver.libvirt-qemu-x86=complete
driver.libvirt-lxc=missing
driver.vmware=missing
driver.hyperv=missing
driver.ironic=missing
driver.libvirt-vz-vm=unknown
driver.libvirt-vz-ct=unknown
@@ -512,7 +486,6 @@ driver.libvirt-kvm-s390x=complete
driver.libvirt-qemu-x86=complete
driver.libvirt-lxc=complete
driver.vmware=complete
driver.hyperv=complete
driver.ironic=complete
driver.libvirt-vz-vm=complete
driver.libvirt-vz-ct=complete
@@ -538,7 +511,6 @@ driver.libvirt-kvm-s390x=complete
driver.libvirt-qemu-x86=complete
driver.libvirt-lxc=complete
driver.vmware=missing
driver.hyperv=complete
driver.ironic=missing
driver.libvirt-vz-vm=complete
driver.libvirt-vz-ct=missing
@@ -560,7 +532,6 @@ driver.libvirt-kvm-s390x=complete
driver.libvirt-qemu-x86=complete
driver.libvirt-lxc=complete
driver.vmware=complete
driver.hyperv=complete
driver.ironic=complete
driver.libvirt-vz-vm=complete
driver.libvirt-vz-ct=complete
@@ -585,7 +556,6 @@ driver.libvirt-kvm-s390x=complete
driver.libvirt-qemu-x86=complete
driver.libvirt-lxc=missing
driver.vmware=complete
driver.hyperv=complete
driver.ironic=complete
driver.libvirt-vz-vm=complete
driver.libvirt-vz-ct=complete
@@ -610,7 +580,6 @@ driver.libvirt-kvm-s390x=complete
driver.libvirt-qemu-x86=complete
driver.libvirt-lxc=missing
driver.vmware=complete
driver.hyperv=complete
driver.ironic=missing
driver.libvirt-vz-vm=complete
driver-notes.vz-vm=Resizing Virtuozzo instances implies guest filesystem resize also
@@ -630,7 +599,6 @@ driver.libvirt-kvm-s390x=complete
driver.libvirt-qemu-x86=complete
driver.libvirt-lxc=missing
driver.vmware=complete
driver.hyperv=complete
driver.ironic=missing
driver.libvirt-vz-vm=complete
driver.libvirt-vz-ct=complete
@@ -660,7 +628,6 @@ driver.libvirt-qemu-x86=complete
driver-notes.libvirt-qemu-x86=Requires libvirt>=1.2.16 and hw_qemu_guest_agent.
driver.libvirt-lxc=missing
driver.vmware=missing
driver.hyperv=missing
driver.ironic=missing
driver.libvirt-vz-vm=complete
driver-notes.libvirt-vz-vm=Requires libvirt>=2.0.0
@@ -689,7 +656,6 @@ driver.libvirt-kvm-s390x=complete
driver.libvirt-qemu-x86=complete
driver.libvirt-lxc=missing
driver.vmware=complete
driver.hyperv=complete
driver.ironic=missing
driver.libvirt-vz-vm=complete
driver.libvirt-vz-ct=complete
@@ -721,7 +687,6 @@ driver.libvirt-kvm-s390x=complete
driver.libvirt-qemu-x86=complete
driver.libvirt-lxc=missing
driver.vmware=complete
driver.hyperv=complete
driver.ironic=missing
driver.libvirt-vz-vm=complete
driver.libvirt-vz-ct=complete
@@ -746,7 +711,6 @@ driver.libvirt-kvm-s390x=complete
driver.libvirt-qemu-x86=complete
driver.libvirt-lxc=missing
driver.vmware=missing
driver.hyperv=missing
driver.ironic=missing
driver.libvirt-vz-vm=complete
driver.libvirt-vz-ct=missing
@@ -771,7 +735,6 @@ driver-notes.libvirt-lxc=Fails in latest Ubuntu Trusty kernel
  3.13.x kernels as well as default Ubuntu Trusty latest kernel
  (3.13.0-58-generic).
driver.vmware=complete
driver.hyperv=complete
driver.ironic=complete
driver.libvirt-vz-vm=complete
driver.libvirt-vz-ct=complete
@@ -793,7 +756,6 @@ driver.libvirt-kvm-s390x=complete
driver.libvirt-qemu-x86=complete
driver.libvirt-lxc=missing
driver.vmware=missing
driver.hyperv=missing
driver.ironic=complete
driver.libvirt-vz-vm=missing
driver.libvirt-vz-ct=missing
@@ -811,7 +773,6 @@ driver.libvirt-kvm-s390x=complete
driver.libvirt-qemu-x86=complete
driver.libvirt-lxc=complete
driver.vmware=missing
driver.hyperv=complete
driver.ironic=missing
driver.libvirt-vz-vm=complete
driver.libvirt-vz-ct=complete
@@ -831,7 +792,6 @@ driver.libvirt-kvm-s390x=missing
driver.libvirt-qemu-x86=missing
driver.libvirt-lxc=missing
driver.vmware=missing
driver.hyperv=missing
driver.ironic=missing
driver.libvirt-vz-vm=missing
driver.libvirt-vz-ct=missing
@@ -854,7 +814,6 @@ driver.libvirt-kvm-s390x=complete
driver.libvirt-qemu-x86=complete
driver.libvirt-lxc=missing
driver.vmware=missing
driver.hyperv=missing
driver.ironic=missing
driver.libvirt-vz-vm=missing
driver.libvirt-vz-ct=missing
@@ -881,7 +840,6 @@ driver.libvirt-kvm-s390x=complete
driver.libvirt-qemu-x86=complete
driver.libvirt-lxc=complete
driver.vmware=complete
driver.hyperv=complete
driver.ironic=complete
driver.libvirt-vz-vm=complete
driver.libvirt-vz-ct=missing
@@ -907,7 +865,6 @@ driver.libvirt-kvm-s390x=missing
driver.libvirt-qemu-x86=complete
driver.libvirt-lxc=missing
driver.vmware=missing
driver.hyperv=missing
driver.ironic=missing
driver.libvirt-vz-vm=missing
driver.libvirt-vz-ct=missing
@@ -937,7 +894,6 @@ driver-notes.libvirt-qemu-x86=Only for Debian derived guests
driver.libvirt-lxc=missing
driver.vmware=partial
driver-notes.vmware=requires vmware tools installed
driver.hyperv=missing
driver.ironic=missing
driver.libvirt-vz-vm=missing
driver.libvirt-vz-ct=missing
@@ -962,7 +918,6 @@ driver.libvirt-kvm-s390x=missing
driver.libvirt-qemu-x86=missing
driver.libvirt-lxc=missing
driver.vmware=missing
driver.hyperv=complete
driver.ironic=missing
driver.libvirt-vz-vm=missing
driver.libvirt-vz-ct=missing
@@ -988,7 +943,6 @@ driver.libvirt-kvm-s390x=complete
driver.libvirt-qemu-x86=complete
driver.libvirt-lxc=missing
driver.vmware=complete
driver.hyperv=complete
driver.ironic=missing
driver.libvirt-vz-vm=missing
driver.libvirt-vz-ct=missing
@@ -1015,7 +969,6 @@ driver.libvirt-kvm-s390x=complete
driver.libvirt-qemu-x86=unknown
driver.libvirt-lxc=unknown
driver.vmware=missing
driver.hyperv=complete
driver.ironic=complete
driver.libvirt-vz-vm=missing
driver.libvirt-vz-ct=missing
@@ -1040,7 +993,6 @@ driver.libvirt-kvm-s390x=missing
driver.libvirt-qemu-x86=complete
driver.libvirt-lxc=missing
driver.vmware=missing
driver.hyperv=missing
driver.ironic=missing
driver.libvirt-vz-vm=missing
driver.libvirt-vz-ct=missing
@@ -1065,7 +1017,6 @@ driver.libvirt-kvm-s390x=missing
driver.libvirt-qemu-x86=complete
driver.libvirt-lxc=missing
driver.vmware=complete
driver.hyperv=missing
driver.ironic=missing
driver.libvirt-vz-vm=complete
driver.libvirt-vz-ct=complete
@@ -1092,7 +1043,6 @@ driver.libvirt-kvm-s390x=complete
driver.libvirt-qemu-x86=complete
driver.libvirt-lxc=complete
driver.vmware=complete
driver.hyperv=complete
driver.ironic=complete
driver.libvirt-vz-vm=partial
driver.libvirt-vz-ct=missing
@@ -1113,7 +1063,6 @@ driver.libvirt-kvm-s390x=complete
driver.libvirt-qemu-x86=complete
driver.libvirt-lxc=complete
driver.vmware=missing
driver.hyperv=complete
driver.ironic=missing
driver.libvirt-vz-vm=complete
driver.libvirt-vz-ct=missing
@@ -1137,7 +1086,6 @@ driver.libvirt-kvm-s390x=complete
driver.libvirt-qemu-x86=complete
driver.libvirt-lxc=complete
driver.vmware=complete
driver.hyperv=complete
driver.ironic=complete
driver.libvirt-vz-vm=complete
driver.libvirt-vz-ct=missing
@@ -1158,7 +1106,6 @@ driver.libvirt-kvm-s390x=complete
driver.libvirt-qemu-x86=complete
driver.libvirt-lxc=complete
driver.vmware=missing
driver.hyperv=complete
driver.ironic=complete
driver.libvirt-vz-vm=complete
driver.libvirt-vz-ct=missing
@@ -1181,7 +1128,6 @@ driver.libvirt-kvm-s390x=complete
driver.libvirt-qemu-x86=complete
driver.libvirt-lxc=complete
driver.vmware=complete
driver.hyperv=complete
driver.ironic=complete
driver.libvirt-vz-vm=complete
driver.libvirt-vz-ct=complete
@@ -1199,9 +1145,6 @@ driver.libvirt-kvm-s390x=missing
driver.libvirt-qemu-x86=complete
driver.libvirt-lxc=missing
driver.vmware=complete
driver.hyperv=complete
driver-notes.hyperv=In order to use uefi, a second generation Hyper-V vm must
  be requested.
driver.ironic=partial
driver-notes.ironic=depends on hardware support
driver.libvirt-vz-vm=missing
@@ -1231,7 +1174,6 @@ driver.libvirt-kvm-s390x=complete
driver.libvirt-qemu-x86=complete
driver.libvirt-lxc=unknown
driver.vmware=missing
driver.hyperv=complete
driver.ironic=missing
driver.libvirt-vz-vm=complete
driver.libvirt-vz-ct=unknown
@@ -1251,7 +1193,6 @@ driver.libvirt-kvm-s390x=complete
driver.libvirt-qemu-x86=complete
driver.libvirt-lxc=missing
driver.vmware=missing
driver.hyperv=missing
driver.ironic=missing
driver.libvirt-vz-vm=missing
driver.libvirt-vz-ct=missing
@@ -1269,7 +1210,6 @@ driver.libvirt-kvm-s390x=complete
driver.libvirt-qemu-x86=complete
driver.libvirt-lxc=missing
driver.vmware=missing
driver.hyperv=missing
driver.ironic=missing
driver.libvirt-vz-vm=missing
driver.libvirt-vz-ct=missing
@@ -1292,7 +1232,6 @@ driver.libvirt-kvm-s390x=complete
driver.libvirt-qemu-x86=complete
driver.libvirt-lxc=missing
driver.vmware=missing
driver.hyperv=missing
driver.ironic=missing
driver.libvirt-vz-vm=complete
driver.libvirt-vz-ct=missing
@@ -1321,7 +1260,6 @@ driver.libvirt-qemu-x86=complete
driver-notes.libvirt-qemu-x86=The same restrictions apply as KVM x86.
driver.libvirt-lxc=missing
driver.vmware=missing
driver.hyperv=missing
driver.ironic=missing
driver.libvirt-vz-vm=unknown
driver.libvirt-vz-ct=missing
@@ -1343,7 +1281,6 @@ driver.libvirt-kvm-s390x=complete
driver.libvirt-qemu-x86=complete
driver.libvirt-lxc=complete
driver.vmware=missing
driver.hyperv=missing
driver.ironic=missing
driver.libvirt-vz-vm=complete
driver.libvirt-vz-ct=complete
@@ -1365,7 +1302,6 @@ driver.libvirt-kvm-s390x=unknown
driver.libvirt-qemu-x86=complete
driver.libvirt-lxc=missing
driver.vmware=missing
driver.hyperv=missing
driver.ironic=missing
driver.libvirt-vz-vm=missing
driver.libvirt-vz-ct=missing
@@ -1385,7 +1321,6 @@ driver.libvirt-kvm-s390x=missing
driver.libvirt-qemu-x86=complete
driver.libvirt-lxc=missing
driver.vmware=missing
driver.hyperv=missing
driver.ironic=missing
driver.libvirt-vz-vm=missing
driver.libvirt-vz-ct=missing
@@ -1407,7 +1342,6 @@ driver.libvirt-kvm-s390x=missing
driver.libvirt-qemu-x86=complete
driver.libvirt-lxc=missing
driver.vmware=missing
driver.hyperv=missing
driver.ironic=missing
driver.libvirt-vz-vm=missing
driver.libvirt-vz-ct=missing
@@ -1432,7 +1366,6 @@ driver.libvirt-kvm-s390x=missing
driver.libvirt-qemu-x86=missing
driver.libvirt-lxc=missing
driver.vmware=missing
driver.hyperv=missing
driver.ironic=missing
driver.libvirt-vz-vm=missing
driver.libvirt-vz-ct=missing
@@ -1454,7 +1387,6 @@ driver.libvirt-kvm-s390x=complete
driver.libvirt-qemu-x86=complete
driver.libvirt-lxc=unknown
driver.vmware=partial
driver.hyperv=partial
driver.ironic=missing
driver.libvirt-vz-vm=complete
driver.libvirt-vz-ct=complete
@@ -1477,7 +1409,6 @@ driver.libvirt-qemu-x86=partial
driver-notes.libvirt-qemu-x86=Move operations are not yet supported.
driver.libvirt-lxc=missing
driver.vmware=missing
driver.hyperv=missing
driver.ironic=missing
driver.libvirt-vz-vm=missing
driver.libvirt-vz-ct=missing

View File

@@ -58,11 +58,11 @@ class HackingTestCase(test.NoDBTestCase):
self.assertEqual(expect, checks.import_no_virt_driver_import_deps(
"from nova.virt.libvirt import utils as libvirt_utils",
"./nova/virt/hyperv/driver.py"))
"./nova/virt/zvm/driver.py"))
self.assertEqual(expect, checks.import_no_virt_driver_import_deps(
"import nova.virt.libvirt.utils as libvirt_utils",
"./nova/virt/hyperv/driver.py"))
"./nova/virt/zvm/driver.py"))
self.assertIsNone(checks.import_no_virt_driver_import_deps(
"from nova.virt.libvirt import utils as libvirt_utils",
@@ -72,7 +72,7 @@ class HackingTestCase(test.NoDBTestCase):
self.assertIsInstance(checks.import_no_virt_driver_config_deps(
"CONF.import_opt('volume_drivers', "
"'nova.virt.libvirt.driver', group='libvirt')",
"./nova/virt/hyperv/driver.py"), tuple)
"./nova/virt/zvm/driver.py"), tuple)
self.assertIsNone(checks.import_no_virt_driver_config_deps(
"CONF.import_opt('volume_drivers', "