Live migrating to a host with cpu_shared_set configured will now
update the VM's configuration accordingly.
Example: live migrating a VM from a source host with cpu_shared_set=0,1
to a destination host with cpu_shared_set=2,3 will now update the
VM configuration (<vcpu cpuset="0-1"> becomes <vcpu cpuset="2-3">).
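The cpuset rewrite above can be sketched in plain Python. This is a minimal, self-contained illustration of the translation between a cpu_shared_set string and the cpuset range written into the domain XML; the helper names here are illustrative, not Nova's actual API.

```python
def parse_cpu_set(spec: str) -> set[int]:
    """Parse a cpu_shared_set string like "0,1" or "0-3,8" into a set."""
    cpus = set()
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        else:
            cpus.add(int(part))
    return cpus

def format_cpu_set(cpus: set[int]) -> str:
    """Format a set of CPU ids back into a compact range string."""
    out, run = [], []
    for cpu in sorted(cpus):
        if run and cpu == run[-1] + 1:
            run.append(cpu)
        else:
            if run:
                out.append(run)
            run = [cpu]
    if run:
        out.append(run)
    return ",".join(f"{r[0]}-{r[-1]}" if len(r) > 1 else str(r[0])
                    for r in out)

# Source host exposes 0,1; destination exposes 2,3: the domain's
# <vcpu cpuset="..."> attribute is rewritten with the destination value.
dest_cpuset = format_cpu_set(parse_cpu_set("2,3"))  # "2-3"
```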
Related-Bug: #1869804
Change-Id: I7c717503eba58088094fac05cb99b276af9a3460
Live migrating to a host with cpu_shared_set configured will now
update the VM's configuration accordingly.
Example: live migrating a VM from a source host with cpu_shared_set=0,1
to a destination host with cpu_shared_set=2,3 will now update the
VM configuration (<vcpu cpuset="0-1"> becomes <vcpu cpuset="2-3">).
This update adds a new field, dst_cpu_shared_set_info, to the
LibvirtLiveMigrateData object, which requires an increase in the
object's version. As a result, this patch cannot be backported.
Related-Bug: #1869804
Change-Id: I806da0958fe436c989e09a52ca6b6f1bbd25a865
When resizing an instance, the selected flavor may not meet the image's
minimum memory requirement. Resize previously ignored the image's
minimum memory limit, so the resize could succeed while the instance
then failed to start because its memory was too small to run the guest
system.
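The missing validation amounts to a simple comparison. This is an illustrative sketch, not Nova's actual code or exception names: reject a resize when the flavor's memory is below the image's min_ram.

```python
class NoValidFlavor(Exception):
    """Raised when a flavor cannot satisfy the image's requirements."""

def validate_resize(flavor_memory_mb: int, image_min_ram_mb: int) -> None:
    # The fix boils down to enforcing this check during resize instead
    # of silently allowing an undersized flavor.
    if flavor_memory_mb < image_min_ram_mb:
        raise NoValidFlavor(
            f"Flavor memory {flavor_memory_mb} MB is below the image "
            f"minimum of {image_min_ram_mb} MB")

validate_resize(2048, 1024)   # ok: flavor satisfies the image
# validate_resize(512, 1024)  # would raise NoValidFlavor
```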
Related-Bug: 2007968
Change-Id: I132e444eedc10b950a2fc9ed259cd6d9aa9bed65
This is an odd child, registering standard REST operations as actions
(in the '/action' API sense of the term). There's no reason for this
delineation these days so simply remove it. This makes auto-generation
much easier down the road.
Change-Id: Ia45013fc988acb9517aea42c3caa1fa45d63892e
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
Previously, live migrations completely ignored CPU power management.
This patch makes sure that we correctly:
* Power up the cores on the destination during pre_live_migration, as
we need them powered up before the instance starts on the
destination.
* If the live migration is successful, power down the vacated cores on
the source.
* In case of a rollback, power down the cores previously powered up on
pre_live_migration.
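The three bullets above describe a power-state lifecycle that can be sketched as follows. Names are illustrative, not Nova's real driver methods; the point is the ordering: destination cores come up before the instance starts there, and exactly one side powers down afterwards.

```python
class CpuPower:
    """Toy model of per-host core power state."""
    def __init__(self):
        self.online = set()
    def power_up(self, cores):
        self.online |= set(cores)
    def power_down(self, cores):
        self.online -= set(cores)

source, dest = CpuPower(), CpuPower()
instance_cores = {4, 5}
source.power_up(instance_cores)        # instance running on source

# pre_live_migration: power up on the destination first, so the cores
# are online before the instance starts running there.
dest.power_up(instance_cores)

migration_succeeded = True
if migration_succeeded:
    source.power_down(instance_cores)  # vacate the source cores
else:
    dest.power_down(instance_cores)    # rollback: undo pre_live_migration
```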
Closes-bug: 2056613
Change-Id: I787bd7807950370cd865f29b95989d489d4826d0
Building on the previous patch's refactor, we can now do functional
testing of live migration with CPU power management. We quickly notice
that it's mostly broken: the CPUs are left powered up on the source
and are not powered up on the destination.
Related-bug: 2056613
Change-Id: Ib4de77d68ceeffbc751bca3567ada72228b750af
We want to test power management in our functional tests in multinode
scenarios (ex: live migration).
This was previously impossible because all the methods in
nova.virt.libvirt.cpu.api were at the module level, meaning both
source and destination libvirt drivers would call the same method to
online and offline cores. This made it impossible to maintain distinct
core power state between source and destination.
This patch inserts a nova.virt.libvirt.cpu.api.API class, and gives
the libvirt driver a cpu_api attribute with an instance of that
class. Along with the tiny API.core() helper, this allows new
functional tests in the subsequent patches to stub out the core
"model" code with distinct objects on the source and destination
libvirt drivers, and enables a whole bunch of testing (and fixes!)
around live migration.
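The effect of the refactor can be shown in miniature. With an API instance hanging off each driver, the source and destination carry distinct core state, which module-level functions could not; the class and attribute names below are illustrative stand-ins for the real ones.

```python
class CpuAPI:
    """Per-driver core power API (stand-in for the real class)."""
    def __init__(self):
        self.online = set()
    def power_up(self, core):
        self.online.add(core)
    def power_down(self, core):
        self.online.discard(core)

class LibvirtDriver:
    def __init__(self):
        # Each driver owns its own instance, so multinode functional
        # tests can stub or inspect source and destination separately.
        self.cpu_api = CpuAPI()

source, dest = LibvirtDriver(), LibvirtDriver()
source.cpu_api.power_up(2)
dest.cpu_api.power_up(3)
```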
Related-bug: 2056613
Change-Id: I052535249b9a3e144bb68b8c588b5995eb345b97
Previously, with the `isolate` emulator threads policy and libvirt cpu
power management enabled, we did not power on the cores to which the
emulator threads were pinned. Start doing that, and don't forget to power
them down when the instance is stopped.
Closes-bug: 2056612
Change-Id: I6e5383d8a0bf3f0ed8c870754cddae4e9163b4fd
When we pin emulator threads with the `isolate` policy, those pins are
stored in the `cpuset_reserved` field in each NUMACell. In subsequent
patches we'll need those pins for the whole instance, so this patch
adds a helper property that does this for us, similar to how the
`cpu_pinning` property helper currently works.
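A minimal sketch of such a helper, under the assumption that it simply unions each cell's `cpuset_reserved` into an instance-wide set (mirroring a `cpu_pinning`-style aggregate); the class shapes here are simplified stand-ins for the real objects.

```python
class NUMACell:
    def __init__(self, cpuset_reserved=None):
        # Emulator-thread pins for this cell under the `isolate` policy.
        self.cpuset_reserved = cpuset_reserved or set()

class InstanceNUMATopology:
    def __init__(self, cells):
        self.cells = cells

    @property
    def cpuset_reserved(self):
        """Union of emulator-thread pins across all cells."""
        result = set()
        for cell in self.cells:
            result |= cell.cpuset_reserved
        return result

topo = InstanceNUMATopology([NUMACell({0}), NUMACell({8})])
assert topo.cpuset_reserved == {0, 8}
```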
Related-bug: 2056612
Change-Id: I8597f13e8089106434018b94e9bbc2091f95fee9
Sometimes a GPU may have a long list of PCI addresses (say, an SR-IOV
GPU), or operators may have a long list of GPUs. To make their lives
easier, let's allow device_addresses to be optional.
This means that a valid configuration could be:
[devices]
enabled_mdev_types = nvidia-35, nvidia-36
[mdev_nvidia-35]
[mdev_nvidia-36]
NOTE(sbauza): we have a slight coverage gap for testing what happens
if the groups aren't set, but I'll add it in a follow-up patch.
Related-Bug: #2041519
Change-Id: I73762a0295212ee003db2149d6a9cf701023464f
We want to cap the maximum number of mdevs we can create.
If one type has enough capacity, then other GPUs won't be used and
their existing ResourceProviders would be deleted.
Closes-Bug: #2041519
Change-Id: I069879a333152bb849c248b3dcb56357a11d0324
As of now, the server show and server list --long output
shows the availability zone, that is, the AZ to which the
host of the instance belongs. There is no way to tell from
this information if the instance create request included an
AZ or not.
This change adds a new API microversion to include the availability
zone requested during instance create in the server show and
server list --long responses.
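As a shape-only illustration of the response change: the existing field reports the AZ of the instance's current host, while the new field would carry what the create request asked for (null when no AZ was requested). The field name "pinned_availability_zone" below is an assumption for illustration; the real name and microversion number are defined by the actual change.

```python
# Before the microversion: only the host's AZ is exposed.
before = {"server": {"OS-EXT-AZ:availability_zone": "az1"}}

# After the microversion (hypothetical field name): the requested AZ is
# also exposed; None means the create request did not specify one.
after = {"server": {"OS-EXT-AZ:availability_zone": "az1",
                    "pinned_availability_zone": None}}
```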
Change-Id: If4cf09c1006a3f56d243b9c00712bb24d2a796d3
In general, card_serial_number will not be present on SR-IOV
VFs/PFs; it is only supported on very new cards.
Also, all three fields need not always be required for vf_profile.
Related-Bug: #2008238
Change-Id: I00b126635612ace51b5e3138afcb064f001f1901
This addresses a TODO to remove the delete attachment call from refresh
after the remove_volume_connection call: the remove volume connection
process itself deletes the attachment when the delete_attachment flag
is passed.
Bumps the RPC API version.
Change-Id: I03ec3ee3ee1eeb6563a1dd6876094a7f4423d860
cmd: nova-manage volume_attachment refresh vm-id vol-id connector
There were cases where the instance said to live in compute#1 but the
connection_info in the BDM record was for compute#2, and when the script
called `remove_volume_connection` then nova would call os-brick on
compute#1 (the wrong node) and try to detach it.
In some cases os-brick would mistakenly think that the volume was
attached (because the target and LUN matched an existing volume on the
host) and would try to disconnect, resulting in errors in the compute
logs.
- Added HostConflict exception
- Fixed dedent in cmd/manage.py
- Updated nova-manage doc
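The guard described above can be sketched as a simple host comparison. This is an illustrative version, not Nova's exact code: refuse to act when the provided connector belongs to a different host than the one the instance lives on, rather than detaching on the wrong node.

```python
class HostConflict(Exception):
    """Raised when the connector's host doesn't match the instance's."""

def check_host(instance_host: str, connector_host: str) -> None:
    # Refuse to proceed so os-brick is never invoked on the wrong node.
    if instance_host != connector_host:
        raise HostConflict(
            f"Instance is on {instance_host} but the provided connector "
            f"is for {connector_host}")

check_host("compute1", "compute1")   # ok: hosts match
# check_host("compute1", "compute2") # would raise HostConflict
```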
Closes-Bug: #2012365
Change-Id: I21109752ff1c56d3cefa58fcd36c68bf468e0a73
This adds encryption related methods and attributes to test fixtures to
enable functional testing for ephemeral encryption.
Related to blueprint ephemeral-encryption-libvirt
Change-Id: If65ec55d311ecf7fb3fe745ebbf116a430f60681