When resizing an instance, the flavors returned may not meet the
image's minimum memory requirement: resize ignores the image's
minimum memory limit, so the resize may succeed while the instance
then fails to start because the memory is too small to run the
system.
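The missing validation can be sketched as follows; this is an illustrative helper, not Nova's actual code, and the names `flavor_satisfies_image` and its parameters are made up for the example:

```python
def flavor_satisfies_image(flavor_memory_mb: int, image_min_ram_mb: int) -> bool:
    """Reject resize targets whose memory is below the image's min_ram.

    Without a check like this, resize can pick a flavor that is too
    small for the image, and the instance fails to boot afterwards.
    """
    return flavor_memory_mb >= image_min_ram_mb
```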
Related-Bug: 2007968
Change-Id: I132e444eedc10b950a2fc9ed259cd6d9aa9bed65
Previously, live migrations completely ignored CPU power management.
This patch makes sure that we correctly:
* Power up the cores on the destination during pre_live_migration, as
we need them powered up before the instance starts on the
destination.
* If the live migration is successful, power down the vacated cores on
the source.
* In case of a rollback, power down the cores previously powered up on
pre_live_migration.
Closes-bug: 2056613
Change-Id: I787bd7807950370cd865f29b95989d489d4826d0
Building on the previous patch's refactor, we can now do functional
testing of live migration with CPU power management. We quickly notice
that it's mostly broken: the CPUs are left powered up on the source,
and are not powered up on the destination.
Related-bug: 2056613
Change-Id: Ib4de77d68ceeffbc751bca3567ada72228b750af
Previously, with the `isolate` emulator threads policy and libvirt CPU
power management enabled, we did not power on the cores to which the
emulator threads were pinned. Start doing that, and don't forget to
power them down when the instance is stopped.
Closes-bug: 2056612
Change-Id: I6e5383d8a0bf3f0ed8c870754cddae4e9163b4fd
We want to cap the maximum number of mdevs we can create.
If some type has enough capacity, then the other GPUs won't be used
and their existing ResourceProviders would be deleted.
Closes-Bug: #2041519
Change-Id: I069879a333152bb849c248b3dcb56357a11d0324
As of now, the server show and server list --long output
shows the availability zone, that is, the AZ to which the
host of the instance belongs. There is no way to tell from
this information if the instance create request included an
AZ or not.
This change adds a new API microversion that includes the
availability zone requested during instance create
in the server show and server list --long responses.
Change-Id: If4cf09c1006a3f56d243b9c00712bb24d2a796d3
In general, the card_serial_number will not be present on SR-IOV
VFs/PFs; it is only supported on very new cards.
Also, all three fields need not always be required for vf_profile.
Related-Bug: #2008238
Change-Id: I00b126635612ace51b5e3138afcb064f001f1901
This adds encryption related methods and attributes to test fixtures to
enable functional testing for ephemeral encryption.
Related to blueprint ephemeral-encryption-libvirt
Change-Id: If65ec55d311ecf7fb3fe745ebbf116a430f60681
Now that the destination returns the list of mdevs needed for the
migration, we can change the XML accordingly.
Note: this is the last patch of the feature branch.
I'll work on adding mtty support in the next patches in the series,
but that's not part of the feature itself.
Change-Id: Ib448444be09df50c3db5ccda8a49bfd882c18edf
Implements: blueprint libvirt-mdev-live-migrate
Many bugs around nova-compute rebalancing are focused on problems
where the compute node and placement resources are deleted,
and sometimes never get re-created.
To limit this class of bugs, we add a check to ensure a compute
node is only ever deleted when it is known to have been deleted
in Ironic.
There is a risk this might leave orphaned compute nodes and
resource providers that need manual clean up because users
do not want to delete the node in Ironic, but are removing it
from nova management. But on balance, it seems safer to leave
these cases up to the operator to resolve manually, and collect
feedback on how to better help those users.
blueprint ironic-shards
Change-Id: I2bc77cbb77c2dd5584368563dc4250d71913906b
When people transition from three ironic nova-compute processes down
to one process, we need a way to move the ironic nodes, and any
associated instances, between nova-compute processes.
For safety, a nova-compute process must first be forced_down via
the API, similar to when using evacuate, before moving the associated
ironic nodes to another nova-compute process. The destination
nova-compute process should ideally not be running, but not forced
down.
blueprint ironic-shards
Change-Id: I33034ec77b033752797bd679c6e61cef5af0a18f
The destination looks at the source mdev types and returns its own
mdevs of the same types. We also reserve them in an internal dict,
and we make sure we can clean up this dict if the live migration aborts.
Partially-Implements: blueprint libvirt-mdev-live-migrate
Change-Id: I4a7e5292dd3df63943bd9f01803fa933e0466014
The RDP console was only for the HyperV driver, so we remove the
API. As the API URL stays the same (it is shared with the other
console type APIs), the RDP console API will now return 400.
This also cleans up the related config options and moves its
API ref to the obsolete section.
The RPC method is kept to avoid errors when an old controller is used
with a new compute. It can be removed in the next RPC version bump.
Change-Id: I8f5755009da4af0d12bda096d7a8e85fd41e1a8c
The RDP console was only available for the HyperV driver, therefore
requesting its connection information via the ``os-console-auth-tokens``
API will now return an HTTP ``400 (BadRequest)`` error.
Starting from microversion 2.31, this API returns connection info
for all other console types.
Change-Id: I94e590eb4cbe3b2d8eff7fe881f7b98af8979be2
Now that the source knows that both computes support the right
libvirt version, it passes the destination the list of mdevs it has
for the instance. With this change, we verify that the types of those
mdevs are actually supported by the destination.
In the next change, we'll pass the destination mdevs back to the
source.
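The destination-side type check can be sketched roughly as below; the function and type names are illustrative, not Nova's actual code:

```python
def check_mdev_types(source_mdev_types, dest_supported_types):
    """Fail early if the destination cannot serve a source mdev type.

    Raises ValueError listing every source mdev type that the
    destination does not support, so the migration is rejected
    before any guest state is moved.
    """
    unsupported = set(source_mdev_types) - set(dest_supported_types)
    if unsupported:
        raise ValueError(
            "Destination does not support mdev type(s): %s"
            % ", ".join(sorted(unsupported)))
```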
Partially-Implements: blueprint libvirt-mdev-live-migrate
Change-Id: Icb52fa5eb0adc0aa6106a90d87149456b39e79c2
Since only QEMU 8.1 and libvirt 8.6.0 support mdev live-migration,
we need to verify the hypervisor versions for both the source
and the destination.
If either of them is older, the conductor raises an exception that
will eventually cause the API to return an HTTP 500.
Change-Id: I17f170143c58401b8b0a5a93e83355b1f7178ab5
Partially-Implements: blueprint libvirt-mdev-live-migrate
We only need to verify that the BDM has an attachment id, and that it
is present in both the Nova and Cinder DBs.
For test coverage, added tests for a BFV server to test different BDM
source types.
Closes-Bug: 2048154
Closes-Bug: 2048184
Change-Id: Icffcbad27d99a800e3f285565c0b823f697e388c
In the previous patch we changed the ordering of operations during
post_live_migration() to minimize guest networking downtime by
activating destination host port bindings as soon as possible.
Review of that patch led to the realization that exceptions during
notification sending can prevent the port binding activation from
happening. Instead of handling that in a localized try/catch, this
patch implements a general best_effort kwarg to our two notification
sending helpers to allow callers to indicate that any exceptions
during notification sending should not be fatal.
Change-Id: I01a15d6fffe98816ae019e67dc72784299fedfd3
This change adds the pre-commit config and
tox targets to run codespell both independently
and via the pep8 target.
This change also corrects all the final typos in the
codebase as detected by codespell.
Change-Id: Ic4fb5b3a5559bc3c43aca0a39edc0885da58eaa2
Live migrating a VM with no CPU policy and no NUMA topology to a host with
cpu_shared_set configured does not update the VM's configuration accordingly.
Example: live migrating a VM from source host with cpu_shared_set=0,1 to
destination host with cpu_shared_set=2,3 will leave the VM configuration
pinned to CPUs 0,1 (<vcpu cpuset="0-1"> instead of <vcpu cpuset="2-3">).
This patch adds reproducers for live migrated instances and various
combinations of cpu_shared_set configuration.
- From a host with cpu_shared_set to a host with different cpu_shared_set.
- From a host with cpu_shared_set to a host without cpu_shared_set.
- From a host without cpu_shared_set to a host with cpu_shared_set.
This also adds the required changes to the libvirt fixture to manage
cpuset inside the vcpu tag.
Related-Bug: #1869804
Change-Id: Ib294a9d3c25b9a8548347dbe00416a55db567773
Placing 'yield' after a 'return' is dark magic, even if it's technically
allowed. Use 'yield from' instead.
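For illustration, both of these are empty generator functions; the second form makes the intent explicit:

```python
def empty_gen_old():
    # A bare 'return' followed by an unreachable 'yield': the 'yield'
    # only exists to turn this function into a generator.
    return
    yield


def empty_gen_new():
    # Delegating to an empty iterable reads as what it is.
    yield from ()
```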
Change-Id: Ie5e8befcbeeb42094d1056e00c939e384a72ceb3
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
We had indicated that we would never switch this over to SDK, however,
this is the sole remaining user of ironicclient which means users would
continue needing to install that just for this little API. Better to
switch this holdout over and finally delete all the things.
Change-Id: I880523935d73ca94c83e618f10c2e587362c53be
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>