* Update nova from branch 'master'
to dcc32981ddad1cad5f41d3715132d72ddd7de435
- ignore sphinx-lint series in git blame
This is the final patch in the sphinx-lint series
and simply extends .git-blame-ignore-revs to ignore
the series.
Change-Id: I04008b6e3a18ffeef2f3f7b83cb57444cb15528f
* Update nova from branch 'master'
to 33a56781f48d603a54bc0cd9cd4dbe94f01fb88a
- fix sphinx-lint errors in docs and add ci
This change mainly fixes incorrect use of backticks
but also addresses some other minor issues like unbalanced
backticks, incorrect spacing, or missing _ in links.
This change adds a tox target to run sphinx-lint
as well as adding it to the relevant tox envs to enforce
it in CI. pre-commit is leveraged to install and execute
sphinx-lint, but it does not require you to install the
hooks locally into your working dir.
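A pre-commit hook for sphinx-lint typically has the following shape (a sketch only: the repo URL is the real sphinx-lint project, but the rev, args, and file filter are illustrative assumptions, not taken from the actual change):

```yaml
# .pre-commit-config.yaml fragment (illustrative)
repos:
  - repo: https://github.com/sphinx-contrib/sphinx-lint
    rev: v0.9.1        # assumed pin, pick a real tag
    hooks:
      - id: sphinx-lint
        files: ^doc/   # assumed scope: only lint the docs tree
```

Running `pre-commit run sphinx-lint --all-files` from a tox env then lints without requiring the git hooks to be installed locally.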
Change-Id: Ib97b35c9014bc31876003cef4362c47a8a3a4e0e
* Update nova from branch 'master'
to c199becf52267ba37c5191f6f82e29bb5232b607
- Merge "Refactor vf profile for PCI device"
- Refactor vf profile for PCI device
In general the card_serial_number will not be present on SR-IOV
VFs/PFs; it is only supported on very new cards.
Also, all three need not always be required for vf_profile.
Related-Bug: #2008238
Change-Id: I00b126635612ace51b5e3138afcb064f001f1901
* Update nova from branch 'master'
to 1bca24aeb0323d70f053d18c61bd0b94e211f5f8
- Merge "Always delete NVRAM files when deleting instances"
- Always delete NVRAM files when deleting instances
When deleting an instance, always send VIR_DOMAIN_UNDEFINE_NVRAM to
delete the NVRAM file, regardless of whether the image is of type UEFI.
This prevents a bug when rebuilding an instance from a UEFI image to a
non-UEFI image.
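With the libvirt Python bindings, unconditionally requesting NVRAM removal on undefine looks roughly like this. A minimal sketch: the flag constants mirror libvirt's documented values so the logic reads standalone, but in real code they come from the `libvirt` module itself.

```python
# Local mirrors of libvirt's virDomainUndefineFlagsValues constants;
# real code uses libvirt.VIR_DOMAIN_UNDEFINE_NVRAM etc.
VIR_DOMAIN_UNDEFINE_MANAGED_SAVE = 1
VIR_DOMAIN_UNDEFINE_SNAPSHOTS_METADATA = 2
VIR_DOMAIN_UNDEFINE_NVRAM = 4


def undefine_flags() -> int:
    """Build undefine flags, always asking libvirt to delete NVRAM.

    Per the change above, the NVRAM flag is sent regardless of whether
    the image is of type UEFI, instead of being gated on firmware type.
    """
    flags = VIR_DOMAIN_UNDEFINE_MANAGED_SAVE
    # Always delete the NVRAM file.
    flags |= VIR_DOMAIN_UNDEFINE_NVRAM
    return flags
```

The domain would then be removed with `dom.undefineFlags(undefine_flags())`.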
Closes-Bug: #1997352
Change-Id: I24648f5b7895bf5d093f222b6c6e364becbb531f
Signed-off-by: Simon Hensel <simon.hensel@inovex.de>
* Update nova from branch 'master'
to c9d8317a76b621994f1c75eb5f1fd53a3bcad0ac
- Merge "Update min support for Dalmatian"
- Update min support for Dalmatian
Now that master is on Dalmatian, which is a non-SLURP release, we need
to bump our minimum supported version to the previous SLURP release,
which is now Caracal (and no longer Antelope).
Change-Id: I9d5150be2c131899fa2281a971bca965b8fff0b0
* Update nova from branch 'master'
to 9ebb9d1198baf22d7d1f653e5e47b0169d19953c
- Merge "reno: Update master for unmaintained/wallaby"
- reno: Update master for unmaintained/wallaby
Update the wallaby release notes configuration to build from
unmaintained/wallaby.
Change-Id: I4adad186800b900485737c6b1e88bf658f55d72d
* Update nova from branch 'master'
to 1634e073242a70cfe054325781812f40d5a216fd
- Merge "reno: Update master for unmaintained/victoria"
- reno: Update master for unmaintained/victoria
Update the victoria release notes configuration to build from
unmaintained/victoria.
Change-Id: I262dfacc85520a3d26b211b92f22abd9a105c2fd
* Update nova from branch 'master'
to 36c686dc3229e46aed9bbe8d916aad964c2e932f
- Merge "reno: Update master for unmaintained/xena"
- reno: Update master for unmaintained/xena
Update the xena release notes configuration to build from
unmaintained/xena.
Change-Id: Icc46726d7849a8b4ecdbfb913745c5acfd61ebb4
* Update nova from branch 'master'
to 6bd99eb2ea68de6783c5c6e9b2386e28960eef68
- Merge "Correctly reset instance task state in rebooting hard"
- Correctly reset instance task state in rebooting hard
When a user asks for a hard reboot of a running instance while nova-compute is
unavailable (service stopped or host down), under certain conditions the
instance may remain in the rebooting_hard task_state after
nova-compute starts again. This patch fixes that.
Closes-Bug: #1999674
Change-Id: I170e390fe4e467898a8dc7df6a446f62941d49ff
* Update nova from branch 'master'
to 3a9d8406b46fd18f556102b98938b8288c1370fe
- Merge "Update master for stable/2024.1"
- Update master for stable/2024.1
Add file to the reno documentation build to show release notes for
stable/2024.1.
Use pbr instruction to increment the minor version number
automatically so that master versions are higher than the versions on
stable/2024.1.
Sem-Ver: feature
Change-Id: Ifc9062236483c2139921ccdb2ceed197aa6b8b11
* Update nova from branch 'master'
to e61bb3cf8f23b28cb8ccfdac72eb8c43ac19fd57
- Merge "Add new nova.wsgi module"
- Add new nova.wsgi module
This allows deployment tooling to easily switch from passing a binary
path to passing a Python module path. We'll use it shortly.
Change-Id: I37393656a70d7c22dc18e7bd65f3dc515532c237
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
* Update nova from branch 'master'
to 818f0cd4a355e24ed248978512f84df9c732b883
- Merge "Remove nova.wsgi module"
- Remove nova.wsgi module
We want this module for use elsewhere. Given there's only a single
caller (nova.service) we can simply move the code to the caller.
Change-Id: I2c3887db8b3f6833bf24f5114fd955e1af590d03
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
* Update nova from branch 'master'
to 77de002d61c4bf591f43c879a07dcdd3598c0954
- Merge "Add a Caracal prelude section"
- Add a Caracal prelude section
Shamelessly copied from the cycle highlights.
Change-Id: I6fd5ce392ee07700600ccae8916cd4e6b524cbc3
* Update nova from branch 'master'
to 3e358bc37ce72a4492fa1ce2245c7b1f34d0401b
- Merge "vgpu: Allow device_addresses to not be set"
- vgpu: Allow device_addresses to not be set
Sometimes a GPU may have a long list of PCI addresses (say an SR-IOV
GPU) or operators may have a long list of GPUs. In order to make their
lives easier, let's allow device_addresses to be optional.
This means that a valid configuration could be:
[devices]
enabled_mdev_types = nvidia-35, nvidia-36
[mdev_nvidia-35]
[mdev_nvidia-36]
NOTE(sbauza): we have a slight coverage gap for testing what happens
if the groups aren't set, but I'll add it in a next patch
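A minimal sketch of why empty per-type groups are enough, using stdlib configparser. The option names follow the example configuration above; the parsing shown is illustrative, not nova's actual config loading.

```python
import configparser

# Illustrative parse of the configuration shown above: the per-type
# [mdev_*] groups exist but set no device_addresses, which is now valid.
SAMPLE = """
[devices]
enabled_mdev_types = nvidia-35, nvidia-36

[mdev_nvidia-35]

[mdev_nvidia-36]
"""

cfg = configparser.ConfigParser()
cfg.read_string(SAMPLE)

enabled = [t.strip() for t in cfg["devices"]["enabled_mdev_types"].split(",")]
# With device_addresses unset, the type is simply not restricted to
# specific PCI addresses.
addresses = {
    t: cfg.get(f"mdev_{t}", "device_addresses", fallback=None) for t in enabled
}
```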
Related-Bug: #2041519
Change-Id: I73762a0295212ee003db2149d6a9cf701023464f
* Update nova from branch 'master'
to e255323f46cdbc9fbb548d63d4d07d221770f6c7
- Merge "libvirt: Cap with max_instances GPU types"
- libvirt: Cap with max_instances GPU types
We want to cap the maximum number of mdevs we can create.
If some type has enough capacity, then other GPUs won't be used and
existing ResourceProviders would be deleted.
Closes-Bug: #2041519
Change-Id: I069879a333152bb849c248b3dcb56357a11d0324
* Update nova from branch 'master'
to 8f3976d4cc5390fe649f2ff94afc971c5d6f7bc0
- Merge "Update python classifier in setup.cfg"
- Update python classifier in setup.cfg
As per the currently tested release runtimes, we test
up to Python 3.11, so update the Python
classifiers in setup.cfg accordingly.
Change-Id: Ib6f14c4348c6fbb251c15c9982dfb2fb4b8d249c
* Update nova from branch 'master'
to a87c10afa744af699e0394707d89edbfd4ac73df
- Update compute rpc alias for caracal
This adds an alias for Caracal
Change-Id: I4a57cdac68cab4cda2a1928dd4346c9f2bca14c3
* Update nova from branch 'master'
to 45e5d213f86dba618f9460f8a860b742723b13f4
- Merge "Removed explicit call to delete attachment"
- Removed explicit call to delete attachment
This was a TODO to remove delete attachment call from refresh after
remove_volume_connection call.
Remove volume connection process itself deletes attachment on passing
delete_attachment flag.
Bumps RPC API version.
Change-Id: I03ec3ee3ee1eeb6563a1dd6876094a7f4423d860
* Update nova from branch 'master'
to ef069d928a45116da904ec3deb38cc1303e91b18
- Merge "pwr mgmt: handle live migrations correctly"
- pwr mgmt: handle live migrations correctly
Previously, live migrations completely ignored CPU power management.
This patch makes sure that we correctly:
* Power up the cores on the destination during pre_live_migration, as
we need them powered up before the instance starts on the
destination.
* If the live migration is successful, power down the vacated cores on
the source.
* In case of a rollback, power down the cores previously powered up on
pre_live_migration.
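The three cases above can be sketched as a small model over per-host core sets. The names and structure here are illustrative stand-ins, not nova's internal API.

```python
# Illustrative model of core power state on the source and destination
# hosts during a live migration with CPU power management.
powered = {"src": {0, 1}, "dst": set()}
instance_cores = {0, 1}


def pre_live_migration():
    # Power up the cores on the destination before the instance
    # starts running there.
    powered["dst"] |= instance_cores


def post_live_migration_success():
    # Migration succeeded: power down the vacated cores on the source.
    powered["src"] -= instance_cores


def rollback_live_migration():
    # Rollback: power down the cores previously powered up during
    # pre_live_migration on the destination.
    powered["dst"] -= instance_cores
```

A successful migration leaves the cores powered only on the destination; a rollback instead returns the destination to its prior state.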
Closes-bug: 2056613
Change-Id: I787bd7807950370cd865f29b95989d489d4826d0
* Update nova from branch 'master'
to b10cca028233e0d340a6dca088eb739fedb68082
- Merge "Reproducer test for live migration with power management"
- Reproducer test for live migration with power management
Building on the previous patch's refactor, we can now do functional
testing of live migration with CPU power management. We quickly notice
that it's mostly broken, leaving the CPUs powered up on the source,
and not powering them up on the dest.
Related-bug: 2056613
Change-Id: Ib4de77d68ceeffbc751bca3567ada72228b750af
* Update nova from branch 'master'
to 52a7d9cef9b8a48c5cb836faefdf73a94241feae
- Merge "pwr mgmt: make API into a per-driver object"
- pwr mgmt: make API into a per-driver object
We want to test power management in our functional tests in multinode
scenarios (ex: live migration).
This was previously impossible because all the methods in
nova.virt.libvirt.cpu.api were at the module level, meaning both
source and destination libvirt drivers would call the same methods to
online and offline cores. This made it impossible to maintain distinct
core power state between source and destination.
This patch inserts a nova.virt.libvirt.cpu.api.API class, and gives
the libvirt driver a cpu_api attribute with an instance of that
class. Along with the tiny API.core() helper, this allows new
functional tests in the subsequent patches to stub out the core
"model" code with distinct objects on the source and destination
libvirt drivers, and enables a whole bunch of testing (and fixes!)
around live migration.
Related-bug: 2056613
Change-Id: I052535249b9a3e144bb68b8c588b5995eb345b97
* Update nova from branch 'master'
to db9351ab5191e058994209464aa7fc2b2fa34561
- Merge "doc: mark the maximum microversion for 2024.1 Caracal"
- doc: mark the maximum microversion for 2024.1 Caracal
We need it for this release.
Change-Id: I17fbd9523067b0c19982499e66c314d47e9ee4bb
* Update nova from branch 'master'
to 1e35a461a62c7830ad60dff109eb8d32c6116e95
- Merge "Add service version for Caracal"
- Add service version for Caracal
We agreed on the support envelope in I2dd906f34118da02783bb7755e0d6c2a2b88eb5d.
Pre-RC1, we need to add a service version in the object.
Post-RC1, depending on whether the release is SLURP or not, we need to
bump the minimum version or not.
This patch only focuses on the pre-RC1 stage.
Given Dalmatian will be skippable, we will need a post-RC1 patch that
bumps the minimum to Caracal.
HTH.
Change-Id: I85a37f652900affaec626aa68f5f2388139a3a87
* Update nova from branch 'master'
to b59e1f8c001d79002fe61c76c6d59217626784aa
- Merge "Power on cores for isolated emulator threads"
- Power on cores for isolated emulator threads
Previously, with the `isolate` emulator threads policy and libvirt cpu
power management enabled, we did not power on the cores to which the
emulator threads were pinned. Start doing that, and don't forget to power
them down when the instance is stopped.
Closes-bug: 2056612
Change-Id: I6e5383d8a0bf3f0ed8c870754cddae4e9163b4fd
* Update nova from branch 'master'
to 671c4e03134db539aaabd266cbdc020b8b34d4df
- Merge "Reproducer for not powering on isolated emulator threads cores"
- Reproducer for not powering on isolated emulator threads cores
Related-bug: 2056612
Change-Id: Icd586cdd015143b2e113fd14904f40410809d247
* Update nova from branch 'master'
to 3cb7329ad2ffe936338f997d894bb26cda4c85b5
- Merge "Add cpuset_reserved helper to instance NUMA topology"
- Add cpuset_reserved helper to instance NUMA topology
When we pin emulator threads with the `isolate` policy, those pins are
stored in the `cpuset_reserved` field in each NUMACell. In subsequent
patches we'll need those pins for the whole instance, so this patch
adds a helper property that does this for us, similar to how the
`cpu_pinning` property helper currently works.
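A sketch of what such a helper property looks like. These are simplified stand-in classes: nova's real NUMA objects are oslo.versionedobjects, and the union-across-cells semantics shown are an assumption drawn from the description above.

```python
class NUMACell:
    """Stand-in for a per-cell NUMA object."""

    def __init__(self, cpuset_reserved=None):
        # CPUs reserved in this cell for `isolate` emulator threads.
        self.cpuset_reserved = cpuset_reserved or set()


class InstanceNUMATopology:
    """Stand-in for the instance-wide NUMA topology object."""

    def __init__(self, cells):
        self.cells = cells

    @property
    def cpuset_reserved(self):
        # Aggregate emulator-thread pins across all cells, similar to
        # how the `cpu_pinning` property aggregates per-cell pinning.
        reserved = set()
        for cell in self.cells:
            reserved |= cell.cpuset_reserved
        return reserved
```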
Related-bug: 2056612
Change-Id: I8597f13e8089106434018b94e9bbc2091f95fee9
* Update nova from branch 'master'
to 336b815a3083ea1ab167f0f3af1ac1559d11a70b
- Merge "Reproducers for bug 1869804"
- Reproducers for bug 1869804
Live migrating a VM with no CPU policy and no NUMA topology to a host with
cpu_shared_set configured will not update the VM's configuration accordingly.
Example: live migrating a VM from a source host with cpu_shared_set=0,1 to a
destination host with cpu_shared_set=2,3 will leave the VM configuration
pinned to CPUs 0,1 (<vcpu cpuset="0-1"> instead of <vcpu cpuset="2-3">).
This patch adds reproducers for live migrated instances and various
combinations of cpu_shared_set configuration.
- From a host with cpu_shared_set to a host with different cpu_shared_set.
- From a host with cpu_shared_set to a host without cpu_shared_set.
- From a host without cpu_shared_set to a host with cpu_shared_set.
This also adds the required changes to the libvirt fixture to manage
cpuset inside the vcpu tag.
Related-Bug: #1869804
Change-Id: Ib294a9d3c25b9a8548347dbe00416a55db567773
* Update nova from branch 'master'
to 13ccaf75f61a19ba11f39b63052bcef5dfbce91f
- Merge "Implement add_consumer, remove_consumer KeyManager APIs"
- Implement add_consumer, remove_consumer KeyManager APIs
These were introduced in Bobcat but later reverted [1]. Add them now in
preparation for a future major version bump of Castellan.
[1] https://review.opendev.org/c/openstack/castellan/+/895502/
Change-Id: I7565523d052d48109c7e70490c2c31b9944d2fc1
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
* Update nova from branch 'master'
to 6230018d65d1ac41c256a141afd27f73fc1dc259
- Merge "Disconnecting volume from the compute host"
- Disconnecting volume from the compute host
cmd: nova-manage volume_attachment refresh vm-id vol-id connector
There were cases where the instance was said to live on compute#1 but the
connection_info in the BDM record was for compute#2, and when the script
called `remove_volume_connection` nova would call os-brick on
compute#1 (the wrong node) and try to detach it.
In some cases os-brick would mistakenly think that the volume was
attached (because the target and LUN matched an existing volume on the
host) and would try to disconnect it, resulting in errors in the compute
logs.
- Added HostConflict exception
- Fixed dedent in cmd/manage.py
- Updated nova-manage doc
Closes-Bug: #2012365
Change-Id: I21109752ff1c56d3cefa58fcd36c68bf468e0a73
* Update nova from branch 'master'
to 2f6418d1a7694f6afa52500ccb3dc1a9e2b4a73a
- Merge "add multinode ironic shard job"
- add multinode ironic shard job
This change adds the
ironic-tempest-ipa-wholedisk-direct-tinyipa-multinode-shard
job to the periodic-weekly and experimental pipelines, replacing
the existing ironic-tempest-ipa-wholedisk-direct-tinyipa-multinode
job in the experimental pipeline.
&policies-irrelevant-files is moved to the job definitions to
make working on jobs simpler when commenting out jobs during development.
Yes, this is unrelated, but it makes our lives simpler.
Change-Id: I1ee8f7f7d0cbcfb3a8cd61b5fec201b4ba4bf671
* Update nova from branch 'master'
to 39de10777ba180e1203842ada23cba38770c5306
- Merge "Add support for showing requested az in output"
- Add support for showing requested az in output
As of now, the server show and server list --long output
shows the availability zone, that is, the AZ to which the
host of the instance belongs. There is no way to tell from
this information if the instance create request included an
AZ or not.
This change adds a new API microversion to support
including the availability zone requested during instance create
in server show and server list --long responses.
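Conceptually, a server representation then carries two AZ values: the AZ of the current host and the AZ requested at create time (null when none was requested). The requested-AZ field name below is illustrative, not confirmed from the change:

```json
{
  "server": {
    "OS-EXT-AZ:availability_zone": "az1",
    "pinned_availability_zone": "az1"
  }
}
```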
Change-Id: If4cf09c1006a3f56d243b9c00712bb24d2a796d3
* Update nova from branch 'master'
to 9675f142b02c3869b36fa619df404edb66a6ec88
- Merge "testing: Add ephemeral encryption support to fixtures"
- testing: Add ephemeral encryption support to fixtures
This adds encryption related methods and attributes to test fixtures to
enable functional testing for ephemeral encryption.
Related to blueprint ephemeral-encryption-libvirt
Change-Id: If65ec55d311ecf7fb3fe745ebbf116a430f60681
* Update nova from branch 'master'
to dac8bd2493c8706ae9ba0ab6d62fd53defd9c734
- Merge "libvirt: make <encryption> a sub element of <source>"
- libvirt: make <encryption> a sub element of <source>
For encryption of local ephemeral disks, the <encryption> XML should be
a sub element of the <source> XML element [1][2] in order for more
involved operations like live migration to work properly.
This adds generation of ephemeral <encryption> XML as a sub element of
the <source> XML.
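The resulting disk XML shape, following the libvirt domain format referenced above (paths, driver type, and the secret UUID are illustrative placeholders):

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/nova/instances/INSTANCE-UUID/disk'>
    <encryption format='luks'>
      <secret type='passphrase' uuid='SECRET-UUID'/>
    </encryption>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>
```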
This also renames the internal LibvirtConfigGuestDisk attribute for
volume encryption from "encryption" to "volume_encryption" in an effort
to clearly differentiate between volume encryption and ephemeral disk
encryption.
[1] https://libvirt.org/formatdomain.html#hard-drives-floppy-disks-cdroms
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1371022#c13
Related to blueprint ephemeral-encryption-libvirt
Change-Id: Ie4e5f2b27f7ef05f5c45b9adc1df2966e7f05e62
* Update nova from branch 'master'
to 91ec918ee7991eb4562f0df27d56578e2447a3ed
- Merge "Add hw_ephemeral_encryption_secret_uuid image property"
- Add hw_ephemeral_encryption_secret_uuid image property
If an image is encrypted, we will need to retrieve the passphrase from
the key manager service in order to create an instance from it.
This adds an image property to store the secret UUID that belongs to
the image. It will only be used to decrypt the image and will not be
used to encrypt or decrypt any other image. Nova will create a new
secret for each disk image it creates, including snapshots.
Related to blueprint ephemeral-storage-encryption
Change-Id: I01eef6adc2c8feb64e86b33392b8b4b483041e27
* Update nova from branch 'master'
to 1c903ccc8db8f011b92364b4225a3336c2a77bb3
- Merge "Fix nova-metadata-api for ovn dhcp native networks"
- Fix nova-metadata-api for ovn dhcp native networks
With the change from ml2/ovs DHCP agents to the OVN implementation
in neutron, there is no port with device_owner network:dhcp anymore.
Instead, DHCP is provided by a network:distributed port.
Closes-Bug: 2055245
Change-Id: Ibb569b9db1475b8bbd8f8722d49228182cd47f85
* Update nova from branch 'master'
to 815fcbfa6bf7487ea605f8558ddef3eaf894d605
- Merge "Add encryption support to convert_image"
- Add encryption support to convert_image
This change enables ephemeral encryption support to convert:
* encrypted source image to unencrypted destination image
* unencrypted source image to encrypted destination image
* encrypted source image to encrypted destination image
This also makes necessary changes for mypy checks to pass.
Related to blueprint ephemeral-storage-encryption
Change-Id: I9edc87006b1f7de69bc52f916f45c2cbb66abe23
* Update nova from branch 'master'
to 7275e6088edc1985de28e164cc5d223b03fe0d93
- Merge "imagebackend: Add support to libvirt_info for LUKS based encryption"
- imagebackend: Add support to libvirt_info for LUKS based encryption
Related to blueprint ephemeral-encryption-libvirt
Change-Id: I909c86ab722179efcb673b66f1f81121ab8b5f66
* Update nova from branch 'master'
to d29a9b64eeef03f5a567dd68392f35f6c0525e01
- Merge "Make compute node rebalance safer"
- Make compute node rebalance safer
Many bugs around nova-compute rebalancing are focused around
problems when the compute node and placement resources are
deleted, and sometimes they never get re-created.
To limit this class of bugs, we add a check to ensure a compute
node is only ever deleted when it is known to have been deleted
in Ironic.
There is a risk this might leave orphaned compute nodes and
resource providers that need manual clean up because users
do not want to delete the node in Ironic, but are removing it
from nova management. But on balance, it seems safer to leave
these cases up to the operator to resolve manually, and collect
feedback on how to better help those users.
blueprint ironic-shards
Change-Id: I2bc77cbb77c2dd5584368563dc4250d71913906b
* Update nova from branch 'master'
to b6dc43183160049cf5379a8bfa427587552197a5
- Merge "Add nova-manage ironic-compute-node-move"
- Add nova-manage ironic-compute-node-move
When people transition from three ironic nova-compute processes down
to one process, we need a way to move the ironic nodes, and any
associated instances, between nova-compute processes.
For safety, a nova-compute process must first be forced_down via
the API, similar to when using evacuate, before moving the associated
ironic nodes to another nova-compute process. The destination
nova-compute process should ideally not be running, but not forced
down.
blueprint ironic-shards
Change-Id: I33034ec77b033752797bd679c6e61cef5af0a18f
* Update nova from branch 'master'
to 163f6823623ffd2568fdc0987be44e778a39ded6
- Merge "Limit nodes by ironic shard key"
- Limit nodes by ironic shard key
Ironic in API 1.82 added the option for nodes to be associated with
a specific shard key. This can be used to partition up the nodes within
a single ironic conductor group into smaller sets of nodes that can
each be managed by their own nova-compute ironic service.
We add a new [ironic]shard config option to allow operators to say
which shard each nova-compute process should target.
As such, when the shard is set we ignore the peer_list setting
and always have a hash ring of one.
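The selection logic can be sketched as follows. The function and field names are illustrative, loosely following the ironic API fields mentioned above, not nova's actual implementation.

```python
# Illustrative node selection: when [ironic]shard is set, take only the
# nodes whose shard matches, and skip hash-ring peer distribution
# entirely (a "hash ring of one").
def select_nodes(nodes, conductor_group=None, shard=None):
    picked = nodes
    if conductor_group is not None:
        picked = [n for n in picked
                  if n["conductor_group"] == conductor_group]
    if shard is not None:
        picked = [n for n in picked if n.get("shard") == shard]
    return picked
```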
Also corrects an issue where [ironic]conductor_group was considered
a mutable configuration; it is not mutable, nor is shard. In any
situation where an operator changes the scope of nodes managed by a
nova compute process, a restart is required.
blueprint ironic-shards
Co-Authored-By: Jay Faulkner <jay@jvf.cc>
Change-Id: Ie0c71f7bc5a62d607ffd3134837299fee952a947
* Update nova from branch 'master'
to 6834b2da1e68ac1050fa637391f73ad792a1e769
- Merge "docs: Further clarifications to the SG doc"
- docs: Further clarifications to the SG doc
Add some notes and clarify some details:
- You don't *have* to specify an IP protocol: non-IP Ethertypes are
possible
- It is not possible to automatically create ports *without* the default
SG (nor will it ever be possible - proxy APIs are bad)
- Removing the default SG can break access to the metadata service
Change-Id: Id66a92bdfd6e1663acddca830b2a9e99ac23a758
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
* Update nova from branch 'master'
to 9c6e593144dd49e810a2cac37224ce58677a175a
- Merge "HyperV: Remove extra specs of HyperV driver"
- HyperV: Remove extra specs of HyperV driver
There are a few extra specs which are only applicable to the HyperV driver;
those are removed.
Change-Id: I9bd959fdf9938b2752c4927c5ff7daf89b5f0d38
* Update nova from branch 'master'
to a8d8e9a573f7ae91ee4d8bc4ede14ae354da9f17
- Merge "Separate OSError with ValueError"
- Separate OSError with ValueError
OSError will only be raised if the file path is not readable because of a
permission issue. With this change we will get the correct error message.
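Separating the two exceptions looks roughly like this (a sketch: the function, paths, and messages are illustrative, not nova's actual code):

```python
# Illustrative separation: a permission or I/O problem raises OSError,
# while a malformed value raises ValueError, and each now produces its
# own, accurate error message.
def read_value(path: str) -> int:
    try:
        with open(path) as f:
            return int(f.read().strip())
    except OSError as exc:
        # e.g. the path is unreadable due to a permission issue
        raise RuntimeError(f"cannot read {path}: {exc}") from exc
    except ValueError as exc:
        # the file was readable but its contents were not a number
        raise RuntimeError(f"bad value in {path}: {exc}") from exc
```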
Change-Id: Iad3b0f2ab3e6eafd9f6c98477edfa35c4cd46ee8
* Update nova from branch 'master'
to 5272c20a581440592c0d7555bdbd65b200495419
- Merge "Added context manager for instance lock"
- Added context manager for instance lock
Moved the lock and unlock instance code to a context manager.
Updated the _refresh volume attachment method to use the instance
lock context manager.
Now there will be a single request ID for the lock, refresh, and unlock
actions. Earlier, the volume_attachment refresh operation had a
unique req-id for each action.
Related-Bug: #2012365
Change-Id: I6588836c3484a26d67a5995710761f0f6b6a4c18
* Update nova from branch 'master'
to 149585bca1ee2aa7688803bc88168f4aee3af7f2
- Merge "libvirt: Configure and teardown ephemeral encryption secrets"
- libvirt: Configure and teardown ephemeral encryption secrets
This adds configuration of the default ephemeral encryption format and
sets default encryption attributes in the driver block device mapping
when needed. This includes generation of a secret passphrase when one
has not been provided.
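Generating a passphrase only when one is not provided can be sketched with stdlib `secrets`. The length and hex encoding here are assumptions for illustration, not nova's actual choices.

```python
import secrets
from typing import Optional


def ensure_passphrase(existing: Optional[str]) -> str:
    # Keep a caller-provided passphrase; otherwise generate a random one.
    if existing:
        return existing
    # 32 random bytes, hex-encoded (64 chars); an illustrative choice.
    return secrets.token_hex(32)
```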
Co-Authored-By: melanie witt <melwittt@gmail.com>
Related to blueprint ephemeral-encryption-libvirt
Change-Id: I052441076c677c0fe76a8d9421af70b0ffa1d400
* Update nova from branch 'master'
to 060445aa2f656ddbb50b2b6a952cbbbcc7d4b4a4
- Merge "Modify the mdevs in the migrate XML"
- Modify the mdevs in the migrate XML
Now that the destination returns the list of the needed mdevs for the
migration, we can change the XML.
Note: this is the last patch of the feature branch.
I'll work on adding mtty support in the next patches in the series
but that's not a feature usage.
Change-Id: Ib448444be09df50c3db5ccda8a49bfd882c18edf
Implements: blueprint libvirt-mdev-live-migrate