Now that master is on Dalmatian, which is a non-SLURP release, we need
to bump our minimum supported version to the previous SLURP release,
which is now Caracal (and no longer Antelope).
Change-Id: I9d5150be2c131899fa2281a971bca965b8fff0b0
We agreed on the support envelope in
I2dd906f34118da02783bb7755e0d6c2a2b88eb5d.
Pre-RC1, we need to add a new service version to the object.
Post-RC1, depending on whether the release is a SLURP release or not,
we may or may not need to bump the minimum supported version.
This patch only covers the pre-RC1 stage.
Given Dalmatian will be skippable, we will need a post-RC1 patch that
updates the minimum version by bumping it to Caracal.
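For illustration, the pre-RC1 step usually amounts to appending a new
entry to the service version history in nova/objects/service.py,
roughly like this (version numbers below are placeholders, not the
actual values in this patch):

SERVICE_VERSION = 67  # placeholder value, bumped by one

SERVICE_VERSION_HISTORY = (
    # ... earlier entries elided ...
    # Version 66: previous entry (placeholder)
    {'compute_rpc': '6.2'},
    # Version 67: Dalmatian pre-RC1 bump, no RPC change (placeholder)
    {'compute_rpc': '6.2'},
)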
HTH.
Change-Id: I85a37f652900affaec626aa68f5f2388139a3a87
Previously, live migrations completely ignored CPU power management.
This patch makes sure that we correctly:
* Power up the cores on the destination during pre_live_migration, as
we need them powered up before the instance starts on the
destination.
* If the live migration is successful, power down the vacated cores on
the source.
* In case of a rollback, power down the cores previously powered up
  during pre_live_migration.
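A minimal sketch of where the calls land (the driver method names are
the existing live migration hooks; the cpu_api helper method names are
assumptions and the signatures are simplified):

class LibvirtDriverSketch(object):

    def pre_live_migration(self, context, instance):
        # Destination host: cores must be online before the guest
        # starts running here.
        self.cpu_api.power_up_for_instance(instance)

    def post_live_migration_at_source(self, context, instance):
        # Source host: the migration succeeded, so power down the
        # vacated cores.
        self.cpu_api.power_down_for_instance(instance)

    def rollback_live_migration_at_destination(self, context, instance):
        # Destination host: the migration failed, so undo the power up
        # done in pre_live_migration.
        self.cpu_api.power_down_for_instance(instance)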
Closes-bug: 2056613
Change-Id: I787bd7807950370cd865f29b95989d489d4826d0
Building on the previous patch's refactor, we can now do functional
testing of live migration with CPU power management. We quickly notice
that it's mostly broken, leaving the CPUs powered up on the source and
not powering them up on the destination.
Related-bug: 2056613
Change-Id: Ib4de77d68ceeffbc751bca3567ada72228b750af
We want to test power management in our functional tests in multinode
scenarios (ex: live migration).
This was previously impossible because all the methods in
nova.virt.libvirt.cpu.api were at the module level, meaning both the
source and destination libvirt drivers would call the same methods to
online and offline cores. This made it impossible to maintain distinct
core power state between source and destination.
This patch introduces a nova.virt.libvirt.cpu.api.API class and gives
the libvirt driver a cpu_api attribute holding an instance of that
class. Along with the tiny API.core() helper, this allows the new
functional tests in subsequent patches to stub out the core
"model" code with distinct objects on the source and destination
libvirt drivers, and enables a whole bunch of testing (and fixes!)
around live migration.
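A simplified sketch of the new shape (the real code drives the cores
through sysfs; names other than API, core() and cpu_api are
assumptions):

class Core(object):
    def __init__(self, ident):
        self.ident = ident
        self.online = True


class API(object):
    def core(self, i):
        # Tiny helper the functional tests can stub out, so the source
        # and destination drivers each get distinct core "model"
        # objects.
        return Core(i)

    def power_up(self, core_ids):
        for i in core_ids:
            self.core(i).online = True

    def power_down(self, core_ids):
        for i in core_ids:
            self.core(i).online = False


# The libvirt driver then carries its own instance:
#     self.cpu_api = cpu_api.API()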
Related-bug: 2056613
Change-Id: I052535249b9a3e144bb68b8c588b5995eb345b97
Previously, with the `isolate` emulator threads policy and libvirt cpu
power management enabled, we did not power on the cores to which the
emulator threads were pinned. Start doing that, and don't forget to
power them down when the instance is stopped.
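A hedged sketch of the cores now taken into account when powering up
at instance start (and, mirrored, powering down at stop); the helper
name is made up for illustration:

def _cores_to_power_manage(numa_topology):
    cores = set()
    for cell in numa_topology.cells:
        # Pinned vCPU cores, as before.
        cores |= set((cell.cpu_pinning or {}).values())
        # New: cores reserved for the isolated emulator threads.
        cores |= set(cell.cpuset_reserved or set())
    return cores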
Closes-bug: 2056612
Change-Id: I6e5383d8a0bf3f0ed8c870754cddae4e9163b4fd
When we pin emulator threads with the `isolate` policy, those pins are
stored in the `cpuset_reserved` field in each NUMACell. In subsequent
patches we'll need those pins for the whole instance, so this patch
adds a helper property that collects them for us, similar to how the
`cpu_pinning` property helper currently works.
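A sketch of the helper, mirroring the existing cpu_pinning property
(details assumed):

class InstanceNUMATopologySketch(object):

    @property
    def cpuset_reserved(self):
        # Union of the per-cell emulator thread pins.
        cpuset = set()
        for cell in self.cells:
            cpuset |= set(cell.cpuset_reserved or set())
        return cpuset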
Related-bug: 2056612
Change-Id: I8597f13e8089106434018b94e9bbc2091f95fee9
Sometimes a GPU may have a long list of PCI addresses (say, an SR-IOV
GPU) or operators may have a long list of GPUs. In order to make their
lives easier, let's allow device_addresses to be optional.
This means that a valid configuration could be:
[devices]
enabled_mdev_types = nvidia-35, nvidia-36
[mdev_nvidia-35]
[mdev_nvidia-36]
NOTE(sbauza): we have a slight coverage gap for testing what happens
if the groups aren't set, but I'll add it in a follow-up patch.
Related-Bug: #2041519
Change-Id: I73762a0295212ee003db2149d6a9cf701023464f
We want to cap the maximum number of mdevs we can create.
If some type has enough capacity, then other GPUs won't be used and
their existing ResourceProviders will be deleted.
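For illustration, a capped configuration could look like this (the
per-type option name below is an assumption for the sketch):

[devices]
enabled_mdev_types = nvidia-35

[mdev_nvidia-35]
max_instances = 16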
Closes-Bug: #2041519
Change-Id: I069879a333152bb849c248b3dcb56357a11d0324
As of now, the server show and server list --long output shows the
availability zone, that is, the AZ to which the host of the instance
belongs. There is no way to tell from this information whether the
instance create request included an AZ or not.
This change adds a new API microversion that includes the availability
zone requested during instance create in the server show and server
list --long responses.
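For illustration, a server show response with the new microversion
could carry the requested AZ next to the existing host-derived one
(the new field name below is an assumption; the exact key is defined
by the microversion):

"server": {
    "OS-EXT-AZ:availability_zone": "nova",
    "pinned_availability_zone": "az1",
    ...
}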
Change-Id: If4cf09c1006a3f56d243b9c00712bb24d2a796d3
This addresses a TODO to remove the attachment delete call from the
refresh flow after the remove_volume_connection call: the remove
volume connection process itself deletes the attachment when the
delete_attachment flag is passed.
This bumps the RPC API version.
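A hedged sketch of the simplification on the caller side (the
delete_attachment flag is from this patch; the exact RPC signature is
simplified here):

# Before (sketch): two steps, remove the connection, then delete the
# attachment separately.
#     rpcapi.remove_volume_connection(ctxt, instance, volume_id, host)
#     volume_api.attachment_delete(ctxt, attachment_id)

# After (sketch): one call, the remove volume connection flow deletes
# the attachment itself.
rpcapi.remove_volume_connection(
    ctxt, instance, volume_id, host, delete_attachment=True)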
Change-Id: I03ec3ee3ee1eeb6563a1dd6876094a7f4423d860
When running `nova-manage volume_attachment refresh <vm-id> <vol-id>
<connector>`, there were cases where the instance was said to live on
compute#1 but the connection_info in the BDM record was for compute#2,
and when the script called `remove_volume_connection` nova would call
os-brick on compute#1 (the wrong node) and try to detach the volume.
In some cases os-brick would mistakenly think that the volume was
attached (because the target and LUN matched an existing volume on the
host) and would try to disconnect it, resulting in errors in the
compute logs.
- Added a HostConflict exception
- Fixed a dedent in cmd/manage.py
- Updated the nova-manage doc
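A minimal sketch of the new guard (HostConflict is the exception added
here; the surrounding names are assumptions):

from nova import exception


def _check_connector_host(instance, connector):
    # Refuse to refresh when the connector belongs to a different host
    # than the one the instance currently runs on.
    connector_host = connector.get('host')
    if connector_host != instance.host:
        raise exception.HostConflict(
            'Connector host %s does not match instance host %s'
            % (connector_host, instance.host))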
Closes-Bug: #2012365
Change-Id: I21109752ff1c56d3cefa58fcd36c68bf468e0a73
This adds encryption-related methods and attributes to test fixtures
to enable functional testing of ephemeral encryption.
Related to blueprint ephemeral-encryption-libvirt
Change-Id: If65ec55d311ecf7fb3fe745ebbf116a430f60681
With the change from the ml2/ovs DHCP agents to the OVN implementation
in neutron, there is no port with device_owner network:dhcp anymore.
Instead, DHCP is provided by a network:distributed port.
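A hedged sketch of the kind of lookup this changes (helper name and
structure assumed):

# Accept the OVN-style owner as well as the legacy agent-based one
# when looking for the port that provides DHCP on a network.
DHCP_OWNERS = ('network:dhcp', 'network:distributed')


def _find_dhcp_port(ports):
    for port in ports:
        if port.get('device_owner') in DHCP_OWNERS:
            return port
    return None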
Closes-Bug: 2055245
Change-Id: Ibb569b9db1475b8bbd8f8722d49228182cd47f85