Add a file to the reno documentation build to show release notes for
stable/2024.1.
Use the pbr Sem-Ver instruction to increment the minor version number
automatically so that master versions are higher than the versions on
stable/2024.1.
Sem-Ver: feature
Change-Id: Ifc9062236483c2139921ccdb2ceed197aa6b8b11
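For reference, a reno stable branch page is a small reStructuredText
file; a minimal sketch (the file name releasenotes/source/2024.1.rst is
assumed) looks like:

    ===========================
    2024.1 Series Release Notes
    ===========================

    .. release-notes::
       :branch: stable/2024.1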
As of now, the server show and server list --long output
shows the availability zone, that is, the AZ to which the
host of the instance belongs. There is no way to tell from
this information if the instance create request included an
AZ or not.
This change adds a new API microversion to add support for
including the availability zone requested during instance create
in the server show and server list --long responses.
Change-Id: If4cf09c1006a3f56d243b9c00712bb24d2a796d3
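Illustrative response shape only (the exact microversion number and
field name are per the merged API change; "pinned_availability_zone"
below is indicative, not authoritative):

    GET /servers/{server_id}
    {
        "server": {
            "availability_zone": "az-1",
            "pinned_availability_zone": "az-1",
            ...
        }
    }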
With the change from ML2/OVS DHCP agents to the OVN implementation
in neutron, there is no longer a port with device_owner network:dhcp.
Instead, DHCP is provided by a network:distributed port.
Closes-Bug: 2055245
Change-Id: Ibb569b9db1475b8bbd8f8722d49228182cd47f85
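A minimal sketch of what consuming code has to account for (the helper
and port list are hypothetical, not actual nova code):

    # Hypothetical sketch: a DHCP port may be owned by the ML2/OVS DHCP
    # agent (network:dhcp) or by OVN (network:distributed).
    DHCP_DEVICE_OWNERS = ('network:dhcp', 'network:distributed')

    def get_dhcp_ports(ports):
        # 'ports' is a list of neutron port dicts
        return [p for p in ports
                if p['device_owner'] in DHCP_DEVICE_OWNERS]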
Now that the destination returns the list of the mdevs needed for the
migration, we can change the XML.
Note: this is the last patch of the feature branch.
I'll work on adding mtty support in the next patches in the series,
but that is for testing rather than feature usage.
Change-Id: Ib448444be09df50c3db5ccda8a49bfd882c18edf
Implements: blueprint libvirt-mdev-live-migrate
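For context, the guest XML element being rewritten is the mdev hostdev,
roughly of this shape (UUID elided; the migration updates it to point
at the mdev reserved on the destination):

    <hostdev mode='subsystem' type='mdev' model='vfio-pci'>
      <source>
        <address uuid='...'/>
      </source>
    </hostdev>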
With Libvirt v8.7.0+, the <maxphysaddr> sub-element
of the <cpu> element specifies the number of vCPU
physical address bits [1].
[1] https://libvirt.org/news.html#v8-7-0-2022-09-01
New flavor extra_specs and image properties are added to
control the physical address bits of vCPUs in Libvirt guests.
The nova-scheduler requests COMPUTE_ADDRESS_SPACE_* traits
based on them. The traits are already defined in os-traits
v2.10.0. Numerical comparisons are also performed in both the
compute capabilities filter and the image properties filter.
blueprint: libvirt-maxphysaddr-support-caracal
Change-Id: I98968f6ef1621c9fb4f682c119038e26d62ce381
Signed-off-by: Nobuhiro MIKI <nmiki@yahoo-corp.jp>
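As an illustration, the new knobs and the resulting guest XML look
roughly like this (extra spec names follow the blueprint; treat the
values as indicative):

    openstack flavor set my-flavor \
        --property hw:maxphysaddr_mode=emulate \
        --property hw:maxphysaddr_bits=42

    <cpu mode='host-model'>
      <maxphysaddr mode='emulate' bits='42'/>
    </cpu>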
When people transition from three ironic nova-compute processes down
to one process, we need a way to move the ironic nodes, and any
associated instances, between nova-compute processes.
For safety, a nova-compute process must first be forced_down via
the API, similar to when using evacuate, before moving the associated
ironic nodes to another nova-compute process. The destination
nova-compute process should ideally not be running, but not forced
down.
blueprint ironic-shards
Change-Id: I33034ec77b033752797bd679c6e61cef5af0a18f
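For example, forcing the source service down first uses the existing
API/CLI (host name illustrative):

    # mark the source nova-compute as forced-down before moving its
    # ironic nodes (requires a recent compute API microversion)
    openstack compute service set --down ironic-compute-1 nova-compute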
Ironic API 1.82 added the option for nodes to be associated with
a specific shard key. This can be used to partition the nodes within
a single ironic conductor group into smaller sets of nodes that can
each be managed by their own nova-compute ironic service.
We add a new [ironic]shard config option to allow operators to say
which shard each nova-compute process should target.
As such, when the shard is set we ignore the peer_list setting
and always have a hash ring of one.
This also corrects an issue where [ironic]conductor_group was
considered a mutable configuration; it is not mutable, nor is shard.
In any situation where an operator changes the scope of nodes managed
by a nova-compute process, a restart is required.
blueprint ironic-shards
Co-Authored-By: Jay Faulkner <jay@jvf.cc>
Change-Id: Ie0c71f7bc5a62d607ffd3134837299fee952a947
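A minimal nova.conf sketch (values illustrative):

    [ironic]
    # each nova-compute process targets exactly one shard; when set,
    # peer_list is ignored and the hash ring has a single member
    shard = shard-1
    # not mutable: changing this requires a service restart
    conductor_group = group-a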
There are a few extra specs which are only applicable to the HyperV
driver, so those are removed.
Change-Id: I9bd959fdf9938b2752c4927c5ff7daf89b5f0d38
The RDP console was only for the HyperV driver, so the API is being
removed. As the API URL stays the same (because it is shared with the
other console type APIs), the RDP console API will now return 400.
This also cleans up the related config options and moves the API ref
to the obsolete section.
The RPC method is kept to avoid errors when an old controller is used
with a new compute. It can be removed in the next RPC version bump.
Change-Id: I8f5755009da4af0d12bda096d7a8e85fd41e1a8c
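Illustrative request that now fails (protocol/type values as used by
the pre-removal remote consoles API):

    POST /servers/{server_id}/remote-consoles
    {"remote_console": {"protocol": "rdp", "type": "rdp-html5"}}

    HTTP/1.1 400 Bad Request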
The RDP console was only available for the HyperV driver, therefore
requesting its connection information via the ``os-console-auth-tokens``
API will now return an HTTP ``400 (BadRequest)`` error.
Starting from the 2.31 microversion, this API returns connection info
for all other console types.
Change-Id: I94e590eb4cbe3b2d8eff7fe881f7b98af8979be2
The Nova Hyper-V driver is not tested in OpenStack upstream and has no
maintainers. The driver was marked as deprecated in the Antelope
release. It depends on the OpenStack Winstacker project, which has
been retired[1].
As discussed in the vPTG[2], this removes the HyperV driver, its
tests, and its config options.
[1] https://review.opendev.org/c/openstack/governance/+/886880
[2] https://etherpad.opendev.org/p/nova-caracal-ptg#L301
Change-Id: I568c79bae9b9736a20c367096d748c730ed59f0e
list_instances and list_instance_uuids, as written in the Ironic driver,
do not currently respect conductor_group partitioning. Given that a nova
compute is intended to limit its scope of work to the conductor group
it is configured to work with, this is a bug.
Additionally, this should be a significant performance boost for a
couple of reasons: first, instead of calling the Ironic API and getting
all nodes rather than the subset (when using a conductor group), we're
now properly getting the subset of nodes -- this is the optimized path
in the Ironic DB and API code. Second, we're now using the driver's
node cache to respond to these requests. Since list_instances and
list_instance_uuids are used by periodic tasks, having these operate
on data that may be slightly stale should have minimal impact compared
to the performance benefits.
Closes-bug: #2043036
Change-Id: If31158e3269e5e06848c29294fdaa147beedb5a5
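A rough sketch of the cache-backed approach (names and cache structure
are illustrative, not the actual driver code):

    # Illustrative only: answer list_instance_uuids from the driver's
    # node cache, which is already limited to the nodes this service
    # manages, instead of listing every node via the Ironic API.
    def list_instance_uuids(self):
        self._refresh_cache()
        return [node.instance_uuid
                for node in self.node_cache.values()
                if node.instance_uuid]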
This option was deprecated in favor of the HTTPProxyToWSGI middleware
in the 26.0.0 release[1].
[1] cf906cdcc2
Related-Bug: #1967686
Change-Id: Iad8880127531dc2788d646f8a05b5c17fd9d0969
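The middleware replacement is configured through oslo.middleware,
roughly:

    [oslo_middleware]
    # have HTTPProxyToWSGI parse X-Forwarded-* headers from the proxy
    enable_proxy_headers_parsing = True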
In I008841988547573878c4e06e82f0fa55084e51b5 we started enabling a
bunch of libvirt enlightenments for Windows unconditionally. Turns
out, the `evmcs` enlightenment only works on Intel hosts, and we broke
the ability to run Windows guests on AMD machines. Until we become
smarter about conditionally enabling evmcs (with something like traits
for host CPU features), just stop enabling it entirely.
Change-Id: I2ff4fdecd9dc69de283f0e52e07df1aeaf0a9048
Closes-bug: 2009280
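For reference, this is the guest XML enlightenment that is no longer
emitted:

    <features>
      <hyperv>
        <evmcs state='on'/>
      </hyperv>
    </features>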
This allows deployment tooling to easily switch from passing a binary
path to passing a Python module path. We'll use it shortly.
Change-Id: I37393656a70d7c22dc18e7bd65f3dc515532c237
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
Libvirt 8.0.0 implemented the capability to expose the maximum number
of SEV guests and SEV-ES guests[1][2]. This allows nova to detect the
maximum number of memory-encrypted guests using that feature.
The detection is not used if the [libvirt] num_memory_encrypted_guests
option is set, to preserve the current behavior.
Note that nova currently supports only SEV and does not support SEV-ES,
so this implementation only uses the maximum number of SEV guests.
The maximum number of SEV-ES guests will be used if we implement
support for SEV-ES.
[1] 34cb8f6fcd
[2] 7826148a72
Implements: blueprint libvirt-detect-sev-max-guests
Change-Id: I502e1713add7e6a1eb11ecce0cc2b5eb6a14527a
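Operators who want to keep explicit control can still pin the limit,
which disables the detection (value illustrative):

    [libvirt]
    num_memory_encrypted_guests = 100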
The CPU power management feature of the libvirt driver, enabled with
[libvirt]cpu_power_management, only manages dedicated CPUs and does
not touch shared CPUs. Today nova-compute refuses to start if
configured with [libvirt]cpu_power_management=true and
[compute]cpu_dedicated_set=None.
While this is not functionally limiting, it prevents enabling power
management and defining the cpu_dedicated_set independently. E.g.
there might be a need to enable the former in the whole cloud in a
single step, while not all nodes of the cloud will have dedicated
CPUs configured.
This patch removes the strict config check. The implementation already
handles each PCPU individually, so if there is an empty list of PCPUs
it does nothing.
Closes-Bug: #2043707
Change-Id: Ib070e1042c0526f5875e34fa4f0d569590ec2514
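After this change a config like the following is accepted and is
effectively a no-op until cpu_dedicated_set is defined (sketch):

    [libvirt]
    cpu_power_management = True

    [compute]
    # intentionally unset: no PCPUs to manage yet, so power management
    # does nothing on this node
    #cpu_dedicated_set = 4-15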