This was a great tool, but it appears to be no longer maintained: there
have been no commits since late 2021 [1]. Remove it, capturing static copies of the
SVGs it was generating so we don't lose information. The original
"source" is retained in case we ever want to revive our use of the tool
but that seems unlikely at this point.
[1] https://github.com/blockdiag/blockdiag
Change-Id: Ie3f89730128fdb8beca8bb02312d11516affcbbc
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
This moves the completed rocky specs to the implemented
directory, adds the redirects, and removes the rocky-template
symlink from the approved directory and docs.
This was done using:
$ tox -e move-implemented-specs -- rocky
Change-Id: I298d6d106f392b662930be97bf88637f1056c0d8
This spec aims to address the behavioural changes required to support
basic nova operations, such as listing instances and services, when a
cell goes down.
APIImpact
Change-Id: I6eaa6ce26b86099ad623374c64ce214ec65a7d12
Implements: blueprint handling-down-cell
Nothing in the metadata uses CIDR notation.
For consistency, it is better to use a list of dicts with address/netmask fields.
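A minimal sketch of the two representations, assuming illustrative key names (the exact keys in the final metadata format may differ):

```python
import ipaddress

# What the spec used before: a bare CIDR string.
cidr_style = {"subnets": ["192.168.1.0/24"]}

# Proposed representation: a list of dicts with address/netmask, matching
# how addresses appear elsewhere in the metadata.
consistent_style = {
    "subnets": [
        {"address": "192.168.1.0", "netmask": "255.255.255.0"},
    ],
}

def cidr_to_dict(cidr):
    """Convert a CIDR string into the address/netmask dict form."""
    net = ipaddress.ip_network(cidr)
    return {"address": str(net.network_address), "netmask": str(net.netmask)}
```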
Change-Id: I66a5cd523f42761d06f32949dfc02c202e31ba9f
Virt drivers need to be able to change the structure of the provider
trees they expose. When moving existing resources, existing allocations
need to be moved along with the inventories. And this must be done in
such a way as to avoid races where a second entity can create or remove
allocations against the moving inventories.
Change-Id: I1508c8e12c75b24ec9da04468b700b60f055ec24
blueprint: reshape-provider-tree
The original spec change mixed two distinct API paths.
This change fixes the spec to address each path separately.
As a result, the responses for the following server API paths are corrected:
- GET /servers/detail
- GET /servers/{server_id}
- GET /servers/{server_id}/ips
- PUT /servers/{server_id}
- POST /servers/{server_id}/action (rebuild)
APIImpact
Spec for blueprint servers-ips-non-unique-network-names
Partial-Bug: #1708316
Change-Id: I6a0ca905996a50a6953e3932cf2636804d53a407
Signed-off-by: Maciej Kucia <maciej@kucia.net>
blueprint enhanced-kvm-storage-qos
QEMU 1.7 provides options to specify maximum burst IOPS and maximum burst
bandwidth per disk. This version of QEMU also allows the disk I/O size
to be specified.
At the moment, the Nova libvirt driver does not support setting storage
burst IOPS limits. For this reason, some instances might exhaust
storage resources, impacting other tenants and applications.
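A hedged sketch of the kind of flavor extra specs such a feature implies. The names below are modeled on nova's existing quota:disk_* specs; the *_max variants are assumptions for illustration, not the final interface:

```python
# Illustrative only: burst limits layered on top of sustained limits.
extra_specs = {
    "quota:disk_total_iops_sec": "500",          # sustained IOPS limit
    "quota:disk_total_iops_sec_max": "2000",     # burst IOPS ceiling (assumed name)
    "quota:disk_total_bytes_sec_max": str(100 * 1024 * 1024),  # burst bandwidth (assumed name)
}
```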
Change-Id: Ic51231e3c0d5d01f479d1ca630abdfcd74ce6a07
Metadata can be consumed from both the metadata service and the
configuration drive. The spec should refer to the generic term
"metadata" so that both mechanisms are implied.
blueprint multiple-fixed-ips-network-information
Change-Id: If877de77793d5be35ddd2ffc41c7817760b67bfb
The example was missing the new per-network services field
introduced in a previous change. [1]
[1] I003a25b0d60cb6cd16c3ee1ad1a43910825622be
Change-Id: Ib05aea87fb14f1363007d54e0ceceab9bf2c2b12
Introduce support for multiple fixed IPs in the network information found
in the metadata service. The network information currently only considers
the first fixed IP, even if an instance has more than one fixed IP per port.
NOTE: The current and proposed formats do not fully address multiple subnets.
Each subnet can have its own routes and default route, which creates challenges:
* Should all routes be merged in the same "routes" attribute?
* What should be the default route if more than one subnet provides
a default route? Should we provide all of them?
* What logic should an in-guest agent use to determine the default route?
* Are there use cases where one default route should be used over the others?
I'm thinking about NFV or other specialized use cases.
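A sketch of per-port network metadata that carries every fixed IP rather than just the first. The key names follow the spec's address/netmask convention and are assumptions, not the final format:

```python
# Illustrative only: one port with two fixed IPs from different subnets.
port_network_info = {
    "link": "tap1a2b3c4d-5e",  # placeholder device name
    "ip_addresses": [
        {
            "address": "192.168.1.10",
            "netmask": "255.255.255.0",
            # first subnet supplies a default route
            "routes": [{"network": "0.0.0.0", "netmask": "0.0.0.0",
                        "gateway": "192.168.1.1"}],
        },
        {
            "address": "10.0.0.10",
            "netmask": "255.255.255.0",
            "routes": [],  # second fixed IP, previously dropped entirely
        },
    ],
}
```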
blueprint multiple-fixed-ips-network-information
Change-Id: I135e40edc8850d1dfb352f6b7abd841a4f516888
This spec proposes adding a compute API microversion to allow
creating multiple servers in the same request using the same
multiattach volume and to allow the user to specify the attach
mode when attaching volumes to a server.
APIImpact
Spec for blueprint volume-multiattach-enhancements
Change-Id: If75f0b0a19411b553e04643c37921ef4958933f2
This spec describes changes that would allow Nova to perform
certificate validation when verifying Glance image signatures.
While image signing ensures that image data is obtained
unmodified from Glance, it does not prevent an attacker from
uploading and signing a malicious image. The addition of Nova
API changes allows Nova users to control the certificates
which are allowed to sign images.
This spec describes work related to image verification. For
more information, see: https://review.openstack.org/#/c/343654
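A hypothetical illustration of what such an API change could look like at server-create time; the trusted_image_certificates attribute name and the placeholder UUIDs are assumptions for illustration only:

```python
# Illustrative only: the user supplies the certificate IDs that are
# allowed to have signed the image being booted.
server_create = {
    "server": {
        "name": "secure-vm",
        "imageRef": "00000000-0000-0000-0000-0000000000aa",  # placeholder
        "trusted_image_certificates": [  # assumed attribute name
            "00000000-0000-0000-0000-0000000000b1",
            "00000000-0000-0000-0000-0000000000b2",
        ],
    }
}
```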
APIImpact
Change-Id: I47aefdb8a8135ab5c1c49a764f468035a0afb8ab
Previously-approved: Rocky
Updates the add-consumer-generation spec to focus strictly on the
behaviour changes expected in the placement API, and removes the
information about changing the consumers.project_id and
consumers.user_id fields to be NULLable.
Change-Id: Iccd0c3214354e7c0d73318a3f728ea321ad32afa
blueprint: add-consumer-generation
Spec for blueprint libvirt-file-backed-memory
https://blueprints.launchpad.net/nova/+spec/libvirt-file-backed-memory
With the advent of large-capacity memory devices, it is now reasonable to run
virtual machines with file-backed memory. This enables a much larger total
memory area per compute node.
Change-Id: I99304f6c5f9c436efe25d0ab016c5d50e90f3e22
Signed-off-by: Zack Cornelius <zack.cornelius@kove.net>
Currently, we delete an ironic node's resource provider inventory from
placement to indicate that the node is not available for deployment during
the cleaning process or when in maintenance. This is not actually required:
we can instead reserve all of the inventory, which more accurately describes
what is going on. Deleting the inventory also introduces issues like
https://bugs.launchpad.net/nova/+bug/1771577.
APIImpact
blueprint: allow-reserved-equal-total-inventory
Change-Id: I0aec5e9f0a26430a9b104159c62ba1d07944ce9f
The scheduler needs to provide some mechanism to control
the last of these by limiting the proportion of a group
of related VMs that are scheduled on the same host.
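A hedged sketch of the kind of server-group request this implies: an anti-affinity policy relaxed by a rule limiting how many group members may land on one host. Field names may differ from the final API:

```python
# Illustrative only: at most 2 members of the group per compute host.
server_group_request = {
    "server_group": {
        "name": "my-group",
        "policy": "anti-affinity",
        "rules": {"max_server_per_host": 2},  # assumed rule name
    }
}
```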
APIImpact
blueprint: complex-anti-affinity-policies
Change-Id: I2b71f27ce11f21aaa5eb4895c0fa57fb6829a705
This spec proposes adding new resource classes representing network
bandwidth, modeling network backends as resource providers in
placement, and adding scheduling support for the new resources
in Nova.
blueprint bandwidth-resource-provider
Co-Authored-By: Rodolfo Alonso Hernandez <rodolfo.alonso.hernandez@intel.com>
Co-Authored-By: Bence Romsics <bence.romsics@ericsson.com>
Change-Id: Ie7be551f4f03957ade9beb64457736f400560486
Add a paragraph describing how we can handle rebuild of instances
while changing either the image or the image traits.
Change-Id: If6f38c62e67c7d977da815202b93356644bcf2d4
blueprint: glance-image-traits
This spec aims to extend the placement ``GET /allocation_candidates``
API to include the inventories of all resource classes of the entire
tree in the ``provider_summaries`` field of the response body.
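An illustrative response fragment, assuming placeholder UUIDs and values: under this proposal ``provider_summaries`` reports the inventories of all resource classes of every provider in the tree, not only the requested ones:

```python
# Illustrative only: a root compute node and one child provider.
provider_summaries = {
    "a9b2c3d4-0000-0000-0000-000000000001": {  # compute node (placeholder UUID)
        "resources": {
            "VCPU": {"capacity": 64, "used": 8},
            "MEMORY_MB": {"capacity": 131072, "used": 4096},
        },
    },
    "a9b2c3d4-0000-0000-0000-000000000002": {  # child device provider (placeholder)
        "resources": {
            "SRIOV_NET_VF": {"capacity": 8, "used": 1},
        },
    },
}
```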
Change-Id: I7992e16dd63cc2c5d22c3c3ce58adae9ad741569
Blueprint: placement-return-all-resources
It is important to note that there are some differences between the
blueprint as proposed and the spec as accepted:
- The approved spec indicates that the option introduced will not be
'CONF.overhead_pin_set' but 'CONF.cpu_shared_set'.
- Also, instead of introducing a new HOST policy for
hw:emulator_threads_policy, the approved spec indicates that the
already-defined SHARE policy, in conjunction with CONF.cpu_shared_set,
will make the guest's emulator threads float over the whole set of
pCPUs defined by CONF.cpu_shared_set.
Related-to: bp overhead-pin-set
Change-Id: Ia6a7be7e3abb0cd27752fdef3721a748848b3747
Signed-off-by: Sahid Orentino Ferdjaoui <sahid.ferdjaoui@redhat.com>
After much discussion [1], we have agreed to implement both
"anti-affinity" and "any fit" in the granular request syntax by
requiring a `group_policy` queryparam when more than one numbered
request group is specified. The queryparam may have one of two values:
`isolate`: Different numbered request groups will be satisfied by
different providers (forced anti-affinity).
`none`: Different numbered request groups may be satisfied by different
providers or common providers ("any fit").
[1] http://lists.openstack.org/pipermail/openstack-dev/2018-April/129477.html
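A minimal sketch of building such a request, assuming illustrative resource values; with more than one numbered request group, group_policy becomes required:

```python
from urllib.parse import urlencode

# Two numbered request groups plus the now-required group_policy.
params = [
    ("resources1", "VCPU:2,MEMORY_MB:2048"),
    ("resources2", "SRIOV_NET_VF:1"),
    ("group_policy", "isolate"),  # or "none" for "any fit"
]
url = "/allocation_candidates?" + urlencode(params)
```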
Change-Id: I116089b2d4f3faeb985329fd33507ece9e9ca944
Amend the spec to include more implementation details.
Change-Id: I8cb13ddba872f43f1e5404b7f9a0d96482036a42
Related: blueprint abort-live-migration-in-queued-status