4ac7575717
This never actually worked. The intent was to test a setup with two hosts, one with the configuration needed for a specific request and one without, and to cold migrate the instance from the former to the latter, expecting a failure. However, because it is not possible to use different configuration for different hosts, the test instead attempted to "break" the configuration on one host. Unfortunately, the driver correctly detects this broken configuration, resulting in an error like so:

  ERROR [nova.compute.manager] Error updating resources for node test_compute1.
  Traceback (most recent call last):
    File "nova/compute/manager.py", line 8524, in _update_available_resource_for_node
      startup=startup)
    File "nova/compute/resource_tracker.py", line 867, in update_available_resource
      resources = self.driver.get_available_resource(nodename)
    File "nova/virt/libvirt/driver.py", line 7907, in get_available_resource
      numa_topology = self._get_host_numa_topology()
    File "nova/virt/libvirt/driver.py", line 7057, in _get_host_numa_topology
      physnet_affinities = _get_physnet_numa_affinity()
    File "nova/virt/libvirt/driver.py", line 7039, in _get_physnet_numa_affinity
      raise exception.InvalidNetworkNUMAAffinity(reason=msg)
  InvalidNetworkNUMAAffinity: Invalid NUMA network affinity configured: node 1 for physnet foo is not present in host affinity set {0: set([])}

There isn't really an alternative: we cannot configure compute nodes separately, and the virt driver will correctly detect any other attempt to break the configuration. Since the test never actually worked, the correct thing to do is to remove it.

NOTE(stephenfin): This was backported to make the backport of change I0322d872bdff68936033a6f5a54e8296a6fb3434 cleaner, but it also applies here.
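The check that defeats the test can be illustrated with a minimal sketch. This is not Nova's actual implementation (the function and parameter names here are hypothetical); it only mirrors the validation that `_get_physnet_numa_affinity` performs: the configured NUMA node affinity for each physnet is compared against the NUMA nodes actually present on the host, and an error is raised when a configured node does not exist.

```python
# Hypothetical sketch of the physnet NUMA affinity validation, not Nova's
# real code. It shows why any attempt to "break" the configuration is
# caught before the node's resources are ever reported.

class InvalidNetworkNUMAAffinity(Exception):
    """Raised when a physnet is pinned to a NUMA node the host lacks."""


def check_physnet_numa_affinity(configured_nodes, host_nodes, physnet):
    """Raise if any configured NUMA node is absent from the host.

    configured_nodes: NUMA node IDs configured for the physnet
    host_nodes: NUMA node IDs actually present on the host
    """
    missing = set(configured_nodes) - set(host_nodes)
    if missing:
        raise InvalidNetworkNUMAAffinity(
            "Invalid NUMA network affinity configured: node(s) "
            f"{sorted(missing)} for physnet {physnet} not present in host "
            f"affinity set {sorted(host_nodes)}")


# The traceback above corresponds to configuring node 1 for physnet 'foo'
# on a host that only exposes NUMA node 0.
check_physnet_numa_affinity([0], [0], "foo")   # passes silently
# check_physnet_numa_affinity([1], [0], "foo") would raise
```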
Conflicts:
  nova/tests/functional/libvirt/test_numa_servers.py

NOTE(stephenfin): Conflicts are due to change I8ef852d449e9e637d45e4ac92ffc5d1abd8d31c5 ("Include all network devices in nova diagnostics"), which modified the test we are removing here.

Change-Id: I14637d788205408dcf9a007d7727358c03033dcd
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
(cherry picked from commit
README.rst
Team and repository tags
OpenStack Nova
OpenStack Nova provides a cloud computing fabric controller, supporting a wide variety of compute technologies, including: libvirt (KVM, Xen, LXC and more), Hyper-V, VMware, XenServer, OpenStack Ironic and PowerVM.
Use the following resources to learn more.
API
To learn how to use Nova's API, consult the documentation available online at:
For more information on OpenStack APIs, SDKs and CLIs in general, refer to:
Operators
To learn how to deploy and configure OpenStack Nova, consult the documentation available online at:
In the unfortunate event that bugs are discovered, they should be reported to the appropriate bug tracker. If you obtained the software from a 3rd party operating system vendor, it is often wise to use their own bug tracker for reporting problems. In all other cases use the master OpenStack bug tracker, available at:
Developers
For information on how to contribute to Nova, please see the contents of the CONTRIBUTING.rst.
Any new code must follow the development guidelines detailed in the HACKING.rst file, and pass all unit tests.
Further developer focused documentation is available at:
Other Information
During each Summit and Project Team Gathering, we agree on what the whole community wants to focus on for the upcoming release. The plans for nova can be found at: