a1a335f19a
While getting a HostState object for a given compute node during scheduling, if the HostState does not have its instance info set, either because it's out of date or because the config option "track_instance_changes" is False, the HostManager still pulls the list of instances for that host from the database and stores it in HostState.instances. This is *only* used (in-tree) by the affinity filters, and even then the only thing those filters use from HostState.instances is the set of keys from the dict, which is the list of instance UUIDs on a given host. The actual Instance objects aren't used at all. See blueprint put-host-manager-instance-info-on-a-diet for more details on that.

The point of this change is that when we go to pull the set of instances from the database for a given host, we don't need to join on the default columns (info_cache and security_groups) defined in the _instance_get_all_query() method in the DB API. This should be at least a minor optimization in scheduling for hosts that have several instances on them in a large cloud.

As noted in the comment in the code, any out-of-tree filters that rely on using the info_cache or security_groups from the instance are now going to be hit with a lazy-load penalty per instance, but we have no contract on out-of-tree filters, so if this happens the people maintaining said filters can (1) live with it, (2) fork the HostManager code, or (3) upstream their filter so it's in-tree.

A more impactful change would be to refactor HostManager._get_host_states to bulk query the instances on the given set of compute nodes in a single query per cell, but that is left for a later change.

Change-Id: Iccefbfdfa578515a004ef6ac718bac1a49d5c5fd
Partial-Bug: #1737465
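To make the filter-side usage concrete, here is a minimal, self-contained sketch in plain Python (not Nova code; the FakeHostState class and anti_affinity_passes helper are hypothetical names for illustration). It shows that an anti-affinity check only needs the UUID keys of HostState.instances, which is why the Instance rows backing those keys can be fetched without joining info_cache or security_groups::

    import uuid


    class FakeHostState(object):
        """Stand-in for a scheduler host state object."""

        def __init__(self, instances):
            # Maps instance UUID -> instance object. The affinity-style check
            # below only reads the keys, so the values can be lightweight
            # objects loaded without any joined columns.
            self.instances = instances


    def anti_affinity_passes(host_state, group_members):
        """Reject the host if any server group member already runs on it."""
        on_host = set(host_state.instances)      # keys only: instance UUIDs
        return not (on_host & set(group_members))


    if __name__ == '__main__':
        member = str(uuid.uuid4())
        other = str(uuid.uuid4())
        host = FakeHostState({member: object(), other: object()})
        # False: a group member is already on this host.
        print(anti_affinity_passes(host, [member]))
        # True: no overlap with the group's members.
        print(anti_affinity_passes(host, [str(uuid.uuid4())]))

Nothing in this check touches instance attributes beyond the UUID, so skipping the default joins only penalizes out-of-tree filters that actually lazy-load those fields, as described above.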
README.rst
Team and repository tags
OpenStack Nova
OpenStack Nova provides a cloud computing fabric controller, supporting a wide variety of compute technologies, including: libvirt (KVM, Xen, LXC and more), Hyper-V, VMware, XenServer, OpenStack Ironic and PowerVM.
Use the following resources to learn more.
API
To learn how to use Nova's API, consult the documentation available online at:
For more information on OpenStack APIs, SDKs and CLIs in general, refer to:
Operators
To learn how to deploy and configure OpenStack Nova, consult the documentation available online at:
In the unfortunate event that bugs are discovered, they should be reported to the appropriate bug tracker. If you obtained the software from a 3rd party operating system vendor, it is often wise to use their own bug tracker for reporting problems. In all other cases use the master OpenStack bug tracker, available at:
Developers
For information on how to contribute to Nova, please see the contents of the CONTRIBUTING.rst.
Any new code must follow the development guidelines detailed in the HACKING.rst file, and pass all unit tests.
Further developer focused documentation is available at:
Other Information
During each Summit and Project Team Gathering, we agree on what the whole community wants to focus on for the upcoming release. The plans for nova can be found at: