0ecb859b96

zypper performs some network operations when collecting the list of installed
packages, which slow down the overall process. We already have the per-host
list of repositories, so we can do some maths to figure out where each package
is coming from. As we see below, disabling the repositories saves us ~9s::

    ~$ time zypper --quiet pa -i

    real    0m10.607s
    user    0m9.024s
    sys     0m0.193s

    ~$ time zypper --disable-repositories pa -i

    real    0m1.043s
    user    0m0.833s
    sys     0m0.136s

On a deployment with multiple containers, the 9s difference causes a lot of
overhead, and jobs are timing out from time to time. As such, let's just
disable the repositories when collecting the list of installed packages.

Change-Id: I3e48dad0e213f6e24a2ceb4033b1a53873b6e298
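The commit message above mentions doing "some maths" to work out which repository each installed package comes from, using data we already have locally. A minimal, hedged sketch of that idea follows; the sample table stands in for real `zypper pa -i` output (so the snippet runs even where zypper is absent), and the exact column layout (`S | Repository | Name | Version | Arch`) is an assumption about the format:

```shell
# Hedged sketch: derive a "package -> repository" mapping from zypper's table
# output without any network access. The sample below stands in for the real
# output of `zypper --disable-repositories pa -i`; the column order is an
# assumption, adjust the field numbers if your zypper prints differently.
sample='S  | Repository  | Name   | Version | Arch
---+-------------+--------+---------+-------
i  | Main Repo   | bash   | 4.4     | x86_64
i  | Update Repo | curl   | 7.66    | x86_64'

# Keep only installed rows (status "i"), then print "package repository".
printf '%s\n' "$sample" \
  | awk -F'|' '$1 ~ /^i/ {gsub(/^ +| +$/,"",$2); gsub(/^ +| +$/,"",$3); print $3, $2}'
```

Running this prints one `package repository` pair per installed row, which is all the job needs once the repositories themselves are disabled.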
ansible-lint
common-tasks
doc
network_interfaces
releasenotes
sync/tasks
tests
zuul.d
.gitignore
.gitreview
LICENSE
README.rst
Vagrantfile
bindep.txt
create-grant-db.yml
destroy_containers.yml
gen-projects-list.sh
get-ansible-role-requirements.yml
iptables-clear.sh
manual-test.rc
run_tests.sh
run_tests_common.sh
setting-nodepool-variables.yml
setup.cfg
setup.py
sync-test-repos.sh
test-ansible-deps.txt
test-ansible-env-prep.sh
test-ansible-functional.sh
test-ansible-lint.sh
test-ansible-role-requirements.yml
test-ansible-syntax.sh
test-ansible.cfg
test-bashate.sh
test-create-previous-venv.sh
test-distro_install-vars.yml
test-install-cinder.yml
test-install-etcd.yml
test-install-galera.yml
test-install-glance.yml
test-install-gnocchi.yml
test-install-haproxy.yml
test-install-heat.yml
test-install-horizon.yml
test-install-infra.yml
test-install-ironic.yml
test-install-keystone.yml
test-install-memcached.yml
test-install-neutron.yml
test-install-nova.yml
test-install-openstack-hosts.yml
test-install-rabbitmq.yml
test-install-swift.yml
test-install-tempest.yml
test-log-collect.sh
test-pep8.sh
test-prepare-containers.yml
test-prepare-host.yml
test-prepare-keys.yml
test-repo-setup.yml
test-requirements.txt
test-setup-cinder-localhost.yml
test-setup-host.yml
test-setup-swifthosts.yml
test-vars.yml
tox.ini
README.rst

Team and repository tags
========================

OpenStack-Ansible testing
=========================

This is the ``openstack-ansible-tests`` repository, providing a framework and
consolidation of testing configuration and playbooks. This can be used to
integrate new projects, and ensure that code duplication is minimized whilst
allowing the addition of new testing scenarios with greater ease.
Role Integration
================

To enable the ``openstack-ansible-tests`` repository, ensure that the
``tox.ini`` configuration in the role repository matches the ``galera_client``
repository's ``tox.ini``, with the exception of the value for ``ROLE_NAME``.
A more advanced configuration which implements multiple functional test
scenarios is available in the ``neutron`` role's ``tox.ini``.
To override variables you can create a ``${rolename}-overrides.yml`` file
inside the role's ``tests`` folder. This variable file can be included in the
functional tox target configuration in ``tox.ini``, as demonstrated in the
following extract::

    ansible-playbook -i {toxinidir}/tests/inventory \
                     -e @{toxinidir}/tests/${rolename}-overrides.yml \
                     -vvvv {toxinidir}/tests/test.yml
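The overrides file itself just holds ordinary Ansible variables. As a sketch, assuming hypothetical variable names (the names below are illustrative only; use whichever variables your role actually consumes):

```yaml
# tests/${rolename}-overrides.yml
# NOTE: the variable names below are hypothetical examples, not ones this
# framework defines; substitute the variables your role actually consumes.
test_package_state: "latest"
test_container_count: 2
```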
In your repository's ``tests/test.yml`` file, you can call any of the
included playbooks, for example::

    - include: common/test-prepare-keys.yml
Network Settings
================

The networking can be configured and set up using the ``bridges`` variable.

The base option, when only one interface is required, is to specify just a
single bridge name. This exists only for backwards compatibility with the
existing test setup, and will default to ``br-mgmt`` with an IP of
``10.1.0.1``::

    bridges:
      - "br-mgmt"
To allow a more complicated network setup we can specify:

* ``ip_addr``: The IP address on the interface.
* ``netmask``: Netmask of the interface (defaults to ``255.255.255.0``).
* ``name``: Name of the interface.
* ``veth_peer``: Set up a veth peer for the interface.
* ``alias``: Add an alias IP address.
For example, a Nova setup may look like this::

    bridges:
      - name: "br-mgmt"
        ip_addr: "10.1.0.1"
      - name: "br-vxlan"
        ip_addr: "10.1.1.1"
      - name: "br-vlan"
        ip_addr: "10.1.2.200"
        veth_peer: "eth12"
        alias: "10.1.2.1"