This incorrect disk_cachemodes setting was causing nova-compute to use
'none' as the disk cache mode instead of 'writeback'. We want
writeback because we do not care about data integrity on the ephemeral
nodepool guests, only about disk I/O performance.
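
A minimal sketch of the fix, assuming OSA's nova_nova_conf_overrides
mechanism for injecting nova.conf options:

    nova_nova_conf_overrides:
      libvirt:
        disk_cachemodes: "file=writeback"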
Change-Id: I9b31c4afa8b7836f2f77294033d0a92bec55dd84
Configure OSA to use the utility container for bootstrapping OpenStack
services. In Rocky, OSA began using the deployment host by default
instead of the utility container. This breaks our deployment model
because the deployment host does not have access to the internal
OpenStack API endpoints.
Revert to the previous behavior of using the utility container for
bootstrapping services.
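
A minimal sketch of the override, assuming the
openstack_service_setup_host variable that Rocky introduced for this
purpose:

    # Run service bootstrap tasks from the utility container again
    openstack_service_setup_host: "{{ groups['utility_all'][0] }}"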
Change-Id: Iebfb6583c1b02bdc7422fb7c3fbdaf3a851aec43
Add the infra hosts to the haproxy global whitelist. This is needed
so that these hosts can access the nova metadata API endpoint as well
as the apt-cacher-ng endpoint.
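
A sketch of the whitelist change; the variable name and the network
address are illustrative, not the exact ones used here:

    # hypothetical group_vars entry whitelisting the infra host network
    haproxy_whitelist_networks:
      - 172.29.236.0/22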
Change-Id: I27eee08ab6f3b1e5ec3bd9afcebbabce181526ee
A series of env.d overrides were applied in order to deploy the Pike
container infrastructure with the same hyperconverged scenario that
was implemented for Queens. Now that OSA has been upgraded to Queens,
these overrides are no longer needed.
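
For reference, the removed overrides followed OSA's env.d
container_skel format, roughly like this sketch (the is_metal flag is
just one illustrative example of such an override):

    # /etc/openstack_deploy/env.d/nova.yml (illustrative)
    container_skel:
      nova_compute_container:
        properties:
          is_metal: true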
Change-Id: I57107d101368d76d508d2ebcc2fc27f3110aa197
This is an unsafe cache mode in production, but since this cloud only
runs ephemeral CI test instances, use the writeback cache mode for
maximum disk I/O.
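
For reference, the resulting nova.conf stanza is:

    [libvirt]
    disk_cachemodes = file=writeback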
Change-Id: I9c0f50c9182d0372e232f517cb431559eb98d233
Host passthrough exposes the compute host's CPU model and features
directly to guests, without regard for migration compatibility. We do
not care about migration because all of the instances on this cloud
are ephemeral nodepool instances.
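
A minimal sketch of the setting, again assuming the
nova_nova_conf_overrides mechanism:

    nova_nova_conf_overrides:
      libvirt:
        cpu_mode: host-passthrough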
Change-Id: Id02f1826b58acf1834ec117679a26d9bbe981c2e
Since the compute hosts are the only hosts not connected directly to
the internet, they are the only hosts in the environment that require
a proxy configuration.
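
A minimal sketch, assuming the proxy is scoped to the compute hosts
via group vars and applied with OSA's global_environment_variables;
the proxy URL is illustrative:

    # group_vars for the compute hosts only (scoping is an assumption)
    global_environment_variables:
      http_proxy: "http://proxy.example.com:3128"
      https_proxy: "http://proxy.example.com:3128"
      no_proxy: "localhost,127.0.0.1,{{ internal_lb_vip_address }}"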
Change-Id: Ie3b8474fda2b6d2a0c2de4e18739824fec25fd3f
This change adds the elk_metrics_6x deployment tooling to this project.
A new submodule was added for the openstack-ansible-ops repo, which
will be set up and executed via the deployment script.
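
A sketch of the submodule wiring; the URL reflects the repo's current
home on opendev.org:

    git submodule add https://opendev.org/openstack/openstack-ansible-ops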
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
On the 4-core/8-thread Xeon E3 controllers used here, the resulting
galera max_connections setting of 800 causes database connection
contention in the environment.
This change increases max_connections to accommodate the full number
of connections needed by the OSA base services.
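
A sketch of the override, assuming the galera_server role's
galera_max_connections variable; the exact value is illustrative:

    galera_max_connections: 1600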
With only a single flat interface bound on the host,
neutron-linuxbridge-agent breaks the controller networking, taking
all controllers offline: the agent detaches bond0 from the br-mgmt
bridge and attaches it to its own bridge instead, severing host
networking.
This is not an issue when the flat network is passed to the agents
container instead, since the container's tap interface is attached to
the br-mgmt bridge on the host, and neutron can hang its bridges off
the tap interface inside the container, as sketched below.
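
A sketch of the working provider network entry in
openstack_user_config.yml, with the flat network bound to the agents
container; interface names are illustrative:

    provider_networks:
      - network:
          container_bridge: "br-mgmt"
          container_type: "veth"
          container_interface: "eth12"
          type: "flat"
          net_name: "flat"
          group_binds:
            - neutron_linuxbridge_agent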