This manages the clouds.yaml files in ansible so that we can get them
updated automatically on bridge.openstack.org (which does not run
puppet).
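Conceptually this amounts to an ansible task along these lines (the
template name, destination path and mode are illustrative, not the
actual values):

  - name: Write out clouds.yaml  # hypothetical task
    template:
      src: clouds.yaml.j2
      dest: /etc/openstack/clouds.yaml
      mode: '0640'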
Co-Authored-By: James E. Blair <jeblair@redhat.com>
Depends-On: https://review.openstack.org/598378
Change-Id: I2071f2593f57024bc985e18eaf1ffbf6f3d38140
Puppet is no longer being run from cron on puppetmaster (yay!), so
start running it from cron on bridge.
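Roughly the intended sort of crontab entry on bridge (the script path,
schedule and log location are assumptions):

  # Illustrative cron entry for the puppet run-all wrapper.
  */15 * * * * /opt/system-config/run_all.sh >> /var/log/puppet_run_all.log 2>&1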
Change-Id: Idc579a2660a5450092544c21a2e9e6cb9688e5f9
We copied this over from puppetmaster, but let's manage it in ansible.
The key has been renamed in host_vars on bridge.openstack.org already.
Change-Id: Ia102dbe2ae2836880092b8997cb99135f5197b00
We have a bunch of this handled in ansible now, so remove the old
stuff. Remove the puppetmaster group management files. It's confusing
for there to be two sets of files, so remove the old ones.
Remove the mqtt config. It isn't really in use currently, and we're
eyeing running things from zuul anyway, so there's no need to port it
to ansible.
Change-Id: I8b64d21eadcc4a08bd5e5440fc5f756ae5bcd46b
Bridge can run puppet on the remote hosts. Stop running puppet from
puppetmaster so that we can run it from bridge. Put puppetmaster in
the disabled group so that we don't try to run puppet on it from
bridge.
Change-Id: Ibcfa7e902c07c55e3a84f8232a11792c5f7d80e9
There were updates upstream in ansible to rename this script due to
import issues. Additionally this switches us from using shade to
openstacksdk to get the inventory contents dynamically.
Note that we ensure the old file is absent prior to adding the new file
to avoid a race where we'll have two dynamic inventory scripts providing
the same functionality.
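A minimal sketch of that ordering (the script names and paths here are
illustrative, not the actual filenames):

  - name: Remove the old shade-based inventory script
    file:
      path: /etc/ansible/hosts/openstack.py  # hypothetical old location
      state: absent

  - name: Install the renamed openstacksdk-based inventory script
    copy:
      src: openstack_inventory.py            # hypothetical new script
      dest: /etc/ansible/hosts/openstack_inventory.py
      mode: '0755'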
Change-Id: I76b1099bf0cf3bfead17f96e456cdce87d0e8a49
We clone ansible to /opt/ansible and use it as the source of our
openstack inventory script. Newer ansible has renamed this script, so
we need to migrate to the new thing. Until we are ready to do that,
pin to an older version of ansible that has the script at the old
location.
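For example (the tag is hypothetical; the point is checking out a
release that predates the rename):

  git clone https://github.com/ansible/ansible /opt/ansible
  git -C /opt/ansible checkout v2.5.5  # assumed pre-rename release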
Change-Id: I2084601b8f2f3629205b3c2c415bc1ad793226b0
Infracloud is sadly deceased. The upside is we can delete a lot of code
we don't need anymore. This patch removes infracloud nodes from
site.pp so that the puppet-apply test no longer bothers to validate
them, removes the infracloud modules from modules.env so that we don't
bother to install those modules in puppet-apply and puppet functional
tests, and removes the infracloud-specific data from the public hiera.
Additionally it stops the puppetmaster from trying to run the
infracloud ansible playbook, and finally removes the chocolate region
from nodepool's clouds.yaml (vanilla was already done).
This patch leaves the run_infracloud.sh script and the
infracloud-specific ansible playbooks as well as the infracloud
manifests in the openstack_project puppet module. It's possible those
tools could come in handy in the future if we ever have another
infracloud, and leaving them in place doesn't add confusion about
which hosts are actually active, nor does it leave cruft that gets
unnecessarily tested.
Change-Id: Ic760cc55f8e17fa7f39f2dd0433f5560aa8e2d65
We only run expand-groups.sh during launch-node [1] -- which makes
sense for additions, as we don't have hosts appearing that don't come
via that path. However, nothing really runs it on removal of hosts,
meaning /etc/ansible/hosts/generated-groups can contain stale entries
until the next time a new host is launched. Simply run it once a day
to keep it fresh.
[1] http://git.openstack.org/cgit/openstack-infra/system-config/tree/launch/launch-node.py#n172
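Something along these lines (the path and hour are assumptions):

  # Illustrative daily cron entry to refresh generated-groups.
  0 4 * * * /opt/system-config/tools/expand-groups.sh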
Change-Id: Ia112082df33b5ebf465f7d5a23685cc3e28b0551
Currently puppetdb and puppetboard have been broken for some time
(1+ year), and with ubuntu precise becoming EOL they are prime
candidates for deletion. This leaves openstack-infra with a gap in
reporting for non-root users. As such, a proposal is in the works to
maybe use ARA.
Change-Id: Ifc73a2dba3b37ebe790a29c0daa948d6bad0aa33
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
The /var/log/puppet_run_all_infracloud.log file (created by
/opt/system-config/production/run_infracloud.sh as it is run by cron)
is currently growing without bound (~6GB). Add it to logrotate like
puppet_run_all.log.
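The stanza would look something like the following (frequency and
rotation count are assumptions, presumably mirroring the existing
puppet_run_all.log settings):

  /var/log/puppet_run_all_infracloud.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
  }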
Change-Id: I4528ad1bab871ac489fb53aeaa33f9dabe98bbc7
The new paho-mqtt 1.3.0 release brings
https://github.com/eclipse/paho.mqtt.python/commit/0a8cccc which
prevents its use on Ubuntu Trusty's default Python interpreter.
Until we upgrade to a newer Python there, stay on paho-mqtt 1.2.3 so
that the MQTT callback plugin for Ansible will remain functional.
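Concretely, something like:

  # Stay on the last release that imports cleanly on Trusty's Python.
  pip install 'paho-mqtt==1.2.3'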
Change-Id: I2d8d5f74a3a8244da226d18365650780d3350d1f
We want to start encrypting our gearman traffic for zuulv3; as such,
we'll need to bring a CA service online. The idea here is that we
create a new CA for each interconnecting service we want SSL certs
for.
As an example /etc/zuul-ca will be used to generate SSL certs for our
gearman service.
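A rough sketch of the kind of CA bootstrap this implies (the key size,
subject and lifetime are illustrative):

  # Generate a CA key and self-signed certificate for gearman TLS.
  openssl genrsa -out /etc/zuul-ca/ca.key 4096
  openssl req -x509 -new -key /etc/zuul-ca/ca.key \
    -subj '/CN=zuul-gearman-ca' -days 3650 -out /etc/zuul-ca/ca.crt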
Change-Id: I8c341559292c78d5428fe16837f28494a76e65db
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
Co-Authored-By: Jeremy Stanley <fungi@yuggoth.org>
Now that we are running puppet in masterless mode, we don't need to tell
nodes where the puppetmaster is, or what their certname is, nor do we
need to keep running the puppetmaster in Apache. This patch cleans those
things up.
Change-Id: I663af0d9948f2ce3a47cc22ada47c3bbbbf316fa
This commit changes the dest for the mqtt callback plugin to be in the
local callback plugins dir instead of the ansible source repo.
Change-Id: Iedef1d6ae57888de62b32db77c9c1e717d613632
Depends-On: I697a74a5dbd63e9a87913c96a3e9be93ee7860da
We need the python mqtt library if we want to use the mqtt callback
plugin. This commit ensures we actually install it.
Change-Id: Ic9b3fd12bd2e8bdffc62f9b6d227c2a67a10eeb7
This commit adds the mqtt ansible callback plugin to the puppetmaster
config so that whenever we run ansible we'll emit events to the
firehose for that.
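The relevant ansible.cfg fragment is roughly of this shape (the plugin
directory and whitelist entry are assumptions):

  [defaults]
  callback_plugins = /etc/ansible/callback_plugins
  callback_whitelist = mqtt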
Change-Id: Id5f10705687c5bb9854d386efd7fed486172f745
Create a wrapper script and crontab entry on puppetmaster.
Change-Id: Ida2a86d13731c40141163d43236b9856d227e5af
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
Nothing in the template or puppetmaster classes uses any of the params
values. The classes that do use values from the params class
(o_p::server, o_p::users and o_p::users_install) already include it,
either explicitly or by including other classes that do.
Change-Id: If91ff59e26bdb345f96224603becfb3f937ea90f
Update to the release version 2.2.1.0 from the RC we put in for
Iba0962d2fe8241f882833f4ecbadfad88aa753e3
Change-Id: I4d4deab8af8e3419ca309b6b5ffb4d13ff0a2502
Update to ansible 2.2.1-rc3 to avoid potential issues relating to
CVE-2016-9587.
Depends-On: Ia6b50e6889a08edefb4e17957ba37d86f8db7cdb
Change-Id: Iba0962d2fe8241f882833f4ecbadfad88aa753e3
Create the signing01.ci.openstack.org job node and puppet the
signing subkey onto it via pubring.gpg and secring.gpg files stored
in private hiera. Also set up some basic configuration and packages
on the management bastion to aid in key management/rotation, and add
the beginnings of administrative documentation for this.
Change-Id: Iecddb778994a38f7898e0c20e7f3f8e93f0a7f60
Depends-On: I70c3b82185681ee64791cda653360c26a93bd466
Story: #2000336
Signed-off-by: Jeremy Stanley <fungi@yuggoth.org>
We are doing this so non-root users can use launch-node.py, as the
bootstrap process copies hieradata over to the newly launched server.
Change-Id: If16bd3adbf9877927dd10a74077c04ddeeeeffed
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
We require an infracloud ssl param which we later use to make that
file absent.
This is probably safe to remove now.
Change-Id: Id64853eaaf84a7cd1a9e73c7fdf377f85fb8747c
https://github.com/ansible/ansible/pull/14882 landed, so the inventory
will understand that an empty cache means the inventory needs
refetching. Zero out the file, and start consuming inventory from the
master branch of ansible since mordred controls that file anyway.
Change-Id: I2a4f4b21c50bfa94a229dd109e3d21f47552f0a1
In order for individuals to be able to run launch-node commands
without becoming root, make these files group-owned by admin and
group-writable.
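In shell terms, the equivalent of (the path is hypothetical; the
actual change is presumably expressed in puppet):

  chgrp admin /etc/openstack/clouds.yaml
  chmod g+w /etc/openstack/clouds.yaml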
Change-Id: I0a2fa336919be24d41a6a9c0a88b91a87536cbcc
Ansible-clouds.yaml becomes /etc/openstack/clouds.yaml on the
puppetmaster and is used for the ansible dynamic inventory. When a
cloud there does not respond, the ansible inventory fails completely.
Remove infracloudwest from all-clouds.yaml until it comes back.
Change-Id: I34d265a60f0a97f040b6703ab74c93a8fd0063af
And also the certs and the other clouds.yaml file, so that admins can
run openstackclient, etc., without sudo.
Change-Id: Ib8be3cd0601531284ec5d33cb5024b8363d924ca
We have had an all-clouds.yaml file that was not being managed on disk
by puppet. Actually apply it to disk so that the template ends up on the
puppetmaster as expected.
Change-Id: I0136cab7c03b1932be5b24ff2e93ea8adb84c20d
We started collecting these things... and never stopped. Example:

  find /var/lib/puppet/reports/zm08.openstack.org/ -mtime +5 | wc -l
  690
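The cleanup could be as simple as a periodic prune along these lines
(the retention period is illustrative):

  find /var/lib/puppet/reports/ -type f -mtime +5 -delete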
Change-Id: I72dc2bb32c76ae8f2ebd22801e8d3e9924c25d4d
Since these are baremetal hosts, they need to come from a static
inventory, not the openstack inventory. Fortunately, that's pretty
easy. Also set the infracloud groups to be children of disabled to
keep them disabled until we are ready.
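For illustration, a static inventory fragment of the intended shape
(the group and host names are hypothetical):

  [infracloud-baremetal]
  compute000.example.openstack.org

  [disabled:children]
  infracloud-baremetal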
Change-Id: I87ed4008ed9c4867f79bbb5fbb6be53707b42625
We already have a dynamic system for managing static groups. Use it
for the disabled group so that the rules for managing the members are
not different.
Also, update the disabled list to match reality.
Also, update the docs, because hosts are no longer groups:
The upstream OpenStack Inventory in Ansible was fixed to no longer
return each cloud host as its own group unless there are duplicates for
the host in question. This means it's no longer the right thing to do
to put hosts into disabled:children - disabled is just fine.
Change-Id: I95c83ed64801db15ad99a14547895f3520356f99
We have a set of hostname patterns, which is not something that
ansible supports in inventory files. While we can put hostname
patterns into
playbooks directly, that does not help us with copying hiera group files
since ansible doesn't know about the groups in site.pp and puppet
doesn't know about the ansible groups.
Instead, do a quick expansion any time the groups.txt file changes and
at the end of launch-node. It will be left to admins to run
expand-groups.sh whenever they delete a node.
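For illustration only (the groups.txt syntax, group name and hosts
here are all assumptions), an entry mapping a group to a pattern such
as:

  mirror mirror*.openstack.org

might expand in /etc/ansible/hosts/generated-groups to something like:

  [mirror]
  mirror01.dfw.openstack.org
  mirror01.ord.openstack.org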
Change-Id: I00c60748ddb2d35a3b98f78d828dabebcf065118