I was a little too hasty in I76acbd08acda20c88ff9fd9148e3918b78d6c6c6
when removing the scripts/ directory; doing so broke the puppeting of
old hosts.
Restore the directory with a blank file explaining the situation.
Also, we don't need to copy this in the nodepool elements; remove that.
Change-Id: I8b82950237ef69c4941866900cac9bda42f58ca2
CloudFlare's public recursive DNS resolvers are available at
multiple anycast addresses. For some reason 1.1.1.1 is unreachable
from parts of OVH's BHS1 region, but 1.0.0.1 seems to be
consistently reachable. Swap this for improved reliability.
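For reference, the resulting forwarder stanza might look something like
the following (an illustrative sketch; the actual file name and layout
in our config may differ):

```
# forward all queries to CloudFlare's public resolvers;
# 1.0.0.1 listed first since it is consistently reachable from OVH BHS1
forward-zone:
  name: "."
  forward-addr: 1.0.0.1
  forward-addr: 1.1.1.1
```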
Change-Id: I9a264282ea6c8239883d252f52e004deebca3edc
Ianw noticed problems on fedora29 with unbound. That resulted in a bug
filed upstream,
https://www.nlnetlabs.nl/bugs-script/show_bug.cgi?id=4226. There the
helpful unbound maintainers point out that the OpenDNS servers are
having trouble with RRSIG records, which breaks the dnssec validation
that our unbound config requires.
Address this by switching to CloudFlare DNS, which is supposed to be
well localized (aka responsive) and to not record queries made against
it. If we want to, we can also later update our config to do DNS over
TLS against these servers.
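If we do go the DNS-over-TLS route, a minimal unbound sketch might look
like this (illustrative only; the @port#authname syntax requires a
reasonably recent unbound release):

```
server:
  # CA bundle path varies by platform; this is an assumption
  tls-cert-bundle: /etc/ssl/certs/ca-certificates.crt

forward-zone:
  name: "."
  forward-tls-upstream: yes
  forward-addr: 1.1.1.1@853#cloudflare-dns.com
  forward-addr: 1.0.0.1@853#cloudflare-dns.com
```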
Change-Id: I08ef6a6fba2706803d2e9de6197e0ef8d695e313
We are seeing a problem on Fedora where, on hosts without ipv6
configured, unbound appears to choose to send queries via the ipv6
forwarders and then returns DNS failures.
An upstream issue has been filed [1], but it remains unclear exactly
why this happens on Fedora but not other platforms.
However, having ipv6 forwarders is not always correct. Not all our
platforms have glean support for ipv6 configuration, nor do all our
providers provide ipv6 transit.
Therefore, ipv4 is the lowest common denominator across all platforms.
Even those that are "ipv6 only" still provide ipv4 via NAT --
originally it was the unreliability of this NAT transit that led to
unbound being used in the first place. It should be noted that in
almost all jobs, the configure-unbound role [2] called from the base
job will re-write the forwarding information and configure ipv4/ipv6
correctly depending on the node & provider support. Thus this only
really affects some of the
openstack-zuul-jobs/system-config integration jobs, where we start out
without unbound configured because we're actually *testing* the
unbound configuration role.
An additional complication is that we want to keep backwards
compatibility and populate the settings if
NODEPOOL_STATIC_NAMESERVER_V6 is explicitly set -- this is sometimes
required if you are building infra-style images within a corporate
network that disallows outbound DNS queries, for example.
Thus, by default, populate only ipv4 forwarders, unless we are
explicitly asked to add ipv6 via the new variable or the static v6
nameservers are explicitly specified.
[1] https://www.nlnetlabs.nl/bugs-script/show_bug.cgi?id=4188
[2] http://git.openstack.org/cgit/openstack-infra/openstack-zuul-jobs/tree/roles/configure-unbound
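The selection logic described above can be sketched roughly as follows.
This is an illustration, not the element's actual code:
NODEPOOL_STATIC_NAMESERVER_V6 is the real knob named above, but the
function name, the opt-in variable NODEPOOL_USE_IPV6 (the change does
not name the new variable), and the concrete addresses are assumptions.

```shell
#!/bin/sh
# Sketch: decide which forwarders to populate in the unbound config.
pick_forwarders() {
    # ipv4 is always populated: it is the lowest common denominator
    # across all platforms and providers.
    forwarders="1.1.1.1 1.0.0.1"
    if [ -n "$NODEPOOL_STATIC_NAMESERVER_V6" ]; then
        # Backwards compatibility: an explicitly set static v6
        # nameserver always wins.
        forwarders="$forwarders $NODEPOOL_STATIC_NAMESERVER_V6"
    elif [ "${NODEPOOL_USE_IPV6:-0}" = "1" ]; then
        # Only add ipv6 forwarders when explicitly asked to
        # (hypothetical opt-in variable).
        forwarders="$forwarders 2606:4700:4700::1111"
    fi
    echo "$forwarders"
}
```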
Change-Id: If060455e163266b2c3e72b4a2ac2838a61859496
Turns out that we set these vars via an environment.d file in the DIB
element, which was overriding the finalise script's values and so
continuing to use Google DNS as the primary resolver. Update the
environment.d file to use OpenDNS by default with Google DNS as the
fallback.
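The resulting environment.d fragment would look something like this
(the exact filename and the name of the fallback variable are
assumptions; NODEPOOL_STATIC_NAMESERVER is the variable the element
actually uses):

```shell
# environment.d fragment: default resolvers for the nodepool element.
# Use OpenDNS primary, falling back to Google DNS, but let a value
# already set in the environment win.
export NODEPOOL_STATIC_NAMESERVER=${NODEPOOL_STATIC_NAMESERVER:-208.67.222.222}
export NODEPOOL_STATIC_NAMESERVER_FALLBACK=${NODEPOOL_STATIC_NAMESERVER_FALLBACK:-8.8.8.8}
```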
Change-Id: I87809d8917fdd5ca7319241934a006480b736bd3
Now that osic-cloud1 is only using IPv6 public IPs, we can also add
IPv6 support for unbound.
Change-Id: I9da5a06fdbea04b322cddf6c7e6e829e47492d4c
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
The nodepool-base element relies on a couple of environment variables.
Describe NODEPOOL_SCRIPTDIR and NODEPOOL_STATIC_NAMESERVER in the
README.rst file.
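The README addition might look something like the following (the
descriptions are paraphrased sketches, not the actual README wording):

```
Environment Variables
---------------------

NODEPOOL_SCRIPTDIR
  Directory of scripts to be copied into the image during the build.

NODEPOOL_STATIC_NAMESERVER
  Static nameserver to configure as the resolver in the built image.
```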
Change-Id: I56f2aab095a0504e19598d7296d072e7a51b07c2
This repo was created from filter branching the openstack-infra/
config repo. This process brought a lot of cruft with it in the
form of directories that we no longer need. This patch removes
that cruft so we begin with a tidier repo.
Change-Id: Ibffad1b11c0c5f84eedfb0365369f60c4961a0f3