We are seeing errors in the BMC console on some clouds:
Error, some other host (<% MAC_ADDR %>) already uses
address <% IP ADDR %>.
Set port_security_enabled: false on the BMC's other ports.
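For illustration, a minimal sketch of the change (resource and
parameter names are hypothetical; port_security_enabled is a
standard OS::Neutron::Port property):

    bmc_other_port:
      type: OS::Neutron::Port
      properties:
        network: {get_param: private_net}
        # Disable Neutron's anti-spoofing enforcement on the BMC's
        # extra ports so their addresses do not trigger the
        # duplicate-address error above.
        port_security_enabled: false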
Closes-Bug: #1997561
Change-Id: I178bd5c642ac8c54c94cd854452f9bcebf697fba
The CloudConfig write_files entry for chrony.conf is
a nested list instead of a single entry, so the file
never gets written.
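For reference, a correctly flattened cloud-config entry looks
roughly like this (file content is illustrative):

    write_files:
      # Each file must be a single list item; a nested list
      # ("- - path: ...") is silently ignored by cloud-init.
      - path: /etc/chrony.conf
        permissions: '0644'
        content: |
          local stratum 8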
Change-Id: I5ff6b81c6aaf454fad93e7c2fe2ff5ac68b91261
Add support to explicitly request config-drive
for BMC and Undercloud type instances.
NOTE: config-drive is always disabled for the
virtual baremetal instances. This is already
hardcoded in virtual-baremetal-servers.yaml and
virtual-baremetal-servers-volume.yaml.
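A minimal sketch of how the request could be wired through
(parameter name hypothetical; config_drive is the standard
OS::Nova::Server property):

    parameters:
      bmc_use_config_drive:
        type: boolean
        default: false

    resources:
      bmc_server:
        type: OS::Nova::Server
        properties:
          # Attach a config drive only when explicitly requested.
          config_drive: {get_param: bmc_use_config_drive}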
Related-Bug: #1929384
Closes-Bug: #1929419
Change-Id: I1f6454363b5d8a5c325afe194ed1484ff618f729
The dhcpv6-relay acts as both DHCPv6 relay and router
with radvd. Introspection and provisioning of baremetal
nodes in the OVB environment fail with a connection
timeout unless net.ipv6.conf.all.forwarding is enabled.
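A minimal cloud-config sketch of the fix (file path illustrative):

    #cloud-config
    write_files:
      - path: /etc/sysctl.d/99-ipv6-forwarding.conf
        content: |
          # Required so the instance can route between the
          # provisioning networks; without it introspection and
          # provisioning time out.
          net.ipv6.conf.all.forwarding = 1
    runcmd:
      - sysctl --system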
Change-Id: Ida15d7e5c573ea09f8e6929d70901408330dc8e8
https://review.opendev.org/#/c/733598/ added support
for allocation_pools, but set the type of
public_net_allocation_pools to comma_delimited_list,
which causes the issue below because for
comma_delimited_list, list items are converted to
strings:
Property error: : resources.public_subnet.properties.allocation_pools[0]:
"{'end': '10.0.0.199', 'start': '10.0.0.128'}" is not a map
We need to use type: json to make it work; this patch fixes that.
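Roughly, the fix looks like this (other subnet properties omitted):

    parameters:
      public_net_allocation_pools:
        # json preserves the list of {start: ..., end: ...} maps;
        # comma_delimited_list would stringify each map.
        type: json
        default: []

    resources:
      public_subnet:
        type: OS::Neutron::Subnet
        properties:
          allocation_pools: {get_param: public_net_allocation_pools}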
Related-Bug: #1874418
Change-Id: Iaebb297e5018ce8db6dd1f67a308e7707117fe03
TripleO CI uses 10.0.0.1 statically for the undercloud's
public interface. When using an extra node in some jobs
there is sometimes a conflict, because the extra node
gets the 10.0.0.1 address allocated.
Adding support to define the allocation pools on the
public_net allows TripleO CI to define a pool with the
10.0.0.1 address excluded.
A good practice would be to set up OVB to use
[{start: 10.0.0.128, end: 10.0.0.253}], and then configure
the undercloud/overcloud deployed on the OVB infrastructure
to use addresses in the range 10.0.0.1-10.0.0.127.
The public_net_allocation_pools parameter controls the
allocation pool setting; by default all addresses of the
subnet are in the pool.
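For example, an environment file along these lines keeps
10.0.0.1-10.0.0.127 out of the pool:

    parameter_defaults:
      public_net_allocation_pools:
        - start: 10.0.0.128
          end: 10.0.0.253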
Related-Bug: #1874418
Change-Id: Ieca4864e069148abb49eb709bf7f48a14ef04e77
Add prefix support for radvd and dhcrelay instances.
Also adds missing parameters for these instances in
the sample env generator environment.
Change-Id: I86bd6b014b62c3a382458f68443cfb02ed2e7031
Add the public IP of the undercloud to the stack output
so that it is easily available when statically configuring
the undercloud's public interface with the correct IP.
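A sketch of such an output (port resource name hypothetical):

    outputs:
      undercloud_public_ip:
        description: Public IP address of the undercloud
        value: {get_attr: [undercloud_public_port, fixed_ips, 0, ip_address]}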
Change-Id: I90ad37cc683f1640464eb7b2ccfb3ba5d107f259
Related-Bug: #1874418
Add new templates to configure radvd and dhcpv6 relay.
For IPv6 routed networks the radvd daemon and the dhcpv6
relay are hosted on the same instance.
Since we do not want the networks in the OVB infra to
provide any DHCP or autoconfiguration, we cannot use
neutron routers for provisioning network routing. The
instance running the dhcpv6 relay and radvd will also be
the router for the provisioning networks.
Bump the template version in undercloud-networks-routed.yaml
to 2015-10-15. This version is needed to avoid the error:
'Items to join must be strings not
{u'str_split': [u'/', u'fd12:3456:789a:3::/64', 1]}'
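The failing pattern looks roughly like this (parameter name
hypothetical); str_split is only resolved from 2015-10-15 on, so
with an older version the raw {str_split: ...} map reaches
list_join unresolved:

    heat_template_version: 2015-10-15

    # Take the prefix length (e.g. "64") from a CIDR such as
    # fd12:3456:789a:3::/64.
    value: {str_split: ['/', {get_param: provision_cidr}, 1]}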
Change-Id: Ib95f7d7cfd3d2318ac4f4f44f22955b0c18c465e
Currently the advertised MTU is hardcoded to 1450.
1450 is too high in the case of geneve tunnels on a
network with an MTU of 1500 in the underlay.
Instead, automatically get the MTU from the network via
the port on the provisioning network.
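A sketch of the lookup, assuming the "network" attribute of
OS::Neutron::Port (port name hypothetical):

    # Advertise the network's real MTU instead of a hardcoded 1450.
    mtu: {get_attr: [dhcp_relay_port, network, mtu]}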
Change-Id: I0725b6357bda6219ca49127184f6121167f4f319
Run chronyd as a time server for clients on the provisioning
network. The cloud hosting OVB might not have external
IPv6 connectivity, so we need a local time server for
IPv6-only OVB baremetal instances.
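A minimal chrony.conf sketch for such a local time server
(prefix illustrative):

    # Serve local time even without upstream NTP connectivity.
    local stratum 8
    # Allow clients on the provisioning network.
    allow fd12:3456:789a:2::/64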
Change-Id: I52eb326fa98c2089f6118ba4a4a575872abab2dc
Add parameters to set the ip_version for the subnets.
By default the ip_version for all networks is 4.
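Sketch of one such parameter (names hypothetical; ip_version is
the standard OS::Neutron::Subnet property):

    parameters:
      provision_subnet_ip_version:
        type: number
        default: 4
        constraints:
          - allowed_values: [4, 6]

    resources:
      provision_subnet:
        type: OS::Neutron::Subnet
        properties:
          # Other subnet properties omitted.
          ip_version: {get_param: provision_subnet_ip_version}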
Change-Id: I1c5a001fe2ec5c4194030fdf373c0a4318cba10c
When using pre-deployed servers, you may want all of the networking
setup of OVB but don't actually need to control the instances via
IPMI. While this could already be done, it left a useless BMC
instance lying around. This change allows the BMC to be disabled
completely to clean up such environments.
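One way to express this in HOT, sketched with a hypothetical
parameter (resource-level conditions require
heat_template_version 2016-10-14 or later):

    conditions:
      bmc_enabled: {not: {equals: [{get_param: bmc_flavor}, '']}}

    resources:
      bmc_server:
        type: OS::Nova::Server
        # Skipped entirely when the BMC is disabled, so no idle
        # instance is left lying around.
        condition: bmc_enabled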
Change-Id: Icd6936977684d178277ebb721a7fbb3ffad51d9a
It turns out that the instance name isn't really important for
build-nodes-json, so we can allow overriding this without breaking
anything.
Change-Id: I83e318ee710e2c815bd8a4cfa065ccb4c7253291
This was missed when the changes for routed-networks were made, and
it means the template doesn't work properly.
Change-Id: I7357883133c7a37687b8b13f274ff54c34abddf1
This file was missed in the original commit to add undercloud
network configuration templates. It's essentially a noop for adding
a second undercloud-like vm to the existing networks.
Instead of having Heat fire-and-forget the bmc deployment, have the
bmc explicitly signal back to Heat. This way bmc failures can be
caught at env deployment time instead of the first time the
undercloud tries to make an IPMI call.
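A sketch of the signalling arrangement (resource names
hypothetical); the bmc would invoke the handle's curl_cli once
its services are up:

    bmc_handle:
      type: OS::Heat::WaitConditionHandle

    bmc_wait_condition:
      type: OS::Heat::WaitCondition
      properties:
        handle: {get_resource: bmc_handle}
        # Fail the stack here, at deploy time, rather than at the
        # undercloud's first IPMI call.
        timeout: 600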
The IP addresses for the dhcp-relay service on the
provision networks need to be fixed. If we end up
using an address on the dhcp-relay instance that
overlaps the address range in the undercloud's
provisioning networks, we end up with conflicts.
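A sketch of pinning the relay address (names hypothetical):

    dhcp_relay_port:
      type: OS::Neutron::Port
      properties:
        network: {get_param: provision_net}
        fixed_ips:
          # A fixed address cannot collide with the undercloud's
          # provisioning ranges.
          - ip_address: {get_param: dhcp_relay_address}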
When deploying TripleO, overcloud nodes using the ctlplane
network as the default gateway need to reach the internet
(NTP servers etc.). Previously this was done by using the
undercloud as a masquerading router; doing so when nodes
are not on the same L2 network as the undercloud is not as
straightforward. (I.e. we would have to set up routes on
the provision router in OVB with a default route via the
IP address of the undercloud.)
Hooking up the router for the provision networks to the
external_net and letting the OVB infra router do the
NATing makes more sense.
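A sketch of the hookup (router name hypothetical;
external_gateway_info is the standard OS::Neutron::Router
property):

    provision_router:
      type: OS::Neutron::Router
      properties:
        # Let the OVB infra router NAT outbound traffic instead of
        # masquerading through the undercloud.
        external_gateway_info:
          network: {get_param: external_net}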
TripleO CI currently configures an interface on the
undercloud connected to the public network and uses the
undercloud as the router for the public network. This
deviates from what a non-CI deployment would do.
This change adds an optional undercloud-network-public-router
template with a router on the public_net which can provide
NATed external access for overcloud nodes that use the
External network interface as the default route.
The undercloud-networks-routed template has the
public-router added as well.
This removes the need for undercloud to provide masqueraded
routing for the external network when these templates are
used.
192.0.x.x addresses are not private. Since we now in
some cases care about the address ranges used in the
environment, switch to addresses in the private
192.168.x.x space.
Also a minor doc update: router addresses are no longer
dynamically allocated.
Drop the use of allocation pools, and instead use fixed
IPs for the router addresses.
Using an allocation pool forced the use of large CIDRs
to avoid overlapping addresses.
Prior to routed networks the OVB workloads could use any
IP addressing, since they did not rely on any infrastructure
networking. With routed networks the workloads must use
IP addressing in the subnets of the OVB infrastructure to
enable use of the routers and dhcp_relay.
* Use allocation pools to control the OVB infrastructure's
  use of addresses in IP subnets.
* Add stack output to the templates containing information
  about the infrastructure provisioned, i.e. the addresses
  of routers in the different subnets.
Also make the dhcp_servers to which the dhcp_relay instance
relays DHCP requests configurable.
The role does not always override the network information.
Update the networks in role_env: get the network from
parameter_defaults, fall back to parameters if not set,
and finally, if the network is not in parameters, set the
default.
Also make the default for networks: in the templates json
instead of a literal string.
These extra nodes are likely to need to run arbitrary services, so
it's not ideal to have a security group that only allows port 22.
Also, the floating ip version of this template doesn't have a
security group and that one actually exposes the port on an external
network, so there's no need to lock down this one that only exposes
it to the private network.