Add support for openstack inventory provider

Add support for OpenStack hosted pre-provisioned undercloud and
overcloud cluster nodes, optionally identified by clusterid metadata.
Adapt the tripleo-inventory role to that need. Overcloud nodes are
accessed over a given private admin/control network (multiple networks
are supported) via the undercloud floating IP, which acts as the
bastion node's public IP.

List of changes:
* Reuse the access URL/credentials and the clouds.yaml template from
  the defaults of the extras' OVB roles.
* Split the inventory generation tasks into the openstack provider
  case and the remaining all/multinode/undercloud cases.
* Add the overcloud_user (defaults to heat-admin), overcloud_key, and
  undercloud_key variables instead of hardcoded values.
* Rely on the shade dynamic inventory and update requirements.txt with
  os-client-config and jmespath as well, so that its dependencies
  become part of the quickstart deps.
* Add docs

Related-bug: #1691467
Change-Id: If2aef32a5df6fb407605a717523c5fcdec338534
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
Bogdan Dobrelya 2017-05-09 11:48:43 +02:00
parent acc09e10b7
commit fdaa9748e6
15 changed files with 436 additions and 161 deletions


@@ -10,6 +10,12 @@ host as documented in :ref:`accessing-undercloud` is sufficient, but there are
situations when you may want direct access to overcloud services from your
desktop.

Note that when overcloud nodes are hosted on an OpenStack cloud instead, the
SSH user name may be 'centos' or the like, and you may not be able to log in
as root. Node names may also be prefixed with a given Heat stack name, like
`foo-overcloud-controller-0`. The undercloud node should be given a floating
IP and will serve as a bastion host proxying ansible/ssh connections to the
overcloud nodes.
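
For example, once quickstart has generated the SSH config, the hypothetical
`foo-overcloud-controller-0` node above could be reached through the bastion
with:

.. code-block:: bash

    ssh -F $HOME/.quickstart/ssh.config.ansible foo-overcloud-controller-0
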
Logging in to overcloud hosts
-----------------------------


@@ -44,6 +44,12 @@ the overcloud::
| 7 | nova-compute | overcloud-novacompute-0.localdomain | nova | ...
+----+------------------+-------------------------------------+----------+-...

Note that when the undercloud node is hosted on an OpenStack cloud instead,
the SSH user name may be 'centos' or the like, and you may not be able to log
in as root. The undercloud node name may also be prefixed with a given Heat
stack name, like `foo-undercloud`. The node should also be given a floating
IP to serve as a bastion host proxying ansible/ssh connections to the
overcloud nodes.
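
For example, a floating IP could be attached to the hypothetical
`foo-undercloud` node above with python-openstackclient (a sketch; the
address is only illustrative):

.. code-block:: bash

    openstack server add floating ip foo-undercloud 192.0.2.10
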
Access via the TripleO-UI
-------------------------


@@ -85,3 +85,62 @@ By default, the kernel executable and initial rootfs for an undercloud VM
are extracted from the overcloud image. In order to switch to custom
``undercloud_custom_initrd`` and ``undercloud_custom_vmlinuz`` images,
set the ``undercloud_use_custom_boot_images`` to True.

Consuming OpenStack hosted VM instances as overcloud/undercloud nodes
---------------------------------------------------------------------

Nova servers pre-provisioned on OpenStack clouds may be consumed by
quickstart Ansible roles by specifying ``inventory: openstack``.
You should also provide a valid admin user name, like 'centos' or
'heat-admin', and paths to SSH keys in the ``overcloud_user``,
``overcloud_key``, ``undercloud_user``, and ``undercloud_key`` variables.

.. note:: The ``ssh_user`` should refer to the same value as
   ``undercloud_user``.

To identify and filter Nova servers by a cluster ID, define the
``clusterid`` variable. Note that the Nova servers need to have
``metadata.clusterid`` defined for this to work as expected.
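
For example, the metadata could be set on each node of the environment with
python-openstackclient (a sketch, reusing the hypothetical ``foo-undercloud``
node name and the ``tripleo_dev`` cluster ID from the playbook below):

.. code-block:: bash

    openstack server set --property clusterid=tripleo_dev foo-undercloud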

Then set ``openstack_private_network_name`` to the name of the private
network over which Ansible will connect to the inventory nodes via the
undercloud/bastion node's floating IP.

Finally, the host OpenStack cloud access URL and credentials need to be
configured. Here is an example playbook that generates the Ansible inventory
file and SSH config, given an access URL and credentials:

.. code-block:: yaml

    ---
    - name: Generate static inventory for openstack provider by shade
      hosts: localhost
      any_errors_fatal: true
      gather_facts: true
      become: false
      vars:
        undercloud_user: centos
        ssh_user: centos
        non_root_user: centos
        overcloud_user: centos
        inventory: openstack
        os_username: fuser
        os_password: secret
        os_tenant_name: fuser
        os_auth_url: 'http://cool_cloud.lc:5000/v2.0'
        cloud_name: cool_cloud
        clusterid: tripleo_dev
        openstack_private_network_name: my_private_net
        overcloud_key: '{{ working_dir }}/fuser.pem'
        undercloud_key: '{{ working_dir }}/fuser.pem'
      roles:
        - tripleo-inventory

Next, you may want to check whether the nodes are ready to proceed with the
overcloud deployment steps:

.. code-block:: bash

    ansible --ssh-common-args='-F $HOME/.quickstart/ssh.config.ansible' \
        -i $HOME/.quickstart/hosts all -m ping


@@ -1,6 +1,8 @@
ara
ansible==2.2.0.0
jmespath
netaddr>=0.7.18
os-client-config
pbr>=1.6
setuptools>=11.3
shade>=1.8.0


@@ -6,8 +6,10 @@ local_working_dir: "{{ lookup('env', 'HOME') }}/.quickstart"
# this will define the user that ansible will connect with
ssh_user: stack
# This defines the user that deploys the overcloud from the undercloud
undercloud_user: "stack"
# This defines the users that deploy the overcloud from the undercloud
# and access the overcloud as the orchestration admin user
undercloud_user: stack
overcloud_user: heat-admin
# This is where we store generated artifacts (like ssh config files,
# keys, deployment scripts, etc) on the undercloud.


@@ -0,0 +1,30 @@
# Configure a static inventory and SSH config to access nodes

The `tripleo-inventory` role generates a static inventory and SSH config,
based on the deployment mode and the type of undercloud.

## OpenStack inventory

When cluster nodes are pre-provisioned and hosted on an OpenStack host cloud,
use the `inventory: openstack` mode. Data fetched from the dynamic
`shade-inventory` is filtered by `clusterid`, if defined, and by node type
(overcloud/undercloud/bastion). The filtered data is then stored in the
static inventory, and an SSH config is generated to access the overcloud
nodes via a bastion, which is also the undercloud node.
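
For example, assuming the role has already rendered `clouds.yaml` into
`local_working_dir` (`~/.quickstart` by default), the raw node data the role
filters can be inspected with:

```bash
# os-client-config picks up clouds.yaml from the current directory
cd ~/.quickstart
shade-inventory --list | python -m json.tool | less
```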

The following variables should be customized to match the host OpenStack cloud
configuration (a quick credentials check follows the list):

* `clusterid` -- an optional Nova server metadata parameter used to identify
  your environment's VMs.
* `openstack_private_network_name` (default: `private`) -- defines the private
  network name used as an admin/control network. Ansible and SSH users will
  use that network when connecting to the inventory nodes via the
  undercloud/bastion.
* `os_username` -- credentials to connect to the OpenStack cloud hosting
  your pre-provisioned Nova servers.
* `os_password` -- credentials to connect to the host OpenStack cloud.
* `os_tenant_name` -- credentials to connect to the host OpenStack cloud.
* `os_auth_url` -- credentials to connect to the host OpenStack cloud.
* `cloud_name` -- the named cloud entry written to the generated `clouds.yaml`.
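
Before running the role, the configured credentials may be sanity-checked with
python-openstackclient against the same `clouds.yaml` (a sketch; `cool_cloud`
is the example `cloud_name` used in the quickstart docs):

```bash
# openstackclient also reads clouds.yaml from the current directory
cd ~/.quickstart
openstack --os-cloud cool_cloud server list
```
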
TODO(bogdando) document remaining modes 'all', 'undercloud'.


@@ -1,10 +1,13 @@
---
# SSH key used to access the undercloud machine.
# SSH key used to access the undercloud/overcloud machines.
undercloud_key: "{{ local_working_dir }}/id_rsa_undercloud"
overcloud_key: "{{ local_working_dir }}/id_rsa_overcloud"
# Default to 'undercloud' if the overcloud has not been deployed yet, or 'all'
# in case we want to inventory all the hosts.
# in case we want to inventory all the hosts. For the OpenStack provider case,
# use the 'openstack' value.
inventory: undercloud
# Type of undercloud.
undercloud_type: virtual
# Admin/control network name for the openstack inventory provider
openstack_private_network_name: private


@@ -0,0 +1,156 @@
---
- when: inventory == 'all'
  block:
    #required for liberty based deployments
    - name: copy get-overcloud-nodes.py to undercloud
      template:
        src: 'get-overcloud-nodes.py.j2'
        dest: '{{ working_dir }}/get-overcloud-nodes.py'
        mode: 0755

    #required for liberty based deployments
    - name: fetch overcloud node names and IPs
      shell: >
        source {{ working_dir }}/stackrc;
        python {{ working_dir }}/get-overcloud-nodes.py
      register: registered_overcloud_nodes

    - name: list the overcloud nodes
      debug: var=registered_overcloud_nodes.stdout

    - name: fetch the undercloud ssh key
      fetch:
        src: '{{ working_dir }}/.ssh/id_rsa'
        dest: '{{ overcloud_key }}'
        flat: yes
        mode: 0400

    # add host to the ansible group formed from its type
    # novacompute nodes are added as compute for backwards compatibility
    - name: add overcloud node to ansible
      with_dict: '{{ registered_overcloud_nodes.stdout | default({}) }}'
      add_host:
        name: '{{ item.key }}'
        groups: "overcloud,{{ item.key | regex_replace('overcloud-(?:nova)?([a-zA-Z0-9_]+)-[0-9]+$', '\\1') }}"
        ansible_host: '{{ item.key }}'
        ansible_fqdn: '{{ item.value }}'
        ansible_user: "{{ overcloud_user | default('heat-admin') }}"
        ansible_private_key_file: "{{ overcloud_key }}"
        ansible_ssh_extra_args: '-F "{{ local_working_dir }}/ssh.config.ansible"'

- when: inventory == 'multinode'
  block:
    - name: Get subnodes
      command: cat /etc/nodepool/sub_nodes_private
      register: nodes

    - name: Add subnode to ansible inventory
      with_indexed_items: '{{ nodes.stdout_lines | default([]) }}'
      add_host:
        name: 'subnode-{{ item.0 + 2 }}'
        groups: "overcloud"
        ansible_host: '{{ item.1 }}'
        ansible_fqdn: '{{ item.1 }}'
        ansible_user: "{{ lookup('env','USER') }}"
        ansible_private_key_file: "/etc/nodepool/id_rsa"

#required for regeneration of ssh.config.ansible
- name: set_fact for undercloud ip
  set_fact: undercloud_ip={{ hostvars['undercloud'].undercloud_ip }}
  when: hostvars['undercloud'] is defined and hostvars['undercloud'].undercloud_ip is defined

# Add the supplemental to the in-memory inventory.
- name: Add supplemental node vm to inventory
  add_host:
    name: supplemental
    groups: supplemental
    ansible_host: supplemental
    ansible_fqdn: supplemental
    ansible_user: '{{ supplemental_user }}'
    ansible_private_key_file: '{{ local_working_dir }}/id_rsa_supplemental'
    ansible_ssh_extra_args: '-F "{{local_working_dir}}/ssh.config.ansible"'
    supplemental_node_ip: "{{ supplemental_node_ip }}"
  when: supplemental_node_ip is defined

- name: set_fact for supplemental ip
  set_fact: supplemental_node_ip={{ hostvars['supplemental'].supplemental_node_ip }}
  when: hostvars['supplemental'] is defined and hostvars['supplemental'].supplemental_node_ip is defined

#readd the undercloud to reset the ansible_ssh parameters set in quickstart
- name: Add undercloud vm to inventory
  add_host:
    name: undercloud
    groups: undercloud
    ansible_host: undercloud
    ansible_fqdn: undercloud
    ansible_user: '{{ undercloud_user }}'
    ansible_private_key_file: '{{ undercloud_key }}'
    ansible_ssh_extra_args: '-F "{{ local_working_dir }}/ssh.config.local.ansible"'
    undercloud_ip: "{{ undercloud_ip }}"
  when: hostvars[groups['virthost'][0]].ansible_private_key_file is not defined and undercloud_ip is defined

#required for regeneration of ssh.config.ansible
- name: set undercloud ssh proxy command
  set_fact: undercloud_ssh_proxy_command="ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
    -o ConnectTimeout=60 -i {{ hostvars[groups['virthost'][0]].ansible_private_key_file }}
    {{ ssh_user }}@{{ hostvars[groups['virthost'][0]].ansible_host }}
    -W {{ undercloud_ip }}:22"
  when: hostvars[groups['virthost'][0]].ansible_private_key_file is defined and undercloud_ip is defined

#required for regeneration of ssh.config.ansible
- name: set undercloud ssh proxy command
  set_fact: undercloud_ssh_proxy_command="ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
    -o ConnectTimeout=60 -i {{ hostvars['localhost'].ansible_user_dir }}/.quickstart/id_rsa_virt_power
    {{ ssh_user }}@{{ hostvars['localhost'].ansible_default_ipv4.address }}
    -W {{ undercloud_ip }}:22"
  when: hostvars[groups['virthost'][0]].ansible_private_key_file is not defined and undercloud_ip is defined

- name: set supplemental ssh proxy command
  set_fact: supplemental_ssh_proxy_command="ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
    -o ConnectTimeout=60 -i {{ local_working_dir }}/id_rsa_virt_power
    {{ ssh_user }}@{{ hostvars[groups['virthost'][0]].ansible_host }}
    -W {{ supplemental_node_ip }}:22"
  when: supplemental_node_ip is defined

- name: create inventory from template
  delegate_to: localhost
  template:
    src: 'inventory.j2'
    dest: '{{ local_working_dir }}/hosts'

- name: regenerate ssh config
  delegate_to: localhost
  template:
    src: 'ssh_config.j2'
    dest: '{{ local_working_dir }}/ssh.config.ansible'
    mode: 0644
  when: undercloud_ip is defined

- name: regenerate ssh config for ssh connections from the virthost
  delegate_to: localhost
  template:
    src: 'ssh_config_localhost.j2'
    dest: '{{ local_working_dir }}/ssh.config.local.ansible'
    mode: 0644
  when: undercloud_ip is defined

# just setup the ssh.config.ansible and hosts file for the virthost
- name: check for existence of identity key
  delegate_to: localhost
  stat: path="{{ local_working_dir }}/id_rsa_virt_power"
  when: undercloud_ip is not defined
  register: result_stat_id_rsa_virt_power

- name: set fact used in ssh_config_no_undercloud.j2 to determine if IdentityFile should be included
  set_fact:
    id_rsa_virt_power_exists: true
  when: undercloud_ip is not defined and result_stat_id_rsa_virt_power.stat.exists == True

- name: regenerate ssh config, if no undercloud has been launched.
  delegate_to: localhost
  template:
    src: 'ssh_config_no_undercloud.j2'
    dest: '{{ local_working_dir }}/ssh.config.ansible'
    mode: 0644
  when: undercloud_ip is not defined


@@ -6,158 +6,10 @@
  delegate_facts: True
  when: hostvars['localhost'].ansible_user_dir is not defined

- when: inventory == 'all'
  block:
    #required for liberty based deployments
    - name: copy get-overcloud-nodes.py to undercloud
      template:
        src: 'get-overcloud-nodes.py.j2'
        dest: '{{ working_dir }}/get-overcloud-nodes.py'
        mode: 0755

    #required for liberty based deployments
    - name: fetch overcloud node names and IPs
      shell: >
        source {{ working_dir }}/stackrc;
        python {{ working_dir }}/get-overcloud-nodes.py
      register: registered_overcloud_nodes

    - name: list the overcloud nodes
      debug: var=registered_overcloud_nodes.stdout

    - name: fetch the undercloud ssh key
      fetch:
        src: '{{ working_dir }}/.ssh/id_rsa'
        dest: '{{ local_working_dir }}/id_rsa_overcloud'
        flat: yes
        mode: 0400

    # add host to the ansible group formed from its type
    # novacompute nodes are added as compute for backwards compatibility
    - name: add overcloud node to ansible
      with_dict: '{{ registered_overcloud_nodes.stdout | default({}) }}'
      add_host:
        name: '{{ item.key }}'
        groups: "overcloud,{{ item.key | regex_replace('overcloud-(?:nova)?([a-zA-Z0-9_]+)-[0-9]+$', '\\1') }}"
        ansible_host: '{{ item.key }}'
        ansible_fqdn: '{{ item.value }}'
        ansible_user: 'heat-admin'
        ansible_private_key_file: "{{ local_working_dir }}/id_rsa_overcloud"
        ansible_ssh_extra_args: '-F "{{ local_working_dir }}/ssh.config.ansible"'

- when: inventory == 'multinode'
  block:
    - name: Get subnodes
      command: cat /etc/nodepool/sub_nodes_private
      register: nodes

    - name: Add subnode to ansible inventory
      with_indexed_items: '{{ nodes.stdout_lines | default([]) }}'
      add_host:
        name: 'subnode-{{ item.0 + 2 }}'
        groups: "overcloud"
        ansible_host: '{{ item.1 }}'
        ansible_fqdn: '{{ item.1 }}'
        ansible_user: "{{ lookup('env','USER') }}"
        ansible_private_key_file: "/etc/nodepool/id_rsa"

#required for regeneration of ssh.config.ansible
- name: set_fact for undercloud ip
  set_fact: undercloud_ip={{ hostvars['undercloud'].undercloud_ip }}
  when: hostvars['undercloud'] is defined and hostvars['undercloud'].undercloud_ip is defined

# Add the supplemental to the in-memory inventory.
- name: Add supplemental node vm to inventory
  add_host:
    name: supplemental
    groups: supplemental
    ansible_host: supplemental
    ansible_fqdn: supplemental
    ansible_user: '{{ supplemental_user }}'
    ansible_private_key_file: '{{ local_working_dir }}/id_rsa_supplemental'
    ansible_ssh_extra_args: '-F "{{local_working_dir}}/ssh.config.ansible"'
    supplemental_node_ip: "{{ supplemental_node_ip }}"
  when: supplemental_node_ip is defined

- name: set_fact for supplemental ip
  set_fact: supplemental_node_ip={{ hostvars['supplemental'].supplemental_node_ip }}
  when: hostvars['supplemental'] is defined and hostvars['supplemental'].supplemental_node_ip is defined

#readd the undercloud to reset the ansible_ssh parameters set in quickstart
- name: Add undercloud vm to inventory
  add_host:
    name: undercloud
    groups: undercloud
    ansible_host: undercloud
    ansible_fqdn: undercloud
    ansible_user: '{{ undercloud_user }}'
    ansible_private_key_file: '{{ local_working_dir }}/id_rsa_undercloud'
    ansible_ssh_extra_args: '-F "{{ local_working_dir }}/ssh.config.local.ansible"'
    undercloud_ip: "{{ undercloud_ip }}"
  when: hostvars[groups['virthost'][0]].ansible_private_key_file is not defined and undercloud_ip is defined

#required for regeneration of ssh.config.ansible
- name: set undercloud ssh proxy command
  set_fact: undercloud_ssh_proxy_command="ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
    -o ConnectTimeout=60 -i {{ hostvars[groups['virthost'][0]].ansible_private_key_file }}
    {{ ssh_user }}@{{ hostvars[groups['virthost'][0]].ansible_host }}
    -W {{ undercloud_ip }}:22"
  when: hostvars[groups['virthost'][0]].ansible_private_key_file is defined and undercloud_ip is defined

#required for regeneration of ssh.config.ansible
- name: set undercloud ssh proxy command
  set_fact: undercloud_ssh_proxy_command="ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
    -o ConnectTimeout=60 -i {{ hostvars['localhost'].ansible_user_dir }}/.quickstart/id_rsa_virt_power
    {{ ssh_user }}@{{ hostvars['localhost'].ansible_default_ipv4.address }}
    -W {{ undercloud_ip }}:22"
  when: hostvars[groups['virthost'][0]].ansible_private_key_file is not defined and undercloud_ip is defined

- name: set supplemental ssh proxy command
  set_fact: supplemental_ssh_proxy_command="ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
    -o ConnectTimeout=60 -i {{ local_working_dir }}/id_rsa_virt_power
    {{ ssh_user }}@{{ hostvars[groups['virthost'][0]].ansible_host }}
    -W {{ supplemental_node_ip }}:22"
  when: supplemental_node_ip is defined

- name: create inventory from template
  delegate_to: localhost
  template:
    src: 'inventory.j2'
    dest: '{{ local_working_dir }}/hosts'

- name: regenerate ssh config
  delegate_to: localhost
  template:
    src: 'ssh_config.j2'
    dest: '{{ local_working_dir }}/ssh.config.ansible'
    mode: 0644
  when: undercloud_ip is defined

- name: regenerate ssh config for ssh connections from the virthost
  delegate_to: localhost
  template:
    src: 'ssh_config_localhost.j2'
    dest: '{{ local_working_dir }}/ssh.config.local.ansible'
    mode: 0644
  when: undercloud_ip is defined

# just setup the ssh.config.ansible and hosts file for the virthost
- name: check for existence of identity key
  delegate_to: localhost
  stat: path="{{ local_working_dir }}/id_rsa_virt_power"
  when: undercloud_ip is not defined
  register: result_stat_id_rsa_virt_power

- name: set fact used in ssh_config_no_undercloud.j2 to determine if IdentityFile should be included
  set_fact:
    id_rsa_virt_power_exists: true
  when: undercloud_ip is not defined and result_stat_id_rsa_virt_power.stat.exists == True

- name: regenerate ssh config, if no undercloud has been launched.
  delegate_to: localhost
  template:
    src: 'ssh_config_no_undercloud.j2'
    dest: '{{ local_working_dir }}/ssh.config.ansible'
    mode: 0644
  when: undercloud_ip is not defined

- include: inventory.yml
  when: inventory in ['multinode', 'all', 'undercloud']
  static: no

- include: openstack.yml
  when: inventory == 'openstack'
  static: no


@@ -0,0 +1,98 @@
---
- name: Cloud access credentials and URL
  assert:
    that:
      - os_username is defined
      - os_password is defined
      - os_tenant_name is defined
      - os_auth_url is defined
      - cloud_name is defined

- name: copy clouds.yaml file
  template:
    src: clouds.yaml.j2
    dest: "{{ local_working_dir }}/clouds.yaml"
    mode: 0400

- name: fetch all nodes from openstack shade dynamic inventory
  command: shade-inventory --list
  args:
    chdir: '{{ local_working_dir }}'
  register: registered_nodes_output
  no_log: true

- name: set_fact for filtered openstack inventory nodes by types
  set_fact:
    registered_overcloud_nodes: "{{ (registered_nodes_output.stdout | from_json) | json_query(jq_overcloud) }}"
    registered_undercloud_nodes: "{{ (registered_nodes_output.stdout | from_json) | json_query(jq_undercloud) }}"
  vars:
    jq_overcloud: "[] | [?contains(name, 'overcloud')]"
    jq_undercloud: "[] | [?contains(name, 'undercloud')]"

- name: set_fact for all filtered openstack inventory nodes by cluster id
  when: clusterid is defined
  set_fact:
    registered_nodes: "{{ (registered_nodes_output.stdout | from_json) | json_query(jq_cluster) }}"
  vars:
    jq_cluster: "[?metadata.clusterid] | [?metadata.clusterid=='{{ clusterid }}']"

- name: set_fact for all filtered openstack inventory nodes
  when: not clusterid is defined
  set_fact:
    registered_nodes: "{{ registered_overcloud_nodes|union(registered_undercloud_nodes) }}"

- name: Add overcloud nodes to inventory, accessed via undercloud/bastion
  with_items: "{{ registered_overcloud_nodes|intersect(registered_nodes) }}"
  add_host:
    name: '{{ item.name }}'
    groups: >-
      {{ item.metadata.group|default('overcloud') }},
      {{ item.name|regex_replace('overcloud-(?:nova)?([a-zA-Z0-9_]+)-[0-9]+$', '\\1') }}
    ansible_host: '{{ item.name }}'
    ansible_fqdn: '{{ item.name }}'
    ansible_user: "{{ overcloud_user | default('heat-admin') }}"
    ansible_private_key_file: "{{ overcloud_key }}"
    ansible_ssh_extra_args: "-F {{ local_working_dir }}/ssh.config.ansible"
    private_v4: >-
      {% set node = registered_nodes | json_query("[?name=='" + item.name + "']") -%}
      {{ node[0].addresses[openstack_private_network_name|quote][0].addr }}

- name: Add undercloud node to inventory, accessed via floating IP
  with_items: "{{ registered_undercloud_nodes|intersect(registered_nodes) }}"
  add_host:
    name: undercloud
    groups: "bastion,{{ item.metadata.group|default('undercloud') }}"
    ansible_host: '{{ item.public_v4 }}'
    ansible_fqdn: undercloud
    ansible_user: '{{ undercloud_user }}'
    ansible_private_key_file: '{{ undercloud_key }}'
    undercloud_ip: >-
      {% set node = registered_nodes | json_query("[?name=='" + item.name + "']") -%}
      {{ node[0].addresses[openstack_private_network_name|quote][0].addr }}

- name: Bastion accessing requirements
  assert:
    that:
      - hostvars['undercloud'] is defined
      - hostvars['undercloud'].ansible_host is defined

- name: set ssh proxy command prefix for accessing nodes via bastion
  set_fact:
    ssh_proxy_command: ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
      -o ConnectTimeout=60 -i {{ hostvars['undercloud'].ansible_private_key_file }}
      {{ ssh_user }}@{{ hostvars['undercloud'].ansible_host }}

- name: create inventory from template
  delegate_to: localhost
  template:
    src: 'inventory.j2'
    dest: '{{ local_working_dir }}/hosts'
  run_once: true

- name: regenerate ssh config
  delegate_to: localhost
  template:
    src: 'openstack_ssh_config.j2'
    dest: '{{ local_working_dir }}/ssh.config.ansible'
    mode: 0644
  run_once: true


@@ -0,0 +1,8 @@
clouds:
  {{ cloud_name }}:
    auth:
      username: {{ os_username }}
      password: {{ os_password }}
      project_name: {{ os_tenant_name }}
      auth_url: {{ os_auth_url }}
    region_name: regionOne


@@ -1,3 +1,4 @@
# BEGIN Autogenerated hosts
{% for host in groups['all'] %}
{% if hostvars[host].get('ansible_connection', '') == 'local' %}
{{ host }} ansible_connection=local
@@ -5,16 +6,24 @@
{{ host }}{% if 'ansible_host' in hostvars[host]
%} ansible_host={{ hostvars[host]['ansible_host'] }}{% endif %}
{% if 'private_v4' in hostvars[host]
%} private_v4={{ hostvars[host]['private_v4'] }}{% endif %}
{% if 'public_v4' in hostvars[host]
%} public_v4={{ hostvars[host]['public_v4'] }}{% endif %}
{% if 'ansible_user' in hostvars[host]
%} ansible_user={{ hostvars[host]['ansible_user'] }}{% endif %}
{% if 'ansible_private_key_file' in hostvars[host]
%} ansible_private_key_file={{ hostvars[host]['ansible_private_key_file'] }}{% endif %}
{% if 'undercloud_ip' in hostvars[host]
%} undercloud_ip={{ hostvars[host]['undercloud_ip'] }}{% endif %}
{% if 'ansible_ssh_extra_args' in hostvars[host]
%} ansible_ssh_extra_args={{ hostvars[host]['ansible_ssh_extra_args']|quote }}{% endif %}
{% endif %}
{% endfor %}
# END autogenerated hosts
# BEGIN Autogenerated groups
{% for group in groups %}
{% if group not in ['ungrouped', 'all'] %}
[{{ group }}]
@@ -24,3 +33,4 @@
{% endif %}
{% endfor %}
# END Autogenerated groups


@@ -0,0 +1,43 @@
Host *
    IdentitiesOnly yes

{% if hostvars[groups['virthost'][0]].ansible_host is defined %}
Host virthost
    Hostname {{ hostvars[groups['virthost'][0]].ansible_host }}
    IdentityFile {{ hostvars[groups['virthost'][0]].ansible_private_key_file }}
    User {{ ssh_user }}
    StrictHostKeyChecking no
    UserKnownHostsFile=/dev/null
{% else %}
Host virthost
    Hostname {{ hostvars['localhost'].ansible_default_ipv4.address }}
    IdentityFile {{ local_working_dir }}/id_rsa_virt_power
    User {{ ssh_user }}
    StrictHostKeyChecking no
    UserKnownHostsFile=/dev/null
{% endif %}

Host undercloud
    Hostname {{ hostvars['undercloud'].ansible_host }}
    IdentityFile {{ hostvars['undercloud'].ansible_private_key_file }}
    User {{ undercloud_user }}
    StrictHostKeyChecking no
    UserKnownHostsFile=/dev/null

{% if groups["overcloud"] is defined %}
{% for host in groups["overcloud"] %}
Host {{ host }}
    Hostname {{ hostvars[host].ansible_host }}
    ProxyCommand {{ ssh_proxy_command }} -W {{ hostvars[host].private_v4 }}:22
    IdentityFile {{ hostvars[host].ansible_private_key_file }}
    User {{ overcloud_user | default ('heat-admin') }}
    StrictHostKeyChecking no
    UserKnownHostsFile=/dev/null
{% endfor %}
{% endif %}


@@ -58,7 +58,7 @@ Host supplemental
Host {{ host }}
    ProxyCommand ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o ConnectTimeout=60 -F {{ local_working_dir }}/ssh.config.ansible undercloud -W {{ hostvars[host].ansible_fqdn }}:22
    IdentityFile {{ hostvars[host].ansible_private_key_file }}
    User {{ overcloud_user | default ('heat-admin') }}
    StrictHostKeyChecking no
    UserKnownHostsFile=/dev/null


@@ -20,7 +20,7 @@ Host {{ host }}
    Hostname {{ hostvars[host].ansible_fqdn }}
    ProxyCommand ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o ConnectTimeout=60 -F {{ hostvars['localhost'].ansible_user_dir }}/.quickstart/ssh.config.local.ansible undercloud -W {{ hostvars[host].ansible_fqdn }}:22
    IdentityFile {{ hostvars['localhost'].ansible_user_dir }}/.quickstart/id_rsa_overcloud
    User {{ overcloud_user | default('heat-admin') }}
    StrictHostKeyChecking no
    UserKnownHostsFile=/dev/null