Improve the documentation for skydive

The documentation has been expanded, which should make the deployment
of skydive easier to understand. Additionally, an overlay inventory
and examples have been provided showing how skydive can be deployed
as part of an OSA cloud without having to extend or otherwise modify
the openstack-ansible inventory/configuration.

Change-Id: Iccc0d23fa8a6047d1bcae53614c38e5ea0945f82
Signed-off-by: Kevin Carter <kevin@cloudnull.com>
Kevin Carter 2019-01-14 21:23:08 -06:00
parent b23ec9f8d9
commit 37f24f9e7b
9 changed files with 215 additions and 60 deletions


@@ -1,74 +1,92 @@
# Skydive Ansible deployment
These playbooks and roles will deploy skydive, a network topology and
protocols analyzer.
Official documentation for skydive can be found here:
http://skydive.network/documentation/deployment#ansible
----
### Overview
## Overview
The playbooks provide a lot of optionality. All of the available options are
within the role `defaults` or `vars` directories and commented as necessary.
The playbooks and roles contained within this repository will build or GET
skydive depending on how the inventory is set up. If build services are
specified, skydive will be built from source using the provided checkout
(default HEAD). Once the build process is complete, all skydive-created
binaries will be fetched and deployed to the target agent and analyzer
hosts.
Skydive requires a persistent storage solution to store data about the
environment and to run captures. These playbooks require access to an
existing Elasticsearch cluster. The variable `skydive_elasticsearch_uri`
must be set in a variable file, or on the CLI at the time of deployment.
*If this option is undefined the playbooks will not run*.
A user password for skydive and the cluster must be defined. This option can
be set in a variable file or on the CLI. If this option is undefined the
playbooks will not run.
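For example, both required values could be supplied in a variable file passed
with `-e @<file>`; the file name and values below are placeholders:

``` yaml
# vars-skydive.yml (placeholder file name)
skydive_elasticsearch_uri: "https://127.0.0.1:9200"  # placeholder Elasticsearch URI
skydive_password: "secrete"                          # placeholder skydive/cluster password
```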
Once the playbooks have been executed, the UI and API can be accessed via a
web browser or CLI on port `8082` on the nodes running the **Analyzer**.
### Balancing storage traffic
Storage traffic is balanced on each analyzer node using a reverse proxy/load
balancer application named [Traefik](https://docs.traefik.io). This system
provides a lightweight, API-driven load balancer. All storage traffic will be
sent through Traefik to various servers within the backend. This provides
access to a highly available cluster of Elasticsearch nodes as needed.
### Deploying binaries or building from source
This deployment solution provides the ability to install skydive from source
or from pre-constructed binaries. The build process is also available for the
Traefik load balancer.
The in-cluster build process is triggered by simply having designated build
nodes within the inventory. If `skydive_build_nodes` or `traefik_build_nodes`
is defined in the inventory, the build process for the selected solution will
be triggered. Regardless of installation preference, the installation process
is the same. The playbooks will `fetch` the binaries and then ship them out to
the designated nodes within the inventory. A complete inventory example can be
seen in the **inventory** directory.
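A minimal sketch of the build-node trigger follows; the group names are the
ones the playbooks check for, while the host name is a placeholder:

``` yaml
skydive_build_nodes:
  hosts:
    build1: {}  # placeholder host; the presence of this group triggers the skydive source build
traefik_build_nodes:
  hosts:
    build1: {}  # placeholder host; triggers the traefik source build
```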
### Deployment Execution
#### Deploying | Installing with embedded Ansible
If this is being executed on a system that already has Ansible installed but is
incompatible with these playbooks, the script `bootstrap-embedded-ansible.sh`
can be sourced to grab an embedded version of Ansible prior to executing the
playbooks.
``` shell
source bootstrap-embedded-ansible.sh
```
#### Deploying | Manually resolving the dependencies
This playbook has external role dependencies. If Ansible is not installed with
the `bootstrap-embedded-ansible.sh` script these dependencies can be resolved
with the ``ansible-galaxy`` command and the ``ansible-role-requirements.yml``
file.
``` shell
ansible-galaxy install -r ansible-role-requirements.yml
```
Once the dependencies are installed, make sure to set the action plugin path
to the location of the `config_template` action directory. This can be done
using the environment variable `ANSIBLE_ACTION_PLUGINS` or through the use of
an `ansible.cfg` file.
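A sketch of the environment-variable approach; the role path below is an
assumption, so adjust it to wherever `ansible-galaxy` installed the role on
your system:

``` shell
# Point Ansible at the config_template action plugin directory. The role path
# is an assumption -- adjust to your ansible-galaxy roles path.
export ANSIBLE_ACTION_PLUGINS="${HOME}/.ansible/roles/config_template/action"
```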
#### Deploying | The environment natively
The following example will use a local inventory, and set the required options
on the CLI to run a deployment.
``` shell
ansible-playbook -i inventory/inventory.yml \
@@ -81,10 +99,81 @@ Tags are available for every playbook, use the `--list-tags`
switch to see all available tags.
#### Deploying | The environment within OSA
While it is possible to integrate skydive into an OSA cloud using environment
extensions and `openstack_user_config.yml` additions, the deployment of this
system is possible through the use of an inventory overlay.
> The example overlay inventory file `inventory/osa-integration-inventory.yml`
assumes elasticsearch is already deployed and is located on the baremetal
machine(s) within the log_hosts group. If this is not the case, adjust the
overlay inventory for your environment.
> The provided overlay inventory example makes the assumption that skydive
leverages the same Elasticsearch cluster used by the logging deployment.
``` shell
# Source the embedded ansible
source bootstrap-embedded-ansible.sh
# Run the skydive deployment. NOTE: this uses multiple inventory sources.
ansible-playbook -i /opt/openstack-ansible/inventory/dynamic_inventory.py \
-i /opt/openstack-ansible/ops/skydive/inventory/osa-integration-inventory.yml \
-e @/etc/openstack_deploy/user_secrets.yml \
site.yml
# Disable the embedded ansible
deactivate
# If using haproxy, run the haproxy playbook using the multiple inventory sources.
cd /opt/openstack-ansible/playbooks
openstack-ansible -i /opt/openstack-ansible/inventory/dynamic_inventory.py \
-i /opt/openstack-ansible/ops/skydive/inventory/osa-integration-inventory.yml \
haproxy-install.yml
```
##### Configuration | Haproxy
The example overlay inventory contains a section for general haproxy
configuration which exposes the skydive UI internally.
> If the deployment has `haproxy_extra_services` already defined, the following
extra haproxy configuration will need to be appended to the existing
user-defined variable.
``` yaml
- service:
haproxy_service_name: skydive_analyzer
haproxy_backend_nodes: "{{ groups['skydive_analyzers'] | default([]) }}"
haproxy_bind: "{{ [internal_lb_vip_address] }}"
haproxy_port: 8082
haproxy_balance_type: http
haproxy_ssl: true
haproxy_backend_options:
- "httpchk HEAD / HTTP/1.0\\r\\nUser-agent:\\ osa-haproxy-healthcheck"
- service:
haproxy_service_name: traefik
haproxy_backend_nodes: "{{ groups['skydive_analyzers'] | default([]) }}"
haproxy_bind: "{{ [internal_lb_vip_address] }}"
haproxy_port: 8090
haproxy_balance_type: http
haproxy_ssl: true
haproxy_backend_options:
- "httpchk HEAD / HTTP/1.0\\r\\nUser-agent:\\ osa-haproxy-healthcheck"
```
This config will provide access to the web UI for both **skydive** and
**traefik**.
* Skydive runs on port `8082`
* Traefik runs on port `8090`
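For illustration, given a hypothetical internal VIP of `172.29.236.100`, the
two endpoints exposed by the haproxy configuration above work out to:

``` python
# Hypothetical internal VIP; substitute your deployment's internal_lb_vip_address.
internal_lb_vip_address = "172.29.236.100"

# Ports from the haproxy_extra_services entries above.
ui_ports = {"skydive_analyzer": 8082, "traefik": 8090}

# Both services are fronted with SSL (haproxy_ssl: true), hence https.
urls = {
    name: f"https://{internal_lb_vip_address}:{port}"
    for name, port in ui_ports.items()
}
for name, url in urls.items():
    print(name, url)
```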
### Validating the skydive installation
Post deployment, the skydive installation can be validated by simply running
the `validateSkydive.yml` playbook.
----


@@ -0,0 +1,60 @@
---
all_systems:
vars: {}
children:
systems:
vars:
ansible_become: yes
ansible_become_user: "root"
ansible_user: "root"
skydive_password: "{{ haproxy_stats_password }}"
skydive_elasticsearch_servers: "{{ groups['elastic-logstash'] | map('extract', hostvars, ['ansible_host']) | list | join(',') }}"
skydive_bind_address: "{{ container_address | default(ansible_host) }}"
haproxy_extra_services:
- service:
haproxy_service_name: skydive_analyzer
haproxy_backend_nodes: "{{ groups['skydive_analyzers'] | default([]) }}"
haproxy_bind: "{{ [internal_lb_vip_address] }}"
haproxy_port: 8082
haproxy_balance_type: http
haproxy_ssl: true
haproxy_backend_options:
- "httpchk HEAD / HTTP/1.0\\r\\nUser-agent:\\ osa-haproxy-healthcheck"
- service:
haproxy_service_name: traefik
haproxy_backend_nodes: "{{ groups['skydive_analyzers'] | default([]) }}"
haproxy_bind: "{{ [internal_lb_vip_address] }}"
haproxy_port: 8090
haproxy_balance_type: http
haproxy_ssl: true
haproxy_backend_options:
- "httpchk HEAD / HTTP/1.0\\r\\nUser-agent:\\ osa-haproxy-healthcheck"
haproxy_backend_httpcheck_options:
- expect rstatus 200|401
children:
traefik_all:
children:
traefik_build_nodes: {}
skydive_all:
children:
skydive_build_nodes: {}
skydive_agents:
children:
hosts: {} # This is an osa native group, as such nothing needs to be added. Values will be inherited.
skydive_analyzers:
children:
utility_all: {} # This is an osa native group, as such nothing needs to be added. Values will be inherited.
elk_all:
children:
elastic-logstash:
children:
log_hosts: {} # This is an osa native group, as such nothing needs to be added. Values will be inherited.
kibana:
children:
log_hosts: {} # This is an osa native group, as such nothing needs to be added. Values will be inherited.


@@ -13,4 +13,4 @@
# See the License for the specific language governing permissions and
# limitations under the License.
skydive_agent_service_state: started
skydive_agent_service_state: restarted


@@ -13,7 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
skydive_analyzer_service_state: started
skydive_analyzer_service_state: restarted
skydive_username: skydive


@@ -38,6 +38,7 @@ galaxy_info:
dependencies:
- role: traefik_common
traefik_basic_auth_users: "{{ _skydive_basic_auth_users | combine(skydive_basic_auth_users) }}"
traefik_dashboard_bind: "{{ skydive_bind_address | default(hostvars[inventory_hostname]['ansible_' ~ (skydive_network_device | replace('-', '_') | string)]['ipv4']['address']) }}"
traefik_dashboard_enabled: true
traefik_destinations:
elasticsearch:


@@ -25,6 +25,11 @@ skydive_flow_protocol: udp
# Set a particular network interface to be used for skydive traffic
skydive_network_device: "{{ ansible_default_ipv4['interface'] }}"
# The skydive bind address can also be used to set the specific bind address of
# a given node running the skydive analyzer. By default this variable is undefined
# so the bind address is determined using the `skydive_network_device`.
# skydive_bind_address: 10.0.0.2
# The skydive elasticsearch URI(s) is required.
# Set the elasticsearch URI(s); the system will attempt to connect to the URI.
# If this URI is unreachable the deployment will fail. If there is more than


@@ -49,7 +49,7 @@ http:
analyzer:
# address and port for the analyzer API, Format: addr:port.
# Default addr is 127.0.0.1
listen: {{ hostvars[inventory_hostname]["ansible_" ~ (skydive_network_device | replace('-', '_') | string)]['ipv4']['address'] ~ ':' ~ skydive_analyzer_port }}
listen: {{ (skydive_bind_address | default(hostvars[inventory_hostname]["ansible_" ~ (skydive_network_device | replace('-', '_') | string)]['ipv4']['address'])) ~ ':' ~ skydive_analyzer_port }}
auth:
# auth section for API request
@@ -166,7 +166,7 @@ analyzer:
{% set analyzers = [] %}
{% for node in groups['skydive_analyzers'] %}
{% set _ansible_interface_name = hostvars[node]['skydive_network_device'] | default(hostvars[node]['ansible_default_ipv4']['interface']) | replace('-', '_') %}
{% set _ = analyzers.append(hostvars[node]["ansible_" ~ _ansible_interface_name]['ipv4']['address'] ~ ':' ~ skydive_analyzer_port) %}
{% set _ = analyzers.append((hostvars[node]['skydive_bind_address'] | default(hostvars[node]["ansible_" ~ _ansible_interface_name]['ipv4']['address'])) ~ ':' ~ skydive_analyzer_port) %}
{% endfor %}
analyzers: {{ analyzers | to_json }}
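The template loop above builds one `addr:port` entry per analyzer node,
preferring an explicit `skydive_bind_address` over the interface-derived
address. A minimal Python sketch of the same logic, using made-up hostvars
data:

``` python
# Made-up stand-ins for the Ansible facts the template consumes.
skydive_analyzer_port = 8082
groups = {"skydive_analyzers": ["analyzer1", "analyzer2"]}
hostvars = {
    "analyzer1": {"skydive_bind_address": "10.0.0.11"},
    "analyzer2": {
        "ansible_default_ipv4": {"interface": "eth0"},
        "ansible_eth0": {"ipv4": {"address": "10.0.0.12"}},
    },
}

analyzers = []
for node in groups["skydive_analyzers"]:
    hv = hostvars[node]
    if "skydive_bind_address" in hv:
        # Explicit bind address wins, as in the updated template.
        address = hv["skydive_bind_address"]
    else:
        # Fall back to the address of the configured (or default) interface.
        iface = hv.get("skydive_network_device",
                       hv["ansible_default_ipv4"]["interface"]).replace("-", "_")
        address = hv["ansible_" + iface]["ipv4"]["address"]
    analyzers.append(f"{address}:{skydive_analyzer_port}")

print(analyzers)  # one addr:port entry per analyzer node
```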
@@ -174,7 +174,7 @@ analyzers: {{ analyzers | to_json }}
agent:
# address and port for the agent API, Format: addr:port.
# Default addr is 127.0.0.1
listen: {{ hostvars[inventory_hostname]["ansible_" ~ (skydive_network_device | replace('-', '_') | string)]['ipv4']['address'] ~ ':' ~ skydive_agent_port }}
listen: {{ (skydive_bind_address | default(hostvars[inventory_hostname]["ansible_" ~ (skydive_network_device | replace('-', '_') | string)]['ipv4']['address'])) ~ ':' ~ skydive_agent_port }}
auth:
# auth section for API request
@@ -421,7 +421,7 @@ etcd:
{% set peers = {} %}
{% for node in groups['skydive_analyzers'] %}
{% set _ansible_interface_name = hostvars[node]['skydive_network_device'] | default(hostvars[node]['ansible_default_ipv4']['interface']) | replace('-', '_') %}
{% set _ = peers.__setitem__(inventory_hostname, 'http://' ~ hostvars[node]["ansible_" ~ _ansible_interface_name]['ipv4']['address'] ~ ':' ~ skydive_etcd_port) %}
{% set _ = peers.__setitem__(inventory_hostname, 'http://' ~ (hostvars[node]['skydive_bind_address'] | default(hostvars[node]["ansible_" ~ _ansible_interface_name]['ipv4']['address'])) ~ ':' ~ skydive_etcd_port) %}
{% endfor %}
peers: {{ skydive_etcd_peers | default(peers) | to_json }}


@@ -76,7 +76,7 @@
systemd_services:
- service_name: "traefik"
execstarts:
- /usr/local/bin/traefik --file.directory="/etc/traefik"
- "/usr/local/bin/traefik --file.directory=/etc/traefik"
- name: Force handlers
meta: flush_handlers


@@ -19,7 +19,7 @@
skydive_username: skydive
skydive_analyzer_port: 8082
skydive_network_device: "{{ ansible_default_ipv4['interface'] | replace('-', '_') }}"
skydive_analyzer_uri: "{{ hostvars[inventory_hostname]['ansible_' ~ skydive_network_device]['ipv4']['address'] ~ ':' ~ skydive_analyzer_port }}"
skydive_analyzer_uri: "{{ (skydive_bind_address | default(hostvars[inventory_hostname]['ansible_' ~ skydive_network_device]['ipv4']['address'])) ~ ':' ~ skydive_analyzer_port }}"
tasks:
- name: Check API login
uri: