README cleanup and additional notes about limits

The readme has had a bit of clean-up and lines have been reflowed.

The templates have been updated to ensure they work with limits, and
notes about using limits have been added to the README.

Change-Id: I8d6f5684e02ba63e93b6993be228c7416a911ef7
Signed-off-by: Kevin Carter <kevin@cloudnull.com>
Kevin Carter, 2019-01-15 09:07:16 -06:00
commit 6c553e1495 (parent 37f24f9e7b)
3 changed files with 72 additions and 73 deletions

# Skydive Ansible deployment
These playbooks and roles will deploy skydive, a network topology and protocols
analyzer.
Official documentation for skydive can be found [here](http://skydive.network/documentation/deployment#ansible).
----
## Overview
The playbooks are highly configurable. All of the available options are within
the role `defaults` or `vars` directories and commented as necessary.
The playbooks and roles contained within this repository will build or GET
skydive depending on how the inventory is set up. If build services are
specified, skydive will be built from source using the provided checkout
(default **HEAD**). Once the build process is complete, all skydive created
binaries will be fetched and deployed to the target agent and analyzer hosts.
Skydive requires a persistent storage solution to store data about the
environment and to run captures. These playbooks require access to an existing
Elasticsearch cluster. The variable `skydive_elasticsearch_uri` must be set in a
variable file, or on the CLI at the time of deployment. *If this option is
undefined the playbooks will not run*.
A user password for `skydive` and the cluster must be defined. The option
`skydive_password` can be set in a variable file or on the CLI. *If this
option is undefined the playbooks will not run.*
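For example, both required variables could be supplied together in a variable
file (the values below are placeholders, not real endpoints or secrets):

``` yaml
# user_variables.yml (hypothetical values)
skydive_elasticsearch_uri: 127.0.0.1:9200
skydive_password: secrete
```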
Once the playbooks have been executed, the UI and API can be accessed via a web
browser or CLI on port `8082` on the nodes running the **Analyzer**.
### Balancing storage traffic
Storage traffic is balanced on each analyzer node using a reverse proxy/load
balancer application named [Traefik](https://docs.traefik.io). This system
provides a hyper-lightweight, API-enabled load balancer. All storage traffic
will be sent through **Traefik** to various servers within the backend. This
provides access to a highly available cluster of Elasticsearch nodes as needed.
### Deploying binaries or building from source
This deployment solution provides the ability to install **skydive** from
source or from pre-constructed binaries. The build process is also available
for the **traefik** load balancer.
The cluster build process is triggered by simply having designated build nodes
within the inventory. If `skydive_build_nodes` or `traefik_build_nodes` is
defined in the inventory, the build process for the selected solution will be
triggered. Regardless of installation preference, the installation process is
the same. The playbooks will `fetch` the binaries and then ship them out to the
designated nodes within the inventory. A complete inventory example can be seen
in the `inventory` directory.
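A minimal sketch of an inventory fragment that triggers the in-cluster build
(the group layout and host name here are hypothetical; see the `inventory`
directory for the complete example):

``` yaml
# Hypothetical inventory fragment: defining these groups enables the builds.
skydive_build_nodes:
  hosts:
    build1: {}

traefik_build_nodes:
  hosts:
    build1: {}
```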
#### Deploying | Installing with embedded Ansible
This playbook has external role dependencies. If Ansible is not installed with
the `bootstrap-embedded-ansible.sh` script these dependencies can be resolved
with the `ansible-galaxy` command and the `ansible-role-requirements.yml` file.
``` shell
ansible-galaxy install -r ansible-role-requirements.yml
```
Once the dependencies are set, make sure to set the action plugin path to the
location of the `config_template` action directory. This can be done using the
environment variable `ANSIBLE_ACTION_PLUGINS` or through the use of an
`ansible.cfg` file.
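For instance, the path can be set in an `ansible.cfg` file (the plugin
directory below is an assumption; point it at wherever the `config_template`
role was installed by `ansible-galaxy`):

``` ini
[defaults]
# Hypothetical path to the config_template role's action plugin directory.
action_plugins = /etc/ansible/roles/config_template/action
```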
#### Deploying | The environment natively
The following example will use a local inventory and set the required options
on the CLI to run a deployment.
``` shell
ansible-playbook -i inventory/inventory.yml \
site.yml
```
Tags are available for every playbook, use the `--list-tags` switch to see all
available tags.
> Because configuration for skydive **must** remain in sync, it's recommended
that deployers use tags whenever running isolated playbooks instead of a full
run. This is a limitation due to the way *in memory* facts are set and made
available at run-time. In order to use `--limit` with these playbooks, fact
caching must be enabled.
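As an example, JSON-file fact caching can be enabled through `ansible.cfg`
(the cache path and timeout below are arbitrary illustrative values):

``` ini
[defaults]
# Cache gathered facts so --limit runs can still see facts for all hosts.
gathering = smart
fact_caching = jsonfile
fact_caching_connection = /tmp/ansible_facts
fact_caching_timeout = 86400
```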
#### Deploying | The environment within OSA
While it is possible to integrate skydive into an OSA cloud using environment
extensions and `openstack_user_config.yml` additions, the deployment of this
system is possible through the use of an inventory overlay.
> The example overlay inventory file `inventory/osa-integration-inventory.yml`
assumes elasticsearch is already deployed and is located on the baremetal
machine(s) within the `log_hosts` group. If this is not the case, adjust the
overlay inventory for your environment.
> The provided overlay inventory example also assumes that skydive leverages
the same Elasticsearch cluster.
``` shell
# Source the embedded ansible
source bootstrap-embedded-ansible.sh

openstack-ansible -i /opt/openstack-ansible/inventory/dynamic_inventory.py \
```
The example overlay inventory contains a section for general haproxy
configuration which exposes the skydive UI internally.
> If the deployment has `haproxy_extra_services` already defined, the following
extra haproxy configuration will need to be appended to the existing
user-defined variable.
``` yaml
- service:
```
This config will provide access to the web UI for both **skydive** and
**traefik**.
* **Skydive** runs on port `8082`
* **Traefik** runs on port `8090`
### Validating the skydive installation
Post-deployment, the skydive installation can be validated by simply running
the `validateSkydive.yml` playbook.

----
TODOs:
* [ ] Setup cert-based agent/server auth
* [ ] Add OpenStack integration
* [ ] Document OpenStack integration, what it adds to the admin service

etcd:
# each entry is composed of the peer name and the endpoints for this peer
{% set peers = {} %}
{% for node in groups['skydive_analyzers'] %}
{% if node in ansible_play_hosts %}
{% set _ansible_interface_name = hostvars[node]['skydive_network_device'] | default(hostvars[node]['ansible_default_ipv4']['interface']) | replace('-', '_') %}
{% set _ = peers.__setitem__(inventory_hostname, 'http://' ~ (hostvars[node]['skydive_bind_address'] | default(hostvars[node]["ansible_" ~ _ansible_interface_name]['ipv4']['address'])) ~ ':' ~ skydive_etcd_port) %}
{% endif %}
{% endfor %}
peers: {{ skydive_etcd_peers | default(peers) | to_json }}
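The guard added to the etcd template can be illustrated with a small Python
sketch: only analyzer nodes that are part of the current play contribute a
peer URL. The host names, addresses, and port below are hypothetical, and the
sketch keys the dict by node name for clarity rather than by
`inventory_hostname` as the template does.

``` python
# Sketch of the peer-selection guard: nodes outside the current play
# (e.g. excluded by --limit) contribute no etcd peer entry.
# All names, addresses, and the port are hypothetical examples.
skydive_analyzers = ["analyzer1", "analyzer2", "analyzer3"]
ansible_play_hosts = ["analyzer1", "analyzer3"]  # e.g. a --limit run
bind_addresses = {
    "analyzer1": "10.0.0.11",
    "analyzer2": "10.0.0.12",
    "analyzer3": "10.0.0.13",
}
skydive_etcd_port = 12379

peers = {
    node: "http://%s:%d" % (bind_addresses[node], skydive_etcd_port)
    for node in skydive_analyzers
    if node in ansible_play_hosts
}
print(peers)
```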

skydive_fabric: |-
{% set fabric = [] %}
{% set nodes = [] %}
{% for node in groups['skydive_analyzers'] %}
{% if node in ansible_play_hosts %}
{% set agents_loop = loop %}
{% for interface in (hostvars[node]['ansible_interfaces'] | map('replace', '-','_') | list) %}
{% if interface != 'lo' %}
{% set ansible_interface_name = "ansible_" ~ interface %}
{% set port_entry = "TOR[Name=TOR] -> TOR_PORT" ~ agents_loop.index ~ "[Name=port" ~ agents_loop.index ~ "]" %}
{% if hostvars[node][ansible_interface_name] is defined %}
{% set interface_data = hostvars[node][ansible_interface_name] %}
{% if interface_data['mtu'] is defined %}
{% set port_entry = "TOR[Name=TOR] -> [color=red] TOR_PORT" ~ agents_loop.index ~ "[Name=port" ~ agents_loop.index ~ ",MTU=" ~ interface_data['mtu'] ~ "]" %}
{% endif %}
{% endif %}
{% set _ = fabric.append((port_entry)) %}
{% if not interface in nodes %}
{% set host_entry = "TOR_PORT" ~ agents_loop.index ~ "-> *[Type=host,Name=" ~ hostvars[node]['ansible_hostname'] ~ "/" ~ interface %}
{% set _ = fabric.append((host_entry)) %}
{% endif %}
{% set _ = nodes.append(interface) %}
{% endif %}
{% endfor %}
{% endif %}
{% endfor %}
{{ fabric }}
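The fabric-building loop above can be approximated in plain Python to show
what ends up in `skydive_fabric`: for each analyzer that is in the current
play, every non-loopback interface yields a TOR port entry, tagged with the
MTU when it is known. The host and interface data below are made up for
illustration; the real template reads Ansible facts.

``` python
# Rough Python approximation of the skydive_fabric template logic.
# All host/interface data here is hypothetical, not real Ansible facts.
def build_fabric(analyzers, play_hosts, hostvars):
    fabric, seen = [], []
    for index, node in enumerate(analyzers, start=1):
        if node not in play_hosts:
            continue  # the added guard: skip nodes outside the current play
        for name, data in hostvars[node]["interfaces"].items():
            if name == "lo":
                continue  # loopback interfaces are excluded
            port = "TOR[Name=TOR] -> TOR_PORT%d[Name=port%d]" % (index, index)
            if "mtu" in data:
                port = ("TOR[Name=TOR] -> [color=red] TOR_PORT%d"
                        "[Name=port%d,MTU=%d]" % (index, index, data["mtu"]))
            fabric.append(port)
            if name not in seen:
                fabric.append("TOR_PORT%d-> *[Type=host,Name=%s/%s]"
                              % (index, hostvars[node]["hostname"], name))
            seen.append(name)
    return fabric

hostvars = {
    "analyzer1": {"hostname": "analyzer1",
                  "interfaces": {"lo": {}, "eth0": {"mtu": 1500}}},
    "analyzer2": {"hostname": "analyzer2",
                  "interfaces": {"eth0": {}}},
}
entries = build_fabric(["analyzer1", "analyzer2"], ["analyzer1"], hostvars)
print(entries)
```

Note that only `analyzer1` contributes entries because `analyzer2` is not in
the play, which mirrors the `--limit` behaviour described in the README.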