Testing, installation fixes

This commit is contained in:
Ales Komarek 2016-01-25 23:46:39 +01:00
parent 4d69a637ab
commit b0fddb2d92
16 changed files with 340 additions and 381 deletions

View File

@ -22,19 +22,22 @@ criteria are met:
* Steps to reproduce the problem if possible.
Tags
````
~~~~
If it's a bug that needs fixing in a branch in addition to master, add a
'\<release\>-backport-potential' tag (e.g. ``kilo-backport-potential``).
There are predefined tags that will auto-complete.
Status
``````
~~~~~~
Please leave the **status** of an issue alone until someone confirms it or
a member of the bugs team triages it. While waiting for the issue to be
confirmed or triaged the status should remain as **New**.
Importance
``````````
~~~~~~~~~~
Should only be touched if it is a Blocker/Gating issue. If it is, please
set to **High**, and only use **Critical** if you have found a bug that
can take down whole infrastructures. Once the importance has been changed
@ -42,7 +45,8 @@ the status should be changed to *Triaged* by someone other than the bug
creator.
Triaging bugs
`````````````
~~~~~~~~~~~~~
Reported bugs need prioritization, confirmation, and shouldn't go stale.
If you care about OpenStack stability but don't want to actively
develop the roles and playbooks used within the "openstack-salt"
@ -135,32 +139,32 @@ using the YAML dictionary format.
Example YAML dictionary format:
.. code-block:: yaml
    - name: The name of the tasks
      module_name:
        thing1: "some-stuff"
        thing2: "some-other-stuff"
      tags:
        - some-tag
        - some-other-tag
Example of what **NOT** to do:
.. code-block:: yaml
    - name: The name of the tasks
      module_name: thing1="some-stuff" thing2="some-other-stuff"
      tags: some-tag
.. code-block:: yaml
    - name: The name of the tasks
      module_name: >
        thing1="some-stuff"
        thing2="some-other-stuff"
      tags: some-tag
Usage of the ">" and "|" operators should be limited to Salt conditionals

View File

@ -3,11 +3,13 @@
Chapter 3. Extending
=====================
.. toctree::
extending-formulas.rst
extending-contribute.rst
--------------
.. include:: navigation.txt

View File

@ -1,5 +1,6 @@
Development documentation
=============================
=========================
In this section, you will find documentation relevant to developing
openstack-salt.
@ -15,14 +16,6 @@ Quick start
quickstart.rst
Testing
^^^^^^^
.. toctree::
:maxdepth: 2
testing.rst
Extending
^^^^^^^^^
@ -33,10 +26,18 @@ Extending
extending.rst
Testing
^^^^^^^
.. toctree::
:maxdepth: 2
testing.rst
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

View File

@ -16,6 +16,56 @@ It is possible to run full size proof-of-concept deployment on OpenStack with `H
.. _Heat template: https://github.com/tcpcloud/heat-templates
List of available stacks
------------------------
.. list-table::
:stub-columns: 1
* - salt_single_public
- Base stack which deploys network and single-node Salt master
* - openstack_cluster_public
- Deploy OpenStack cluster with OpenContrail, requires
``salt_single_public``
Heat client setup
-----------------
First you need to clone heat templates from our `Github repository
<https://github.com/tcpcloud/heat-templates>`_.
.. code-block:: bash
git clone https://github.com/tcpcloud/heat-templates.git
To be able to create a Python environment and install compatible OpenStack
clients, you need to install build tools first, e.g. on Ubuntu:
.. code-block:: bash
apt-get install python-dev python-pip python-virtualenv build-essential
Now create and activate the virtualenv `venv-heat` so you can install specific
versions of the OpenStack clients into a completely isolated Python environment.
.. code-block:: bash
virtualenv venv-heat
source ./venv-heat/bin/activate
To install tested versions of the clients for OpenStack Juno and Kilo into the
activated environment, use the `requirements.txt` file in the repository cloned
earlier:
.. code-block:: bash
pip install -r requirements.txt
If everything goes right, you should be able to use openstack clients, `heat`,
`nova`, etc.
Environment setup
-----------------
@ -40,33 +90,74 @@ Install requirements:
pip install -r requirements.txt
List of available stacks
-------------------------
.. list-table::
:stub-columns: 1
* - salt_single_public
- Base stack which deploys network and single-node Salt master
* - openstack_cluster_public
- Deploy OpenStack cluster with OpenContrail, requires
``salt_single_public``
Launching the Heat stack
------------------------
#. Set up the environment file, e.g. ``env/salt_single_public.env``; look at the example file first
#. Source credentials and the required environment variables. You can download the openrc file from the Horizon dashboard.
First source the openrc credentials so you can use the OpenStack clients. You can
download the openrc file from the OpenStack dashboard and source it, or execute the
following commands with your credentials filled in:
.. code-block:: bash
source my_tenant-openrc.sh
export OS_AUTH_URL=https://<openstack_endpoint>:5000/v2.0
export OS_USERNAME=<username>
export OS_PASSWORD=<password>
export OS_TENANT_NAME=<tenant>
#. Deploy the actual stack
Now you need to customize the env files for the stacks; see the examples in the env
directory and set the required parameters.
.. code-block:: bash
``env/salt_single_public.env``:
.. code-block:: yaml
./create_stack.sh salt_single_public
parameters:
# Following parameters are required to deploy workshop lab environment
# Public net id can be found in Horizon or by running `nova net-list`
public_net_id: f82ffadb-cd7b-4931-a2c1-f865c61edef2
# Public part of your SSH key
key_name: my-key
key_value: ssh-rsa xyz
# Instance image to use, we recommend to grab latest tcp cloud image here:
# http://apt.tcpcloud.eu/images/
# Lookup for image by running `nova image-list`
instance_image: ubuntu-14-04-x64-1437486976
``env/openstack_cluster_public.env``:
.. code-block:: yaml
parameters:
# Following parameters are required to deploy workshop lab environment
# Net id can be found in Horizon or by running `nova net-list`
public_net_id: f82ffadb-cd7b-4931-a2c1-f865c61edef2
private_net_id: 90699bd2-b10e-4596-99c6-197ac3fb565a
# Your SSH key, deployed by salt_single_public stack
key_name: my-key
# Instance image to use, we recommend to grab latest tcp cloud image here:
# http://apt.tcpcloud.eu/images/
# Lookup for image by running `nova image-list`
instance_image: ubuntu-14-04-x64-1437486976
To see all available parameters, see the template yaml files in the `templates` directory.
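For a quick look at what a given stack expects, you can inspect the parameters section of its template directly (``<template_file>`` below is a placeholder for whichever template you picked):
.. code-block:: bash
ls templates/
grep -A 4 'parameters:' templates/<template_file>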
Finally you can deploy the common stack with the Salt master, SSH key and private network.
.. code-block:: bash
./create_stack.sh salt_single_public
If everything goes right, the stack should be ready in a few minutes. You can verify this by running the following commands:
.. code-block:: bash
heat stack-list
nova list
You should also be able to log in as root to the public IP provided by the ``nova list`` command.
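For example, with the floating IP from ``nova list`` substituted for the placeholder:
.. code-block:: bash
ssh root@<public_ip>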
Now you can deploy openstack cluster:
.. code-block:: bash
./create_stack.sh openstack_cluster_public
When the cluster is deployed, you should be able to log in to the instances from the Salt master node by forwarding your SSH agent.
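A sketch of that workflow, with placeholder addresses (the key deployed by the stack must be loaded in your local SSH agent, and the instance host name is illustrative):
.. code-block:: bash
ssh -A root@<salt_master_public_ip>
# from the Salt master, hop to an instance on the private network
ssh root@ctl01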

View File

@ -1,272 +0,0 @@
OpenStack over OpenStack Heat deployment
===========================================
This procedure enables launching salt-openstack inside an existing OpenStack deployment as a Heat template.
Heat stacks
~~~~~~~~~~~~~~~~~~~~
Lab setup consists of multiple Heat stacks.
.. list-table::
:stub-columns: 1
* - salt_single_public
- Base stack which deploys network and single-node Salt master
* - openstack_cluster_public
- Deploy OpenStack cluster with OpenContrail, requires
``salt_single_public``
* - openvstorage_cluster_private
- Deploy Open vStorage infrastructure on top of
``openstack_cluster_public``
The naming convention is the following:
::
<name>_<cluster|single>_<public|private>
* `name` is short identifier describing main purpose of given stack
* `cluster` or `single` identifies topology (multi node vs. single node setup)
* `public` or `private` identifies network access. Public sets security group
and assigns floating IP so provided services are visible from outside world.
For smallest clustered setup, we are going to use `salt_single_public` and
`openstack_cluster_public` stacks.
Heat client
~~~~~~~~~~~~~~~~~~~~
First you need to clone heat templates from our `Github repository
<https://github.com/tcpcloud/heat-templates>`_.
.. code-block:: bash
git clone https://github.com/tcpcloud/heat-templates.git
To be able to create Python environment and install compatible OpenStack
clients, you need to install build tools first. Eg. on Ubuntu:
.. code-block:: bash
apt-get install python-dev python-pip python-virtualenv build-essential
Now create and activate virtualenv `venv-heat` so you can install specific
versions of OpenStack clients into completely isolated Python environment.
.. code-block:: bash
virtualenv venv-heat
source ./venv-heat/bin/activate
To install tested versions of clients for OpenStack Juno and Kilo into
activated environment, use `requirements.txt` file in repository cloned
earlier:
.. code-block:: bash
pip install -r requirements.txt
If everything goes right, you should be able to use openstack clients, `heat`,
`nova`, etc.
Stack deployment
~~~~~~~~~~~~~~~~~~~~
First source openrc credentials so you can use openstack clients. You can
download openrc file from Openstack dashboard and source it or execute
following commands with filled credentials:
.. code-block:: bash
export OS_AUTH_URL=https://<openstack_endpoint>:5000/v2.0
export OS_USERNAME=<username>
export OS_PASSWORD=<password>
export OS_TENANT_NAME=<tenant>
Now you need to customize env files for stacks, see examples in env directory
and set required parameters.
``env/salt_single_public.env``:
.. code-block:: yaml
parameters:
# Following parameters are required to deploy workshop lab environment
# Public net id can be found in Horizon or by running `nova net-list`
public_net_id: f82ffadb-cd7b-4931-a2c1-f865c61edef2
# Public part of your SSH key
key_name: my-key
key_value: ssh-rsa xyz
# Instance image to use, we recommend to grab latest tcp cloud image here:
# http://apt.tcpcloud.eu/images/
# Lookup for image by running `nova image-list`
instance_image: ubuntu-14-04-x64-1437486976
``env/openstack_cluster_public.env``:
.. code-block:: yaml
parameters:
# Following parameters are required to deploy workshop lab environment
# Net id can be found in Horizon or by running `nova net-list`
public_net_id: f82ffadb-cd7b-4931-a2c1-f865c61edef2
private_net_id: 90699bd2-b10e-4596-99c6-197ac3fb565a
# Your SSH key, deployed by salt_single_public stack
key_name: my-key
# Instance image to use, we recommend to grab latest tcp cloud image here:
# http://apt.tcpcloud.eu/images/
# Lookup for image by running `nova image-list`
instance_image: ubuntu-14-04-x64-1437486976
To see all available parameters, see template yaml files in `templates`
directory.
Finally you can deploy common stack with Salt master, SSH key and private network.
.. code-block:: bash
./create_stack.sh salt_single_public
If everything goes right, stack should be ready in a few minutes. You can
verify by running following commands:
.. code-block:: bash
heat stack-list
nova list
You should be also able to log in as root to public IP provided by ``nova
list`` command.
Now you can deploy openstack cluster:
.. code-block:: bash
./create_stack.sh openstack_cluster_public
When cluster is deployed, you should be able to log in to the instances
from Salt master node by forwarding your SSH agent.
Deploy Salt master
~~~~~~~~~~~~~~~~~~~~
Login to cfg01 node and run highstate to ensure everything is set up
correctly.
.. code-block:: bash
salt-call state.highstate
Then you should be able to see all Salt minions.
.. code-block:: bash
salt '*' grains.get ipv4
Deploy control nodes
~~~~~~~~~~~~~~~~~~~~
First execute basic states on all nodes to ensure Salt minion, system and
OpenSSH are set up.
.. code-block:: bash
salt '*' state.sls linux,salt,openssh,ntp
Next you can deploy basic services:
* keepalived - this service will set up virtual IP on controllers
* rabbitmq
* GlusterFS server service
.. code-block:: bash
salt 'ctl*' state.sls keepalived,rabbitmq,glusterfs.server.service
Now you can deploy Galera MySQL and GlusterFS cluster node by node.
.. code-block:: bash
salt 'ctl01*' state.sls glusterfs.server,galera
salt 'ctl02*' state.sls glusterfs.server,galera
salt 'ctl03*' state.sls glusterfs.server,galera
Next you need to ensure that GlusterFS is mounted. Permission errors are OK at
this point, because some users and groups do not exist yet.
.. code-block:: bash
salt 'ctl*' state.sls glusterfs.client
Finally you can execute highstate to deploy remaining services. Again, run
this node by node.
.. code-block:: bash
salt 'ctl01*' state.highstate
salt 'ctl02*' state.highstate
salt 'ctl03*' state.highstate
Verification
^^^^^^^^^^^^^^^^
Everything should be up and running now. You should execute a few checks
before continuing.
Execute following checks on one or all control nodes.
Check GlusterFS status:
.. code-block:: bash
gluster peer status
gluster volume status
Check Galera status (execute on one of the controllers):
.. code-block:: bash
mysql -pworkshop -e'SHOW STATUS;'
Check OpenContrail status:
.. code-block:: bash
contrail-status
Check OpenStack services:
.. code-block:: bash
nova-manage service list
cinder-manage service list
Source keystone credentials and try Nova API:
.. code-block:: bash
source keystonerc
nova list
Deploy compute nodes
~~~~~~~~~~~~~~~~~~~~~
Simply run highstate (better twice):
.. code-block:: bash
salt 'cmp*' state.highstate
Dashboard and support infrastructure
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Web and metering nodes can be deployed by running highstate:
.. code-block:: bash
salt 'web*' state.highstate
salt 'mtr*' state.highstate
On the monitoring node, you need to set up git first:
.. code-block:: bash
salt 'mon*' state.sls git
salt 'mon*' state.highstate
--------------
.. include:: navigation.txt

View File

@ -6,9 +6,9 @@ Chapter 1. Quick start
.. toctree::
quickstart-vagrant.rst
quickstart-heat.rst
quickstart-ooo-heat.rst
quickstart-vagrant.rst
--------------

View File

@ -1,35 +1,32 @@
Salt Formula coding style
=============================
https://github.com/johanek/salt-lint.git
Testing coding style
====================
Formulas are pre-written Salt States. They are as open-ended as Salt States themselves and can be used for tasks such as installing a package, configuring, and starting a service, setting up users or permissions, and many other common tasks. They have certain rules that need to be adhered to.
Using double quotes with no variables
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-------------------------------------
In general, it's a bad idea. All strings which do not contain dynamic content (variables) should use single quotes instead of double quotes.
Line length above 80 characters
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-------------------------------
The 'standard code width limit' exists for historical reasons: the `IBM punch card <http://en.wikipedia.org/wiki/Punched_card>`_ had exactly 80 columns.
Found single line declaration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Single line declarations
------------------------
Avoid extending your code by adding single-line declarations. Splitting declarations across lines makes your code much cleaner and easier to parse / grep when searching for those declarations.
No newline at the end of the file
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
---------------------------------
Each line should be terminated with a newline character, including the last one. Some programs have problems processing the last line of a file if it isn't newline terminated. See this `Stack Overflow thread <http://stackoverflow.com/questions/729692/why-should-files-end-with-a-newline>`_ for details.
Trailing whitespace character found
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Trailing whitespace characters
------------------------------
Trailing whitespace takes up more space than necessary, and regexp-based searches (for example, patterns anchored with ``$``) won't match lines that carry trailing whitespace.

View File

@ -1,6 +1,6 @@
Integration testing
=====================
===================
There are requirements, in addition to Salt's requirements, which need to be installed in order to run the test suite. Install them as shown below.
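As a rough, hypothetical illustration only, installing test requirements from a pip requirements file usually looks like the following (use the requirements file actually shipped with the repository):
.. code-block:: bash
# file name is hypothetical; substitute the repository's own requirements file
pip install -r test-requirements.txt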

View File

@ -1,18 +1,28 @@
Metadata validation
===================
Metadata testing
================
Pillars are tree-like structures of data defined on the Salt Master and passed through to the minions. They allow confidential, targeted data to be securely sent only to the relevant minion. Pillar is therefore one of the most important systems when using Salt.
Testing scenarios
-----------------
Our testing plan is to test each state with the example pillar:
The testing plan tests each formula with example pillars covering all possible deployment setups:
#. Run ``state.show_sls`` to ensure that it parses properly and to get some debugging output,
#. Run ``state.sls`` to apply the state,
#. Run ``state.sls`` again, capturing the output and asserting that ``^Not Run:`` is not present in it, because if it is, the state cannot detect by itself whether it has to be run and is therefore not idempotent.
The first test run covers the ``state.show_sls`` call to ensure that the formula parses properly, with debug output.
The second test covers ``state.sls`` to apply the state definition, then runs ``state.sls`` again, capturing the output and asserting that ``^Not Run:`` is not present in it; if it is, the state cannot detect by itself whether it has to be run and is therefore not idempotent.
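A minimal command-line sketch of those two passes, assuming a hypothetical formula state named ``service``:
.. code-block:: bash
# first pass: make sure the state renders, with debug output
salt-call state.show_sls service -l debug
# second pass: apply the state, then re-apply and check for non-idempotent states
salt-call state.sls service
salt-call state.sls service | grep '^Not Run:' && echo 'state is not idempotent'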
metadata.yml
~~~~~~~~~~~~
.. code-block:: yaml
name: "service"
version: "0.2"
source: "https://github.com/tcpcloud/salt-formula-service"
--------------

View File

@ -58,7 +58,6 @@ And set the content to the following to setup reclass as salt-master metadata so
vim /etc/reclass/reclass-config.yml
.. code-block:: yaml
storage_type: yaml_fs

View File

@ -2,6 +2,30 @@
Orchestrate infrastructure services
===================================
First execute basic states on all nodes to ensure Salt minion, system and
OpenSSH are set up.
.. code-block:: bash
salt '*' state.sls linux,salt,openssh,ntp
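Once this completes, a quick sanity check that every minion is responding (``test.ping`` is a standard Salt function):
.. code-block:: bash
salt '*' test.ping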
Support infrastructure deployment
---------------------------------
Metering node is deployed by running highstate:
.. code-block:: bash
salt 'mtr*' state.highstate
On the monitoring node, git needs to be set up first:
.. code-block:: bash
salt 'mon*' state.sls git
salt 'mon*' state.highstate
--------------

View File

@ -2,6 +2,69 @@
Orchestrate OpenStack services
================================
Control nodes deployment
-------------------------
Next you can deploy basic services:
* keepalived - this service will set up virtual IP on controllers
* rabbitmq
* GlusterFS server service
.. code-block:: bash
salt 'ctl*' state.sls keepalived,rabbitmq,glusterfs.server.service
Now you can deploy Galera MySQL and GlusterFS cluster node by node.
.. code-block:: bash
salt 'ctl01*' state.sls glusterfs.server,galera
salt 'ctl02*' state.sls glusterfs.server,galera
salt 'ctl03*' state.sls glusterfs.server,galera
Next you need to ensure that GlusterFS is mounted. Permission errors are OK at
this point, because some users and groups do not exist yet.
.. code-block:: bash
salt 'ctl*' state.sls glusterfs.client
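To confirm that the volumes are really mounted on all controllers, a simple check with ``cmd.run`` (output will vary by environment):
.. code-block:: bash
salt 'ctl*' cmd.run 'mount | grep glusterfs'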
Finally you can execute highstate to deploy remaining services. Again, run
this node by node.
.. code-block:: bash
salt 'ctl01*' state.highstate
salt 'ctl02*' state.highstate
salt 'ctl03*' state.highstate
Compute nodes deployment
~~~~~~~~~~~~~~~~~~~~~~~~
Simply run highstate (ideally twice):
.. code-block:: bash
salt 'cmp*' state.highstate
Dashboard and support infrastructure
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Web and metering nodes can be deployed by running highstate:
.. code-block:: bash
salt 'web*' state.highstate
salt 'mtr*' state.highstate
On the monitoring node, git needs to be set up first:
.. code-block:: bash
salt 'mon*' state.sls git
salt 'mon*' state.highstate
--------------

View File

@ -2,6 +2,41 @@
Validate OpenStack services
================================
Everything should be up and running now. You should execute a few checks
before continuing. Execute the following checks on one or all control nodes.
Check GlusterFS status:
.. code-block:: bash
gluster peer status
gluster volume status
Check Galera status (execute on one of the controllers):
.. code-block:: bash
mysql -p<PWD> -e'SHOW STATUS;'
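The full status output is long; the value that matters most is ``wsrep_cluster_size``, which should equal the number of controllers. A focused query with the same password placeholder:
.. code-block:: bash
mysql -p<PWD> -e"SHOW STATUS LIKE 'wsrep_cluster_size';"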
Check OpenContrail status:
.. code-block:: bash
contrail-status
Check OpenStack services:
.. code-block:: bash
nova-manage service list
cinder-manage service list
Source keystone credentials and try Nova API:
.. code-block:: bash
source keystonerc
nova list
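The other OpenStack APIs can be exercised the same way, assuming the respective clients are installed on the node; for example:
.. code-block:: bash
glance image-list
neutron net-list
cinder list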
--------------

View File

@ -21,11 +21,6 @@ Remote execution principles carry over all aspects of Salt platform. Command are
- **Target** - Matching the minion ID with globbing, regular expressions, Grains matching, Node groups; compound matching is possible
- **Function** - Commands have the form ``module.function``; arguments are YAML formatted, and compound commands are possible
Try test run to reach minion
.. code-block:: bash
salt '*' test.version
Targeting minions
~~~~~~~~~~~~~~~~~~
@ -34,31 +29,40 @@ Examples of different kinds of targetting minions
.. code-block:: bash
salt '*' test.version
salt -E '.*' apache.signal restart
salt -G 'os:Fedora' test.version
salt '*' cmd.exec_code python 'import sys; print sys.version'
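Compound matching, mentioned above, combines several target types in a single expression; a brief sketch (the grain value and minion pattern are illustrative):
.. code-block:: bash
salt -C 'G@os:Ubuntu and ctl*' test.version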
SaltStack commands
~~~~~~~~~~~~~~~~~~
Minion inner facts (grains)
.. code-block:: bash
salt-call grains.items
Minion external parameters (pillar)
.. code-block:: bash
salt-call pillar.data
Run the full configuration catalog
.. code-block:: bash
salt-call state.highstate
Run one given service from the catalog
.. code-block:: bash
salt-call state.sls servicename
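Both calls accept the standard ``test=True`` argument for a dry run that only reports what would change:
.. code-block:: bash
salt-call state.sls servicename test=True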
--------------
.. include:: navigation.txt

View File

@ -5,6 +5,8 @@ Security issues
Encrypted communication
-----------------------
System permissions
------------------

View File

@ -2,11 +2,7 @@
Server Topology
==================
Production setup role description
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
High availability is key idea for production environments. Therefore Reference Architecture and all components are introduced only in HA mode. This provides replicated servers to prevent single points of failure.
High availability is the default environment setup. Reference architecture covers only the HA deployment. HA provides replicated servers to prevent single points of failure. Single node deployments are supported for development environments in Vagrant and Heat.
Production setup consists of several roles of physical nodes:
@ -14,55 +10,58 @@ Production setup consists from several roles of physical nodes:
* KVM Control cluster
* Compute nodes
Server role description
-----------------------
Virtual Machine nodes:
SaltMaster node
^^^^^^^^^^^^^^^^
~~~~~~~~~~~~~~~
TCP Master Node contains supporting installation components for deploying OpenStack cloud as Salt Master, git repositories, package repository, etc. TCP Master Node is virtual machine.
SaltMaster node contains supporting installation components for deploying OpenStack cloud as Salt Master, git repositories, package repository, etc.
OpenStack Controller node
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
OpenStack controller nodes
~~~~~~~~~~~~~~~~~~~~~~~~~~
Controller is a fail-over cluster for hosting OpenStack core cloud components (Nova, Neutron, Cinder, Glance), OpenContrail control roles and a multi-master database for all OpenStack services.
OpenContrail Config node
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
OpenContrail controller nodes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
OpenContrail controller is a fail-over cluster for hosting OpenContrail Config, Neutron, Control and other services like Cassandra, Zookeeper, Redis, HAProxy and Keepalived, fully operated in High Availability.
OpenContrail Analytics node
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
OpenContrail analytics node
~~~~~~~~~~~~~~~~~~~~~~~~~~~
OpenContrail Analytics node is fail-over cluster for OpenContrail analytics.
Database node
^^^^^^^^^^^^^^^^
~~~~~~~~~~~~~
MySQL Galera nodes contain multi-master database for all OpenStack and Monitoring services.
Telemetry node
^^^^^^^^^^^^^^^^^^
~~~~~~~~~~~~~~
Ceilometer node is separated from central controllers for better performance, maintenance and upgrades. MongoDB cluster is used for storing telemetry data.
Proxy node
^^^^^^^^^^^^^^
~~~~~~~~~~~~~~
This node proxies all OpenStack APIs and Dashboards.
Monitoring node
^^^^^^^^^^^^^^^^^^^^
~~~~~~~~~~~~~~~~~~~~
This node contains modules for TCP Monitoring, which include Sensu open source monitoring framework, RabbitMQ and KEDB.
Billometer node
^^^^^^^^^^^^^^^^^^^
~~~~~~~~~~~~~~~~~~~
This node contains modules for TCP Billing, which include Horizon dashboard.
Metering node
^^^^^^^^^^^^^^^^^
~~~~~~~~~~~~~~~~~
This node contains Graphite, which is a highly scalable real-time graphing system. It includes Graphite's processing backend, Carbon, and its fixed-size database, Whisper.
@ -71,7 +70,7 @@ This node contains Graphite, which is a highly scalable real-time graphing syste
:align: center
Reference Architecture
~~~~~~~~~~~~~~~~~~~~~~~~~~
--------------------------
.. figure:: figures/server_topology.jpg
:width: 100%
@ -123,7 +122,7 @@ Reclass model for:
All hosts are deployed in `workshop.cloudlab.cz` domain.
Instructions for reclass modification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
------------------------------------------
- Fork this repository
- Make customizations according to your environment: