Improved architecture section in documentation

Change-Id: I71527cf3831a02f2ce7abbc38c53b5c5d69f99da
Stéphane Albert 2014-09-02 16:12:39 +02:00
parent 59c3d9bd1a
commit 176aef1beb
3 changed files with 175 additions and 18 deletions


@ -5,20 +5,23 @@ CloudKitty's Architecture
CloudKitty can be split into four big parts:
* API
* Collector
* Rating processing
* Writing pipeline
.. graphviz:: graph/arch.dot
Module loading and extensions
=============================
Nearly every part of CloudKitty makes use of stevedore to load extensions
dynamically.
Every rating module is loaded at runtime and can be enabled/disabled directly
via CloudKitty's API. The module is responsible for its own API to ease the
management of its configuration.
Collectors and writers are loaded with stevedore but configured in CloudKitty's
configuration file.
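For example, a collector driver could be loaded through stevedore along the
following lines. This is only a sketch: the entry point namespace and driver
name used below are assumptions for illustration, not CloudKitty's exact
values.

.. code-block:: python

    from stevedore import driver

    # Illustrative only: the namespace and name are assumed here; the real
    # values come from CloudKitty's setup entry points and configuration.
    manager = driver.DriverManager(
        namespace='cloudkitty.collector.backends',
        name='ceilometer',
        invoke_on_load=True)
    collector = manager.driver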
@ -27,21 +30,115 @@ configuration file.
Collector
=========
**Loaded with stevedore**

The name of the collector to use is specified in the configuration; only one
collector can be loaded at a time.

This part is responsible for information gathering. It consists of a Python
class that loads data from a backend and returns it in a format that CloudKitty
can handle.
The data format of CloudKitty is the following:

.. code-block:: json

    {
        "myservice": [
            {
                "billing": {
                    "price": 0.1
                },
                "desc": {
                    "sugar": "25",
                    "fiber": "10",
                    "name": "apples"
                },
                "vol": {
                    "qty": 1,
                    "unit": "banana"
                }
            }
        ]
    }
Example code of a basic collector:

.. code-block:: python

    class MyCollector(BaseCollector):
        def __init__(self, **kwargs):
            super(MyCollector, self).__init__(**kwargs)

        def get_mydata(self, start, end=None, project_id=None, q_filter=None):
            # Query your backend for the "mydata" service between start and
            # end, then return it in the CloudKitty data format shown above.
            return ck_data
You'll now be able to add the gathering of mydata in CloudKitty by modifying
the configuration and specifying the new service in collect/services.
Rating
======
**Loaded with stevedore**
This is where all the rating calculations are done. The data gathered by the
collector is pushed into a pipeline of rating processors. Every processor does
its calculations and updates the data.
Example of a minimal rating module (taken from the Noop module):

.. code-block:: python

    class NoopController(billing.BillingController):

        module_name = 'noop'

        def get_module_info(self):
            module = Noop()
            infos = {
                'name': self.module_name,
                'description': 'Dummy test module.',
                'enabled': module.enabled,
                'hot_config': False,
            }
            return infos


    class Noop(billing.BillingProcessorBase):

        controller = NoopController

        def __init__(self):
            pass

        @property
        def enabled(self):
            """Check if the module is enabled

            :returns: bool if module is enabled
            """
            return True

        def reload_config(self):
            pass

        def process(self, data):
            for cur_data in data:
                cur_usage = cur_data['usage']
                for service in cur_usage:
                    for entry in cur_usage[service]:
                        if 'billing' not in entry:
                            entry['billing'] = {'price': 0}
            return data
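To make the pipeline behaviour concrete, here is a rough sketch of how the
loaded processors could be chained; this loop is illustrative only and does not
reproduce CloudKitty's actual orchestrator code.

.. code-block:: python

    # Illustrative only: every enabled rating processor receives the data,
    # updates it, and hands the result to the next processor in the pipeline.
    def run_rating_pipeline(processors, data):
        for processor in processors:
            if processor.enabled:
                data = processor.process(data)
        return data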
Writer
======
**Loaded with stevedore**

In the same way as the rating pipeline, the writing is handled with a pipeline.
The data is first pushed to the write orchestrator, which stores it in a
transient DB (in case of output file invalidation), and then to every writer in
the pipeline, each of which is responsible for the actual writing.
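As an illustration, a minimal writer could look like the sketch below. The
``BaseReportWriter`` base class and the ``append()``/``commit()`` hooks are
assumptions made for the example, not CloudKitty's exact writer interface; the
data is expected in the format shown in the Collector section.

.. code-block:: python

    # Hypothetical sketch: BaseReportWriter, append() and commit() are
    # assumed names used for illustration only.
    class CSVWriter(BaseReportWriter):
        def __init__(self, **kwargs):
            super(CSVWriter, self).__init__(**kwargs)
            self._lines = []

        def append(self, data):
            # Buffer the rated data pushed by the write orchestrator.
            for service, entries in data.items():
                for entry in entries:
                    self._lines.append((service,
                                        entry['vol']['qty'],
                                        entry['billing']['price']))

        def commit(self):
            # Flush the buffered lines to the report file.
            with open('report.csv', 'a') as report:
                for service, qty, price in self._lines:
                    report.write('%s,%s,%s\n' % (service, qty, price))
            self._lines = []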


@ -28,6 +28,7 @@ sys.path.insert(0, os.path.abspath('../..'))
# ones.
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.graphviz',
'sphinx.ext.intersphinx',
'sphinx.ext.viewcode',
'wsmeext.sphinxext',

doc/source/graph/arch.dot Normal file

@ -0,0 +1,59 @@
digraph "CloudKitty's Architecture" {
    // Graph parameters
    label="CloudKitty's Internal Architecture";
    node [shape=box];
    compound=true;

    // API
    api [label="API"];

    // Orchestrator
    subgraph cluster_3 {
        label="Orchestrator";
        node[shape=none, width=1.3, height=0, label=""];
        {rank=same; o1 -> o2 -> o3 [style=invis];}
    }

    // Collector
    ceilometer [label="Ceilometer"];
    vendor [label="Vendor specific", style=dotted];
    subgraph cluster_0 {
        label="Collector";
        style=dashed;
        ceilometer -> vendor [style=invis];
    }

    // Rating
    hashmap [label="HashMap module"];
    r_others [label="Other modules...", style=dotted];
    subgraph cluster_1 {
        label="Rating engines";
        style=dashed;
        hashmap -> r_others [style=invis];
    }

    // Write Orchestrator
    w_orchestrator [label="Write Orchestrator"];
    tdb [label="Transient DB"];

    // Writers
    osrf [label="OpenStack\nReference Format\n(json)"];
    w_others [label="Other modules...", style=dotted];
    subgraph cluster_2 {
        label="Writers";
        style=dashed;
        osrf -> w_others [style=invis];
    }

    // Relations
    api -> hashmap;
    api -> r_others;
    o1 -> ceilometer [dir=both, ltail=cluster_3, lhead=cluster_0];
    o2 -> hashmap [dir=both, ltail=cluster_3, lhead=cluster_1];
    o3 -> w_orchestrator [ltail=cluster_3];
    w_orchestrator -> osrf [constraint=false];
    w_orchestrator -> w_others [style=dotted, constraint=false];
    w_orchestrator -> tdb;
}