Correct a bunch of typos in docs and add contributor link

Change-Id: I18769378a3c14ca0771da9c1fbd5d3fb81959e3b
This commit is contained in:
Levi Blackstone 2015-04-03 09:37:31 -05:00
parent d3ad114596
commit c7b7cf7876
3 changed files with 15 additions and 15 deletions

View File

@ -151,7 +151,7 @@ durable = True
max_messages = 100
</pre>
<p>The important part of this configuration is the <code>[event_worker]</code> section. This says we want to use the RabbitMQ data source. The RabbitMQ connectivity information is stored in the <code>[rabbit_broker]</code> section. The name of each rabbitmq queue to consume from is specified in the <code>[consumers]</code> section. For every queue you define there, you will need a <code>[consumer:&lt;queue_name&gt;]</code> section. This last section is where there real magic happens. Beyond defining the exchange, routing_key and durability characteristics, it defines the chain of <code>Yagi Handlers</code> that will run on every notification that gets consumed. </p>
<p>The important part of this configuration is the <code>[event_worker]</code> section. This says we want to use the RabbitMQ data source. The RabbitMQ connectivity information is stored in the <code>[rabbit_broker]</code> section. The name of each RabbitMQ queue to consume from is specified in the <code>[consumers]</code> section. For every queue you define there, you will need a <code>[consumer:&lt;queue_name&gt;]</code> section. This last section is where the real magic happens. Beyond defining the exchange, routing_key and durability characteristics, it defines the chain of <code>Yagi Handlers</code> that will run on every notification that gets consumed.</p>
<p>You can write your own Yagi handlers if you like, but there are a number that ship with StackTach.v3 to do some interesting things. The most important of these is the <a href='https://github.com/stackforge/stacktach-winchester/blob/4875e419a66974e416dbe3b43ed286017bad1ec4/winchester/yagi_handler.py#L18'>winchester.yagi_handler:WinchesterHandler</a>. This handler is your entry point into StackTach.v3 stream processing. But first, we need to convert those messy notifications into events ...</p>
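<p>Putting those sections together, a minimal consumer definition that chains into Winchester might look like the following sketch (the host, credentials and queue name are illustrative, and key names should be checked against your version of yagi):</p>
<pre>
[event_worker]
event_driver = yagi.broker.rabbit.Broker

[rabbit_broker]
host = localhost
port = 5672
user = guest
password = guest
vhost = /

[consumers]
queues = monitor.info

[consumer:monitor.info]
apps = winchester.yagi_handler:WinchesterHandler
exchange = nova
exchange_type = topic
routing_key = monitor.info
durable = True
max_messages = 100
</pre>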
<h3><a id='distill'>Distilling Notifications to Events</a></h3>
<p>Now we have notifications coming into Winchester. But, as we hinted at above, we need to take the larger notification and <i>distill</i> it down into a more manageable event. The stack-distiller module makes this happen. Within StackTach.v3, this is part of <code>winchester.yagi_handler:WinchesterHandler</code>.</p>
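<p>As an illustrative sketch (the trait names and field paths are examples, not a complete definition), a distiller entry for Nova notifications might look like this:</p>
<pre>
- event_type: compute.instance.*
  traits:
    instance_id:
      fields: payload.instance_id
    tenant_id:
      fields: payload.tenant_id
    launched_at:
      type: datetime
      fields: payload.launched_at
</pre>
<p>Each matching notification is reduced to just these traits; the resulting smaller event is what gets stored and matched against your stream criteria.</p>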
@ -269,7 +269,7 @@ max_messages = 100
<td><span class="glyphicon glyphicon-ok-circle"></span></td></tr>
<tr><td>Boolean support</td>
<td><span class="glyphicon glyphicon-ok-circle"></span></td>
<td>Coersed to Integer</td></tr>
<td>Coerced to Integer</td></tr>
<tr><td>Millisecond resolution datetime support</td>
<td><span class="glyphicon glyphicon-remove-circle"></span></td>
<td><span class="glyphicon glyphicon-ok-circle"></span></td></tr>
@ -301,7 +301,7 @@ pipeline_handlers:
notabene: winchester.pipeline_handler:NotabeneHandler
</pre>
<p>The first thing you'll notice is the database connection string. But then you'll notice that the Winchester module needs three other configuration files. The distiller config file we've already covered. The other two require a little more explaination. They define your <a href='glossary.html#trigger'>Triggers</a> and your <a href='glossary.html#pipeline'>Pipelines</a>.</p>
<p>The first thing you'll notice is the database connection string. But then you'll notice that the Winchester module needs three other configuration files. The distiller config file we've already covered. The other two require a little more explanation. They define your <a href='glossary.html#trigger'>Triggers</a> and your <a href='glossary.html#pipeline'>Pipelines</a>.</p>
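<p>To make that concrete, a minimal <code>winchester.yaml</code> might look like the following sketch (the connection string and file paths are illustrative; check the key names against the sample configs shipped with winchester):</p>
<pre>
database:
  url: mysql://winchester:password@localhost/winchester
distiller_config: event_definitions.yaml
trigger_definitions: triggers.yaml
pipeline_config: pipelines.yaml
pipeline_handlers:
  logger: winchester.pipeline_handler:LoggingHandler
  notabene: winchester.pipeline_handler:NotabeneHandler
</pre>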
<div class="panel panel-info">
<div class="panel-heading">
@ -398,7 +398,7 @@ my_expire_pipeline:
<img src='pipeline_processing.gif' class="img-rounded"/>
<p>During pipeline processing each handler is called to process the events in the stream. A handler has three methods: handle_events(), commit() and rollback(). The handle_events() method is called for each handler in the order they're defined. If they all succeed, the commit() method of each handler is called. Otherwise, the rollback() method of each handler is called. No work should be performed in the handle_events() method. The data should be pre-computed and stored, but not actioned until in the commit() method. In the case of errors, the handle_event() method could be called many times. So, to ensure at-most-once functionality, non-reversable operations should be reserved for the commit() call. Things like, publishing new notifications, emitting metrics, sending emails, etc. should be done in commit(). rollback() is a last chance for you to unwind any work you may have performed.<p>
<p>During pipeline processing each handler is called to process the events in the stream. A handler has three methods: handle_events(), commit() and rollback(). The handle_events() method is called for each handler in the order they're defined. If they all succeed, the commit() method of each handler is called. Otherwise, the rollback() method of each handler is called. No real work should be performed in the handle_events() method. The data should be pre-computed and stored, but not acted on until the commit() method is called. In the case of errors, the handle_events() method could be called many times. So, to ensure at-most-once functionality, non-reversible operations should be reserved for the commit() call. Things like publishing new notifications, emitting metrics and sending emails should be done in commit(). rollback() is a last chance for you to unwind any work you may have performed.</p>
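<p>The handler contract above can be sketched in Python. This is an illustrative stand-in, not winchester's actual base class; a real handler would extend the base class shipped with winchester:</p>
<pre>
class MetricEmittingHandler(object):
    """Pre-computes work in handle_events(); defers side effects to commit()."""

    def __init__(self):
        self.pending_metrics = []

    def handle_events(self, events, env):
        # May be called many times on retry, so do no irreversible work here.
        # Just pre-compute and store what commit() will need.
        self.pending_metrics = [
            {"event_type": e["event_type"], "when": e.get("timestamp")}
            for e in events
        ]
        return events  # pass the events along to the next handler

    def commit(self):
        # Called once, only after every handler succeeded: safe to do
        # non-reversible work like emitting metrics or publishing.
        for metric in self.pending_metrics:
            print("metric: %s at %s" % (metric["event_type"], metric["when"]))

    def rollback(self):
        # Last chance to unwind any partial work from handle_events().
        self.pending_metrics = []
</pre>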
<h4>Stream debugging</h4>
@ -515,7 +515,7 @@ winchester.debugging[INFO line: 161] ----------------------------
most popular OpenStack operations.</p>
<h4><a id='usage'>The UsageHandler</a></h4>
<p>The UsageHandler is a pipeline handler for determining the daily usage of every instance within an OpenStack Nova deployment. The usage handler is cells-aware so it can support large deployments.</p>
<p>The useage handler requires a stream per instance per day. It triggers when the <code>compute.instance.exists</code> event is seen. Audit notifications should be <a href='#enabling'>enabled</a> within Nova. See the samples for an example of a usage stream definition.</p>
<p>The usage handler requires a stream per instance per day. It triggers when the <code>compute.instance.exists</code> event is seen. Audit notifications should be <a href='#enabling'>enabled</a> within Nova. See the samples for an example of a usage stream definition.</p>
<p>Once triggered, the usage handler will compare the daily transactional events for every instance against the various .exists records for that instance. If nothing happens to an instance within that 24-hour period, an end-of-day .exists notification is sent from Nova. Nova operations that change the <code>launched_at</code> date for an instance will issue additional .exists records. These include create, delete, resize and rebuild operations. If the transactional events for the instance match the values in the .exists event, a <code>compute.instance.exists.verified</code> notification is created; otherwise, <code>compute.instance.exists.failed</code> and/or <code>compute.instance.exists.warnings</code> notifications are created. When coupled with the NotabeneHandler, these new notifications can be republished to the queue for subsequent processing.</p>
<p>The schema of these new notifications is as follows:</p>
@ -532,7 +532,7 @@ winchester.debugging[INFO line: 161] ----------------------------
'audit_period_beginning': start datetime of audit period
'audit_period_ending': ending datetime of audit period
'launched_at': datetime this instance was launched
'deleted_at': datatime this instance was deleted
'deleted_at': datetime this instance was deleted
'instance_id': instance uuid
'tenant_id': tenant id
'display_name': instance display name
@ -564,13 +564,13 @@ winchester.debugging[INFO line: 161] ----------------------------
'timestamp': datetime this notification was generated at source
'stream_id': stream id
'original_message_id': message_id of .exists event
'error': human readable explaination for verification failure
'error': human readable explanation for verification failure
'error_code': numeric error code (see below)
'payload': {
'audit_period_beginning': start datetime of audit period
'audit_period_ending': ending datetime of audit period
'launched_at': datetime this instance was launched
'deleted_at': datatime this instance was deleted
'deleted_at': datetime this instance was deleted
'instance_id': instance uuid
'tenant_id': tenant id
'display_name': instance display name
@ -647,8 +647,8 @@ winchester.debugging[INFO line: 161] ----------------------------
<h4><a id='notabene'>The NotabeneHandler</a></h4>
<p>The NotabeneHandler will take any new notifications (not events) it finds in the pipeline Environment variable and publish them to the rabbitmq exchange specified. The handler will look ofor a key/value in the pipeline environment (passed into the handler on the handle_events() call).<p>
<p>In your pipeline definition, you can set the configuration for the NotabeneHandler as shown below. Note how the enviroment variable keys are defined by the <code>env_keys</code> value. This can be a list of keys. Any new notifications this handler finds in those variables will get published to the RabbitMQ exchange specified in the rest of the configuration. The <code>queue_name</code> is also critical so we know which topic to publish to. In OpenStack, the routing key is the queue name. The notabene handler does connection pooling to the various queues, so specifying many different servers is not expensive.</p>
<p>The NotabeneHandler will take any new notifications (not events) it finds in the pipeline environment and publish them to the specified RabbitMQ exchange. The handler looks for a key/value in the pipeline environment (passed into the handler on the handle_events() call).</p>
<p>In your pipeline definition, you can set the configuration for the NotabeneHandler as shown below. Note how the environment variable keys are defined by the <code>env_keys</code> value. This can be a list of keys. Any new notifications this handler finds in those variables will get published to the RabbitMQ exchange specified in the rest of the configuration. The <code>queue_name</code> is also critical so we know which topic to publish to. In OpenStack, the routing key is the queue name. The notabene handler does connection pooling to the various queues, so specifying many different servers is not expensive.</p>
<p>Because these environment keys have to be set before the notabene handler is called, it has to be one of the last handlers in the pipeline. The UsageHandler adds new notifications to the <code>usage_notifications</code> key. If the notabene handler is not part of the pipeline, these new notifications are dropped when the pipeline is finished.</p>
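<p>As a sketch (the connection values are illustrative and the parameter names should be checked against the winchester samples), a pipeline definition ending with the notabene handler might look like:</p>
<pre>
usage_pipeline:
  - usage
  - name: notabene
    params:
      host: localhost
      user: guest
      password: guest
      exchange: nova
      exchange_type: topic
      queue_name: monitor.info
      env_keys:
        - usage_notifications
</pre>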

View File

@ -60,7 +60,7 @@
<p>StackTach.v3 is licensed under the Apache 2.0 license</p>
<p>All the source repos for StackTach.v3 (and .v2) are available on <a href='https://github.com/stackforge?query=stacktach'>StackForge</a>. Details on contributing to StackForge projects are available <a href='https://wiki.openstack.org/wiki/How_To_Contribute'>here</a>.</p>
<p>The core developers are available on Freenode IRC in the <code>#stacktach</code> channel.</p>
<p>These docs are available in the Sandbox repo. Patches welcome!</p>
<p>These docs are available in the <a href='https://github.com/stackforge/stacktach-sandbox/tree/master/docs'>Sandbox</a> repo. Patches welcome!</p>
<footer class="footer">
<p>&copy; Dark Secret Software Inc. 2014</p>

View File

@ -78,14 +78,14 @@
<p>You can see the flow of data in the diagram above:</p>
<ol>
<li>OpenStack Nova notifications are simulated by notagen and pumped into RabbitMQ via the notabene library. With the sandbox, there is no need to stand up a full OpenStack deployment.</li>
<li>OpenStack Nova notifications are simulated by notigen and pumped into RabbitMQ via the notabene library. With the sandbox, there is no need to stand up a full OpenStack deployment.</li>
<li>The yagi-event workers consume notifications from RabbitMQ, archive them via shoebox, distill them into events and stuff them into streams via winchester.</li>
<li>The pipeline-worker workers look for ready streams and perform pipeline processing on them.</li>
<li>The user can use the klugman cmdline tool to talk to the REST API to perform stream and event operations/queries.</li>
</ol>
<p>In order to do all this, there are a number of configation files required. Refer to the general documentation here or the particular libaries README file for configuration details. Of course, the names could be anything, these are just the onces we've settled on. The main configuration files include:</p>
<p>In order to do all this, there are a number of configuration files required. Refer to the general documentation here or to the particular library's README file for configuration details. Of course, the names could be anything; these are just the ones we've settled on. The main configuration files include:</p>
<ul>
<li><b>yagi.conf</b> - the configuration file that tells yagi how to connect to the quque and what to do with the notifications consumed.</li>
<li><b>yagi.conf</b> - the configuration file that tells yagi how to connect to the queue and what to do with the notifications consumed.</li>
<li><b>winchester.yaml</b> - the master configuration file for winchester. It specifies the pipeline configuration to use, the stream definitions and the triggering rules.</li>
<li><b>triggers.yaml</b> - the detailed stream definitions and pipeline triggering rules</li>
<li><b>pipelines.yaml</b> - the pipeline handler definitions</li>
@ -96,7 +96,7 @@
<img src='sandbox-2.gif' class="img-rounded"/>
<p>This will get you going for a minimal StackTach.v3 installation. It's especially handy for dev environments as well as a way of just playing around. For low-volume/non-mission critical evironments, this would be appropriate. Read up on the "build" command, below, for instructions on launching the sandbox environment. For larger deployments, you may want to look at how we deploy StackTach.v3 within Rackspace Public Cloud, below:</p>
<p>This will get you going for a minimal StackTach.v3 installation. It's especially handy for dev environments as well as a way of just playing around. For low-volume/non-mission critical environments, this would be appropriate. Read up on the "build" command, below, for instructions on launching the sandbox environment. For larger deployments, you may want to look at how we deploy StackTach.v3 within Rackspace Public Cloud, below:</p>
<h3>How StackTach.v3 is deployed at Rackspace</h3>
<p>For Rackspace Public Cloud, OpenStack is deployed in many different regions and each region is comprised of many cells.</p>