Nailgun Extensions
Overview of extensions
Nailgun extensions provide a capability for Fuel developers to extend Fuel features. Extensions were introduced to provide a pythonic way to integrate with external services, extend existing features, or add new features without changing the Nailgun source code.
A Nailgun extension can execute its methods on specific events, such as on_node_create or on_cluster_delete (more about event handlers in the Available events section), and can also change deployment and provisioning data just before it is sent to the orchestrator, by means of Data Pipeline classes.
Note
The extensions mechanism does not provide a sufficient level of isolation. Therefore, the extension may not work after you upgrade Fuel.
By contrast, Fuel plugins provide backward compatibility and a friendly UI for the end user. Use plugins for all changes in the system performed by the Fuel user.
Required properties
All Nailgun extensions must populate the following class variables:
name
- a string which will be used inside Nailgun to identify the extension. It should consist only of lowercase letters, digits, and the "_" (underscore) separator.

version
- a version string. It should follow semantic versioning: http://semver.org/

description
- a short text which briefly describes the actions that the extension performs.
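As a minimal sketch of these required properties (BaseExtension is stubbed out here so the snippet is self-contained; a real extension subclasses nailgun.extensions.BaseExtension):

```python
# BaseExtension here is a stand-in for nailgun.extensions.BaseExtension so
# the sketch is self-contained; a real extension imports the real base class.
class BaseExtension(object):
    pass


class ExampleExtension(BaseExtension):
    # name: lowercase letters, digits, and "_" only
    name = 'example_extension'
    # version: semantic versioning, per http://semver.org/
    version = '1.0.0'
    # description: short text describing what the extension does
    description = 'Demonstrates the required class variables.'
```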
Available events
An extension can execute event handlers on specific events. The available handlers are:
Method | Event
---|---
on_node_create | Node has been created
on_node_update | Node has been updated
on_node_reset | Node has been reset
on_node_delete | Node has been deleted
on_node_collection_delete | Collection of nodes has been deleted
on_cluster_delete | Cluster has been deleted
on_before_deployment_check | Called right before running the "before deployment check" task
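As a sketch of how such a handler behaves (both classes below are stand-ins, not real Nailgun classes; Nailgun itself invokes the handlers, which we simulate here):

```python
class FakeNode(object):
    """Stand-in for a Nailgun node object; only the id attribute is used."""
    def __init__(self, node_id):
        self.id = node_id


class AuditExtension(object):
    """Sketch of an extension that records node events (base class omitted)."""
    seen_events = []

    @classmethod
    def on_node_create(cls, node):
        cls.seen_events.append(('created', node.id))

    @classmethod
    def on_node_delete(cls, node):
        cls.seen_events.append(('deleted', node.id))


# Nailgun would call these handlers itself; here we simulate the calls.
AuditExtension.on_node_create(FakeNode(1))
AuditExtension.on_node_delete(FakeNode(1))
```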
REST API handlers
Nailgun extensions also provide a way to add additional API endpoints. To add an extension-specific handler, subclass:
nailgun.api.v1.handlers.base.BaseHandler
The second step is to register the handler by providing the urls list in the extension class:
```python
urls = [
    {'uri': r'/example_extension/(?P<node_id>\d+)/?$',
     'handler': ExampleNodeHandler},
    {'uri': r'/example_extension/(?P<node_id>\d+)/properties/',
     'handler': NodeDefaultsDisksHandler},
]
```
As you can see, you need to provide a list of dicts with the following keys:
key | value
---|---
uri | a regular expression (string) for the URL path
handler | the handler class
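Since the uri value is an ordinary regular expression, it can be exercised directly with the re module. A sketch (BaseHandler is stubbed; the real base class is nailgun.api.v1.handlers.base.BaseHandler, and ExampleNodeHandler is hypothetical):

```python
import re


class BaseHandler(object):
    """Stand-in for nailgun.api.v1.handlers.base.BaseHandler."""


class ExampleNodeHandler(BaseHandler):
    """Hypothetical handler for /example_extension/<node_id>/."""


urls = [
    {'uri': r'/example_extension/(?P<node_id>\d+)/?$',
     'handler': ExampleNodeHandler},
]

# The named group in the uri captures the node id from the URL path.
match = re.match(urls[0]['uri'], '/example_extension/42/')
```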
Database interaction
You can use the Nailgun database to store data needed by a Nailgun extension. To do so, you must provide alembic migration scripts, which should be placed in:

extension_module/alembic_migrations/migrations/

where extension_module is the module that contains the file with your extension class.
You can also change this directory by overriding the classmethod:

alembic_migrations_path

It should return an absolute path (string) to the alembic migrations directory.

Additionally, use a table name with an extension-specific prefix in model classes and alembic migration scripts. We recommend that you use the table_prefix extension method to retrieve the prefix (string).
Note
Do not make direct DB calls to Nailgun core tables in the extension class. Use the nailgun.objects module, which ensures compatibility between the Nailgun DB and the configuration implemented in your extension. There must be no relations between extension models and core models.
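A sketch of overriding the migrations path and building a prefixed table name (BaseExtension and its table_prefix method are stand-ins here; Nailgun's real BaseExtension provides table_prefix, and its exact prefix format may differ):

```python
import os


# Stand-in for nailgun.extensions.BaseExtension; the prefix format is an
# assumption for illustration only.
class BaseExtension(object):
    name = None

    @classmethod
    def table_prefix(cls):
        return '{0}_'.format(cls.name)


class ExampleExtension(BaseExtension):
    name = 'example'

    @classmethod
    def alembic_migrations_path(cls):
        # Must return an absolute path (string) to the migrations directory;
        # a real extension would typically derive it from its own __file__.
        return os.path.abspath(
            os.path.join('example_extension', 'alembic_migrations', 'migrations'))


# Model tables should carry the extension-specific prefix:
tablename = '{0}nodes_data'.format(ExampleExtension.table_prefix())
```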
Extension Data Pipelines
If you want to change the deployment or provisioning data just before it is sent to an orchestrator, use extension data pipelines.

A data pipeline is a class which inherits from:

nailgun.extensions.BasePipeline

BasePipeline provides two methods which you can override:
process_provisioning
process_deployment
Both methods take the following parameters:

data
- serialized data which will be sent to the orchestrator. The data does not include nodes data which was defined by the user in replaced_deployment_info or in replaced_provisioning_info.

cluster
- a cluster instance for which the data was serialized.

nodes
- node instances for which the data was serialized. The nodes list does not include node instances which were filtered out in the data parameter.

**kwargs
- additional kwargs; must be in the method definition to provide backwards compatibility for future (small) changes in the extensions API.
Both methods must return the data dict so it can be processed by other pipelines.

To enable pipelines, add the data_pipelines variable to your extension class:
```python
class ExamplePipelineOne(BasePipeline):

    @classmethod
    def process_provisioning(cls, data, cluster, nodes, **kwargs):
        data['new_field'] = 'example_value'
        return data

    @classmethod
    def process_deployment(cls, data, cluster, nodes, **kwargs):
        data['new_field'] = 'example_value'
        return data


class ExamplePipelineTwo(BasePipeline):

    @classmethod
    def process_deployment(cls, data, cluster, nodes, **kwargs):
        data['new_field2'] = 'example_value2'
        return data


class ExampleExtension(BaseExtension):
    ...

    data_pipelines = [
        ExamplePipelineOne,
        ExamplePipelineTwo,
    ]

    ...
```
Pipeline classes will be executed in the order in which they are defined in the data_pipelines variable.
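The chaining behaviour can be sketched as follows (simplified; Nailgun's actual serializer code differs, BasePipeline is stubbed, and run_deployment_pipelines is a hypothetical helper, not a Nailgun function). Each pipeline receives the dict returned by the previous one:

```python
class BasePipeline(object):
    """Stand-in for nailgun.extensions.BasePipeline."""


class AddFieldOne(BasePipeline):
    @classmethod
    def process_deployment(cls, data, cluster, nodes, **kwargs):
        data['new_field'] = 'example_value'
        return data


class AddFieldTwo(BasePipeline):
    @classmethod
    def process_deployment(cls, data, cluster, nodes, **kwargs):
        data['new_field2'] = 'example_value2'
        return data


def run_deployment_pipelines(pipelines, data, cluster=None, nodes=None):
    # Pipelines run in list order; each must return the data dict so the
    # next pipeline can process it.
    for pipeline in pipelines:
        data = pipeline.process_deployment(data, cluster, nodes)
    return data


result = run_deployment_pipelines([AddFieldOne, AddFieldTwo], {'role': 'compute'})
```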
How to install and plug in extensions
To use the extensions system in Nailgun, implement an extension class which is a subclass of:
nailgun.extensions.BaseExtension
The class must be placed in a separate module which defines entry_points in its setup.py file. The extension entry point should use the Nailgun extensions namespace, which is:

nailgun.extensions
An example setup.py file with ExampleExtension may look like this:
```python
from setuptools import setup, find_packages

setup(
    name='example_package',
    version='1.0',
    description='Demonstration package for Nailgun Extensions',
    author='Fuel Nailgman',
    author_email='fuel@nailgman.com',
    url='http://example.com',
    classifiers=['Development Status :: 3 - Alpha',
                 'License :: OSI Approved :: Apache Software License',
                 'Programming Language :: Python',
                 'Programming Language :: Python :: 2',
                 'Environment :: Console',
                 ],
    packages=find_packages(),
    entry_points={
        'nailgun.extensions': [
            'ExampleExtension = example_package.nailgun_extensions.ExampleExtension',
        ],
    },
)
```
Now, to enable the extension, it is enough to run:

```shell
python setup.py install
```

or:

```shell
pip install .
```

The extension will then be discovered by Nailgun automatically after a restart.
Example Extension with Pipeline - additional logging
```python
import datetime
import logging

from nailgun.extensions import BaseExtension
from nailgun.extensions import BasePipeline

logger = logging.getLogger(__name__)


class TimeStartedPipeline(BasePipeline):

    @classmethod
    def process_provisioning(cls, data, cluster, nodes, **kwargs):
        now = datetime.datetime.now()
        data['time_started'] = 'provisioning started at {}'.format(now)
        return data

    @classmethod
    def process_deployment(cls, data, cluster, nodes, **kwargs):
        now = datetime.datetime.now()
        data['time_started'] = 'deployment started at {}'.format(now)
        return data


class ExampleExtension(BaseExtension):

    name = 'additional_logger'
    version = '1.0.0'
    description = 'Additional Logging Extension'

    data_pipelines = [
        TimeStartedPipeline,
    ]

    @classmethod
    def on_node_create(cls, node):
        logger.debug('Node %s has been created', node.id)

    @classmethod
    def on_node_update(cls, node):
        logger.debug('Node %s has been updated', node.id)

    @classmethod
    def on_node_reset(cls, node):
        logger.debug('Node %s has been reset', node.id)

    @classmethod
    def on_node_delete(cls, node):
        logger.debug('Node %s has been deleted', node.id)

    @classmethod
    def on_node_collection_delete(cls, node_ids):
        logger.debug('Nodes %s have been deleted', ', '.join(node_ids))

    @classmethod
    def on_cluster_delete(cls, cluster):
        logger.debug('Cluster %s has been deleted', cluster.id)

    @classmethod
    def on_before_deployment_check(cls, cluster):
        logger.debug('Cluster %s will be deployed soon', cluster.id)
```