Initial commit

Change-Id: Icc80ec5baba9086b0277e394fe8d8e42e2b27dc0
Volodymyr Kornylyuk 2016-07-26 11:39:24 +03:00
parent e07f62e4b5
commit 7764e6c4fe
40 changed files with 1803 additions and 0 deletions

7
.gitignore vendored Normal file
@@ -0,0 +1,7 @@
.DS_Store
.bundled_gems/
.build/
kafka*rpm
.tox
._*

201
LICENSE Normal file
@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

10
README.md Normal file
@@ -0,0 +1,10 @@
Kafka plugin
=======================
The *Kafka Plugin* installs [Apache Kafka](http://kafka.apache.org) and
[Apache ZooKeeper](https://zookeeper.apache.org) in a
Mirantis OpenStack (MOS) environment deployed by Fuel.
Please go to the [Kafka Plugin Documentation](
http://fuel-plugin-kafka.readthedocs.org/en/latest/index.html)
to get started.

@@ -0,0 +1,2 @@
Gemfile.lock
.bundle

@@ -0,0 +1,20 @@
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
source 'https://rubygems.org'
group :development, :test do
gem 'rake'
gem "puppet", ENV['PUPPET_VERSION'] || '~> 3.4.0'
gem 'puppetlabs_spec_helper'
end

@@ -0,0 +1,13 @@
require 'puppet-lint/tasks/puppet-lint'
require 'puppet-syntax/tasks/puppet-syntax'
PuppetLint.configuration.fail_on_warnings = true
PuppetLint.configuration.send('disable_80chars')
PuppetLint.configuration.send('disable_class_inherits_from_params_class')
PuppetLint.configuration.send('disable_class_parameter_defaults')
desc "Run lint, and syntax tests."
task :test => [
:lint,
:syntax,
]

@@ -0,0 +1,26 @@
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
notice('fuel-plugin-kafka: check_environment_configuration.pp')
# Check that JVM size doesn't exceed the physical RAM size
$kafka_heap = hiera('kafka::jvm_heap_size')
$zookeeper_heap = hiera('zookeeper::jvm_heap_size')
$total_heap_mb = ($kafka_heap + 0.0) * 1024 + ($zookeeper_heap + 0.0) * 1024
if $total_heap_mb >= $::memorysize_mb {
fail("The configured JVM size for Kafka (${kafka_heap} GB) and\
Zookeeper (${zookeeper_heap} GB) in total is greater than the system RAM (${::memorysize}).")
}

@@ -0,0 +1,82 @@
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
notice('kafka: firewall.pp')
$zookeeper_client_port = hiera('zookeeper::config::client_port')
$zookeeper_election_port = hiera('zookeeper::config::election_port')
$zookeeper_leader_port = hiera('zookeeper::config::leader_port')
$kafka_port = hiera('kafka::port')
$kafka_jmx_port = hiera('kafka::jmx_port')
class {'::firewall':}
firewall { '000 accept all icmp requests':
proto => 'icmp',
action => 'accept',
}
firewall { '001 accept all to lo interface':
proto => 'all',
iniface => 'lo',
action => 'accept',
}
firewall { '002 accept related established rules':
proto => 'all',
state => ['RELATED', 'ESTABLISHED'],
action => 'accept',
}
firewall {'020 ssh':
port => 22,
proto => 'tcp',
action => 'accept',
}
firewall { '100 zookeeper port':
port => $zookeeper_client_port,
proto => 'tcp',
action => 'accept',
}
firewall { '102 zookeeper port':
port => $zookeeper_election_port,
proto => 'tcp',
action => 'accept',
}
firewall { '103 zookeeper port':
port => $zookeeper_leader_port,
proto => 'tcp',
action => 'accept',
}
firewall { '104 kafka port':
port => $kafka_port,
proto => 'tcp',
action => 'accept',
}
firewall { '105 kafka port':
port => $kafka_jmx_port,
proto => 'tcp',
action => 'accept',
}
firewall { '999 drop all other requests':
proto => 'all',
chain => 'INPUT',
action => 'drop',
}

@@ -0,0 +1,72 @@
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
notice('fuel-plugin-kafka: hiera_override.pp')
# Initialize network-related variables
$network_scheme = hiera_hash('network_scheme')
$network_metadata = hiera_hash('network_metadata')
prepare_network_config($network_scheme)
$kafka = hiera_hash('kafka')
$hiera_file = '/etc/hiera/plugins/kafka.yaml'
$kafka_nodes = get_nodes_hash_by_roles($network_metadata, ['kafka', 'primary-kafka'])
$kafka_nodes_count = count($kafka_nodes)
$listen_address = get_network_role_property('management', 'ipaddr')
$kafka_addresses_map = get_node_to_ipaddr_map_by_network_role($kafka_nodes, 'management')
$kafka_ip_addresses = sort(values($kafka_addresses_map))
$uid = $kafka_nodes[$hostname]['uid']
if is_integer($kafka["replication_factor"]) and $kafka["replication_factor"] <= $kafka_nodes_count {
$replication_factor = $kafka["replication_factor"]
} else {
$replication_factor = $kafka_nodes_count
}
notice("Replication factor set to ${replication_factor}")
$calculated_content = inline_template('
---
kafka::jvm_heap_size: <%= @kafka["kafka_jvm_heap_size"] %>
kafka::num_partitions: <%= @kafka["num_partitions"] %>
kafka::replication_factor: <%= @replication_factor %>
kafka::log_retention_hours: <%= @kafka["log_retention_hours"] %>
# This directory must match the mount point set in volumes.yaml
kafka::data_dir: "/opt/kafka-data"
kafka::port: 9092
kafka::jmx_port: 9990
kafka::uid: <%= @uid %>
kafka::nodes:
<% @kafka_ip_addresses.each do |x| -%>
- "<%= x %>"
<% end -%>
kafka::addresses_map:
<% @kafka_addresses_map.each do |k,v| -%>
<%= k %>: "<%= v %>"
<% end -%>
zookeeper::jvm_heap_size: <%= @kafka["zookeeper_jvm_heap_size"] %>
zookeeper::config::client_port: 2181
zookeeper::config::election_port: 2888
zookeeper::config::leader_port: 3888
zookeeper::config::tick_time: 2000
zookeeper::config::init_limit: 5
zookeeper::config::sync_limit: 2
')
file { $hiera_file:
ensure => file,
content => $calculated_content,
}
class { '::osnailyfacter::netconfig::hiera_default_route' :}
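# For illustration, on a hypothetical three-node cluster with the default
# plugin settings, the rendered /etc/hiera/plugins/kafka.yaml would look
# roughly like this (the uid, hostnames, and addresses below are invented):
#
# ---
# kafka::jvm_heap_size: 1
# kafka::num_partitions: 5
# kafka::replication_factor: 3
# kafka::log_retention_hours: 168
# kafka::data_dir: "/opt/kafka-data"
# kafka::port: 9092
# kafka::jmx_port: 9990
# kafka::uid: 1
# kafka::nodes:
# - "10.109.1.4"
# - "10.109.1.5"
# - "10.109.1.6"
# kafka::addresses_map:
# node-1: "10.109.1.4"
# node-2: "10.109.1.5"
# node-3: "10.109.1.6"
# zookeeper::jvm_heap_size: 1
# zookeeper::config::client_port: 2181
# zookeeper::config::election_port: 2888
# zookeeper::config::leader_port: 3888
# zookeeper::config::tick_time: 2000
# zookeeper::config::init_limit: 5
# zookeeper::config::sync_limit: 2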

@@ -0,0 +1,64 @@
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
notice('fuel-plugin-kafka: kafka.pp')
$deployment_id = hiera('deployment_id')
$master_ip = hiera('master_ip')
$uid = hiera('kafka::uid')
$heap_size = hiera('kafka::jvm_heap_size')
$num_partitions = hiera('kafka::num_partitions')
$replication_factor = hiera('kafka::replication_factor')
$log_retention_hours = hiera('kafka::log_retention_hours')
$kafka_port = hiera('kafka::port')
$kafka_jmx_port = hiera('kafka::jmx_port')
$zookeeper_port = hiera('zookeeper::config::client_port')
$kafka = hiera_hash('kafka')
$plugin_version = $kafka['metadata']['plugin_version']
$array_version = split($plugin_version, '[.]')
$major_version = "${$array_version[0]}.${$array_version[1]}"
$kafka_version = '0.10.0.0'
$datastore = hiera('kafka::data_dir')
$mirror_url = "http://${master_ip}:8080/plugins/kafka-${major_version}/repositories/ubuntu"
$log_options = '-Dlog4j.configuration=file:/opt/kafka/config/log4j.properties -Dkafka.logs.dir=/var/log/kafka'
$jmx_opts = "-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false \
-Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.port=${kafka_jmx_port}"
class { 'kafka':
version => $kafka_version,
scala_version => '2.11',
mirror_url => $mirror_url,
}
class { 'kafka::broker':
config => {
'broker.id' => $uid,
'zookeeper.connect' => "localhost:${zookeeper_port}",
'inter.broker.protocol.version' => $kafka_version,
'num.partitions' => $num_partitions,
'default.replication.factor' => $replication_factor,
'log.retention.hours' => $log_retention_hours,
'port' => $kafka_port,
'log.dir' => "${datastore}/message-logs",
},
heap_opts => "-Xmx${heap_size}G -Xms${heap_size}G",
jmx_opts => $jmx_opts,
log4j_opts => $log_options,
}
file { "${datastore}/message-logs":
ensure => directory,
owner => 'kafka',
group => 'kafka',
mode => '0755',
}

@@ -0,0 +1,35 @@
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
notice('fuel-plugin-kafka: zookeeper.pp')
$myid = hiera('kafka::uid')
$addresses_map = hiera('kafka::addresses_map')
$heap_size = hiera('zookeeper::jvm_heap_size')
$datastore = hiera('kafka::data_dir')
class { 'zookeeper':
servers => $addresses_map,
id => $myid,
datastore => $datastore,
java_opts => "-Xmx${heap_size}G -Xms${heap_size}G",
}
file { '/etc/logrotate.d/zookeeper.conf':
ensure => present,
owner => 'root',
group => 'root',
mode => '0644',
content => template('kafka_zookeeper/zookeeper_logrotate.conf.erb'),
}

@@ -0,0 +1,19 @@
# managed by puppet
/var/log/zookeeper/*.log {
copytruncate
compress
delaycompress
missingok
notifempty
# logrotate allows using only year, month, day, and Unix epoch
dateext
dateformat -%Y%m%d-%s
# number of rotated files to keep
rotate 10
# do not rotate files unless both size and time conditions are met
hourly
minsize 20M
# force rotation if the file size exceeds 100M
maxsize 100M
}

@@ -0,0 +1,55 @@
# Author:: Liam Bennett (mailto:lbennett@opentable.com)
# Copyright:: Copyright (c) 2013 OpenTable Inc
# License:: MIT
# == Class: kafka::broker::service
#
# This private class is meant to be called from `kafka::broker`.
# It manages the Kafka service.
#
class kafka::broker::service(
$service_install = $kafka::broker::service_install,
$service_ensure = $kafka::broker::service_ensure,
$jmx_opts = $kafka::broker::jmx_opts,
$log4j_opts = $kafka::broker::log4j_opts,
$opts = $kafka::broker::opts
) {
if $caller_module_name != $module_name {
fail("Use of private class ${name} by ${caller_module_name}")
}
if $service_install {
if $::service_provider == 'systemd' {
include ::systemd
file { '/usr/lib/systemd/system/kafka.service':
ensure => present,
mode => '0644',
content => template('kafka/broker.unit.erb'),
}
file { '/etc/init.d/kafka':
ensure => absent,
}
File['/usr/lib/systemd/system/kafka.service'] ~> Exec['systemctl-daemon-reload'] -> Service['kafka']
} else {
file { '/etc/init/kafka.conf':
ensure => present,
mode => '0755',
content => template('kafka/init.erb'),
before => Service['kafka'],
}
}
service { 'kafka':
ensure => $service_ensure,
enable => true,
hasstatus => true,
hasrestart => true,
}
} else {
debug('Skipping service install')
}
}

@@ -0,0 +1,139 @@
# Author:: Liam Bennett (mailto:lbennett@opentable.com)
# Copyright:: Copyright (c) 2013 OpenTable Inc
# License:: MIT
# == Class: kafka
#
# This class will install kafka binaries
#
# === Requirements/Dependencies
#
# Currently requires the puppetlabs/stdlib module on the Puppet Forge in
# order to validate much of the provided configuration.
#
# === Parameters
#
# [*version*]
# The version of kafka that should be installed.
#
# [*scala_version*]
# The Scala version that Kafka was built with.
#
# [*install_dir*]
# The directory to install kafka to.
#
# [*mirror_url*]
# The URL that Kafka is downloaded from.
#
# [*install_java*]
# Install java if it's not already installed.
#
# [*package_dir*]
# The directory where the Kafka package is downloaded to.
#
# === Examples
#
#
class kafka (
$version = $kafka::params::version,
$scala_version = $kafka::params::scala_version,
$install_dir = $kafka::params::install_dir,
$mirror_url = $kafka::params::mirror_url,
$install_java = $kafka::params::install_java,
$package_dir = $kafka::params::package_dir
) inherits kafka::params {
validate_re($::osfamily, 'RedHat|Debian\b', "${::operatingsystem} not supported")
validate_bool($install_java)
validate_absolute_path($package_dir)
$basefilename = "kafka_${scala_version}-${version}.tgz"
$package_url = "${mirror_url}/kafka/${version}/${basefilename}"
if $version != $kafka::params::version {
$install_directory = "/opt/kafka-${scala_version}-${version}"
} elsif $scala_version != $kafka::params::scala_version {
$install_directory = "/opt/kafka-${scala_version}-${version}"
} else {
$install_directory = $install_dir
}
if $install_java {
class { '::java':
distribution => 'jdk',
}
}
group { 'kafka':
ensure => present,
}
user { 'kafka':
ensure => present,
shell => '/bin/bash',
require => Group['kafka'],
}
file { $package_dir:
ensure => directory,
owner => 'kafka',
group => 'kafka',
require => [
Group['kafka'],
User['kafka'],
],
}
file { $install_directory:
ensure => directory,
owner => 'kafka',
group => 'kafka',
require => [
Group['kafka'],
User['kafka'],
],
}
file { '/opt/kafka':
ensure => link,
target => $install_directory,
require => File[$install_directory],
}
file { '/opt/kafka/config':
ensure => directory,
owner => 'kafka',
group => 'kafka',
require => Archive["${package_dir}/${basefilename}"],
}
file { '/var/log/kafka':
ensure => directory,
owner => 'kafka',
group => 'kafka',
require => [
Group['kafka'],
User['kafka'],
],
}
include '::archive'
archive { "${package_dir}/${basefilename}":
ensure => present,
extract => true,
extract_command => 'tar xfz %s --strip-components=1',
extract_path => $install_directory,
source => $package_url,
creates => "${install_directory}/config",
cleanup => true,
user => 'kafka',
group => 'kafka',
require => [
File[$package_dir],
File[$install_directory],
Group['kafka'],
User['kafka'],
],
}
}

@@ -0,0 +1,34 @@
# Kafka Broker Service
description "Kafka Broker"
start on (started zookeeper)
stop on (stopping zookeeper)
respawn
respawn limit 2 5
env HOME=/opt/kafka/config
env KAFKA_HOME=/opt/kafka
env KAFKA_JMX_OPTS="<%= @jmx_opts %>"
env KAFKA_LOG4J_OPTS="<%= @log4j_opts %>"
env KAFKA_HEAP_OPTS="<%= @heap_opts %>"
env KAFKA_GC_LOG_OPTS=" "
umask 007
limit nofile 65536 65536
limit core unlimited unlimited
kill timeout 300
pre-start script
#Sanity checks
[ -r $HOME/server.properties ]
end script
setuid kafka
setgid kafka
script
$KAFKA_HOME/bin/kafka-server-start.sh $HOME/server.properties
end script

@@ -0,0 +1,130 @@
# Class: zookeeper::post_install
#
# In order to maintain compatibility with older releases, there are
# some post-install tasks to ensure the same behaviour on all platforms.
#
# Should not be called directly
#
class zookeeper::post_install(
$ensure,
$ensure_account,
$ensure_cron,
$user,
$group,
$datastore,
$snap_retain_count,
$cleanup_sh,
$manual_clean = undef,
){
# make sure the user and group exist for ZooKeeper (#49), if the OS package
# doesn't handle their creation
if ($ensure_account){
ensure_resource('group',
[$group],
{'ensure' => $ensure_account}
)
case $::osfamily {
'Redhat': {
$shell = '/sbin/nologin'
}
default: {
# sane default for most OS
$shell = '/bin/false'
}
}
ensure_resource('user',
[$user],
{
'ensure' => $ensure_account,
# 'home' => $datastore,
'comment' => 'Zookeeper',
'gid' => $group,
'shell' => $shell,
'require' => Group[$group]
}
)
}
if ($manual_clean) {
# user defined value
$clean = $manual_clean
} else {
# autodetect
# since ZooKeeper 3.4 there's no need for purging snapshots with cron
case $::osfamily {
'Debian': {
case $::operatingsystem {
'Debian': {
case $::lsbdistcodename {
'wheezy', 'squeeze': { # 3.3.5
$clean = true
}
default: { # future releases
$clean = false
}
}
}
'Ubuntu': {
case $::lsbdistcodename {
'precise': { # 3.3.5
$clean = true
}
default: {
$clean = false
}
}
}
default: {
fail ("Family: '${::osfamily}' OS: '${::operatingsystem}' is not supported yet")
}
}
}
'Redhat': {
$clean = false
}
default: {
fail ("Family: '${::osfamily}' OS: '${::operatingsystem}' is not supported yet")
}
}
}
# if !$cleanup_count, then ensure this cron is absent.
if ($clean and $snap_retain_count > 0 and $ensure != 'absent') {
if ($ensure_cron){
ensure_resource('package', 'cron', {
ensure => 'installed',
})
cron { 'zookeeper-cleanup':
ensure => present,
command => "${cleanup_sh} ${datastore} ${snap_retain_count}",
hour => 2,
minute => 42,
user => $user,
}
}else {
file { '/etc/cron.daily/zkcleanup':
ensure => present,
content => "${cleanup_sh} ${datastore} ${snap_retain_count}",
}
}
}
# package removal
if($clean and $ensure == 'absent'){
if ($ensure_cron){
cron { 'zookeeper-cleanup':
ensure => $ensure,
}
}else{
file { '/etc/cron.daily/zkcleanup':
ensure => $ensure,
}
}
}
}

@@ -0,0 +1,112 @@
# http://hadoop.apache.org/zookeeper/docs/current/zookeeperAdmin.html
# The number of milliseconds of each tick
tickTime=<%= @tick_time %>
# The number of ticks that the initial
# synchronization phase can take
initLimit=<%= @init_limit %>
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=<%= @sync_limit %>
# the directory where the snapshot is stored.
dataDir=<%= @datastore %>
# Place the dataLogDir to a separate physical disc for better performance
<% if @datalogstore -%>
dataLogDir=<%= @datalogstore %>
<% else -%>
# dataLogDir=/disk2/zookeeper
<% end -%>
# the port at which the clients will connect
clientPort=<%= @client_port %>
# interface to bind
<% if @client_ip -%>
clientPortAddress=<%= @client_ip %>
<% else -%>
#clientPortAddress=
<% end -%>
# specify all zookeeper servers
# The first port is used by followers to connect to the leader
# The second one is used for leader election
#server.UID1=zookeeper1:2888:3888
#server.UID2=zookeeper2:2888:3888
#server.UID3=zookeeper3:2888:3888
<% @servers.each do |k, h| -%>
<%# make sure port is not included in hostname %>
<% if h.index(':') -%>
<% h = h[0...(h.index(':'))] -%>
<% end -%>
<% server_id = k.split('-').last() -%>
<% if @observers.include? h -%>
<% observer_text=':observer' -%>
<% end -%>
<%= "server.#{server_id}=#{h}:%s:%s%s" % [ @election_port, @leader_port, observer_text ] -%>
<% end -%>
# To avoid seeks ZooKeeper allocates space in the transaction log file in
# blocks of preAllocSize kilobytes. The default block size is 64M. One reason
# for changing the size of the blocks is to reduce the block size if snapshots
# are taken more often. (Also, see snapCount).
#preAllocSize=65536
# Clients can submit requests faster than ZooKeeper can process them,
# especially if there are a lot of clients. To prevent ZooKeeper from running
# out of memory due to queued requests, ZooKeeper will throttle clients so that
# there are no more than globalOutstandingLimit outstanding requests in the
# system. The default limit is 1,000.
# ZooKeeper logs transactions to a transaction log. After snapCount
# transactions are written to a log file a snapshot is started and a new
# transaction log file is started. The default snapCount is 10,000.
snapCount=<%= @snap_count %>
# If this option is defined, requests will be logged to a trace file named
# traceFile.year.month.day.
#traceFile=
# Leader accepts client connections. Default value is "yes". The leader machine
# coordinates updates. For higher update throughput at the slight expense of
# read throughput the leader can be configured to not accept clients and focus
# on coordination.
<% if @leader -%>
leaderServes=yes
<% else -%>
leaderServes=no
<% end -%>
# Since 3.4.0: When enabled, the ZooKeeper auto purge feature retains the
# autopurge.snapRetainCount most recent snapshots and the corresponding
# transaction logs in the dataDir and dataLogDir respectively and deletes
# the rest. Defaults to 3. Minimum value is 3.
autopurge.snapRetainCount=<%= @snap_retain_count %>
# Since 3.4.0: The time interval in hours for which the purge task has to be
# triggered. Set to a positive integer (1 and above) to enable the auto purging.
# Defaults to 0.
autopurge.purgeInterval=<%= @purge_interval %>
# Maximum allowed connections
<% if @max_allowed_connections -%>
maxClientCnxns=<%= @max_allowed_connections %>
<% else -%>
#maxClientCnxns=60
<% end -%>
<% if @peer_type != 'UNSET' -%>
# Zookeeper peer type
peerType=<%= @peer_type %>
<% end -%>
# The minimum session timeout in milliseconds that the server will allow the
# client to negotiate. Defaults to 2 times the tickTime.
<% if @min_session_timeout -%>
minSessionTimeout=<%= @min_session_timeout %>
<% else -%>
#minSessionTimeout=2
<% end -%>
# The maximum session timeout in milliseconds that the server will allow the
# client to negotiate. Defaults to 20 times the tickTime.
<% if @max_session_timeout -%>
maxSessionTimeout=<%= @max_session_timeout %>
<% else -%>
#maxSessionTimeout=20
<% end -%>

137
deployment_tasks.yaml Normal file
@@ -0,0 +1,137 @@
# Groups definitions
####################
- id: primary-kafka
type: group
version: 2.0.0
role: [primary-kafka]
tasks:
- hiera
- setup_repositories
- fuel_pkgs
- globals
- tools
- logging
- netconfig
- hosts
- kafka-firewall
- kafka-check-configuration
- zookeeper-installation
- kafka-hiera
- kafka-installation
requires: [deploy_start]
required_for: [deploy_end]
parameters:
strategy:
type: one_by_one
- id: kafka
type: group
version: 2.0.0
role: [kafka]
tasks:
- hiera
- setup_repositories
- fuel_pkgs
- globals
- tools
- logging
- netconfig
- hosts
- kafka-firewall
- kafka-check-configuration
- kafka-hiera
- zookeeper-installation
- kafka-installation
requires: [deploy_start, primary-kafka]
required_for: [deploy_end]
parameters:
strategy:
type: parallel
# Tasks definitions for the deployment
######################################
# This task needs to be reexecuted to adapt the configuration parameters which
# depend on the number of nodes in the cluster
- id: kafka-hiera
type: puppet
version: 2.0.0
requires: [netconfig]
required_for: [deploy_end]
parameters:
puppet_manifest: "puppet/manifests/hiera_override.pp"
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 120
reexecute_on:
- deploy_changes
# This task needs to be reexecuted to recheck that the configuration parameters
# match the node's characteristics (e.g., the JVM size).
- id: kafka-check-configuration
type: puppet
version: 2.0.0
requires: [kafka-hiera]
required_for: [deploy_end]
parameters:
puppet_manifest: puppet/manifests/check_environment_configuration.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 120
reexecute_on:
- deploy_changes
- id: kafka-firewall
type: puppet
version: 2.0.0
requires: [kafka-check-configuration]
required_for: [deploy_end]
parameters:
puppet_manifest: "puppet/manifests/firewall.pp"
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 3600
- id: kafka-installation
type: puppet
version: 2.0.0
requires: [zookeeper-installation]
required_for: [deploy_end]
parameters:
puppet_manifest: puppet/manifests/kafka.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 600
reexecute_on:
- deploy_changes
# This task needs to be reexecuted to reconfigure kafka instances
- id: zookeeper-installation
type: puppet
version: 2.0.0
requires: [kafka-check-configuration]
required_for: [deploy_end]
parameters:
puppet_manifest: puppet/manifests/zookeeper.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 600
reexecute_on:
- deploy_changes
- id: kafka-dns-client
type: puppet
version: 2.0.0
role: [primary-kafka, kafka]
requires: [post_deployment_start]
required_for: [post_deployment_end]
parameters:
puppet_manifest: /etc/puppet/modules/osnailyfacter/modular/dns/dns-client.pp
puppet_modules: /etc/puppet/modules
timeout: 600
- id: kafka-ntp-client
type: puppet
version: 2.0.0
role: [primary-kafka, kafka]
requires: [kafka-dns-client]
required_for: [post_deployment_end]
parameters:
puppet_manifest: /etc/puppet/modules/osnailyfacter/modular/ntp/ntp-client.pp
puppet_modules: /etc/puppet/modules
timeout: 600

2
docs/.gitignore vendored Normal file
@@ -0,0 +1,2 @@
build/
images/*.pdf

191
docs/Makefile Normal file
@@ -0,0 +1,191 @@
# Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = build
# User-friendly check for sphinx-build
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
endif
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
# SVG to PDF conversion
SVG2PDF = inkscape
SVG2PDF_FLAGS =
# Build a list of SVG files to convert to PDF
PDF_FILES := $(foreach dir, images, $(patsubst %.svg,%.pdf,$(wildcard $(dir)/*.svg)))
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext
help:
@echo "Please use \`make <target>' where <target> is one of"
@echo " html to make standalone HTML files"
@echo " dirhtml to make HTML files named index.html in directories"
@echo " singlehtml to make a single large HTML file"
@echo " pickle to make pickle files"
@echo " json to make JSON files"
@echo " htmlhelp to make HTML files and a HTML help project"
@echo " qthelp to make HTML files and a qthelp project"
@echo " devhelp to make HTML files and a Devhelp project"
@echo " epub to make an epub"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
@echo " latexpdf to make LaTeX files and run them through pdflatex"
@echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
@echo " text to make text files"
@echo " man to make manual pages"
@echo " texinfo to make Texinfo files"
@echo " info to make Texinfo files and run them through makeinfo"
@echo " gettext to make PO message catalogs"
@echo " changes to make an overview of all changed/added/deprecated items"
@echo " xml to make Docutils-native XML files"
@echo " pseudoxml to make pseudoxml-XML files for display purposes"
@echo " linkcheck to check all external links for integrity"
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
clean:
rm -rf $(BUILDDIR)/*
rm -f $(PDF_FILES)
html:
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
dirhtml:
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
singlehtml:
$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
@echo
@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
pickle:
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
@echo
@echo "Build finished; now you can process the pickle files."
json:
$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
@echo
@echo "Build finished; now you can process the JSON files."
htmlhelp:
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
@echo
@echo "Build finished; now you can run HTML Help Workshop with the" \
".hhp project file in $(BUILDDIR)/htmlhelp."
qthelp:
$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
@echo
@echo "Build finished; now you can run "qcollectiongenerator" with the" \
".qhcp project file in $(BUILDDIR)/qthelp, like this:"
@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/LMAcollector.qhcp"
@echo "To view the help file:"
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/LMAcollector.qhc"
devhelp:
$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
@echo
@echo "Build finished."
@echo "To view the help file:"
@echo "# mkdir -p $$HOME/.local/share/devhelp/LMAcollector"
@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/LMAcollector"
@echo "# devhelp"
epub:
$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
@echo
@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
latex: $(PDF_FILES)
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo
@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
@echo "Run \`make' in that directory to run these through (pdf)latex" \
"(use \`make latexpdf' here to do that automatically)."
latexpdf: $(PDF_FILES)
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through pdflatex..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
latexpdfja:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through platex and dvipdfmx..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
text:
$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
@echo
@echo "Build finished. The text files are in $(BUILDDIR)/text."
man:
$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
@echo
@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
texinfo:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo
@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
@echo "Run \`make' in that directory to run these through makeinfo" \
"(use \`make info' here to do that automatically)."
info:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo "Running Texinfo files through makeinfo..."
make -C $(BUILDDIR)/texinfo info
@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
gettext:
$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
@echo
@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
changes:
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
@echo
@echo "The overview file is in $(BUILDDIR)/changes."
linkcheck:
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
@echo
@echo "Link check complete; look for any errors in the above output " \
"or in $(BUILDDIR)/linkcheck/output.txt."
doctest:
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
@echo "Testing of doctests in the sources finished, look at the " \
"results in $(BUILDDIR)/doctest/output.txt."
xml:
$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
@echo
@echo "Build finished. The XML files are in $(BUILDDIR)/xml."
pseudoxml:
$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
@echo
@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."
# Rule for building the PDF files only
images: $(PDF_FILES)
# Pattern rule for converting SVG to PDF
%.pdf : %.svg
$(SVG2PDF) -f $< -A $@

32
docs/source/conf.py Normal file
@@ -0,0 +1,32 @@
import sys
import os
extensions = []
templates_path = ['_templates']
source_suffix = '.rst'
master_doc = 'index'
project = u'The Kafka Cluster Plugin'
copyright = u'2016, Mirantis Inc.'
version = '0.1'
release = '0.1.0'
exclude_patterns = [
]
pygments_style = 'sphinx'
html_theme = 'default'
htmlhelp_basename = 'KafkaPlugindoc'
latex_elements = {
}
latex_documents = [
('index', 'KafkaPlugindoc.tex', u'The Kafka Cluster Plugin',
u'Mirantis Inc.', 'manual'),
]
man_pages = [
('index', 'kafkaplugin', u'The Kafka Cluster Plugin',
[u'Mirantis Inc.'], 1)
]
texinfo_documents = [
('index', 'KafkaPlugin', u'The Kafka Cluster Plugin',
u'Mirantis Inc.', 'KafkaPlugin', 'One line description of project.',
'Miscellaneous'),
]
latex_elements = {'classoptions': ',openany,oneside', 'babel':
'\\usepackage[english]{babel}'}

@@ -0,0 +1,41 @@
.. _overview:
Overview
========
The *Kafka Plugin* installs `Apache Kafka <http://kafka.apache.org/>`_ and
`Apache ZooKeeper <https://zookeeper.apache.org/>`_ in a
Mirantis OpenStack (MOS) environment deployed by Fuel.
Apache Kafka is a publish-subscribe messaging system. It is fast,
scalable, and durable.
The *Kafka Plugin* was created for exchanging messages between the various
components of StackLight and Ceilometer, but it is generic enough to
accommodate other uses.
The plugin provides the Fuel role *kafka*. The maximum number of nodes is 5;
the recommended minimum is 3, and an odd number of nodes is required for
leader election.
Please refer to the `Kafka 0.10.0 documentation <http://kafka.apache.org/documentation.html>`_
for more information.
Requirements
------------
======================= ================
Requirements Version/Comment
======================= ================
MOS 9.0
======================= ================
.. _limitations:
Limitations
-----------
* Kafka supports authentication, encryption, and authorization. The current version of the
plugin doesn't support any form of security, meaning that the Kafka cluster will be
“open” on the management network. We plan to support some level of security in future
versions of the plugin.
* The Kafka Plugin does not expose configuration properties for all the broker configuration
parameters. This means that the Kafka broker configuration set by the plugin will not be
appropriate for every usage. In the future, we may make the Fuel plugin more configurable
by adding new configuration properties.

76
docs/source/guide.rst Normal file
@@ -0,0 +1,76 @@
User Guide
==========
Once the *Kafka Plugin* is installed following the instructions of
the :ref:`Installation Guide`, you can add Kafka nodes to a new or an
existing Mirantis OpenStack (MOS) environment.
Plugin Configuration
--------------------
To use the *Kafka Plugin*, you need to add nodes with the Kafka role to your environment
(see `Add a node to an OpenStack environment
<http://docs.openstack.org/developer/fuel-docs/userdocs/fuel-user-guide/configure-environment/add-nodes.html>`_).
1. Make sure that the plugin is properly installed on the Fuel Master node.
Go to the *Plugins* tab. You should see the following:
.. image:: images/plugins-list.png
:width: 100%
2. Enable the plugin. You can configure additional settings at this step.
Go to the *Environments* tab and select the checkbox for *The Apache Kafka Message Broker Plugin*:
.. image:: images/settings.png
:width: 100%
3. Add nodes to your environment and assign the **Kafka** role.
.. note:: When `adding nodes
<http://docs.openstack.org/developer/fuel-docs/userdocs/fuel-user-guide/configure-environment/add-nodes.html>`_
to the environment and `assigning or changing roles
<http://docs.openstack.org/developer/fuel-docs/userdocs/fuel-user-guide/configure-environment/change-roles.html>`_,
do not forget to use an odd number of nodes, as recommended in the :ref:`overview` section.
.. image:: images/assign-role.png
:width: 100%
4. `Verify your network configuration
<http://docs.openstack.org/developer/fuel-docs/userdocs/fuel-user-guide/configure-environment/verify-networks.html>`_.
5. `Deploy your changes
<http://docs.openstack.org/developer/fuel-docs/userdocs/fuel-user-guide/deploy-environment.html>`_
once you are done with the configuration of your environment (a CLI sketch for these steps follows).
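For reference, assigning the role (step 3) and deploying (step 5) can also be
done from the Fuel Master CLI. This is a sketch only: the environment ID and
node IDs below are placeholders, and the exact flags depend on the
python-fuelclient version shipped with your Fuel release.
.. code-block:: console
[root@fuel ~]# fuel node set --node 2,3,4 --role kafka --env 1
[root@fuel ~]# fuel deploy-changes --env 1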
Plugin Verification
-------------------
#. On the Fuel Master node, find the IP address of a node where
Kafka is installed using the :command:`fuel nodes` command:
.. code-block:: console
[root@fuel ~]# fuel nodes
id|status|name |cluster|ip |mac |roles |
--|------|----------------|-------|----|-------------------------|
1 |ready |Untitled (fa:87)| 1 |... |... |kafka |
2 |ready |Untitled (12:aa)| 1 |... |... |kafka |
3 |ready |Untitled (4e:6e)| 1 |... |... |kafka |
#. Log in to any of these nodes using SSH, for example, to ``node-1``.
#. Run the following command:
.. code-block:: console
root@node-1:~# netstat -ntpl | grep java
tcp6 0 0 :::9092 :::* LISTEN 14702/java
tcp6 0 0 :::2181 :::* LISTEN 9710/java
tcp6 0 0 :::9990 :::* LISTEN 14702/java
You will see that Kafka and ZooKeeper are running and listening on their ports:
2181 for ZooKeeper, 9092 and 9990 for Kafka.
#. Additionally, you can test sending and receiving messages by following the
`Quick Start Guide (Step 3 - Step 5) <http://kafka.apache.org/documentation.html#quickstart>`_;
a minimal example is sketched below.
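The following is a minimal round-trip test, assuming it is run on one of the
Kafka nodes; the topic name ``smoke-test`` is arbitrary, and the paths assume
the plugin's ``/opt/kafka`` install location:
.. code-block:: console
root@node-1:~# /opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic smoke-test
root@node-1:~# echo "hello" | /opt/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic smoke-test
root@node-1:~# /opt/kafka/bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic smoke-test --from-beginning
hello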

Binary files not shown (three image files added: 96 KiB, 74 KiB, and 124 KiB).

15
docs/source/index.rst Normal file
@@ -0,0 +1,15 @@
=====================================================
Welcome to the Kafka Cluster Plugin Documentation!
=====================================================
.. toctree::
:maxdepth: 2
description
installation
guide
Indices and Tables
==================
* :ref:`search`

@@ -0,0 +1,35 @@
.. _installation guide:
Installation Guide
==================
Install the Plugin
------------------
To install the *Kafka Plugin*, you need to follow these steps.
#. Please refer to the :ref:`limitations` section before you proceed.
#. Download the plugin from the
`Fuel Plugins Catalog <https://www.mirantis.com/products/openstack-drivers-and-plugins/fuel-plugins/>`_.
#. Copy the plugin's RPM file to the
`Fuel Master node
<http://docs.openstack.org/developer/fuel-docs/userdocs/fuel-install-guide/intro/intro_fuel_intro.html>`_
with secure copy (scp)::
# scp kafka-0.1-0.1.0-1.noarch.rpm \
root@<the_Fuel_Master_node_IP_address>:/tmp
#. Log into the Fuel Master node and install the plugin::
# ssh root@<the_Fuel_Master_node_IP_address>
[root@fuel-master ~]# cd /tmp
[root@fuel-master ~]# fuel plugins --install kafka-0.1-0.1.0-1.noarch.rpm
#. Verify that the plugin is installed correctly::
[root@fuel-master ~]# fuel plugins list
id | name | version | package_version | releases
---+-------+---------+-----------------+--------------------
1 | kafka | 0.1.0 | 4.0.0 | ubuntu (mitaka-9.0)

67
environment_config.yaml Normal file
@@ -0,0 +1,67 @@
attributes:
kafka_jvm_heap_size:
value: '1'
label: 'Kafka JVM Heap size'
description: "The JVM Heap size for Kafka in GB"
weight: 10
type: "text"
regex:
source: '^\d+$'
error: "You must provide a number"
zookeeper_jvm_heap_size:
value: '1'
label: 'ZooKeeper JVM Heap size'
description: 'The JVM Heap size for ZooKeeper in GB. The Kafka documentation recommends using 3-5 GB as the JVM Heap size for ZooKeeper.'
weight: 15
type: "text"
regex:
source: '^\d+$'
error: "You must provide a number"
advanced_settings:
label: "Advanced settings"
value: false
description: "The plugin determines the best settings if not set"
weight: 20
type: checkbox
num_partitions:
value: '5'
label: 'Number of partitions'
description: "The number of partitions per topic. Default is 5."
weight: 22
type: "text"
regex:
source: '^\d+$'
error: "You must provide a number"
restrictions:
- condition: "settings:kafka.advanced_settings.value == false"
action: hide
replication_factor:
value: ''
label: 'Replication factor'
description: 'The partition replication factor. Default is the number of nodes in the Kafka cluster.'
weight: 23
type: "text"
regex:
source: '^\d{0,2}$'
error: "You must provide either a number or leave it empty"
restrictions:
- condition: "settings:kafka.advanced_settings.value == false"
action: hide
log_retention_hours:
value: '168'
label: 'Retention period'
description: 'The log retention in hours. Default is 168 hours (7 days).'
weight: 24
type: "text"
regex:
source: '^\d+$'
error: "You must provide a number"
restrictions:
- condition: "settings:kafka.advanced_settings.value == false"
action: hide

63
functions.sh Normal file
@@ -0,0 +1,63 @@
#!/bin/bash
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -eux
ROOT="$(dirname "$(readlink -f "$0")")"
MODULES_DIR="${ROOT}"/deployment_scripts/puppet/modules
RPM_REPO="${ROOT}"/repositories/centos/
DEB_REPO="${ROOT}"/repositories/ubuntu/
function get_package_path {
FILE=$(basename "$1")
if [[ "$1" == *.deb ]]; then
echo "$DEB_REPO"/"$FILE"
elif [[ "$1" == *.rpm ]]; then
echo "$RPM_REPO"/"$FILE"
else
echo "Invalid URL for $1"
exit 1
fi
}
# Download RPM or DEB packages and store them in the local repository directory
function download_packages {
while [ $# -gt 0 ]; do
wget -qO - "$1" > "$(get_package_path "$1")"
shift
done
}
# Download file and store it in the local directory
function download_file {
URL=$1
FILE_NAME=$2
DESTINATION=$3
mkdir -p "$DESTINATION"
wget -qO "$DESTINATION/$FILE_NAME" "$URL"
}
# Download official Puppet module and store it in the local directory
function download_puppet_module {
rm -rf "${MODULES_DIR:?}"/"$1"
mkdir -p "${MODULES_DIR}"/"$1"
wget -qO- "$2" | tar -C "${MODULES_DIR}/$1" --strip-components=1 -xz
}
function check_md5sum {
FILE="$(get_package_path "$1")"
echo "$2 $FILE" | md5sum --check --strict
}
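# A hypothetical usage of these helpers from a build script could look as
# follows; the package URL and checksum below are placeholders, not artifacts
# the plugin actually ships:
#
# # Fetch a .deb into the local ubuntu repository directory, then verify it
# download_packages "http://example.com/pool/main/k/kafka/kafka_0.10.0.0_all.deb"
# check_md5sum "http://example.com/pool/main/k/kafka/kafka_0.10.0.0_all.deb" "0123456789abcdef0123456789abcdef"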

29
metadata.yaml Normal file
@@ -0,0 +1,29 @@
# Plugin name
name: kafka
# Human-readable name for your plugin
title: The Apache Kafka Message Broker Plugin
# Plugin version
version: '0.1.0'
# Description
description: Deploy Apache Kafka Message Broker Cluster
# Required fuel version
fuel_version: ['9.0']
# Licences
licenses: ['Apache License Version 2.0']
# Specify author or company name
authors: ['Mirantis Inc.']
# A link to the plugin homepage
homepage: 'https://github.com/openstack/fuel-plugin-kafka'
groups: ['monitoring']
is_hotpluggable: true
# The plugin is compatible with releases in the list
releases:
- os: ubuntu
version: mitaka-9.0
mode: ['ha']
deployment_scripts_path: deployment_scripts/
repository_path: repositories/ubuntu
# Version of plugin package
package_version: '4.0.0'

11
node_roles.yaml Normal file
@@ -0,0 +1,11 @@
kafka:
name: 'Kafka'
description: 'Install Kafka Cluster'
has_primary: true
public_ip_required: false
weight: 100
limits:
max: 5
recommended: 3
conflicts:
- compute

30
pre_build_hook Executable file
@@ -0,0 +1,30 @@
#!/bin/bash
set -eux
. "$(dirname "$(readlink -f "$0")")"/functions.sh
ARCHIVE_MODULE_URL="https://forge.puppet.com/v3/files/puppet-archive-1.0.0.tar.gz"
JAVA_MODULE_URL="https://forge.puppet.com/v3/files/puppetlabs-java-1.6.0.tar.gz"
STDLIB_MODULE_URL="https://forge.puppet.com/v3/files/puppetlabs-stdlib-4.12.0.tar.gz"
SYSTEMD_MODULE_URL="https://forge.puppet.com/v3/files/camptocamp-systemd-0.2.2.tar.gz"
ZOOKEEPER_MODULE_URL="https://forge.puppet.com/v3/files/deric-zookeeper-0.5.5.tar.gz"
KAFKA_MODULE_URL="https://forge.puppet.com/v3/files/puppet-kafka-2.0.0.tar.gz"
download_puppet_module "archive" "${ARCHIVE_MODULE_URL}"
download_puppet_module "java" "${JAVA_MODULE_URL}"
download_puppet_module "stdlib" "${STDLIB_MODULE_URL}"
download_puppet_module "systemd" "${SYSTEMD_MODULE_URL}"
download_puppet_module "zookeeper" "${ZOOKEEPER_MODULE_URL}"
download_puppet_module "kafka" "${KAFKA_MODULE_URL}"
# Patching modules
PATCH_DIR="deployment_scripts/puppet/patches"
MODULES_DIR="deployment_scripts/puppet/modules"
cp -f $PATCH_DIR/zookeeper/manifests/post_install.pp $MODULES_DIR/zookeeper/manifests
cp -f $PATCH_DIR/zookeeper/templates/conf/zoo.cfg.erb $MODULES_DIR/zookeeper/templates/conf
cp -f $PATCH_DIR/kafka/manifests/init.pp $MODULES_DIR/kafka/manifests
cp -f $PATCH_DIR/kafka/manifests/broker/service.pp $MODULES_DIR/kafka/manifests/broker
cp -f $PATCH_DIR/kafka/templates/init.erb $MODULES_DIR/kafka/templates
KAFKA_TARBALL_URL="http://mirrors.ukfast.co.uk/sites/ftp.apache.org/kafka/0.10.0.0/kafka_2.11-0.10.0.0.tgz"
download_file "${KAFKA_TARBALL_URL}" kafka_2.11-0.10.0.0.tgz repositories/ubuntu/kafka/0.10.0.0

2
repositories/ubuntu/.gitignore vendored Normal file
@@ -0,0 +1,2 @@
*deb
kafka


1
tasks.yaml Normal file
@@ -0,0 +1 @@
[]

2
test-requirements.txt Normal file
@@ -0,0 +1,2 @@
-e git+https://github.com/openstack/fuel-plugins.git#egg=fuel-plugin-builder
Sphinx

27
tox.ini Normal file
@@ -0,0 +1,27 @@
[tox]
envlist = manifests,build_plugin
skipsdist = True
[testenv]
deps = -r{toxinidir}/test-requirements.txt
passenv = HOME
[testenv:manifests]
deps =
changedir = {toxinidir}/deployment_scripts/puppet/manifests
whitelist_externals =
bundle
mkdir
commands =
mkdir -p {toxinidir}/.bundled_gems
bundle install --path {toxinidir}/.bundled_gems
bundle exec rake test
[testenv:build_plugin]
changedir = {toxinidir}
whitelist_externals =
fpb
bash
commands =
fpb --check {toxinidir} --debug
fpb --build {toxinidir} --debug
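# With this configuration, the two environments are typically invoked as
# follows (assuming tox is installed on the build host):
#
# tox -e manifests # run the Puppet lint and syntax checks
# tox -e build_plugin # check and build the plugin with fuel-plugin-builder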

21
volumes.yaml Normal file
@@ -0,0 +1,21 @@
volumes:
- id: "kafka"
type: "vg"
min_size:
generator: "calc_gb_to_mb"
generator_args: [30]
label: "Kafka data"
volumes:
- mount: "/opt/kafka-data"
type: "lv"
name: "kafka"
file_system: "ext4"
size:
generator: "calc_total_vg"
generator_args: ["kafka"]
volumes_roles_mapping:
kafka:
- {allocate_size: "min", id: "os"}
- {allocate_size: "min", id: "logs"}
- {allocate_size: "all", id: "kafka"}