Initial implementation of Fuel MidoNet plugin
Change-Id: I022e1d8d20036b7c50d92f009ad25d17a11dda55
@@ -0,0 +1,202 @@
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "{}"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright {yyyy} {name of copyright owner}

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
@@ -0,0 +1,7 @@
file { '/etc/yum.repos.d/CentOS-Base.repo':
  ensure => absent,
}

file { '/etc/yum.repos.d/epel.repo':
  ensure => absent,
}
@@ -0,0 +1,3 @@
$fuel_settings = parseyaml($astute_settings_yaml)
$service_endpoint = $::fuel_settings['management_vip']
class { 'plugin_midonet::compute_neutron': }
@@ -0,0 +1,66 @@
$fuel_settings = parseyaml($astute_settings_yaml)
$nodes_hash = $::fuel_settings['nodes']
$node = filter_nodes($nodes_hash,'name',$::hostname)

#Network
$internal_address = $node[0]['internal_address']
$public_int = $::fuel_settings['public_interface']
$gateways = filter_nodes($nodes_hash,'role','midonet-gw')
$gateways_internal_addresses = nodes_to_hash($gateways,'name','internal_address')

#amqp
$primary_controller_nodes = filter_nodes($nodes_hash,'role','primary-controller')
$controllers = concat($primary_controller_nodes, filter_nodes($nodes_hash,'role','controller'))
$controller_internal_addresses = nodes_to_hash($controllers,'name','internal_address')
$controller_nodes = ipsort(values($controller_internal_addresses))
if $::internal_address in $controller_nodes {
  # prefer local MQ broker if it exists on this node
  $amqp_nodes = concat(['127.0.0.1'], fqdn_rotate(delete($controller_nodes, $::internal_address)))
} else {
  $amqp_nodes = fqdn_rotate($controller_nodes)
}
$amqp_port = '5673'
$amqp_hosts = inline_template("<%= @amqp_nodes.map {|x| x + ':' + @amqp_port}.join ',' %>")
$amqp_user = 'nova'
$amqp_password = $::fuel_settings['rabbit']['password']

$access_hash = $::fuel_settings['access']
$midonet_api_address = $primary_controller_nodes[0]['internal_address']

#Logging
$verbose = true
$debug = $::fuel_settings['debug']
$use_syslog = $::fuel_settings['use_syslog'] ? { default=>true }
$syslog_log_facility_neutron = 'LOG_LOCAL4'

#Neutron
$db_host = $::fuel_settings['management_vip']
$neutron_db_user = 'neutron'
$neutron_config = $::fuel_settings['quantum_settings']
$network_provider = 'neutron'
$neutron_db_password = $neutron_config['database']['passwd']
$neutron_user_password = $neutron_config['keystone']['admin_password']
$neutron_metadata_proxy_secret = $neutron_config['metadata']['metadata_proxy_shared_secret']
$base_mac = 'fa:16:3e:00:00:00'
$neutron_db_dbname = 'neutron'
$service_plugins = ['neutron.services.l3_router.l3_router_plugin.L3RouterPlugin','neutron.services.metering.metering_plugin.MeteringPlugin']
$mechanism_drivers = 'openvswitch'
$service_endpoint = $::fuel_settings['management_vip']

#Nova
$nova_user_password = $::fuel_settings['nova']['user_password']
stage { 'repos':
  before => Stage['main']
}

class {'plugin_midonet::repos':
  stage => repos,
}
class {'plugin_midonet::controller':
} ->
exec { '/etc/init.d/tomcat6 restart':
}
@@ -0,0 +1,4 @@
sysctl::value { 'net.ipv4.ip_forward':
  value => '1'
}
@@ -0,0 +1,19 @@
$fuel_settings = parseyaml($astute_settings_yaml)
$nodes_hash = $::fuel_settings['nodes']
$node = filter_nodes($nodes_hash,'name',$::hostname)
$internal_address = $node[0]['internal_address']
$gateways = filter_nodes($nodes_hash,'role','midonet-gw')
$gateways_internal_addresses = nodes_to_hash($gateways,'name','internal_address')

stage { 'repos':
  before => Stage['main']
}

class {'plugin_midonet::repos':
  stage => repos,
}

class {'plugin_midonet::midolman':
}
@@ -0,0 +1,17 @@
$fuel_settings = parseyaml($astute_settings_yaml)
$nodes_hash = $::fuel_settings['nodes']
$primary_controller_nodes = filter_nodes($nodes_hash,'role','primary-controller')
$controllers = concat($primary_controller_nodes, filter_nodes($nodes_hash,'role','controller'))
$service_endpoint = $::fuel_settings['management_vip']
stage { 'repos':
  before => Stage['main']
}

class {'plugin_midonet::repos':
  stage => repos,
}

class {'plugin_midonet::midonetapi':
}
@@ -0,0 +1,25 @@
$fuel_settings = parseyaml($astute_settings_yaml)
$nodes_hash = $::fuel_settings['nodes']
$node = filter_nodes($nodes_hash,'name',$::hostname)
$internal_address = $node[0]['internal_address']
$gateways = filter_nodes($nodes_hash,'role','midonet-gw')
$gateways_internal_addresses = nodes_to_hash($gateways,'name','internal_address')

stage { 'repos':
} ->
stage { 'zookeeper':
} ->
stage { 'cassandra':
  before => Stage['main']
}

class {'plugin_midonet::repos':
  stage => repos,
}
class {'plugin_midonet::zookeeper':
  stage => zookeeper,
}
class {'plugin_midonet::cassandra':
  stage => cassandra,
}
@@ -0,0 +1,61 @@
# CentOS-Base.repo
#
# This file uses a new mirrorlist system developed by Lance Davis for CentOS.
# The mirror system uses the connecting IP address of the client and the
# update status of each mirror to pick mirrors that are updated to and
# geographically close to the client. You should use this for CentOS updates
# unless you are manually picking other mirrors.
#
# If the mirrorlist= does not work for you, as a fall back you can try the
# remarked out baseurl= line instead.
#
#

[base]
name=CentOS-$releasever - Base
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
#baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/
gpgcheck=1
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-centos4

#released updates
[update]
name=CentOS-$releasever - Updates
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates
#baseurl=http://mirror.centos.org/centos/$releasever/updates/$basearch/
gpgcheck=1
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-centos4

#packages used/produced in the build but not released
[addons]
name=CentOS-$releasever - Addons
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=addons
#baseurl=http://mirror.centos.org/centos/$releasever/addons/$basearch/
gpgcheck=1
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-centos4

#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras
#baseurl=http://mirror.centos.org/centos/$releasever/extras/$basearch/
gpgcheck=1
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-centos4

#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=centosplus
#baseurl=http://mirror.centos.org/centos/$releasever/centosplus/$basearch/
gpgcheck=1
enabled=0
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-centos4

#contrib - packages by Centos Users
[contrib]
name=CentOS-$releasever - Contrib
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=contrib
#baseurl=http://mirror.centos.org/centos/$releasever/contrib/$basearch/
gpgcheck=1
enabled=0
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-centos4
@@ -0,0 +1,27 @@
[epel]
name=Extra Packages for Enterprise Linux 6 - $basearch
#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6

[epel-debuginfo]
name=Extra Packages for Enterprise Linux 6 - $basearch - Debug
#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch/debug
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-6&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
gpgcheck=1

[epel-source]
name=Extra Packages for Enterprise Linux 6 - $basearch - Source
#baseurl=http://download.fedoraproject.org/pub/epel/6/SRPMS
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-source-6&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
gpgcheck=1
@@ -0,0 +1,16 @@
module Puppet::Parser::Functions
  newfunction(:create_tunnel_zone, :doc => <<-EOS
This function creates a tunnel zone based on the input nodes hash
EOS
  ) do |argv|
    nodes_hash = argv[0]
    tzone = `midonet-cli -e "create tunnel-zone name default type gre"`.strip
    list_host = `midonet-cli -e "host list"`.split("\n")
    list_host.map! { |line| [line.split(" ")[1], line.split(" ")[3]] }
    list_host = Hash[list_host]
    list_host.each do |uuid, fqdn|
      addr = nodes_hash[fqdn]
      `midonet-cli -e "tunnel-zone #{tzone} add member host #{uuid} address #{addr}"`
    end
  end
end
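For context, a hedged sketch of a call site for this parser function; no manifest in this commit actually invokes it. The function expects a { fqdn => ip } hash and shells out to midonet-cli, so it would have to run on a node where the CLI is already configured:

    # Hypothetical invocation from a site manifest, reusing the gateway hash
    # built there; keys must be host FQDNs for the address lookup to succeed.
    create_tunnel_zone($gateways_internal_addresses)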
@@ -0,0 +1,20 @@
module Puppet::Parser::Functions
  newfunction(:generate_zookeeper_hash, :type => :rvalue, :doc => <<-EOS
This function returns a Zookeeper configuration hash
EOS
  ) do |argv|
    nodes_hash = argv[0]
    role = argv[1]
    result = {}
    ip_list = []
    sorted_ctrls = nodes_hash.select { |node| node["role"] == role }
    sorted_ctrls.sort! { |a,b| a['uid'].to_i <=> b['uid'].to_i }
    # sorted_ctrls = nodes_hash.select { |node| node["role"] == 'primary-controller' } + sorted_ctrls
    sorted_ctrls.each do |ctrl|
      result[ctrl['fqdn']] = { 'address' => ctrl['internal_address'],
                               'id'      => sorted_ctrls.index(ctrl) + 1 }
    end
    return result
  end
end
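For reference, a minimal sketch of how the returned hash is shaped and consumed (as plugin_midonet::zookeeper does later in this commit); the node names and addresses here are invented for illustration:

    # Hypothetical result for two midonet-gw nodes with uids 1 and 2:
    #   { 'node-1.example.com' => { 'address' => '192.168.0.3', 'id' => 1 },
    #     'node-2.example.com' => { 'address' => '192.168.0.4', 'id' => 2 } }
    $zoo_nodes = generate_zookeeper_hash($::fuel_settings['nodes'], 'midonet-gw')
    $myid      = $zoo_nodes[$::fqdn]['id']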
@@ -0,0 +1,22 @@
Puppet::Type.type(:midolman_config).provide(
  :ini_setting,
  :parent => Puppet::Type.type(:ini_setting).provider(:ruby)
) do

  def section
    resource[:name].split('/', 2).first
  end

  def setting
    resource[:name].split('/', 2).last
  end

  def separator
    '='
  end

  def file_path
    '/etc/midolman/midolman.conf'
  end

end
@@ -0,0 +1,55 @@
Puppet::Type.type(:midonet_host).provide(:ruby) do
  optional_commands :midonet_cli => "midonet-cli"

  # Resolve the configured tunnel zone name to its UUID.
  def tunnel_zone
    res = ''
    tzones = midonet_cli('-e', "tunnel-zone list").split("\n")
    tzones.each do |zone|
      if zone.split(' ')[3] == resource[:tunnel_zone]
        res = zone.split(' ')[1]
      end
    end
    res
  end

  # Map { fqdn => host UUID } for all current members of the tunnel zone.
  def hosts
    res = {}
    list_host = midonet_cli('-e', "tunnel-zone #{tunnel_zone} list member").split("\n")
    list_host.each do |line|
      host_id = line.split(' ')[3]
      res[midonet_cli('-e', "show host #{host_id}").split(' ')[3]] = host_id
    end
    res
  end

  def exists?
    hosts.keys().include?(resource[:name])
  end

  # Look up the UUID of the host whose FQDN matches the resource name.
  def host_id
    res = ''
    list_host = midonet_cli('-e', "host list").split("\n")
    list_host.each do |line|
      if line.split(' ')[3] == resource[:name]
        res = line.split(' ')[1]
        break
      end
    end
    res
  end

  def create
    midonet_cli('-e', "tunnel-zone #{tunnel_zone} add member host #{host_id} address #{resource[:nodes][resource[:name]]}")
  end

  def destroy
  end
end
@@ -0,0 +1,15 @@
Puppet::Type.type(:midonet_tunnel_zone).provide(:ruby) do
  optional_commands :midonet_cli => "midonet-cli"

  def exists?
    tunnel_zones = midonet_cli('-e', "tunnel-zone list").split("\n")
    tunnel_zones.map! { |line| [line.split(" ")[1], line.split(" ")[3]] }
    tunnel_zones = Hash[tunnel_zones]
    tunnel_zones.values().include?(resource[:name])
  end

  def create
    midonet_cli('-e', "create tunnel-zone name #{resource[:name]} type gre")
  end

  def destroy
  end
end
@@ -0,0 +1,22 @@
Puppet::Type.type(:neutron_plugin_midonet).provide(
  :ini_setting,
  :parent => Puppet::Type.type(:ini_setting).provider(:ruby)
) do

  def section
    resource[:name].split('/', 2).first
  end

  def setting
    resource[:name].split('/', 2).last
  end

  def separator
    '='
  end

  def file_path
    '/etc/neutron/plugins/midonet/midonet.ini'
  end

end
@@ -0,0 +1,18 @@
Puppet::Type.newtype(:midolman_config) do

  ensurable

  newparam(:name, :namevar => true) do
    desc 'Section/setting name to manage from midolman.conf'
    newvalues(/\S+\/\S+/)
  end

  newproperty(:value) do
    desc 'The value of the setting to be defined.'
    munge do |value|
      value = value.to_s.strip
      value.capitalize! if value =~ /^(true|false)$/i
      value
    end
  end
end
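The resource title encodes the INI section and key, split on the first '/'. A minimal usage sketch with an invented ZooKeeper endpoint (plugin_midonet::midolman later in this commit sets the same keys from real node data):

    midolman_config { 'zookeeper/zookeeper_hosts':
      value => '192.168.0.3:2181',  # hypothetical address:port
    }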
@@ -0,0 +1,16 @@
Puppet::Type.newtype(:midonet_host) do

  ensurable

  newparam(:name, :namevar => true) do
    desc 'FQDN of midonet host'
  end

  newparam(:nodes) do
    desc 'Midonet nodes hash { fqdn => ip }'
  end

  newparam(:tunnel_zone) do
    desc 'Tunnel zone name'
  end
end
@@ -0,0 +1,8 @@
Puppet::Type.newtype(:midonet_tunnel_zone) do

  ensurable

  newparam(:name, :namevar => true) do
    desc 'Tunnel zone name'
  end
end
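A hedged sketch of how the two types above are meant to combine: create the zone, then register a host in it by FQDN. The FQDN and address are invented for illustration:

    midonet_tunnel_zone { 'default':
      ensure => present,
    }
    midonet_host { 'node-1.example.com':
      ensure      => present,
      nodes       => { 'node-1.example.com' => '192.168.0.3' },  # hypothetical { fqdn => ip }
      tunnel_zone => 'default',
    }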
@@ -0,0 +1,18 @@
Puppet::Type.newtype(:neutron_plugin_midonet) do

  ensurable

  newparam(:name, :namevar => true) do
    desc 'Section/setting name to manage from midonet.ini'
    newvalues(/\S+\/\S+/)
  end

  newproperty(:value) do
    desc 'The value of the setting to be defined.'
    munge do |value|
      value = value.to_s.strip
      value.capitalize! if value =~ /^(true|false)$/i
      value
    end
  end
end
@@ -0,0 +1,42 @@
class plugin_midonet::cassandra {
  package { 'dsc20':
    ensure => present,
  }

  file { '/etc/cassandra/conf/cassandra.yaml':
    ensure  => present,
    content => template('plugin_midonet/cassandra.yaml.erb'),
    require => Package['dsc20'],
    notify  => Service['cassandra'],
  }

  service { 'cassandra':
    ensure => running,
  }

  firewall {'550 cassandra ports':
    port   => '9042',
    proto  => 'tcp',
    action => 'accept',
  }
  firewall {'551 cassandra ports':
    port   => '7000',
    proto  => 'tcp',
    action => 'accept',
  }
  firewall {'552 cassandra ports':
    port   => '7199',
    proto  => 'tcp',
    action => 'accept',
  }
  firewall {'553 cassandra ports':
    port   => '9160',
    proto  => 'tcp',
    action => 'accept',
  }
  firewall {'554 cassandra ports':
    port   => '59471',
    proto  => 'tcp',
    action => 'accept',
  }
}
@@ -0,0 +1,54 @@
class plugin_midonet::compute_neutron {
  $neutron_config = $::fuel_settings['quantum_settings']
  class { 'nova::compute::neutron':
  }

  class { 'nova::network::neutron':
    neutron_admin_password => $neutron_config['keystone']['admin_password'],
    neutron_url            => "http://${::service_endpoint}:9696",
    neutron_admin_auth_url => "http://${::service_endpoint}:35357/v2.0",
  }

  service {'openstack-nova-compute':
    ensure => running,
  }
  Nova_config <||> ~> Service['openstack-nova-compute']
  service { 'libvirt':
    name   => 'libvirtd',
    ensure => running,
  }

  file_line { 'user_root':
    path   => '/etc/libvirt/qemu.conf',
    line   => 'user = "root"',
    notify => Service['libvirt'],
  }
  file_line { 'group_root':
    path   => '/etc/libvirt/qemu.conf',
    line   => 'group = "root"',
    notify => Service['libvirt'],
  }
  file_line { 'cgroup_controllers':
    path   => '/etc/libvirt/qemu.conf',
    line   => 'cgroup_controllers = [ "cpu", "devices", "memory", "blkio", "cpuset", "cpuacct" ]',
    notify => Service['libvirt'],
  }
  file_line { 'clear_emulator_capabilities':
    path   => '/etc/libvirt/qemu.conf',
    line   => 'clear_emulator_capabilities = 0',
    notify => Service['libvirt'],
  }

  file_line { 'cgroup_device_acl':
    path   => '/etc/libvirt/qemu.conf',
    line   => 'cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc","/dev/hpet", "/dev/vfio/vfio",
    "/dev/net/tun"
]',
    notify => Service['libvirt'],
  }

}
@@ -0,0 +1,67 @@
class plugin_midonet::controller {
  $midokura_user     = $::fuel_settings['midonet']['midokura_user']
  $midokura_password = $::fuel_settings['midonet']['midokura_password']

  include plugin_midonet::neutron
  Package['openstack-neutron-midonet'] -> Neutron_plugin_midonet <||> ~> Service<| title == 'neutron' |>
  Neutron_plugin_midonet <||> -> Exec<| title == 'neutron-db-sync_plugin' |>
  Neutron_plugin_midonet <||> -> Exec<| title == 'neutron-db-sync' |>
  Neutron_dhcp_agent_config<||> ~> Service<| title == 'neutron' |>

  # file { '/etc/yum.repos.d/midokura.repo':
  #   content => template('plugin_midonet/midokura.repo.erb'),
  # }

  file { '/var/run/netns':
    mode => '0755',
  }

  package { 'python-neutron-plugin-midonet':
    ensure => present,
  } ->
  package { 'python-midonetclient':
    ensure => present,
  } ->
  package { 'openstack-neutron-midonet':
    ensure => present,
  }

  neutron_plugin_midonet {
    'midonet/midonet_uri': value => "http://${::midonet_api_address}:8081/midonet-api";
    'midonet/username':    value => $::access_hash['user'];
    'midonet/password':    value => $::access_hash['password'];
    'midonet/project_id':  value => $::access_hash['tenant'];
    'midonet/auth_url':    value => "http://${::service_endpoint}:35357/v2.0";
  }

  file {'/etc/neutron/plugin.ini':
    ensure  => link,
    target  => '/etc/neutron/plugins/midonet/midonet.ini',
    require => Package['python-neutron-plugin-midonet'],
  }
  file { '/usr/lib/python2.6/site-packages/midonet':
    ensure  => link,
    target  => '/usr/lib/python2.7/site-packages/midonet',
    require => Package['python-neutron-plugin-midonet'],
  }

  file { '/root/.midonetrc':
    content => template('plugin_midonet/midonetrc.erb'),
  }

  # exec { 'drop_neutron_database':
  #   refreshonly => true,
  #   notify      => Service['neutron'],
  # }

  # neutron_dhcp_agent_config {
  #   'DEFAULT/enable_isolated_metadata': value => 'True';
  #   'DEFAULT/dhcp_driver':              value => 'neutron.plugins.midonet.agent.midonet_driver.DhcpNoOpDriver';
  #   'DEFAULT/interface_driver':         value => 'neutron.agent.linux.interface.MidonetInterfaceDriver';
  #   'DEFAULT/ovs_use_veth':             value => 'False';
  #   'DEFAULT/root_helper':              value => 'sudo /usr/local/bin/neutron-rootwrap /etc/neutron/rootwrap.conf';
  #   'DEFAULT/use_namespaces':           value => 'True';
  #   'DEFAULT/debug':                    value => 'True';
  # }

}
@@ -0,0 +1,84 @@
# Define: plugin_midonet::db
#
# This define creates a database instance, a user, and grants that user
# privileges to the database. It can also import SQL from a file in order to,
# for example, initialize a database schema.
#
# Since it requires class mysql::server, we assume all commands run as the
# root mysql user against the local mysql server.
#
# Parameters:
#   [*title*]         - mysql database name.
#   [*user*]          - username to create and grant access.
#   [*password*]      - user's password.
#   [*allowed_hosts*] - hosts allowed to access the database.
#   [*charset*]       - database charset.
#   [*host*]          - host for assigning privileges to user.
#   [*grant*]         - array of privileges to grant user.
#   [*enforce_sql*]   - whether to enforce or conditionally run sql on creation.
#   [*sql*]           - sql statement to run.
#
# Actions:
#
# Requires:
#
#   class mysql::server
#
# Sample Usage:
#
#  plugin_midonet::db { 'mydb':
#    user          => 'my_user',
#    password      => 'password',
#    allowed_hosts => ['%'],
#    host          => $::hostname,
#    grant         => ['all'],
#  }
#
define plugin_midonet::db (
  $user,
  $password,
  $allowed_hosts,
  $charset     = 'utf8',
  $host        = 'localhost',
  $grant       = 'all',
  $sql         = '',
  $enforce_sql = false,
) {

  database { $name:
    ensure   => present,
    charset  => $charset,
    provider => 'mysql',
  }

  database_user { "${user}@${host}":
    ensure        => present,
    password_hash => mysql_password($password),
    provider      => 'mysql',
    require       => Database[$name],
  }

  database_grant { "${user}@${host}/${name}":
    privileges => $grant,
    provider   => 'mysql',
    require    => Database_user["${user}@${host}"],
  }

  neutron::db::mysql::host_access { $allowed_hosts:
    user     => $user,
    password => $password,
    database => $name,
  }

  $refresh = ! $enforce_sql

  if $sql {
    exec{ "${name}-import":
      command     => "/usr/bin/mysql ${name} < ${sql}",
      logoutput   => true,
      refreshonly => $refresh,
      require     => Database_grant["${user}@${host}/${name}"],
      subscribe   => Database[$name],
    }
  }

}
@@ -0,0 +1,40 @@
define plugin_midonet::kern_module ($ensure) {
  $modulesfile = $::operatingsystem ? {
    debian => '/etc/modules',
    redhat => '/etc/rc.modules',
    centos => '/etc/rc.modules',
  }
  case $::operatingsystem {
    redhat: { file { '/etc/rc.modules': ensure => file, mode => '0755' } }
    centos: { file { '/etc/rc.modules': ensure => file, mode => '0755' } }
  }
  case $ensure {
    present: {
      exec { "insert_module_${name}":
        command => $::operatingsystem ? {
          debian => "/bin/echo '${name}' >> '${modulesfile}'",
          redhat => "/bin/echo '/sbin/modprobe ${name}' >> '${modulesfile}'",
          centos => "/bin/echo '/sbin/modprobe ${name}' >> '${modulesfile}'",
        },
        unless  => $::operatingsystem ? {
          debian => "/bin/grep -qFx '${name}' '${modulesfile}'",
          redhat => "/bin/grep -q '^/sbin/modprobe ${name}\$' '${modulesfile}'",
          centos => "/bin/grep -q '^/sbin/modprobe ${name}\$' '${modulesfile}'",
        },
      }
      exec { "/sbin/modprobe ${name}":
        unless => "/bin/grep -q '^${name} ' '/proc/modules'",
      }
    }
    absent: {
      exec { "/sbin/modprobe -r ${name}":
        onlyif => "/bin/grep -q '^${name} ' '/proc/modules'",
      }
      exec { "remove_module_${name}":
        command => $::operatingsystem ? {
          debian => "/usr/bin/perl -ni -e 'print unless /^\\Q${name}\\E\$/' '${modulesfile}'",
          redhat => "/usr/bin/perl -ni -e 'print unless /^\\Q/sbin/modprobe ${name}\\E\$/' '${modulesfile}'",
          centos => "/usr/bin/perl -ni -e 'print unless /^\\Q/sbin/modprobe ${name}\\E\$/' '${modulesfile}'",
        },
        onlyif  => $::operatingsystem ? {
          debian => "/bin/grep -qFx '${name}' '${modulesfile}'",
          redhat => "/bin/grep -q '^/sbin/modprobe ${name}\$' '${modulesfile}'",
          centos => "/bin/grep -q '^/sbin/modprobe ${name}\$' '${modulesfile}'",
        },
      }
    }
    default: { err("unknown ensure value ${ensure}") }
  }
}
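The define is consumed later in this commit by plugin_midonet::midolman to load the vhost_net module on compute nodes:

    plugin_midonet::kern_module { 'vhost_net':
      ensure => present,
    }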
@@ -0,0 +1,32 @@
class plugin_midonet::midolman {
  if $::fuel_settings['role'] == 'compute' {
    plugin_midonet::kern_module { 'vhost_net':
      ensure => present,
    }
  }
  $zoo_nodes = inline_template("<%= scope.lookupvar('::gateways_internal_addresses').collect { |name,info| info+':2181'}.join(',') %>")
  $cassandra_nodes = inline_template("<%= scope.lookupvar('::gateways_internal_addresses').values.join(',')%>")
  package { 'midolman':
    ensure => present,
  } ->
  midolman_config {
    'zookeeper/zookeeper_hosts':    value => $zoo_nodes;
    'cassandra/servers':            value => $cassandra_nodes;
    'cassandra/replication_factor': value => 3;
    'midolman/bgpd_binary':         value => '/usr/sbin';
  } ~>
  service { 'midolman':
    ensure => running,
  }

  if $::fuel_settings['role'] == 'midonet-gw' or $::fuel_settings['role'] == 'midonet-simplegw' {
    l23network::l3::ifconfig {$::fuel_settings['midonet']['bgb1_iface']:
      ipaddr        => 'none',
      check_by_ping => 'none',
    }
    l23network::l3::ifconfig {$::fuel_settings['midonet']['bgb2_iface']:
      ipaddr        => 'none',
      check_by_ping => 'none',
    }
  }
}
@@ -0,0 +1,39 @@
class plugin_midonet::midonet_agent {
  # include nova::params
  plugin_midonet::kern_module { 'vhost_net':
    ensure => present,
  }

  # package { 'midolman':
  #   ensure => present,
  # }
  # nova_config {
  #   'DEFAULT/libvirt_vif_driver': value => 'nova.virt.libvirt.vif.LibvirtGenericVIFDriver';
  #   'MIDONET/midonet_use_tunctl': value => "True";
  #   'MIDONET/midonet_uri':        value => "http://${::midonet_api_address}:8081/midonet-api";
  #   'MIDONET/username':           value => $::access_hash['user'];
  #   'MIDONET/password':           value => $::access_hash['password'];
  #   'MIDONET/project_id':         value => $::access_hash['tenant'];
  #   'MIDONET/auth_url':           value => "http://${::service_endpoint}:35357/v2.0";
  # }
  #
  # service { 'nova-compute':
  #   name   => $::nova::params::compute_service_name,
  #   ensure => running,
  # }

  # Nova_config <||> ~> Service['nova-compute']

  # $zoo_nodes = inline_template("<%= scope.lookupvar('::gateways_internal_addresses').collect { |name,info| info+':2181'}.join(',') %>")
  # $cassandra_nodes = inline_template("<%= scope.lookupvar('::gateways_internal_addresses').values.join(',')%>")
  #
  # midolman_config {
  #   'zookeeper/zookeeper_hosts':    value => $zoo_nodes;
  #   'cassandra/servers':            value => $cassandra_nodes;
  #   'cassandra/replication_factor': value => 3;
  # } ~>
  # service { 'midolman':
  #   ensure => running,
  # }

}
@@ -0,0 +1,75 @@
class plugin_midonet::midonetapi {
  $zoo_nodes = generate_zookeeper_hash($::fuel_settings['nodes'],'midonet-gw')
  $keystone_token = $::fuel_settings['keystone']['admin_token']
  $http_port = '8081'

  $primary_controller = $::fuel_settings['role'] ? { 'primary-controller'=>true, default=>false }

  class { 'cluster::haproxy_ocf':
    primary_controller => $primary_controller,
  }

  package { ['tomcat6', 'midonet-api']:
    ensure => present,
  }
  file { '/etc/tomcat6/server.xml':
    ensure  => present,
    content => template('plugin_midonet/server.xml.erb'),
    require => Package['tomcat6'],
  } ->
  file { '/etc/tomcat6/Catalina/localhost/midonet-api.xml':
    ensure  => present,
    content => template('plugin_midonet/midonet-api.xml.erb'),
    require => Package['tomcat6'],
  } ->
  file { '/usr/share/midonet-api/WEB-INF/web.xml':
    ensure  => present,
    content => template('plugin_midonet/web.xml.erb'),
    require => Package['midonet-api'],
  } ~>
  service { 'tomcat6':
    ensure  => running,
    require => Package['midonet-api','tomcat6'],
    # notify => Exec['/sbin/service tomcat6 restart'],
  }

  Haproxy::Service { use_include => true }
  Haproxy::Balancermember { use_include => true }

  Openstack::Ha::Haproxy_service {
    server_names        => filter_hash($::controllers, 'name'),
    ipaddresses         => filter_hash($::controllers, 'internal_address'),
    public_virtual_ip   => $::fuel_settings['public_vip'],
    internal_virtual_ip => $::fuel_settings['management_vip'],
  }

  openstack::ha::haproxy_service { 'midonetapi':
    order                  => 199,
    listen_port            => 8081,
    balancermember_port    => 8081,
    define_backups         => true,
    before_start           => true,
    public                 => true,
    haproxy_config_options => {
      'balance' => 'roundrobin',
      'option'  => ['httplog'],
    },
    balancermember_options => 'check',
  }

  # exec { '/sbin/service tomcat6 restart':
  #   require => Service['tomcat6'],
  # }

  firewall {'502 Midonet api':
    port   => '8081',
    proto  => 'tcp',
    action => 'accept',
  }

  # package { 'midonet-cp2':
  #   require => Service['tomcat6'],
  # }

}
@@ -0,0 +1,115 @@
class plugin_midonet::neutron {

  $primary_controller = $::fuel_settings['role'] ? { 'primary-controller'=>true, default=>false }
  if $primary_controller {
    if ($::neutron::params::server_package) {
      # Debian platforms
      Package<| title == 'neutron-server' |> ~> Exec['neutron-db-sync']
    } else {
      # RH platforms
      Package<| title == 'neutron' |> ~> Exec['neutron-db-sync']
    }
    exec { 'neutron-db-sync_plugin':
      command     => 'neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini upgrade head',
      path        => '/usr/bin',
      refreshonly => true,
      tries       => 10,
      # TODO(bogdando) contribute change to upstream:
      # new try_sleep param for sleep driven development (SDD)
      try_sleep   => 20,
    }
    #NOTE(bogdando) contribute change to upstream #1384133
    Neutron_config<||> -> Exec['neutron-db-sync']
    Exec['neutron-db-sync'] -> Service<| title == 'neutron-server' |>
  }

  plugin_midonet::db { $::neutron_db_dbname:
    user          => $::neutron_db_user,
    password      => $::neutron_db_password,
    allowed_hosts => [ '%', $::hostname ],
    host          => '127.0.0.1',
  }

  if $primary_controller {
    class { 'neutron::keystone::auth':
      password         => $::neutron_user_password,
      public_address   => $::fuel_settings['public_vip'],
      admin_address    => $::fuel_settings['management_vip'],
      internal_address => $::fuel_settings['management_vip'],
    }
  }

  class { 'cluster::haproxy_ocf':
    primary_controller => $primary_controller,
  }
  Haproxy::Service { use_include => true }
  Haproxy::Balancermember { use_include => true }

  Openstack::Ha::Haproxy_service {
    server_names        => filter_hash($::controllers, 'name'),
    ipaddresses         => filter_hash($::controllers, 'internal_address'),
    public_virtual_ip   => $::fuel_settings['public_vip'],
    internal_virtual_ip => $::fuel_settings['management_vip'],
  }

  class { 'openstack::ha::neutron': }
  class { 'openstack::network':
    network_provider    => $::network_provider,
    agents              => ['dhcp', 'metadata'],
    ha_agents           => false,
    verbose             => $::verbose,
    debug               => $::debug,
    use_syslog          => $::use_syslog,
    syslog_log_facility => $::syslog_log_facility_neutron,

    neutron_server   => true,
    neutron_db_uri   => "mysql://${::neutron_db_user}:${::neutron_db_password}@${::db_host}/${::neutron_db_dbname}?&read_timeout=60",
    public_address   => $::fuel_settings['public_vip'],
    internal_address => $::fuel_settings['management_vip'], # Could be this node or the internal VIP
    admin_address    => $::fuel_settings['management_vip'],
    nova_neutron     => true,
    base_mac         => $::base_mac,
    core_plugin      => 'midonet.neutron.plugin.MidonetPluginV2',
    service_plugins  => '',

    #ovs
    mechanism_drivers => $::mechanism_drivers,
    local_ip          => $::internal_address, # $::internal_address is this node
    # bridge_mappings     => $bridge_mappings,
    # network_vlan_ranges => $vlan_range,
    # enable_tunneling    => $enable_tunneling,
    # tunnel_id_ranges    => $tunnel_id_ranges,

    #Queue settings
    queue_provider => 'rabbitmq',
    amqp_hosts     => [$::amqp_hosts],
    amqp_user      => $::amqp_user,
    amqp_password  => $::amqp_password,

    # keystone
    admin_password => $::neutron_user_password,
    auth_host      => $::internal_address,
    auth_url       => "http://${::service_endpoint}:35357/v2.0",
    neutron_url    => "http://${::service_endpoint}:9696",

    #metadata
    shared_secret => $::neutron_metadata_proxy_secret,
    metadata_ip   => $::service_endpoint,

    #nova settings
    private_interface => false,
    public_interface  => $::public_int,
    fixed_range       => false,
    floating_range    => false,
    # network_manager => $network_manager,
    # network_config  => $config_overrides,
    create_networks   => false,
    # num_networks    => $num_networks,
    # network_size    => $network_size,
    # nameservers     => $nameservers,
    enable_nova_net     => false, # just set up networks, but don't start nova-network service on controllers
    nova_admin_password => $::nova_user_password,
    nova_url            => "http://${::service_endpoint}:8774/v2",
  }
}
@@ -0,0 +1,3 @@
class plugin_midonet::params {
  $zoo_hosts = generate_zookeeper_hash($::fuel_settings['nodes'],'midonet-gw')
}
@@ -0,0 +1,45 @@
class plugin_midonet::repos {
  include l23network::params

  package { 'openvswitch':
    name   => $::l23network::params::ovs_common_package_name,
    ensure => absent,
  } ->
  package { 'openvswitch-datapath':
    name   => $::l23network::params::ovs_datapath_package_name,
    ensure => absent,
  }

  file { '/etc/yum.repos.d/CentOS-Base.repo':
    ensure  => present,
    content => template('plugin_midonet/CentOS-Base.repo'),
  }

  file { '/etc/yum.repos.d/epel.repo':
    ensure  => present,
    content => template('plugin_midonet/epel.repo'),
  }

  yumrepo { 'midokura':
    # ensure => present,
    gpgcheck => 0,
    enabled  => 1,
    baseurl  => "http://${::fuel_settings['midonet']['repo_username']}:${::fuel_settings['midonet']['repo_password']}@yum.midokura.com/repo/v1.8/stable/RHEL/6/",
    # gpgkey => "http://<%= midokura_user %>:<%= midokura_password %>@yum.midokura.com/repo/RPM-GPG-KEY-midokura",
  }

  yumrepo { 'midokura_neutron_plugin':
    # ensure => present,
    gpgcheck => 0,
    enabled  => 1,
    baseurl  => "http://${::fuel_settings['midonet']['repo_username']}:${::fuel_settings['midonet']['repo_password']}@yum.midokura.com/repo/openstack-juno/stable/RHEL/6/",
    # gpgkey => "http://<%= midokura_user %>:<%= midokura_password %>@yum.midokura.com/repo/RPM-GPG-KEY-midokura",
  }

  yumrepo { 'datastax':
    # ensure => present,
    gpgcheck => 0,
    enabled  => 1,
    baseurl  => "http://rpm.datastax.com/community",
  }
}
@@ -0,0 +1,57 @@
class plugin_midonet::zookeeper {

  package {'java-1.7.0-openjdk-devel.x86_64':
    ensure => present,
  } ->
  package { 'zookeeper':
    ensure => present,
  } ->
  file { '/usr/java':
    ensure => directory,
  } ->
  file { '/usr/java/default':
    ensure => directory,
  } ->
  file { '/usr/java/default/bin':
    ensure => directory,
  } ->
  file { '/usr/java/default/bin/java':
    ensure => link,
    target => '/usr/lib/jvm/jre-1.7.0-openjdk.x86_64/bin/java',
  }

  $zoo_nodes = generate_zookeeper_hash($::fuel_settings['nodes'],'midonet-gw')

  file { '/etc/zookeeper/zoo.cfg':
    ensure  => present,
    content => template('plugin_midonet/zoo.cfg.erb'),
    require => Package['zookeeper'],
    notify  => Service['zookeeper'],
  }

  $myid = $zoo_nodes["${::fqdn}"]['id']
  file { '/var/lib/zookeeper/data':
    ensure  => directory,
    require => Package['zookeeper'],
    mode    => '0775',
    group   => 'hadoop',
  } ->
  file { '/var/lib/zookeeper/data/myid':
    ensure  => present,
    content => "${myid}",
  } ~>
  service { 'zookeeper':
    ensure => running,
  }

  firewall {'500 zookeeper ports':
    port   => '2888-3888',
    proto  => 'tcp',
    action => 'accept',
  }
  firewall {'501 zookeeper ports':
    port   => '2181',
    proto  => 'tcp',
    action => 'accept',
  }
}
@@ -0,0 +1,53 @@
# CentOS-Base.repo
#
# The mirror system uses the connecting IP address of the client and the
# update status of each mirror to pick mirrors that are updated to and
# geographically close to the client. You should use this for CentOS updates
# unless you are manually picking other mirrors.
#
# If the mirrorlist= does not work for you, as a fall back you can try the
# remarked out baseurl= line instead.
#
#

[base]
name=CentOS-$releasever - Base
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
#baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6

#released updates
[updates]
name=CentOS-$releasever - Updates
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates
#baseurl=http://mirror.centos.org/centos/$releasever/updates/$basearch/
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6

#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras
#baseurl=http://mirror.centos.org/centos/$releasever/extras/$basearch/
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6

#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=centosplus
#baseurl=http://mirror.centos.org/centos/$releasever/centosplus/$basearch/
gpgcheck=0
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6

#contrib - packages by Centos Users
[contrib]
name=CentOS-$releasever - Contrib
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=contrib
#baseurl=http://mirror.centos.org/centos/$releasever/contrib/$basearch/
gpgcheck=0
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
@@ -0,0 +1,704 @@
# Cassandra storage config YAML

# NOTE:
#   See http://wiki.apache.org/cassandra/StorageConfiguration for
#   full explanations of configuration directives
# /NOTE

# The name of the cluster. This is mainly used to prevent machines in
# one logical cluster from joining another.
cluster_name: 'Test Cluster'

# This defines the number of tokens randomly assigned to this node on the ring
# The more tokens, relative to other nodes, the larger the proportion of data
# that this node will store. You probably want all nodes to have the same number
# of tokens assuming they have equal hardware capability.
#
# If you leave this unspecified, Cassandra will use the default of 1 token for legacy compatibility,
# and will use the initial_token as described below.
#
# Specifying initial_token will override this setting on the node's initial start,
# on subsequent starts, this setting will apply even if initial token is set.
#
# If you already have a cluster with 1 token per node, and wish to migrate to
# multiple tokens per node, see http://wiki.apache.org/cassandra/Operations
num_tokens: 256

# initial_token allows you to specify tokens manually. While you can use # it with
# vnodes (num_tokens > 1, above) -- in which case you should provide a
# comma-separated list -- it's primarily used when adding nodes # to legacy clusters
# that do not have vnodes enabled.
# initial_token:

# May either be "true" or "false" to enable globally, or contain a list
# of data centers to enable per-datacenter.
# hinted_handoff_enabled: DC1,DC2
# See http://wiki.apache.org/cassandra/HintedHandoff
hinted_handoff_enabled: true
# this defines the maximum amount of time a dead host will have hints
# generated. After it has been dead this long, new hints for it will not be
# created until it has been seen alive and gone down again.
max_hint_window_in_ms: 10800000 # 3 hours
# Maximum throttle in KBs per second, per delivery thread. This will be
# reduced proportionally to the number of nodes in the cluster. (If there
# are two nodes in the cluster, each delivery thread will use the maximum
# rate; if there are three, each will throttle to half of the maximum,
# since we expect two nodes to be delivering hints simultaneously.)
hinted_handoff_throttle_in_kb: 1024
# Number of threads with which to deliver hints;
# Consider increasing this number when you have multi-dc deployments, since
# cross-dc handoff tends to be slower
max_hints_delivery_threads: 2

# Maximum throttle in KBs per second, total. This will be
# reduced proportionally to the number of nodes in the cluster.
batchlog_replay_throttle_in_kb: 1024

# Authentication backend, implementing IAuthenticator; used to identify users
# Out of the box, Cassandra provides org.apache.cassandra.auth.{AllowAllAuthenticator,
# PasswordAuthenticator}.
#
# - AllowAllAuthenticator performs no checks - set it to disable authentication.
# - PasswordAuthenticator relies on username/password pairs to authenticate
#   users. It keeps usernames and hashed passwords in system_auth.credentials table.
#   Please increase system_auth keyspace replication factor if you use this authenticator.
authenticator: AllowAllAuthenticator

# Authorization backend, implementing IAuthorizer; used to limit access/provide permissions
# Out of the box, Cassandra provides org.apache.cassandra.auth.{AllowAllAuthorizer,
# CassandraAuthorizer}.
#
# - AllowAllAuthorizer allows any action to any user - set it to disable authorization.
# - CassandraAuthorizer stores permissions in system_auth.permissions table. Please
#   increase system_auth keyspace replication factor if you use this authorizer.
authorizer: AllowAllAuthorizer

# Validity period for permissions cache (fetching permissions can be an
# expensive operation depending on the authorizer, CassandraAuthorizer is
# one example). Defaults to 2000, set to 0 to disable.
# Will be disabled automatically for AllowAllAuthorizer.
permissions_validity_in_ms: 2000

# Refresh interval for permissions cache (if enabled).
# After this interval, cache entries become eligible for refresh. Upon next
# access, an async reload is scheduled and the old value returned until it
# completes. If permissions_validity_in_ms is non-zero, then this must be
# also.
# Defaults to the same value as permissions_validity_in_ms.
# permissions_update_interval_in_ms: 1000

# The partitioner is responsible for distributing groups of rows (by
# partition key) across nodes in the cluster. You should leave this
# alone for new clusters. The partitioner can NOT be changed without
# reloading all data, so when upgrading you should set this to the
# same partitioner you were already using.
#
# Besides Murmur3Partitioner, partitioners included for backwards
# compatibility include RandomPartitioner, ByteOrderedPartitioner, and
# OrderPreservingPartitioner.
#
partitioner: org.apache.cassandra.dht.Murmur3Partitioner

# Directories where Cassandra should store data on disk. Cassandra
# will spread data evenly across them, subject to the granularity of
# the configured compaction strategy.
data_file_directories:
    - /var/lib/cassandra/data

# commit log
commitlog_directory: /var/lib/cassandra/commitlog

# policy for data disk failures:
# stop_paranoid: shut down gossip and Thrift even for single-sstable errors.
# stop: shut down gossip and Thrift, leaving the node effectively dead, but
#       can still be inspected via JMX.
# best_effort: stop using the failed disk and respond to requests based on
#              remaining available sstables. This means you WILL see obsolete
#              data at CL.ONE!
# ignore: ignore fatal errors and let requests fail, as in pre-1.2 Cassandra
disk_failure_policy: stop

# policy for commit disk failures:
# stop: shut down gossip and Thrift, leaving the node effectively dead, but
#       can still be inspected via JMX.
# stop_commit: shutdown the commit log, letting writes collect but
#              continuing to service reads, as in pre-2.0.5 Cassandra
# ignore: ignore fatal errors and let the batches fail
commit_failure_policy: stop

# Maximum size of the key cache in memory.
#
# Each key cache hit saves 1 seek and each row cache hit saves 2 seeks at the
# minimum, sometimes more. The key cache is fairly tiny for the amount of
# time it saves, so it's worthwhile to use it at large numbers.
# The row cache saves even more time, but must contain the entire row,
# so it is extremely space-intensive. It's best to only use the
# row cache if you have hot rows or static rows.
#
# NOTE: if you reduce the size, you may not get you hottest keys loaded on startup.
#
# Default value is empty to make it "auto" (min(5% of Heap (in MB), 100MB)). Set to 0 to disable key cache.
key_cache_size_in_mb:

# Duration in seconds after which Cassandra should
|
||||
# save the key cache. Caches are saved to saved_caches_directory as
|
||||
# specified in this configuration file.
|
||||
#
|
||||
# Saved caches greatly improve cold-start speeds, and is relatively cheap in
|
||||
# terms of I/O for the key cache. Row cache saving is much more expensive and
|
||||
# has limited use.
|
||||
#
|
||||
# Default is 14400 or 4 hours.
|
||||
key_cache_save_period: 14400
|
||||
|
||||
# Number of keys from the key cache to save
|
||||
# Disabled by default, meaning all keys are going to be saved
|
||||
# key_cache_keys_to_save: 100
|
||||
|
||||
# Maximum size of the row cache in memory.
|
||||
# NOTE: if you reduce the size, you may not get you hottest keys loaded on startup.
|
||||
#
|
||||
# Default value is 0, to disable row caching.
|
||||
row_cache_size_in_mb: 0
|
||||
|
||||
# Duration in seconds after which Cassandra should
|
||||
# safe the row cache. Caches are saved to saved_caches_directory as specified
|
||||
# in this configuration file.
|
||||
#
|
||||
# Saved caches greatly improve cold-start speeds, and is relatively cheap in
|
||||
# terms of I/O for the key cache. Row cache saving is much more expensive and
|
||||
# has limited use.
|
||||
#
|
||||
# Default is 0 to disable saving the row cache.
|
||||
row_cache_save_period: 0
|
||||
|
||||
# Number of keys from the row cache to save
|
||||
# Disabled by default, meaning all keys are going to be saved
|
||||
# row_cache_keys_to_save: 100
|
||||
|
||||
# The off-heap memory allocator. Affects storage engine metadata as
|
||||
# well as caches. Experiments show that JEMAlloc saves some memory
|
||||
# than the native GCC allocator (i.e., JEMalloc is more
|
||||
# fragmentation-resistant).
|
||||
#
|
||||
# Supported values are: NativeAllocator, JEMallocAllocator
|
||||
#
|
||||
# If you intend to use JEMallocAllocator you have to install JEMalloc as library and
|
||||
# modify cassandra-env.sh as directed in the file.
|
||||
#
|
||||
# Defaults to NativeAllocator
|
||||
# memory_allocator: NativeAllocator
|
||||
|
||||
# saved caches
|
||||
saved_caches_directory: /var/lib/cassandra/saved_caches
|
||||
|
||||
# commitlog_sync may be either "periodic" or "batch."
|
||||
# When in batch mode, Cassandra won't ack writes until the commit log
|
||||
# has been fsynced to disk. It will wait up to
|
||||
# commitlog_sync_batch_window_in_ms milliseconds for other writes, before
|
||||
# performing the sync.
|
||||
#
|
||||
# commitlog_sync: batch
|
||||
# commitlog_sync_batch_window_in_ms: 50
|
||||
#
|
||||
# the other option is "periodic" where writes may be acked immediately
|
||||
# and the CommitLog is simply synced every commitlog_sync_period_in_ms
|
||||
# milliseconds. By default this allows 1024*(CPU cores) pending
|
||||
# entries on the commitlog queue. If you are writing very large blobs,
|
||||
# you should reduce that; 16*cores works reasonably well for 1MB blobs.
|
||||
# It should be at least as large as the concurrent_writes setting.
|
||||
commitlog_sync: periodic
|
||||
commitlog_sync_period_in_ms: 10000
|
||||
# commitlog_periodic_queue_size:
|
||||
|
||||
# The size of the individual commitlog file segments. A commitlog
|
||||
# segment may be archived, deleted, or recycled once all the data
|
||||
# in it (potentially from each columnfamily in the system) has been
|
||||
# flushed to sstables.
|
||||
#
|
||||
# The default size is 32, which is almost always fine, but if you are
|
||||
# archiving commitlog segments (see commitlog_archiving.properties),
|
||||
# then you probably want a finer granularity of archiving; 8 or 16 MB
|
||||
# is reasonable.
|
||||
commitlog_segment_size_in_mb: 32
|
||||
|
||||
# any class that implements the SeedProvider interface and has a
|
||||
# constructor that takes a Map<String, String> of parameters will do.
|
||||
seed_provider:
|
||||
# Addresses of hosts that are deemed contact points.
|
||||
# Cassandra nodes use this list of hosts to find each other and learn
|
||||
# the topology of the ring. You must change this if you are running
|
||||
# multiple nodes!
|
||||
- class_name: org.apache.cassandra.locator.SimpleSeedProvider
|
||||
parameters:
|
||||
# seeds is actually a comma-delimited list of addresses.
|
||||
# Ex: "<ip1>,<ip2>,<ip3>"
|
||||
- seeds: "<%= scope.lookupvar('::gateways_internal_addresses').values.join(',')%>"
|
||||
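          # Illustration only (hypothetical addresses): with two gateway
          # nodes at 10.20.0.4 and 10.20.0.5, the ERB join above renders
          # this entry as:
          #   - seeds: "10.20.0.4,10.20.0.5"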

# For workloads with more data than can fit in memory, Cassandra's
# bottleneck will be reads that need to fetch data from
# disk. "concurrent_reads" should be set to (16 * number_of_drives) in
# order to allow the operations to enqueue low enough in the stack
# that the OS and drives can reorder them.
#
# On the other hand, since writes are almost never IO bound, the ideal
# number of "concurrent_writes" is dependent on the number of cores in
# your system; (8 * number_of_cores) is a good rule of thumb.
concurrent_reads: 32
concurrent_writes: 32
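# Worked example for the rules of thumb above (illustration only): with a
# single data drive, 16 * 1 = 16 would suffice for concurrent_reads, so
# the value of 32 corresponds to two drives; concurrent_writes: 32
# matches 8 * 4 cores. Adjust both if the node hardware differs.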

# Total memory to use for sstable-reading buffers. Defaults to
# the smaller of 1/4 of heap or 512MB.
# file_cache_size_in_mb: 512

# Total memory to use for memtables. Cassandra will flush the largest
# memtable when this much memory is used.
# If omitted, Cassandra will set it to 1/4 of the heap.
# memtable_total_space_in_mb: 2048

# Total space to use for commitlogs. Since commitlog segments are
# mmapped, and hence use up address space, the default size is 32
# on 32-bit JVMs, and 1024 on 64-bit JVMs.
#
# If space gets above this value (it will round up to the next nearest
# segment multiple), Cassandra will flush every dirty CF in the oldest
# segment and remove it. So a small total commitlog space will tend
# to cause more flush activity on less-active columnfamilies.
# commitlog_total_space_in_mb: 4096

# This sets the number of memtable flush writer threads. These will
# be blocked by disk io, and each one will hold a memtable in memory
# while blocked. If you have a large heap and many data directories,
# you can increase this value for better flush performance.
# By default this will be set to the number of data directories defined.
# memtable_flush_writers: 1

# the number of full memtables to allow pending flush, that is,
# waiting for a writer thread. At a minimum, this should be set to
# the maximum number of secondary indexes created on a single CF.
memtable_flush_queue_size: 4

# Whether to, when doing sequential writing, fsync() at intervals in
# order to force the operating system to flush the dirty
# buffers. Enable this to avoid sudden dirty buffer flushing from
# impacting read latencies. Almost always a good idea on SSDs; not
# necessarily on platters.
trickle_fsync: false
trickle_fsync_interval_in_kb: 10240

# TCP port, for commands and data
storage_port: 7000

# SSL port, for encrypted communication. Unused unless enabled in
# encryption_options
ssl_storage_port: 7001

# Address to bind to and tell other Cassandra nodes to connect to. You
# _must_ change this if you want multiple nodes to be able to
# communicate!
#
# Leaving it blank leaves it up to InetAddress.getLocalHost(). This
# will always do the Right Thing _if_ the node is properly configured
# (hostname, name resolution, etc), and the Right Thing is to use the
# address associated with the hostname (it might not be).
#
# Setting this to 0.0.0.0 is always wrong.
listen_address: <%= scope.lookupvar('::internal_address') %>

# Address to broadcast to other Cassandra nodes
# Leaving this blank will set it to the same value as listen_address
# broadcast_address: 1.2.3.4

# Internode authentication backend, implementing IInternodeAuthenticator;
# used to allow/disallow connections from peer nodes.
# internode_authenticator: org.apache.cassandra.auth.AllowAllInternodeAuthenticator

# Whether to start the native transport server.
# Please note that the address on which the native transport is bound is the
# same as the rpc_address. The port however is different and specified below.
start_native_transport: true
# port for the CQL native transport to listen for clients on
native_transport_port: 9042
# The maximum threads for handling requests when the native transport is used.
# This is similar to rpc_max_threads though the default differs slightly (and
# there is no native_transport_min_threads; idle threads will always be stopped
# after 30 seconds).
# native_transport_max_threads: 128
#
# The maximum size of allowed frame. Frames (requests) larger than this will
# be rejected as invalid. The default is 256MB.
# native_transport_max_frame_size_in_mb: 256

# Whether to start the thrift rpc server.
start_rpc: true

# The address to bind the Thrift RPC service and native transport
# server -- clients connect here.
#
# Leaving this blank has the same effect it does for ListenAddress
# (i.e. it will be based on the configured hostname of the node).
#
# Note that unlike ListenAddress above, it is allowed to specify 0.0.0.0
# here if you want to listen on all interfaces, but that will break clients
# that rely on node auto-discovery.
rpc_address: <%= scope.lookupvar('::internal_address') %>
# port for Thrift to listen for clients on
rpc_port: 9160

# enable or disable keepalive on rpc/native connections
rpc_keepalive: true

# Cassandra provides two out-of-the-box options for the RPC Server:
#
# sync  -> One thread per thrift connection. For a very large number of clients, memory
#          will be your limiting factor. On a 64 bit JVM, 180KB is the minimum stack size
#          per thread, and that will correspond to your use of virtual memory (but physical memory
#          may be limited depending on use of stack space).
#
# hsha  -> Stands for "half synchronous, half asynchronous." All thrift clients are handled
#          asynchronously using a small number of threads that does not vary with the number
#          of thrift clients (and thus scales well to many clients). The rpc requests are still
#          synchronous (one thread per active request). If hsha is selected then it is essential
#          that rpc_max_threads is changed from the default value of unlimited.
#
# The default is sync because on Windows hsha is about 30% slower. On Linux,
# sync/hsha performance is about the same, with hsha of course using less memory.
#
# Alternatively, you can provide your own RPC server by providing the fully-qualified class name
# of an o.a.c.t.TServerFactory that can create an instance of it.
rpc_server_type: sync

# Uncomment rpc_min|max_thread to set request pool size limits.
#
# Regardless of your choice of RPC server (see above), the number of maximum requests in the
# RPC thread pool dictates how many concurrent requests are possible (but if you are using the sync
# RPC server, it also dictates the number of clients that can be connected at all).
#
# The default is unlimited and thus provides no protection against clients overwhelming the server. You are
# encouraged to set a maximum that makes sense for you in production, but do keep in mind that
# rpc_max_threads represents the maximum number of client requests this server may execute concurrently.
#
# rpc_min_threads: 16
# rpc_max_threads: 2048

# uncomment to set socket buffer sizes on rpc connections
# rpc_send_buff_size_in_bytes:
# rpc_recv_buff_size_in_bytes:

# Uncomment to set socket buffer size for internode communication
# Note that when setting this, the buffer size is limited by net.core.wmem_max
# and when not setting it, it is defined by net.ipv4.tcp_wmem
# See:
# /proc/sys/net/core/wmem_max
# /proc/sys/net/core/rmem_max
# /proc/sys/net/ipv4/tcp_wmem
# /proc/sys/net/ipv4/tcp_rmem
# and: man tcp
# internode_send_buff_size_in_bytes:
# internode_recv_buff_size_in_bytes:

# Frame size for thrift (maximum message length).
thrift_framed_transport_size_in_mb: 15

# Set to true to have Cassandra create a hard link to each sstable
# flushed or streamed locally in a backups/ subdirectory of the
# keyspace data. Removing these links is the operator's
# responsibility.
incremental_backups: false

# Whether or not to take a snapshot before each compaction. Be
# careful using this option, since Cassandra won't clean up the
# snapshots for you. Mostly useful if you're paranoid when there
# is a data format change.
snapshot_before_compaction: false

# Whether or not a snapshot is taken of the data before keyspace truncation
# or dropping of column families. The STRONGLY advised default of true
# should be used to provide data safety. If you set this flag to false, you will
# lose data on truncation or drop.
auto_snapshot: true

# When executing a scan, within or across a partition, we need to keep the
# tombstones seen in memory so we can return them to the coordinator, which
# will use them to make sure other replicas also know about the deleted rows.
# With workloads that generate a lot of tombstones, this can cause performance
# problems and even exhaust the server heap.
# (http://www.datastax.com/dev/blog/cassandra-anti-patterns-queues-and-queue-like-datasets)
# Adjust the thresholds here if you understand the dangers and want to
# scan more tombstones anyway. These thresholds may also be adjusted at runtime
# using the StorageService mbean.
tombstone_warn_threshold: 1000
tombstone_failure_threshold: 100000

# Granularity of the collation index of rows within a partition.
# Increase if your rows are large, or if you have a very large
# number of rows per partition. The competing goals are these:
#   1) a smaller granularity means more index entries are generated
#      and looking up rows within the partition by collation column
#      is faster
#   2) but, Cassandra will keep the collation index in memory for hot
#      rows (as part of the key cache), so a larger granularity means
#      you can cache more hot rows
column_index_size_in_kb: 64


# Log WARN on any batch size exceeding this value. 5kb per batch by default.
# Caution should be taken on increasing the size of this threshold as it can lead to node instability.
batch_size_warn_threshold_in_kb: 5

# Size limit for rows being compacted in memory. Larger rows will spill
# over to disk and use a slower two-pass compaction process. A message
# will be logged specifying the row key.
in_memory_compaction_limit_in_mb: 64

# Number of simultaneous compactions to allow, NOT including
# validation "compactions" for anti-entropy repair. Simultaneous
# compactions can help preserve read performance in a mixed read/write
# workload, by mitigating the tendency of small sstables to accumulate
# during a single long-running compaction. The default is usually
# fine and if you experience problems with compaction running too
# slowly or too fast, you should look at
# compaction_throughput_mb_per_sec first.
#
# concurrent_compactors defaults to the number of cores.
# Uncomment to make compaction mono-threaded, the pre-0.8 default.
# concurrent_compactors: 1

# Multi-threaded compaction. When enabled, each compaction will use
# up to one thread per core, plus one thread per sstable being merged.
# This is usually only useful for SSD-based hardware: otherwise,
# your concern is usually to get compaction to do LESS i/o (see:
# compaction_throughput_mb_per_sec), not more.
multithreaded_compaction: false

# Throttles compaction to the given total throughput across the entire
# system. The faster you insert data, the faster you need to compact in
# order to keep the sstable count down, but in general, setting this to
# 16 to 32 times the rate you are inserting data is more than sufficient.
# Setting this to 0 disables throttling. Note that this accounts for all types
# of compaction, including validation compaction.
compaction_throughput_mb_per_sec: 16

# Track cached row keys during compaction, and re-cache their new
# positions in the compacted sstable. Disable if you use really large
# key caches.
compaction_preheat_key_cache: true

# Throttles all outbound streaming file transfers on this node to the
# given total throughput in Mbps. This is necessary because Cassandra does
# mostly sequential IO when streaming data during bootstrap or repair, which
# can lead to saturating the network connection and degrading rpc performance.
# When unset, the default is 200 Mbps or 25 MB/s.
# stream_throughput_outbound_megabits_per_sec: 200

# Throttles all streaming file transfers between datacenters.
# This setting allows users to throttle inter-dc stream throughput in addition
# to throttling all network stream traffic as configured with
# stream_throughput_outbound_megabits_per_sec
# inter_dc_stream_throughput_outbound_megabits_per_sec:

# How long the coordinator should wait for read operations to complete
read_request_timeout_in_ms: 5000
# How long the coordinator should wait for seq or index scans to complete
range_request_timeout_in_ms: 10000
# How long the coordinator should wait for writes to complete
write_request_timeout_in_ms: 2000
# How long a coordinator should continue to retry a CAS operation
# that contends with other proposals for the same row
cas_contention_timeout_in_ms: 1000
# How long the coordinator should wait for truncates to complete
# (This can be much longer, because unless auto_snapshot is disabled
# we need to flush first so we can snapshot before removing the data.)
truncate_request_timeout_in_ms: 60000
# The default timeout for other, miscellaneous operations
request_timeout_in_ms: 10000

# Enable operation timeout information exchange between nodes to accurately
# measure request timeouts. If disabled, replicas will assume that requests
# were forwarded to them instantly by the coordinator, which means that
# under overload conditions we will waste that much extra time processing
# already-timed-out requests.
#
# Warning: before enabling this property make sure NTP is installed
# and the times are synchronized between the nodes.
cross_node_timeout: false

# Enable socket timeout for streaming operations.
# When a timeout occurs during streaming, streaming is retried from the start
# of the current file. This _can_ involve re-streaming a significant amount of
# data, so you should avoid setting the value too low.
# Default value is 0, which never times out streams.
# streaming_socket_timeout_in_ms: 0

# phi value that must be reached for a host to be marked down.
# most users should never need to adjust this.
# phi_convict_threshold: 8

# endpoint_snitch -- Set this to a class that implements
# IEndpointSnitch. The snitch has two functions:
# - it teaches Cassandra enough about your network topology to route
#   requests efficiently
# - it allows Cassandra to spread replicas around your cluster to avoid
#   correlated failures. It does this by grouping machines into
#   "datacenters" and "racks." Cassandra will do its best not to have
#   more than one replica on the same "rack" (which may not actually
#   be a physical location)
#
# IF YOU CHANGE THE SNITCH AFTER DATA IS INSERTED INTO THE CLUSTER,
# YOU MUST RUN A FULL REPAIR, SINCE THE SNITCH AFFECTS WHERE REPLICAS
# ARE PLACED.
#
# Out of the box, Cassandra provides
#  - SimpleSnitch:
#    Treats Strategy order as proximity. This can improve cache
#    locality when disabling read repair. Only appropriate for
#    single-datacenter deployments.
#  - GossipingPropertyFileSnitch:
#    This should be your go-to snitch for production use. The rack
#    and datacenter for the local node are defined in
#    cassandra-rackdc.properties and propagated to other nodes via
#    gossip. If cassandra-topology.properties exists, it is used as a
#    fallback, allowing migration from the PropertyFileSnitch.
#  - PropertyFileSnitch:
#    Proximity is determined by rack and data center, which are
#    explicitly configured in cassandra-topology.properties.
#  - Ec2Snitch:
#    Appropriate for EC2 deployments in a single Region. Loads Region
#    and Availability Zone information from the EC2 API. The Region is
#    treated as the datacenter, and the Availability Zone as the rack.
#    Only private IPs are used, so this will not work across multiple
#    Regions.
#  - Ec2MultiRegionSnitch:
#    Uses public IPs as broadcast_address to allow cross-region
#    connectivity. (Thus, you should set seed addresses to the public
#    IP as well.) You will need to open the storage_port or
#    ssl_storage_port on the public IP firewall. (For intra-Region
#    traffic, Cassandra will switch to the private IP after
#    establishing a connection.)
#  - RackInferringSnitch:
#    Proximity is determined by rack and data center, which are
#    assumed to correspond to the 3rd and 2nd octet of each node's IP
#    address, respectively. Unless this happens to match your
#    deployment conventions, this is best used as an example of
#    writing a custom Snitch class and is provided in that spirit.
#
# You can use a custom Snitch by setting this to the full class name
# of the snitch, which will be assumed to be on your classpath.
endpoint_snitch: SimpleSnitch

# controls how often to perform the more expensive part of host score
# calculation
dynamic_snitch_update_interval_in_ms: 100
# controls how often to reset all host scores, allowing a bad host to
# possibly recover
dynamic_snitch_reset_interval_in_ms: 600000
# if set greater than zero and read_repair_chance is < 1.0, this will allow
# 'pinning' of replicas to hosts in order to increase cache capacity.
# The badness threshold will control how much worse the pinned host has to be
# before the dynamic snitch will prefer other replicas over it. This is
# expressed as a double which represents a percentage. Thus, a value of
# 0.2 means Cassandra would continue to prefer the static snitch values
# until the pinned host was 20% worse than the fastest.
dynamic_snitch_badness_threshold: 0.1

# request_scheduler -- Set this to a class that implements
# RequestScheduler, which will schedule incoming client requests
# according to the specific policy. This is useful for multi-tenancy
# with a single Cassandra cluster.
# NOTE: This is specifically for requests from the client and does
# not affect inter-node communication.
# org.apache.cassandra.scheduler.NoScheduler - No scheduling takes place
# org.apache.cassandra.scheduler.RoundRobinScheduler - Round robin of
# client requests to a node with a separate queue for each
# request_scheduler_id. The scheduler is further customized by
# request_scheduler_options as described below.
request_scheduler: org.apache.cassandra.scheduler.NoScheduler

# Scheduler Options vary based on the type of scheduler
# NoScheduler - Has no options
# RoundRobin
#  - throttle_limit -- The throttle_limit is the number of in-flight
#                      requests per client. Requests beyond
#                      that limit are queued up until
#                      running requests can complete.
#                      The value of 80 here is twice the number of
#                      concurrent_reads + concurrent_writes.
#  - default_weight -- default_weight is optional and allows for
#                      overriding the default which is 1.
#  - weights -- Weights are optional and will default to 1 or the
#               overridden default_weight. The weight translates into how
#               many requests are handled during each turn of the
#               RoundRobin, based on the scheduler id.
#
# request_scheduler_options:
#    throttle_limit: 80
#    default_weight: 5
#    weights:
#      Keyspace1: 1
#      Keyspace2: 5

# request_scheduler_id -- An identifier based on which to perform
# the request scheduling. Currently the only valid option is keyspace.
# request_scheduler_id: keyspace

# Enable or disable inter-node encryption
# Default settings are TLS v1, RSA 1024-bit keys (it is imperative that
# users generate their own keys), with TLS_RSA_WITH_AES_128_CBC_SHA as the cipher
# suite for authentication, key exchange and encryption of the actual data transfers.
# Use the DHE/ECDHE ciphers if running in FIPS 140 compliant mode.
# NOTE: No custom encryption options are enabled at the moment
# The available internode options are: all, none, dc, rack
#
# If set to dc, Cassandra will encrypt the traffic between the DCs
# If set to rack, Cassandra will encrypt the traffic between the racks
#
# The passwords used in these options must match the passwords used when generating
# the keystore and truststore. For instructions on generating these files, see:
# http://download.oracle.com/javase/6/docs/technotes/guides/security/jsse/JSSERefGuide.html#CreateKeystore
#
server_encryption_options:
    internode_encryption: none
    keystore: conf/.keystore
    keystore_password: cassandra
    truststore: conf/.truststore
    truststore_password: cassandra
    # More advanced defaults below:
    # protocol: TLS
    # algorithm: SunX509
    # store_type: JKS
    # cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA]
    # require_client_auth: false

# enable or disable client/server encryption.
client_encryption_options:
    enabled: false
    keystore: conf/.keystore
    keystore_password: cassandra
    # require_client_auth: false
    # Set truststore and truststore_password if require_client_auth is true
    # truststore: conf/.truststore
    # truststore_password: cassandra
    # More advanced defaults below:
    # protocol: TLS
    # algorithm: SunX509
    # store_type: JKS
    # cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA]

# internode_compression controls whether traffic between nodes is
# compressed.
# can be:  all  - all traffic is compressed
#          dc   - traffic between different datacenters is compressed
#          none - nothing is compressed.
internode_compression: all

# Enable or disable tcp_nodelay for inter-dc communication.
# Disabling it will result in larger (but fewer) network packets being sent,
# reducing overhead from the TCP protocol itself, at the cost of increasing
# latency if you block for cross-datacenter responses.
inter_dc_tcp_nodelay: false

# Enable or disable kernel page cache preheating from contents of the key cache after compaction.
# When enabled, it preheats only the first "page" (4KB) of each row to optimize
# for sequential access. Note: This could be harmful for fat rows; see CASSANDRA-4937
# for further details on that topic.
preheat_kernel_page_cache: false
@@ -0,0 +1,3 @@
#!/bin/bash
midonet-cli -e "create tunnel-zone name default type gre"
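For a quick post-run check that the tunnel zone exists, something along these lines can be used (a sketch; midonet-cli reads the [cli] credentials file shown later in this change):

#!/bin/bash
# "list tunnel-zone" should now show the zone created above.
midonet-cli -e "list tunnel-zone" | grep default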
@@ -0,0 +1,26 @@
[epel]
name=Extra Packages for Enterprise Linux 6 - $basearch
#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6

[epel-debuginfo]
name=Extra Packages for Enterprise Linux 6 - $basearch - Debug
#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch/debug
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-6&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
gpgcheck=0

[epel-source]
name=Extra Packages for Enterprise Linux 6 - $basearch - Source
#baseurl=http://download.fedoraproject.org/pub/epel/6/SRPMS
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-source-6&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
gpgcheck=0
@@ -0,0 +1,13 @@
[Midokura]
name=Midokura Repo v1.5
baseurl=http://<%= midokura_user %>:<%= midokura_password %>@yum.midokura.com/repo/rc/v1.5/RHEL/6/
gpgcheck=0
gpgkey=http://<%= midokura_user %>:<%= midokura_password %>@yum.midokura.com/repo/RPMGPGKEYmidokura
enabled=1

[MidokuraNeutronPlugin]
name=MidokuraNeutronPlugin Repository
baseurl=http://<%= midokura_user %>:<%= midokura_password %>@yum.midokura.com/repo/openstackicehouse/stable/RHEL/6/
gpgcheck=0
gpgkey=http://<%= midokura_user %>:<%= midokura_password %>@yum.midokura.com/repo/RPMGPGKEYmidokura
enabled=1
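To confirm that the rendered credentials actually grant access before a deployment, a target node can be spot-checked with yum (a sketch; 'Midokura' is the repo id defined above):

#!/bin/bash
# Query only the Midokura repo; any package listing means auth succeeded.
yum --disablerepo='*' --enablerepo='Midokura' list available | head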
@@ -0,0 +1,6 @@
<Context
  path="/midonet-api"
  docBase="/usr/share/midonet-api"
  antiResourceLocking="false"
  privileged="true"
/>
@@ -0,0 +1,11 @@
[DATABASE]
sql_connection = <%= @sql_connection %>
sql_max_retries = 100

[MIDONET]
midonet_uri = http://<%= scope.lookupvar('::midonet_api_address') %>:8081/midonet-api
username = <%= scope.lookupvar('::access_hash')['user'] %>
password = <%= scope.lookupvar('::access_hash')['password'] %>
project_id = <%= scope.lookupvar('::access_hash')['tenant'] %>
auth_url = http://<%= scope.lookupvar('::service_endpoint') %>:35357/v2.0
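Once the API war is deployed, the rendered midonet_uri can be smoke-tested from any node (a sketch; 10.20.0.2 is a hypothetical stand-in for the real midonet_api_address):

#!/bin/bash
# Any HTTP status line here confirms Tomcat is serving the midonet-api app.
curl -si http://10.20.0.2:8081/midonet-api | head -n 1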
@@ -0,0 +1,5 @@
[cli]
api_url=http://<%= scope.lookupvar('::fuel_settings')['public_vip'] %>:8081/midonet-api
username=<%= scope.lookupvar('::access_hash')['user'] %>
password=<%= scope.lookupvar('::access_hash')['password'] %>
project_id=<%= scope.lookupvar('::access_hash')['tenant'] %>
@@ -0,0 +1,148 @@
<?xml version='1.0' encoding='utf-8'?>
<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<!-- Note: A "Server" is not itself a "Container", so you may not
     define subcomponents such as "Valves" at this level.
     Documentation at /docs/config/server.html
-->
<Server port="8005" shutdown="SHUTDOWN">

  <!-- APR library loader. Documentation at /docs/apr.html -->
  <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
  <!-- Initialize Jasper before the webapps are loaded. Documentation at /docs/jasper-howto.html -->
  <Listener className="org.apache.catalina.core.JasperListener" />
  <!-- Prevent memory leaks due to use of particular java/javax APIs -->
  <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" />
  <!-- JMX Support for the Tomcat server. Documentation at /docs/non-existent.html -->
  <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" />
  <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />

  <!-- Global JNDI resources
       Documentation at /docs/jndi-resources-howto.html
  -->
  <GlobalNamingResources>
    <!-- Editable user database that can also be used by
         UserDatabaseRealm to authenticate users
    -->
    <Resource name="UserDatabase" auth="Container"
              type="org.apache.catalina.UserDatabase"
              description="User database that can be updated and saved"
              factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
              pathname="conf/tomcat-users.xml" />
  </GlobalNamingResources>

  <!-- A "Service" is a collection of one or more "Connectors" that share
       a single "Container".  Note: A "Service" is not itself a "Container",
       so you may not define subcomponents such as "Valves" at this level.
       Documentation at /docs/config/service.html
  -->
  <Service name="Catalina">

    <!-- The connectors can use a shared executor; you can define one or more named thread pools -->
    <!--
    <Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
        maxThreads="150" minSpareThreads="4"/>
    -->


    <!-- A "Connector" represents an endpoint by which requests are received
         and responses are returned. Documentation at :
         Java HTTP Connector: /docs/config/http.html (blocking & non-blocking)
         Java AJP Connector: /docs/config/ajp.html
         APR (HTTP/AJP) Connector: /docs/apr.html
         Define a non-SSL HTTP/1.1 Connector on port 8080
    -->
    <Connector port="<%= @http_port %>" protocol="HTTP/1.1"
               connectionTimeout="20000"
               redirectPort="8443" />
    <!-- A "Connector" using the shared thread pool -->
    <!--
    <Connector executor="tomcatThreadPool"
               port="8080" protocol="HTTP/1.1"
               connectionTimeout="20000"
               redirectPort="8443" />
    -->
    <!-- Define a SSL HTTP/1.1 Connector on port 8443.
         This connector uses the JSSE configuration; when using APR, the
         connector should use the OpenSSL style configuration
         described in the APR documentation -->
    <!--
    <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
               maxThreads="150" scheme="https" secure="true"
               clientAuth="false" sslProtocol="TLS" />
    -->

    <!-- Define an AJP 1.3 Connector on port 8009 -->
    <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />


    <!-- An Engine represents the entry point (within Catalina) that processes
         every request.  The Engine implementation for Tomcat stand alone
         analyzes the HTTP headers included with the request, and passes them
         on to the appropriate Host (virtual host).
         Documentation at /docs/config/engine.html -->

    <!-- You should set jvmRoute to support load-balancing via AJP, i.e.:
    <Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1">
    -->
    <Engine name="Catalina" defaultHost="localhost">

      <!-- For clustering, please take a look at documentation at:
           /docs/cluster-howto.html  (simple how to)
           /docs/config/cluster.html (reference documentation) -->
      <!--
      <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
      -->

      <!-- The request dumper valve dumps useful debugging information about
           the request and response data received and sent by Tomcat.
           Documentation at: /docs/config/valve.html -->
      <!--
      <Valve className="org.apache.catalina.valves.RequestDumperValve"/>
      -->

      <!-- This Realm uses the UserDatabase configured in the global JNDI
           resources under the key "UserDatabase".  Any edits
           that are performed against this UserDatabase are immediately
           available for use by the Realm. -->
      <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
             resourceName="UserDatabase"/>

      <!-- Define the default virtual host
           Note: XML Schema validation will not work with Xerces 2.2.
      -->
      <Host name="localhost" appBase="webapps"
            unpackWARs="true" autoDeploy="true"
            xmlValidation="false" xmlNamespaceAware="false">

        <!-- SingleSignOn valve, share authentication between web applications
             Documentation at: /docs/config/valve.html -->
        <!--
        <Valve className="org.apache.catalina.authenticator.SingleSignOn" />
        -->

        <!-- Access log valve example: processes all requests.
             Documentation at: /docs/config/valve.html -->
        <!--
        <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
               prefix="localhost_access_log." suffix=".txt" pattern="common" resolveHosts="false"/>
        -->

      </Host>
    </Engine>
  </Service>
</Server>
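After Puppet applies this template, the rendered connector port can be checked on the controller (a sketch; 8081 matches the midonet-api URLs used elsewhere in this change, assuming that is what @http_port expands to):

#!/bin/bash
# Tomcat should be listening on the rendered HTTP connector port.
netstat -lnt | grep :8081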
@@ -0,0 +1,2 @@
#!/bin/bash
/sbin/service zookeeper start
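A minimal liveness probe for the freshly started server (a sketch; uses ZooKeeper's standard four-letter-word protocol against the clientPort configured in zoo.cfg below):

#!/bin/bash
# A healthy ZooKeeper answers "imok".
echo ruok | nc localhost 2181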
@@ -0,0 +1,10 @@
#!/bin/bash
# Look for a running ZooKeeper JVM; exclude the grep itself from the match.
/bin/ps -ef | /bin/grep java | /bin/grep zookeeper | /bin/grep -v grep
RETCODE=$?
if [ $RETCODE -eq 0 ]
then
    /bin/kill `/bin/ps -ef | /bin/grep java | /bin/grep zookeeper | /bin/grep -v grep | /bin/awk '{print $2}'`
fi
# Clear the on-disk state so the node can be re-initialized cleanly.
rm -rf /var/lib/zookeeper/data/version-2
rm -f /var/lib/zookeeper/data/zookeeper_server.pid
@@ -0,0 +1,155 @@
<!DOCTYPE web-app PUBLIC
 "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN"
 "http://java.sun.com/dtd/web-app_2_3.dtd" >

<web-app>
  <display-name>MidoNet API</display-name>

  <!-- REST API configuration -->
  <!-- This value overrides the default base URI. This is typically set if
       you are proxying the API server and the base URI that the clients use
       to access the API is different from the actual server base URI. -->
  <context-param>
    <param-name>rest_api-base_uri</param-name>
    <param-value>http://<%= scope.lookupvar('::fuel_settings')['public_vip'] %>:8081/midonet-api</param-value>
  </context-param>

  <!-- CORS configuration -->
  <context-param>
    <param-name>cors-access_control_allow_origin</param-name>
    <param-value>*</param-value>
  </context-param>
  <context-param>
    <param-name>cors-access_control_allow_headers</param-name>
    <param-value>Origin, X-Auth-Token, Content-Type, Accept, Authorization</param-value>
  </context-param>
  <context-param>
    <param-name>cors-access_control_allow_methods</param-name>
    <param-value>GET, POST, PUT, DELETE, OPTIONS</param-value>
  </context-param>
  <context-param>
    <param-name>cors-access_control_expose_headers</param-name>
    <param-value>Location</param-value>
  </context-param>

  <!-- Auth configuration -->
  <context-param>
    <param-name>auth-auth_provider</param-name>
    <!-- Specify the class path of the auth service -->
    <param-value>
      org.midonet.api.auth.keystone.v2_0.KeystoneService
    </param-value>
  </context-param>
  <context-param>
    <param-name>auth-admin_role</param-name>
    <param-value>admin</param-value>
  </context-param>

  <!-- Mock auth configuration -->
  <context-param>
    <param-name>mock_auth-admin_token</param-name>
    <param-value>999888777666</param-value>
  </context-param>
  <context-param>
    <param-name>mock_auth-tenant_admin_token</param-name>
    <param-value>999888777666</param-value>
  </context-param>
  <context-param>
    <param-name>mock_auth-tenant_user_token</param-name>
    <param-value>999888777666</param-value>
  </context-param>

  <!-- Keystone configuration -->
  <context-param>
    <param-name>keystone-service_protocol</param-name>
    <param-value>http</param-value>
  </context-param>
  <context-param>
    <param-name>keystone-service_host</param-name>
    <param-value><%= scope.lookupvar('::service_endpoint') %></param-value>
  </context-param>
  <context-param>
    <param-name>keystone-service_port</param-name>
    <param-value>35357</param-value>
  </context-param>
  <context-param>
    <param-name>keystone-admin_token</param-name>
    <param-value><%= @keystone_token %></param-value>
  </context-param>
  <!-- This tenant name is used to get the scoped token from Keystone, and
       should be the tenant name of the user that owns the token sent in the
       request -->
  <context-param>
    <param-name>keystone-tenant_name</param-name>
    <param-value>admin</param-value>
  </context-param>

  <!-- CloudStack auth configuration -->
  <context-param>
    <param-name>cloudstack-api_base_uri</param-name>
    <param-value>http://127.0.0.1:8080</param-value>
  </context-param>
  <context-param>
    <param-name>cloudstack-api_path</param-name>
    <param-value>/client/api?</param-value>
  </context-param>
  <context-param>
    <param-name>cloudstack-api_key</param-name>
    <param-value></param-value>
  </context-param>
  <context-param>
    <param-name>cloudstack-secret_key</param-name>
    <param-value></param-value>
  </context-param>

  <!-- Zookeeper configuration -->
  <!-- The following parameters should match the ones in midolman.conf
       except 'use_mock' -->
  <context-param>
    <param-name>zookeeper-use_mock</param-name>
    <param-value>false</param-value>
  </context-param>
  <context-param>
    <param-name>zookeeper-zookeeper_hosts</param-name>
    <!-- comma-separated list of ZooKeeper nodes (host:port) -->
    <param-value><%= @zoo_nodes.collect { |name,info| info['address']+':2181'}.join(',') %></param-value>
  </context-param>
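  <!-- Illustration only (hypothetical addresses): with ZooKeeper running on
       10.20.0.4 and 10.20.0.5, the collect/join above renders the value as
       "10.20.0.4:2181,10.20.0.5:2181". -->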
  <context-param>
    <param-name>zookeeper-session_timeout</param-name>
    <param-value>30000</param-value>
  </context-param>
  <context-param>
    <param-name>zookeeper-midolman_root_key</param-name>
    <param-value>/midonet/v1</param-value>
  </context-param>
  <context-param>
    <param-name>zookeeper-curator_enabled</param-name>
    <param-value>true</param-value>
  </context-param>

  <!-- VXLAN gateway configuration -->
  <context-param>
    <param-name>midobrain-vxgw_enabled</param-name>
    <param-value>false</param-value>
  </context-param>

  <!-- Servlet listener -->
  <listener>
    <listener-class>
      <!-- Use Jersey's Guice compatible context listener -->
      org.midonet.api.servlet.JerseyGuiceServletContextListener
    </listener-class>
  </listener>

  <!-- Servlet filter -->
  <filter>
    <!-- Filter to enable Guice -->
    <filter-name>Guice Filter</filter-name>
    <filter-class>com.google.inject.servlet.GuiceFilter</filter-class>
  </filter>
  <filter-mapping>
    <filter-name>Guice Filter</filter-name>
    <url-pattern>/*</url-pattern>
  </filter-mapping>

</web-app>
@@ -0,0 +1,28 @@
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage; /tmp here is just
# for example's sake.
dataDir=/var/lib/zookeeper/data
# the port at which the clients will connect
clientPort=2181
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
<%- @zoo_nodes.each do |node,info| %>
server.<%= info['id'] %>=<%= info['address'] %>:2888:3888
<%- end %>
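For two gateway nodes (hypothetical ids and addresses), the ERB loop above renders one quorum line per peer, using ZooKeeper's standard peer and leader-election ports:

server.1=10.20.0.4:2888:3888
server.2=10.20.0.5:2888:3888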
@@ -0,0 +1,8 @@
file { '/tmp/start_zookeeper.sh':
  ensure  => present,
  content => template('plugin_midonet/start_zookeeper.sh'),
} ->
exec { '/bin/bash /tmp/start_zookeeper.sh':
}
@@ -0,0 +1,10 @@
file { '/root/stop_zookeeper.sh':
  ensure  => present,
  content => template('plugin_midonet/stop_zookeeper.sh'),
  # notify => Exec['stop'],
} ~>
exec { 'stop':
  command     => '/bin/bash /root/stop_zookeeper.sh',
  refreshonly => true,
}
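Either of these task manifests can be exercised outside a full deployment (a sketch; the module path mirrors the puppet_modules value used in tasks.yaml below):

#!/bin/bash
puppet apply --modulepath='puppet/:/etc/puppet/modules/' stop_zookeeper.pp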
@@ -0,0 +1,64 @@
$fuel_settings = parseyaml($astute_settings_yaml)
$nodes_hash = $::fuel_settings['nodes']
$primary_controller_nodes = filter_nodes($nodes_hash,'role','primary-controller')
$controllers = concat($primary_controller_nodes, filter_nodes($nodes_hash,'role','controller'))
$db_gateways = filter_nodes($nodes_hash,'role','midonet-gw')
$gateways = filter_nodes($nodes_hash,'role','midonet-simplegw')
$computes = filter_nodes($nodes_hash,'role','compute')

$midonet_nodes1 = concat($controllers,$db_gateways)
$midonet_nodes2 = concat($gateways,$computes)
$midonet_nodes = concat($midonet_nodes1,$midonet_nodes2)

$nodes_adresses = nodes_to_hash($midonet_nodes,'fqdn','internal_address')
$access_hash = $::fuel_settings['access']
$service_endpoint = $::fuel_settings['management_vip']
$neutron_config = $::fuel_settings['quantum_settings']

Nova_config<||> -> Exec['/etc/init.d/openstack-nova-api restart']

nova_config {
  'DEFAULT/enabled_apis':                         value => 'ec2,osapi_compute,metadata';
  'DEFAULT/service_neutron_metadata_proxy':       value => 'true';
  'DEFAULT/neutron_metadata_proxy_shared_secret': value => $neutron_config['metadata']['metadata_proxy_shared_secret'];
}
exec { '/etc/init.d/openstack-nova-api restart':
}
if $fuel_settings['role'] == 'primary-controller' {
  $nodes_fqdn = keys($nodes_adresses)
  midonet_tunnel_zone { 'default':
    ensure => present,
  } ->
  midonet_host { $nodes_fqdn:
    ensure      => present,
    nodes       => $nodes_adresses,
    tunnel_zone => 'default',
    require     => Midonet_tunnel_zone['default'],
  }
  # create_tunnel_zone($nodes_adresses)
}

Neutron_dhcp_agent_config<||> ~> Service['neutron-dhcp-agent']
Neutron_dhcp_agent_config<||> ~> Service['neutron-metadata-agent']

service { 'neutron-dhcp-agent':
  ensure => running,
}
service { 'neutron-metadata-agent':
  ensure => running,
}
neutron_dhcp_agent_config {
  'DEFAULT/enable_isolated_metadata': value => 'True';
  'DEFAULT/dhcp_driver':              value => 'midonet.neutron.agent.midonet_driver.DhcpNoOpDriver';
  'DEFAULT/interface_driver':         value => 'neutron.agent.linux.interface.MidonetInterfaceDriver';
  'DEFAULT/ovs_use_veth':             value => 'False';
  'DEFAULT/root_helper':              value => 'sudo /usr/local/bin/neutron-rootwrap /etc/neutron/rootwrap.conf';
  'DEFAULT/use_namespaces':           value => 'True';
  'DEFAULT/debug':                    value => 'False';
  'midonet/midonet_uri':              value => "http://${::service_endpoint}:8081/midonet-api";
  'midonet/username':                 value => $::access_hash['user'];
  'midonet/password':                 value => $::access_hash['password'];
  'midonet/project_id':               value => $::access_hash['tenant'];
  'midonet/auth_url':                 value => "http://${::service_endpoint}:35357/v2.0";
}
@@ -0,0 +1,25 @@
attributes:
  repo_username:
    value: 'getty'
    label: 'Midokura Repo User'
    description: 'Username for Midokura repositories'
    weight: 25
    type: "text"
  repo_password:
    value: 'getty'
    label: 'Midokura Repo Password'
    description: 'Password for Midokura repositories'
    weight: 35
    type: "password"
  bgb1_iface:
    value: ''
    label: 'BGP VLAN'
    description: 'VLAN interface for BGP peering'
    weight: 45
    type: "text"
  bgb2_iface:
    value: ''
    label: 'BGP VLAN'
    description: 'VLAN interface for BGP peering'
    weight: 45
    type: "text"
@@ -0,0 +1,36 @@
# Plugin name
name: midonet
# Human-readable name for your plugin
title: Neutron MidoNet plugin
# Plugin version
version: 1.0.0
# Description
description: Enables the MidoNet plugin for Neutron
# Required fuel version
fuel_version: ['6.0','6.0.1']

# The plugin is compatible with releases in the list
releases:
  - os: ubuntu
    version: 2014.2-6.0
    mode: ['ha', 'multinode']
    deployment_scripts_path: deployment_scripts/
    repository_path: repositories/ubuntu
  - os: centos
    version: 2014.2-6.0
    mode: ['ha', 'multinode']
    deployment_scripts_path: deployment_scripts/
    repository_path: repositories/centos
  - os: ubuntu
    version: 2014.2.2-6.0.1
    mode: ['ha', 'multinode']
    deployment_scripts_path: deployment_scripts/
    repository_path: repositories/ubuntu
  - os: centos
    version: 2014.2.2-6.0.1
    mode: ['ha', 'multinode']
    deployment_scripts_path: deployment_scripts/
    repository_path: repositories/centos

# Version of plugin package
package_version: '1.0.0'
@@ -0,0 +1,5 @@
#!/bin/bash

# Add here any actions that are required before the plugin build,
# such as building packages or downloading packages from mirrors.
# The script should return 0 if there were no errors.
@@ -0,0 +1,84 @@
# These tasks will be applied on the matching nodes; you can also
# specify several roles, for example ['cinder', 'compute'] will be
# applied only on cinder and compute nodes
- role: '*'
  stage: pre_deployment
  type: puppet
  parameters:
    puppet_manifest: enable_ip_forward.pp
    puppet_modules: "puppet/:/etc/puppet/modules/"
    timeout: 360
- role: ['midonet-gw']
  stage: post_deployment
  type: puppet
  parameters:
    puppet_manifest: midonetdb_site.pp
    puppet_modules: "puppet/:/etc/puppet/modules/"
    timeout: 360
  priority: 100
- role: ['midonet-gw']
  stage: post_deployment
  type: puppet
  parameters:
    puppet_manifest: stop_zookeeper.pp
    puppet_modules: "puppet/:/etc/puppet/modules/"
    timeout: 360
  priority: 200
- role: ['midonet-gw']
  stage: post_deployment
  type: puppet
  parameters:
    puppet_manifest: start_zookeeper.pp
    puppet_modules: "puppet/:/etc/puppet/modules/"
    timeout: 360
  priority: 300
- role: ['controller']
  stage: post_deployment
  type: puppet
  parameters:
    puppet_manifest: midonetapi_site.pp
    puppet_modules: "puppet/:/etc/puppet/modules/"
    timeout: 360
  priority: 400
- role: ['controller','midonet-gw','compute','midonet-simplegw']
  stage: post_deployment
  type: puppet
  parameters:
    puppet_manifest: midolman_site.pp
    puppet_modules: "puppet/:/etc/puppet/modules/"
    timeout: 360
  priority: 500
- role: ['controller']
  stage: post_deployment
  type: puppet
  parameters:
    puppet_manifest: controller_site.pp
    puppet_modules: "puppet/:/etc/puppet/modules/"
    timeout: 360
  priority: 600
- role: ['controller']
  stage: post_deployment
  type: puppet
  parameters:
    puppet_manifest: tunnels_site.pp
    puppet_modules: "puppet/:/etc/puppet/modules/"
    timeout: 3600
  priority: 800
- role: ['compute']
  stage: post_deployment
  type: puppet
  parameters:
    puppet_manifest: compute_site.pp
    puppet_modules: "puppet/:/etc/puppet/modules/"
    timeout: 360
  priority: 900
- role: '*'
  stage: post_deployment
  type: puppet
  parameters:
    puppet_manifest: cleanup.pp
    puppet_modules: "puppet/:/etc/puppet/modules/"
    timeout: 360
  priority: 1000
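With metadata.yaml, environment_config.yaml, and tasks.yaml in place, the installable plugin archive is produced with the standard fuel-plugin-builder workflow (a sketch; assumes fpb is installed on the build host):

#!/bin/bash
# Run from the plugin's root directory; pre_build_hook executes first.
fpb --build .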