Update cookbooks from Tsinghua's lab

Change-Id: I4e41542e6dfeebcb7c998d7b06b6814b76d3f8b0
Weidong Shao 2014-10-02 23:28:04 +00:00
parent 248f2f73f1
commit 224d05cc26
103 changed files with 3422 additions and 298 deletions

View File

@ -17,7 +17,7 @@
# limitations under the License.
#
-default['build_essential']['compiletime'] = false
+default['build_essential']['compiletime'] = true
case node['platform_family']
when "mac_os_x"

View File

@ -0,0 +1,7 @@
site :opscode
metadata
group :integration do
cookbook 'apt'
end

View File

@ -0,0 +1,17 @@
ceph
====
v0.2.0 (2014-03-03)
-------------------
- Add tests and related fixes.
- Add ceph-extra
- Fix search feature
- Refactor RPM part
- Add iSCSI tgt
v0.1.0 (2013-07-18)
-------------------
- Initial changelog

View File

@ -0,0 +1,14 @@
source 'https://rubygems.org'
gem 'chef', '~> 11'
gem 'berkshelf', '~> 2.0.10'
group :test do
gem 'foodcritic', '~> 3.0'
gem 'rubocop', '~> 0.23.0'
end
group :integration do
gem 'test-kitchen', '~> 1.1.1'
gem 'kitchen-vagrant', '~> 0.14'
end

201
chef/cookbooks/ceph/LICENSE Normal file
View File

@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@ -0,0 +1,102 @@
# Chef cookbook [![Build Status](https://travis-ci.org/ceph/ceph-cookbook.svg?branch=master)](https://travis-ci.org/ceph/ceph-cookbook) [![Gitter chat](https://badges.gitter.im/ceph/ceph-cookbook.png)](https://gitter.im/ceph/ceph-cookbook)
## DESCRIPTION
Installs and configures Ceph, a distributed network storage and filesystem designed to provide excellent performance, reliability, and scalability.
The current version focuses on deploying Monitors and OSDs on Ubuntu.
For documentation on how to use this cookbook, refer to the [USAGE](#USAGE) section.
For help, use [Gitter chat](https://gitter.im/ceph/ceph-cookbook), the [mailing list](mailto:ceph-users-join@lists.ceph.com), or [issues](https://github.com/ceph/ceph-cookbook/issues).
## REQUIREMENTS
### Chef
>= 11.6.0
### Platform
Tested as working:
* Ubuntu Precise (12.04)
### Cookbooks
The ceph cookbook requires the following cookbooks from Opscode:
https://github.com/opscode/cookbooks
* apt
* apache2
## ATTRIBUTES
### Ceph Rados Gateway
* node[:ceph][:radosgw][:api_fqdn]
* node[:ceph][:radosgw][:admin_email]
* node[:ceph][:radosgw][:rgw_addr]
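As an illustration only (the role name and values below are hypothetical, not part of this cookbook), these attributes can be overridden in a role or environment file:

```ruby
# Hypothetical role, e.g. roles/ceph-radosgw-custom.rb
name 'ceph-radosgw-custom'
description 'Overrides for the Rados Gateway attributes listed above'
override_attributes(
  'ceph' => {
    'radosgw' => {
      'api_fqdn'    => 'rgw.example.com', # assumed FQDN
      'admin_email' => 'ops@example.com', # assumed contact address
      'rgw_addr'    => '*:80'             # cookbook default, shown for completeness
    }
  }
)
```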
## TEMPLATES
## USAGE
Ceph cluster design is beyond the scope of this README; please consult the
public wiki and mailing lists, visit our IRC channel, or contact Inktank:
http://ceph.com/docs/master
http://ceph.com/resources/mailing-list-irc/
http://www.inktank.com/
### Ceph Monitor
Ceph monitor nodes should use the ceph-mon role.
Includes:
* ceph::default
* ceph::conf
### Ceph Metadata Server
Ceph metadata server nodes should use the ceph-mds role.
Includes:
* ceph::default
### Ceph OSD
Ceph OSD nodes should use the ceph-osd role.
Includes:
* ceph::default
* ceph::conf
### Ceph Rados Gateway
Ceph Rados Gateway nodes should use the ceph-radosgw role.
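A minimal sketch of what one of these roles could look like, using ceph-mon as an example (the actual role files are not shown in this diff; the fsid reuses the test value from this cookbook's .kitchen.yml and should be replaced with your own):

```ruby
# Hypothetical roles/ceph-mon.rb
name 'ceph-mon'
description 'Ceph monitor node'
run_list(
  'recipe[ceph::repo]',
  'recipe[ceph::mon]'
)
default_attributes(
  'ceph' => {
    'config' => {
      'fsid' => 'ae3f1d03-bacd-4a90-b869-1a4fabb107f2' # replace with your cluster fsid
    }
  }
)
```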
## LICENSE AND AUTHORS
* Author: Kyle Bader <kyle.bader@dreamhost.com>
* Copyright 2013, DreamHost Web Hosting and Inktank Storage Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@ -0,0 +1,48 @@
#!/usr/bin/env rake
# Style tests. Rubocop and Foodcritic
namespace :style do
begin
require 'rubocop/rake_task'
desc 'Run Ruby style checks'
RuboCop::RakeTask.new(:ruby)
rescue LoadError
puts '>>>>> Rubocop gem not loaded, omitting tasks' unless ENV['CI']
end
begin
require 'foodcritic'
desc 'Run Chef style checks'
FoodCritic::Rake::LintTask.new(:chef) do |t|
t.options = {
fail_tags: ['any'],
tags: ['~FC003'],
chef_version: '11.6.0'
}
end
rescue LoadError
puts '>>>>> foodcritic gem not loaded, omitting tasks' unless ENV['CI']
end
end
desc 'Run all style checks'
task style: ['style:chef', 'style:ruby']
# Integration tests. Kitchen.ci
namespace :integration do
begin
require 'kitchen/rake_tasks'
desc 'Run kitchen integration tests'
Kitchen::RakeTasks.new
rescue LoadError
puts '>>>>> Kitchen gem not loaded, omitting tasks' unless ENV['CI']
end
end
desc 'Run all tests on Travis'
task travis: ['style']
# Default
task default: ['style', 'integration:kitchen:all']

View File

@ -0,0 +1,10 @@
default['ceph']['cephfs_mount'] = '/ceph'
case node['platform_family']
when 'debian'
packages = ['ceph-fs-common']
packages += debug_packages(packages) if node['ceph']['install_debug']
default['ceph']['cephfs']['packages'] = packages
else
default['ceph']['cephfs']['packages'] = []
end

View File

@ -0,0 +1,7 @@
default['ceph']['config'] = {}
default['ceph']['config-sections'] = {}
default['ceph']['config']['keystone']['rgw keystone accepted roles'] = 'admin, _member_'
default['ceph']['config']['keystone']['rgw keystone token cache size'] = 500
default['ceph']['config']['keystone']['rgw keystone revocation interval'] = 600
default['ceph']['config']['keystone']['nss db path'] = '/var/ceph/nss'
default['ceph']['config']['keystone']['rgw keystone admin token'] = 'openstack_identity_bootstrap_token'

View File

@ -0,0 +1,24 @@
default['ceph']['install_debug'] = false
default['ceph']['encrypted_data_bags'] = false
default['ceph']['install_repo'] = true
case node['platform']
when 'ubuntu'
default['ceph']['init_style'] = 'upstart'
else
default['ceph']['init_style'] = 'sysvinit'
end
case node['platform_family']
when 'debian'
packages = ['ceph-common']
packages += debug_packages(packages) if node['ceph']['install_debug']
default['ceph']['packages'] = packages
when 'rhel', 'fedora'
packages = ['ceph']
packages += debug_packages(packages) if node['ceph']['install_debug']
default['ceph']['packages'] = packages
else
default['ceph']['packages'] = []
end

View File

@ -0,0 +1,12 @@
include_attribute 'ceph'
default['ceph']['mds']['init_style'] = node['ceph']['init_style']
case node['platform_family']
when 'debian'
packages = ['ceph-mds']
packages += debug_packages(packages) if node['ceph']['install_debug']
default['ceph']['mds']['packages'] = packages
else
default['ceph']['mds']['packages'] = []
end

View File

@ -0,0 +1,15 @@
include_attribute 'ceph'
default['ceph']['mon']['init_style'] = node['ceph']['init_style']
default['ceph']['mon']['secret_file'] = '/etc/chef/secrets/ceph_mon'
default['ceph']['default_pools'] = ['data', 'metadata', 'rbd']
case node['platform_family']
when 'debian', 'rhel', 'fedora'
packages = ['ceph']
packages += debug_packages(packages) if node['ceph']['install_debug']
default['ceph']['mon']['packages'] = packages
else
default['ceph']['mon']['packages'] = []
end

View File

@ -0,0 +1,14 @@
include_attribute 'ceph'
default['ceph']['osd']['init_style'] = node['ceph']['init_style']
default['ceph']['osd']['secret_file'] = '/etc/chef/secrets/ceph_osd'
case node['platform_family']
when 'debian', 'rhel', 'fedora'
packages = ['ceph']
packages += debug_packages(packages) if node['ceph']['install_debug']
default['ceph']['osd']['packages'] = packages
else
default['ceph']['osd']['packages'] = []
end

View File

@ -0,0 +1,43 @@
#
# Cookbook Name:: ceph
# Attributes:: radosgw
#
# Copyright 2011, DreamHost Web Hosting
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
include_attribute 'ceph'
default['ceph']['radosgw']['api_fqdn'] = node['hostname']
default['ceph']['radosgw']['admin_email'] = 'admin@example.com'
default['ceph']['radosgw']['rgw_addr'] = '*:80'
default['ceph']['radosgw']['rgw_port'] = false
default['ceph']['radosgw']['webserver_companion'] = 'apache2' # can be false
default['ceph']['radosgw']['use_apache_fork'] = true
default['ceph']['radosgw']['init_style'] = node['ceph']['init_style']
default['ceph']['radosgw']['signing']['certfile'] = '/tmp/signing_cert.pem'
default['ceph']['radosgw']['signing']['ca_certs'] = '/tmp/ca.pem'
case node['platform_family']
when 'debian'
packages = ['radosgw']
packages += debug_packages(packages) if node['ceph']['install_debug']
default['ceph']['radosgw']['packages'] = packages
when 'rhel', 'fedora', 'suse'
default['ceph']['radosgw']['packages'] = ['ceph-radosgw']
else
default['ceph']['radosgw']['packages'] = []
end

View File

@ -0,0 +1,6 @@
case node['platform_family']
when 'debian', 'suse'
default['ceph']['radosgw']['apache2']['packages'] = ['libapache2-mod-fastcgi']
when 'rhel', 'fedora'
default['ceph']['radosgw']['apache2']['packages'] = ['mod_fastcgi']
end

View File

@ -0,0 +1,52 @@
default['ceph']['branch'] = 'stable' # Can be stable, testing or dev.
# Major release version to install or gitbuilder branch
default['ceph']['version'] = 'firefly'
default['ceph']['el_add_epel'] = true
default['ceph']['repo_url'] = 'http://ceph.com'
default['ceph']['extras_repo_url'] = 'http://ceph.com/packages/ceph-extras'
default['ceph']['extras_repo'] = true
case node['platform_family']
when 'debian'
# Debian/Ubuntu default repositories
default['ceph']['debian']['stable']['repository'] = "#{node['ceph']['repo_url']}/debian-#{node['ceph']['version']}/"
default['ceph']['debian']['stable']['repository_key'] = 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc'
default['ceph']['debian']['testing']['repository'] = "#{node['ceph']['repo_url']}/debian-testing/"
default['ceph']['debian']['testing']['repository_key'] = 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc'
default['ceph']['debian']['dev']['repository'] = "http://gitbuilder.ceph.com/ceph-deb-#{node['lsb']['codename']}-x86_64-basic/ref/#{node['ceph']['version']}"
default['ceph']['debian']['dev']['repository_key'] = 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/autobuild.asc'
default['ceph']['debian']['extras']['repository'] = "#{node['ceph']['extras_repo_url']}/debian/"
default['ceph']['debian']['extras']['repository_key'] = 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc'
when 'rhel'
# Redhat/CentOS default repositories
default['ceph']['rhel']['stable']['repository'] = "#{node['ceph']['repo_url']}/rpm-#{node['ceph']['version']}/el6/x86_64/"
default['ceph']['rhel']['stable']['repository_key'] = 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc'
default['ceph']['rhel']['testing']['repository'] = "#{node['ceph']['repo_url']}/rpm-testing/el6/x86_64/"
default['ceph']['rhel']['testing']['repository_key'] = 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc'
default['ceph']['rhel']['dev']['repository'] = "http://gitbuilder.ceph.com/ceph-rpm-centos6-x86_64-basic/ref/#{node['ceph']['version']}/x86_64/"
default['ceph']['rhel']['dev']['repository_key'] = 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/autobuild.asc'
default['ceph']['rhel']['extras']['repository'] = "#{node['ceph']['extras_repo_url']}/rpm/centos6.4/x86_64/"
default['ceph']['rhel']['extras']['repository_key'] = 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc'
when 'fedora'
# Fedora default repositories
default['ceph']['fedora']['stable']['repository'] = "#{node['ceph']['repo_url']}/rpm-#{node['ceph']['version']}/fc#{node['platform_version']}/x86_64/"
default['ceph']['fedora']['stable']['repository_key'] = 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc'
default['ceph']['fedora']['testing']['repository'] = "#{node['ceph']['repo_url']}/rpm-testing/fc#{node['platform_version']}/x86_64/"
default['ceph']['fedora']['testing']['repository_key'] = 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc'
default['ceph']['fedora']['dev']['repository'] = "http://gitbuilder.ceph.com/ceph-rpm-fc#{node['platform_version']}-x86_64-basic/ref/#{node['ceph']['version']}/RPMS/x86_64/"
default['ceph']['fedora']['dev']['repository_key'] = 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/autobuild.asc'
default['ceph']['fedora']['extras']['repository'] = "#{node['ceph']['extras_repo_url']}/rpm/fedora#{node['platform_version']}/x86_64/"
default['ceph']['fedora']['extras']['repository_key'] = 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc'
when 'suse'
# (Open)SuSE default repositories
# Chef does not distinguish between suse and opensuse
suse = Mixlib::ShellOut.new("head -n1 /etc/SuSE-release| awk '{print $1}'").run_command.stdout.chomp.downcase
suse = 'sles' if suse == 'suse'
suse_version = suse << Mixlib::ShellOut.new("grep VERSION /etc/SuSE-release | awk -F'= ' '{print $2}'").run_command.stdout.chomp
default['ceph']['suse']['stable']['repository'] = "#{node['ceph']['repo_url']}/rpm-#{node['ceph']['version']}/#{suse_version}/x86_64/ceph-release-1-0.#{suse_version}.noarch.rpm"
default['ceph']['suse']['testing']['repository'] = "#{node['ceph']['repo_url']}/rpm-testing/#{suse_version}/x86_64/ceph-release-1-0.#{suse_version}.noarch.rpm"
default['ceph']['suse']['extras']['repository'] = "#{node['ceph']['extras_repo_url']}/rpm/#{suse_version}/x86_64/"
else
fail "#{node['platform_family']} is not supported"
end
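Illustration only (plain Ruby, not part of the cookbook): with the defaults above, the stable Debian repository URL is composed as follows.
repo_url = 'http://ceph.com'   # node['ceph']['repo_url'] default
version  = 'firefly'           # node['ceph']['version'] default
puts "#{repo_url}/debian-#{version}/"   # => "http://ceph.com/debian-firefly/"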

View File

@ -0,0 +1,26 @@
# fail 'mon_initial_members must be set in config' if node['ceph']['config']['mon_initial_members'].nil?
unless node['ceph']['config']['fsid']
Chef::Log.warn('Generating a new uuid for fsid')
require 'securerandom'
node.set['ceph']['config']['fsid'] = SecureRandom.uuid
node.save
end
directory '/etc/ceph' do
owner 'root'
group 'root'
mode '0755'
action :create
end
template '/etc/ceph/ceph.conf' do
source 'ceph.conf.erb'
variables lazy {
{
:mon_addresses => mon_addresses,
:is_rgw => node['ceph']['is_radosgw']
}
}
mode '0644'
end

View File

@ -0,0 +1,6 @@
roles:
- ceph-mds:
- ceph-mon:
- ceph-osd:
- ceph-radosgw:
- ceph-tgt:

View File

@ -0,0 +1,334 @@
require 'ipaddr'
require 'json'
require 'chef/mixin/shell_out'
include Chef::Mixin::ShellOut
def crowbar?
!defined?(Chef::Recipe::Barclamp).nil?
end
def mon_nodes
if crowbar?
mon_roles = search(:role, 'name:crowbar-* AND run_list:role\[ceph-mon\]')
unless mon_roles.empty?
search_string = mon_roles.map { |role_object| 'roles:' + role_object.name }.join(' OR ')
search_string = "(#{search_string}) AND ceph_config_environment:#{node['ceph']['config']['environment']}"
end
else
#search_string = "ceph_is_mon:true AND chef_environment:#{node.chef_environment}"
search_string = "ceph_is_mon:true AND ceph_config_fsid:#{node["ceph"]["config"]["fsid"]}"
end
if use_cephx? && !node['ceph']['encrypted_data_bags']
search_string = "(#{search_string}) AND (ceph_bootstrap_osd_key:*)"
end
search(:node, search_string)
end
def osd_secret
if node['ceph']['encrypted_data_bags']
secret = Chef::EncryptedDataBagItem.load_secret(node['ceph']['osd']['secret_file'])
return Chef::EncryptedDataBagItem.load('ceph', 'osd', secret)['secret']
else
return mon_nodes[0]['ceph']['bootstrap_osd_key']
end
end
# If public_network is specified, we need to look up the monitor IP
# from the node's network configuration:
# 1. Determine whether the network is IPv6 or IPv4
# 2. Find an interface address that falls within that network
# 3. Return that IP together with the monitor port
def find_node_ip_in_network(network, nodeish = nil)
nodeish = node unless nodeish
net = IPAddr.new(network)
nodeish['network']['interfaces'].each do |_iface, addrs|
addresses = addrs['addresses'] || []
addresses.each do |ip, params|
return ip_address_to_ceph_address(ip, params) if ip_address_in_network?(ip, params, net)
end
end
nil
end
def ip_address_in_network?(ip, params, net)
if params['family'] == 'inet'
net.include?(ip) && params.key?('broadcast') # is primary ip on iface
elsif params['family'] == 'inet6'
net.include?(ip)
else
false
end
end
def ip_address_to_ceph_address(ip, params)
if params['family'].eql?('inet6')
return "[#{ip}]:6789"
elsif params['family'].eql?('inet')
return "#{ip}:6789"
end
end
def mon_addresses
mon_ips = []
if File.exist?("/var/run/ceph/ceph-mon.#{node['hostname']}.asok")
mon_ips = quorum_members_ips
else
mons = []
# make sure that if this node runs ceph-mon it is always included, even if
# search is laggy; put it first in the hope that clients will talk
# primarily to the local node
mons << node if node['ceph']['is_mon']
mons += mon_nodes
if crowbar?
mon_ips = mons.map { |node| Chef::Recipe::Barclamp::Inventory.get_network_by_type(node, 'admin').address }
else
if node['ceph']['config']['global'] && node['ceph']['config']['global']['public network']
mon_ips = mons.map { |nodeish| find_node_ip_in_network(node['ceph']['config']['global']['public network'], nodeish) }
else
mon_ips = mons.map { |node| node['ipaddress'] + ':6789' }
end
end
end
mon_ips.reject { |m| m.nil? }.uniq
end
def mon_secret
if node['ceph']['encrypted_data_bags']
secret = Chef::EncryptedDataBagItem.load_secret(node['ceph']['mon']['secret_file'])
Chef::EncryptedDataBagItem.load('ceph', 'mon', secret)['secret']
elsif !mon_nodes.empty?
mon_nodes[0]['ceph']['monitor-secret']
elsif node['ceph']['monitor-secret']
node['ceph']['monitor-secret']
elsif mon_master['hostname'] != node['hostname']
mon_nodes[0]['ceph']['monitor-secret']
else
Chef::Log.info('No monitor secret found')
nil
end
end
def pg_creating?
pg_status = Mixlib::ShellOut.new('ceph -s | grep "creating"').run_command.stdout.strip
pg_status.include?('creating')
end
def mon_master
search_string2 = "run_list:role\\[ceph-mon\\] AND chef_environment:#{node.chef_environment} AND tags:mon_master"
mon_master_node = search(:node, search_string2)
if mon_master_node.empty?
search_string2 = "run_list:role\\[ceph-mon\\] AND chef_environment:#{node.chef_environment}"
all_mons = search(:node, search_string2)
mons_sort = all_mons.sort_by { |a| a['hostname']}
if mons_sort[0].name == node.name
node.tags << 'mon_master' unless node.tags.include?("mon_master")
node.save
end
mons_sort[0]
else
mon_master_node[0]
end
end
def mon_init_member
search_string3 = "run_list:role\\[ceph-mon\\] AND chef_environment:#{node.chef_environment}"
all_mons = search(:node, search_string3)
mons_sort = all_mons.sort_by { |a| a['hostname']}
end
def mon_init_member_name
mon_list = mon_init_member
mon_list.map { |a| a.name }.sort unless mon_list.nil?
end
# Search for SSD devices
def ssd_device
ssd_device = []
node['block_device'].each do |device|
device_name = device[0]
if device_name.include?"sd"
device_ssd_flag = Mixlib::ShellOut.new("cat /sys/block/#{device_name}/queue/rotational").run_command.stdout.strip
if device_ssd_flag == "0"
ssd_device << "/dev/#{device_name}"
end
else
next
end
end
ssd_device
end
def quorum_members_ips
mon_ips = []
cmd = Mixlib::ShellOut.new("ceph --admin-daemon /var/run/ceph/ceph-mon.#{node['hostname']}.asok mon_status")
cmd.run_command
cmd.error!
mons = JSON.parse(cmd.stdout)['monmap']['mons']
mons.each do |k|
mon_ips.push(k['addr'][0..-3])
end
mon_ips
end
QUORUM_STATES = %w(leader peon)
def quorum?
# "ceph auth get-or-create-key" would hang if the monitor wasn't
# in quorum yet, which is highly likely on the first run. This
# helper lets us delay the key generation into the next
# chef-client run, instead of hanging.
#
# Also, as the UNIX domain socket connection has no timeout logic
# in the ceph tool, this exits immediately if the ceph-mon is not
# running for any reason; trying to connect via TCP/IP would wait
# for a relatively long timeout.
cmd = Mixlib::ShellOut.new("ceph --admin-daemon /var/run/ceph/ceph-mon.#{node['hostname']}.asok mon_status")
cmd.run_command
cmd.error!
state = JSON.parse(cmd.stdout)['state']
QUORUM_STATES.include?(state)
end
# Cephx is on by default, but users can disable it.
# type can be one of 3 values: cluster, service, or client. If the value is none of the above, set it to cluster
def use_cephx?(type = nil)
# Verify type is valid
type = 'cluster' if %w(cluster service client).index(type).nil?
# CephX is enabled if it's not configured at all, or explicitly enabled
node['ceph']['config'].nil? ||
node['ceph']['config']['global'].nil? ||
node['ceph']['config']['global']["auth #{type} required"] == 'cephx'
end
# Current partition numbers of a given device
def partition_num(device)
cmd = "parted #{device} --script -- p | awk '{print $1}'"
rc = shell_out(cmd)
p_num = rc.stdout.split.select{|e| e[/\d/]}
if p_num.include? "Number"
last_num = 0
Chef::Log.info("There is not any partition created at #{resource.device} yet.")
end
p_num
end
# Partition start offset of a given device
def partition_start_size(device)
cmd = "parted #{device} --script -- p | awk '{print $3}' | tail -n 2"
rc = shell_out(cmd)
device_start_size = rc.stdout.split[0]
if device_start_size.include? "End"
device_start_size = 0
end
if device_start_size == 0
device_start_size
elsif device_start_size.include?('KB')
device_start_size = eval(device_start_size.gsub(/[A-Z]/,''))/1000
elsif device_start_size.include?('GB')
device_start_size = eval(device_start_size.gsub(/[A-Z]/,''))*1000
elsif device_start_size.include?('MB')
device_start_size = eval(device_start_size.gsub(/[A-Z]/,''))
elsif device_start_size.include?('TB')
device_start_size = eval(device_start_size.gsub(/[A-Z]/,''))*1000000
end
device_start_size
end
def disk_total_size(device)
cmd = "parted #{device} --script -- p | grep #{device} | cut -f 2 -d ':'"
rc = shell_out(cmd)
device_total_size = rc.stdout.split[0]
if device_total_size.include?('GB')
device_total_size = eval(device_total_size.gsub(/[A-Z]/,''))*1000
elsif device_total_size.include?('MB')
device_total_size = eval(device_total_size.gsub(/[A-Z]/,''))
elsif device_total_size.include?('TB')
device_total_size = eval(device_total_size.gsub(/[A-Z]/,''))*1000000
end
device_total_size
end
def mklabel(device)
queryresult = %x{parted #{device} --script -- print |grep 'Partition Table: gpt'}
if not queryresult.include?('gpt')
cmd = "parted #{device} --script -- mklabel gpt"
rc = shell_out(cmd)
if not rc.exitstatus.eql?(0)
Chef::Log.error("Creating disk label was failed.")
end
end
end
def mkpart(device)
device_total_size = disk_total_size(device)
device_start_size = partition_start_size(device) + 100
if node['ceph']['config']['osd']['osd journal size']
osd_journal_size = node['ceph']['config']['osd']['osd journal size'].to_i
elsif node['ceph']['config']['global']['osd journal size']
osd_journal_size = node['ceph']['config']['global']['osd journal size'].to_i
else
osd_journal_size = 5120
end
device_end_size = device_start_size + osd_journal_size
if device_start_size < device_total_size
p_num_old = partition_num(device)
if device_total_size > device_end_size
output = %x{parted #{device} --script -- mkpart osd_journal #{device_start_size.to_s} #{device_end_size.to_s}}
else
output = %x{parted #{device} --script -- mkpart osd_journal #{device_start_size.to_s} 100%}
end
output = %x{partx -a #{device} > /dev/null 2>&1}
p_num_new = partition_num(device)
p_num = (p_num_new - p_num_old)[0]
if p_num.nil?
Chef::Log.error("Making partition was failed.")
else
device_return = device+p_num
%x{mkfs.xfs #{device_return}}
device_return
end
end
end
def create_disk_partion(device)
mklabel(device)
mkpart(device)
end
def selinux_disabled?
selinux_status = Mixlib::ShellOut.new('sestatus').run_command.stdout.strip
selinux_status.include?('disabled')
end
def node_election(role, tag, chef_environment = nil)
chef_environment = chef_environment || node.chef_environment
master = search(:node, "run_list:role\\[#{role}\\] AND \
chef_environment:#{chef_environment} AND \
tags:#{tag}") || []
if master.empty?
nodes = search(:node, "run_list:role\\[#{role}\\] AND \
chef_environment:#{chef_environment}") || []
if !nodes.empty?
nodes = nodes.sort_by { |node| node.name } unless nodes.empty?
if node['hostname'].eql?(nodes[0]['hostname'])
node.tags << tag unless node.tags.include?(tag)
node.save
end
return nodes[0]
else
node
end
else
return master[0]
end
end

View File

@ -0,0 +1,14 @@
def debug_packages(packages)
packages.map { |x| x + debug_ext }
end
def debug_ext
case node['platform_family']
when 'debian'
'-dbg'
when 'rhel', 'fedora'
'-debug'
else
''
end
end

View File

@ -0,0 +1,12 @@
name 'ceph'
maintainer 'Kyle Bader'
maintainer_email 'kyle.bader@dreamhost.com'
license 'Apache 2.0'
description 'Installs/Configures the Ceph distributed filesystem'
long_description IO.read(File.join(File.dirname(__FILE__), 'README.md'))
version '0.2.1'
depends 'apache2', '>= 1.1.12'
depends 'apt'
depends 'yum', '>= 3.0'
depends 'yum-epel'

View File

@ -0,0 +1,80 @@
use_inline_resources
def whyrun_supported?
true
end
action :add do
current_resource = @current_resource
filename = @current_resource.filename
keyname = @current_resource.keyname
caps = @new_resource.caps.map { |k, v| "#{k} '#{v}'" }.join(' ')
owner = @new_resource.owner
group = @new_resource.group
mode = @new_resource.mode
unless @current_resource.caps_match
converge_by("Set caps for #{@new_resource}") do
auth_set_key(keyname, caps)
current_resource.key = get_key(keyname)
end
end
# update the key in the file
file filename do
content file_content
owner owner
group group
mode mode
end
end
def load_current_resource
@current_resource = Chef::Resource::CephClient.new(@new_resource.name)
@current_resource.name(@new_resource.name)
@current_resource.as_keyring(@new_resource.as_keyring)
@current_resource.keyname(@new_resource.keyname || "client.#{current_resource.name}.#{node['hostname']}")
@current_resource.caps(get_caps(@current_resource.keyname))
default_filename = "/etc/ceph/ceph.client.#{@new_resource.name}.#{node['hostname']}.#{@new_resource.as_keyring ? 'keyring' : 'secret'}"
@current_resource.filename(@new_resource.filename || default_filename)
@current_resource.key = get_key(@current_resource.keyname)
@current_resource.caps_match = true if @current_resource.caps == @new_resource.caps
end
def file_content
@current_resource.as_keyring ? "[#{@current_resource.keyname}]\n\tkey = #{@current_resource.key}\n" : @current_resource.key
end
def get_key(keyname)
cmd = "ceph auth print_key #{keyname} --name mon. --key='#{mon_secret}'"
#cmd = "ceph auth print_key #{keyname} --key='#{mon_secret}'"
Mixlib::ShellOut.new(cmd).run_command.stdout
end
def get_caps(keyname)
caps = {}
cmd = "ceph auth get #{keyname} --name mon. --key='#{mon_secret}'"
#cmd = "ceph auth get #{keyname} --key='#{mon_secret}'"
output = Mixlib::ShellOut.new(cmd).run_command.stdout
output.scan(/caps\s*(\S+)\s*=\s*"([^"]*)"/) { |k, v| caps[k] = v }
caps
end
def auth_set_key(keyname, caps)
secret = mon_secret
# try to add the key
cmd = "ceph auth get-or-create #{keyname} #{caps} --name mon. --key='#{secret}'"
#cmd = "ceph auth get-or-create #{keyname} #{caps} --key='#{secret}'"
get_or_create = Mixlib::ShellOut.new(cmd)
get_or_create.run_command
if get_or_create.stderr =~ /EINVAL.*but cap.*does not match/
Chef::Log.info('Deleting old key with incorrect caps')
# delete an old key if it exists and is wrong
Mixlib::ShellOut.new("ceph auth del #{keyname} --name mon. --key='#{secret}'").run_command
#Mixlib::ShellOut.new("ceph auth del #{keyname} --key='#{secret}'").run_command
# try to create again
get_or_create = Mixlib::ShellOut.new(cmd)
get_or_create.run_command
end
get_or_create.error!
end

17
chef/cookbooks/ceph/recipes/.gitignore vendored Normal file
View File

@ -0,0 +1,17 @@
.vagrant
Berksfile.lock
*~
*#
.#*
\#*#
.*.sw[a-z]
*.un~
/cookbooks
# Bundler
Gemfile.lock
bin/*
.bundle/*
.kitchen/
.kitchen.local.yml

View File

@ -0,0 +1,63 @@
---
driver_plugin: vagrant
driver_config:
vagrantfile_erb: test/integration/Vagrantfile.erb
require_chef_omnibus: true
platforms:
- name: ubuntu-12.04
run_list:
- recipe[apt]
- name: ubuntu-14.04
run_list:
- recipe[apt]
- name: debian-7.4
run_list:
- recipe[apt]
- name: centos-6.5
- name: centos-5.10
- name: fedora-18
provisioner:
name: chef_zero
suites:
- name: default
run_list:
- "recipe[ceph::repo]"
- "recipe[ceph]"
attributes: &defaults
ceph:
config:
fsid: ae3f1d03-bacd-4a90-b869-1a4fabb107f2
mon_initial_members:
- "127.0.0.1"
- name: osd
run_list:
- "role[ceph-osd]"
attributes: *defaults
- name: mon
run_list:
- "role[ceph-mon]"
attributes: *defaults
- name: mds
run_list:
- "role[ceph-mds]"
attributes: *defaults
- name: radosgw
run_list:
- "role[ceph-radosgw]"
attributes: *defaults
- name: aio
attributes:
ceph:
config-sections:
global:
"osd journal size" : 128
"osd pool default size": 1
osd_devices:
- { device: "/dev/sdb" }
- { device: "/dev/sdc" }
- { device: "/dev/sdd" }
run_list:
- recipe[ceph::all_in_one]

View File

@ -0,0 +1,28 @@
AllCops:
Include:
- Berksfile
- Gemfile
- Rakefile
- Thorfile
- Guardfile
Exclude:
- vendor/**
ClassLength:
Enabled: false
Documentation:
Enabled: false
Encoding:
Enabled: false
HashSyntax:
Enabled: false
LineLength:
Enabled: false
MethodLength:
Enabled: false
SignalException:
Enabled: false
TrailingComma:
Enabled: false
WordArray:
Enabled: false

View File

@ -0,0 +1,7 @@
language: ruby
rvm:
- 1.9.3
- 2.0.0
bundler_args: --without integration
script:
- bundle exec rake travis

View File

@ -0,0 +1,6 @@
include_recipe 'ceph::_common_install'
# Tools needed by cookbook
node['ceph']['packages'].each do |pck|
package pck
end

View File

@ -0,0 +1 @@
include_recipe 'ceph::repo' if node['ceph']['install_repo']

View File

@ -0,0 +1,6 @@
include_recipe 'ceph::mon'
include_recipe 'ceph::osd'
include_recipe 'ceph::mds'
include_recipe 'ceph::cephfs'
include_recipe 'ceph::radosgw'

View File

@ -0,0 +1,27 @@
include_recipe 'apt'
branch = node['ceph']['branch']
distribution_codename =
case node['lsb']['codename']
when 'jessie' then 'sid'
else node['lsb']['codename']
end
apt_repository 'ceph' do
repo_name 'ceph'
uri node['ceph']['debian'][branch]['repository']
distribution distribution_codename
components ['main']
key node['ceph']['debian'][branch]['repository_key']
end
apt_repository 'ceph-extras' do
repo_name 'ceph-extras'
uri node['ceph']['debian']['extras']['repository']
distribution distribution_codename
components ['main']
key node['ceph']['debian']['extras']['repository_key']
only_if { node['ceph']['extras_repo'] }
end

View File

@ -0,0 +1,46 @@
#
# Author:: Kyle Bader <kyle.bader@dreamhost.com>
# Cookbook Name:: ceph
# Recipe:: cephfs
#
# Copyright 2011, DreamHost Web Hosting
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
include_recipe 'ceph::_common'
include_recipe 'ceph::cephfs_install'
include_recipe 'ceph::conf'
name = 'cephfs'
client_name = "cephfs.#{node['hostname']}"
filename = "/etc/ceph/ceph.client.#{client_name}.secret"
ceph_client name do
filename filename
caps('mon' => 'allow r', 'osd' => 'allow rw', 'mds' => 'allow')
as_keyring false
end
mons = mon_addresses.join(',') + ':/'
directory node['ceph']['cephfs_mount']
mount node['ceph']['cephfs_mount'] do
fstype 'ceph'
device mons
options "_netdev,name=#{client_name},secretfile=#{filename}"
dump 0
pass 0
action [:mount, :enable]
not_if { mons.empty? }
end

View File

@ -0,0 +1,5 @@
include_recipe 'ceph::_common_install'
node['ceph']['cephfs']['packages'].each do |pck|
package pck
end

View File

@ -0,0 +1,27 @@
# fail 'mon_initial_members must be set in config' if node['ceph']['config']['mon_initial_members'].nil?
unless node['ceph']['config']['fsid']
Chef::Log.warn('Generating a new uuid for fsid')
require 'securerandom'
node.set['ceph']['config']['fsid'] = SecureRandom.uuid
node.save
end
directory '/etc/ceph' do
owner 'root'
group 'root'
mode '0755'
action :create
end
template '/etc/ceph/ceph.conf' do
source 'ceph.conf.erb'
variables lazy {
{
:mon_addresses => mon_addresses,
:is_rgw => node['ceph']['is_radosgw'],
:is_keystone_integration => node['ceph']['is_keystone_integration']
}
}
mode '0644'
end

View File

@ -0,0 +1,53 @@
#
# Author:: Kyle Bader <kyle.bader@dreamhost.com>
# Cookbook Name:: ceph
# Recipe:: default
#
# Copyright 2011, DreamHost Web Hosting
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
packages = []
case node['platform_family']
when 'debian'
packages = %w(
ceph
ceph-common
)
if node['ceph']['install_debug']
packages_dbg = %w(
ceph-dbg
ceph-common-dbg
)
packages += packages_dbg
end
when 'rhel', 'fedora'
packages = %w(
ceph
)
if node['ceph']['install_debug']
packages_dbg = %w(
ceph-debug
)
packages += packages_dbg
end
end
packages.each do |pkg|
package pkg do
action :install
end
end

View File

@ -0,0 +1,70 @@
#
# Author:: Kyle Bader <kyle.bader@dreamhost.com>
# Cookbook Name:: ceph
# Recipe:: mds
#
# Copyright 2011, DreamHost Web Hosting
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
include_recipe 'ceph::_common'
include_recipe 'ceph::mds_install'
include_recipe 'ceph::conf'
cluster = 'ceph'
directory "/var/lib/ceph/mds/#{cluster}-#{node['hostname']}" do
owner 'root'
group 'root'
mode 00755
recursive true
action :create
end
ceph_client 'mds' do
caps('osd' => 'allow *', 'mon' => 'allow rwx')
keyname "mds.#{node['hostname']}"
filename "/var/lib/ceph/mds/#{cluster}-#{node['hostname']}/keyring"
end
file "/var/lib/ceph/mds/#{cluster}-#{node['hostname']}/done" do
owner 'root'
group 'root'
mode 00644
end
service_type = node['ceph']['mds']['init_style']
case service_type
when 'upstart'
filename = 'upstart'
else
filename = 'sysvinit'
end
file "/var/lib/ceph/mds/#{cluster}-#{node['hostname']}/#{filename}" do
owner 'root'
group 'root'
mode 00644
end
service 'ceph_mds' do
case service_type
when 'upstart'
service_name 'ceph-mds-all-starter'
provider Chef::Provider::Service::Upstart
else
service_name 'ceph'
end
action [:enable, :start]
supports :restart => true
end

View File

@ -0,0 +1,5 @@
include_recipe 'ceph::_common_install'
node['ceph']['mds']['packages'].each do |pck|
package pck
end

View File

@ -0,0 +1,180 @@
# This recipe creates a monitor cluster
#
# You should never change the mon default path or
# the keyring path.
# Don't change the cluster name either
# Default path for mon data: /var/lib/ceph/mon/$cluster-$id/
# which will be /var/lib/ceph/mon/ceph-`hostname`/
# This path is used by upstart. If changed, upstart won't
# start the monitor
# The keyring files are created using the following pattern:
# /etc/ceph/$cluster.client.$name.keyring
# e.g. /etc/ceph/ceph.client.admin.keyring
# The bootstrap-osd and bootstrap-mds keyring are a bit
# different and are created in
# /var/lib/ceph/bootstrap-{osd,mds}/ceph.keyring
node.default['ceph']['is_mon'] = true
include_recipe 'ceph::conf'
include_recipe 'ceph::_common'
include_recipe 'ceph::mon_install'
service_type = node['ceph']['mon']['init_style']
directory '/var/run/ceph' do
owner 'root'
group 'root'
mode 00755
recursive true
action :create
end
directory "/var/lib/ceph/mon/ceph-#{node['hostname']}" do
owner 'root'
group 'root'
mode 00755
recursive true
action :create
end
# TODO: cluster name
cluster = 'ceph'
if mon_master.name != node.name
admin_keyring = mon_master['ceph']['admin-secret']
if admin_keyring.nil?
Chef::Application.fatal!("wait for mon master node update.")
end
if mon_secret.nil?
Chef::Application.fatal!("wait for mon master node update.")
end
admin_user = "admin"
template "/etc/ceph/ceph.client.#{admin_user}.keyring" do
source 'ceph.client.keyring.erb'
mode 00600
variables(
name: admin_user,
key: admin_keyring
)
end
end
unless File.exist?("/var/lib/ceph/mon/ceph-#{node['hostname']}/done")
keyring = "#{Chef::Config[:file_cache_path]}/#{cluster}-#{node['hostname']}.mon.keyring"
execute 'format mon-secret as keyring' do
command lazy { "ceph-authtool '#{keyring}' --create-keyring --name=mon. --add-key='#{mon_secret}' --cap mon 'allow *'" }
creates "#{Chef::Config[:file_cache_path]}/#{cluster}-#{node['hostname']}.mon.keyring"
only_if { mon_secret }
notifies :create, 'ruby_block[save mon_secret]', :immediately
end
execute 'generate mon-secret as keyring' do
command "ceph-authtool '#{keyring}' --create-keyring --name=mon. --gen-key --cap mon 'allow *'"
creates "#{Chef::Config[:file_cache_path]}/#{cluster}-#{node['hostname']}.mon.keyring"
not_if { mon_secret }
notifies :create, 'ruby_block[save mon_secret]', :immediately
end
ruby_block 'save mon_secret' do
block do
fetch = Mixlib::ShellOut.new("ceph-authtool '#{keyring}' --print-key --name=mon.")
fetch.run_command
key = fetch.stdout
node.set['ceph']['monitor-secret'] = key
node.save
end
action :nothing
end
execute 'ceph-mon mkfs' do
command "ceph-mon --mkfs -i #{node['hostname']} --keyring '#{keyring}'"
end
ruby_block 'finalise' do
block do
['done', service_type].each do |ack|
::File.open("/var/lib/ceph/mon/ceph-#{node['hostname']}/#{ack}", 'w').close
end
end
end
end
if service_type == 'upstart'
service 'ceph-mon' do
provider Chef::Provider::Service::Upstart
action :enable
end
service 'ceph-mon-all' do
provider Chef::Provider::Service::Upstart
supports :status => true
action [:enable, :start]
end
end
service 'ceph_mon' do
case service_type
when 'upstart'
service_name 'ceph-mon-all-starter'
provider Chef::Provider::Service::Upstart
else
service_name 'ceph'
end
supports :restart => true, :status => true
subscribes :restart, resources('template[/etc/ceph/ceph.conf]')
action [:enable, :start]
end
mon_addresses.each do |addr|
execute "peer #{addr}" do
command "ceph --admin-daemon '/var/run/ceph/ceph-mon.#{node['hostname']}.asok' add_bootstrap_peer_hint #{addr}"
ignore_failure true
end
end
# The key is going to be created automatically; we store it once it exists.
# If we're storing keys in encrypted data bags, they've already been generated above.
#if use_cephx? && !node['ceph']['encrypted_data_bags']
unless node['ceph']['encrypted_data_bags']
ruby_block 'get osd-bootstrap keyring' do
block do
run_out = ''
while run_out.empty?
run_out = Mixlib::ShellOut.new('ceph auth get-key client.bootstrap-osd').run_command.stdout.strip
sleep 2
end
node.set['ceph']['bootstrap_osd_key'] = run_out
node.save
end
not_if { node['ceph']['bootstrap_osd_key'] }
end
end
ruby_block 'save admin_secret' do
block do
fetch = Mixlib::ShellOut.new("ceph-authtool /etc/ceph/ceph.client.admin.keyring --print-key --name=client.admin")
fetch.run_command
key = fetch.stdout
node.set['ceph']['admin-secret'] = key
node.save
end
end
default_pools = node['ceph']['default_pools']
#set default pg num
if node['ceph']['config']['global']['osd pool default pg num']
default_pools.each do |default_pool|
run_out = Mixlib::ShellOut.new("ceph osd pool get #{default_pool} pg_num| awk -F \": \" '{print $2}'").run_command.stdout.strip
if run_out.to_i < node['ceph']['config']['global']['osd pool default pgp num'].to_i
execute 'set default pg num' do
command "ceph osd pool delete #{default_pool} #{default_pool} --yes-i-really-really-mean-it;ceph osd pool create #{default_pool} #{node['ceph']['config']['global']['osd pool default pg num']}"
ignore_failure true
not_if {pg_creating?}
end
end
end
end

View File

@ -0,0 +1,5 @@
include_recipe 'ceph::_common_install'
node['ceph']['mon']['packages'].each do |pck|
package pck
end

View File

@ -0,0 +1,76 @@
# Attention:
# this recipe should only run after OpenStack and Ceph are up and running correctly!
#
cluster = 'ceph'
if node['ceph']['openstack_pools'].nil?
node.normal['ceph']['openstack_pools'] = [{'pool_name'=>'images'},{'pool_name'=>'volumes'},{'pool_name'=>'vms'}]
end
#create pools for openstack volumes and images
if node['ceph']['openstack_pools']
pools = node['ceph']['openstack_pools']
pools = Hash[(0...pools.size).zip pools] unless pools.kind_of? Hash
pools.each do |index, ceph_pools|
unless ceph_pools['status'].nil?
Chef::Log.info("osd pools: ceph_pools #{ceph_pools['pool_name']} has already been create.")
next
end
execute "create #{ceph_pools['pool_name']} pool" do
command "ceph osd pool create #{ceph_pools['pool_name']} #{node['ceph']['config']['global']['osd pool default pg num']}"
notifies :create, "ruby_block[save osd pools status #{index}]", :immediately
end
ruby_block "save osd pools status #{index}" do
block do
node.normal['ceph']['openstack_pools'][index]['status'] = 'created'
node.save
end
action :nothing
end
end
end
#generate the openstack cinder secret
if node['ceph']['cinder-secret'].nil?
keyring1 = "client.cinder"
execute 'generate cinder-secret as keyring' do
command "ceph auth get-or-create #{keyring1} mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'"
notifies :create, 'ruby_block[save cinder-secret]', :immediately
end
ruby_block 'save cinder-secret' do
block do
fetch1 = Mixlib::ShellOut.new("ceph auth print_key '#{keyring1}'")
fetch1.run_command
key1 = fetch1.stdout
node.set['ceph']['cinder-secret'] = key1
node.save
end
action :nothing
end
end
#generate the openstack glance secret
if node['ceph']['glance-secret'].nil?
keyring2 = "client.glance"
execute 'generate glance-secret as keyring' do
command "ceph auth get-or-create #{keyring2} mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'"
notifies :create, 'ruby_block[save glance-secret]', :immediately
end
ruby_block 'save glance-secret' do
block do
fetch2 = Mixlib::ShellOut.new("ceph auth print_key '#{keyring2}'")
fetch2.run_command
key2 = fetch2.stdout
node.set['ceph']['glance-secret'] = key2
node.save
end
action :nothing
end
end

View File

@ -0,0 +1,100 @@
#
# Author:: Kyle Bader <kyle.bader@dreamhost.com>
# Cookbook Name:: ceph
# Recipe:: radosgw
#
# Copyright 2011, Liucheng
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
node.default['ceph']['is_keystone_integration'] = false
if node['ceph']['is_keystone_integration']
keystone_master = node_election('os-identity', 'keystone_keygen', node['ceph']['keystone environment'])
puts "****************keystone_master:#{keystone_master}"
if keystone_master['openstack']['endpoints']['identity-bind']['host'].nil?
Chef::Log.debug \
"Chef-client exiting: keystone identity bind host not yet set on #{keystone_master.name}"
exit 1
end
node.default['ceph']['config']['keystone']['rgw keystone url'] = "#{keystone_master['openstack']['endpoints']['identity-bind']['host']}:35357"
template '/etc/ceph/ceph.conf' do
source 'ceph.conf.erb'
variables lazy {
{
:mon_addresses => mon_addresses,
:is_rgw => node['ceph']['is_radosgw'],
:is_keystone_integration => node['ceph']['is_keystone_integration']
}
}
mode '0644'
end
%w{certfile ca_certs}.each do |name|
if !keystone_master['openstack']['identity']['signing'].attribute?("#{name}_data")
Chef::Log.debug \
"Chef-client exiting: PKI signing files not yet available from node #{keystone_master.name}"
exit 1
end
file node['ceph']['radosgw']['signing']["#{name}"] do
content keystone_master['openstack']['identity']['signing']["#{name}_data"]
owner 'root'
group 'root'
mode 00640
end
end
directory node['ceph']['config']['keystone']['nss db path'] do
owner 'apache'
group 'apache'
mode 00755
recursive true
action :create
end
if !::File.exist?("#{node['ceph']['config']['keystone']['nss db path']}/done")
execute 'config ca.pem' do
command "openssl x509 -in #{node['ceph']['radosgw']['signing']['ca_certs']} -pubkey | certutil -d /var/ceph/nss -A -n ca -t \"TCu,Cu,Tuw\""
end
execute 'config signing_cert.pem' do
command "openssl x509 -in #{node['ceph']['radosgw']['signing']['certfile']} -pubkey | certutil -A -d /var/ceph/nss -n signing_cert -t \"P,P,P\""
end
execute 'change owner of nss' do
command "chown apache:apache -R #{node['ceph']['config']['keystone']['nss db path']}"
end
file "#{node['ceph']['config']['keystone']['nss db path']}/done" do
action :create
end
end
service 'ceph-radosgw' do
case node['ceph']['radosgw']['init_style']
when 'upstart'
service_name 'radosgw-all-starter'
provider Chef::Provider::Service::Upstart
else
if node['platform'] == 'debian'
service_name 'radosgw'
else
service_name 'ceph-radosgw'
end
end
supports :restart => true
action [:enable, :start]
subscribes :restart, resources('template[/etc/ceph/ceph.conf]')
end
end

View File

@ -0,0 +1,193 @@
#
# Author:: Kyle Bader <kyle.bader@dreamhost.com>
# Cookbook Name:: ceph
# Recipe:: osd
#
# Copyright 2011, DreamHost Web Hosting
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# this recipe allows bootstrapping new osds, with help from mon
# Sample environment:
# #knife node edit ceph1
# "osd_devices": [
# {
# "device": "/dev/sdc"
# },
# {
# "device": "/dev/sdd",
# "dmcrypt": true,
# "journal": "/dev/sdd"
# }
# ]
include_recipe 'ceph::_common'
include_recipe 'ceph::osd_install'
include_recipe 'ceph::conf'
package 'gdisk' do
action :upgrade
end
package 'cryptsetup' do
action :upgrade
only_if { node['dmcrypt'] }
end
service_type = node['ceph']['osd']['init_style']
directory '/var/lib/ceph/bootstrap-osd' do
owner 'root'
group 'root'
mode '0755'
end
# TODO: cluster name
cluster = 'ceph'
execute 'format bootstrap-osd as keyring' do
command lazy { "ceph-authtool '/var/lib/ceph/bootstrap-osd/#{cluster}.keyring' --create-keyring --name=client.bootstrap-osd --add-key='#{osd_secret}'" }
creates "/var/lib/ceph/bootstrap-osd/#{cluster}.keyring"
only_if { osd_secret }
end
node_osds = node['ceph']['osd_devices']
if node_osds.nil? || node_osds.empty?
osd_device = []
ssd_disk = ssd_device
ssd_index = 0
# search normal osd device
node['block_device'].each do |device|
device_hash = Hash.new
device_name = device[0]
if device_name.include?"sd"
# whether the storage device is in use
device_ssd_flag = Mixlib::ShellOut.new("cat /sys/block/#{device_name}/queue/rotational").run_command.stdout.strip
device_partion_num = Mixlib::ShellOut.new("cat /proc/partitions | grep #{device_name} -c").run_command.stdout.strip
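# /sys/block/<dev>/queue/rotational is "1" for spinning disks and "0" for SSDs;
# only whole, unpartitioned rotational disks (a single /proc/partitions entry)
# are auto-selected as OSD data devices, while SSDs are reserved for journals.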
if device_partion_num == "1" && device_ssd_flag == "1"
device_hash['device'] = "/dev/#{device_name}"
unless ssd_disk.empty?
ssd_index = (ssd_index >= ssd_disk.length ? 0 : ssd_index)
ssd_partion = nil
while ssd_partion.nil?
if ssd_index >= ssd_disk.length
break
end
ssd_partion = create_disk_partion(ssd_disk[ssd_index])
ssd_index = ssd_index + 1
end
ssd_index = ssd_index + 1
end
device_hash['journal'] = ssd_partion unless ssd_partion.nil?
end
osd_device << device_hash unless device_hash.empty?
else
next
end
node.normal['ceph']['osd_devices'] = osd_device
node.save
node_osds = osd_device
Log.info("osd_devices are #{node['ceph']['osd_devices']}")
end
end
if crowbar?
node['crowbar']['disks'].each do |disk, _data|
execute "ceph-disk-prepare #{disk}" do
command "ceph-disk-prepare /dev/#{disk}"
only_if { node['crowbar']['disks'][disk]['usage'] == 'Storage' }
notifies :run, 'execute[udev trigger]', :immediately
end
ruby_block "set disk usage for #{disk}" do
block do
node.set['crowbar']['disks'][disk]['usage'] = 'ceph-osd'
node.save
end
end
end
execute 'udev trigger' do
command 'udevadm trigger --subsystem-match=block --action=add'
action :nothing
end
else
# Calling ceph-disk-prepare is sufficient for deploying an OSD
# After ceph-disk-prepare finishes, the new device will be caught
# by udev which will run ceph-disk-activate on it (udev will map
# the devices if dm-crypt is used).
# IMPORTANT:
# - Always use the default path for OSD (i.e. /var/lib/ceph/
# osd/$cluster-$id)
# - $cluster should always be ceph
# - The --dmcrypt option will be available starting w/ Cuttlefish
if node_osds
devices = node_osds
devices = Hash[(0...devices.size).zip devices] unless devices.kind_of? Hash
devices.each do |index, osd_device|
unless osd_device['status'].nil?
Log.info("osd: osd_device #{osd_device} has already been setup.")
next
end
directory osd_device['device'] do # ~FC022
owner 'root'
group 'root'
recursive true
only_if { osd_device['type'] == 'directory' }
end
dmcrypt = osd_device['encrypted'] == true ? '--dmcrypt' : ''
execute "ceph-disk-prepare on #{osd_device['device']}" do
command "ceph-disk-prepare #{dmcrypt} #{osd_device['device']} #{osd_device['journal']}"
action :run
notifies :create, "ruby_block[save osd_device status #{index}]", :immediately
end
execute "ceph-disk-activate #{osd_device['device']}" do
only_if { osd_device['type'] == 'directory' }
end
# we add this status to the node env
# so that we can implement recreate
# and/or delete functionalities in the
# future.
ruby_block "save osd_device status #{index}" do
block do
node.normal['ceph']['osd_devices'][index]['status'] = 'deployed'
node.save
end
action :nothing
end
end
else
Log.info('node["ceph"]["osd_devices"] empty')
end
end
service 'ceph_osd' do
case service_type
when 'upstart'
service_name 'ceph-osd-all-starter'
provider Chef::Provider::Service::Upstart
else
service_name 'ceph'
end
action [:enable, :start]
supports :restart => true
subscribes :restart, resources('template[/etc/ceph/ceph.conf]')
end

View File

@ -0,0 +1,5 @@
include_recipe 'ceph::_common_install'
node['ceph']['osd']['packages'].each do |pck|
package pck
end

View File

@ -0,0 +1,103 @@
#
# Author:: Kyle Bader <kyle.bader@dreamhost.com>
# Cookbook Name:: ceph
# Recipe:: radosgw
#
# Copyright 2011, DreamHost Web Hosting
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
node.default['ceph']['is_radosgw'] = true
include_recipe 'ceph::_common'
include_recipe 'ceph::radosgw_install'
include_recipe 'ceph::conf'
directory '/var/run/ceph' do
owner 'apache'
group 'apache'
mode 00755
recursive true
action :create
end
if !::File.exist?("/var/lib/ceph/radosgw/ceph-radosgw.#{node['hostname']}/done")
if node['ceph']['radosgw']['webserver_companion']
include_recipe "ceph::radosgw_#{node['ceph']['radosgw']['webserver_companion']}"
end
ceph_client 'radosgw' do
caps('mon' => 'allow rw', 'osd' => 'allow rwx')
end
d_owner = d_group = 'apache'
%W(
/etc/ceph/ceph.client.radosgw.#{node['hostname']}.keyring
/var/log/ceph/radosgw.log
).each do |f|
file f do
owner d_owner
group d_group
action :create
end
end
directory "/var/lib/ceph/radosgw/ceph-radosgw.#{node['hostname']}" do
recursive true
end
file "/var/lib/ceph/radosgw/ceph-radosgw.#{node['hostname']}/done" do
action :create
end
service 'radosgw' do
case node['ceph']['radosgw']['init_style']
when 'upstart'
service_name 'radosgw-all-starter'
provider Chef::Provider::Service::Upstart
else
if node['platform'] == 'debian'
service_name 'radosgw'
else
service_name 'ceph-radosgw'
end
end
supports :restart => true
action [:enable, :start]
end
execute 'set selinux permissive' do
command "setenforce 0"
not_if { selinux_disabled? }
end
else
Chef::Log.info('Rados Gateway already deployed')
end
service 'radosgw' do
case node['ceph']['radosgw']['init_style']
when 'upstart'
service_name 'radosgw-all-starter'
provider Chef::Provider::Service::Upstart
else
if node['platform'] == 'debian'
service_name 'radosgw'
else
service_name 'ceph-radosgw'
end
end
supports :restart => true
action [:enable, :start]
subscribes :restart, resources('template[/etc/ceph/ceph.conf]')
end

View File

@ -0,0 +1,104 @@
#
# Author:: Kyle Bader <kyle.bader@dreamhost.com>
# Cookbook Name:: ceph
# Recipe:: radosgw_apache2
#
# Copyright 2011, DreamHost Web Hosting
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# For EL, delete the current fastcgi configuration
# and set the correct owners for dirs and logs
# d_owner = d_group = 'root'
# if node['platform_family'] == 'rhel'
# file "#{node['apache']['dir']}/conf.d/fastcgi.conf" do
# action :delete
# backup false
# end
# d_owner = d_group = 'apache'
# end
# %W(/var/run/ceph
# /var/lib/ceph/radosgw/ceph-radosgw.#{node['hostname']}
# /var/lib/apache2/
# ).each do |dir|
# directory dir do
# owner d_owner
# group d_group
# mode '0755'
# recursive true
# action :create
# end
# end
include_recipe 'ceph::_common'
include_recipe 'ceph::_common_install'
include_recipe 'ceph::radosgw_apache2_repo'
node['ceph']['radosgw']['apache2']['packages'].each do |pck|
package pck
end
include_recipe 'apache2'
d_owner = d_group = 'root'
puts "**************************rgw_platform_family: #{node['platform_family']}"
if node['platform_family'] == 'rhel'
file "#{node['apache']['dir']}/conf.d/fastcgi.conf" do
action :delete
backup false
end
d_owner = d_group = 'apache'
end
%W(/var/lib/ceph/radosgw/ceph-radosgw.#{node['hostname']}
/var/lib/apache2/
/var/log/ceph/
).each do |dir|
directory dir do
owner d_owner
group d_group
mode '0755'
recursive true
action :create
end
end
apache_module 'fastcgi' do
conf true
end
apache_module 'rewrite' do
conf false
end
web_app 'rgw' do
template 'rgw.conf.erb'
server_name node['ceph']['radosgw']['api_fqdn']
admin_email node['ceph']['radosgw']['admin_email']
ceph_rgw_addr node['ceph']['radosgw']['rgw_addr']
end
service 'apache2' do
action :restart
end
template '/var/www/s3gw.fcgi' do
source 's3gw.fcgi.erb'
owner 'root'
group 'root'
mode '0755'
variables(
:ceph_rgw_client => "client.radosgw.#{node['hostname']}"
)
end

View File

@ -0,0 +1,33 @@
if node['ceph']['radosgw']['use_apache_fork'] == true
if node.platform_family?('debian') &&
%w(precise quantal raring saucy trusty squeeze wheezy).include?(node['lsb']['codename'])
apt_repository 'ceph-apache2' do
repo_name 'ceph-apache2'
uri "http://gitbuilder.ceph.com/apache2-deb-#{node['lsb']['codename']}-x86_64-basic/ref/master"
distribution node['lsb']['codename']
components ['main']
key 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/autobuild.asc'
end
apt_repository 'ceph-modfastcgi' do
repo_name 'ceph-modfastcgi'
uri "http://gitbuilder.ceph.com/libapache-mod-fastcgi-deb-#{node['lsb']['codename']}-x86_64-basic/ref/master"
distribution node['lsb']['codename']
components ['main']
key 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/autobuild.asc'
end
elsif (node.platform_family?('fedora') && [18, 19].include?(node['platform_version'].to_i)) ||
(node.platform_family?('rhel') && [6].include?(node['platform_version'].to_i))
platform_family = node['platform_family']
platform_version = node['platform_version'].to_i
yum_repository 'ceph-apache2' do
baseurl "http://gitbuilder.ceph.com/apache2-rpm-#{node['platform']}#{platform_version}-x86_64-basic/ref/master"
gpgkey node['ceph'][platform_family]['dev']['repository_key']
end
yum_repository 'ceph-modfastcgi' do
baseurl "http://gitbuilder.ceph.com/mod_fastcgi-rpm-#{node['platform']}#{platform_version}-x86_64-basic/ref/master"
gpgkey node['ceph'][platform_family]['dev']['repository_key']
end
else
Log.info("Ceph's Apache and Apache FastCGI forks not available for this distribution")
end
end

View File

@ -0,0 +1,5 @@
include_recipe 'ceph::_common_install'
node['ceph']['radosgw']['packages'].each do |pck|
package pck
end

View File

@ -0,0 +1,8 @@
case node['platform_family']
when 'debian'
include_recipe 'ceph::apt'
when 'rhel', 'suse', 'fedora'
include_recipe 'ceph::rpm'
else
fail 'not supported'
end

View File

@ -0,0 +1,37 @@
platform_family = node['platform_family']
case platform_family
when 'rhel'
include_recipe 'yum-epel' if node['ceph']['el_add_epel']
end
branch = node['ceph']['branch']
if branch == 'dev' && platform_family != 'centos' && platform_family != 'fedora'
fail "Dev branch for #{platform_family} is not yet supported"
end
package 'yum-plugin-priorities'
yum_repository 'ceph' do
baseurl node['ceph'][platform_family][branch]['repository']
gpgkey node['ceph'][platform_family][branch]['repository_key']
priority '1'
end
yum_repository 'ceph-extra' do
baseurl node['ceph'][platform_family]['extras']['repository']
gpgkey node['ceph'][platform_family]['extras']['repository_key']
priority '1'
only_if { node['ceph']['extras_repo'] }
end
package 'parted' # needed by ceph-disk-prepare to run partprobe
package 'hdparm' # used by ceph-disk activate
package 'xfsprogs' # needed by ceph-disk-prepare to format as xfs
if node['platform_family'] == 'rhel' && node['platform_version'].to_f > 6
package 'btrfs-progs' # needed to format as btrfs, in the future
end
if node['platform_family'] == 'rhel' && node['platform_version'].to_f < 7
package 'python-argparse'
end

View File

@ -0,0 +1,51 @@
#
# Author:: Kyle Bader <kyle.bader@dreamhost.com>
# Cookbook Name:: ceph
# Recipe:: radosgw
#
# Copyright 2011, DreamHost Web Hosting
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
node.default['ceph']['extras_repo'] = true
case node['platform_family']
when 'debian'
packages = %w(
tgt
)
when 'rhel', 'fedora'
packages = %w(
scsi-target-utils
)
end
packages.each do |pkg|
package pkg do
action :upgrade
end
end
include_recipe 'ceph::conf'
# probably needs the key
service 'tgt' do
if node['platform'] == 'ubuntu'
# The ceph version of tgt does not provide an Upstart script
provider Chef::Provider::Service::Init::Debian
service_name 'tgt'
else
service_name 'tgt'
end
supports :restart => true
action [:enable, :start]
end

View File

@ -0,0 +1,24 @@
actions :add
default_action :add
attribute :name, :kind_of => String, :name_attribute => true
attribute :caps, :kind_of => Hash, :default => { 'mon' => 'allow r', 'osd' => 'allow r' }
# Whether to store the secret in a keyring file or a plain secret file
attribute :as_keyring, :kind_of => [TrueClass, FalseClass], :default => true
# what the key should be called in the ceph cluster
# defaults to client.#{name}.#{hostname}
attribute :keyname, :kind_of => String
# where the key should be saved
# defaults to /etc/ceph/ceph.client.#{name}.#{hostname}.keyring if as_keyring
# defaults to /etc/ceph/ceph.client.#{name}.#{hostname}.secret if not as_keyring
attribute :filename, :kind_of => String
# key file access creds
attribute :owner, :kind_of => String, :default => 'root'
attribute :group, :kind_of => String, :default => 'root'
attribute :mode, :kind_of => [Integer, String], :default => '00640'
attr_accessor :key, :caps_match
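# A minimal usage sketch (illustrative only; the radosgw recipe in this change
# makes an equivalent call):
#
#   ceph_client 'radosgw' do
#     caps('mon' => 'allow rw', 'osd' => 'allow rwx')
#   end
#
# With as_keyring left at its default of true, this writes the key to
# /etc/ceph/ceph.client.radosgw.<hostname>.keyring with the default
# root:root ownership and mode 00640.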

View File

@ -0,0 +1,2 @@
[client.<%= @name -%>]
key = <%= @key %>

View File

@ -0,0 +1,59 @@
[global]
fsid = <%= node["ceph"]["config"]["fsid"] %>
mon initial members = <%= node["ceph"]["config"]["mon_initial_members"] %>
mon host = <%= @mon_addresses.sort.join(', ') %>
<% if (! node['ceph']['config']['global'].nil?) -%>
<% node['ceph']['config']['global'].sort.each do |k, v| %>
<%= k %> = <%= v %>
<% end %>
<% end -%>
<% if (! node['ceph']['config']['osd'].nil?) -%>
[osd]
<% node['ceph']['config']['osd'].sort.each do |k, v| %>
<%= k %> = <%= v %>
<% end %>
<% end -%>
<% if (! node['ceph']['config']['mon'].nil?) -%>
[mon]
<% node['ceph']['config']['mon'].sort.each do |k, v| %>
<%= k %> = <%= v %>
<% end %>
<% end -%>
<% if (! node['ceph']['config']['mds'].nil?) -%>
[mds]
<% node['ceph']['config']['mds'].sort.each do |key, value| -%>
<%= key %> = <%= value %>
<% end -%>
<% end -%>
<% if (@is_rgw) -%>
[client.radosgw.<%= node['hostname'] %>]
host = <%= node['hostname'] %>
rgw socket path = /var/run/ceph/radosgw.<%= node['hostname'] %>
keyring = /etc/ceph/ceph.client.radosgw.<%= node['hostname'] %>.keyring
log file = /var/log/ceph/radosgw.log
<% if (! node['ceph']['config']['rgw'].nil?) -%>
<% node['ceph']['config']['rgw'].sort.each do |k, v| %>
<%= k %> = <%= v %>
<% end %>
<% end -%>
<% if (@is_keystone_integration) -%>
<% if (! node['ceph']['config']['keystone'].nil?) -%>
<% node['ceph']['config']['keystone'].sort.each do |k, v| %>
<%= k %> = <%= v %>
<% end %>
<% end -%>
<% end -%>
<% end -%>
<% node['ceph']['config-sections'].sort.each do |name, sect| %>
[<%= name %>]
<% sect.sort.each do |k, v| %>
<%= k %> = <%= v %>
<% end %>
<% end %>

View File

@ -0,0 +1,6 @@
<IfModule mod_fastcgi.c>
AddHandler fastcgi-script .fcgi
#FastCgiWrapper /usr/lib/apache2/suexec
FastCgiIpcDir /var/lib/apache2/fastcgi
FastCgiWrapper off
</IfModule>

View File

@ -0,0 +1,39 @@
<% if node['ceph']['radosgw']['rgw_port'] -%>
FastCgiExternalServer /var/www/s3gw.fcgi -host 127.0.0.1:<%= node['ceph']['radosgw']['rgw_port'] %>
<% else -%>
FastCgiExternalServer /var/www/s3gw.fcgi -socket /var/run/ceph/radosgw.<%= node['hostname'] %>
<% end -%>
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\" \"%{Host}i\"" proxy_combined
LogFormat "%{X-Forwarded-For}i %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\" \"%{Host}i\"" proxy_debug
<VirtualHost <%= node['ceph']['radosgw']['rgw_addr'] %>>
ServerName <%= @params[:server_name] %>
<% if node['ceph']['radosgw']['api_aliases'] -%>
<% node['ceph']['radosgw']['api_aliases'].each do |api_alias| -%>
ServerAlias <%= api_alias %>
<% end -%>
<% end -%>
ServerAdmin <%= node["ceph"]["radosgw"]["admin_email"] %>
DocumentRoot /var/www/
RewriteEngine On
RewriteRule ^/(.*) /s3gw.fcgi?%{QUERY_STRING} [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
<IfModule mod_fastcgi.c>
<Directory /var/www/>
Options +ExecCGI
AllowOverride All
SetHandler fastcgi-script
Order allow,deny
Allow from all
AuthBasicAuthoritative Off
</Directory>
</IfModule>
AllowEncodedSlashes On
ErrorLog /var/log/<%= node['apache']['package'] %>/error.log
CustomLog /var/log/<%= node['apache']['package'] %>/rgw-access.log proxy_combined
ServerSignature Off
</VirtualHost>

View File

@ -0,0 +1,2 @@
#!/bin/sh
exec /usr/bin/radosgw -c /etc/ceph/ceph.conf -n <%= @ceph_rgw_client %>

View File

@ -0,0 +1,10 @@
Vagrant.configure("2") do |config|
config.vm.box = "<%= config[:box] %>"
config.vm.box_url = "<%= config[:box_url ]%>"
(0..2).each do |d|
config.vm.provider :virtualbox do |vb|
vb.customize [ "createhd", "--filename", "disk-#{d}", "--size", "1000" ]
vb.customize [ "storageattach", :id, "--storagectl", "IDE Controller", "--device", (1+d)/2, "--port", (1+d)%2, "--type", "hdd", "--medium", "disk-#{d}.vdi" ]
end
end
end

View File

@ -0,0 +1,19 @@
@test "ceph is running" {
ceph -s | grep HEALTH
}
@test "ceph is healthy" {
ceph -s | grep HEALTH_OK
}
@test "cephfs is mounted" {
mount | grep 'type ceph'
}
@test "radosgw is running" {
ps auxwww | grep radosg[w]
}
@test "apache is running and listening" {
netstat -ln | grep -E '^\S+\s+\S+\s+\S+\s+\S+:80\s+'
}

View File

@ -18,11 +18,13 @@
#
# node['haproxy']['backend'] to decide where service backend sources come from
# if 'prefeed', all services' backend info will be chosen from databag
# if 'prefeed', all services' backend info will be chosen from attribute
# 'node_mapping'; 'prefeed' is suitable for stable and independent services
# if 'autofeed', services' backend info will automatically be learned
# from it's chef server.
default['haproxy']['choose_backend'] = 'prefeed'
# from its chef server.
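# A hedged sketch of the 'prefeed' data this cookbook reads from
# node['haproxy']['node_mapping']; the hostnames, role and IPs below are
# placeholders, not defaults shipped by this cookbook:
#
#   default['haproxy']['node_mapping'] = {
#     "controller01" => { "roles" => ["os-identity"], "management_ip" => "10.0.0.11" },
#     "controller02" => { "roles" => ["os-identity"], "management_ip" => "10.0.0.12" }
#   }
#
# With 'autofeed' (the new default) backends are instead discovered via
# Chef search on each service's role.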
default['haproxy']['log']['facilities'] = 'local4'
default['haproxy']['log']['file'] = '/var/log/haproxy.log'
default['haproxy']['choose_backend'] = 'autofeed'
default['haproxy']['enable_default_http'] = true
default['haproxy']['incoming_address'] = "0.0.0.0"
default['haproxy']['incoming_port'] = 80
@ -31,7 +33,8 @@ default['haproxy']['members'] = [{
"ipaddress" => "127.0.0.1",
"port" => 4000,
"ssl_port" => 4000
}, {
},
{
"hostname" => "localhost",
"ipaddress" => "127.0.0.1",
"port" => 4001,
@ -56,13 +59,15 @@ default['haproxy']['stats_socket_user'] = node['haproxy']['user']
default['haproxy']['stats_socket_group'] = node['haproxy']['group']
default['haproxy']['pid_file'] = "/var/run/haproxy.pid"
default['haproxy']['defaults_options'] = ["tcpka", "httpchk", "tcplog", "httplog"]
default['haproxy']['defaults_options'] = ["tcpka", "httpchk", "tcplog", "httplog", "forceclose", "redispatch"]
default['haproxy']['x_forwarded_for'] = false
default['haproxy']['defaults_timeouts']['connect'] = "10s"
default['haproxy']['defaults_timeouts']['check'] = "10s"
default['haproxy']['defaults_timeouts']['queue'] = "100s"
default['haproxy']['defaults_timeouts']['client'] = "100s"
default['haproxy']['defaults_timeouts']['server'] = "100s"
default['haproxy']['defaults_timeouts']['connect'] = "30s"
default['haproxy']['defaults_timeouts']['check'] = "30s"
#default['haproxy']['defaults_timeouts']['queue'] = "100s"
default['haproxy']['defaults_timeouts']['client'] = "300s"
default['haproxy']['defaults_timeouts']['server'] = "300s"
default['haproxy']['tune']['bufsize'] = 1000000
default['haproxy']['tune']['maxrewrite'] = 1024
default['haproxy']['cookie'] = nil
@ -70,9 +75,9 @@ default['haproxy']['user'] = "haproxy"
default['haproxy']['group'] = "haproxy"
default['haproxy']['global_max_connections'] = 8192
default['haproxy']['member_max_connections'] = 100
default['haproxy']['frontend_max_connections'] = 2000
default['haproxy']['frontend_ssl_max_connections'] = 2000
default['haproxy']['member_max_connections'] = 20000
default['haproxy']['frontend_max_connections'] = 4096
default['haproxy']['frontend_ssl_max_connections'] = 4096
default['haproxy']['install_method'] = 'package'
default['haproxy']['conf_dir'] = '/etc/haproxy'
@ -88,7 +93,43 @@ default['haproxy']['source']['use_pcre'] = false
default['haproxy']['source']['use_openssl'] = false
default['haproxy']['source']['use_zlib'] = false
default['haproxy']['enabled_services'] = []
default['haproxy']['enabled_services'] = [
"dashboard_http",
"dashboard_https",
"glance_api",
"keystone_admin",
"keystone_public_internal",
"nova_compute_api",
"nova_metadata_api",
"novncproxy",
"cinder_api",
"neutron_api"
]
default['haproxy']['roles'] = {
"os-identity" => [
"keystone_admin",
"keystone_public_internal"
],
"os-dashboard" => [
"dashboard_http",
"dashboard_https"
],
"os-compute-controller" => [
"nova_compute_api",
"nova_metadata_api",
"novncproxy"
],
"os-block-storage-controller" => [
"cinder_api"
],
"os-network-server" => [
"neutron_api"
],
"os-image" => [
"glance_api"
]
}
default['haproxy']['listeners'] = {
'listen' => {},
@ -96,118 +137,106 @@ default['haproxy']['listeners'] = {
'backend' => {}
}
default['haproxy']['services'] = {
"dashboard_http" => {
"role" => "os-compute-single-controller",
"role" => "os-dashboard",
"frontend_port" => "80",
"backend_port" => "80",
"balance" => "source",
"options" => [ "capture cookie vgnvisitor= len 32", \
"cookie SERVERID insert indirect nocache", \
"mode http", \
"option forwardfor", \
"option httpchk", \
"option httpclose", \
"option http-server-close", \
'rspidel ^Set-cookie:\ IP='
# "appsession csrftoken len 42 timeout 1h"
]
},
"dashboard_https" => {
"role" => "os-compute-single-controller",
"role" => "os-dashboard",
"frontend_port" => "443",
"backend_port" => "443",
"balance" => "source",
"options" => [ "option tcpka", "option httpchk", "option tcplog"]
},
"glance_api" => {
"role" => "os-compute-single-controller",
"role" => "os-image-api",
"frontend_port" => "9292",
"backend_port" => "9292",
"balance" => "source",
"options" => [ "option tcpka", "option httpchk", "option tcplog"]
"options" => [ "option tcpka", "option httpchk", "option tcplog", "balance leastconn" ]
},
"glance_registry_cluster" => {
"role" => "os-compute-single-controller",
"role" => "os-image-registry",
"frontend_port" => "9191",
"backend_port" => "9191",
"balance" => "source",
"options" => [ "option tcpka", "option httpchk", "option tcplog"]
"options" => [ "option tcpka", "option httpchk", "option tcplog", "balance leastconn" ]
},
"keystone_admin" => {
"role" => "os-compute-single-controller",
"role" => "os-identity",
"frontend_port" => "35357",
"backend_port" => "35357",
"balance" => "source",
"options" => [ "option tcpka", "option httpchk", "option tcplog"]
"options" => [ "option tcpka", "option httpchk", "option tcplog", "balance leastconn" ]
},
"keystone_public_internal" => {
"role" => "os-compute-single-controller",
"role" => "os-identity",
"frontend_port" => "5000",
"backend_port" => "5000",
"balance" => "source",
"options" => [ "option tcpka", "option httpchk", "option tcplog"]
"options" => [ "option tcpka", "option httpchk", "option tcplog", "balance leastconn" ]
},
"nova_ec2_api" => {
"role" => "os-compute-single-controller",
"role" => "os-compute-api",
"frontend_port" => "8773",
"backend_port" => "8773",
"balance" => "source",
"options" => [ "option tcpka", "option httpchk", "option tcplog"]
},
"nova_compute_api" => {
"role" => "os-compute-single-controller",
"role" => "os-compute-api",
"frontend_port" => "8774",
"backend_port" => "8774",
"balance" => "source",
"options" => [ "option tcpka", "option httpchk", "option tcplog"]
"options" => [ "option tcpka", "option httpchk", "option tcplog", "balance leastconn"]
},
"novncproxy" => {
"role" => "os-compute-single-controller",
"role" => "os-compute-vncproxy",
"frontend_port" => "6080",
"backend_port" => "6080",
"balance" => "source",
"options" => [ "option tcpka", "option http-server-close", "option tcplog"]
"balance" => "leastconn",
#"balance" => "source",
"options" => [ "option tcpka", "option http-server-close", "option tcplog", "balance leastconn"]
},
"nova_metadata_api" => {
"role" => "os-compute-single-controller",
"role" => "os-compute-api-metadata",
"frontend_port" => "8775",
"backend_port" => "8775",
"balance" => "source",
"options" => [ "option tcpka", "option httpchk", "option tcplog"]
"options" => [ "option tcpka", "option httpchk", "option tcplog", "balance leastconn"]
},
"cinder_api" => {
"role" => "os-compute-single-controller",
"role" => "os-block-storage-api",
"frontend_port" => "8776",
"backend_port" => "8776",
"balance" => "source",
"options" => [ "option tcpka", "option httpchk", "option tcplog"]
"options" => [ "option tcpka", "option httpchk", "option tcplog", "balance leastconn"]
},
"ceilometer_api" => {
"role" => "os-compute-single-controller",
"frontend_port" => "8777",
"backend_port" => "8777",
"balance" => "source",
"options" => [ "option tcpka", "option httpchk", "option tcplog"]
},
"spice" => {
"role" => "os-compute-single-controller",
"frontend_port" => "6082",
"backend_port" => "6082",
"balance" => "source",
"options" => [ "option tcpka", "option httpchk", "option tcplog"]
},
"neutron_api" => {
"role" => "os-compute-single-controller",
"role" => "os-network-server",
"frontend_port" => "9696",
"backend_port" => "9696",
"balance" => "source",
"options" => [ "option tcpka", "option httpchk", "option tcplog"]
"options" => [ "option tcpka", "option httpchk", "option tcplog", "balance leastconn"]
},
"swift_proxy" => {
"role" => "os-compute-single-controller",
"frontend_port" => "8080",
"backend_port" => "8080",
"balance" => "source",
"options" => [ "option tcpka", "option httpchk", "option tcplog"]
}
}

View File

@ -14,7 +14,11 @@ action :create do
listener << "balance #{new_resource.balance}" unless new_resource.balance.nil?
listener << "mode #{new_resource.mode}" unless new_resource.mode.nil?
listener += new_resource.servers.map {|server| "server #{server}" }
listener += new_resource.servers.compact.map { |server| "server #{server}" }
node.default['haproxy']['listeners'][new_resource.type][new_resource.name] = listener
end

View File

@ -17,33 +17,14 @@
# limitations under the License.
#
defaultbag = "openstack"
if !Chef::DataBag.list.key?(defaultbag)
Chef::Application.fatal!("databag '#{defaultbag}' doesn't exist.")
return
end
myitem = node.attribute?('cluster')? node['cluster']:"env_default"
if !search(defaultbag, "id:#{myitem}")
Chef::Application.fatal!("databagitem '#{myitem}' doesn't exist.")
return
end
mydata = data_bag_item(defaultbag, myitem)
if mydata['ha']['status'].eql?('enable')
node.set['haproxy']['enabled_services'] = nil
node.set['haproxy']['incoming_address'] = mydata['ha']['haproxy']['vip']
mydata['ha']['haproxy']['roles'].each do |role, services|
services.each do |service|
node.set['haproxy']['services'][service]['role'] = role
unless node['haproxy']['enabled_services'].include?(service)
# node['haproxy']['enabled_services'] << service
node.set['haproxy']['enabled_services'] = node['haproxy']['enabled_services'] + [service]
end
node['haproxy']['roles'].each do |role, services|
services.each do |service|
node.set['haproxy']['services'][service]['role'] = role
unless node['haproxy']['enabled_services'].include?(service)
# node['haproxy']['enabled_services'] << service
node.set['haproxy']['enabled_services'] = node['haproxy']['enabled_services'] + [service]
end
node.save
end
end
@ -54,23 +35,28 @@ node['haproxy']['services'].each do |name, service|
if node['haproxy']['choose_backend'].eql?("prefeed")
pool_members = []
mydata['node_mapping'].each do |nodename, nodeinfo|
if nodeinfo['roles'].include?(service['role'])
pool_members << nodename
if node['haproxy'].has_attribute?(:node_mapping)
node['haproxy']['node_mapping'].each do |nodename, nodeinfo|
if nodeinfo['roles'].include?(service['role'])
pool_members << nodename
end
end
end
else
pool_members = search(:node, "run_list:role\\[#{service['role']}\\] AND chef_environment:#{node.chef_environment}") || []
Chef::Log.info("===== search run_list:role\\[#{service['role']}\\] AND chef_environment:#{node.chef_environment}")
# load balancer may be in the pool
pool_members << node if node.run_list.roles.include?(service[:role])
pool_members = pool_members.sort_by { |node| node.name } unless pool_members.empty?
end
# we prefer connecting via local_ipv4 if
# pool members are in the same cloud
# TODO refactor this logic into library...see COOK-494
pool_members.map! do |member|
Chef::Log.info("processing member ...... #{member}")
if node['haproxy']['choose_backend'].eql?("prefeed")
server_ip = mydata['node_mapping']["#{member}"]['management_ip']
server_ip = node['haproxy']['node_mapping']["#{member}"]['management_ip']
{:ipaddress => server_ip, :hostname => member}
else
server_ip = begin
@ -92,10 +78,12 @@ node['haproxy']['services'].each do |name, service|
pool = service[:options]
servers = pool_members.uniq.map do |s|
# novncproxy cannot be health checked
if name.eql?("novncproxy")
"#{s[:hostname]} #{s[:ipaddress]}:#{service[:backend_port]}"
else
"#{s[:hostname]} #{s[:ipaddress]}:#{service[:backend_port]} check inter 2000 rise 2 fall 5"
if s[:hostname] and s[:ipaddress]
if name.eql?("novncproxy")
"#{s[:hostname]} #{s[:ipaddress]}:#{service[:backend_port]}"
else
"#{s[:hostname]} #{s[:ipaddress]}:#{service[:backend_port]} check inter 30000 fastinter 1000 rise 2 fall 5"
end
end
end
@ -135,3 +123,30 @@ service "haproxy" do
supports :restart => true, :status => true, :reload => true
action [:enable, :start]
end
# Enable haproxy log to file
service "rsyslog" do
supports :status => true, :restart => true, :start => true, :stop => true
action :nothing
end
ruby_block "enable haproxy log" do
block do
fe = Chef::Util::FileEdit.new('/etc/rsyslog.conf')
fe.search_file_replace_line(/^\#\$ModLoad\s+imudp/, '$ModLoad imudp')
fe.write_file
fe.search_file_replace_line(/^\#\$UDPServerRun\s+514/, '$UDPServerRun 514')
fe.write_file
fe.search_file_replace_line(/^\*.emerg\s+\*/, "#*.emerg *")
fe.write_file
haproxylog = "#{node['haproxy']['log']['facilities']}.* \
#{node['haproxy']['log']['file']}"
if !::File.readlines('/etc/rsyslog.conf').grep(/#{haproxylog}/).any?
fe.insert_line_after_match(/^local7\./, haproxylog)
fe.write_file
end
end
action :nothing
subscribes :run, "template[#{node['haproxy']['conf_dir']}/haproxy.cfg]", :immediately
notifies :restart, "service[rsyslog]", :delayed
end

View File

@ -1,12 +1,14 @@
global
#log 127.0.0.1 local0
log 127.0.0.1 local4 notice
log 127.0.0.1 local4 info
log 127.0.0.1 <%= node['haproxy']['log']['facilities'] -%> notice
log 127.0.0.1 <%= node['haproxy']['log']['facilities'] -%> info
log-send-hostname
daemon
maxconn <%= node['haproxy']['global_max_connections'] %>
#debug
#quiet
spread-checks 5
nbproc 8
# debug
# quiet
spread-checks 5
tune.bufsize <%= node['haproxy']['tune']['bufsize'] %>
tune.maxrewrite <%= node['haproxy']['tune']['maxrewrite'] %>
user <%= node['haproxy']['user'] %>
group <%= node['haproxy']['group'] %>
<% if node['haproxy']['enable_stats_socket'] -%>
@ -17,6 +19,7 @@ defaults
log global
mode http
retries 3
maxconn <%= node['haproxy']['member_max_connections'] %>
<% @defaults_timeouts.sort.map do | value, time | -%>
timeout <%= value %> <%= time %>
<% end -%>
@ -26,6 +29,13 @@ defaults
balance <%= node['haproxy']['balance_algorithm'] %>
# Set up application listeners here.
listen stats
bind 0.0.0.0:8080
mode http
stats refresh 10s
stats enable
stats uri /
stats realm Strictly\ Private
<% node['haproxy']['listeners'].each do |type, listeners | %>
<% listeners.each do |name, listen| %>

View File

@ -5,8 +5,8 @@ default['keepalived']['global']['smtp_server'] = '127.0.0.1'
default['keepalived']['global']['smtp_connect_timeout'] = 30
default['keepalived']['global']['router_id'] = 'DEFAULT_ROUT_ID'
default['keepalived']['global']['router_ids'] = {
"centos-10-145-88-152" => "lsb01",
"centos-10-145-88-153" => "lsb02"
# "centos-10-145-88-152" => "lsb01",
# "centos-10-145-88-153" => "lsb02"
} # node name based mapping
default['keepalived']['check_scripts'] = {
"haproxy" => {
@ -19,22 +19,23 @@ default['keepalived']['instance_defaults']['state'] = 'MASTER'
default['keepalived']['instance_defaults']['priority'] = 100
default['keepalived']['instance_defaults']['virtual_router_id'] = 10
default['keepalived']['vip'] = {
"eth0" => "10.145.88.161"
"ipaddress" => "127.0.0.1",
"interface" => "eth0"
}
default['keepalived']['instances'] = {
"openstack" => {
"virtual_router_id" => "50",
"advert_int" => "1",
"priorities" => {
"centos-10-145-88-152" => 110,
"centos-10-145-88-153" => 101
# "centos-10-145-88-152" => 110,
# "centos-10-145-88-153" => 101
},
"states" => {
"centos-10-145-88-152" => "BACKUP",
"centos-10-145-88-153" => "MASTER"
# "centos-10-145-88-152" => "BACKUP",
# "centos-10-145-88-153" => "MASTER"
},
"interface" => "eth0",
"ip_addresses" => ["#{node['keepalived']['vip']['eth0']} dev eth0"],
"interface" => "#{node['keepalived']['vip']['interface']}",
"ip_addresses" => ["#{node['keepalived']['vip']['ipaddress']} dev #{node['keepalived']['vip']['interface']}"],
"track_script" => "haproxy"
}
}

View File

@ -0,0 +1,33 @@
def keepalived_master(role, tag, chef_environment = node.chef_environment)
chef_environment = chef_environment || node.chef_environment
master = search(:node, "run_list:role\\[#{role}\\] AND \
chef_environment:#{chef_environment} AND \
tags:#{tag}") || []
master = master.sort_by { |node| node.name } unless master.empty?
if master.empty?
nodes = search(:node, "run_list:role\\[#{role}\\] AND \
chef_environment:#{chef_environment}") || []
if nodes.empty?
Chef::Log.error("Cannot find the role #{role} in the environment #{chef_environment}\n")
end
nodes = nodes.sort_by { |node| node.name } unless nodes.empty?
if node.name.eql?(nodes.first.name)
node.tags << tag unless node.tags.include?(tag)
node.save
end
return nodes.first
else
if master.length.eql?(1)
return master.first
else
head, *tail = master
tail.each do |m|
Chef::Log.debug("Removing stale #{tag} tag from #{m.name}")
m.tags.delete(tag)
m.save
end
return head
end
end
end
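# Usage sketch (mirrors the call made in the keepalived default recipe later
# in this change; the log line is illustrative only):
#
#   master_node = keepalived_master('os-ha', 'keepalived_default_master')
#   Chef::Log.info("keepalived master elected: #{master_node.name}")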

View File

@ -8,3 +8,7 @@ version "1.2.0"
supports "ubuntu"
recipe "keepalived", "Installs and configures keepalived"
depends 'apt', '>= 2.3.8'
depends 'yum', '>= 3.1.4'
depends 'yum-epel', '>= 0.3.4'

View File

@ -19,37 +19,25 @@
require 'chef/util/file_edit'
defaultbag = "openstack"
if !Chef::DataBag.list.key?(defaultbag)
Chef::Application.fatal!("databag '#{defaultbag}' doesn't exist.")
return
end
myitem = node.attribute?('cluster')? node['cluster']:"env_default"
if !search(defaultbag, "id:#{myitem}")
Chef::Application.fatal!("databagitem '#{myitem}' doesn't exist.")
return
end
mydata = data_bag_item(defaultbag, myitem)
if mydata['ha']['status'].eql?('enable')
mydata['ha']['keepalived']['router_ids'].each do |nodename, routerid|
node.override['keepalived']['global']['router_ids']["#{nodename}"] = routerid
# The following code block tries to automatically elect a master node;
# however, it is not polished very well and currently only supports two
# keepalived nodes. If you are going to build a keepalived cluster with
# three or more nodes, either polish it or use your own recipe to handle
# the situation.
master_node = keepalived_master('os-ha', 'keepalived_default_master')
instance = node.set['keepalived']['instances']['openstack']
router_ids = node.set['keepalived']['global']['router_ids']
if node.name.eql?(master_node.name)
if instance['states']["#{node.name}"].empty?
router_ids["#{node.name}"] = 'lsb01'
instance['priorities']["#{node.name}"] = '110'
instance['states']["#{node.name}"] = 'MASTER'
end
mydata['ha']['keepalived']['instance_name']['priorities'].each do |nodename, priority|
node.override['keepalived']['instances']['openstack']['priorities']["#{nodename}"] = priority
else
if instance['states']["#{node.name}"].empty?
router_ids["#{node.name}"] = 'lsb02'
instance['priorities']["#{node.name}"] = '101'
instance['states']["#{node.name}"] = 'BACKUP'
end
mydata['ha']['keepalived']['instance_name']['states'].each do |nodename, status|
node.override['keepalived']['instances']['openstack']['states']["#{nodename}"] = status
end
interface = node['keepalived']['instances']['openstack']['interface']
node.override['keepalived']['instances']['openstack']['ip_addresses'] = [
"#{mydata['ha']['keepalived']['instance_name']['vip']} dev #{interface}" ]
end
case node["platform_family"]
@ -86,7 +74,6 @@ if node['keepalived']['shared_address']
block do
fe = Chef::Util::FileEdit.new('/etc/sysctl.conf')
fe.search_file_delete_line(/^net.ipv4.ip_nonlocal_bind\s*=\s*0/)
fe.write_file
fe.insert_line_if_no_match(/^net.ipv4.ip_nonlocal_bind\s*=\s*1/,
"net.ipv4.ip_nonlocal_bind = 1")
fe.write_file

View File

@ -17,11 +17,12 @@
# limitations under the License.
#
default['memcached']['memory'] = 256
default['memcached']['memory'] = (node['memory']['total'].to_i/1024/3) < 8192 ? node['memory']['total'].to_i/1024/3 : 8192
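# node['memory']['total'] is reported in kB, so the expression above gives
# roughly one third of system RAM in MB, capped at 8192 MB.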
default['memcached']['port'] = 11211
default['memcached']['udp_port'] = 11211
default['memcached']['listen'] = '0.0.0.0'
default['memcached']['maxconn'] = 2048
default['memcached']['bind_interface'] = nil
default['memcached']['maxconn'] = 4096
default['memcached']['max_object_size'] = '1m'
default['memcached']['logfilename'] = 'memcached.log'

View File

@ -20,6 +20,10 @@
# include epel on redhat/centos 5 and below in order to get the memcached packages
include_recipe 'yum-epel' if node['platform_family'] == 'rhel' && node['platform_version'].to_i == 5
class ::Chef::Recipe # rubocop:disable Documentation
include ::Openstack
end
package 'memcached'
package 'libmemcache-dev' do
@ -44,6 +48,10 @@ service 'memcached' do
supports :status => true, :start => true, :stop => true, :restart => true, :enable => true
end
if !node['memcached']['bind_interface'].nil?
node.set['memcached']['listen'] = address_for(node['memcached']['bind_interface'])
end
case node['platform_family']
when 'rhel', 'fedora', 'suse'
family = node['platform_family'] == 'suse' ? 'suse' : 'redhat'

View File

@ -48,10 +48,10 @@ default['mysql']['tunable']['myisam_max_sort_file_size'] = '2147483648'
default['mysql']['tunable']['myisam_repair_threads'] = '1'
default['mysql']['tunable']['myisam-recover'] = 'BACKUP'
default['mysql']['tunable']['max_allowed_packet'] = '16M'
default['mysql']['tunable']['max_connections'] = '800'
default['mysql']['tunable']['max_connections'] = '3000'
default['mysql']['tunable']['max_connect_errors'] = '10'
default['mysql']['tunable']['concurrent_insert'] = '2'
default['mysql']['tunable']['connect_timeout'] = '10'
default['mysql']['tunable']['connect_timeout'] = '60'
default['mysql']['tunable']['tmp_table_size'] = '32M'
default['mysql']['tunable']['max_heap_table_size'] = node['mysql']['tunable']['tmp_table_size']
default['mysql']['tunable']['bulk_insert_buffer_size'] = node['mysql']['tunable']['tmp_table_size']
@ -102,7 +102,8 @@ default['mysql']['tunable']['log_queries_not_using_index'] = true
default['mysql']['tunable']['log_bin_trust_function_creators'] = false
default['mysql']['tunable']['innodb_log_file_size'] = '5M'
default['mysql']['tunable']['innodb_buffer_pool_size'] = '128M'
# default['mysql']['tunable']['innodb_buffer_pool_size'] = '128M'
default['mysql']['tunable']['innodb_buffer_pool_size'] = (node['memory']['total'].to_i/1024) < 4096 ? '128M' : '4096M'
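# node['memory']['total'] is reported in kB; hosts with less than 4 GB of RAM
# keep the 128M default, larger hosts get a 4096M InnoDB buffer pool.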
default['mysql']['tunable']['innodb_buffer_pool_instances'] = '4'
default['mysql']['tunable']['innodb_additional_mem_pool_size'] = '8M'
default['mysql']['tunable']['innodb_data_file_path'] = 'ibdata1:10M:autoextend'

View File

@ -80,8 +80,8 @@ default['openstack']['block-storage']['quota_gigabytes'] = '1000'
default['openstack']['block-storage']['quota_driver'] = 'cinder.quota.DbQuotaDriver'
# Common rpc definitions
default['openstack']['block-storage']['rpc_thread_pool_size'] = 64
default['openstack']['block-storage']['rpc_conn_pool_size'] = 30
default['openstack']['block-storage']['rpc_thread_pool_size'] = 240
default['openstack']['block-storage']['rpc_conn_pool_size'] = 100
default['openstack']['block-storage']['rpc_response_timeout'] = 60
case node['openstack']['mq']['service_type']
when 'rabbitmq'
@ -209,7 +209,7 @@ default['openstack']['block-storage']['volume']['volume_group_size'] = 40
default['openstack']['block-storage']['volume']['volume_clear_size'] = 10
default['openstack']['block-storage']['volume']['volume_clear'] = 'zero'
# volume disk can be loopfile or /dev/sdb
default['openstack']['block-storage']['volume']['disk'] = 'loopfile'
default['openstack']['block-storage']['volume']['disk'] = '/dev/sdb'
default['openstack']['block-storage']['volume']['create_volume_group'] = false
default['openstack']['block-storage']['volume']['iscsi_helper'] = 'tgtadm'
default['openstack']['block-storage']['volume']['iscsi_ip_address'] = node['ipaddress']

View File

@ -24,3 +24,4 @@ depends 'openstack-identity', '~> 9.0'
depends 'openstack-image', '~> 9.0'
depends 'selinux', '>= 0.7.2'
depends 'python', '>= 1.4.6'
depends 'ceph', '>= 0.2.1'

View File

@ -0,0 +1,75 @@
# encoding: UTF-8
#
# Cookbook Name:: openstack-block-storage
# Recipe:: volume
#
# Copyright 2012, Rackspace US, Inc.
# Copyright 2012-2013, AT&T Services, Inc.
# Copyright 2013, Opscode, Inc.
# Copyright 2013, SUSE Linux Gmbh.
# Copyright 2013, IBM, Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
class ::Chef::Recipe # rubocop:disable Documentation
include ::Openstack
end
if node['openstack']['block-storage']['volume']['driver'] == 'cinder.volume.drivers.rbd.RBDDriver'
include_recipe 'ceph::_common'
include_recipe 'ceph::mon_install'
include_recipe 'ceph::conf'
cluster = 'ceph'
platform_options = node['openstack']['block-storage']['platform']
platform_options['cinder_volume_packages'].each do |pkg|
package pkg do
options platform_options['package_overrides']
action :upgrade
end
end
rbd_user = node['openstack']['block-storage']['rbd_user']
if mon_nodes.empty?
rbd_key = ""
elsif !mon_master['ceph'].has_key?('cinder-secret')
rbd_key = ""
else
rbd_key = mon_master['ceph']['cinder-secret']
end
template "/etc/ceph/ceph.client.#{rbd_user}.keyring" do
source 'ceph.client.keyring.erb'
cookbook 'openstack-common'
owner node['openstack']['block-storage']['user']
group node['openstack']['block-storage']['group']
mode '0644'
variables(
name: rbd_user,
key: rbd_key
)
end
include_recipe 'openstack-block-storage::cinder-common'
service 'cinder-volume-ceph' do
service_name platform_options['cinder_volume_service']
supports status: true, restart: true
action :restart
subscribes :restart, 'template[/etc/cinder/cinder.conf]'
end
end

View File

@ -57,31 +57,31 @@ when 'cinder.volume.drivers.netapp.iscsi.NetAppISCSIDriver'
when 'cinder.volume.drivers.rbd.RBDDriver'
# this is used in the cinder.conf template
node.override['openstack']['block-storage']['rbd_secret_uuid'] = get_secret node['openstack']['block-storage']['rbd_secret_name']
rbd_user = node['openstack']['block-storage']['rbd_user']
rbd_key = get_password 'service', node['openstack']['block-storage']['rbd_key_name']
include_recipe 'openstack-common::ceph_client'
platform_options['cinder_ceph_packages'].each do |pkg|
package pkg do
options platform_options['package_overrides']
action :upgrade
end
end
template "/etc/ceph/ceph.client.#{rbd_user}.keyring" do
source 'ceph.client.keyring.erb'
cookbook 'openstack-common'
owner node['openstack']['block-storage']['user']
group node['openstack']['block-storage']['group']
mode '0600'
variables(
name: rbd_user,
key: rbd_key
)
end
# node.override['openstack']['block-storage']['rbd_secret_uuid'] = get_secret node['openstack']['block-storage']['rbd_secret_name']
#
# rbd_user = node['openstack']['block-storage']['rbd_user']
# rbd_key = get_password 'service', node['openstack']['block-storage']['rbd_key_name']
#
# include_recipe 'openstack-common::ceph_client'
#
# platform_options['cinder_ceph_packages'].each do |pkg|
# package pkg do
# options platform_options['package_overrides']
# action :upgrade
# end
# end
#
# template "/etc/ceph/ceph.client.#{rbd_user}.keyring" do
# source 'ceph.client.keyring.erb'
# cookbook 'openstack-common'
# owner node['openstack']['block-storage']['user']
# group node['openstack']['block-storage']['group']
# mode '0600'
# variables(
# name: rbd_user,
# key: rbd_key
# )
# end
when 'cinder.volume.drivers.netapp.nfs.NetAppDirect7modeNfsDriver'
node.override['openstack']['block-storage']['netapp']['netapp_server_password'] = get_password 'service', 'netapp-filer'

View File

@ -560,6 +560,15 @@ rbd_user=<%= node["openstack"]["block-storage"]["rbd_user"] %>
rbd_secret_uuid=<%= node["openstack"]["block-storage"]["rbd_secret_uuid"] %>
#### (StrOpt) the libvirt uuid of the secret for the rbd_uservolumes
rbd_ceph_conf=<%= node["openstack"]["block-storage"]["rbd_ceph_conf"] %>
rbd_flatten_volume_from_snapshot=<%= node["openstack"]["block-storage"]["rbd_flatten_volume_from_snapshot"] %>
rbd_max_clone_depth=<%= node["openstack"]["block-storage"]["rbd_max_clone_depth"] %>
glance_api_version=<%= node["openstack"]["block-storage"]["glance_api_version"] %>
<% end %>
# volume_tmp_dir=<None>
#### (StrOpt) where to store temporary image files if the volume driver

View File

@ -132,11 +132,11 @@ default['openstack']['db']['network']['retry_interval'] = 10
# Minimum number of SQL connections to keep open in a pool
default['openstack']['db']['network']['min_pool_size'] = 1
# Maximum number of SQL connections to keep open in a pool
default['openstack']['db']['network']['max_pool_size'] = 10
default['openstack']['db']['network']['max_pool_size'] = 100
# Timeout in seconds before idle sql connections are reaped
default['openstack']['db']['network']['idle_timeout'] = 3600
# If set, use this value for max_overflow with sqlalchemy
default['openstack']['db']['network']['max_overflow'] = 20
default['openstack']['db']['network']['max_overflow'] = 100
# Verbosity of SQL debugging information. 0=None, 100=Everything
default['openstack']['db']['network']['connection_debug'] = 0
# Add python stack traces to SQL as comment strings

View File

@ -92,7 +92,8 @@ default['openstack']['zypp']['uri'] = 'http://download.opensuse.org/repositories
default['openstack']['yum']['rdo_enabled'] = true
default['openstack']['yum']['uri'] = 'http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/epel-6'
# default['openstack']['yum']['repo-key'] = 'https://raw.githubusercontent.com/redhat-openstack/rdo-release/master/RPM-GPG-KEY-RDO-Icehouse'
#default['openstack']['yum']['repo-key'] = 'https://raw.githubusercontent.com/redhat-openstack/rdo-release/master/RPM-GPG-KEY-RDO-Icehouse'
default['openstack']['yum']['repo-key'] = 'file:///etc/pki/rpm-gpg/RPM-GPG-KEY-RDO-Icehouse'
# ======================== OpenStack Endpoints ================================
#

View File

@ -0,0 +1,52 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1.4.11 (GNU/Linux)
mQINBFLKi4cBEADTxh9Xzd9Lko0D2OxFLLI9QlVEl/oTXMR24A2wKGYJxdCHabWH
wMGd+4FNNop7zKBDdp03aZGapfMihlxGYFH886xZSqalEwt88OA7WKmi2/oA98RI
2XfcnEs+J8Plk3XpS9dlrZTbKUBxn37Ouy60tJHd1gJQTI50Z1a1NwzgNaWZdmH8
eHZ+OlhWSgcGKZA94/3YFxMtnWidT7GITOYHeynnVSnFfgZwHkIbHzrCNuXsi+L8
nkl9C4E8Of19apHjthafZp3KLc2ICxfAEMnMiRoTURjzvnx2pwmZoMFYThFRjZ56
6/IXBKzreMVeYNA4xBsjPpCwr5gAkcFK8diUk0jh6wENsffG5ZkwHdGbGmBvZuqG
KytCJrwNoeudxz8Bx4Tiy/RpEOYqX65NU/ch7rdA6T4b3uhBMmohncQEEnb+BKVZ
w7E5+e56pwA2jucHLRtAEl5DMJaG1MSYsnqgyd5fUngCKRHSBW881bddmOnpoyEv
t/iQ5jbYV9F7QzSCl57qPSS0XkmEkuC6WxIhFbJtxxn4ixAn/i+LvntgV5geJ7fQ
RrM5TCtElf8rDuGmDfD2kyrVA/vSTT8CgVYN6b3+Hr7pjwGqwIXIfkG+WskZUTgY
5TOTqF2j+SQXefMyw9uHn+Hou5QmsD2XfJ2SU7J5WCIcv60BhMI0Vsz02wARAQAB
tCdyZG8taWNlaG91c2Utc2lnbiA8cmRvLWluZm9AcmVkaGF0LmNvbT6JAjgEEwEC
ACIFAlLKi4cCGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEOUL5qsOT70o
zTYQAKEHYsN4hSqBHnipRBlQj3HI50Aha0ucdYoesj5W53DXQVtrNPi9d3r59Tud
uSx/WYwifW9axPfjwpkuVyvv05ewnkmVsxZjre85nB7ZeOF8I8NEVNn+GRn21Lmt
u/f06YBg+a+ujn2h5PBrOf1hF6WSpQY7m2eKKFTya6WVpKkq3o5dWMzxs0RWfF/3
A9F52179IMAwddbH1AxGaI4RFw+gNg/+sM3OWRxZ8KvEDFFDBAjyDAF7F5t6mZK6
vZjQCYj3hPTUooxNpv9V9N7MtTdu7SZD7NMAM2xZlPFcc1qAiJQvtc+egjIl/Kho
HYn6suKIJQoqOqVuVkTs1oIIG9mQ8eH7VxxvswqZafgj+GXLWCYDPbmcSqTu0joc
SEQNvGY/atApZyA/IVAt2BOduVIwAmBMbKd9DOhlW4Neq8fxdXD+Cb3EOsz810dY
rcVv7NJmVinAg84NCwKZPsRrzIRPEEEJK1oMmab2GDRvSmYb3PoRmhs8x9q2E1q3
/Fx7PXZP/WFqn0w3OuRIZC5Ez1wbzB+lmmNUN3vaZWGrq6+o2f3lvhWfr/CU6WCx
TocRqmBXV7DXRC89BLKrgPiIi7eK+Vn4j7XSfXNvlmCHyhoYBURqZtl+lDwIDFzB
KNM2NLK/LC1wg1GveogeqifUVqxyapSAPqBWYH2TVq4FcWiDuQINBFLKi4cBEADd
cAocPyUSxli8e9E4evDUuOJmLyD32elH033Cwem7fRhowHIb1wMPZqCGAFK+aqq3
tY04Cg+sgUtmDxRUJsQmJEif8OEJ864vrLNWFKhsKe0dc92ZgIxV6JKOwlRSdWFX
4Pxdg5xLQlRfYrwNqXzCYczaMf+p5g0F21tpylIqf+tWiFWnRJ7H3OqWYhY35w2E
BzjCA3bEsg/nP5WF/beOyFv5vdusDAJKSe8xfa+tVnr+0l8vztL+GDawTy7H/CCl
LX7eQ9dXCFVRUT79CnwnWHUiz6HwK26G6AC4BAvUxV5yB4PrJCzD4GbW9XzhVbe1
U75G9vWFLxe7OQHGr6ezA273wQ2cKlKEF0RGKOYArjHJCbdHCy/mwAnzi1qehgXE
flbtthKjkMUOGKLeRQNbf2aksDzsUURBAGor+Tf3y4tnjROmzWTPfBTAQesFh21Z
Bm0IGfJxSiunCEBI1ekck+NGqoqrD6dEnREwREod8SJwbDHqpZWyb5Sh4Z7wkzHN
aAlYKbucG2XB1eFrIjsOxZUDQuSXAYuTCQ5f0LanQV9/ghfPEUGvdqy4SSQ1awkG
vD+XKQauu/VMpuYPojr3uPyBUjTi3sOIB6F38xfQyvV6hVnYYxfCHONtngqFTD80
5iWNdSsEHErGPujNLuP3Lkd+GmFItUMhI7D/ygp0DQARAQABiQIfBBgBAgAJBQJS
youHAhsMAAoJEOUL5qsOT70of5wQALtZnQOVu/IBaMY2xkeWmHGbEBaxBvwK/lO4
FKlukzl3yuJmdpNcJzrhYzyxQMF4B+HColeo+ajkEvX3hTeZWTy/FQ6Fovt/1z2O
P+oq0aBN9sHnd/KaAtTH2pz18y8nuUX/Sl2TEdpwu/aU1yXPwHz8NtAFCD76D1aB
VHg4v9DVxFbbXEIO5KSvLu39fUZ4mjkiLoWMgCPVPSrBjj0akF95oU4/XOAEE6Oq
0FfoIp1j0mWVJI9p6MS+DbcXugdgmCy2Pj3EtXz66Sp6HFI3OF/F2JhBjHssXsIK
hv6nF2v4gYEONlNkqwGUeGdngwoKLJkPzO+lTnlRI8aFOTMTFZzTDmn3V36cHkxl
+vufqTL6grEFfkhenXbi+rIyrDb52LDuK9dps46fq5DVpuTBFqN2q6bhfkHQvHF3
tsLZveZi6gl/mhkFT+1zCvLR/k19nWreb2AXjRWsxKwmUj/QA72Pos6rxx2ew39O
EjTzfcd90MovP+A6KI9qkwoE2yflJ9vI+OZ7lMn7vFKK00QJ6bMIbYPDTorNpkr0
PaebdELL/odcRw0hmCDMIxxkheP/XlZOcwVEeiu9LxFALJ/77+T97J1wp8QmHzrc
bZM9W96LlcWhjNpsb4daMIcGbebacLzQ8NlaDDJ21XSrm4HX5dHvJkalq6bpqDZN
5KmnPi+m
=6cHR
-----END PGP PUBLIC KEY BLOCK-----

View File

@ -40,6 +40,12 @@ when 'debian'
end
when 'rhel'
cookbook_file '/etc/pki/rpm-gpg/RPM-GPG-KEY-RDO-Icehouse' do
source 'RPM-GPG-KEY-RDO-Icehouse'
mode 00644
action :create
end
if node['openstack']['yum']['rdo_enabled']
repo_action = :add
include_recipe 'yum-epel'
@ -51,10 +57,9 @@ when 'rhel'
yum_repository "RDO-#{node['openstack']['release']}" do
description "OpenStack RDO repo for #{node['openstack']['release']}"
# gpgkey node['openstack']['yum']['repo-key']
gpgkey node['openstack']['yum']['repo-key']
baseurl node['openstack']['yum']['uri']
enabled true
gpgcheck false
action repo_action
end

View File

@ -33,16 +33,18 @@ default['openstack']['compute']['lock_path'] =
# Workers
# nova will default to number of cpus
default['openstack']['compute']['ec2_workers'] = nil
default['openstack']['compute']['osapi_compute_workers'] = nil
default['openstack']['compute']['osapi_compute_workers'] = [6, node['cpu']['total'].to_i].min
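# cap the number of osapi_compute workers at six or the detected CPU count,
# whichever is smaller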
default['openstack']['compute']['metadata_workers'] = nil
# The name of the Chef role that sets up the Keystone Service API
default['openstack']['compute']['identity_service_chef_role'] = 'os-identity'
# Common rpc definitions
default['openstack']['compute']['rpc_thread_pool_size'] = 64
default['openstack']['compute']['rpc_conn_pool_size'] = 30
default['openstack']['compute']['rpc_response_timeout'] = 60
default['openstack']['compute']['rpc_cast_timeout'] = 300
default['openstack']['compute']['rpc_response_timeout'] = 300
default['openstack']['compute']['rpc_thread_pool_size'] = 240
default['openstack']['compute']['rpc_conn_pool_size'] = 100
case node['openstack']['mq']['service_type']
when 'rabbitmq'
default['openstack']['compute']['rpc_backend'] = 'nova.openstack.common.rpc.impl_kombu'
@ -63,6 +65,11 @@ when 'suse'
default['openstack']['compute']['group'] = 'openstack-nova'
end
# Options defined in nova.image.glance
# Number of retries when downloading an image from glance
# the actually retries num = glance_num_retries + 1
default['openstack']['compute']['glance_num_retries'] = 2
# Logging stuff
default['openstack']['compute']['log_dir'] = '/var/log/nova'
@ -83,6 +90,7 @@ default['openstack']['compute']['network']['service_type'] = 'nova'
default['openstack']['compute']['network']['plugins'] = ['openvswitch']
# Neutron options
default["openstack"]["compute"]["network"]["neutron"]["url_timeout"] = 90
default['openstack']['compute']['network']['neutron']['network_api_class'] = 'nova.network.neutronv2.api.API'
default['openstack']['compute']['network']['neutron']['auth_strategy'] = 'keystone'
default['openstack']['compute']['network']['neutron']['admin_tenant_name'] = 'service'
@ -173,12 +181,14 @@ default['openstack']['compute']['scheduler']['default_filters'] = %W(
SameHostFilter
DifferentHostFilter)
default['openstack']['compute']['scheduler']['host_subset_size'] = 100
default['openstack']['compute']['driver'] = 'libvirt.LibvirtDriver'
default['openstack']['compute']['default_ephemeral_format'] = nil
default['openstack']['compute']['preallocate_images'] = 'none'
default['openstack']['compute']['use_cow_images'] = true
default['openstack']['compute']['vif_plugging_is_fatal'] = true
default['openstack']['compute']['vif_plugging_timeout'] = 300
default['openstack']['compute']['vif_plugging_is_fatal'] = 'True'
default['openstack']['compute']['vif_plugging_timeout'] = 360
default['openstack']['compute']['libvirt']['virt_type'] = 'kvm'
default['openstack']['compute']['libvirt']['virt_auto'] = false
@ -224,9 +234,9 @@ default['openstack']['compute']['config']['storage_availability_zone'] = 'nova'
default['openstack']['compute']['config']['default_schedule_zone'] = 'nova'
default['openstack']['compute']['config']['force_raw_images'] = false
default['openstack']['compute']['config']['allow_same_net_traffic'] = true
default['openstack']['compute']['config']['osapi_max_limit'] = 1000
default['openstack']['compute']['config']['cpu_allocation_ratio'] = 16.0
default['openstack']['compute']['config']['ram_allocation_ratio'] = 1.5
default['openstack']['compute']['config']['osapi_max_limit'] = 5000
default['openstack']['compute']['config']['cpu_allocation_ratio'] = 2.0
default['openstack']['compute']['config']['ram_allocation_ratio'] = 1.0
default['openstack']['compute']['config']['disk_allocation_ratio'] = 1.0
default['openstack']['compute']['config']['snapshot_image_format'] = 'qcow2'
default['openstack']['compute']['config']['allow_resize_to_same_host'] = false
@ -246,16 +256,16 @@ default['openstack']['compute']['config']['injected_network_template'] = '$pybas
default['openstack']['compute']['config']['volume_api_class'] = 'nova.volume.cinder.API'
# quota settings
default['openstack']['compute']['config']['quota_security_groups'] = 50
default['openstack']['compute']['config']['quota_security_group_rules'] = 20
default['openstack']['compute']['config']['quota_security_groups'] = 200
default['openstack']['compute']['config']['quota_security_group_rules'] = 200
# (StrOpt) default driver to use for quota checks (default: nova.quota.DbQuotaDriver)
default['openstack']['compute']['config']['quota_driver'] = 'nova.quota.DbQuotaDriver'
# number of instance cores allowed per project (default: 20)
default['openstack']['compute']['config']['quota_cores'] = 20
default['openstack']['compute']['config']['quota_cores'] = 200
# number of fixed ips allowed per project (this should be at least the number of instances allowed) (default: -1)
default['openstack']['compute']['config']['quota_fixed_ips'] = -1
# number of floating ips allowed per project (default: 10)
default['openstack']['compute']['config']['quota_floating_ips'] = 10
default['openstack']['compute']['config']['quota_floating_ips'] = 100
# number of bytes allowed per injected file (default: 10240)
default['openstack']['compute']['config']['quota_injected_file_content_bytes'] = 10240
# number of bytes allowed per injected file path (default: 255)
@ -263,13 +273,13 @@ default['openstack']['compute']['config']['quota_injected_file_path_bytes'] = 25
# number of injected files allowed (default: 5)
default['openstack']['compute']['config']['quota_injected_files'] = 5
# number of instances allowed per project (default: 10)
default['openstack']['compute']['config']['quota_instances'] = 10
default['openstack']['compute']['config']['quota_instances'] = 100
# number of key pairs per user (default: 100)
default['openstack']['compute']['config']['quota_key_pairs'] = 100
# number of metadata items allowed per instance (default: 128)
default['openstack']['compute']['config']['quota_metadata_items'] = 128
# megabytes of instance ram allowed per project (default: 51200)
default['openstack']['compute']['config']['quota_ram'] = 51200
default['openstack']['compute']['config']['quota_ram'] = 2048000
# disk cache modes
default['openstack']['compute']['config']['disk_cache_modes'] = nil
@ -297,6 +307,15 @@ else
default['openstack']['compute']['config']['notify_on_state_change'] = ''
end
# vncproxy settings
default['openstack']['compute']['vnc']['vncserver_listen'] = '0.0.0.0'
# nova consoleauth token backend can be 'memcache' or nil:
# 'memcache' stores consoleauth tokens in memcached and works with a
# single consoleauth service or with several of them;
# nil keeps consoleauth tokens in the service's local memory and only
# works on a single node.
default['openstack']['compute']['consoleauth']['token']['backend'] = 'memcache'
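# A minimal sketch (hypothetical override, not part of this commit): a
# single-node deployment could fall back to local-memory tokens from a role
# or environment file, e.g.
#   override_attributes(
#     'openstack' => { 'compute' => { 'consoleauth' => { 'token' => { 'backend' => nil } } } }
#   )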
# Keystone settings
default['openstack']['compute']['api']['auth_strategy'] = 'keystone'
@ -309,7 +328,7 @@ default['openstack']['compute']['api']['auth']['cache_dir'] = '/var/cache/nova/a
# Perform nova-conductor operations locally (boolean value)
default['openstack']['compute']['conductor']['use_local'] = 'False'
# nova-conductor will default to number of cpus
default['openstack']['compute']['conductor']['workers'] = nil
default['openstack']['compute']['conductor']['workers'] = [6, node['cpu']['total'].to_i].min
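# e.g. this yields 6 workers on a 16-core node and 4 workers on a 4-core node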
default['openstack']['compute']['network']['force_dhcp_release'] = true
@ -344,7 +363,7 @@ when 'fedora', 'rhel', 'suse' # :pragma-foodcritic: ~FC024 - won't fix this
'compute_vncproxy_consoleauth_service' => 'openstack-nova-consoleauth',
'libvirt_packages' => ['libvirt', 'dmidecode'],
'libvirt_service' => 'libvirtd',
'libvirt_ceph_packages' => ['ceph-common'],
'libvirt_ceph_packages' => [],
'dbus_service' => 'messagebus',
'compute_cert_packages' => ['openstack-nova-cert'],
'compute_cert_service' => 'openstack-nova-cert',


@ -55,7 +55,7 @@ service 'nova-api-metadata' do
supports status: true, restart: true
subscribes :restart, resources('template[/etc/nova/nova.conf]')
action :enable
action [:enable, :start]
end
identity_endpoint = endpoint 'identity-api'


@ -0,0 +1,104 @@
# encoding: UTF-8
#
# Cookbook Name:: openstack-compute
# Recipe:: libvirt_rbd
#
# Copyright 2014, x-ion GmbH
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
class ::Chef::Recipe # rubocop:disable Documentation
include ::Openstack
end
if node['openstack']['block-storage']['volume']['driver'] == 'cinder.volume.drivers.rbd.RBDDriver'
include_recipe 'ceph::_common'
include_recipe 'ceph::mon_install'
include_recipe 'ceph::conf'
cluster = 'ceph'
platform_options = node['openstack']['compute']['platform']
platform_options['libvirt_ceph_packages'].each do |pkg|
package pkg do
options platform_options['package_overrides']
action :upgrade
end
end
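# Descriptive note (inferred from the command and package names below):
# force-install the Ceph-patched qemu-kvm/qemu-img builds from the configured
# extras repository; the not_if guard skips the step when a ceph-patched qemu
# is already installed.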
execute "rpm -Uvh --force #{node['ceph']['rhel']['extras']['repository']}/qemu-kvm-0.12.1.2-2.415.el6.3ceph.x86_64.rpm #{node['ceph']['rhel']['extras']['repository']}/qemu-img-0.12.1.2-2.415.el6.3ceph.x86_64.rpm" do
not_if "rpm -qa | grep qemu | grep ceph"
end
secret_uuid = node['openstack']['block-storage']['rbd_secret_uuid']
rbd_user = node['openstack']['compute']['libvirt']['rbd']['rbd_user']
if mon_nodes.empty?
rbd_key = ""
elsif !mon_master['ceph'].has_key?('cinder-secret')
rbd_key = ""
else
rbd_key = mon_master['ceph']['cinder-secret']
end
template "/etc/ceph/ceph.client.#{rbd_user}.keyring" do
source 'ceph.client.keyring.erb'
cookbook 'openstack-common'
owner node['openstack']['block-storage']['user']
group node['openstack']['block-storage']['group']
mode '0644'
variables(
name: rbd_user,
key: rbd_key
)
end
require 'securerandom'
filename = SecureRandom.hex
template "/tmp/#{filename}.xml" do
source 'secret.xml.erb'
user 'root'
group 'root'
mode '700'
variables(
uuid: secret_uuid,
client_name: node['openstack']['compute']['libvirt']['rbd']['rbd_user']
)
not_if "virsh secret-list | grep #{secret_uuid}"
end
execute "virsh secret-define --file /tmp/#{filename}.xml" do
not_if "virsh secret-list | grep #{secret_uuid}"
end
# this will update the key if necessary
execute 'set libvirt secret' do
command "virsh secret-set-value --secret #{secret_uuid} --base64 #{rbd_key}"
notifies :restart, 'service[nova-compute-ceph]', :immediately
end
file "/tmp/#{filename}.xml" do
action :delete
end
service 'nova-compute-ceph' do
service_name platform_options['compute_compute_service']
supports status: true, restart: true
subscribes :restart, resources('template[/etc/nova/nova.conf]')
action :restart
end
end


@ -178,24 +178,14 @@ execute 'Deleting default libvirt network' do
only_if 'virsh net-list | grep -q default'
end
# use bios system-uuid as host uuid
ruby_block "set_libvirt_host_uuid" do
block do
# use bios system-uuid as host uuid
if node['openstack']['compute']['libvirt']['host_uuid'].nil?
cmd = Mixlib::ShellOut.new('dmidecode -s system-uuid').run_command
system_uuid = cmd.stdout.strip
invalid_uuid = ["00000000-0000-0000-0000-000000000000", \
"FFFFFFFF-FFFF-FFFF-FFFF-FFFFFFFFFFFF"]
if system_uuid.length.eql?(36) and \
!invalid_uuid.include?(system_uuid.upcase)
node.set['openstack']['compute']['libvirt']['host_uuid'] = system_uuid
end
# Host uuid
if node['openstack']['compute']['libvirt']['host_uuid'].nil?
ruby_block "set host uuid" do
block do
new_uuid = `uuidgen`.delete("\n")
node.set['openstack']['compute']['libvirt']['host_uuid'] = new_uuid
end
end
action :run
end
# TODO(breu): this section needs to be rewritten to support key provisioning


@ -75,7 +75,16 @@ elsif mq_service_type == 'qpid'
node['openstack']['mq']['compute']['qpid']['username'])
end
memcache_servers = memcached_servers.join ','
if node['openstack']['compute']['consoleauth']['token']['backend'].eql?('memcache')
memcache_servers = memcached_servers('os-ops-caching').join ','
# number of seconds to wait before sockets time out when the memcached server is down
# the default is 3; set it to 0.1 here
ruby_block "Set memcache socket timeout" do
block do
`sed -i "s/_SOCKET_TIMEOUT = 3/_SOCKET_TIMEOUT = 0.1/g" /usr/lib/python[0-9].[0-9]/site-packages/memcache.py`
end
end
end
# find the node attribute endpoint settings for the server holding a given role
identity_endpoint = endpoint 'identity-api'
@ -105,7 +114,7 @@ if node['openstack']['compute']['network']['service_type'] == 'neutron'
end
if node['openstack']['compute']['libvirt']['images_type'] == 'rbd'
rbd_secret_uuid = get_secret node['openstack']['compute']['libvirt']['rbd']['rbd_secret_name']
#rbd_secret_uuid = get_secret node['openstack']['compute']['libvirt']['rbd']['rbd_secret_name']
end
vmware_host_pass = get_secret node['openstack']['compute']['vmware']['secret_name']
@ -129,7 +138,7 @@ template '/etc/nova/nova.conf' do
xvpvncproxy_bind_port: xvpvnc_bind.port,
novncproxy_bind_host: novnc_bind.host,
novncproxy_bind_port: novnc_bind.port,
vncserver_listen: vnc_bind.host,
vncserver_listen: node['openstack']['compute']['vnc']['vncserver_listen'],
vncserver_proxyclient_address: vnc_bind.host,
memcache_servers: memcache_servers,
mq_service_type: mq_service_type,
@ -150,7 +159,7 @@ template '/etc/nova/nova.conf' do
compute_api_bind_port: compute_api_bind.port,
ec2_api_bind_ip: ec2_api_bind.host,
ec2_api_bind_port: ec2_api_bind.port,
rbd_secret_uuid: rbd_secret_uuid,
#rbd_secret_uuid: rbd_secret_uuid,
vmware_host_pass: vmware_host_pass
)
end


@ -367,7 +367,9 @@ max_client_requests = <%= node['openstack']['compute']['libvirt']['max_client_re
# NB This default all-zeros UUID will not work. Replace
# it with the output of the 'uuidgen' command and then
# uncomment this entry
#host_uuid = "00000000-0000-0000-0000-000000000000"
<% unless node['openstack']['compute']['libvirt']['host_uuid'].nil? %>
host_uuid = "<%= node['openstack']['compute']['libvirt']['host_uuid'] %>"
<% end %>
###################################################################
# Keepalive protocol:


@ -28,9 +28,13 @@ remove_unused_base_images=<%= node["openstack"]["compute"]["libvirt"]["remove_un
remove_unused_original_minimum_age_seconds=<%= node["openstack"]["compute"]["libvirt"]["remove_unused_original_minimum_age_seconds"] %>
# Options defined in nova.openstack.common.rpc
# Seconds to wait before a cast expires (TTL). Only supported
# # by impl_zmq. (integer value)
rpc_cast_timeout=<%= node['openstack']['compute']['rpc_cast_timeout'] %>
# # Seconds to wait for a response from a call. (integer value)
rpc_response_timeout=<%= node['openstack']['compute']['rpc_response_timeout'] %>
rpc_thread_pool_size=<%= node["openstack"]["compute"]["rpc_thread_pool_size"] %>
rpc_conn_pool_size=<%= node["openstack"]["compute"]["rpc_conn_pool_size"] %>
rpc_response_timeout=<%= node["openstack"]["compute"]["rpc_response_timeout"] %>
rpc_backend=<%= node["openstack"]["compute"]["rpc_backend"] %>
amqp_durable_queues=<%= node['openstack']['mq']['compute']['durable_queues'] %>
amqp_auto_delete=<%= node['openstack']['mq']['compute']['auto_delete'] %>
@ -79,10 +83,18 @@ scheduler_default_filters=<%= @scheduler_default_filters %>
default_availability_zone=<%= node["openstack"]["compute"]["config"]["availability_zone"] %>
default_schedule_zone=<%= node["openstack"]["compute"]["config"]["default_schedule_zone"] %>
storage_availability_zone=<%= node["openstack"]["compute"]["config"]["storage_availability_zone"] %>
sql_connection=<%= @sql_connection %>
sql_max_pool_size=100
sql_max_overflow=100
# New instances will be scheduled on a host chosen randomly
# from a subset of the N best hosts. This property defines the
# subset size that a host is chosen from. A value of 1 chooses
# the first host returned by the weighing functions. This
# value must be at least 1. Any value less than 1 will be
# ignored, and 1 will be used instead (integer value)
scheduler_host_subset_size=<%= node['openstack']['compute']['scheduler']['host_subset_size'] %>
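# worked example: with scheduler_host_subset_size=100 the scheduler picks one
# host at random from the 100 best-weighted candidates, which lowers the chance
# of several concurrent schedulers choosing the same host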
##### NETWORK #####
<% case node["openstack"]["compute"]["network"]["service_type"]
when "neutron" -%>
@ -95,7 +107,8 @@ neutron_url=http://<%= @network_endpoint.host %>:80
<% else -%>
neutron_url=http://<%= @network_endpoint.host %>:<%= @network_endpoint.port %>
<% end -%>
network_api_class=<%= node["openstack"]["compute"]["network"]["neutron"]["network_api_class"] %>
neutron_url_timeout=<%= node["openstack"]["compute"]["network"]["neutron"]["url_timeout"] %>
network_api_class=<%= node["openstack"]["compute"]["network"]["neutron"]["network_api_class"] %>
neutron_auth_strategy=<%= node["openstack"]["compute"]["network"]["neutron"]["auth_strategy"] %>
neutron_admin_tenant_name=<%= node["openstack"]["compute"]["network"]["neutron"]["admin_tenant_name"] %>
neutron_admin_username=<%= node["openstack"]["compute"]["network"]["neutron"]["admin_username"] %>
@ -127,6 +140,7 @@ auto_assign_floating_ip=<%= node["openstack"]["compute"]["network"]["auto_assign
##### GLANCE #####
image_service=nova.image.glance.GlanceImageService
glance_api_servers=<%= @glance_api_ipaddress %>:<%= @glance_api_port %>
glance_num_retries=<%= node['openstack']['compute']['glance_num_retries'] %>
##### COMPUTE #####
compute_driver=<%= node["openstack"]["compute"]["driver"] %>
@ -138,8 +152,9 @@ use_cow_images=<%= node["openstack"]["compute"]["use_cow_images"] %>
vif_plugging_is_fatal=<%= node["openstack"]["compute"]["vif_plugging_is_fatal"] %>
vif_plugging_timeout=<%= node["openstack"]["compute"]["vif_plugging_timeout"] %>
compute_manager=nova.compute.manager.ComputeManager
sql_connection=<%= @sql_connection %>
connection_type=libvirt
live_migration_retry_count=30
force_config_drive=true
##### NOTIFICATIONS #####
<% if node['openstack']['compute']['config']['notification_drivers'] %>
@ -458,17 +473,17 @@ inject_key=<%= node["openstack"]["compute"]["libvirt"]["libvirt_inject_key"] %>
# Migration target URI (any included "%s" is replaced with the
# migration target hostname) (string value)
#live_migration_uri=qemu+tcp://%s/system
live_migration_uri=qemu+tcp://%s/system
# Migration flags to be set for live migration (string value)
#live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER
live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER, VIR_MIGRATE_LIVE, VIR_MIGRATE_PERSIST_DEST
# Migration flags to be set for block migration (string value)
#block_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER, VIR_MIGRATE_NON_SHARED_INC
# Maximum bandwidth to be used during migration, in Mbps
# (integer value)
#live_migration_bandwidth=0
live_migration_bandwidth=0
# Snapshot image format (valid options are : raw, qcow2, vmdk,
# vdi). Defaults to same as source image (string value)
@ -577,5 +592,5 @@ images_rbd_ceph_conf=<%= node['openstack']['compute']['libvirt']['images_rbd_cep
# https://github.com/openstack/nova/blob/c15dff2e9978fe851c73e92ab7f9b46e27de81ba/nova/virt/libvirt/volume.py#L217-L229
rbd_user=<%= node['openstack']['compute']['libvirt']['rbd']['rbd_user'] %>
# The libvirt UUID of the secret for the rbd images (string value)
rbd_secret_uuid=<%= @rbd_secret_uuid %>
rbd_secret_uuid=<%= node['openstack']['block-storage']['rbd_secret_uuid'] %>
<% end -%>


@ -35,7 +35,7 @@ default['openstack']['dashboard']['keystone_default_role'] = '_member_'
default['openstack']['dashboard']['keystone_service_chef_role'] = 'keystone'
default['openstack']['dashboard']['server_hostname'] = nil
default['openstack']['dashboard']['use_ssl'] = true
default['openstack']['dashboard']['use_ssl'] = false
default['openstack']['dashboard']['ssl']['cert_url'] = nil
default['openstack']['dashboard']['ssl']['key_url'] = nil
# When using a remote certificate and key, the names of the actual installed certificate
@ -138,7 +138,7 @@ default['openstack']['dashboard']['static_path'] = "#{node['openstack']['dashboa
default['openstack']['dashboard']['stylesheet_path'] = '/usr/share/openstack-dashboard/openstack_dashboard/templates/_stylesheets.html'
default['openstack']['dashboard']['wsgi_path'] = node['openstack']['dashboard']['dash_path'] + '/wsgi/django.wsgi'
default['openstack']['dashboard']['wsgi_socket_prefix'] = nil
default['openstack']['dashboard']['session_backend'] = 'memcached'
default['openstack']['dashboard']['session_backend'] = 'signed_cookies'
default['openstack']['dashboard']['ssl_offload'] = false
default['openstack']['dashboard']['plugins'] = nil


@ -8,7 +8,7 @@ NameVirtualHost *:<%= node['openstack']['dashboard']['http_port'].to_i%>
<% if node["openstack"]["dashboard"]["server_hostname"] -%>
ServerName <%= node["openstack"]["dashboard"]["server_hostname"] %>
<% end -%>
<% if node["openstack"]["dashboard"]["use_ssl"] %>
<% if eval(node['openstack']['dashboard']['use_ssl']) -%>
RewriteEngine On
RewriteCond %{HTTPS} off
<% if node['openstack']['dashboard']['http_port'].to_i != 80 or node['openstack']['dashboard']['https_port'].to_i != 443 %>
@ -17,7 +17,6 @@ NameVirtualHost *:<%= node['openstack']['dashboard']['http_port'].to_i%>
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R]
<% end -%>
</VirtualHost>
<% unless node['apache']['listen_ports'].map(&:to_i).uniq.include?(node['openstack']['dashboard']['https_port'].to_i) %>
Listen *:<%= node['openstack']['dashboard']['https_port'].to_i%>
NameVirtualHost *:<%= node['openstack']['dashboard']['https_port'].to_i%>
@ -26,7 +25,7 @@ NameVirtualHost *:<%= node['openstack']['dashboard']['https_port'].to_i%>
<% if node["openstack"]["dashboard"]["server_hostname"] -%>
ServerName <%= node["openstack"]["dashboard"]["server_hostname"] %>
<% end -%>
<% end %>
<% end -%>
ServerAdmin <%= node["apache"]["contact"] %>
WSGIScriptAlias <%= node["openstack"]["dashboard"]["webroot"] %> <%= node["openstack"]["dashboard"]["wsgi_path"] %>
WSGIDaemonProcess dashboard user=<%= node['openstack']['dashboard']['horizon_user'] %> group=<%= node['openstack']['dashboard']['horizon_group'] %> processes=3 threads=10 python-path=<%= node["openstack"]["dashboard"]["dash_path"] %>
@ -54,11 +53,11 @@ NameVirtualHost *:<%= node['openstack']['dashboard']['https_port'].to_i%>
allow from all
</Directory>
<% if node["openstack"]["dashboard"]["use_ssl"] %>
<% if eval(node['openstack']['dashboard']['use_ssl']) -%>
SSLEngine on
SSLCertificateFile <%= @ssl_cert_file %>
SSLCertificateKeyFile <%= @ssl_key_file %>
<% end %>
<% end -%>
# Allow custom files to overlay the site (such as logo.png)
RewriteEngine On
@ -72,4 +71,4 @@ NameVirtualHost *:<%= node['openstack']['dashboard']['https_port'].to_i%>
<% unless node["openstack"]["dashboard"]["wsgi_socket_prefix"].nil? %>
WSGISocketPrefix <%= node["openstack"]["dashboard"]["wsgi_socket_prefix"] %>
<% end %>
<% end %>


@ -37,10 +37,10 @@ SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTOCOL', 'https')
# If Horizon is being served through SSL, then uncomment the following two
# settings to better secure the cookies from security exploits
<% if node["openstack"]["dashboard"]["use_ssl"] %>
<% if eval(node['openstack']['dashboard']['use_ssl']) -%>
CSRF_COOKIE_SECURE = <%= node["openstack"]["dashboard"]["csrf_cookie_secure"] ? "True" : "False" %>
SESSION_COOKIE_SECURE = <%= node["openstack"]["dashboard"]["session_cookie_secure"] ? "True" : "False" %>
<% end %>
<% end -%>
# Overrides for OpenStack API versions. Use this setting to force the
# OpenStack dashboard to use a specific API version for a given service API.
@ -133,6 +133,8 @@ CACHES = {
when "sql"
%>
SESSION_ENGINE = 'django.contrib.sessions.backends.db'
<% when "signed_cookies" %>
SESSION_ENGINE = 'django.contrib.sessions.backends.signed_cookies'
<% end %>
# Send email to the console by default


@ -48,8 +48,8 @@ default['openstack']['identity']['syslog']['config_facility'] = 'local2'
# RPC attributes
default['openstack']['identity']['control_exchange'] = 'openstack'
default['openstack']['identity']['rpc_thread_pool_size'] = 64
default['openstack']['identity']['rpc_conn_pool_size'] = 30
default['openstack']['identity']['rpc_thread_pool_size'] = 240
default['openstack']['identity']['rpc_conn_pool_size'] = 100
default['openstack']['identity']['rpc_response_timeout'] = 60
case node['openstack']['mq']['service_type']
when 'rabbitmq'
@ -88,7 +88,8 @@ default['openstack']['identity']['signing']['ca_password'] = nil
# These switches set the various drivers for the different Keystone components
default['openstack']['identity']['identity']['backend'] = 'sql'
default['openstack']['identity']['assignment']['backend'] = 'sql'
default['openstack']['identity']['token']['backend'] = 'sql'
# default['openstack']['identity']['token']['backend'] = 'sql'
default['openstack']['identity']['token']['backend'] = 'memcache'
default['openstack']['identity']['catalog']['backend'] = 'sql'
default['openstack']['identity']['policy']['backend'] = 'sql'


@ -30,7 +30,7 @@ rpc_backend=<%= node["openstack"]["identity"]["rpc_backend"] %>
<% end %>
<% end %>
<% if @memcache_servers -%>
<% if node['openstack']['identity']['token']['backend'].eql?('memcache') -%>
[memcache]
servers = <%= @memcache_servers %>
@ -39,7 +39,7 @@ servers = <%= @memcache_servers %>
connection = <%= @sql_connection %>
idle_timeout = 200
min_pool_size = 5
max_pool_size = 10
max_pool_size = 100
pool_timeout = 200
[ldap]
@ -51,7 +51,7 @@ password = <%= @ldap["password"] %>
# password = None
<% end -%>
<% if @ldap["use_tls"] -%>
use_tls = True
# use_tls = True
<% if @ldap["tls_cacertfile"] -%>
tls_cacertfile = <%= @ldap["tls_cacertfile"] %>
<% elsif @ldap["tls_cacertdir"] -%>
@ -71,10 +71,10 @@ tls_req_cert = <%= @ldap["tls_req_cert"] %>
suffix = <%= @ldap["suffix"] %>
use_dumb_member = <%= @ldap["use_dumb_member"] %>
allow_subtree_delete = <%= @ldap["allow_subtree_delete"] %>
dumb_member = <%= @ldap["dumb_member"] %>
page_size = <%= @ldap["page_size"] %>
alias_dereferencing = <%= @ldap["alias_dereferencing"] %>
query_scope = <%= @ldap["query_scope"] %>
# dumb_member = <%= @ldap["dumb_member"] %>
# page_size = <%= @ldap["page_size"] %>
# alias_dereferencing = <%= @ldap["alias_dereferencing"] %>
# query_scope = <%= @ldap["query_scope"] %>
<% if @ldap["user_tree_dn"] -%>
user_tree_dn = <%= @ldap["user_tree_dn"] %>
@ -91,7 +91,7 @@ user_id_attribute = <%= @ldap["user_id_attribute"] %>
user_name_attribute = <%= @ldap["user_name_attribute"] %>
user_mail_attribute = <%= @ldap["user_mail_attribute"] %>
user_pass_attribute = <%= @ldap["user_pass_attribute"] %>
user_enabled_attribute = <%= @ldap["user_enabled_attribute"] %>
# user_enabled_attribute = <%= @ldap["user_enabled_attribute"] %>
user_domain_id_attribute = <%= @ldap["user_domain_id_attribute"] %>
user_enabled_mask = <%= @ldap["user_enabled_mask"] %>
user_enabled_default = <%= @ldap["user_enabled_default"] %>
@ -106,6 +106,8 @@ user_enabled_emulation_dn = <%= @ldap["user_enabled_emulation_dn"] %>
# user_enabled_emulation_dn =
<% end -%>
<% if @ldap["tenant_tree_dn"] -%>
tenant_tree_dn = <%= @ldap["tenant_tree_dn"] %>
<% else -%>
@ -116,22 +118,22 @@ tenant_filter = <%= @ldap["tenant_filter"] %>
<% else -%>
# tenant_filter =
<% end -%>
tenant_objectclass = <%= @ldap["tenant_objectclass"] %>
tenant_id_attribute = <%= @ldap["tenant_id_attribute"] %>
tenant_member_attribute = <%= @ldap["tenant_member_attribute"] %>
tenant_name_attribute = <%= @ldap["tenant_name_attribute"] %>
tenant_desc_attribute = <%= @ldap["tenant_desc_attribute"] %>
tenant_enabled_attribute = <%= @ldap["tenant_enabled_attribute"] %>
tenant_domain_id_attribute = <%= @ldap["tenant_domain_id_attribute"] %>
#tenant_objectclass = <%= @ldap["tenant_objectclass"] %>
#tenant_id_attribute = <%= @ldap["tenant_id_attribute"] %>
#tenant_member_attribute = <%= @ldap["tenant_member_attribute"] %>
#tenant_name_attribute = <%= @ldap["tenant_name_attribute"] %>
#tenant_desc_attribute = <%= @ldap["tenant_desc_attribute"] %>
#tenant_enabled_attribute = <%= @ldap["tenant_enabled_attribute"] %>
#tenant_domain_id_attribute = <%= @ldap["tenant_domain_id_attribute"] %>
<% if @ldap["tenant_attribute_ignore"] -%>
tenant_attribute_ignore = <%= @ldap["tenant_attribute_ignore"] %>
<% else -%>
# tenant_attribute_ignore =
<% end -%>
tenant_allow_create = <%= @ldap["tenant_allow_create"] %>
tenant_allow_update = <%= @ldap["tenant_allow_update"] %>
tenant_allow_delete = <%= @ldap["tenant_allow_delete"] %>
tenant_enabled_emulation = <%= @ldap["tenant_enabled_emulation"] %>
#tenant_allow_create = <%= @ldap["tenant_allow_create"] %>
#tenant_allow_update = <%= @ldap["tenant_allow_update"] %>
#tenant_allow_delete = <%= @ldap["tenant_allow_delete"] %>
#tenant_enabled_emulation = <%= @ldap["tenant_enabled_emulation"] %>
<% if @ldap["tenant_enabled_emulation_dn"] -%>
tenant_enabled_emulation_dn = <%= @ldap["tenant_enabled_emulation_dn"] %>
<% else -%>
@ -148,18 +150,18 @@ role_filter = <%= @ldap["role_filter"] %>
<% else -%>
# role_filter =
<% end -%>
role_objectclass = <%= @ldap["role_objectclass"] %>
role_id_attribute = <%= @ldap["role_id_attribute"] %>
role_name_attribute = <%= @ldap["role_name_attribute"] %>
role_member_attribute = <%= @ldap["role_member_attribute"] %>
#role_objectclass = <%= @ldap["role_objectclass"] %>
#role_id_attribute = <%= @ldap["role_id_attribute"] %>
#role_name_attribute = <%= @ldap["role_name_attribute"] %>
#role_member_attribute = <%= @ldap["role_member_attribute"] %>
<% if @ldap["role_attribute_ignore"] -%>
role_attribute_ignore = <%= @ldap["role_attribute_ignore"] %>
<% else -%>
# role_attribute_ignore =
<% end -%>
role_allow_create = <%= @ldap["role_allow_create"] %>
role_allow_update = <%= @ldap["role_allow_update"] %>
role_allow_delete = <%= @ldap["role_allow_delete"] %>
#role_allow_create = <%= @ldap["role_allow_create"] %>
#role_allow_update = <%= @ldap["role_allow_update"] %>
#role_allow_delete = <%= @ldap["role_allow_delete"] %>
<% if @ldap["group_tree_dn"] -%>
group_tree_dn = <%= @ldap["group_tree_dn"] %>
@ -171,8 +173,8 @@ group_filter = <%= @ldap["group_filter"] %>
<% else -%>
# group_filter =
<% end -%>
group_objectclass = <%= @ldap["group_objectclass"] %>
group_id_attribute = <%= @ldap["group_id_attribute"] %>
#group_objectclass = <%= @ldap["group_objectclass"] %>
#group_id_attribute = <%= @ldap["group_id_attribute"] %>
group_name_attribute = <%= @ldap["group_name_attribute"] %>
group_member_attribute = <%= @ldap["group_member_attribute"] %>
group_desc_attribute = <%= @ldap["group_desc_attribute"] %>


@ -99,7 +99,7 @@ default['openstack']['image']['upload_images'] = ['cirros']
default['openstack']['image']['upload_image']['precise'] = 'http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img'
default['openstack']['image']['upload_image']['oneiric'] = 'http://cloud-images.ubuntu.com/oneiric/current/oneiric-server-cloudimg-amd64-disk1.img'
default['openstack']['image']['upload_image']['natty'] = 'http://cloud-images.ubuntu.com/natty/current/natty-server-cloudimg-amd64-disk1.img'
default['openstack']['image']['upload_image']['cirros'] = 'http://download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img'
default['openstack']['image']['upload_image']['cirros'] = 'http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img'
# more images available at https://github.com/rackerjoe/oz-image-build
default['openstack']['image']['upload_image']['centos'] = 'http://c250663.r63.cf1.rackcdn.com/centos60_x86_64.qcow2'


@ -16,3 +16,4 @@ end
depends 'openstack-common', '~> 9.0'
depends 'openstack-identity', '~> 9.0'
depends 'ceph', '>= 0.2.1'


@ -60,29 +60,29 @@ if node['openstack']['image']['api']['default_store'] == 'swift'
end
elsif node['openstack']['image']['api']['default_store'] == 'rbd'
rbd_user = node['openstack']['image']['api']['rbd']['rbd_store_user']
rbd_key = get_password 'service', node['openstack']['image']['api']['rbd']['key_name']
include_recipe 'openstack-common::ceph_client'
platform_options['ceph_packages'].each do |pkg|
package pkg do
options platform_options['package_overrides']
action :upgrade
end
end
template "/etc/ceph/ceph.client.#{rbd_user}.keyring" do
source 'ceph.client.keyring.erb'
cookbook 'openstack-common'
owner node['openstack']['image']['user']
group node['openstack']['image']['group']
mode 00600
variables(
name: rbd_user,
key: rbd_key
)
end
# rbd_user = node['openstack']['image']['api']['rbd']['rbd_store_user']
# rbd_key = get_password 'service', node['openstack']['image']['api']['rbd']['key_name']
#
# include_recipe 'openstack-common::ceph_client'
#
# platform_options['ceph_packages'].each do |pkg|
# package pkg do
# options platform_options['package_overrides']
# action :upgrade
# end
# end
#
# template "/etc/ceph/ceph.client.#{rbd_user}.keyring" do
# source 'ceph.client.keyring.erb'
# cookbook 'openstack-common'
# owner node['openstack']['image']['user']
# group node['openstack']['image']['group']
# mode 00600
# variables(
# name: rbd_user,
# key: rbd_key
# )
# end
end
service 'glance-api' do


@ -0,0 +1,45 @@
# attention:
# this recipe should run only after OpenStack and Ceph are up and working correctly!
#
if node['openstack']['image']['api']['default_store'] == 'rbd'
include_recipe 'ceph::_common'
include_recipe 'ceph::mon_install'
include_recipe 'ceph::conf'
platform_options = node['openstack']['image']['platform']
cluster = 'ceph'
class ::Chef::Recipe # rubocop:disable Documentation
include ::Openstack
end
rbd_user = node['openstack']['image']['api']['rbd']['rbd_store_user']
if mon_nodes.empty?
rbd_key = ""
elsif !mon_master['ceph'].has_key?('glance-secret')
rbd_key = ""
else
rbd_key = mon_master['ceph']['glance-secret']
end
template "/etc/ceph/ceph.client.#{rbd_user}.keyring" do
source 'ceph.client.keyring.erb'
cookbook 'openstack-common'
owner node['openstack']['image']['user']
group node['openstack']['image']['group']
mode 00600
variables(
name: rbd_user,
key: rbd_key
)
end
service 'glance-api-ceph' do
service_name platform_options['image_api_service']
supports status: true, restart: true
action :enable
subscribes :restart, resources('template[/etc/ceph/ceph.conf]')
end
end


@ -69,6 +69,12 @@ default['openstack']['network']['api']['auth']['cache_dir'] = '/var/cache/neutro
# The auth api version used to interact with identity service.
default['openstack']['network']['api']['auth']['version'] = node['openstack']['api']['auth']['version']
# Number of separate worker processes to spawn.
default['openstack']['network']['api_workers'] = 8
# Number of separate RPC worker processes to spawn.
default['openstack']['network']['rpc_workers'] = 8
# logging attribute
default['openstack']['network']['log_dir'] = '/var/log/neutron'
default['openstack']['network']['syslog']['use'] = false
@ -95,19 +101,19 @@ default['openstack']['network']['quota']['items'] = 'network,subnet,port'
default['openstack']['network']['quota']['default'] = -1
# number of networks allowed per tenant, and minus means unlimited
default['openstack']['network']['quota']['network'] = 10
default['openstack']['network']['quota']['network'] = 100
# number of subnets allowed per tenant, and minus means unlimited
default['openstack']['network']['quota']['subnet'] = 10
default['openstack']['network']['quota']['subnet'] = 100
# number of ports allowed per tenant, and minus means unlimited
default['openstack']['network']['quota']['port'] = 50
default['openstack']['network']['quota']['port'] = 8000
# number of security groups allowed per tenant, and minus means unlimited
default['openstack']['network']['quota']['security_group'] = 10
default['openstack']['network']['quota']['security_group'] = 1000
# number of security group rules allowed per tenant, and minus means unlimited
default['openstack']['network']['quota']['security_group_rule'] = 100
default['openstack']['network']['quota']['security_group_rule'] = 1000
# Whether or not we want to disable offloading
# on all the NIC interfaces (currently only supports
@ -174,7 +180,7 @@ default['openstack']['network']['dhcp_driver'] = 'neutron.agent.linux.dhcp.Dnsma
# you must have kernel build with CONFIG_NET_NS=y and
# iproute2 package that supports namespaces.
default['openstack']['network']['use_namespaces'] = 'True'
default['openstack']['network']['allow_overlapping_ips'] = 'False'
default['openstack']['network']['allow_overlapping_ips'] = 'True'
# use neutron root wrap
default['openstack']['network']['use_rootwrap'] = true
@ -187,9 +193,10 @@ default['openstack']['network']['notification_driver'] = 'neutron.openstack.comm
default['openstack']['network']['control_exchange'] = node['openstack']['mq']['network']['control_exchange']
# Common rpc definitions
default['openstack']['network']['rpc_thread_pool_size'] = 64
default['openstack']['network']['rpc_conn_pool_size'] = 30
default['openstack']['network']['rpc_response_timeout'] = 60
default['openstack']['network']['rpc_thread_pool_size'] = 240
default['openstack']['network']['rpc_conn_pool_size'] = 100
default['openstack']['network']['rpc_response_timeout'] = 300
default['openstack']['network']['rpc_cast_timeout'] = 300
# ======== Neutron Nova interactions ==========
# Send notification to nova when port status is active.
@ -453,6 +460,7 @@ default['openstack']['network']['openvswitch']['enable_security_group'] = 'True'
default['openstack']['network']['openvswitch']['host'] = '127.0.0.1'
default['openstack']['network']['openvswitch']['bind_interface'] = nil
# The newest version of OVS which comes with 12.04 Precise is 1.4.0
# Which is legacy. Should we compile a newer version from source?
# If so, set ['openstack']['network']['openvswitch']['use_source_version']


@ -170,7 +170,7 @@ ruby_block 'query service tenant uuid' do
return false if tenant_id.nil?
# Chef::Log.error('service tenant UUID for nova_admin_tenant_id not found.') if tenant_id.nil?
node.set['openstack']['network']['nova']['admin_tenant_id'] = tenant_id
# rescue RuntimeError => e
rescue RuntimeError => e
# Chef::Log.error("Could not query service tenant UUID for nova_admin_tenant_id. Error was #{e.message}")
end
end


@ -107,7 +107,8 @@ rpc_conn_pool_size = <%= node['openstack']['network']['rpc_conn_pool_size'] %>
# Seconds to wait for a response from call or multicall
rpc_response_timeout = <%= node['openstack']['network']['rpc_response_timeout'] %>
# Seconds to wait before a cast expires (TTL). Only supported by impl_zmq.
# rpc_cast_timeout = 30
rpc_cast_timeout = <%= node['openstack']['network']['rpc_cast_timeout'] %>
# Modules of exceptions that are permitted to be recreated
# upon receiving exception data from an rpc call.
# allowed_rpc_exception_modules = neutron.openstack.common.exception, nova.exception
@ -246,6 +247,18 @@ router_scheduler_driver = <%= node["openstack"]["network"]["l3"]["scheduler"] %>
# =========== end of items for agent scheduler extension =====
# =========== WSGI parameters related to the API server ==============
# Number of separate worker processes to spawn. The default, 0, runs the
# worker thread in the current process. Greater than 0 launches that number of
# child processes as workers. The parent process manages them.
api_workers = <%= node['openstack']['network']['api_workers'] %>
# Number of separate RPC worker processes to spawn. The default, 0, runs the
# worker thread in the current process. Greater than 0 launches that number of
# child processes as RPC workers. The parent process manages them.
# This feature is experimental until issues are addressed and testing has been
# enabled for various plugins for compatibility.
rpc_workers = <%= node['openstack']['network']['rpc_workers'] %>
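# e.g. with the cookbook defaults of 8 for both settings above, neutron-server
# forks 8 API worker processes and 8 RPC worker processes in addition to the
# managing parent process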
# Sets the value of TCP_KEEPIDLE in seconds to use for each server socket when
# starting API server. Not supported on OS X.
#tcp_keepidle = 600


@ -0,0 +1,100 @@
# encoding: UTF-8
#
# Cookbook Name:: openstack-object-storage
# Recipe:: swift-config-ceph
#
# Copyright 2014, Liucheng.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# create the swift (object-store) endpoint that points at the Ceph radosgw
require 'uri'
class ::Chef::Recipe # rubocop:disable Documentation
include ::Openstack
end
identity_admin_endpoint = endpoint 'identity-admin'
token = get_secret 'openstack_identity_bootstrap_token'
auth_url = ::URI.decode identity_admin_endpoint.to_s
swift_endpoint = "http://#{node['ceph']['radosgw domain']}/swift/v1"
service_pass = get_password 'service', 'openstack-object-storage'
service_tenant_name = 'service'
service_user = 'swift'
service_role = 'admin'
region = 'RegionOne'
# Register Object Storage Service
openstack_identity_register 'Register Object Storage Service' do
auth_uri auth_url
bootstrap_token token
service_name 'swift'
service_type 'object-store'
service_description 'Object Storage Service'
action :create_service
end
# Register Object Storage Endpoint
openstack_identity_register 'Register Object Storage Endpoint' do
auth_uri auth_url
bootstrap_token token
service_type 'object-store'
endpoint_region region
endpoint_adminurl swift_endpoint
endpoint_internalurl swift_endpoint
endpoint_publicurl swift_endpoint
action :create_endpoint
end
# Register Service Tenant
openstack_identity_register 'Register Service Tenant' do
auth_uri auth_url
bootstrap_token token
tenant_name service_tenant_name
tenant_description 'Service Tenant'
tenant_enabled true # Not required as this is the default
action :create_tenant
end
# Register Service User
openstack_identity_register "Register #{service_user} User" do
auth_uri auth_url
bootstrap_token token
tenant_name service_tenant_name
user_name service_user
user_pass service_pass
# String until https://review.openstack.org/#/c/29498/ merged
user_enabled true
action :create_user
end
## Grant Admin role to Service User for Service Tenant ##
openstack_identity_register "Grant '#{service_role}' Role to #{service_user} User for #{service_tenant_name} Tenant" do
auth_uri auth_url
bootstrap_token token
tenant_name service_tenant_name
user_name service_user
role_name service_role
action :grant_role
end
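# After this recipe converges, the keystone catalog advertises the object-store
# endpoint defined above (http://<radosgw domain>/swift/v1) for the RegionOne
# region, with the swift service user granted the admin role on the service tenant.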


@ -0,0 +1,26 @@
# encoding: UTF-8
#
# Cookbook Name:: openstack-object-storage
# Recipe:: swift-config-ceph
#
# Copyright 2014, Liucheng.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# patch python-swiftclient so it no longer lower-cases request headers
execute 'modify swiftclient' do
command "sed -i 's/header = header.lower()/#header = header.lower()/g' /usr/lib/python2.6/site-packages/swiftclient/client.py"
not_if "grep '#header = header.lower()' /usr/lib/python2.6/site-packages/swiftclient/client.py"
end


@ -64,8 +64,8 @@ default['openstack']['orchestration']['syslog']['facility'] = 'LOG_LOCAL2'
default['openstack']['orchestration']['syslog']['config_facility'] = 'local2'
# Common rpc definitions
default['openstack']['orchestration']['rpc_thread_pool_size'] = 64
default['openstack']['orchestration']['rpc_conn_pool_size'] = 30
default['openstack']['orchestration']['rpc_thread_pool_size'] = 240
default['openstack']['orchestration']['rpc_conn_pool_size'] = 100
default['openstack']['orchestration']['rpc_response_timeout'] = 60
# Notification definitions


@ -33,7 +33,9 @@ when 'debian'
package 'util-linux'
if node['rabbitmq']['use_distro_version']
package 'rabbitmq-server'
package 'rabbitmq-server' do
action :upgrade
end
else
# we need to download the package
deb_package = "https://www.rabbitmq.com/releases/rabbitmq-server/v#{node['rabbitmq']['version']}/rabbitmq-server_#{node['rabbitmq']['version']}-1_all.deb"
