Removed the split setup

Change-Id: I0caa1a5e8f46eb9b0edd735eca1e5b4d692d2d65
Tim Kuhlman 2014-10-06 13:35:22 -06:00
parent 0f28ca680e
commit 6c6e13d4ef
22 changed files with 6 additions and 628 deletions

View File

@@ -15,7 +15,6 @@ cookbook 'monasca_persister', git: 'https://github.com/stackforge/cookbook-monas
cookbook 'monasca_schema', git: 'https://github.com/stackforge/cookbook-monasca-schema.git'
cookbook 'monasca_thresh', git: 'https://github.com/stackforge/cookbook-monasca-thresh.git'
cookbook 'storm', git: 'https://github.com/tkuhlman/storm'
cookbook 'vertica', git: 'https://github.com/hpcloud-mon/cookbooks-vertica'
cookbook 'zookeeper', git: 'https://github.com/hpcloud-mon/cookbooks-zookeeper'
# Community cookbooks

View File

@@ -23,7 +23,6 @@
- [vagrant-cachier](#vagrant-cachier)
- [Cookbook Development](#cookbook-development)
- [Running behind a Web Proxy](#running-behind-a-web-proxy)
- [Vertica](#vertica)
- [Alternate Vagrant Configurations](#alternate-vagrant-configurations)
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
@@ -154,29 +153,13 @@ VM to use them also.
vagrant plugin install vagrant-proxyconf
```
# Vertica
Vertica is supported as an alternative to InfluxDB; this is especially useful for large deployments.
Before use, Vertica must be downloaded from the [Vertica site](https://my.vertica.com/). Download these packages and place them in the root of this repository:
- `vertica_7.0.1-0_amd64.deb`
- `vertica-r-lang_7.0.1-0_amd64.deb`
The `vertica::console` recipe is not enabled by default, but if it is added, this package is also needed:
- `vertica-console_7.0.1-0_amd64.deb`
After the Vertica packages are installed, the configuration must be changed to run Vertica. Specifically, besides starting Vertica, the data bags
for monasca_api and monasca_persister need to be updated so these services use Vertica rather than InfluxDB.
The alternative split setup is configured for running Vertica.
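For example, the `mon_api` data bag selects Vertica via its `database-configuration` section, abridged below (the persister uses the analogous `database_configuration`/`database_type` keys):

```json
{
  "id": "mon_api",
  "database-configuration": {
    "database-type": "vertica"
  },
  "vertica": {
    "dbname": "mon",
    "hostname": "192.168.10.8"
  }
}
```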
# Alternate Vagrant Configurations
To run any of these alternate configs, simply run the Vagrant commands from within the subdir. Note that the Vertica debs must be _copied_
(not symlinked) into the subdir as well. See the README.md in the subdir for more details.
To run any of these alternate configs, simply run the Vagrant commands from within the subdir.
- `split` subdir - The various monitoring components split into their own VMs. The split setup runs Vertica by default rather than influxdb.
- `ds-build` subdir - This is used for building a new devstack server image. It does not typically need to be run.
In the past, other alternative setups worked, including running mini-mon in HP Public Cloud and scripts for putting it on bare metal. These are no
longer supported.
Previously, the split directory provided an alternative setup with each service split into its own VM and using
Vertica rather than InfluxDB. It was removed simply because it was not being actively maintained as changes occurred. It is still possible
to split up the services and to use Vertica; both are done in test environments and production deployments, but doing so is beyond
the scope of this development environment. Additionally, other alternative setups, including running mini-mon in HP Public Cloud
and scripts for putting it on bare metal, are also no longer supported.

View File

@@ -1,35 +0,0 @@
The Chef and Vagrant configuration here runs mini-mon split into 6 different VMs. This split demonstrates and provides a simple test of how the
monitoring system can scale, but it is also useful for some development scenarios.
# Using the split mini-mon
- Your home dir is synced to `/vagrant_home` on each vm
- VMs created:
  - `api` at `192.168.10.4`
  - `kafka` at `192.168.10.10` - monasca-notification runs on this box also
  - `mysql` at `192.168.10.6`
  - `persister` at `192.168.10.12`
  - `thresh` at `192.168.10.14`
  - `vertica` at `192.168.10.8`
    - The management console is at https://192.168.10.8:5450
  - `devstack` at `192.168.10.5`
    - The web interface is at http://192.168.10.5
    - username `admin`, password `admin`
- Run `vagrant help` for more info
- Run `vagrant ssh <vm name>` to login to a particular vm
  - Can also run `ssh vagrant@<ip address>` to login
    - password is `vagrant`
## Start mini-mon
From within this directory, run this simple script, which brings up all the VMs in the proper order.
If desired, the standard Vagrant commands can also be used.
```
bin/vup
```
## Halt mini-mon
In some cases halting mini-mon can leave certain VMs in an odd state; to avoid this, a script halts the boxes in the
correct order:
```
bin/vhalt
```

split/Vagrantfile vendored
View File

@@ -1,135 +0,0 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :

# Set working dir to root of repo
Dir.chdir ".."

VAGRANTFILE_API_VERSION = "2" # Vagrantfile API/syntax version. Don't touch unless you know what you're doing!

unless Vagrant.has_plugin?("vagrant-berkshelf")
  raise "The needed plugin vagrant-berkshelf is not available.
Install it by calling 'vagrant plugin install vagrant-berkshelf'."
end

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  # Settings for all vms
  config.berkshelf.enabled = true

  if Vagrant.has_plugin?("vagrant-cachier")
    config.cache.scope = :box
  end

  # Handle local proxy settings
  if Vagrant.has_plugin?("vagrant-proxyconf")
    if ENV["http_proxy"]
      config.proxy.http = ENV["http_proxy"]
    end
    if ENV["https_proxy"]
      config.proxy.https = ENV["https_proxy"]
    end
    if ENV["no_proxy"]
      config.proxy.no_proxy = ENV["no_proxy"]
    end
  end

  config.vm.synced_folder "~/", "/vagrant_home"

  config.vm.provider "virtualbox" do |vb|
    vb.customize ["modifyvm", :id, "--memory", "768"]
  end

  # VM-specific settings; these machines come up in the order they are specified.
  config.vm.define "mysql" do |mysql|
    mysql.vm.hostname = 'mysql'
    mysql.vm.box = "kuhlmant/precise64_chef11"
    mysql.vm.network :private_network, ip: "192.168.10.6"
    mysql.vm.provision :chef_solo do |chef|
      chef.roles_path = "roles"
      chef.data_bags_path = "data_bags"
      chef.add_role "MySQL"
    end
    mysql.vm.network "forwarded_port", guest: 3306, host: 43305
  end

  config.vm.define "devstack" do |devstack|
    devstack.vm.hostname = 'devstack'
    devstack.vm.box = "monasca/devstack"
    devstack.vm.network :private_network, ip: "192.168.10.5"
    devstack.vm.provider "virtualbox" do |vb|
      vb.memory = 5280
      vb.cpus = 4
    end
    devstack.vm.provision :chef_solo do |chef|
      chef.roles_path = "roles"
      chef.data_bags_path = "data_bags"
      chef.add_role "Devstack"
      chef.arguments = '--force-formatter'
    end
  end

  config.vm.define "kafka" do |kafka|
    kafka.vm.hostname = 'kafka'
    kafka.vm.box = "kuhlmant/precise64_chef11"
    kafka.vm.network :private_network, ip: "192.168.10.10"
    kafka.vm.provision :chef_solo do |chef|
      chef.roles_path = "roles"
      chef.data_bags_path = "data_bags"
      chef.add_role "Kafka"
    end
    kafka.vm.provider "virtualbox" do |vb|
      vb.memory = 1024
    end
  end

  config.vm.define "vertica" do |vertica|
    vertica.vm.hostname = 'vertica'
    vertica.vm.box = "kuhlmant/precise64_chef11"
    vertica.vm.network :private_network, ip: "192.168.10.8"
    vertica.vm.provision :chef_solo do |chef|
      chef.roles_path = "roles"
      chef.data_bags_path = "data_bags"
      chef.add_role "Vertica"
    end
    vertica.vm.provider "virtualbox" do |vb|
      vb.memory = 2048 # Vertica is pretty strict about its minimum
    end
  end

  config.vm.define "api" do |api|
    api.vm.hostname = 'api'
    api.vm.box = "kuhlmant/precise64_chef11"
    api.vm.network :private_network, ip: "192.168.10.4"
    api.vm.provision :chef_solo do |chef|
      chef.roles_path = "roles"
      chef.data_bags_path = "data_bags"
      chef.add_role "Api"
    end
  end

  config.vm.define "persister" do |persister|
    persister.vm.hostname = 'persister'
    persister.vm.box = "kuhlmant/precise64_chef11"
    persister.vm.network :private_network, ip: "192.168.10.12"
    persister.vm.provision :chef_solo do |chef|
      chef.roles_path = "roles"
      chef.data_bags_path = "data_bags"
      chef.add_role "Persister"
    end
    persister.vm.provider "virtualbox" do |vb|
      vb.memory = 1024
    end
    persister.vm.network "forwarded_port", guest: 8091, host: 8091
  end

  config.vm.define "thresh" do |thresh|
    thresh.vm.hostname = 'thresh'
    thresh.vm.box = "kuhlmant/precise64_chef11"
    thresh.vm.network :private_network, ip: "192.168.10.14"
    thresh.vm.provision :chef_solo do |chef|
      chef.roles_path = "roles"
      chef.data_bags_path = "data_bags"
      chef.add_role "Thresh"
    end
  end
end

View File

@@ -1,5 +0,0 @@
#!/bin/sh -x
#
# Halt the entire infrastructure with dependencies considered
vagrant halt thresh persister api vertica kafka mysql

View File

@@ -1,7 +0,0 @@
#!/bin/sh -x
#
# Brings up the entire infrastructure as fast as possible, with dependencies considered
# Though mon_notification depends on mysql, if kafka is up first the daemon will just restart until mysql is available
vagrant up --parallel mysql kafka vertica
vagrant up --parallel api persister thresh

View File

@@ -1,14 +0,0 @@
{
  "id": "agent_plugin_config",
  "mysql": {
    "user": "root",
    "password": "pass"
  },
  "rabbitmq": {
    "user": "guest",
    "password": "pass",
    "nodes": "rabbit@devstack",
    "queues": "conductor",
    "exchanges": "nova,cinder,ceilometer,glance,keystone,neutron,heat,ironic,openstack"
  }
}

View File

@@ -1,21 +0,0 @@
{
  "id": "mon_api",
  "api_region": "useast",
  "database-configuration": {
    "database-type": "vertica"
  },
  "vertica": {
    "dbname": "mon",
    "hostname": "192.168.10.8"
  },
  "zookeeper": {
    "hostname": "192.168.10.10"
  },
  "mysql": {
    "hostname": "192.168.10.6",
    "schema": "mon"
  },
  "kafka": {
    "hostname": "192.168.10.10"
  }
}

View File

@@ -1,22 +0,0 @@
{
  "id": "mon_credentials",
  "middleware": {
    "keystore_password": "changeit",
    "serverVip": "192.168.10.5",
    "truststore_password": "changeit",
    "keystore_file": "hpmiddleware-keystore-production.jks",
    "adminToken": "ADMIN"
  },
  "mysql": {
    "hostname": "192.168.10.6",
    "username": "monapi",
    "password": "password",
    "schema": "mon"
  },
  "vertica": {
    "hostname": "192.168.10.8",
    "username": "mon_api",
    "password": "password",
    "schema": "mon"
  }
}

View File

@@ -1,9 +0,0 @@
{
  "id": "monasca_agent",
  "keystone_url": "http://192.168.10.5:35357/v3",
  "username": "monasca-agent",
  "password": "password",
  "project_name": "mini-mon",
  "monasca_api_url": "http://192.168.10.4:8080/v2.0",
  "service": "monitoring"
}

View File

@@ -1,24 +0,0 @@
{
  "id": "hosts",
  "kafka": {
    "url": "192.168.10.10",
    "alarm_topic": "alarm-state-transitions",
    "notification_topic": "alarm-notifications"
  },
  "mysql": {
    "url": "192.168.10.6",
    "user": "notification",
    "password": "password",
    "database": "mon"
  },
  "smtp": {
    "url": "localhost",
    "user": "",
    "password": "",
    "timeout": 60,
    "from_addr": "hpcs.mon@hp.com"
  },
  "zookeeper": {
    "url": "localhost"
  }
}

View File

@@ -1,7 +0,0 @@
{
  "id": "credentials",
  "vertica": {
    "user": "dbadmin",
    "password": "password"
  }
}

View File

@@ -1,32 +0,0 @@
{
  "id": "monasca_persister",
  "alarm_history": {
    "topic": "alarm-state-transitions",
    "num_threads": "1",
    "batch_size": "1000",
    "maxBatchTime": "15"
  },
  "metrics": {
    "topic": "metrics",
    "num_threads": "2",
    "batch_size": "10000",
    "maxBatchTime": "30"
  },
  "kafka": {
    "group_id": "1",
    "consumer_id": 1
  },
  "database_configuration": {
    "database_type": "vertica"
  },
  "vertica_metric_repository_config": {
    "max_cache_size": "2000000"
  },
  "vertica": {
    "dbname": "mon",
    "hostname": "192.168.10.8"
  },
  "zookeeper": {
    "hostname": "192.168.10.10"
  }
}

View File

@@ -1,22 +0,0 @@
{
  "id": "monasca_thresh",
  "kafka": {
    "metric": {
      "group": "thresh-metric",
      "topic": "metrics"
    },
    "event": {
      "group": "thresh-event",
      "host": "192.168.10.10:9092",
      "consumer_topic": "events",
      "producer_topic": "alarm-state-transitions"
    }
  },
  "mysql": {
    "db": "mon",
    "host": "192.168.10.6:3306"
  },
  "zookeeper": {
    "host": "192.168.10.10:2181"
  }
}

View File

@@ -1,39 +0,0 @@
{
  "name": "Api",
  "description": "Sets up the Monitoring Api",
  "json_class": "Chef::Role",
  "default_attributes": {
    "monasca_agent": {
      "plugin": {
        "host_alive": {
          "init_config": {
            "ssh_port": 22,
            "ssh_timeout": 0.5,
            "ping_timeout": 1
          },
          "instances": {
            "mysql": {
              "name": "mysql",
              "host_name": "192.168.10.6",
              "alive_test": "ssh"
            },
            "kafka": {
              "name": "kafka",
              "host_name": "192.168.10.10",
              "alive_test": "ssh"
            }
          }
        }
      }
    }
  },
  "override_attributes": {
  },
  "chef_type": "role",
  "run_list": [
    "role[Basenode]",
    "recipe[mon_api]"
  ],
  "env_run_lists": {
  }
}

View File

@@ -1,19 +0,0 @@
{
  "name": "Basenode",
  "description": "Base setup for vagrant nodes",
  "json_class": "Chef::Role",
  "default_attributes": {
    "apt": {
      "periodic_update_min_delay": 60
    }
  },
  "override_attributes": {
  },
  "chef_type": "role",
  "run_list": [
    "recipe[mini-mon]",
    "recipe[monasca_agent]"
  ],
  "env_run_lists": {
  }
}

View File

@@ -1,39 +0,0 @@
{
  "name": "Devstack",
  "description": "Sets up a devstack server for keystone and UI",
  "json_class": "Chef::Role",
  "default_attributes": {
    "apt": {
      "periodic_update_min_delay": 60
    },
    "monasca_agent": {
      "plugin": {
        "host_alive": {
          "init_config": {
            "ssh_port": 22,
            "ssh_timeout": 0.5,
            "ping_timeout": 1
          },
          "instances": {
            "thresh": {
              "name": "thresh",
              "host_name": "192.168.10.14",
              "alive_test": "ssh"
            }
          }
        }
      }
    }
  },
  "override_attributes": {
  },
  "chef_type": "role",
  "run_list": [
    "recipe[devstack::mon-ui]",
    "recipe[devstack::keystone]",
    "recipe[devstack::agent_plugin_config]",
    "recipe[monasca_agent]"
  ],
  "env_run_lists": {
  }
}

View File

@@ -1,31 +0,0 @@
{
  "name": "Kafka",
  "description": "Sets up Kafka",
  "json_class": "Chef::Role",
  "default_attributes": {
    "kafka": {
      "listen_interface": "eth1",
      "topics": {
        "metrics": { "replicas": 1, "partitions": 4 },
        "events": { "replicas": 1, "partitions": 4 },
        "raw-events": { "replicas": 1, "partitions": 4 },
        "transformed-events": { "replicas": 1, "partitions": 4 },
        "alarm-state-transitions": { "replicas": 1, "partitions": 4 },
        "alarm-notifications": { "replicas": 1, "partitions": 4 }
      }
    }
  },
  "override_attributes": {
  },
  "chef_type": "role",
  "run_list": [
    "role[Basenode]",
    "recipe[zookeeper]",
    "recipe[kafka]",
    "recipe[mini-mon::postfix]",
    "recipe[kafka::create_topics]",
    "recipe[monasca_notification]"
  ],
  "env_run_lists": {
  }
}

View File

@@ -1,35 +0,0 @@
{
  "name": "MySQL",
  "description": "Sets up MySQL",
  "json_class": "Chef::Role",
  "default_attributes": {
    "percona": {
      "backup": {
        "password": "password"
      },
      "cluster": {
        "package": "percona-xtradb-cluster-56"
      },
      "main_config_file": "/etc/mysql/my.cnf",
      "server": {
        "bind_address": "0.0.0.0",
        "replication": {
          "password": "password"
        },
        "root_password": "password",
        "skip_name_resolve": true
      }
    }
  },
  "override_attributes": {
  },
  "chef_type": "role",
  "run_list": [
    "role[Basenode]",
    "recipe[percona::cluster]",
    "recipe[monasca_schema::mysql]",
    "recipe[mini-mon::mysql_client]"
  ],
  "env_run_lists": {
  }
}

View File

@@ -1,35 +0,0 @@
{
  "name": "Persister",
  "description": "Sets up Persister",
  "json_class": "Chef::Role",
  "default_attributes": {
    "monasca_agent": {
      "plugin": {
        "host_alive": {
          "init_config": {
            "ssh_port": 22,
            "ssh_timeout": 0.5,
            "ping_timeout": 1
          },
          "instances": {
            "vertica": {
              "name": "vertica",
              "host_name": "192.168.10.8",
              "alive_test": "ssh"
            }
          }
        }
      }
    }
  },
  "override_attributes": {
  },
  "chef_type": "role",
  "run_list": [
    "role[Basenode]",
    "recipe[monasca_persister]",
    "recipe[sysctl]"
  ],
  "env_run_lists": {
  }
}

View File

@@ -1,55 +0,0 @@
{
  "name": "Thresh",
  "description": "Sets up Thresh",
  "json_class": "Chef::Role",
  "default_attributes": {
    "java": {
      "install_flavor": "openjdk",
      "jdk_version": "7"
    },
    "storm": {
      "nimbus": {
        "host": {
          "fqdn": "192.168.10.14"
        }
      },
      "ui": {
        "port": "8088"
      },
      "zookeeper": {
        "quorum": [
          "192.168.10.10"
        ]
      }
    },
    "monasca_agent": {
      "plugin": {
        "host_alive": {
          "init_config": {
            "ssh_port": 22,
            "ssh_timeout": 0.5,
            "ping_timeout": 1
          },
          "instances": {
            "devstack": {
              "name": "devstack",
              "host_name": "192.168.10.5",
              "alive_test": "ssh"
            }
          }
        }
      }
    }
  },
  "override_attributes": {
  },
  "chef_type": "role",
  "run_list": [
    "role[Basenode]",
    "recipe[storm::nimbus]",
    "recipe[storm::supervisor]",
    "recipe[monasca_thresh]"
  ],
  "env_run_lists": {
  }
}

View File

@@ -1,18 +0,0 @@
{
  "name": "Vertica",
  "description": "Sets up Vertica",
  "json_class": "Chef::Role",
  "default_attributes": {
  },
  "override_attributes": {
  },
  "chef_type": "role",
  "run_list": [
    "role[Basenode]",
    "recipe[vertica]",
    "recipe[monasca_schema::vertica]",
    "recipe[sysctl]"
  ],
  "env_run_lists": {
  }
}