Added more functions for the vagrant subproject

Change-Id: I6032c3397dbee5d56dc43293eae42345c1ca4059
Tong Li 2016-04-01 11:14:12 -04:00
parent 0da86295b8
commit f10cb11b7d
8 changed files with 180 additions and 40 deletions

View File

@@ -281,8 +281,6 @@ to be done.
3. Edit the /etc/kiloeyes/kiloeyes.conf file to configure the middleware::

     [keystone_authtoken]
     auth_uri = http://<<keystone_ip>>:5000
     auth_url = http://<<keystone_ip>>:5000
     identity_uri = http://<<keystone_ip>>:5000
     auth_type = token

View File

@@ -4,7 +4,7 @@
log_file=api.log
log_dir=/var/log/kiloeyes/
log_level=DEBUG
default_log_levels = kiloeyes=DEBUG
default_log_levels = kiloeyes=DEBUG,keystonemiddleware=DEBUG
dispatcher = metrics
dispatcher = versions

View File

@@ -25,18 +25,78 @@ required software to run kiloeyes.
Usage:
======
You can install everything onto one machine or choose to install different
components onto different servers. There can be a lot of ways to split up
servers for different services. Here is an example:
components onto different servers. Currently python-keystonemiddleware, which
is used by kiloeyes for security, has dependencies that conflict with the
agent dependencies, so kiloeyes currently cannot co-exist with the agent on a
single machine. It is best to install kiloeyes and the agent onto separate
machines to avoid installation headaches. This vagrant project uses the
configuration files in the vagrant/onvm/conf directory: the nodes.conf.yml
file configures which nodes the various components are installed on, and the
ids.conf.yml file saves credentials. For example, components can be laid out
like this::
  controller:
    java
    elasticsearch
    kibana
    kiloeyes
  devstack:
    OpenStack environment
  agent01:
    agent
To indicate how the servers will be used, please edit the configuration
files vagrant/onvm/conf/nodes.conf.yml and ids.conf.yml. Here is an
example::
  controller:
    host_name: controller.leap.dev
    eth0: 192.168.1.90
  agent01:
    host_name: agent01.leap.dev
    eth0: 192.168.1.88
  logical2physical:
    kiloeyes: controller
    elastic: controller
    kafka: controller
  ctlnodes:
    - elastic
    - kafka
    - kiloeyes
  agentes:
    - agent01
The above configuration indicates that there are a total of 4 logical nodes:
elastic, kafka, kiloeyes and agent01. The installation sequence is elastic,
kafka, kiloeyes and then agent01; the ctlnodes section defines that sequence,
and ctlnodes are always installed before agent nodes. The logical2physical
section indicates how a logical node maps to a physical machine. In the above
example, 3 logical nodes (elastic, kafka and kiloeyes) are all mapped to a
physical node called controller, which is defined by its IP address and a
name; agent01 is likewise defined by its IP and name. With this setup, you
can also install elastic, kafka and kiloeyes onto different machines.
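For instance, to split the components across two machines, the
logical2physical mapping could look like the following sketch (the dbnode
host name and its IP are hypothetical, but the format follows
nodes.conf.yml)::

  controller:
    host_name: controller.leap.dev
    eth0: 192.168.1.90
  dbnode:
    host_name: dbnode.leap.dev
    eth0: 192.168.1.91
  logical2physical:
    kiloeyes: controller
    elastic: dbnode
    kafka: dbnode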
Since the agent was specifically developed to work with OpenStack security,
setting up the agent without OpenStack running somewhere would be pretty
pointless. The best way to set the whole thing up is to follow these
steps::
1. Prepare 3 machines; either physical or virtual machines should work fine.
2. Install DevStack onto the first machine and configure the keystone url,
   userid and password in the nodes.conf.yml file. If you already have an
   OpenStack system running, you can use that system as well; simply
   configure the nodes.conf.yml file with the right keystone auth url and
   credentials.
3. Find out the second and third machines' IPs and fill them in the
   nodes.conf.yml file; use the second machine for the controller and the
   third for the agent.
4. Make sure that the root user has the same password on the second and
   third machines. Place the user name and password in the ids.conf.yml file
   (see the sketch after these steps). Also make sure that each server has
   ssh turned on so that vagrant can run successfully.
5. Kiloeyes depends on Java, Elasticsearch and Kafka. This vagrant project
   will install these components onto the machines you specified in the conf
   file, but you will have to download their binaries into a directory
   located in the same parent directory as the kiloeyes root. The structure
   is indicated above in the introduction section.
6. Change to the vagrant directory and run the following two commands::
   vagrant up
   vagrant provision
7. If all goes well, you should have everything running successfully. After
   a while, the agent should be sending messages to kiloeyes, and the data
   should be available in Elasticsearch and can be viewed using Kibana::
   http://192.168.1.90:5601
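As referenced in step 4, a minimal ids.conf.yml might look like the
following; the exact key names are an assumption here, so check the sample
file under vagrant/onvm/conf for the real format::

  ---
  user: root
  password: ps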

View File

@@ -1,17 +1,8 @@
---
repo:
  host_name: repo.leap.dev
  eth0: 192.168.1.88
  eth1: 192.168.1.88
controller:
  host_name: controller.leap.dev
  eth0: 192.168.1.90
devstack:
  host_name: devstack.leap.dev
  eth0: 192.168.1.93
agent01:
  host_name: agent01.leap.dev
  eth0: 192.168.1.88
@@ -20,27 +11,33 @@ logical2physical:
  kiloeyes: controller
  elastic: controller
  kafka: controller
  devstack: controller
# Define how many logical nodes and the sequence of the installation
ctlnodes:
  - devstack
  - elastic
  - kafka
  - kiloeyes
# Define how many agents should be installed.
agentes:
  # - agent01
  # - agent02
  - agent01
uselocalrepo: yes
# This section defines the OpenStack credentials which will be used to
# create services and users to configure kiloeyes and agent.
# security_on determines if the keystone middleware should be plugged into
# the kiloeyes pipeline. If it is false, no security is turned on.
security_on: true
auth_uri: http://192.168.15.5:5000
admin_user: admin
admin_pw: ps
agent_user: kiloeyesagent
agent_pw: ps
aptopt: --force-yes
# The nodes should be a list of logical name
# The folder should be a local directory start from the project root
# The nodes should be a list of logical names and should appear in the ctlnodes
# The source should be a local directory relative to the vagrant directory
# The target should be a directory on the target system.
synchfolders:
  elastic:
    source: ./../../leapbin

View File

@@ -5,8 +5,43 @@
source /onvm/scripts/ini-config
eval $(parse_yaml '/onvm/conf/nodes.conf.yml' 'leap_')
#apt-get update
# Install git in case it has not been installed.
apt-get update
apt-get -qqy install git python-dev python-pip
git clone https://github.com/openstack/monasca-agent.git /opt/monasca-agent
cd /opt/monasca-agent
# Make sure a few required things are installed first
pip install "requests>=2.9.1"
pip install "psutil>=3.4.2"
pip install -r requirements.txt
python setup.py install
echo 'Setting up agent by running monasca-setup...'
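# Note: --system_only limits monasca-setup to the base system checks
# (cpu, disk, load, memory, network) instead of auto-detecting all plugins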
monasca-setup --username $leap_agent_user \
--password $leap_agent_pw \
--project_name kiloeyes \
--system_only --keystone_url "${leap_auth_uri}/v3"
echo 'Configuring supervisor.conf file...'
iniset /etc/monasca/agent/supervisor.conf inet_http_server port 'localhost:9001'
rm -r -f /etc/monasca/agent/conf.d/vcenter.yaml
# The following section is to prepare for manual installation
#mkdir -p /etc/monasca/agent/conf.d
#
#cp agent.yaml.template /etc/monasca/agent/agent.yaml
#
# Get the plugin configuration files
#for key in cpu disk load memory network; do
# cp conf.d/$key.yaml /etc/monasca/agent/conf.d
#done
service monasca-agent restart
echo 'Agent install is now complete!'
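# To verify the agent afterwards, check the service and its collector log
# (the log location below is the monasca-agent default and may vary):
#   service monasca-agent status
#   tail /var/log/monasca/agent/collector.log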

View File

@@ -18,13 +18,63 @@ eval node_ip=\$leap_${leap_logical2physical_kafka}_eth0; node_ip=`echo $node_ip`
kafka_ip=$node_ip
eval node_ip=\$leap_${leap_logical2physical_elastic}_eth0; node_ip=`echo $node_ip`
elastic_ip=$node_ip
eval node_ip=\$leap_${leap_logical2physical_kiloeyes}_eth0; node_ip=`echo $node_ip`
kiloeyes_ip=$node_ip
k_log_dir='/var/log/kiloeyes'
k_pid_dir='/var/run/kiloeyes'
mkdir -p $k_log_dir $k_pid_dir
# Configure kiloeyes
# If security_on, then we need to configure the keystone middleware
if [ "$leap_security_on" = 'true' ]; then
echo 'Install keystone middleware...'
apt-get -qqy install software-properties-common
add-apt-repository -y cloud-archive:liberty
apt-get update
apt-get -qqy install python-keystonemiddleware
iniset /etc/kiloeyes/kiloeyes.ini 'pipeline:main' 'pipeline' 'authtoken api'
iniset /etc/kiloeyes/kiloeyes.ini 'filter:authtoken' 'paste.filter_factory' 'keystonemiddleware.auth_token:filter_factory'
iniset /etc/kiloeyes/kiloeyes.ini 'filter:authtoken' 'delay_auth_decision' false
iniset /etc/kiloeyes/kiloeyes.conf keystone_authtoken identity_uri $leap_auth_uri
iniset /etc/kiloeyes/kiloeyes.conf keystone_authtoken auth_type token
iniset /etc/kiloeyes/kiloeyes.conf keystone_authtoken admin_user $leap_admin_user
iniset /etc/kiloeyes/kiloeyes.conf keystone_authtoken admin_password $leap_admin_pw
iniset /etc/kiloeyes/kiloeyes.conf keystone_authtoken admin_tenant_name admin
fi
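# After the iniset calls above, /etc/kiloeyes/kiloeyes.ini should contain
# roughly:
#   [pipeline:main]
#   pipeline = authtoken api
#   [filter:authtoken]
#   paste.filter_factory = keystonemiddleware.auth_token:filter_factory
#   delay_auth_decision = false
# and /etc/kiloeyes/kiloeyes.conf gains a matching [keystone_authtoken]
# section populated from nodes.conf.yml.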
# if auth_uri is configured, then we need to create these services and users
if [ ! -z "$leap_auth_uri" ]; then
apt-get -qqy install software-properties-common
add-apt-repository -y cloud-archive:liberty
apt-get update
apt-get -qqy install python-openstackclient
# Setup environment variables
export OS_USERNAME=$leap_admin_user
export OS_PASSWORD=$leap_admin_pw
export OS_TENANT_NAME=admin
export OS_AUTH_URL="${leap_auth_uri}/v3"
export OS_IDENTITY_API_VERSION=3
# if the service and user have not been set up, go ahead and set them up
openstack service list | grep monitoring
if [ $? -gt 0 ]; then
openstack service create --name kiloeyes --description "Monitoring" monitoring
openstack endpoint create --region RegionOne monitoring public http://$kiloeyes_ip:9090/v2.0
openstack endpoint create --region RegionOne monitoring admin http://$kiloeyes_ip:9090/v2.0
openstack endpoint create --region RegionOne monitoring internal http://$kiloeyes_ip:9090/v2.0
openstack project create --domain default --description "Kiloeyes Project" kiloeyes
openstack user create --domain default --password $leap_agent_pw $leap_agent_user
openstack role add --project kiloeyes --user $leap_agent_user member
fi
fi
echo 'Config /etc/kiloeyes/kiloeyes.conf file...'
iniset /etc/kiloeyes/kiloeyes.conf DEFAULT log_dir $k_log_dir
iniset /etc/kiloeyes/kiloeyes.conf kafka_opts uri $kafka_ip:9092

View File

@@ -1,7 +1,7 @@
#VBoxManage snapshot h2-compute01 restore "Snapshot 3"
VBoxManage snapshot h2-compute01 restore "Snapshot 3"
#VBoxManage snapshot h2-nova restore "Snapshot 3"
VBoxManage snapshot h2-controller restore "Snapshot 3"
#vboxmanage startvm h2-compute01 --type headless
vboxmanage startvm h2-compute01 --type headless
#vboxmanage startvm h2-nova --type headless
vboxmanage startvm h2-controller --type headless

View File

@@ -1,3 +1,3 @@
#vboxmanage controlvm h2-compute01 acpipowerbutton
vboxmanage controlvm h2-compute01 acpipowerbutton
#vboxmanage controlvm h2-nova acpipowerbutton
vboxmanage controlvm h2-controller acpipowerbutton