Implement DevStack plugin

Use DevStack's plugin model to allow developers to easily set up the
Watcher services within DevStack.

Provides documentation on how to use the plugin as well as on how to
configure a multi-node DevStack installation with it.  Also provides
example local.conf files for both the controller and compute nodes.

Implements blueprint devstack-plugin

Change-Id: Ide663813f7a49d645877a21a0d1914be3218385e
Author: Taylor Peoples
Date:   2015-12-08 21:08:39 +01:00
Commit: 6c1769a15e
Parent: b41a2cc940

7 changed files with 552 additions and 0 deletions

devstack/lib/watcher (new file)

@@ -0,0 +1,258 @@
#!/bin/bash
#
# lib/watcher
# Functions to control the configuration and operation of the watcher services
# Dependencies:
#
# - ``functions`` file
# - ``SERVICE_{TENANT_NAME|PASSWORD}`` must be defined
# - ``DEST``, ``DATA_DIR``, ``STACK_USER`` must be defined
# ``stack.sh`` calls the entry points in this order:
#
# - is_watcher_enabled
# - install_watcher
# - configure_watcher
# - create_watcher_conf
# - init_watcher
# - start_watcher
# - stop_watcher
# - cleanup_watcher
# Save trace setting
_XTRACE_WATCHER=$(set +o | grep xtrace)
set +o xtrace
# Defaults
# --------
# Set up default directories
WATCHER_REPO=${WATCHER_REPO:-${GIT_BASE}/openstack/watcher.git}
WATCHER_BRANCH=${WATCHER_BRANCH:-master}
WATCHER_DIR=$DEST/watcher
GITREPO["python-watcherclient"]=${WATCHERCLIENT_REPO:-${GIT_BASE}/openstack/python-watcherclient.git}
GITBRANCH["python-watcherclient"]=${WATCHERCLIENT_BRANCH:-master}
GITDIR["python-watcherclient"]=$DEST/python-watcherclient
WATCHER_STATE_PATH=${WATCHER_STATE_PATH:=$DATA_DIR/watcher}
WATCHER_AUTH_CACHE_DIR=${WATCHER_AUTH_CACHE_DIR:-/var/cache/watcher}
WATCHER_CONF_DIR=/etc/watcher
WATCHER_CONF=$WATCHER_CONF_DIR/watcher.conf
WATCHER_POLICY_JSON=$WATCHER_CONF_DIR/policy.json
if is_ssl_enabled_service "watcher" || is_service_enabled tls-proxy; then
    WATCHER_SERVICE_PROTOCOL="https"
fi

# Public facing bits
WATCHER_SERVICE_HOST=${WATCHER_SERVICE_HOST:-$HOST_IP}
WATCHER_SERVICE_PORT=${WATCHER_SERVICE_PORT:-9322}
WATCHER_SERVICE_PORT_INT=${WATCHER_SERVICE_PORT_INT:-19322}
WATCHER_SERVICE_PROTOCOL=${WATCHER_SERVICE_PROTOCOL:-$SERVICE_PROTOCOL}

# Support entry points installation of console scripts
if [[ -d $WATCHER_DIR/bin ]]; then
    WATCHER_BIN_DIR=$WATCHER_DIR/bin
else
    WATCHER_BIN_DIR=$(get_python_exec_prefix)
fi
# Entry Points
# ------------
# Test if any watcher services are enabled
# is_watcher_enabled
function is_watcher_enabled {
    [[ ,${ENABLED_SERVICES} =~ ,"watcher-" ]] && return 0
    return 1
}

# cleanup_watcher() - Remove residual data files, anything left over from
# previous runs that a clean run would need to clean up
function cleanup_watcher {
    sudo rm -rf $WATCHER_STATE_PATH $WATCHER_AUTH_CACHE_DIR
}

# configure_watcher() - Set config files, create data dirs, etc
function configure_watcher {
    # Put config files in ``/etc/watcher`` for everyone to find
    if [[ ! -d $WATCHER_CONF_DIR ]]; then
        sudo mkdir -p $WATCHER_CONF_DIR
        sudo chown $STACK_USER $WATCHER_CONF_DIR
    fi

    install_default_policy watcher

    # Rebuild the config file from scratch
    create_watcher_conf
}
# create_watcher_accounts() - Set up common required watcher accounts
#
# Project              User       Roles
# ------------------------------------------------------------------
# SERVICE_TENANT_NAME  watcher    service
function create_watcher_accounts {
    create_service_user "watcher" "admin"

    if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
        local watcher_service=$(get_or_create_service "watcher" \
            "infra-optim" "Watcher Infrastructure Optimization Service")
        get_or_create_endpoint $watcher_service \
            "$REGION_NAME" \
            "$WATCHER_SERVICE_PROTOCOL://$WATCHER_SERVICE_HOST:$WATCHER_SERVICE_PORT" \
            "$WATCHER_SERVICE_PROTOCOL://$WATCHER_SERVICE_HOST:$WATCHER_SERVICE_PORT" \
            "$WATCHER_SERVICE_PROTOCOL://$WATCHER_SERVICE_HOST:$WATCHER_SERVICE_PORT"
    fi
}
# create_watcher_conf() - Create a new watcher.conf file
function create_watcher_conf {
    # (Re)create ``watcher.conf``
    rm -f $WATCHER_CONF

    iniset $WATCHER_CONF DEFAULT debug "$ENABLE_DEBUG_LOG_LEVEL"
    iniset $WATCHER_CONF DEFAULT control_exchange watcher

    iniset $WATCHER_CONF database connection $(database_connection_url watcher)

    iniset $WATCHER_CONF api host "$WATCHER_SERVICE_HOST"
    iniset $WATCHER_CONF api port "$WATCHER_SERVICE_PORT"

    iniset $WATCHER_CONF oslo_policy policy_file $WATCHER_POLICY_JSON

    iniset $WATCHER_CONF oslo_messaging_rabbit rabbit_userid $RABBIT_USERID
    iniset $WATCHER_CONF oslo_messaging_rabbit rabbit_password $RABBIT_PASSWORD
    iniset $WATCHER_CONF oslo_messaging_rabbit rabbit_host $RABBIT_HOST

    iniset $WATCHER_CONF keystone_authtoken admin_user watcher
    iniset $WATCHER_CONF keystone_authtoken admin_password $SERVICE_PASSWORD
    iniset $WATCHER_CONF keystone_authtoken admin_tenant_name $SERVICE_TENANT_NAME

    configure_auth_token_middleware $WATCHER_CONF watcher $WATCHER_AUTH_CACHE_DIR

    iniset $WATCHER_CONF keystone_authtoken auth_uri $KEYSTONE_SERVICE_URI/v3
    iniset $WATCHER_CONF keystone_authtoken auth_version v3

    if is_fedora || is_suse; then
        # watcher defaults to /usr/local/bin, but fedora and suse pip like to
        # install things in /usr/bin
        iniset $WATCHER_CONF DEFAULT bindir "/usr/bin"
    fi

    if [ -n "$WATCHER_STATE_PATH" ]; then
        iniset $WATCHER_CONF DEFAULT state_path "$WATCHER_STATE_PATH"
        iniset $WATCHER_CONF oslo_concurrency lock_path "$WATCHER_STATE_PATH"
    fi

    if [ "$SYSLOG" != "False" ]; then
        iniset $WATCHER_CONF DEFAULT use_syslog "True"
    fi

    # Format logging
    if [ "$LOG_COLOR" == "True" ] && [ "$SYSLOG" == "False" ]; then
        setup_colorized_logging $WATCHER_CONF DEFAULT
    else
        # Show user_name and project_name instead of user_id and project_id
        iniset $WATCHER_CONF DEFAULT logging_context_format_string "%(asctime)s.%(msecs)03d %(levelname)s %(name)s [%(request_id)s %(user_name)s %(project_name)s] %(instance)s%(message)s"
    fi

    # Register SSL certificates if provided
    if is_ssl_enabled_service watcher; then
        ensure_certificates WATCHER

        iniset $WATCHER_CONF DEFAULT ssl_cert_file "$WATCHER_SSL_CERT"
        iniset $WATCHER_CONF DEFAULT ssl_key_file "$WATCHER_SSL_KEY"
        iniset $WATCHER_CONF DEFAULT enabled_ssl_apis "$WATCHER_ENABLED_APIS"
    fi

    if is_service_enabled ceilometer; then
        iniset $WATCHER_CONF watcher_messaging notifier_driver "messaging"
    fi
}
# create_watcher_cache_dir() - Part of the init_watcher() process
function create_watcher_cache_dir {
    # Create cache dir
    sudo mkdir -p $WATCHER_AUTH_CACHE_DIR
    sudo chown $STACK_USER $WATCHER_AUTH_CACHE_DIR
    rm -f $WATCHER_AUTH_CACHE_DIR/*
}

# init_watcher() - Initialize databases, etc.
function init_watcher {
    # clean up from previous (possibly aborted) runs
    # create required data files
    if is_service_enabled $DATABASE_BACKENDS && is_service_enabled watcher-api; then
        # (Re)create watcher database
        recreate_database watcher

        # Create watcher schema
        $WATCHER_BIN_DIR/watcher-db-manage --config-file $WATCHER_CONF create_schema
    fi
    create_watcher_cache_dir
}

# install_watcherclient() - Collect source and prepare
function install_watcherclient {
    if use_library_from_git "python-watcherclient"; then
        git_clone_by_name "python-watcherclient"
        setup_dev_lib "python-watcherclient"
    fi
}

# install_watcher() - Collect source and prepare
function install_watcher {
    git_clone $WATCHER_REPO $WATCHER_DIR $WATCHER_BRANCH
    setup_develop $WATCHER_DIR
}
# start_watcher_api() - Start the API process ahead of other things
function start_watcher_api {
    # Get right service port for testing
    local service_port=$WATCHER_SERVICE_PORT
    local service_protocol=$WATCHER_SERVICE_PROTOCOL
    if is_service_enabled tls-proxy; then
        service_port=$WATCHER_SERVICE_PORT_INT
        service_protocol="http"
    fi

    run_process watcher-api "$WATCHER_BIN_DIR/watcher-api --config-file $WATCHER_CONF"
    echo "Waiting for watcher-api to start..."
    if ! wait_for_service $SERVICE_TIMEOUT $service_protocol://$WATCHER_SERVICE_HOST:$service_port; then
        die $LINENO "watcher-api did not start"
    fi

    # Start the TLS proxy if enabled
    if is_service_enabled tls-proxy; then
        start_tls_proxy '*' $WATCHER_SERVICE_PORT $WATCHER_SERVICE_HOST $WATCHER_SERVICE_PORT_INT &
    fi
}

# start_watcher() - Start running processes, including screen
function start_watcher {
    # ``run_process`` checks ``is_service_enabled``, so no explicit check is needed here
    start_watcher_api
    run_process watcher-decision-engine "$WATCHER_BIN_DIR/watcher-decision-engine --config-file $WATCHER_CONF"
    run_process watcher-applier "$WATCHER_BIN_DIR/watcher-applier --config-file $WATCHER_CONF"
}

# stop_watcher() - Stop running processes (non-screen)
function stop_watcher {
    for serv in watcher-api watcher-decision-engine watcher-applier; do
        stop_process $serv
    done
}
# Restore xtrace
$_XTRACE_WATCHER
# Tell emacs to use shell-script-mode
## Local variables:
## mode: shell-script
## End:

devstack/local.conf.compute (new file)

@@ -0,0 +1,46 @@
# Sample ``local.conf`` for compute node for Watcher development
# NOTE: Copy this file to the root DevStack directory for it to work properly.
[[local|localrc]]
ADMIN_PASSWORD=nomoresecrete
DATABASE_PASSWORD=stackdb
RABBIT_PASSWORD=stackqueue
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=azertytoken
HOST_IP=192.168.42.2 # Change this to this compute node's IP address
FLAT_INTERFACE=eth0
FIXED_RANGE=10.254.1.0/24 # Change this to whatever your network is
NETWORK_GATEWAY=10.254.1.1 # Change this for your network
MULTI_HOST=1
SERVICE_HOST=192.168.42.1 # Change this to the IP of your controller node
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=${SERVICE_HOST}:9292
DATABASE_TYPE=mysql
# Enable services (including neutron)
ENABLED_SERVICES=n-cpu,n-api-meta,c-vol,q-agt
NOVA_VNC_ENABLED=True
NOVNCPROXY_URL="http://$SERVICE_HOST:6080/vnc_auto.html"
VNCSERVER_LISTEN=0.0.0.0
VNCSERVER_PROXYCLIENT_ADDRESS=$HOST_IP
NOVA_INSTANCES_PATH=/opt/stack/data/instances
# Enable the Ceilometer plugin for the compute agent
enable_plugin ceilometer git://git.openstack.org/openstack/ceilometer
disable_service ceilometer-acentral ceilometer-collector ceilometer-api
LOGFILE=$DEST/logs/stack.sh.log
LOGDAYS=2
[[post-config|$NOVA_CONF]]
[DEFAULT]
compute_monitors=cpu.virt_driver

devstack/local.conf.controller (new file)

@@ -0,0 +1,48 @@
# Sample ``local.conf`` for controller node for Watcher development
# NOTE: Copy this file to the root DevStack directory for it to work properly.
[[local|localrc]]
ADMIN_PASSWORD=nomoresecrete
DATABASE_PASSWORD=stackdb
RABBIT_PASSWORD=stackqueue
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=azertytoken
HOST_IP=192.168.42.1 # Change this to your controller node IP address
FLAT_INTERFACE=eth0
FIXED_RANGE=10.254.1.0/24 # Change this to whatever your network is
NETWORK_GATEWAY=10.254.1.1 # Change this for your network
MULTI_HOST=1
# This is the controller node, so disable nova-compute
disable_service n-cpu
# Disable nova-network and use neutron instead
disable_service n-net
ENABLED_SERVICES+=,q-svc,q-dhcp,q-meta,q-agt,q-l3,neutron
# Enable remote console access
enable_service n-cauth
# Enable the Watcher plugin
enable_plugin watcher git://git.openstack.org/openstack/watcher
# Enable the Ceilometer plugin
enable_plugin ceilometer git://git.openstack.org/openstack/ceilometer
# This is the controller node, so disable the ceilometer compute agent
disable_service ceilometer-acompute
LOGFILE=$DEST/logs/stack.sh.log
LOGDAYS=2
[[post-config|$NOVA_CONF]]
[DEFAULT]
compute_monitors=cpu.virt_driver
[[post-config|$WATCHER_CONF]]
[watcher_goals]
goals=BASIC_CONSOLIDATION:basic,DUMMY:dummy

devstack/plugin.sh (new file)

@@ -0,0 +1,53 @@
#!/bin/bash
#
# plugin.sh - DevStack plugin script to install watcher
# Save trace setting
_XTRACE_WATCHER_PLUGIN=$(set +o | grep xtrace)
set -o xtrace
echo_summary "watcher's plugin.sh was called..."
source $DEST/watcher/devstack/lib/watcher
# Show all defined environment variables
(set -o posix; set)
if is_service_enabled watcher-api watcher-decision-engine watcher-applier; then

    if [[ "$1" == "stack" && "$2" == "pre-install" ]]; then
        echo_summary "Before Installing watcher"

    elif [[ "$1" == "stack" && "$2" == "install" ]]; then
        echo_summary "Installing watcher"
        install_watcher

        LIBS_FROM_GIT="${LIBS_FROM_GIT},python-watcherclient"
        install_watcherclient

        cleanup_watcher

    elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then
        echo_summary "Configuring watcher"
        configure_watcher

        if is_service_enabled key; then
            create_watcher_accounts
        fi

    elif [[ "$1" == "stack" && "$2" == "extra" ]]; then
        # Initialize watcher
        init_watcher

        # Start the watcher components
        echo_summary "Starting watcher"
        start_watcher
    fi

    if [[ "$1" == "unstack" ]]; then
        stop_watcher
    fi

    if [[ "$1" == "clean" ]]; then
        cleanup_watcher
    fi
fi
# Restore xtrace
$_XTRACE_WATCHER_PLUGIN

devstack/settings (new file)

@@ -0,0 +1,9 @@
# DevStack settings
# Make sure rabbit is enabled
enable_service rabbit
# Enable Watcher services
enable_service watcher-api
enable_service watcher-decision-engine
enable_service watcher-applier

doc/source/dev/devstack-plugin.rst (new file)

@@ -0,0 +1,127 @@
===============
DevStack Plugin
===============

Watcher is currently able to optimize compute resources - specifically Nova
compute hosts - via operations such as live migrations. To fully exercise
what Watcher can do, you need a multi-node environment. If you have no
experience with DevStack, you should check out the `DevStack documentation`_
and be comfortable with the basics of DevStack before attempting to get a
multi-node DevStack setup with the Watcher plugin.

You can set up the Watcher services quickly and easily using the Watcher
DevStack plugin. See `PluginModelDocs`_ for information on DevStack's plugin
model.

.. _DevStack documentation: http://docs.openstack.org/developer/devstack/
.. _PluginModelDocs: http://docs.openstack.org/developer/devstack/plugins.html

It is recommended that you build off of the provided example local.conf files
(`local.conf.controller`_, `local.conf.compute`_). You will likely want to
configure something to obtain metrics, such as Ceilometer; Ceilometer is
enabled in both example files.
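
For example, both sample files turn on the Ceilometer DevStack plugin from
their `[[local|localrc]]` sections; a minimal metrics setup looks like this::

    # Enable the Ceilometer plugin so Watcher has a source of metrics
    enable_plugin ceilometer git://git.openstack.org/openstack/ceilometer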

To configure the Watcher services with DevStack, add the following to the
`[[local|localrc]]` section of your controller's `local.conf` to enable the
Watcher plugin::

    enable_plugin watcher git://git.openstack.org/openstack/watcher

Then run devstack normally::

    cd /opt/stack/devstack
    ./stack.sh

.. _local.conf.controller: https://github.com/openstack/watcher/tree/master/devstack/local.conf.controller
.. _local.conf.compute: https://github.com/openstack/watcher/tree/master/devstack/local.conf.compute

Multi-Node DevStack Environment
===============================

Since deploying Watcher with only a single compute node is not very useful, a
few tips are given here for enabling a multi-node environment with live
migration.

Configuring NFS Server
----------------------

If you would like to use live migration for shared storage, then the
controller can serve as the NFS server if needed::

    sudo apt-get install nfs-kernel-server
    sudo mkdir -p /nfs/instances
    sudo chown stack:stack /nfs/instances

Add an entry to `/etc/exports` with the appropriate gateway and netmask
information::

    /nfs/instances <gateway>/<netmask>(rw,fsid=0,insecure,no_subtree_check,async,no_root_squash)

Export the NFS directories::

    sudo exportfs -ra

Make sure the NFS server is running::

    sudo service nfs-kernel-server status

If the server is not running, then start it::

    sudo service nfs-kernel-server start
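
Optionally, you can confirm that the export is visible to clients;
`showmount` is typically available once the NFS packages above are
installed::

    showmount -e localhost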

Configuring NFS on Compute Node
-------------------------------

Each compute node needs to use the NFS server to hold the instance data::

    sudo apt-get install rpcbind nfs-common
    mkdir -p /opt/stack/data/instances
    sudo mount <nfs-server-ip>:/nfs/instances /opt/stack/data/instances
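
To confirm that the share is mounted where Nova expects instance data (this
assumes the default DevStack data directory shown above), check the mount
table::

    mount | grep /opt/stack/data/instances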

If you would like to have the NFS directory automatically mounted on reboot,
then add the following to `/etc/fstab`::

    <nfs-server-ip>:/nfs/instances /opt/stack/data/instances nfs auto 0 0
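
As a quick check that the new `/etc/fstab` entry is well formed, you can ask
mount to process it immediately instead of waiting for a reboot::

    sudo mount -a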

Edit `/etc/libvirt/libvirtd.conf` to make sure the following values are set::

    listen_tls = 0
    listen_tcp = 1
    auth_tcp = "none"

Edit `/etc/default/libvirt-bin` so that libvirtd listens for TCP connections::

    libvirtd_opts="-d -l"

Restart the libvirt service::

    sudo service libvirt-bin restart
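
As a sanity check (assuming libvirt's default TCP listen port of 16509), you
can verify that libvirtd is now accepting TCP connections::

    sudo netstat -lntp | grep 16509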

Setting up SSH keys between compute nodes to enable live migration
--------------------------------------------------------------------

In order for live migration to work, SSH keys need to be exchanged between
each compute node:

1. The SOURCE root user's public RSA key (likely in /root/.ssh/id_rsa.pub)
   needs to be in the DESTINATION stack user's authorized_keys file
   (~stack/.ssh/authorized_keys). This can be accomplished by manually
   copying the contents from the file on the SOURCE to the DESTINATION. If
   you have a password configured for the stack user, then you can use the
   following command to accomplish the same thing::

       ssh-copy-id -i /root/.ssh/id_rsa.pub stack@DESTINATION

2. The DESTINATION host's public ECDSA key (/etc/ssh/ssh_host_ecdsa_key.pub)
   needs to be in the SOURCE root user's known_hosts file
   (/root/.ssh/known_hosts). This can be accomplished by running the
   following on the SOURCE machine (hostname must be used)::

       ssh-keyscan -H DEST_HOSTNAME | sudo tee -a /root/.ssh/known_hosts

In essence, this means that every compute node's root user's public RSA key
must exist in every other compute node's stack user's authorized_keys file and
every compute node's public ECDSA key needs to be in every other compute
node's root user's known_hosts file.
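
With more than a couple of compute nodes, doing this exchange by hand gets
tedious. The following is a minimal sketch of how it could be scripted from
the controller, assuming a COMPUTE_NODES list that you define yourself, root
SSH access to each node, and password-based SSH access to the stack user::

    #!/bin/bash
    # Hypothetical helper: exchange keys between every pair of compute nodes.
    # COMPUTE_NODES is an assumption -- set it to your own hostnames.
    COMPUTE_NODES="compute1 compute2"

    for src in $COMPUTE_NODES; do
        for dest in $COMPUTE_NODES; do
            [ "$src" = "$dest" ] && continue
            # SOURCE root's public RSA key -> DESTINATION stack user's authorized_keys
            ssh root@$src "cat /root/.ssh/id_rsa.pub" | \
                ssh stack@$dest "cat >> ~/.ssh/authorized_keys"
            # DESTINATION host's ECDSA key -> SOURCE root's known_hosts
            ssh root@$src "ssh-keyscan -H $dest >> /root/.ssh/known_hosts"
        done
    done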

doc/source/index.rst

@@ -34,6 +34,17 @@ Introduction
   dev/plugins

DevStack Plugin
---------------

You can configure DevStack to set up the Watcher services easily using
Watcher's DevStack plugin.

.. toctree::
   :maxdepth: 1

   dev/devstack-plugin

API References
--------------