Retire cue

Cue was retired as an official project in mid-2016 and development did not
continue; it's time to retire it completely.

Remove everything, update README.

Depends-On: https://review.openstack.org/551202
Change-Id: I1f4a71fbea8a90303036ad0adaec95fa15b6522f
Andreas Jaeger 2018-03-09 09:36:22 +01:00
parent 4978f34055
commit bb7e81e1ab
308 changed files with 11 additions and 25283 deletions


@@ -1,7 +0,0 @@
[run]
branch = True
source = cue, os_tasklib
omit = cue/tests/*,cue/openstack/*
[report]
ignore_errors = True

.gitignore

@@ -1,79 +0,0 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
# C extensions
*.so
# Distribution / packaging
.Python
env/
build/
develop-eggs/
dist/
downloads/
eggs/
lib64/
parts/
sdist/
var/
*.egg-info/
.installed.cfg
*.egg
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# IDE
.idea/
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
cover/
htmlcov/
.tox/
.testrepository/
.coverage
.cache
nosetests.xml
coverage.xml
# Translations
*.mo
*.pot
# Django stuff:
*.log
# Sphinx documentation
doc/build
doc/source/api
doc/source/autoindex.rst
# PyBuilder
target/
# Vagrant
.vagrant
# Rope
.ropeproject
# Virtualenv
.virtualenv
.venv
# OSX Finder
.DS_Store
# Testr coverage
.coverage.*
# pyenv
.python-version


@@ -1,4 +0,0 @@
[gerrit]
host=review.openstack.org
port=29418
project=openstack/cue.git


@@ -1,9 +0,0 @@
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_LOG_CAPTURE=${OS_LOG_CAPTURE:-1} \
OS_DEBUG=${OS_DEBUG:-1} \
OS_TEST_TIMEOUT=60 \
${PYTHON:-python} -m subunit.run discover ./cue/tests $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list

LICENSE

@@ -1,202 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -1,17 +1,14 @@
-Cue
-===
-Openstack Message Broker Provisioning Service.
-This service provides Provisioning and Management of Message Brokers.
-Supported MQ's
-==============
-RabbitMQ
-Getting Started
-===============
-http://cue.readthedocs.org/en/latest/getting-started.html
+This project is no longer maintained.
+The contents of this repository are still available in the Git
+source code management system. To see the contents of this
+repository before it reached its end of life, please check out the
+previous commit with "git checkout HEAD^1".
+(Optional:)
+For an alternative project, please see <alternative project name> at
+<alternative project URL>.
+For any further questions, please email
+openstack-dev@lists.openstack.org or join #openstack-dev on
+Freenode.


@@ -1,17 +0,0 @@
The contrib/devstack directory contains the files needed to integrate Cue with Devstack.
To install Cue
# Clone devstack and cue
git clone https://github.com/openstack-dev/devstack.git
git clone https://github.com/openstack/cue.git
# Install the cue plugins onto Devstack
./cue/contrib/devstack/install.sh
# Copy the local.conf to your devstack
cp cue/contrib/devstack/local.conf devstack/
This will create the necessary symlinks to the Cue devstack plugin, and set up
devstack with a local.conf that enables the Cue services and their dependencies.


@@ -1,27 +0,0 @@
# dib.sh - Devstack extras script to install diskimage-builder
if is_service_enabled dib; then
if [[ "$1" == "source" ]]; then
# Initial source
source $TOP_DIR/lib/dib
elif [[ "$1" == "stack" && "$2" == "install" ]]; then
echo_summary "Installing diskimage-builder"
install_dib
elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then
# no-op
:
elif [[ "$1" == "stack" && "$2" == "extra" ]]; then
# no-op
:
fi
if [[ "$1" == "unstack" ]]; then
# no-op
:
fi
if [[ "$1" == "clean" ]]; then
# no-op
:
fi
fi
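This extras.d script follows devstack's dispatcher convention: devstack sources it repeatedly, passing a phase name ("source", "stack" with "install"/"post-config"/"extra", then "unstack" and "clean"), and each script reacts only to the phases it cares about. A minimal, self-contained sketch of that pattern (the phase names come from the file above; the stub bodies are illustrative only):

```shell
# Minimal sketch of the extras.d phase-dispatch pattern. devstack would
# source the real file with arguments such as "stack install" or
# "unstack"; here we wrap the dispatch in a function and call it directly.
dispatch() {
    if [ "$1" = "stack" ] && [ "$2" = "install" ]; then
        echo "install phase"
    elif [ "$1" = "unstack" ]; then
        echo "teardown phase"
    else
        # All other phases are no-ops, mirroring the ":" branches above.
        echo "no-op"
    fi
}

dispatch stack install   # -> install phase
dispatch unstack         # -> teardown phase
```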


@@ -1,47 +0,0 @@
# check for service enabled
if is_service_enabled cue; then
if [[ "$1" == "source" ]]; then
# Initial source of lib script
source $TOP_DIR/lib/cue
fi
if [[ "$1" == "stack" && "$2" == "install" ]]; then
echo_summary "Installing Cue"
install_cue
echo_summary "Installing Cue Client"
install_cueclient
echo_summary "Installing Cue Dashboard"
install_cuedashboard
elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then
echo_summary "Configuring Cue"
configure_cue
if is_service_enabled key; then
echo_summary "Creating Cue Keystone Accounts"
create_cue_accounts
fi
elif [[ "$1" == "stack" && "$2" == "extra" ]]; then
echo_summary "Initializing Cue"
init_cue
echo_summary "Starting Cue"
start_cue
echo_summary "Creating Initial Cue Resources"
create_cue_initial_resources
fi
if [[ "$1" == "unstack" ]]; then
stop_cue
fi
if [[ "$1" == "clean" ]]; then
echo_summary "Cleaning Cue"
cleanup_cue
fi
fi


@@ -1,31 +0,0 @@
if [[ "$1" == "stack" && "$2" == "post-config" ]]; then
if [[ ! -z $RALLY_AUTH_URL ]]; then
# rally deployment create
tmpfile=$(mktemp)
_create_deployment_config $tmpfile
iniset $RALLY_CONF_DIR/$RALLY_CONF_FILE database connection `database_connection_url rally`
recreate_database rally utf8
# Recreate rally database
$RALLY_BIN_DIR/rally-manage --config-file $RALLY_CONF_DIR/$RALLY_CONF_FILE db recreate
rally --config-file /etc/rally/rally.conf deployment create --name cue-devstack2 --file $tmpfile
fi
fi
# _create_deployment_config filename
function _create_deployment_config() {
cat >$1 <<EOF
{
"type": "ExistingCloud",
"auth_url": "$KEYSTONE_AUTH_PROTOCOL://$KEYSTONE_AUTH_HOST:$KEYSTONE_AUTH_PORT/$RALLY_AUTH_VERSION",
"admin": {
"username": "admin",
"password": "$ADMIN_PASSWORD",
"project_name": "admin"
}
}
EOF
}
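The `_create_deployment_config` helper writes its JSON through a mktemp-plus-heredoc idiom: generate a throwaway file, hand its path to the consumer, and discard it. A reduced sketch of just that idiom (the JSON content here is a placeholder, not the real deployment config):

```shell
# Write generated content to a temporary file, consume it, clean up.
tmpfile=$(mktemp)
cat >"$tmpfile" <<EOF
{"type": "ExistingCloud"}
EOF
cat "$tmpfile"    # a consumer (e.g. "rally deployment create --file") would read it here
rm -f "$tmpfile"
```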


@@ -1,14 +0,0 @@
#!/bin/bash
DIR=$(readlink -e $(dirname $(readlink -f $0)))
pushd $DIR
for f in lib/* extras.d/*; do
if [ ! -e "$DIR/../../../devstack/$f" ]; then
echo "Installing symlink for $f"
ln -fs $DIR/$f $DIR/../../../devstack/$f
fi
done
popd
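install.sh relies on an idempotent symlink idiom: create the link only when nothing already exists at the destination, so re-running the installer is harmless. A self-contained sketch, with temporary directories standing in for the cue and devstack trees:

```shell
src=$(mktemp -d); dst=$(mktemp -d)
echo "payload" > "$src/f"
# Only link when the destination does not already exist, mirroring the
# "[ ! -e ... ]" guard around "ln -fs" in install.sh above.
if [ ! -e "$dst/f" ]; then
    ln -fs "$src/f" "$dst/f"
fi
cat "$dst/f"   # -> payload
rm -rf "$src" "$dst"
```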


@@ -1,351 +0,0 @@
#!/bin/bash
#
# lib/cue
# Install and start **Cue** service
# To enable Cue services, add the following to localrc
# enable_service cue,cue-api,cue-worker
# stack.sh
# ---------
# install_cue
# configure_cue
# init_cue
# start_cue
# stop_cue
# cleanup_cue
# Save trace setting
XTRACE=$(set +o | grep xtrace)
set +o xtrace
# Defaults
# --------
CUE_PLUGINS=$TOP_DIR/lib/cue_plugins
# Set up default repos
CUE_REPO=${CUE_REPO:-${GIT_BASE}/openstack/cue.git}
CUE_BRANCH=${CUE_BRANCH:-master}
CUECLIENT_REPO=${CUECLIENT_REPO:-${GIT_BASE}/openstack/python-cueclient.git}
CUECLIENT_BRANCH=${CUECLIENT_BRANCH:-master}
CUEDASHBOARD_REPO=${CUEDASHBOARD_REPO:-${GIT_BASE}/openstack/cue-dashboard.git}
CUEDASHBOARD_BRANCH=${CUEDASHBOARD_BRANCH:-master}
CUE_MANAGEMENT_NETWORK_SUBNET=${CUE_MANAGEMENT_NETWORK_SUBNET-"172.16.0.0/24"}
# Set up default paths
CUE_BIN_DIR=$(get_python_exec_prefix)
CUE_DIR=$DEST/cue
CUECLIENT_DIR=$DEST/python-cueclient
CUEDASHBOARD_DIR=$DEST/cue-dashboard
CUE_CONF_DIR=/etc/cue
CUE_STATE_PATH=${CUE_STATE_PATH:=$DATA_DIR/cue}
CUE_CONF=$CUE_CONF_DIR/cue.conf
CUE_LOG_DIR=/var/log/cue
CUE_AUTH_CACHE_DIR=${CUE_AUTH_CACHE_DIR:-/var/cache/cue}
CUE_TF_DB=${CUE_TF_DB:-cue_taskflow}
CUE_TF_PERSISTENCE=${CUE_TF_PERSISTENCE:-}
CUE_TF_CREATE_CLUSTER_NODE_VM_ACTIVE_RETRY_COUNT=${CUE_TF_CREATE_CLUSTER_NODE_VM_ACTIVE_RETRY_COUNT:-12}
# Public IP/Port Settings
CUE_SERVICE_PROTOCOL=${CUE_SERVICE_PROTOCOL:-$SERVICE_PROTOCOL}
CUE_SERVICE_HOST=${CUE_SERVICE_HOST:-$SERVICE_HOST}
CUE_SERVICE_PORT=${CUE_SERVICE_PORT:-8795}
CUE_DEFAULT_BROKER_NAME=${CUE_DEFAULT_BROKER_NAME:-rabbitmq}
CUE_MANAGEMENT_KEY='cue-mgmt-key'
CUE_RABBIT_SECURITY_GROUP='cue-rabbitmq'
CUE_RABBIT_IMAGE_MINDISK=4
CUE_FLAVOR=cue.small
CUE_FLAVOR_PARAMS="--id 8795 --ram 512 --disk $CUE_RABBIT_IMAGE_MINDISK --vcpus 1"
CUE_RABBIT_SECURITY_GROUP='cue-rabbitmq'
CUE_MANAGEMENT_NETWORK_NAME='cue_management_net'
CUE_MANAGEMENT_SUBNET_NAME='cue_management_subnet'
CUE_RABBIT_IMAGE_ELEMENTS=${CUE_RABBIT_IMAGE_ELEMENTS:-\
vm ubuntu os-refresh-config os-apply-config ntp hosts \
ifmetric cue-rabbitmq-base}
# cleanup_cue - Remove residual data files, anything left over from previous
# runs that a clean run would need to clean up
function cleanup_cue {
sudo rm -rf $CUE_STATE_PATH $CUE_AUTH_CACHE_DIR
}
# configure_cue - Set config files, create data dirs, etc
function configure_cue {
[ ! -d $CUE_CONF_DIR ] && sudo mkdir -m 755 -p $CUE_CONF_DIR
sudo chown $STACK_USER $CUE_CONF_DIR
[ ! -d $CUE_LOG_DIR ] && sudo mkdir -m 755 -p $CUE_LOG_DIR
sudo chown $STACK_USER $CUE_LOG_DIR
# (Re)create ``cue.conf``
rm -f $CUE_CONF
iniset_rpc_backend cue $CUE_CONF DEFAULT
iniset $CUE_CONF DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
iniset $CUE_CONF DEFAULT verbose True
iniset $CUE_CONF DEFAULT state_path $CUE_STATE_PATH
iniset $CUE_CONF database connection `database_connection_url cue`
# Support db as a persistence backend
if [ "$CUE_TF_PERSISTENCE" == "db" ]; then
iniset $CUE_CONF taskflow persistence_connection `database_connection_url $CUE_TF_DB`
fi
# Set cluster node check timeouts
iniset $CUE_CONF taskflow cluster_node_check_timeout 30
iniset $CUE_CONF taskflow cluster_node_check_max_count 120
# Set flow create cluster node vm active retry count
iniset $CUE_CONF flow_options create_cluster_node_vm_active_retry_count $CUE_TF_CREATE_CLUSTER_NODE_VM_ACTIVE_RETRY_COUNT
iniset $CUE_CONF openstack os_auth_url $KEYSTONE_AUTH_PROTOCOL://$KEYSTONE_AUTH_HOST:$KEYSTONE_AUTH_PORT/v3
iniset $CUE_CONF openstack os_project_name admin
iniset $CUE_CONF openstack os_username admin
iniset $CUE_CONF openstack os_password $ADMIN_PASSWORD
iniset $CUE_CONF openstack os_project_domain_name default
iniset $CUE_CONF openstack os_user_domain_name default
iniset $CUE_CONF openstack os_auth_version 3
if [ "$SYSLOG" != "False" ]; then
iniset $CUE_CONF DEFAULT use_syslog True
fi
# Format logging
if [ "$LOG_COLOR" == "True" ] && [ "$SYSLOG" == "False" ]; then
setup_colorized_logging $CUE_CONF DEFAULT "tenant" "user"
fi
# Set some libraries' log level to INFO so that the log isn't overrun with useless DEBUG messages
iniset $CUE_CONF DEFAULT default_log_levels "kazoo.client=INFO,stevedore=INFO"
if is_service_enabled key; then
# Setup the Keystone Integration
iniset $CUE_CONF service:api auth_strategy keystone
configure_auth_token_middleware $CUE_CONF cue $CUE_AUTH_CACHE_DIR
fi
iniset $CUE_CONF service:api api_host $CUE_SERVICE_HOST
iniset $CUE_CONF service:api api_base_uri $CUE_SERVICE_PROTOCOL://$CUE_SERVICE_HOST:$CUE_SERVICE_PORT/
if is_service_enabled tls-proxy; then
# Set the service port for a proxy to take the original
iniset $CUE_CONF service:api api_port $CUE_SERVICE_PORT_INT
else
iniset $CUE_CONF service:api api_port $CUE_SERVICE_PORT
fi
# Install the policy file for the API server
cp $CUE_DIR/etc/cue/policy.json $CUE_CONF_DIR/policy.json
iniset $CUE_CONF DEFAULT policy_file $CUE_CONF_DIR/policy.json
}
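The `iniset` calls in `configure_cue` are devstack helpers that build up an INI-style `cue.conf` key by key. The end result is roughly a file of this shape (section names follow the calls above, but the concrete values below are illustrative, not the exact generated output):

```shell
# Approximate shape of the cue.conf that configure_cue generates
# (values are examples only, not authoritative).
cat <<'EOF'
[DEFAULT]
debug = True
state_path = /opt/stack/data/cue

[database]
connection = mysql+pymysql://root:secret@127.0.0.1/cue

[service:api]
auth_strategy = keystone
api_port = 8795
EOF
```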
# create_cue_accounts - Set up common required cue accounts
# Tenant User Roles
# ------------------------------------------------------------------
# service cue admin # if enabled
function create_cue_accounts {
local admin_role=$(openstack role list | awk "/ admin / { print \$2 }")
if [[ "$ENABLED_SERVICES" =~ "cue-api" ]]; then
local cue_user=$(get_or_create_user "cue" \
"$SERVICE_PASSWORD" "default")
get_or_add_user_project_role $admin_role $cue_user $SERVICE_TENANT_NAME
if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
local cue_service=$(get_or_create_service "cue" \
"message-broker" "Message Broker Provisioning Service")
get_or_create_endpoint $cue_service \
"$REGION_NAME" \
"$CUE_SERVICE_PROTOCOL://$CUE_SERVICE_HOST:$CUE_SERVICE_PORT/" \
"$CUE_SERVICE_PROTOCOL://$CUE_SERVICE_HOST:$CUE_SERVICE_PORT/" \
"$CUE_SERVICE_PROTOCOL://$CUE_SERVICE_HOST:$CUE_SERVICE_PORT/"
fi
fi
}
function create_cue_initial_resources {
#ADMIN_TENANT_ID=$(keystone tenant-list | grep " admin " | get_field 1)
echo "Creating initial resources."
}
# init_cue - Initialize etc.
function init_cue {
# Create cache dir
sudo mkdir -p $CUE_AUTH_CACHE_DIR
sudo chown $STACK_USER $CUE_AUTH_CACHE_DIR
rm -f $CUE_AUTH_CACHE_DIR/*
# (Re)create cue database
recreate_database cue utf8
# Init and migrate cue database
cue-manage --config-file $CUE_CONF database upgrade
# Init and migrate cue pool-manager-cache
if [ "$CUE_TF_PERSISTENCE" == "db" ]; then
recreate_database $CUE_TF_DB utf8
cue-manage --config-file $CUE_CONF taskflow upgrade
fi
NEUTRON_OS_URL="${Q_PROTOCOL}://$Q_HOST:$Q_PORT"
OPENSTACK_CMD="openstack"
NEUTRON_CMD="neutron"
# Create cue specific flavor if one does not exist
if [[ -z $($OPENSTACK_CMD flavor list | grep $CUE_FLAVOR) ]]; then
$OPENSTACK_CMD flavor create $CUE_FLAVOR_PARAMS $CUE_FLAVOR
fi
# Set os_security_group
if [[ -z $($OPENSTACK_CMD security group list | grep $CUE_RABBIT_SECURITY_GROUP) ]]; then
$OPENSTACK_CMD security group create --description "Cue RabbitMQ broker security group" $CUE_RABBIT_SECURITY_GROUP
$OPENSTACK_CMD security group rule create --src-ip 0.0.0.0/0 --proto tcp --dst-port 5672:5672 $CUE_RABBIT_SECURITY_GROUP
$OPENSTACK_CMD security group rule create --src-ip 0.0.0.0/0 --proto tcp --dst-port 4369:4369 $CUE_RABBIT_SECURITY_GROUP
$OPENSTACK_CMD security group rule create --src-ip 0.0.0.0/0 --proto tcp --dst-port 61000:61000 $CUE_RABBIT_SECURITY_GROUP
$OPENSTACK_CMD security group rule create --src-ip 0.0.0.0/0 --proto tcp --dst-port 15672:15672 $CUE_RABBIT_SECURITY_GROUP
fi
CUE_RABBIT_SECURITY_GROUP_ID=$($OPENSTACK_CMD security group list | grep $CUE_RABBIT_SECURITY_GROUP | tr -d ' ' | cut -f 2 -d '|')
if [ $CUE_RABBIT_SECURITY_GROUP_ID ]; then
iniset $CUE_CONF DEFAULT os_security_group $CUE_RABBIT_SECURITY_GROUP_ID
fi
# Set VM management key
if [ $CUE_MANAGEMENT_KEY ]; then
iniset $CUE_CONF openstack os_key_name $CUE_MANAGEMENT_KEY
fi
# Create cue management-network
if [[ -z $($NEUTRON_CMD net-list | grep $CUE_MANAGEMENT_NETWORK_NAME) ]]; then
$NEUTRON_CMD net-create $CUE_MANAGEMENT_NETWORK_NAME --provider:network-type local
CUE_MANAGEMENT_SUBNET_ROUTER_IP="$(echo $CUE_MANAGEMENT_NETWORK_SUBNET | cut -f 1-3 -d '.').1"
$NEUTRON_CMD subnet-create $CUE_MANAGEMENT_NETWORK_NAME $CUE_MANAGEMENT_NETWORK_SUBNET --name $CUE_MANAGEMENT_SUBNET_NAME --host-route destination=$FLOATING_RANGE,nexthop=$CUE_MANAGEMENT_SUBNET_ROUTER_IP
$NEUTRON_CMD router-interface-add $Q_ROUTER_NAME $CUE_MANAGEMENT_SUBNET_NAME
fi
# Configure host route to management-network
CUE_MANAGEMENT_SUBNET_IP=$(echo $CUE_MANAGEMENT_NETWORK_SUBNET | cut -f 1 -d '/')
if [[ -z $(netstat -rn | grep $CUE_MANAGEMENT_SUBNET_IP ) ]]; then
if [[ ! -z $($NEUTRON_CMD router-show $Q_ROUTER_NAME 2>/dev/null) ]]; then
ROUTER_IP=$($NEUTRON_CMD router-show $Q_ROUTER_NAME | grep ip_address | cut -f 16 -d '"')
sudo route add -net $CUE_MANAGEMENT_NETWORK_SUBNET gw $ROUTER_IP
fi
fi
# Set management-network id
CUE_MANAGEMENT_NETWORK_ID=$($NEUTRON_CMD net-list | grep $CUE_MANAGEMENT_NETWORK_NAME | tr -d ' ' | cut -f 2 -d '|')
if [ $CUE_MANAGEMENT_NETWORK_ID ]; then
iniset $CUE_CONF DEFAULT management_network_id $CUE_MANAGEMENT_NETWORK_ID
fi
set_broker
configure_scenario_rally_tests
build_cue_rabbit_test_image
}
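`init_cue` extracts resource IDs from openstack-client table output several times with the same `grep | tr -d ' ' | cut -f 2 -d '|'` pipeline. A self-contained illustration on a fake table row (the ID and name below are made up):

```shell
row='| 3f2a9c | cue-rabbitmq | ACTIVE |'
# Strip all spaces, then take the second |-delimited field; the first
# field is the empty string before the leading "|".
id=$(echo "$row" | tr -d ' ' | cut -f 2 -d '|')
echo "$id"   # -> 3f2a9c
```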
# install_cue - Collect source and prepare
function install_cue {
git_clone $CUE_REPO $CUE_DIR $CUE_BRANCH
setup_develop $CUE_DIR
}
# install_cueclient - Collect source and prepare
function install_cueclient {
git_clone $CUECLIENT_REPO $CUECLIENT_DIR $CUECLIENT_BRANCH
setup_develop $CUECLIENT_DIR
}
# install_cuedashboard - Collect source and prepare
function install_cuedashboard {
if is_service_enabled horizon; then
git_clone $CUEDASHBOARD_REPO $CUEDASHBOARD_DIR $CUEDASHBOARD_BRANCH
setup_develop $CUEDASHBOARD_DIR
if ! [ -h $DEST/horizon/openstack_dashboard/local/enabled/_70_cue_panel_group.py ]; then
ln -s $DEST/cue-dashboard/_70_cue_panel_group.py $DEST/horizon/openstack_dashboard/local/enabled/_70_cue_panel_group.py
fi
if ! [ -h $DEST/horizon/openstack_dashboard/local/enabled/_71_cue_panel.py ]; then
ln -s $DEST/cue-dashboard/_71_cue_panel.py $DEST/horizon/openstack_dashboard/local/enabled/_71_cue_panel.py
fi
fi
}
# configure Cue Scenario Rally tests
function configure_scenario_rally_tests {
if ! [ -d $HOME/.rally/plugins ]; then
mkdir -p $HOME/.rally/plugins/cue_scenarios
SCENARIOS=$(find $DEST/cue/rally-jobs/plugins -type f -name "*.py")
for SCENARIO in $SCENARIOS
do
FILE_NAME=$(echo $SCENARIO | rev | cut -d/ -f1 | rev)
ln -s $SCENARIO $HOME/.rally/plugins/cue_scenarios/$FILE_NAME
done
fi
}
# start_cue - Start running processes, including screen
function start_cue {
run_process cue-api "$CUE_BIN_DIR/cue-api --config-file $CUE_CONF"
run_process cue-worker "$CUE_BIN_DIR/cue-worker --config-file $CUE_CONF"
run_process cue-monitor "$CUE_BIN_DIR/cue-monitor --config-file $CUE_CONF"
# Start proxies if enabled
if is_service_enabled cue-api && is_service_enabled tls-proxy; then
start_tls_proxy '*' $CUE_SERVICE_PORT $CUE_SERVICE_HOST $CUE_SERVICE_PORT_INT &
fi
if ! timeout $SERVICE_TIMEOUT sh -c "while ! wget --no-proxy -q -O- $CUE_SERVICE_PROTOCOL://$CUE_SERVICE_HOST:$CUE_SERVICE_PORT; do sleep 1; done"; then
die $LINENO "Cue did not start"
fi
}
# stop_cue - Stop running processes
function stop_cue {
# Kill the cue screen windows
stop_process cue-api
}
# build_cue_rabbit_test_image() - Build and upload functional test image
function build_cue_rabbit_test_image {
if is_service_enabled dib; then
local image_name=cue-rabbitmq-test-image
# Elements path for tripleo-image-elements and cue-image-elements
local elements_path=$TIE_DIR/elements:$CUE_DIR/contrib/image-elements
disk_image_create_upload "$image_name" "$CUE_RABBIT_IMAGE_ELEMENTS" "$elements_path"
# Set image_id
RABBIT_IMAGE_ID=$($OPENSTACK_CMD image list | grep $image_name | tr -d ' ' | cut -f 2 -d '|')
if [ "$RABBIT_IMAGE_ID" ]; then
cue-manage --config-file $CUE_CONF broker add_metadata $BROKER_ID --image $RABBIT_IMAGE_ID
fi
else
echo "Error, Building RabbitMQ Image requires dib" >&2
echo "Add \"enable_service dib\" to your localrc" >&2
exit 1
fi
}
# set_broker - Set default broker
function set_broker {
cue-manage --config-file $CUE_CONF broker add $CUE_DEFAULT_BROKER_NAME true
BROKER_ID=$(cue-manage --config-file $CUE_CONF broker list | grep $CUE_DEFAULT_BROKER_NAME | tr -d ' ' | cut -f 2 -d '|')
}
# Restore xtrace
$XTRACE


@@ -1,123 +0,0 @@
#!/bin/bash
#
# lib/dib
# Install and build images with **diskimage-builder**
# Dependencies:
#
# - functions
# - DEST, DATA_DIR must be defined
# stack.sh
# ---------
# - install_dib
# Save trace setting
XTRACE=$(set +o | grep xtrace)
set +o xtrace
# Defaults
# --------
# set up default directories
DIB_DIR=$DEST/diskimage-builder
TIE_DIR=$DEST/tripleo-image-elements
# NOTE: Setting DIB_APT_SOURCES assumes you will be building
# Debian/Ubuntu based images. Leave unset for other flavors.
DIB_APT_SOURCES=${DIB_APT_SOURCES:-""}
DIB_BUILD_OFFLINE=$(trueorfalse False DIB_BUILD_OFFLINE)
DIB_IMAGE_CACHE=$DATA_DIR/diskimage-builder/image-create
DIB_PIP_REPO=$DATA_DIR/diskimage-builder/pip-repo
DIB_PIP_REPO_PORT=${DIB_PIP_REPO_PORT:-8899}
OCC_DIR=$DEST/os-collect-config
ORC_DIR=$DEST/os-refresh-config
OAC_DIR=$DEST/os-apply-config
# Tripleo elements for diskimage-builder images
TIE_REPO=${TIE_REPO:-${GIT_BASE}/openstack/tripleo-image-elements.git}
TIE_BRANCH=${TIE_BRANCH:-master}
# QEMU Image Options
DIB_QEMU_IMG_OPTIONS='compat=0.10'
# Functions
# ---------
# install_dib() - Collect source and prepare
function install_dib {
pip_install diskimage-builder
git_clone $TIE_REPO $TIE_DIR $TIE_BRANCH
git_clone $OCC_REPO $OCC_DIR $OCC_BRANCH
git_clone $ORC_REPO $ORC_DIR $ORC_BRANCH
git_clone $OAC_REPO $OAC_DIR $OAC_BRANCH
mkdir -p $DIB_IMAGE_CACHE
}
# disk_image_create_upload() - Creates and uploads a diskimage-builder built image
function disk_image_create_upload {
local image_name=$1
local image_elements=$2
local elements_path=$3
local image_path=$TOP_DIR/files/$image_name.qcow2
# Include the apt-sources element in builds if we have an
# alternative sources.list specified.
if [ -n "$DIB_APT_SOURCES" ]; then
if [ ! -e "$DIB_APT_SOURCES" ]; then
die $LINENO "DIB_APT_SOURCES set but not found at $DIB_APT_SOURCES"
fi
local extra_elements="apt-sources"
fi
# Set the local pip repo as the primary index mirror so the
# image is built with local packages
local pypi_mirror_url=http://$SERVICE_HOST:$DIB_PIP_REPO_PORT/
local pypi_mirror_url_1
if [ -a $HOME/.pip/pip.conf ]; then
# Add the current pip.conf index-url as an extra-index-url
# in the image build
pypi_mirror_url_1=$(iniget $HOME/.pip/pip.conf global index-url)
else
# If no pip.conf, set upstream pypi as an extra mirror
# (this also sets the .pydistutils.cfg index-url)
pypi_mirror_url_1=http://pypi.python.org/simple
fi
QEMU_IMG_OPTION=""
if [ ! -z "${DIB_QEMU_IMG_OPTIONS}" ]; then
QEMU_IMG_OPTION="--qemu-img-options ${DIB_QEMU_IMG_OPTIONS}"
fi
# The disk-image-create command to run
ELEMENTS_PATH=$elements_path \
DIB_APT_SOURCES=$DIB_APT_SOURCES \
DIB_OFFLINE=$DIB_BUILD_OFFLINE \
PYPI_MIRROR_URL=$pypi_mirror_url \
PYPI_MIRROR_URL_1=$pypi_mirror_url_1 \
disk-image-create -a amd64 $image_elements ${extra_elements:-} \
--image-cache $DIB_IMAGE_CACHE \
${QEMU_IMG_OPTION} \
-o $image_path
local token=$(openstack token issue | grep ' id ' | get_field 2)
die_if_not_set $LINENO token "Keystone failed to get token"
glance --os-auth-token $token --os-image-url http://$GLANCE_HOSTPORT \
image-create --name $image_name --visibility public \
--container-format=bare --disk-format qcow2 \
< $image_path
}
# Restore xtrace
$XTRACE
# Tell emacs to use shell-script-mode
## Local variables:
## mode: shell-script
## End:
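`disk_image_create_upload` invokes `disk-image-create` with a stack of `VAR=value` prefixes; those assignments are exported to that one child process only and never leak into the calling shell. A minimal demonstration (the variable name is arbitrary):

```shell
# The prefix form exports the variable to the child command only.
DEMO_VAR=hello sh -c 'echo "$DEMO_VAR"'   # -> hello
echo "${DEMO_VAR:-unset}"                 # -> unset
```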


@@ -1,72 +0,0 @@
#
# Default ${DEVSTACK_DIR}/local.conf file for Cue
#
[[local|localrc]]
# Default passwords
ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=password
# Enable Logging
LOGFILE=/opt/stack/logs/stack.sh.log
VERBOSE=True
LOG_COLOR=True
SCREEN_LOGDIR=/opt/stack/logs
# Disable global requirements checks
REQUIREMENTS_MODE=soft
# Set loopback volume size
VOLUME_BACKING_FILE_SIZE=15G
# Enable novnc
enable_service n-novnc
#
# Enable Neutron
# https://wiki.openstack.org/wiki/NeutronDevstack
#
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
# Neutron Configuration
FLOATING_RANGE=192.168.15.0/27
FLAT_INTERFACE=eth0
Q_FLOATING_ALLOCATION_POOL=start=192.168.15.10,end=192.168.15.30
PUBLIC_NETWORK_GATEWAY=192.168.15.1
# Enable Swift
enable_service s-proxy
enable_service s-object
enable_service s-container
enable_service s-account
# Swift Configuration
SWIFT_HASH=12go358snjw24501
# Enable Diskimage-builder
enable_service dib
# Enable Zookeeper
enable_service zookeeper
# Enable Cue
enable_service cue
enable_service cue-api
enable_service cue-worker
enable_service cue-monitor
CUE_MANAGEMENT_KEY=cue-mgmt-key
# Rally auth version
RALLY_AUTH_VERSION=v3

@ -1,72 +0,0 @@
#!/bin/bash
set -o xtrace
TOP_DIR=$(cd $(dirname "$0") && pwd)
source $TOP_DIR/functions
source $TOP_DIR/stackrc
source $TOP_DIR/lib/cue
DEST=${DEST:-/opt/stack}
IDENTITY_API_VERSION=3 source $TOP_DIR/openrc admin admin
IPTABLES_RULE='iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE'
# Create NAT rule to allow VMs to NAT to host IP
if [[ -z $(sudo iptables -t nat -L | grep MASQUERADE | tr -d ' ' | grep anywhereanywhere) ]]; then
sudo $IPTABLES_RULE
fi
# Make VM NAT rule persistent
# TODO(sputnik13): this should ideally be somewhere other than /etc/rc.local
if [[ -z $(grep "$IPTABLES_RULE" /etc/rc.local) ]]; then
sudo sed -i -e "s/^exit 0/$IPTABLES_RULE\nexit 0/" /etc/rc.local
fi
if [[ ! -x /etc/rc.local ]]; then
sudo chmod +x /etc/rc.local
fi
# Generate an ssh keypair to add to devstack
if [[ ! -f ~/.ssh/id_rsa ]]; then
ssh-keygen -q -t rsa -N "" -f ~/.ssh/id_rsa
# copying key to /tmp so that tests can access it
cp ~/.ssh/id_rsa /tmp/cue-mgmt-key
chmod 644 /tmp/cue-mgmt-key
fi
if [[ -z $CUE_MANAGEMENT_KEY ]]; then
CUE_MANAGEMENT_KEY='vagrant'
fi
# Add ssh keypair to admin account
if [[ -z $(openstack keypair list | grep $CUE_MANAGEMENT_KEY) ]]; then
openstack keypair create --public-key ~/.ssh/id_rsa.pub $CUE_MANAGEMENT_KEY
fi
# Add ping and ssh rules to rabbitmq security group
neutron security-group-rule-create --direction ingress --protocol icmp --remote-ip-prefix 0.0.0.0/0 $CUE_RABBIT_SECURITY_GROUP
neutron security-group-rule-create --direction ingress --protocol tcp --port-range-min 22 --port-range-max 22 --remote-ip-prefix 0.0.0.0/0 $CUE_RABBIT_SECURITY_GROUP
# Add static nameserver to private-subnet
neutron subnet-update --dns-nameserver 8.8.8.8 private-subnet
unset OS_PROJECT_DOMAIN_ID
unset OS_REGION_NAME
unset OS_USER_DOMAIN_ID
unset OS_IDENTITY_API_VERSION
unset OS_PASSWORD
unset OS_AUTH_URL
unset OS_USERNAME
unset OS_PROJECT_NAME
unset OS_TENANT_NAME
unset OS_VOLUME_API_VERSION
unset COMPUTE_API_VERSION
unset OS_NO_CACHE
# Add ssh keypair to demo account
IDENTITY_API_VERSION=3 source $TOP_DIR/openrc demo demo
if [[ -z $(openstack keypair list | grep $CUE_MANAGEMENT_KEY) ]]; then
openstack keypair create --public-key ~/.ssh/id_rsa.pub $CUE_MANAGEMENT_KEY
fi

@ -1,113 +0,0 @@
RabbitMQ disk images for the Cue service
========================================
These elements are used to build disk images for the Cue service.
# Notes on building disk images
Building images involves using the Tripleo `diskimage-builder` tools that are found in
the GitHub repository given below.
Note that recent changes to this package mean that before the `diskimage-builder` tools
can be used it is necessary to install `dib-utils` as shown below in order to satisfy
all dependencies. The modified `PATH` definition should be included in `.profile`, or
somewhere similarly appropriate.
```
$ git clone https://github.com/openstack/diskimage-builder
$ export PATH=$HOME/diskimage-builder/bin:$PATH
$ pip install dib-utils
```
In addition (and in accordance with the instructions provided for the `diskimage-builder`
package) it is also necessary to install the `qemu-utils` and `kpartx` packages:
```
$ sudo apt-get install qemu-utils
$ sudo apt-get install kpartx
```
It should now be possible to execute commands such as the following to create disk images.
```
$ disk-image-create -a amd64 -o ubuntu-amd64 vm ubuntu
```
The next step is to fold in our Cue-specific image elements (the elements found here). This
is straightforward, and basically just involves defining `ELEMENTS_PATH` to include the
locations of all applicable elements as a colon-separated list. But first, we need to be
aware that Cue images are going to require some elements from Tripleo (namely `iptables` and
`sysctl`), so before getting too carried away, we need to clone the repository containing
these elements:
```
$ git clone https://github.com/openstack/tripleo-image-elements
```
Now, assuming that we have our Cue-specific elements in `./cue-image-elements/elements`, we
can define `ELEMENT_PATH` as follows, and then try building an image:
```
$ export ELEMENTS_PATH=$HOME/cue/cue-image-elements/elements:$HOME/tripleo-image-elements/elements
$ disk-image-create -a amd64 -o ubuntu-amd64-brc-rabbit vm ubuntu cue-rabbitmq-plugins
```
Change the base image (in this case Ubuntu) and other parameters as appropriate. Assuming that all is well, the above command sequence
will result in the creation of an image named `ubuntu-amd64-brc-rabbit.qcow2`, which can then be loaded into glance and tested.
# What is currently in the Cue service RabbitMQ image
The intention is to keep the RabbitMQ disk image for Cue relatively simple. The image will provide little more than a basic installation of
RabbitMQ with the Keystone and management plugins enabled; however, the initial `rabbitmq.config` will not specify the use of the Keystone
plugin for authentication. After the disk image is booted and RabbitMQ started, the Cue service will be expected to perform the necessary
sequence of operations to construct a cluster (if more than one node) and activate Keystone-based authentication.
## Some point(s) to note
- The image includes a fairly basic `rabbitmq.config` that should be retained until after the cluster has been created. Once the cluster has
been created and verified, this initial `rabbitmq.config` should be replaced by the Cue service using the template configuration file
`rabbitmq.config.cue-template` (both files are to be found in `/etc/rabbitmq`), populating it with the desired Keystone endpoint. Additional
notes on this matter can be found below.
- For testing purposes, the `rabbitmq.config` currently included in the image enables `guest` logon (`{loopback_users,[]}`). This should be
disabled before generating any production images!
- Two targets are provided in the `elements` directory, namely `cue-rabbitmq-base` and `cue-rabbitmq-plugins`. The former can be used to
create an image that includes a bare-bones vanilla RabbitMQ installation with no plugins enabled. The latter depends on (inherits) `cue-rabbitmq-base`
and can be used to create images with the management and Keystone authentication plugins enabled.
# Notes about what the Cue service needs to do
Once the Cue service is satisfied that all nodes have successfully booted and RabbitMQ is available, the service should perform the following
general sequence of events to cluster the nodes (if necessary) and activate Keystone-based authentication.
- Update `/etc/hosts` on all nodes to include the IP addresses of the cluster nodes
- Create a cookie file (`/var/lib/rabbitmq/.erlang.cookie`) on each node (using the same cookie). A reasonable choice for the cookie string
might be the UUID generated by Cue to uniquely identify the cluster. Ensure that the cookie file has the correct permissions and owner.
```
$ sudo chmod 400 /var/lib/rabbitmq/.erlang.cookie
$ sudo chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
```
Once the above two steps have been performed, it is possible to construct the cluster.
- On all but one of the cluster nodes, issue the following commands (replacing `your-hostname` with the hostname of the node being joined):
```
$ sudo rabbitmqctl stop_app
$ sudo rabbitmqctl reset
$ sudo rabbitmqctl join_cluster rabbit@your-hostname
$ sudo rabbitmqctl start_app
$ sudo rabbitmqctl cluster_status # to check the status of the cluster
```
- Once the cluster has formed, replace the management plugin with the version patched for Cue and Keystone, and replace the existing
`rabbitmq.config` file using the template configuration file (replace `X.Y.Z` with the relevant RabbitMQ version number, and replace the
Keystone endpoint as appropriate):
```
$ sudo cp /usr/lib/rabbitmq/lib/rabbitmq_server-X.Y.Z/plugins/rabbitmq_management-X.Y.Z.ez.cue /usr/lib/rabbitmq/lib/rabbitmq_server-X.Y.Z/plugins/rabbitmq_management-X.Y.Z.ez
$ sed 's/##keystone_url##/https:\/\/region-a.geo-1.identity.hpcloudsvc.com:35357\/v3\/auth\/tokens/' /etc/rabbitmq/rabbitmq.config.cue-template > /etc/rabbitmq/rabbitmq.config
```
- Systematically restart each cluster node, waiting until the node comes back up before restarting the next node.
- Finally, create the user (using the user's Keystone username) and grant them appropriate permissions (replacing `keystone-username` with the relevant username). For good measure, also delete the `guest` user:
```
$ sudo rabbitmqctl add_user keystone-username nopassword
$ sudo rabbitmqctl set_permissions -p / keystone-username ".*" ".*" ".*"
$ sudo rabbitmqctl set_user_tags keystone-username administrator
$ sudo rabbitmqctl delete_user guest
```
The user can now be informed that the cluster is ready for use.
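The first step above (updating `/etc/hosts` on every node) can be sketched as a small standalone helper; the node names and addresses below are purely illustrative:

```shell
# Hypothetical sketch: render the /etc/hosts lines for a three-node
# cluster from parallel address/hostname lists (values are illustrative).
NODE_IPS="10.0.0.11 10.0.0.12 10.0.0.13"
NODE_NAMES="rabbit-0 rabbit-1 rabbit-2"
HOSTS_BLOCK=""
set -- $NODE_NAMES
for ip in $NODE_IPS; do
    HOSTS_BLOCK="${HOSTS_BLOCK}${ip} ${1}
"
    shift
done
# On a real node this block would be appended to /etc/hosts.
printf '%s' "$HOSTS_BLOCK"
```

Running it prints one `address hostname` pair per node, ready to be appended to each node's hosts file.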

@ -1 +0,0 @@
Install RabbitMQ VM as part of MSGaaS single-tenant cluster.

@ -1 +0,0 @@
package-installs

@ -1,8 +0,0 @@
ntp:
phase: install.d
python-pip:
phase: install.d
rabbitmq-server:
phase: install.d

@ -1,43 +0,0 @@
#!/bin/bash
set -eux
FILES="$(dirname $0)/../files"
if [ "$DISTRO_NAME" = "ubuntu" ] || [ "$DISTRO_NAME" = "debian" ]; then
# Prevent rabbitmq-server from starting automatically
update-rc.d -f rabbitmq-server disable
fi
if [ "$DIB_INIT_SYSTEM" = "systemd" ]; then
# Delay the rc-local.service start-up until rabbitmq-server.service is started up
sed -i 's/\[Unit\]/\[Unit\]\nBefore=rc-local.service/g' /lib/systemd/system/rabbitmq-server.service
# Respawn rabbitmq-server in case the process exits with an nonzero exit code
sed -i 's/\[Service\]/\[Service\]\nRestart=on-failure/g' /lib/systemd/system/rabbitmq-server.service
fi
# Enable ulimits in pam if needed
PAM_FILE=/etc/pam.d/su
sed -i '/# session.*pam_limits\.so/s/# //' ${PAM_FILE}
# Reserve the cluster port (61000) from the ephemeral port range.
EXISTING_RESERVED_PORTS=$(grep -r net.ipv4.ip_local_reserved_ports /etc/sysctl.conf /etc/sysctl.d 2> /dev/null | cut -d'=' -f2)
RESERVED_PORTS=61000
if ! [ -z "$EXISTING_RESERVED_PORTS" ]; then
# create one port reservation list
for port in $EXISTING_RESERVED_PORTS; do
RESERVED_PORTS=$RESERVED_PORTS,$port
done
# find files with port reservation settings
RESERVATION_FILE_LIST=$(grep -r net.ipv4.ip_local_reserved_ports /etc/sysctl.conf /etc/sysctl.d 2> /dev/null | cut -d':' -f1 | sort | uniq)
# comment out existing port reservation lines
for file in $RESERVATION_FILE_LIST; do
sed -i -e 's/\(^net.ipv4.ip_local_reserved_ports=.*\)/#\1/' $file
done
fi
# add port reservation, persisted so sysctl applies it at boot
# (the drop-in file name here is an assumed reasonable default)
echo "net.ipv4.ip_local_reserved_ports=${RESERVED_PORTS}" > /etc/sysctl.d/80-reserved-ports.conf
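The list-merging step in this script can be exercised on its own; the pre-existing port values below are illustrative:

```shell
# Standalone sketch of the reserved-port merge: fold any pre-existing
# reservations into one comma-separated list headed by the cluster port.
EXISTING_RESERVED_PORTS="32768 49152"
RESERVED_PORTS=61000
for port in $EXISTING_RESERVED_PORTS; do
    RESERVED_PORTS=$RESERVED_PORTS,$port
done
echo "net.ipv4.ip_local_reserved_ports=${RESERVED_PORTS}"
```

With the sample values above, the resulting setting is `net.ipv4.ip_local_reserved_ports=61000,32768,49152`.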

@ -1,64 +0,0 @@
#!/bin/bash
set -eux
pip install pika
cat > /opt/rabbitmq_test.py << EOF
import argparse
import time
import pika
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument("-H", "--host", required=True,
help="Specify the RabbitMQ host")
parser.add_argument("-R", "--receive",
help="Specify the RabbitMQ host to receive message")
parser.add_argument("-P", "--port", required=True,
help="Specify the RabbitMQ port",
type=int)
parser.add_argument("-u", "--user", required=True,
help="Specify the RabbitMQ username")
parser.add_argument("-p", "--password", required=True,
help="Specify the RabbitMQ password")
parser.add_argument("--ssl", dest="ssl", action="store_true",
help="Specify whether to use AMQPS protocol")
args = parser.parse_args()
host = args.host
credentials = pika.PlainCredentials(args.user, args.password)
connection = pika.BlockingConnection(pika.ConnectionParameters(
credentials=credentials, host=host, port=args.port, ssl=args.ssl))
channel = connection.channel()
channel.queue_declare(queue='hello')
if args.receive:
connection_receive = pika.BlockingConnection(pika.ConnectionParameters(
credentials=credentials, host=args.receive, port=args.port,
ssl=args.ssl))
channel_receive = connection_receive.channel()
channel_receive.queue_declare(queue='hello')
else:
channel_receive = channel
for count in range(1, 10, 1):
print("Sending...")
channel.basic_publish(exchange='', routing_key='hello',
body='Hello World!' + str(count))
print(" [x] Sent 'Hello World!'" + str(count))
print("Receiving...")
method_frame, header_frame, body = channel_receive.basic_get('hello')
if method_frame:
print(method_frame, header_frame, body)
channel_receive.basic_ack(method_frame.delivery_tag)
else:
print('No message returned')
time.sleep(1)
connection.close()
EOF
chmod 777 /opt/rabbitmq_test.py

@ -1,7 +0,0 @@
#!/bin/bash
set -eux
echo 'deb http://www.rabbitmq.com/debian/ testing main' > /etc/apt/sources.list.d/rabbitmq.list
wget -O- https://www.rabbitmq.com/rabbitmq-release-signing-key.asc |
sudo apt-key add -

@ -1,35 +0,0 @@
#!/bin/bash
set -eux
install-packages ifmetric
cat <<EOF > /etc/network/interfaces
# The loopback network interface
auto lo
iface lo inet loopback
source /etc/network/interfaces.d/*
EOF
mkdir -p /etc/network/interfaces.d
rm -f /etc/network/interfaces.d/*
cat <<EOF > /etc/network/interfaces.d/eth0
# The primary network interface
allow-hotplug eth0
iface eth0 inet dhcp
metric 0
EOF
cat <<EOF > /etc/network/interfaces.d/eth1
allow-hotplug eth1
iface eth1 inet dhcp
metric 1
EOF
cat <<EOF > /etc/network/interfaces.d/eth2
allow-hotplug eth2
iface eth2 inet dhcp
metric 2
EOF

@ -1,63 +0,0 @@
#!/bin/bash
set -eux
# Path Settings
export CUE_HOME=$(readlink -e $(dirname $(readlink -f $0))/../..)
# BUILD_DIR Directory where builds will be performed and images will be left
export BUILD_DIR=${BUILD_DIR:-$CUE_HOME/build}
# DIB Output Image Type
export IMAGE_TYPE=${IMAGE_TYPE:-qcow2}
# Image Name
BUILD_FILE="rabbitmq-cue-image.qcow2"
# Common elements we'll use in all builds
COMMON_ELEMENTS=${COMMON_ELEMENTS:-"vm ubuntu"}
# Common Settings for all msgaas images builds
SIZE="2"
ELEMENTS="cue-rabbitmq-base ifmetric"
ELEMENTS_PATH="$CUE_HOME/contrib/image-elements"
# QEMU Image options
QEMU_IMG_OPTIONS='compat=0.10'
# Install some required apt packages if needed
if ! [ -e /usr/sbin/debootstrap -a -e /usr/bin/qemu-img ]; then
sudo apt-get update
sudo apt-get install --yes debootstrap qemu-utils git python-virtualenv uuid-runtime curl wget parted kpartx
fi
if [ ! -d $BUILD_DIR/diskimage-builder ]; then
echo "---> Cloning diskimage-builder"
git clone https://git.openstack.org/openstack/diskimage-builder $BUILD_DIR/diskimage-builder
fi
# Setup the elements path
export ELEMENTS_PATH="$ELEMENTS_PATH:$BUILD_DIR/diskimage-builder/elements"
# Prepare the build directory
if [ ! -d $BUILD_DIR/dist ]; then
mkdir $BUILD_DIR/dist
fi
# Complete QEMU_IMG_OPTIONS
if [ ! -z "${QEMU_IMG_OPTIONS}" ]; then
QEMU_IMG_OPTIONS="--qemu-img-options ${QEMU_IMG_OPTIONS}"
fi
# Prepare venv for diskimage-builder
virtualenv $BUILD_DIR/diskimage-builder/.venv
# Build the image
( set +u; . "$BUILD_DIR/diskimage-builder/.venv/bin/activate"; set -u;
pushd $BUILD_DIR/diskimage-builder
pip install -r requirements.txt
python setup.py install
popd
disk-image-create -a amd64 -o $BUILD_DIR/dist/$BUILD_FILE --image-size $SIZE $QEMU_IMG_OPTIONS $COMMON_ELEMENTS $ELEMENTS
)

@ -1,190 +0,0 @@
#!/bin/bash
while test $# -gt 0; do
case "$1" in
-h|--help)
echo "Single VM Cue Installer"
echo " "
echo "--h show brief help"
echo "Required parameters:"
echo "--image IMAGE_ID specify Nova image id to use"
echo "--flavor FLAVOR_ID specify a Nova flavor id to use"
echo "--cue-management-nic CUE_MANAGEMENT_NIC specify management network interface for cue"
echo "--cue-image CUE_IMAGE_ID specify a Nova image id for Cue cluster VMs"
echo "Optional parameters:"
echo "--security-groups SECURITY_GROUPS specify security group"
echo "--key-name KEY_NAME specify key-name to forward"
echo "--nic NIC a network to attach Cue VM on"
echo "--mysql-root-password MYSQL_ROOT_PASSWORD specify root password for MySql Server"
echo "--mysql-cueapi-password MYSQL_CUEAPI_PASSWORD specify cue api user password for MySql Server"
echo "--mysql-cueworker-password MYSQL_CUEWORKER_PASSWORD specify cue worker user password for MySql Server"
exit 0
;;
--image)
shift
if test $# -gt 0; then
export IMAGE_ID=$1
fi
shift
;;
--flavor)
shift
if test $# -gt 0; then
export FLAVOR_ID=$1
fi
shift
;;
--cue-management-nic)
shift
if test $# -gt 0; then
export CUE_MANAGEMENT_NIC=$1
fi
shift
;;
--cue-image)
shift
if test $# -gt 0; then
export CUE_IMAGE_ID=$1
fi
shift
;;
--security-groups)
shift
if test $# -gt 0; then
export SECURITY_GROUPS=$1
fi
shift
;;
--cue-security-group)
shift
if test $# -gt 0; then
export CUE_SECURITY_GROUP=$1
fi
shift
;;
--key-name)
shift
if test $# -gt 0; then
export KEY_NAME=$1
fi
shift
;;
--os-key-name)
shift
if test $# -gt 0; then
export OS_KEY_NAME=$1
fi
shift
;;
--nic)
shift
if test $# -gt 0; then
export NIC=$1
fi
shift
;;
--mysql-root-password)
shift
if test $# -gt 0; then
export MYSQL_ROOT_PASSWORD=$1
fi
shift
;;
--mysql-cueapi-password)
shift
if test $# -gt 0; then
export MYSQL_CUEAPI_PASSWORD=$1
fi
shift
;;
--mysql-cueworker-password)
shift
if test $# -gt 0; then
export MYSQL_CUEWORKER_PASSWORD=$1
fi
shift
;;
--floating-ip)
shift
if test $# -gt 0; then
export FLOATING_IP=$1
fi
shift
;;
*)
break
;;
esac
done
# verify required and optional input arguments
if [ -z ${IMAGE_ID} ] || [ -z ${FLAVOR_ID} ] || [ -z ${CUE_IMAGE_ID} ] || [ -z ${CUE_MANAGEMENT_NIC} ]; then
echo "IMAGE_ID, FLAVOR_ID, CUE_IMAGE_ID and CUE_MANAGEMENT_NIC must be provided"
exit 1
fi
if [ -z ${MYSQL_ROOT_PASSWORD} ]; then
MYSQL_ROOT_PASSWORD="password"
fi
if [ -z ${MYSQL_CUEAPI_PASSWORD} ]; then
MYSQL_CUEAPI_PASSWORD="cuepassword"
fi
if [ -z ${MYSQL_CUEWORKER_PASSWORD} ]; then
MYSQL_CUEWORKER_PASSWORD="workerpassword"
fi
# set parameters required by mo to fill-in template file
export MYSQL_ROOT_PASSWORD
export MYSQL_CUEAPI_PASSWORD
export MYSQL_CUEWORKER_PASSWORD
# set working directory to script location
PROJECT_ROOT=$( cd $(dirname "$0") && pwd)
pushd ${PROJECT_ROOT}
# Configure user data script from template file
USERDATA_FILE=$(mktemp -t cue_install.XXXX)
chmod +x mo
cat user_data_template | ./mo > ${USERDATA_FILE}
# unset exported parameters from above
unset MYSQL_ROOT_PASSWORD
unset MYSQL_CUEAPI_PASSWORD
unset MYSQL_CUEWORKER_PASSWORD
# Compose Nova boot command string
NOVA_BOOT_BASE="nova boot"
VM_NAME="cue_host"
NOVA_BOOT_COMMAND="${NOVA_BOOT_BASE} --flavor ${FLAVOR_ID} --image ${IMAGE_ID}"
if [ ! -z ${SECURITY_GROUPS} ]; then
NOVA_BOOT_COMMAND="${NOVA_BOOT_COMMAND} --security-groups ${SECURITY_GROUPS}"
fi
if [ ! -z ${KEY_NAME} ]; then
NOVA_BOOT_COMMAND="${NOVA_BOOT_COMMAND} --key-name ${KEY_NAME}"
fi
OS_KEY_NAME=${OS_KEY_NAME:-$KEY_NAME}
if [ ! -z ${NIC} ]; then
NOVA_BOOT_COMMAND="${NOVA_BOOT_COMMAND} --nic net-id=${NIC}"
fi
if [ ! -z ${CUE_MANAGEMENT_NIC} ]; then
NOVA_BOOT_COMMAND="${NOVA_BOOT_COMMAND} --nic net-id=${CUE_MANAGEMENT_NIC}"
fi
NOVA_BOOT_COMMAND="${NOVA_BOOT_COMMAND} --user-data ${USERDATA_FILE} ${VM_NAME}"
eval ${NOVA_BOOT_COMMAND}
if [ ! -z ${FLOATING_IP} ]; then
echo "Waiting for cue_host VM to go ACTIVE..."
while [ -z "$(nova show $VM_NAME 2>/dev/null | egrep 'ACTIVE|ERROR')" ]; do
sleep 1
done
nova floating-ip-associate $VM_NAME ${FLOATING_IP}
fi
rm ${USERDATA_FILE}
popd
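The long-option loop at the top of this installer follows a standard `shift`-based pattern; a minimal self-contained sketch, fed with illustrative values instead of real command-line arguments:

```shell
# Minimal sketch of the shift-based option parsing used above.
set -- --image img-123 --flavor m1.small
while [ $# -gt 0 ]; do
    case "$1" in
        --image)
            shift
            if [ $# -gt 0 ]; then IMAGE_ID=$1; fi
            shift
            ;;
        --flavor)
            shift
            if [ $# -gt 0 ]; then FLAVOR_ID=$1; fi
            shift
            ;;
        *) break ;;
    esac
done
echo "image=$IMAGE_ID flavor=$FLAVOR_ID"
```

Each recognized flag consumes itself plus its value; anything unrecognized stops the loop, leaving the remaining arguments in `$@`.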

@ -1,47 +0,0 @@
#!/bin/bash
set -x
unset UCF_FORCE_CONFFOLD
export UCF_FORCE_CONFFNEW=YES
ucf --purge /boot/grub/menu.lst
export DEBIAN_FRONTEND=noninteractive
apt-get update
apt-get -o Dpkg::Options::="--force-confnew" --force-yes -fuy dist-upgrade
sudo apt-get -y install git
cd /home/ubuntu/
sudo -u ubuntu git clone https://git.openstack.org/openstack-dev/devstack
cat > devstack/local.conf<< EOF
[[local|localrc]]
HOST_IP=127.0.0.1
REQUIREMENTS_MODE=soft
ADMIN_PASSWORD=password
MYSQL_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=password
LOGFILE=/opt/stack/logs/stack.sh.log
VERBOSE=True
LOG_COLOR=True
SCREEN_LOGDIR=/opt/stack/logs
disable_service g-api
disable_service g-reg
disable_service n-api
disable_service n-crt
disable_service n-obj
disable_service n-cpu
disable_service n-net
disable_service n-cond
disable_service n-sch
disable_service n-novnc
disable_service n-xvnc
disable_service n-cauth
disable_service c-sch
disable_service c-api
disable_service c-vol
disable_service h-eng
disable_service h-api
disable_service h-api-cfn
disable_service h-api-cw
disable_service horizon
disable_service tempest
EOF
sudo -u ubuntu ./devstack/stack.sh

@ -1,700 +0,0 @@
#!/bin/bash
#
# Mo is a mustache template rendering software written in bash. It inserts
# environment variables into templates.
#
# Learn more about mustache templates at https://mustache.github.io/
#
# Mo is under a MIT style licence with an additional non-advertising clause.
# See LICENSE.md for the full text.
#
# This is open source! Please feel free to contribute.
#
# https://github.com/tests-always-included/mo
# Scan content until the right end tag is found. Returns an array with the
# following members:
# [0] = Content before end tag
# [1] = End tag (complete tag)
# [2] = Content after end tag
#
# Everything using this function uses the "standalone tags" logic.
#
# Parameters:
# $1: Where to store the array
# $2: Content
# $3: Name of end tag
# $4: If -z, do standalone tag processing before finishing
mustache-find-end-tag() {
local CONTENT SCANNED
# Find open tags
SCANNED=""
mustache-split CONTENT "$2" '{{' '}}'
while [[ "${#CONTENT[@]}" -gt 1 ]]; do
mustache-trim-whitespace TAG "${CONTENT[1]}"
# Restore CONTENT[1] before we start using it
CONTENT[1]='{{'"${CONTENT[1]}"'}}'
case $TAG in
'#'* | '^'*)
# Start another block
SCANNED="${SCANNED}${CONTENT[0]}${CONTENT[1]}"
mustache-trim-whitespace TAG "${TAG:1}"
mustache-find-end-tag CONTENT "${CONTENT[2]}" "$TAG" "loop"
SCANNED="${SCANNED}${CONTENT[0]}${CONTENT[1]}"
CONTENT=${CONTENT[2]}
;;
'/'*)
# End a block - could be ours
mustache-trim-whitespace TAG "${TAG:1}"
SCANNED="$SCANNED${CONTENT[0]}"
if [[ "$TAG" == "$3" ]]; then
# Found our end tag
if [[ -z "$4" ]] && mustache-is-standalone STANDALONE_BYTES "$SCANNED" "${CONTENT[2]}" true; then
# This is also a standalone tag - clean up whitespace
# and move those whitespace bytes to the "tag" element
STANDALONE_BYTES=( $STANDALONE_BYTES )
CONTENT[1]="${SCANNED:${STANDALONE_BYTES[0]}}${CONTENT[1]}${CONTENT[2]:0:${STANDALONE_BYTES[1]}}"
SCANNED="${SCANNED:0:${STANDALONE_BYTES[0]}}"
CONTENT[2]="${CONTENT[2]:${STANDALONE_BYTES[1]}}"
fi
local "$1" && mustache-indirect-array "$1" "$SCANNED" "${CONTENT[1]}" "${CONTENT[2]}"
return 0
fi
SCANNED="$SCANNED${CONTENT[1]}"
CONTENT=${CONTENT[2]}
;;
*)
# Ignore all other tags
SCANNED="${SCANNED}${CONTENT[0]}${CONTENT[1]}"
CONTENT=${CONTENT[2]}
;;
esac
mustache-split CONTENT "$CONTENT" '{{' '}}'
done
# Did not find our closing tag
SCANNED="$SCANNED${CONTENT[0]}"
local "$1" && mustache-indirect-array "$1" "${SCANNED}" "" ""
}
# Find the first index of a substring
#
# Parameters:
# $1: Destination variable
# $2: Haystack
# $3: Needle
mustache-find-string() {
local POS STRING
STRING=${2%%$3*}
[[ "$STRING" == "$2" ]] && POS=-1 || POS=${#STRING}
local "$1" && mustache-indirect "$1" $POS
}
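The suffix-stripping trick behind `mustache-find-string` can be demonstrated standalone (the sample strings here are illustrative):

```shell
# Strip the longest suffix starting at the needle; the length of the
# remaining prefix is the index, or -1 when the needle is absent.
haystack="hello world"
needle="world"
prefix=${haystack%%"$needle"*}
if [ "$prefix" = "$haystack" ]; then POS=-1; else POS=${#prefix}; fi
echo "$POS"
```

For these values the needle starts at offset 6; an absent needle leaves `prefix` equal to the haystack, which maps to -1.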
# Return a dotted name based on current context and target name
#
# Parameters:
# $1: Target variable to store results
# $2: Context name
# $3: Desired variable name
mustache-full-tag-name() {
if [[ -z "$2" ]]; then
local "$1" && mustache-indirect "$1" "$3"
else
local "$1" && mustache-indirect "$1" "${2}.${3}"
fi
}
# Return the content to parse. Can be a list of partials for files or
# the content from stdin.
#
# Parameters:
# $1: Variable name to assign this content back as
# $2-*: File names (optional)
mustache-get-content() {
local CONTENT FILENAME TARGET
TARGET=$1
shift
if [[ "${#@}" -gt 0 ]]; then
CONTENT=""
for FILENAME in "$@"; do
# This is so relative paths work from inside template files
CONTENT="$CONTENT"'{{>'"$FILENAME"'}}'
done
else
mustache-load-file CONTENT /dev/stdin
fi
local "$TARGET" && mustache-indirect "$TARGET" "$CONTENT"
}
# Indent a string, placing the indent at the beginning of every
# line that has any content.
#
# Parameters:
# $1: Name of destination variable to get an array of lines
# $2: The indent string
# $3: The string to reindent
mustache-indent-lines() {
local CONTENT FRAGMENT LEN POS_N POS_R RESULT TRIMMED
RESULT=""
LEN=$((${#3} - 1))
CONTENT="${3:0:$LEN}" # Remove newline and dot from workaround - in mustache-partial
if [ -z "$2" ]; then
local "$1" && mustache-indirect "$1" "$CONTENT"
return 0
fi
mustache-find-string POS_N "$CONTENT" $'\n'
mustache-find-string POS_R "$CONTENT" $'\r'
while [[ "$POS_N" -gt -1 ]] || [[ "$POS_R" -gt -1 ]]; do
if [[ "$POS_N" -gt -1 ]]; then
FRAGMENT="${CONTENT:0:$POS_N + 1}"
CONTENT=${CONTENT:$POS_N + 1}
else
FRAGMENT="${CONTENT:0:$POS_R + 1}"
CONTENT=${CONTENT:$POS_R + 1}
fi
mustache-trim-chars TRIMMED "$FRAGMENT" false true " " $'\t' $'\n' $'\r'
if [ ! -z "$TRIMMED" ]; then
FRAGMENT="$2$FRAGMENT"
fi
RESULT="$RESULT$FRAGMENT"
mustache-find-string POS_N "$CONTENT" $'\n'
mustache-find-string POS_R "$CONTENT" $'\r'
done
mustache-trim-chars TRIMMED "$CONTENT" false true " " $'\t'
if [ ! -z "$TRIMMED" ]; then
CONTENT="$2$CONTENT"
fi
RESULT="$RESULT$CONTENT"
local "$1" && mustache-indirect "$1" "$RESULT"
}
# Send a variable up to caller of a function
#
# Parameters:
# $1: Variable name
# $2: Value
mustache-indirect() {
unset -v "$1"
printf -v "$1" '%s' "$2"
}
# Send an array up to caller of a function
#
# Parameters:
# $1: Variable name
# $2-*: Array elements
mustache-indirect-array() {
unset -v "$1"
eval $1=\(\"\${@:2}\"\)
}
# Determine if a given environment variable exists and if it is an array.
#
# Parameters:
# $1: Name of environment variable
#
# Return code:
# 0 if the name is not empty, 1 otherwise
mustache-is-array() {
local MUSTACHE_TEST
MUSTACHE_TEST=$(declare -p "$1" 2>/dev/null) || return 1
[[ "${MUSTACHE_TEST:0:10}" == "declare -a" ]] && return 0
[[ "${MUSTACHE_TEST:0:10}" == "declare -A" ]] && return 0
return 1
}
# Return 0 if the passed name is a function.
#
# Parameters:
# $1: Name to check if it's a function
#
# Return code:
# 0 if the name is a function, 1 otherwise
mustache-is-function() {
local FUNCTIONS NAME
FUNCTIONS=$(declare -F)
FUNCTIONS=( ${FUNCTIONS//declare -f /} )
for NAME in ${FUNCTIONS[@]}; do
if [[ "$NAME" == "$1" ]]; then
return 0
fi
done
return 1
}
# Determine if the tag is a standalone tag based on whitespace before and
# after the tag.
#
# Passes back a string containing two numbers in the format "BEFORE AFTER"
# like "27 10". It indicates the number of bytes remaining in the "before"
# string (27) and the number of bytes to trim in the "after" string (10).
# Useful for string manipulation:
#
# mustache-is-standalone RESULT "$before" "$after" false || return 0
# RESULT_ARRAY=( $RESULT )
# echo "${before:0:${RESULT_ARRAY[0]}}...${after:${RESULT_ARRAY[1]}}"
#
# Parameters:
# $1: Variable to pass data back
# $2: Content before the tag
# $3: Content after the tag
# $4: true/false: is this the beginning of the content?
mustache-is-standalone() {
local AFTER_TRIMMED BEFORE_TRIMMED CHAR
mustache-trim-chars BEFORE_TRIMMED "$2" false true " " $'\t'
mustache-trim-chars AFTER_TRIMMED "$3" true false " " $'\t'
CHAR=$((${#BEFORE_TRIMMED} - 1))
CHAR=${BEFORE_TRIMMED:$CHAR}
if [[ "$CHAR" != $'\n' ]] && [[ "$CHAR" != $'\r' ]]; then
if [[ ! -z "$CHAR" ]] || ! $4; then
return 1;
fi
fi
CHAR=${AFTER_TRIMMED:0:1}
if [[ "$CHAR" != $'\n' ]] && [[ "$CHAR" != $'\r' ]] && [[ ! -z "$CHAR" ]]; then
return 2;
fi
if [[ "$CHAR" == $'\r' ]] && [[ "${AFTER_TRIMMED:1:1}" == $'\n' ]]; then
CHAR="$CHAR"$'\n'
fi
local "$1" && mustache-indirect "$1" "$((${#BEFORE_TRIMMED})) $((${#3} + ${#CHAR} - ${#AFTER_TRIMMED}))"
}
# Read a file
#
# Parameters:
# $1: Variable name to receive the file's content
# $2: Filename to load
mustache-load-file() {
local CONTENT LEN
# The subshell removes any trailing newlines. We forcibly add
# a dot to the content to preserve all newlines.
# TODO: remove cat and replace with read loop?
CONTENT=$(cat $2; echo '.')
LEN=$((${#CONTENT} - 1))
CONTENT=${CONTENT:0:$LEN} # Remove last dot
local "$1" && mustache-indirect "$1" "$CONTENT"
}
# Process a chunk of content some number of times.
#
# Parameters:
# $1: Content to parse and reparse and reparse
# $2: Tag prefix (context name)
# $3-*: Names to insert into the parsed content
mustache-loop() {
local CONTENT CONTEXT CONTEXT_BASE IGNORE
CONTENT=$1
CONTEXT_BASE=$2
shift 2
while [[ "${#@}" -gt 0 ]]; do
mustache-full-tag-name CONTEXT "$CONTEXT_BASE" "$1"
mustache-parse "$CONTENT" "$CONTEXT" false
shift
done
}
# Parse a block of text
#
# Parameters:
# $1: Block of text to change
# $2: Current name (the variable NAME for what {{.}} means)
# $3: true when no content before this, false otherwise
mustache-parse() {
# Keep naming variables MUSTACHE_* here to not overwrite needed variables
# used in the string replacements
local MUSTACHE_BLOCK MUSTACHE_CONTENT MUSTACHE_CURRENT MUSTACHE_IS_BEGINNING MUSTACHE_TAG
MUSTACHE_CURRENT=$2
MUSTACHE_IS_BEGINNING=$3
# Find open tags
mustache-split MUSTACHE_CONTENT "$1" '{{' '}}'
while [[ "${#MUSTACHE_CONTENT[@]}" -gt 1 ]]; do
mustache-trim-whitespace MUSTACHE_TAG "${MUSTACHE_CONTENT[1]}"
case $MUSTACHE_TAG in
'#'*)
# Loop, if/then, or pass content through function
# Sets context
mustache-standalone-allowed MUSTACHE_CONTENT "${MUSTACHE_CONTENT[@]}" $MUSTACHE_IS_BEGINNING
mustache-trim-whitespace MUSTACHE_TAG "${MUSTACHE_TAG:1}"
mustache-find-end-tag MUSTACHE_BLOCK "$MUSTACHE_CONTENT" "$MUSTACHE_TAG"
mustache-full-tag-name MUSTACHE_TAG "$MUSTACHE_CURRENT" "$MUSTACHE_TAG"
if mustache-test "$MUSTACHE_TAG"; then
# Show / loop / pass through function
if mustache-is-function "$MUSTACHE_TAG"; then
# TODO: Consider piping the output to
# mustache-get-content so the lambda does not
# execute in a subshell?
MUSTACHE_CONTENT=$($MUSTACHE_TAG "${MUSTACHE_BLOCK[0]}")
mustache-parse "$MUSTACHE_CONTENT" "$MUSTACHE_CURRENT" false
MUSTACHE_CONTENT="${MUSTACHE_BLOCK[2]}"
elif mustache-is-array "$MUSTACHE_TAG"; then
eval 'mustache-loop "${MUSTACHE_BLOCK[0]}" "$MUSTACHE_TAG" "${!'"$MUSTACHE_TAG"'[@]}"'
else
mustache-parse "${MUSTACHE_BLOCK[0]}" "$MUSTACHE_CURRENT" false
fi
fi
MUSTACHE_CONTENT="${MUSTACHE_BLOCK[2]}"
;;
'>'*)
# Load partial - get name of file relative to cwd
mustache-partial MUSTACHE_CONTENT "${MUSTACHE_CONTENT[@]}" $MUSTACHE_IS_BEGINNING "$MUSTACHE_CURRENT"
;;
'/'*)
# Closing tag - If hit in this loop, we simply ignore
# Matching tags are found in mustache-find-end-tag
mustache-standalone-allowed MUSTACHE_CONTENT "${MUSTACHE_CONTENT[@]}" $MUSTACHE_IS_BEGINNING
;;
'^'*)
# Display section if named thing does not exist
mustache-standalone-allowed MUSTACHE_CONTENT "${MUSTACHE_CONTENT[@]}" $MUSTACHE_IS_BEGINNING
mustache-trim-whitespace MUSTACHE_TAG "${MUSTACHE_TAG:1}"
mustache-find-end-tag MUSTACHE_BLOCK "$MUSTACHE_CONTENT" "$MUSTACHE_TAG"
mustache-full-tag-name MUSTACHE_TAG "$MUSTACHE_CURRENT" "$MUSTACHE_TAG"
if ! mustache-test "$MUSTACHE_TAG"; then
mustache-parse "${MUSTACHE_BLOCK[0]}" "$MUSTACHE_CURRENT" false "$MUSTACHE_CURRENT"
fi
MUSTACHE_CONTENT="${MUSTACHE_BLOCK[2]}"
;;
'!'*)
# Comment - ignore the tag content entirely
# Trim spaces/tabs before the comment
mustache-standalone-allowed MUSTACHE_CONTENT "${MUSTACHE_CONTENT[@]}" $MUSTACHE_IS_BEGINNING
;;
.)
# Current content (environment variable or function)
mustache-standalone-denied MUSTACHE_CONTENT "${MUSTACHE_CONTENT[@]}"
mustache-show "$MUSTACHE_CURRENT" "$MUSTACHE_CURRENT"
;;
'=')
# Change delimiters
# Any two non-whitespace sequences separated by whitespace.
# TODO
mustache-standalone-allowed MUSTACHE_CONTENT "${MUSTACHE_CONTENT[@]}" $MUSTACHE_IS_BEGINNING
;;
'{'*)
# Unescaped - split on }}} not }}
mustache-standalone-denied MUSTACHE_CONTENT "${MUSTACHE_CONTENT[@]}"
MUSTACHE_CONTENT="${MUSTACHE_TAG:1}"'}}'"$MUSTACHE_CONTENT"
mustache-split MUSTACHE_CONTENT "$MUSTACHE_CONTENT" '}}}'
mustache-trim-whitespace MUSTACHE_TAG "${MUSTACHE_CONTENT[0]}"
mustache-full-tag-name MUSTACHE_TAG "$MUSTACHE_CURRENT" "$MUSTACHE_TAG"
MUSTACHE_CONTENT=${MUSTACHE_CONTENT[1]}
# Now show the value
mustache-show "$MUSTACHE_TAG" "$MUSTACHE_CURRENT"
;;
'&'*)
# Unescaped
mustache-standalone-denied MUSTACHE_CONTENT "${MUSTACHE_CONTENT[@]}"
mustache-trim-whitespace MUSTACHE_TAG "${MUSTACHE_TAG:1}"
mustache-full-tag-name MUSTACHE_TAG "$MUSTACHE_CURRENT" "$MUSTACHE_TAG"
mustache-show "$MUSTACHE_TAG" "$MUSTACHE_CURRENT"
;;
*)
# Normal environment variable or function call
mustache-standalone-denied MUSTACHE_CONTENT "${MUSTACHE_CONTENT[@]}"
mustache-full-tag-name MUSTACHE_TAG "$MUSTACHE_CURRENT" "$MUSTACHE_TAG"
mustache-show "$MUSTACHE_TAG" "$MUSTACHE_CURRENT"
;;
esac
MUSTACHE_IS_BEGINNING=false
mustache-split MUSTACHE_CONTENT "$MUSTACHE_CONTENT" '{{' '}}'
done
echo -n "${MUSTACHE_CONTENT[0]}"
}
# Process a partial
#
# Indentation should be applied to the entire partial
#
# Prefix all variables
#
# Parameters:
# $1: Name of destination "content" variable.
# $2: Content before the tag that was not yet written
# $3: Tag content
# $4: Content after the tag
# $5: true/false: is this the beginning of the content?
# $6: Current context name
mustache-partial() {
local MUSTACHE_CONTENT MUSTACHE_FILENAME MUSTACHE_INDENT MUSTACHE_LINE MUSTACHE_PARTIAL MUSTACHE_STANDALONE
if mustache-is-standalone MUSTACHE_STANDALONE "$2" "$4" $5; then
MUSTACHE_STANDALONE=( $MUSTACHE_STANDALONE )
echo -n "${2:0:${MUSTACHE_STANDALONE[0]}}"
MUSTACHE_INDENT=${2:${MUSTACHE_STANDALONE[0]}}
MUSTACHE_CONTENT=${4:${MUSTACHE_STANDALONE[1]}}
else
MUSTACHE_INDENT=""
echo -n "$2"
MUSTACHE_CONTENT=$4
fi
mustache-trim-whitespace MUSTACHE_FILENAME "${3:1}"
# Execute in subshell to preserve current cwd and environment
(
# TODO: Remove dirname and use a function instead
cd "$(dirname "$MUSTACHE_FILENAME")"
mustache-indent-lines MUSTACHE_PARTIAL "$MUSTACHE_INDENT" "$(
mustache-load-file MUSTACHE_PARTIAL "${MUSTACHE_FILENAME##*/}"
# Fix bash handling of subshells
# The extra dot is removed in mustache-indent-lines
echo -n "${MUSTACHE_PARTIAL}."
)"
mustache-parse "$MUSTACHE_PARTIAL" "$6" true
)
local "$1" && mustache-indirect "$1" "$MUSTACHE_CONTENT"
}
# Show an environment variable or the output of a function.
#
# Limit/prefix any variables used
#
# Parameters:
# $1: Name of environment variable or function
# $2: Current context
mustache-show() {
local MUSTACHE_CONTENT MUSTACHE_NAME_PARTS
if mustache-is-function "$1"; then
MUSTACHE_CONTENT=$($1 "")
mustache-parse "$MUSTACHE_CONTENT" "$2" false
return 0
fi
mustache-split MUSTACHE_NAME_PARTS "$1" "."
if [[ -z "${MUSTACHE_NAME_PARTS[1]}" ]]; then
echo -n "${!1}"
else
# Further subindexes are disallowed
eval 'echo -n "${'"${MUSTACHE_NAME_PARTS[0]}"'['"${MUSTACHE_NAME_PARTS[1]%%.*}"']}"'
fi
}
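mustache-show resolves a plain name with bash indirect expansion (`${!1}`) and a one-level `name.index` path with `eval`. A minimal standalone sketch of that resolution — the `show` helper and the sample variables are illustrative, not part of the original script:

```shell
#!/usr/bin/env bash
# Sketch of the name resolution in mustache-show: ${!name} is bash
# indirect expansion; a one-level "name.index" path is resolved via eval.
# "show", GREETING and ITEMS are illustrative names.
show() {
    local PARTS
    IFS='.' read -r -a PARTS <<< "$1"
    if [[ -z "${PARTS[1]:-}" ]]; then
        echo -n "${!1}"    # plain variable: indirect expansion
    else
        # array element: build ${NAME[INDEX]} and evaluate it
        eval 'echo -n "${'"${PARTS[0]}"'['"${PARTS[1]}"']}"'
    fi
}
GREETING="hello"
ITEMS=(alpha beta)
show GREETING; echo    # hello
show ITEMS.1; echo     # beta
```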
# Split a larger string into an array
#
# Parameters:
# $1: Destination variable
# $2: String to split
# $3: Starting delimiter
# $4: Ending delimiter (optional)
mustache-split() {
local POS RESULT
RESULT=( "$2" )
mustache-find-string POS "${RESULT[0]}" "$3"
if [[ "$POS" -ne -1 ]]; then
# The first delimiter was found
RESULT[1]=${RESULT[0]:$POS + ${#3}}
RESULT[0]=${RESULT[0]:0:$POS}
if [[ ! -z "$4" ]]; then
mustache-find-string POS "${RESULT[1]}" "$4"
if [[ "$POS" -ne -1 ]]; then
# The second delimiter was found
RESULT[2]="${RESULT[1]:$POS + ${#4}}"
RESULT[1]="${RESULT[1]:0:$POS}"
fi
fi
fi
local "$1" && mustache-indirect-array "$1" "${RESULT[@]}"
}
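mustache-split above carves a string into (text before the tag, tag body, text after the tag) around the opening and closing delimiters. The same three-way cut can be sketched with plain parameter expansion — `split_tag` and the sample template are illustrative:

```shell
#!/usr/bin/env bash
# Three-way split around the first {{ ... }} pair, mirroring what
# mustache-split returns; split_tag is an illustrative helper name.
split_tag() {
    local S=$1 BEFORE TAG AFTER
    BEFORE=${S%%'{{'*}                  # text before the first {{
    TAG=${S#*'{{'}; TAG=${TAG%%'}}'*}   # text between {{ and }}
    AFTER=${S#*'}}'}                    # text after the first }}
    printf '%s|%s|%s' "$BEFORE" "$TAG" "$AFTER"
}
split_tag 'Hello {{name}}!'; echo    # Hello |name|!
```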
# Handle the content for a standalone tag. This means removing whitespace
# (not newlines) before a tag and whitespace and a newline after a tag.
# That is, assuming, that the line is otherwise empty.
#
# Parameters:
# $1: Name of destination "content" variable.
# $2: Content before the tag that was not yet written
# $3: Tag content (not used)
# $4: Content after the tag
# $5: true/false: is this the beginning of the content?
mustache-standalone-allowed() {
local STANDALONE_BYTES
if mustache-is-standalone STANDALONE_BYTES "$2" "$4" $5; then
STANDALONE_BYTES=( $STANDALONE_BYTES )
echo -n "${2:0:${STANDALONE_BYTES[0]}}"
local "$1" && mustache-indirect "$1" "${4:${STANDALONE_BYTES[1]}}"
else
echo -n "$2"
local "$1" && mustache-indirect "$1" "$4"
fi
}
# Handle the content for a tag that is never "standalone". No adjustments
# are made for newlines and whitespace.
#
# Parameters:
# $1: Name of destination "content" variable.
# $2: Content before the tag that was not yet written
# $3: Tag content (not used)
# $4: Content after the tag
mustache-standalone-denied() {
echo -n "$2"
local "$1" && mustache-indirect "$1" "$4"
}
# Returns 0 (success) if the named thing is a function or if it is a non-empty
# environment variable.
#
# Do not use unprefixed variables here if possible as this needs to check
# if any name exists in the environment
#
# Parameters:
# $1: Name of environment variable or function
# $2: Current value (our context)
#
# Return code:
# 0 if the name is not empty, 1 otherwise
mustache-test() {
# Test for functions
mustache-is-function "$1" && return 0
if mustache-is-array "$1"; then
# Arrays must have at least 1 element
eval '[[ "${#'"$1"'}" -gt 0 ]]' && return 0
else
# Environment variables must not be empty
[[ ! -z "${!1}" ]] && return 0
fi
return 1
}
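mustache-test treats a name as "truthy" when it is a function, a non-empty variable, or an array with at least one element. A sketch of the variable/array branches, with `mustache-is-array` replaced by a `declare -p` probe (the helper name and sample variables are illustrative):

```shell
#!/usr/bin/env bash
# Truthiness check in the spirit of mustache-test: a name is "true" if
# it is a non-empty variable or an array with at least one element.
is_truthy() {
    if [[ "$(declare -p "$1" 2>/dev/null)" == "declare -a"* ]]; then
        # array: at least one element
        eval '[[ "${#'"$1"'[@]}" -gt 0 ]]'
    else
        # scalar: non-empty via indirect expansion
        [[ -n "${!1:-}" ]]
    fi
}
EMPTY=""
NAME="cue"
LIST=(a b)
is_truthy NAME && echo yes    # yes
is_truthy EMPTY || echo no    # no
is_truthy LIST && echo yes    # yes
```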
# Trim leading and/or trailing characters from a string
#
# Parameters:
# $1: Name of destination variable
# $2: The string
# $3: true/false - trim front?
# $4: true/false - trim end?
# $5-*: Characters to trim
mustache-trim-chars() {
local BACK CURRENT FRONT LAST TARGET VAR
TARGET=$1
CURRENT=$2
FRONT=$3
BACK=$4
LAST=""
shift # Remove target
shift # Remove string
shift # Remove trim front flag
shift # Remove trim end flag
while [[ "$CURRENT" != "$LAST" ]]; do
LAST=$CURRENT
for VAR in "$@"; do
$FRONT && CURRENT="${CURRENT/#$VAR}"
$BACK && CURRENT="${CURRENT/%$VAR}"
done
done
local "$TARGET" && mustache-indirect "$TARGET" "$CURRENT"
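The loop above relies on bash's anchored pattern substitutions: `${VAR/#X}` removes one leading `X` and `${VAR/%X}` one trailing `X`. A self-contained sketch of the same idea, trimming only space and tab (`trim` is an illustrative name):

```shell
#!/usr/bin/env bash
# Anchored trimming as in mustache-trim-chars: ${VAR/#X} deletes one
# leading X, ${VAR/%X} one trailing X; loop until the string is stable.
trim() {
    local CURRENT=$1 LAST="" C
    while [[ "$CURRENT" != "$LAST" ]]; do
        LAST=$CURRENT
        for C in ' ' $'\t'; do
            CURRENT="${CURRENT/#$C}"   # strip one leading char
            CURRENT="${CURRENT/%$C}"   # strip one trailing char
        done
    done
    printf '%s' "$CURRENT"
}
trim $' \t hello world \t '; echo    # hello world
```

Inner whitespace is untouched because both substitutions are anchored to the ends of the string.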
}
# Trim leading and trailing whitespace from a string
#
# Parameters:
# $1: Name of variable to store trimmed string
# $2: The string
mustache-trim-whitespace() {
local RESULT
mustache-trim-chars RESULT "$2" true true $'\r' $'\n' $'\t' " "
local "$1" && mustache-indirect "$1" "$RESULT"
}
mustache-get-content MUSTACHE_CONTENT "$@"
mustache-parse "$MUSTACHE_CONTENT" "" true



@@ -1,247 +0,0 @@
#!/bin/bash
set -x #echo on
cat > /etc/network/interfaces << EOF
auto lo
iface lo inet loopback
source interfaces.d/*
EOF
cat > /etc/network/interfaces.d/eth0 << EOF
auto eth0
iface eth0 inet dhcp
metric 0
EOF
cat > /etc/network/interfaces.d/eth1 << EOF
auto eth1
iface eth1 inet dhcp
metric 1
EOF
ifup eth1
# Script configuration parameters ***start
os_region_name={{OS_REGION_NAME}}
os_tenant_name={{OS_TENANT_NAME}}
os_username={{OS_USERNAME}}
os_password={{OS_PASSWORD}}
os_auth_url={{OS_AUTH_URL}}
os_key_name={{OS_KEY_NAME}}
os_security_group={{CUE_SECURITY_GROUP}}
cue_image_id={{CUE_IMAGE_ID}}
cue_management_network_id={{CUE_MANAGEMENT_NIC}}
mysql_root_password={{MYSQL_ROOT_PASSWORD}}
mysql_cue_api_password={{MYSQL_CUEAPI_PASSWORD}}
mysql_cue_worker_password={{MYSQL_CUEWORKER_PASSWORD}}
floating_ip={{FLOATING_IP}}
# Script configuration parameters ***end
# Determine if the given option is present in the INI file
# ini_has_option config-file section option
function ini_has_option {
local xtrace=$(set +o | grep xtrace)
set +o xtrace
local file=$1
local section=$2
local option=$3
local line
line=$(sed -ne "/^\[$section\]/,/^\[.*\]/ { /^$option[ \t]*=/ p; }" "$file")
$xtrace
[ -n "$line" ]
}
# Set an option in an INI file
# iniset config-file section option value
function iniset {
local xtrace=$(set +o | grep xtrace)
set +o xtrace
local file=$1
local section=$2
local option=$3
local value=$4
[[ -z ${section} || -z ${option} ]] && return
if ! grep -q "^\[$section\]" "$file" 2>/dev/null; then
# Add section at the end
echo -e "\n[$section]" >>"$file"
fi
if ! ini_has_option "$file" "$section" "$option"; then
# Add it
sed -i -e "/^\[$section\]/ a\\
$option = $value
" "$file"
else
local sep=$(echo -ne "\x01")
# Replace it
sed -i -e '/^\['${section}'\]/,/^\[.*\]/ s'${sep}'^\('${option}'[ \t]*=[ \t]*\).*$'${sep}'\1'"${value}"${sep} "$file"
fi
${xtrace}
}
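These are DevStack-style INI helpers: look for an option inside a `[section]`, then either append it after the section header or rewrite it in place with a ranged `sed` expression. A condensed, self-contained sketch of the same append-or-replace flow (GNU `sed -i` assumed; `set_ini` is an illustrative name, not part of the script):

```shell
#!/usr/bin/env bash
# Condensed append-or-replace INI editing in the style of iniset above.
set_ini() {    # set_ini FILE SECTION OPTION VALUE
    local file=$1 section=$2 option=$3 value=$4
    # Create the section if it does not exist yet
    grep -q "^\[$section\]" "$file" 2>/dev/null || printf '\n[%s]\n' "$section" >> "$file"
    if sed -n "/^\[$section\]/,/^\[.*\]/p" "$file" | grep -q "^$option[[:space:]]*="; then
        # Replace the existing value within the section's line range
        sed -i "/^\[$section\]/,/^\[.*\]/ s|^\($option[[:space:]]*=[[:space:]]*\).*\$|\1$value|" "$file"
    else
        # Append the option right after the section header
        sed -i "/^\[$section\]/ a\\
$option = $value" "$file"
    fi
}
tmp=$(mktemp)
set_ini "$tmp" DEFAULT debug True
set_ini "$tmp" api port 8795
set_ini "$tmp" api port 9000    # overwrites the previous value
result=$(cat "$tmp"); rm -f "$tmp"
printf '%s\n' "$result"
```

The ranged address `/^\[$section\]/,/^\[.*\]/` confines the substitution to one section, so a `port` option under `[api]` never clobbers one under another section.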
# Update & upgrade VM
unset UCF_FORCE_CONFFOLD
export UCF_FORCE_CONFFNEW=YES
export DEBIAN_FRONTEND=noninteractive
apt-get update
apt-get -o Dpkg::Options::="--force-confnew" --force-yes -fuy dist-upgrade
# Install required packages
apt-get install -y python-pip python-dev git build-essential zookeeper zookeeperd python-mysqldb supervisor
# Install keystone
cd /home/ubuntu/
sudo -u ubuntu -g ubuntu git clone https://git.openstack.org/openstack-dev/devstack
mkdir -p /opt/stack
chown ubuntu:ubuntu /opt/stack
sudo -u ubuntu -g ubuntu git clone https://github.com/openstack/requirements /opt/stack/requirements
cat > devstack/local.conf<< EOF
[[local|localrc]]
HOST_IP=127.0.0.1
SERVICE_HOST=$floating_ip
REQUIREMENTS_MODE=soft
ADMIN_PASSWORD=password
MYSQL_PASSWORD=$mysql_root_password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=password
LOGFILE=/opt/stack/logs/stack.sh.log
VERBOSE=True
LOG_COLOR=True
SCREEN_LOGDIR=/opt/stack/logs
disable_service g-api
disable_service g-reg
disable_service n-api
disable_service n-crt
disable_service n-obj
disable_service n-cpu
disable_service n-net
disable_service n-cond
disable_service n-sch
disable_service n-novnc
disable_service n-xvnc
disable_service n-cauth
disable_service c-sch
disable_service c-api
disable_service c-vol
disable_service h-eng
disable_service h-api
disable_service h-api-cfn
disable_service h-api-cw
disable_service horizon
disable_service tempest
EOF
pushd /home/ubuntu/devstack
sudo -u ubuntu -g ubuntu ./stack.sh
popd
# Setup keystone user, service, and endpoint
CUE_URL="http://${floating_ip}:8795/"
source ./devstack/openrc admin admin
keystone user-create --name cue --tenant service --pass password
keystone user-role-add --user cue --role admin --tenant service
keystone service-create --name cue --type "message-broker" --description "Message Broker Provisioning Service"
keystone endpoint-create --region $OS_REGION_NAME --service cue --publicurl $CUE_URL --adminurl $CUE_URL --internalurl $CUE_URL
# Install MySQL DB
debconf-set-selections <<< "mysql-server mysql-server/root_password password ${mysql_root_password}"
debconf-set-selections <<< "mysql-server mysql-server/root_password_again password ${mysql_root_password}"
apt-get -y install mysql-server
# Create cue database
echo "create database cue;" | mysql -u root -p${mysql_root_password}
# Create MySQL DB users for Cue API and Cue worker processes
echo "CREATE USER 'cue_api'@'%' IDENTIFIED BY '${mysql_cue_api_password}'" | mysql -u root -p${mysql_root_password}
echo "CREATE USER 'cue_worker'@'%' IDENTIFIED BY '${mysql_cue_worker_password}'" | mysql -u root -p${mysql_root_password}
# Grant the cue_api and cue_worker users privileges on the cue database
echo "GRANT ALL PRIVILEGES ON cue. * TO 'cue_api'@'%';" | mysql -u root -p${mysql_root_password}
echo "GRANT ALL PRIVILEGES ON cue. * TO 'cue_worker'@'%';" | mysql -u root -p${mysql_root_password}
# Restart mysql server
service mysql restart
# Install cue service
git clone https://github.com/openstack/cue.git
cd cue
python setup.py install
pip install pbr
# Create local directory for cue configuration and policy files
mkdir -p /etc/cue
# Copy Cue's default configuration files and policy file to /etc/cue/
CUE_CONF="/etc/cue/cue.conf"
cp etc/cue/cue.conf.sample ${CUE_CONF}
cp etc/cue/policy.json /etc/cue/policy.json
# Set required cue configuration settings
db_connection_api=mysql://cue_api:${mysql_cue_api_password}@127.0.0.1/cue
db_connection_worker=mysql://cue_worker:${mysql_cue_worker_password}@127.0.0.1/cue
iniset ${CUE_CONF} DEFAULT rabbit_port 5672
iniset ${CUE_CONF} DEFAULT debug True
iniset ${CUE_CONF} DEFAULT os_security_group ${os_security_group}
iniset ${CUE_CONF} DEFAULT management_network_id ${cue_management_network_id}
iniset ${CUE_CONF} DEFAULT auth_strategy keystone
iniset ${CUE_CONF} api host_ip '0.0.0.0'
iniset ${CUE_CONF} api port 8795
iniset ${CUE_CONF} api max_limit 1000
iniset ${CUE_CONF} database connection ${db_connection_api}
iniset ${CUE_CONF} openstack os_key_name ${os_key_name}
iniset ${CUE_CONF} openstack os_region_name ${os_region_name}
iniset ${CUE_CONF} openstack os_tenant_name ${os_tenant_name}
iniset ${CUE_CONF} openstack os_username ${os_username}
iniset ${CUE_CONF} openstack os_password ${os_password}
iniset ${CUE_CONF} openstack os_auth_url ${os_auth_url}
iniset ${CUE_CONF} database connection ${db_connection_worker}
iniset ${CUE_CONF} keystone_authtoken admin_tenant_name service
iniset ${CUE_CONF} keystone_authtoken admin_password password
iniset ${CUE_CONF} keystone_authtoken admin_user cue
iniset ${CUE_CONF} keystone_authtoken identity_uri http://${floating_ip}:35357
# Execute Cue's database upgrade scripts
cue-manage --config-file /etc/cue/cue.conf database upgrade
# Execute Cue's taskflow upgrade scripts
cue-manage --config-file /etc/cue/cue.conf taskflow upgrade
# set default broker and cue image
cue-manage --config-file ${CUE_CONF} broker add rabbitmq true
BROKER_ID=$(cue-manage --config-file ${CUE_CONF} broker list | grep rabbitmq | tr -d ' ' | cut -f 2 -d '|')
cue-manage --config-file ${CUE_CONF} broker add_metadata ${BROKER_ID} --image ${cue_image_id}
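The `BROKER_ID` line above scrapes a `|`-delimited CLI table: `grep` picks the data row, `tr -d ' '` strips padding, and `cut` takes the second field. A sketch of that extraction against a made-up sample table (the UUID and layout are illustrative, not real `cue-manage` output):

```shell
#!/usr/bin/env bash
# Parsing a '|'-delimited CLI table as done for BROKER_ID above.
sample='+--------------------------------------+----------+
| id                                   | name     |
+--------------------------------------+----------+
| 6f2c5e6a-0000-4000-8000-000000000001 | rabbitmq |
+--------------------------------------+----------+'
# grep selects the data row; tr removes padding; cut takes field 2
broker_id=$(printf '%s\n' "$sample" | grep rabbitmq | tr -d ' ' | cut -f 2 -d '|')
echo "$broker_id"    # 6f2c5e6a-0000-4000-8000-000000000001
```

Field 1 is empty (everything before the leading `|`), which is why field 2 holds the id column.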
# Create supervisord execution configuration for Cue API
cat > /etc/supervisor/conf.d/cueapi.conf<< EOF
[program:cue-api]
command=cue-api --debug --config-file /etc/cue/cue.conf
process_name=%(program_name)s
stdout_logfile=/var/log/cue-api.log
stdout_logfile_maxbytes=1MB
stdout_logfile_backups=10
stdout_capture_maxbytes=1MB
stderr_logfile=/var/log/cue-api.err
stderr_logfile_maxbytes=1MB
stderr_logfile_backups=10
stderr_capture_maxbytes=1MB
EOF
# Create supervisord execution configuration for Cue Worker
cat > /etc/supervisor/conf.d/cueworker.conf<< EOF
[program:cue-worker]
command=cue-worker --debug --config-file /etc/cue/cue.conf
process_name=%(program_name)s
stdout_logfile=/var/log/cue-worker.log
stdout_logfile_maxbytes=1MB
stdout_logfile_backups=10
stdout_capture_maxbytes=1MB
stderr_logfile=/var/log/cue-worker.err
stderr_logfile_maxbytes=1MB
stderr_logfile_backups=10
stderr_capture_maxbytes=1MB
EOF
# Restart supervisord to start Cue API and Cue Worker processes
service supervisor restart


@@ -1,96 +0,0 @@
# -*- mode: ruby -*-
# # vi: set ft=ruby :
require 'fileutils'
Vagrant.require_version ">= 1.6.0"
CONFIG = File.join(File.dirname(__FILE__), "vagrant_config.rb")
UBUNTU_COMMON = File.join(File.dirname(__FILE__), "lib/ubuntu.rb")
FEDORA_COMMON = File.join(File.dirname(__FILE__), "lib/fedora.rb")
DEVSTACK_SCRIPT = File.join(File.dirname(__FILE__), "lib/devstack_script.rb")
RALLY_SCRIPT = File.join(File.dirname(__FILE__), "lib/rally_script.rb")
GITCONFIG = `cat $HOME/.gitconfig`
VAGRANTFILE_API_VERSION = "2"
# Defaults for config options
$hostname = File.basename(File.dirname(__FILE__))
$forwarded_port = {}
$install_devstack = false
$install_build_deps = true
$install_tmate = false
$install_rally = true
$ubuntu_box = "sputnik13/trusty64"
$vm_memory = 6144
$vm_cpus = 2
if File.exist?(CONFIG)
require CONFIG
end
require UBUNTU_COMMON
require FEDORA_COMMON
require DEVSTACK_SCRIPT
require RALLY_SCRIPT
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
$forwarded_port.each do |guest_port, host_port|
config.vm.network "forwarded_port", guest: guest_port, host: host_port
end
config.vm.provider "virtualbox" do |v|
v.memory = $vm_memory
v.cpus = $vm_cpus
end
config.vm.provider "vmware_fusion" do |v, override|
v.vmx["memsize"] = $vm_memory
v.vmx["numvcpus"] = $vm_cpus
v.vmx["vhv.enable"] = "TRUE"
v.vmx["ethernet0.virtualdev"] = "vmxnet3"
end
config.vm.synced_folder "../..", "/home/vagrant/cue"
if File.directory?("../../../python-cueclient")
config.vm.synced_folder "../../../python-cueclient", "/home/vagrant/python-cueclient"
end
if File.directory?("../../../cue-dashboard")
config.vm.synced_folder "../../../cue-dashboard", "/home/vagrant/cue-dashboard"
end
config.ssh.shell = "bash -c 'BASH_ENV=/etc/profile exec bash'"
config.ssh.forward_agent = true
config.vm.define "ubuntu" do |ubuntu|
ubuntu.vm.hostname = "cuedev-ubuntu"
ubuntu_common(ubuntu)
end
config.vm.define "fedora" do |fedora|
fedora.vm.hostname = "cuedev-fedora"
fedora_common(fedora)
end
# Common provisioning steps
config.vm.provision :shell, :privileged => true,
:inline => "test -d /opt/stack || mkdir -p /opt/stack"
config.vm.provision :shell, :privileged => true,
:inline => "chown vagrant /opt/stack"
config.vm.provision :shell, :privileged => false,
:inline => $devstack_script
if $install_rally
config.vm.provision :shell, :privileged => false,
:inline => $rally_script
end
if $install_devstack
config.vm.provision :shell, :privileged => false,
:inline => "pushd $HOME/devstack; ./stack.sh"
end
end


@@ -1,62 +0,0 @@
# Devstack init script
$devstack_script = <<SCRIPT
#!/bin/bash
set -e
DEBIAN_FRONTEND=noninteractive sudo apt-get -qqy update || sudo yum update -qy
DEBIAN_FRONTEND=noninteractive sudo apt-get install -qqy git || sudo yum install -qy git
pushd ~
# Copy over git config
cat << EOF > /home/vagrant/.gitconfig
#{GITCONFIG}
EOF
test -d devstack || git clone https://git.openstack.org/openstack-dev/devstack
test -d /home/vagrant/bin || mkdir /home/vagrant/bin
cat << EOF > /home/vagrant/bin/refresh_devstack.sh
#!/bin/bash
rsync -av --exclude='.tox' --exclude='.venv' --exclude='.vagrant' /home/vagrant/cue /opt/stack
if [ -d "/home/vagrant/python-cueclient" ]; then
rsync -av --exclude='.tox' --exclude='.venv' --exclude='.vagrant' --exclude='contrib/vagrant' /home/vagrant/python-cueclient /opt/stack
fi
if [ -d "/home/vagrant/cue-dashboard" ]; then
rsync -av --exclude='.tox' --exclude='.venv' --exclude='.vagrant' --exclude='contrib/vagrant' /home/vagrant/cue-dashboard /opt/stack
fi
# Install Vagrant local.conf sample
if [ ! -f "/home/vagrant/devstack/local.conf" ]; then
cp /opt/stack/cue/devstack/local.conf /home/vagrant/devstack/local.conf
fi
# Install Vagrant local.sh sample
if [ ! -f "/home/vagrant/devstack/local.sh" ]; then
cp /opt/stack/cue/devstack/local.sh /home/vagrant/devstack/local.sh
fi
pushd /home/vagrant/cue/devstack
for f in lib/*; do
if [ ! -f "/home/vagrant/devstack/\\$f" ]; then
ln -fs /opt/stack/cue/devstack/\\$f -t /home/vagrant/devstack/\\$(dirname \\$f)
fi
done
popd
EOF
chmod +x /home/vagrant/bin/refresh_devstack.sh
cat << EOF >> /home/vagrant/.bash_aliases
alias refresh_devstack="/home/vagrant/bin/refresh_devstack.sh"
alias delete_ports="neutron port-list | egrep '.+_cue\[.+\]\.node\[.+\]' | tr -d ' ' | cut -f 2 -d '|' | xargs -n1 neutron port-delete"
alias delete_clusters="openstack cue cluster list | grep rally | tr -d ' ' | cut -f 2 -d '|' | xargs -n1 openstack cue cluster delete"
EOF
/home/vagrant/bin/refresh_devstack.sh
SCRIPT


@@ -1,7 +0,0 @@
# Common provisioning steps for Fedora VMs
def fedora_common(machine)
machine.vm.box = $fedora_box
machine.vm.provision :shell, :privileged => true, :inline => "yum update -y vim-minimal" # RH Bug 1066983
machine.vm.provision :shell, :privileged => true, :inline => "yum install -y git-core MySQL-python"
end


@@ -1,20 +0,0 @@
# Rally init script
$rally_script = <<SCRIPT
#!/bin/bash
set -e
DEBIAN_FRONTEND=noninteractive sudo apt-get -qqy update || sudo yum update -qy
DEBIAN_FRONTEND=noninteractive sudo apt-get install -qqy git || sudo yum install -qy git
pushd ~
test -d devstack || git clone https://git.openstack.org/openstack-dev/devstack
test -d rally || git clone https://github.com/openstack/rally
cd devstack
echo "enable_plugin rally https://github.com/openstack/rally master" >> local.conf
cat << EOF >> /home/vagrant/.bash_aliases
alias run_rally_cue_scenarios="rally -v --debug task start --task ~/cue/rally-jobs/rabbitmq-scenarios.yaml"
EOF
SCRIPT


@@ -1,31 +0,0 @@
# Common provisioning steps for Ubuntu VMs
def ubuntu_common(machine)
machine.vm.box = $ubuntu_box
machine.vm.provision :shell, :privileged => true,
:inline => "DEBIAN_FRONTEND=noninteractive apt-get update"
machine.vm.provision :shell, :privileged => true,
:inline => "DEBIAN_FRONTEND=noninteractive apt-get install --yes git"
machine.vm.provision :shell, :privileged => true,
:inline => "DEBIAN_FRONTEND=noninteractive apt-get install --yes python-software-properties software-properties-common squid-deb-proxy-client"
if $package_proxy
machine.vm.provision :shell, :privileged => true,
:inline => "echo \"Acquire { Retries \\\"0\\\"; HTTP { Proxy \\\"#{$package_proxy}\\\"; }; };\" > /etc/apt/apt.conf.d/99proxy"
end
# Install build dependencies
if $install_build_deps
machine.vm.provision "shell", inline: "apt-get install -y build-essential git libmysqlclient-dev python-tox python-dev libxml2-dev libxslt1-dev libffi-dev libssl-dev gettext"
end
# Install tmate [optional]
if $install_tmate
machine.vm.provision "shell", :inline => "sudo add-apt-repository ppa:nviennot/tmate"
machine.vm.provision "shell", :inline => "sudo apt-get update"
machine.vm.provision "shell", :inline => "sudo apt-get install -y tmate"
end
# Remove anything unnecessary
machine.vm.provision "shell", inline: "apt-get autoremove -y"
end


@@ -1,39 +0,0 @@
# -*- mode: ruby -*-
# # vi: set ft=ruby :
# Uncomment $hostname and set it to specify an explicit hostname for the VM
# $hostname = "dev"
# Setup a guest port => host port mapping for the mappings provided below
#$forwarded_port = {
# 8795 => 8795,
# 6080 => 6080,
# 80 => 8080
#}
# Ubuntu box
$ubuntu_box = "sputnik13/trusty64"
# Fedora box
$fedora_box = "box-cutter/fedora20"
# Specify a proxy to be used for packages
$package_proxy = nil
# Install devstack in the VM
$install_devstack = false
# Install build dependencies
$install_build_deps = true
# Set $install_tmate to true to install tmate in the VM
$install_tmate = false
# Set the amount of RAM configured for the VM
$vm_memory = 4096
# Set the number of CPU cores configured for the VM
$vm_cpus = 2
# Install rally in the vm
$install_rally = true


@@ -1,40 +0,0 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
$vm_count = 3
$hostname = File.basename(File.dirname(__FILE__))
$domain = "localdomain"
$ip_prefix = "10.250.250"
#$pkg_mirror = "http://localhost/mirror"
$ip_list = ""
(1..$vm_count).each do |i|
$ip_list += "#{$ip_prefix}.#{100+i} "
end
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
config.vm.box = "sputnik13/wheezy64"
if Vagrant.has_plugin?("vagrant-hostmanager")
config.hostmanager.enabled = true
config.hostmanager.manage_host = false
config.hostmanager.ignore_private_ip = false
config.hostmanager.include_offline = true
end
config.ssh.shell = "bash -c 'BASH_ENV=/etc/profile exec bash'"
(1..$vm_count).each do |i|
config.vm.define "#{$hostname}#{i}" do |node|
node.vm.hostname = "#{$hostname}#{i}"
node.vm.network :private_network, ip: "#{$ip_prefix}.#{100+i}", :netmask => "255.255.255.0"
node.hostmanager.aliases = ["#{$hostname}#{i}.#{$domain}", "#{$hostname}#{i}"]
if $pkg_mirror then
node.vm.provision "shell", inline: "sed -e 's/http:\/\/.*.archive.ubuntu.com/http:\/\/#{$pkg_mirror}/' -i /etc/apt/sources.list ; apt-get update"
end
node.vm.provision "shell", inline: "/vagrant/install.sh -n #{$vm_count} #{i} #{$ip_list}"
end
end
end


@@ -1,87 +0,0 @@
#!/bin/bash
ZOOKEEPER_COUNT=1
if [[ -f '/etc/issue' ]]; then
LINUX_DISTRO=`cat /etc/issue | egrep 'Amazon|Ubuntu|CentOS|RedHat|Debian' | awk -F' ' '{print $1}'`
elif [[ -f '/etc/debian_version' ]]; then
LINUX_DISTRO='Debian'
fi
function usage {
echo "usage: $0 [-n count] [-h] MYID IP_1 ... IP_N"
exit -1
}
while getopts n:h opt; do
case $opt in
n)
ZOOKEEPER_COUNT=$OPTARG
;;
h)
usage
esac
done
# Pop processed options from the option stack
OPTIND=$OPTIND-1
shift $OPTIND
if [[ ! $# -gt $ZOOKEEPER_COUNT ]]; then
echo "Invalid number of arguments"
echo ""
usage
fi
ZK_MYID=$1
shift
for i in `seq 1 $ZOOKEEPER_COUNT`; do
ZK_SERVER_IP[$i]=$1
shift
done
# Check for root permissions
if [ "$(id -u)" != "0" ]; then
SUDO='sudo'
echo ""
echo "This script is not being run as root; you may be asked for a password in order to execute sudo."
echo ""
else
SUDO=
fi
# Set distro specific values
case "$LINUX_DISTRO" in
'Ubuntu'|'Debian')
UPDATE_PKG='apt-get update'
INSTALL_PKG='apt-get install -y'
PKG_NAME='zookeeper zookeeper-bin zookeeperd'
CONFIG_FILE='/etc/zookeeper/conf/zoo.cfg'
SERVICE_NAME='zookeeper'
MYID_FILE='/var/lib/zookeeper/myid'
;;
*)
echo "Only Ubuntu and Debian are supported at this time"
exit -1
esac
# Update package cache if necessary
if [[ -n "$UPDATE_PKG" ]]; then
$SUDO $UPDATE_PKG
fi
# Install package
$SUDO $INSTALL_PKG $PKG_NAME
# Generate configuration
$SUDO sed -i.bak -e '/^#.*$/d' -e '/^$/d' -e '/^server\..*=.*/d' $CONFIG_FILE
for i in `seq 1 $ZOOKEEPER_COUNT`; do
echo "server.${i}=${ZK_SERVER_IP[$i]}:2888:3888" >> $CONFIG_FILE
done
echo $ZK_MYID > $MYID_FILE
# Restart service
$SUDO service $SERVICE_NAME restart
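The configuration step above appends one `server.N=IP:2888:3888` line per ensemble member, where 2888 is the peer/quorum port and 3888 the leader-election port. A minimal sketch of that generation loop with illustrative addresses:

```shell
#!/usr/bin/env bash
# Generating per-node zoo.cfg server entries as in the loop above.
# The addresses below are illustrative.
IPS=(10.250.250.101 10.250.250.102 10.250.250.103)
cfg_lines=""
for i in "${!IPS[@]}"; do
    # server.N is 1-based; 2888 = peer port, 3888 = election port
    cfg_lines+="server.$((i + 1))=${IPS[$i]}:2888:3888"$'\n'
done
printf '%s' "$cfg_lines"
```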


@@ -1,50 +0,0 @@
# Copyright 2014 Hewlett-Packard Development Company, L.P.
#
# Author: Endre Karlson <endre.karlson@hp.com>
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os.path
from oslo_config import cfg
from oslo_log import log
log.register_options(cfg.CONF)
DEFAULT_OPTS = [
cfg.StrOpt('pybasedir',
default=os.path.abspath(os.path.join(os.path.dirname(__file__),
'../')),
help='Directory where the cue python module is installed'),
cfg.StrOpt('state-path', default='/var/lib/cue',
help='Top-level directory for maintaining cue\'s state'),
cfg.StrOpt('rabbit_port',
default='5672',
help='The port to access the RabbitMQ AMQP interface on a clustered '
'vm'),
cfg.StrOpt('os_security_group',
default=None,
help='The default Security Group to use for VMs created as '
'part of a cluster'),
cfg.StrOpt('management_network_id',
default=None,
help='The id representing the management network'),
cfg.StrOpt('default_broker_name',
default='rabbitmq',
help='The name of the default broker image')
]
cfg.CONF.register_opts(DEFAULT_OPTS)
def list_opts():
return [('DEFAULT', DEFAULT_OPTS)]


@@ -1,48 +0,0 @@
# -*- encoding: utf-8 -*-
#
# Copyright © 2014 Hewlett-Packard
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
API_SERVICE_OPTS = [
cfg.StrOpt('host_ip',
default='0.0.0.0',
help='The listen IP for the Cue API server.'),
cfg.IntOpt('port',
default=8795,
help='The port for the Cue API server.'),
cfg.IntOpt('max_limit',
default=1000,
help='The maximum number of items returned in a single '
'response from a collection resource.'),
# TODO(sputnik13): this needs to be removed when image selection is done
cfg.StrOpt('os_image_id',
help='The Image ID to use for VMs created as part of a '
'cluster'),
cfg.IntOpt('max_cluster_size',
default=10,
help='Maximum number of nodes in a cluster.'),
]
CONF = cfg.CONF
opt_group = cfg.OptGroup(name='api',
title='Options for the cue-api service')
CONF.register_group(opt_group)
CONF.register_opts(API_SERVICE_OPTS, opt_group)
def list_opts():
return [('api', API_SERVICE_OPTS)]


@@ -1,36 +0,0 @@
# -*- encoding: utf-8 -*-
#
# Copyright © 2012 New Dream Network, LLC (DreamHost)
#
# Author: Doug Hellmann <doug.hellmann@dreamhost.com>
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Access Control Lists (ACLs) control access to the API server."""
from cue.api.middleware import auth_token
def install(app, conf, public_routes):
"""Install ACL check on application.
:param app: A WSGI application.
:param conf: Settings. Dict'ified and passed to keystonemiddleware
:param public_routes: The list of the routes which will be allowed to
access without authentication.
:return: The same WSGI application with ACL installed.
"""
return auth_token.AuthTokenMiddleware(app,
conf=dict(conf),
public_api_routes=public_routes)


@@ -1,94 +0,0 @@
# -*- encoding: utf-8 -*-
# Copyright © 2012 New Dream Network, LLC (DreamHost)
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from oslo_middleware import cors as cors_middleware
import pecan
from cue.api import acl
from cue.api import config
from cue.api import hooks
from cue.api import middleware
from cue.common import policy
auth_opts = [
cfg.StrOpt('auth_strategy',
default='keystone',
help='Method to use for authentication: noauth or keystone.'),
]
API_OPTS = [
cfg.BoolOpt('pecan_debug', default=False,
help='Pecan HTML Debug Interface'),
]
cfg.CONF.register_opts(auth_opts)
cfg.CONF.register_opts(API_OPTS, group='api')
def list_opts():
return [('DEFAULT', auth_opts), ('api', API_OPTS)]
def get_pecan_config():
# Set up the pecan configuration
filename = config.__file__.replace('.pyc', '.py')
return pecan.configuration.conf_from_file(filename)
def setup_app(pecan_config=None, extra_hooks=None):
policy.init()
# Resolve the default config before it is referenced below
if not pecan_config:
pecan_config = get_pecan_config()
app_hooks = [hooks.ConfigHook(),
#hooks.DBHook(),
hooks.ContextHook(pecan_config.app.acl_public_routes),
#hooks.RPCHook(),
#hooks.NoExceptionTracebackHook()
]
if extra_hooks:
app_hooks.extend(extra_hooks)
pecan.configuration.set_config(dict(pecan_config), overwrite=True)
app = pecan.make_app(
pecan_config.app.root,
static_root=pecan_config.app.static_root,
debug=cfg.CONF.api.pecan_debug,
force_canonical=getattr(pecan_config.app, 'force_canonical', True),
hooks=app_hooks,
wrap_app=middleware.ParsableErrorMiddleware,
)
if pecan_config.app.enable_acl:
app = acl.install(app, cfg.CONF, pecan_config.app.acl_public_routes)
# Create a CORS wrapper, and attach ironic-specific defaults that must be
# included in all CORS responses.
app = cors_middleware.CORS(app, cfg.CONF)
return app
class VersionSelectorApplication(object):
def __init__(self):
pc = get_pecan_config()
pc.app.enable_acl = (cfg.CONF.auth_strategy == 'keystone')
self.v1 = setup_app(pecan_config=pc)
def __call__(self, environ, start_response):
return self.v1(environ, start_response)


@ -1,28 +0,0 @@
# -*- mode: python -*-
# -*- encoding: utf-8 -*-
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Use this file for deploying the API service under Apache2 mod_wsgi.
"""
from cue.api import app
from cue.common import service
import oslo_i18n
oslo_i18n.install('cue')
service.prepare_service([])
application = app.VersionSelectorApplication()


@ -1,42 +0,0 @@
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
# Server Specific Configurations
# See https://pecan.readthedocs.org/en/latest/configuration.html#server-configuration # noqa
server = {
'port': '8795',
'host': '0.0.0.0'
}
# Pecan Application Configurations
# See https://pecan.readthedocs.org/en/latest/configuration.html#application-configuration # noqa
app = {
'root': 'cue.api.controllers.root.RootController',
'modules': ['cue.api'],
'static_root': '%(confdir)s/public',
'debug': False,
'enable_acl': True,
'acl_public_routes': [
'/',
'/v1',
],
}
# WSME Configurations
# See https://wsme.readthedocs.org/en/latest/integrate.html#configuration
wsme = {
'debug': cfg.CONF.debug,
}


@ -1 +0,0 @@
__author__ = 'vipul'


@ -1,57 +0,0 @@
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import datetime
import wsme
from wsme import types as wtypes
class APIBase(wtypes.Base):
created_at = wsme.wsattr(datetime.datetime, readonly=True)
"The time in UTC at which the object is created"
updated_at = wsme.wsattr(datetime.datetime, readonly=True)
"The time in UTC at which the object is updated"
@classmethod
def from_db_model(cls, m):
return cls(**(m.as_dict()))
def as_dict(self):
"""Render this object as a dict of its fields."""
return dict((k, getattr(self, k))
for k in self.fields
if hasattr(self, k) and
getattr(self, k) != wsme.Unset)
def unset_fields_except(self, except_list=None):
"""Unset fields so they don't appear in the message body.
:param except_list: A list of fields that won't be touched.
"""
if except_list is None:
except_list = []
for k in self.as_dict():
if k not in except_list:
setattr(self, k, wsme.Unset)
def unset_empty_fields(self):
"""Unset empty fields so they don't appear in message body."""
for k in self.fields:
if hasattr(self, k) and getattr(self, k) is None:
setattr(self, k, wsme.Unset)
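The Unset pattern above keeps empty fields out of API response bodies. A minimal standalone sketch of the same idea, using a plain sentinel object in place of `wsme.Unset` (an assumption for illustration, since wsme is not imported here):

```python
UNSET = object()  # stand-in for wsme.Unset


class APIBase:
    fields = ('created_at', 'updated_at', 'name')

    def __init__(self, **kwargs):
        # Fields not supplied by the caller stay marked as UNSET.
        for k in self.fields:
            setattr(self, k, kwargs.get(k, UNSET))

    def as_dict(self):
        # Only fields that were actually set appear in the output.
        return {k: getattr(self, k) for k in self.fields
                if getattr(self, k) is not UNSET}


obj = APIBase(name='cluster-1')
print(obj.as_dict())  # {'name': 'cluster-1'}
```

Serializing only set fields is what lets `unset_empty_fields()` shrink message bodies without changing the object's schema.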


@ -1,58 +0,0 @@
# Copyright 2013 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pecan
from wsme import types as wtypes
from cue.api.controllers import base
def build_url(resource, resource_args, bookmark=False, base_url=None):
if base_url is None:
base_url = pecan.request.host_url
template = '%(url)s/%(res)s' if bookmark else '%(url)s/v1/%(res)s'
# FIXME(lucasagomes): I'm getting a 404 when doing a GET on
# a nested resource that the URL ends with a '/'.
# https://groups.google.com/forum/#!topic/pecan-dev/QfSeviLg5qs
template += '%(args)s' if resource_args.startswith('?') else '/%(args)s'
return template % {'url': base_url, 'res': resource, 'args': resource_args}
class Link(base.APIBase):
"""A link representation."""
href = wtypes.text
"The url of a link."
rel = wtypes.text
"The name of a link."
type = wtypes.text
"Indicates the type of document/link."
@classmethod
def make_link(cls, rel_name, url, resource, resource_args,
bookmark=False, type=wtypes.Unset):
href = build_url(resource, resource_args,
bookmark=bookmark, base_url=url)
return Link(href=href, rel=rel_name, type=type)
@classmethod
def sample(cls):
sample = cls(href="http://localhost:8795/v1/clusters/"
"eaaca217-e7d8-47b4-bb41-3f99f20eed89",
rel="bookmark")
return sample
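The URL-building logic in `build_url` is self-contained; a minimal sketch with a hypothetical base URL (the real code takes it from `pecan.request.host_url`) behaves like this:

```python
def build_url(resource, resource_args, bookmark=False,
              base_url='http://localhost:8795'):
    # Bookmark links omit the version prefix; regular links include /v1.
    template = '%(url)s/%(res)s' if bookmark else '%(url)s/v1/%(res)s'
    # Query strings attach directly; path arguments get a '/' separator.
    template += '%(args)s' if resource_args.startswith('?') else '/%(args)s'
    return template % {'url': base_url, 'res': resource, 'args': resource_args}


print(build_url('clusters', 'abc123'))
# http://localhost:8795/v1/clusters/abc123
print(build_url('clusters', '?limit=10', bookmark=True))
# http://localhost:8795/clusters?limit=10
```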


@ -1,104 +0,0 @@
# -*- encoding: utf-8 -*-
#
# Copyright © 2012 New Dream Network, LLC (DreamHost)
#
# Author: Doug Hellmann <doug.hellmann@dreamhost.com>
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pecan
from pecan import rest
from wsme import types as wtypes
import wsmeext.pecan as wsme_pecan
from cue.api.controllers import base
from cue.api.controllers import link
from cue.api.controllers import v1
class Version(base.APIBase):
"""An API version representation."""
id = wtypes.text
"The ID of the version, also acts as the release number"
links = [link.Link]
"A Link that point to a specific version of the API"
status = wtypes.text
"The status of this version"
@classmethod
def convert(self, id, status):
version = Version()
version.id = id
version.status = status
version.links = [link.Link.make_link('self', pecan.request.host_url,
id, '', bookmark=True)]
return version
class Root(base.APIBase):
name = wtypes.text
"The name of the API"
description = wtypes.text
"Some information about this API"
versions = [Version]
"Links to all the versions available in this API"
default_version = Version
"A link to the default version of the API"
@classmethod
def convert(self):
"""Builds link to v1 controller."""
root = Root()
root.name = "OpenStack Cue API"
root.description = ("Cue is an OpenStack project which aims to "
"provision Messaging Brokers.")
root.versions = [Version.convert('v1', 'STABLE')]
root.default_version = Version.convert('v1', 'STABLE')
return root
class RootController(rest.RestController):
_versions = ['v1']
"All supported API versions"
_default_version = 'v1'
"The default API version"
v1 = v1.V1Controller()
@wsme_pecan.wsexpose(Root)
def get(self):
# NOTE: The reason convert() is called for every
# request is that we need the host url from
# the request object to build the links.
return Root.convert()
@pecan.expose()
def _route(self, args):
"""Overrides the default routing behavior.
It redirects the request to the default version of the cue API
if the version number is not specified in the url.
"""
if args[0] and args[0] not in self._versions:
args = [self._default_version] + args
return super(RootController, self)._route(args)
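The `_route` override above simply prepends the default version when the first path segment is not a known version. A standalone sketch of that defaulting step (function name is hypothetical):

```python
def route_args(args, versions=('v1',), default='v1'):
    # Prepend the default version when the first path segment
    # is not a recognized API version; guard against empty paths.
    if args and args[0] and args[0] not in versions:
        return [default] + list(args)
    return list(args)


print(route_args(['clusters', 'abc']))  # ['v1', 'clusters', 'abc']
print(route_args(['v1', 'clusters']))   # ['v1', 'clusters']
```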


@ -1,78 +0,0 @@
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Version 1 of the Cue API
"""
import pecan
from pecan import rest
from wsme import types as wtypes
import wsmeext.pecan as wsme_pecan
from cue.api.controllers import base
from cue.api.controllers import link
from cue.api.controllers.v1 import cluster
class V1(base.APIBase):
"""The representation of the version 1 of the API."""
id = wtypes.text
"""The ID of the version, also acts as the release number"""
status = wtypes.text
"""The status of this version"""
clusters = [link.Link]
"""Links to the clusters resource"""
@staticmethod
def convert():
"""Builds link to clusters controller."""
v1 = V1()
v1.id = "v1"
v1.status = "Stable"
v1.clusters = [link.Link.make_link('self', pecan.request.host_url,
'clusters', ''),
link.Link.make_link('bookmark',
pecan.request.host_url, v1.id,
'clusters',
bookmark=True)
]
return v1
class V1Controller(rest.RestController):
"""Version 1 Cue API controller root."""
_versions = ['v1']
"All supported API versions"
_default_version = 'v1'
"The default API version"
clusters = cluster.ClusterController()
@wsme_pecan.wsexpose(V1)
def get(self):
# NOTE: The reason convert() is called for every
# request is that we need the host url from
# the request object to build the links.
return V1.convert()
@pecan.expose()
def _route(self, args):
return super(V1Controller, self)._route(args)


@ -1,405 +0,0 @@
# Copyright 2014 Hewlett-Packard Development Company, L.P.
#
# Authors: Davide Agnello <davide.agnello@hp.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Version 1 of the Cue API
"""
import sys
from novaclient import exceptions as nova_exc
from oslo_config import cfg
from oslo_log import log as logging
from oslo_utils import uuidutils
import pecan
from pecan import rest
import six
import wsme
from wsme import types as wtypes
import wsmeext.pecan as wsme_pecan
from cue.api.controllers import base
import cue.client as client
from cue.common import exception
from cue.common.i18n import _ # noqa
from cue.common.i18n import _LI # noqa
from cue.common import policy
from cue.common import validate_auth_token as auth_validate
from cue import objects
from cue.taskflow import client as task_flow_client
from cue.taskflow.flow import create_cluster
from cue.taskflow.flow import delete_cluster
LOG = logging.getLogger(__name__)
CONF = cfg.CONF
class AuthenticationCredential(wtypes.Base):
"""Representation of a Broker Authentication Method."""
type = wtypes.text
"type of authentication"
token = wtypes.DictType(six.text_type, six.text_type)
"authentication credentials"
class EndPoint(base.APIBase):
"""Representation of an End point."""
def __init__(self, **kwargs):
self.fields = []
endpoint_object_fields = list(objects.Endpoint.fields)
for k in endpoint_object_fields:
# only add fields we expose in the api
if hasattr(self, k):
self.fields.append(k)
setattr(self, k, kwargs.get(k, wtypes.Unset))
type = wtypes.text
"type of endpoint"
uri = wtypes.text
"URL to endpoint"
class Cluster(base.APIBase):
"""Representation of a cluster's details."""
# TODO(dagnello): WSME attribute verification sometimes triggers 500 server
# error when user input was actually invalid (400). Example: if 'size' was
# provided as a string/char, e.g. 'a', api returns 500 server error.
def __init__(self, **kwargs):
self.fields = []
cluster_object_fields = list(objects.Cluster.fields)
for k in cluster_object_fields:
# only add fields we expose in the api
if hasattr(self, k):
self.fields.append(k)
setattr(self, k, kwargs.get(k, wtypes.Unset))
id = wsme.wsattr(wtypes.UuidType(), readonly=True)
"UUID of cluster"
network_id = wtypes.wsattr([wtypes.UuidType()], mandatory=True)
"UUID of the Neutron network"
name = wsme.wsattr(wtypes.text, mandatory=True)
"Name of cluster"
status = wsme.wsattr(wtypes.text, readonly=True)
"Current status of cluster"
flavor = wsme.wsattr(wtypes.text, mandatory=True)
"Flavor of cluster"
size = wsme.wsattr(wtypes.IntegerType(minimum=0, maximum=sys.maxsize),
mandatory=True)
"Number of nodes in cluster"
volume_size = wtypes.IntegerType(minimum=0, maximum=sys.maxsize)
"Volume size for nodes in cluster"
endpoints = wtypes.wsattr([EndPoint], default=[])
"List of endpoints on accessing node"
authentication = wtypes.wsattr(AuthenticationCredential)
"Authentication for accessing message brokers"
error_detail = wsme.wsattr(wtypes.text, mandatory=False)
"Error detail(s) associated with cluster"
def get_complete_cluster(context, cluster_id):
"""Helper to retrieve the api-compatible full structure of a cluster."""
cluster_obj = objects.Cluster.get_cluster_by_id(context, cluster_id)
target = {'tenant_id': cluster_obj.project_id}
policy.check("cluster:get", context, target)
cluster_as_dict = cluster_obj.as_dict()
# convert 'network_id' to list for ClusterDetails compatibility
cluster_as_dict['network_id'] = [cluster_as_dict['network_id']]
# construct api cluster object
cluster = Cluster(**cluster_as_dict)
cluster.endpoints = []
cluster_nodes = objects.Node.get_nodes_by_cluster_id(context, cluster_id)
for node in cluster_nodes:
# extract endpoints from node
node_endpoints = objects.Endpoint.get_endpoints_by_node_id(context,
node.id)
# construct api endpoint objects
node_endpoints_dict = [EndPoint(**obj_endpoint.as_dict()) for
obj_endpoint in node_endpoints]
cluster.endpoints.extend(node_endpoints_dict)
return cluster
def delete_complete_cluster(context, cluster_id):
cluster_obj = objects.Cluster.get_cluster_by_id(context, cluster_id)
target = {'tenant_id': cluster_obj.project_id}
policy.check("cluster:delete", context, target)
# update cluster to deleting
objects.Cluster.update_cluster_deleting(context, cluster_id)
# retrieve cluster nodes
nodes = objects.Node.get_nodes_by_cluster_id(context, cluster_id)
# create list of node ids for the delete cluster flow
node_ids = [node.id for node in nodes]
# retrieve cluster record
cluster = objects.Cluster.get_cluster_by_id(context, cluster_id)
# prepare and post cluster delete job to backend
flow_kwargs = {
'cluster_id': cluster_id,
'node_ids': node_ids,
'group_id': cluster.group_id,
}
job_args = {
'context': context.to_dict(),
}
job_client = task_flow_client.get_client_instance()
# TODO(dagnello): might be better to use request_id for job_uuid
job_uuid = uuidutils.generate_uuid()
job_client.post(delete_cluster, job_args, flow_kwargs=flow_kwargs,
tx_uuid=job_uuid)
LOG.info(_LI('Delete Cluster Request Cluster ID %(cluster_id)s Job ID '
'%(job_id)s') % ({"cluster_id": cluster_id,
"job_id": job_uuid}))
class ClusterController(rest.RestController):
"""Manages operations on specific Cluster of nodes."""
def _validate_flavor(self, image_id, cluster_flavor):
"""Checks if flavor satisfies minimum requirement of image metadata.
:param image_id: image id of the broker.
:param cluster_flavor: flavor id of the cluster.
:raises: exception.ConfigurationError
:raises: exception.InternalServerError
:raises: exception.Invalid
"""
nova_client = client.nova_client()
# get image metadata
try:
image_metadata = nova_client.images.get(image_id)
image_minRam = image_metadata.minRam
image_minDisk = image_metadata.minDisk
except nova_exc.ClientException as ex:
if ex.http_status == 404:
raise exception.ConfigurationError(_('Invalid image %s '
'configured') % image_id)
else:
raise exception.InternalServerError
# get flavor metadata
try:
flavor_metadata = nova_client.flavors.get(cluster_flavor)
flavor_ram = flavor_metadata.ram
flavor_disk = flavor_metadata.disk
except nova_exc.ClientException as ex:
if ex.http_status == 404:
raise exception.Invalid(_('Invalid flavor %s provided') %
cluster_flavor)
else:
raise exception.InternalServerError
# validate flavor with broker image metadata
if (flavor_disk < image_minDisk):
raise exception.Invalid(_("Flavor disk is smaller than the "
"minimum %s required for broker") %
image_minDisk)
elif (flavor_ram < image_minRam):
raise exception.Invalid(_("Flavor ram is smaller than the "
"minimum %s required for broker") %
image_minRam)
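The flavor check above compares nova flavor metadata against the broker image's `minRam`/`minDisk`. It can be exercised in isolation with plain values standing in for the nova responses (the helper below is a sketch, not part of the original module):

```python
def check_flavor(flavor_ram, flavor_disk, image_min_ram, image_min_disk):
    # Mirrors _validate_flavor: the flavor must meet the image's
    # minRam/minDisk metadata or the request is rejected.
    if flavor_disk < image_min_disk:
        raise ValueError('Flavor disk is smaller than the minimum '
                         '%s required for broker' % image_min_disk)
    if flavor_ram < image_min_ram:
        raise ValueError('Flavor ram is smaller than the minimum '
                         '%s required for broker' % image_min_ram)


# A flavor with 2 GB RAM and 20 GB disk satisfies a 1 GB / 10 GB image.
check_flavor(flavor_ram=2048, flavor_disk=20,
             image_min_ram=1024, image_min_disk=10)
```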
@wsme_pecan.wsexpose(Cluster, wtypes.text, status_code=200)
def get_one(self, cluster_id):
"""Return this cluster."""
# validate cluster_id is of type Uuid
try:
wtypes.UuidType().validate(cluster_id)
except ValueError:
raise exception.Invalid(_("Invalid cluster ID format provided"))
context = pecan.request.context
cluster = get_complete_cluster(context, cluster_id)
cluster.unset_empty_fields()
return cluster
@wsme_pecan.wsexpose(None, wtypes.text, status_code=202)
def delete(self, cluster_id):
"""Delete this Cluster."""
# validate cluster_id is of type Uuid
try:
wtypes.UuidType().validate(cluster_id)
except ValueError:
raise exception.Invalid(_("Invalid cluster ID format provided"))
context = pecan.request.context
delete_complete_cluster(context, cluster_id)
@wsme_pecan.wsexpose([Cluster], status_code=200)
def get_all(self):
"""Return list of Clusters."""
context = pecan.request.context
clusters = objects.Cluster.get_clusters(context)
cluster_list = [get_complete_cluster(context, obj_cluster.id)
for obj_cluster in clusters]
for obj_cluster in cluster_list:
obj_cluster.unset_empty_fields()
return cluster_list
@wsme_pecan.wsexpose(Cluster, body=Cluster,
status_code=202)
def post(self, data):
"""Create a new Cluster.
:param data: cluster parameters within the request body.
"""
context = pecan.request.context
request_data = data.as_dict()
cluster_flavor = request_data['flavor']
if data.size <= 0:
raise exception.Invalid(_("Invalid cluster size provided"))
elif data.size > CONF.api.max_cluster_size:
raise exception.RequestEntityTooLarge(
_("Invalid cluster size, max size is: %d")
% CONF.api.max_cluster_size)
if len(data.network_id) > 1:
raise exception.Invalid(_("Invalid number of network_id's"))
# extract username/password
if (data.authentication and data.authentication.type and
data.authentication.token):
auth_validator = auth_validate.AuthTokenValidator.validate_token(
auth_type=data.authentication.type,
token=data.authentication.token)
if not auth_validator or not auth_validator.validate():
raise exception.Invalid(_("Invalid broker authentication "
"parameter(s)"))
else:
raise exception.Invalid(_("Missing broker authentication "
"parameter(s)"))
default_rabbit_user = data.authentication.token['username']
default_rabbit_pass = data.authentication.token['password']
broker_name = CONF.default_broker_name
# get the image id of default broker
image_id = objects.BrokerMetadata.get_image_id_by_broker_name(
context, broker_name)
# validate cluster flavor
self._validate_flavor(image_id, cluster_flavor)
# convert 'network_id' from list to string type for objects/cluster
# compatibility
request_data['network_id'] = request_data['network_id'][0]
# create new cluster object with required data from user
new_cluster = objects.Cluster(**request_data)
# create new cluster with node related data from user
new_cluster.create(context)
# retrieve cluster data
cluster = get_complete_cluster(context, new_cluster.id)
nodes = objects.Node.get_nodes_by_cluster_id(context,
cluster.id)
# create list of node ids for the create cluster flow
node_ids = [node.id for node in nodes]
# prepare and post cluster create job to backend
flow_kwargs = {
'cluster_id': cluster.id,
'node_ids': node_ids,
'user_network_id': cluster.network_id[0],
'management_network_id': CONF.management_network_id,
}
# generate unique erlang cookie to be used by all nodes in the new
# cluster, erlang cookies are strings of up to 255 characters
erlang_cookie = uuidutils.generate_uuid()
job_args = {
'tenant_id': new_cluster.project_id,
'flavor': cluster.flavor,
'image': image_id,
'volume_size': cluster.volume_size,
'context': context.to_dict(),
# TODO(sputnik13): this needs to come from the create request
# and default to a configuration value rather than always using
# the config value
'security_groups': [CONF.os_security_group],
'port': CONF.rabbit_port,
'key_name': CONF.openstack.os_key_name,
'erlang_cookie': erlang_cookie,
'default_rabbit_user': default_rabbit_user,
'default_rabbit_pass': default_rabbit_pass,
}
job_client = task_flow_client.get_client_instance()
# TODO(dagnello): might be better to use request_id for job_uuid
job_uuid = uuidutils.generate_uuid()
job_client.post(create_cluster, job_args,
flow_kwargs=flow_kwargs,
tx_uuid=job_uuid)
LOG.info(_LI('Create Cluster Request Cluster ID %(cluster_id)s '
'Cluster size %(size)s network ID %(network_id)s '
'Job ID %(job_id)s Broker name %(broker_name)s') %
({"cluster_id": cluster.id,
"size": cluster.size,
"network_id": cluster.network_id,
"job_id": job_uuid,
"broker_name": broker_name}))
cluster.additional_information = []
cluster.additional_information.append(
dict(def_rabbit_user=default_rabbit_user))
cluster.additional_information.append(
dict(def_rabbit_pass=default_rabbit_pass))
cluster.unset_empty_fields()
return cluster
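The request validation at the top of `post()` (size bounds, single network) can be sketched on its own; `MAX_CLUSTER_SIZE` stands in for `CONF.api.max_cluster_size` and the function name is hypothetical:

```python
MAX_CLUSTER_SIZE = 10  # assumed stand-in for CONF.api.max_cluster_size


def validate_create_request(size, network_ids):
    # Mirrors the checks at the top of ClusterController.post().
    if size <= 0:
        raise ValueError('Invalid cluster size provided')
    if size > MAX_CLUSTER_SIZE:
        raise ValueError('Invalid cluster size, max size is: %d'
                         % MAX_CLUSTER_SIZE)
    if len(network_ids) > 1:
        raise ValueError('Invalid number of network_ids')


validate_create_request(3, ['net-a'])  # passes silently
```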


@ -1,125 +0,0 @@
# -*- encoding: utf-8 -*-
#
# Copyright © 2012 New Dream Network, LLC (DreamHost)
#
# Author: Doug Hellmann <doug.hellmann@dreamhost.com>
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from pecan import hooks
from cue.common import context
from cue.db import api as dbapi
class ConfigHook(hooks.PecanHook):
"""Attach the config object to the request so controllers can get to it."""
def before(self, state):
state.request.cfg = cfg.CONF
class DBHook(hooks.PecanHook):
"""Attach the dbapi object to the request so controllers can get to it."""
def before(self, state):
state.request.dbapi = dbapi.get_instance()
class ContextHook(hooks.PecanHook):
"""Configures a request context and attaches it to the request.
The following HTTP request headers are used:
X-User-Id or X-User:
Used for context.user_id.
X-Tenant-Id or X-Tenant:
Used for context.tenant.
X-Auth-Token:
Used for context.auth_token.
X-Roles:
Used for setting context.is_admin flag to either True or False.
The flag is set to True, if X-Roles contains either an administrator
or admin substring. Otherwise it is set to False.
"""
def __init__(self, public_api_routes):
self.public_api_routes = public_api_routes
super(ContextHook, self).__init__()
def before(self, state):
user_id = state.request.headers.get('X-User-Id')
tenant_id = state.request.headers.get('X-Project-Id')
domain_id = state.request.headers.get('X-Domain-Id')
domain_name = state.request.headers.get('X-Domain-Name')
auth_token = state.request.headers.get('X-Auth-Token')
is_public_api = state.request.environ.get('is_public_api', False)
state.request.context = context.RequestContext(
auth_token=auth_token,
user=user_id,
tenant=tenant_id,
domain_id=domain_id,
domain_name=domain_name,
is_public_api=is_public_api)
# class RPCHook(hooks.PecanHook):
# """Attach the rpcapi object to the request so controllers can get to it."""
#
# def before(self, state):
# state.request.rpcapi = rpcapi.ConductorAPI()
class NoExceptionTracebackHook(hooks.PecanHook):
"""Workaround rpc.common: deserialize_remote_exception.
deserialize_remote_exception builds rpc exception traceback into error
message, which is then sent to the client. Such behavior is a security
concern, so this hook strips the traceback from the error message.
"""
# NOTE(max_lobur): 'after' hook used instead of 'on_error' because
# 'on_error' never fired for wsme+pecan pair. wsme @wsexpose decorator
# catches and handles all the errors, so 'on_error' dedicated for unhandled
# exceptions never fired.
def after(self, state):
# Omit empty body. Some errors may not have body at this level yet.
if not state.response.body:
return
# Do nothing if there is no error.
if 200 <= state.response.status_int < 400:
return
json_body = state.response.json
# Do not remove traceback when server in debug mode (except 'Server'
# errors when 'debuginfo' will be used for traces).
if cfg.CONF.debug and json_body.get('faultcode') != 'Server':
return
faultstring = json_body.get('faultstring')
traceback_marker = 'Traceback (most recent call last):'
if faultstring and (traceback_marker in faultstring):
# Cut off the traceback.
faultstring = faultstring.split(traceback_marker, 1)[0]
# Remove trailing newlines and spaces if any.
json_body['faultstring'] = faultstring.rstrip()
# Replace the whole json. Cannot change the original one because it's
# generated on the fly.
state.response.json = json_body
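The traceback-stripping step in the hook reduces to one string operation; a standalone sketch (hypothetical helper name):

```python
def strip_traceback(faultstring):
    # Cut everything from the traceback marker onward, as
    # NoExceptionTracebackHook does for error responses.
    marker = 'Traceback (most recent call last):'
    if faultstring and marker in faultstring:
        faultstring = faultstring.split(marker, 1)[0]
    return faultstring.rstrip()


msg = 'Boom happened\nTraceback (most recent call last):\n  File "x.py"'
print(strip_traceback(msg))  # Boom happened
```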


@ -1,23 +0,0 @@
# -*- encoding: utf-8 -*-
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from cue.api.middleware import auth_token
from cue.api.middleware import parsable_error
ParsableErrorMiddleware = parsable_error.ParsableErrorMiddleware
AuthTokenMiddleware = auth_token.AuthTokenMiddleware
__all__ = (ParsableErrorMiddleware,
AuthTokenMiddleware)


@ -1,62 +0,0 @@
# -*- encoding: utf-8 -*-
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import re
from keystonemiddleware import auth_token
from oslo_log import log
import six
from cue.common import exception
from cue.common.i18n import _ # noqa
LOG = log.getLogger(__name__)
class AuthTokenMiddleware(auth_token.AuthProtocol):
"""A wrapper on Keystone auth_token middleware.
Does not perform verification of authentication tokens
for public routes in the API.
"""
def __init__(self, app, conf, public_api_routes=[]):
route_pattern_tpl = r'%s(\.json|\.xml)?$'
try:
self.public_api_routes = [re.compile(route_pattern_tpl % route_tpl)
for route_tpl in public_api_routes]
except re.error as e:
msg = _('Cannot compile public API routes: %s') % e
LOG.error(msg)
raise exception.ConfigInvalid(error_msg=msg)
super(AuthTokenMiddleware, self).__init__(app, conf)
def __call__(self, env, start_response):
path = env.get('PATH_INFO')
if isinstance(path, six.string_types):
path = path.rstrip('/') or path
# Whether the API call targets the public API is needed by some
# other components, so it is saved to the WSGI environment.
env['is_public_api'] = any(map(lambda pattern: re.match(pattern, path),
self.public_api_routes))
if env['is_public_api']:
return self._app(env, start_response)
return super(AuthTokenMiddleware, self).__call__(env, start_response)
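The public-route matching above can be reproduced with just the pattern template and `re`; the routes `'/'` and `'/v1'` below are the same public routes configured in the API's pecan config:

```python
import re

# Same pattern template the middleware compiles for each public route.
route_pattern_tpl = r'%s(\.json|\.xml)?$'
public_routes = [re.compile(route_pattern_tpl % r) for r in ('/', '/v1')]


def is_public(path):
    # Strip a trailing slash (but keep '/' itself), then match.
    path = path.rstrip('/') or path
    return any(p.match(path) for p in public_routes)


print(is_public('/v1'))           # True
print(is_public('/v1.json'))      # True
print(is_public('/v1/clusters'))  # False
```

Requests matching a public route skip Keystone token validation entirely; everything else falls through to `auth_token.AuthProtocol`.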


@ -1,96 +0,0 @@
# -*- encoding: utf-8 -*-
#
# Copyright © 2012 New Dream Network, LLC (DreamHost)
#
# Author: Doug Hellmann <doug.hellmann@dreamhost.com>
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Middleware to replace the plain text message body of an error
response with one formatted so the client can parse it.
Based on pecan.middleware.errordocument
"""
from xml import etree as et
from oslo_log import log
from oslo_serialization import jsonutils
import six
import webob
from cue.common.i18n import _ # noqa
from cue.common.i18n import _LE # noqa
LOG = log.getLogger(__name__)
class ParsableErrorMiddleware(object):
"""Replace error body with something the client can parse."""
def __init__(self, app):
self.app = app
def __call__(self, environ, start_response):
# Request for this state, modified by replace_start_response()
# and used when an error is being reported.
state = {}
def replacement_start_response(status, headers, exc_info=None):
"""Overrides the default response to make errors parsable."""
try:
status_code = int(status.split(' ')[0])
state['status_code'] = status_code
except (ValueError, TypeError): # pragma: nocover
raise Exception(_(
'ErrorDocumentMiddleware received an invalid '
'status %s') % status)
else:
if (state['status_code'] // 100) not in (2, 3):
# Remove some headers so we can replace them later
# when we have the full error message and can
# compute the length.
headers = [(h, v)
for (h, v) in headers
if h not in ('Content-Length', 'Content-Type')
]
# Save the headers in case we need to modify them.
state['headers'] = headers
return start_response(status, headers, exc_info)
app_iter = self.app(environ, replacement_start_response)
if (state['status_code'] // 100) not in (2, 3):
req = webob.Request(environ)
if (req.accept.best_match(['application/json', 'application/xml'])
== 'application/xml'):
try:
# simple check xml is valid
body = [et.ElementTree.tostring(
et.ElementTree.fromstring('<error_message>'
+ '\n'.join(app_iter)
+ '</error_message>'))]
except et.ElementTree.ParseError as err:
LOG.error(_LE('Error parsing HTTP response: %s'), err)
body = ['<error_message>%s' % state['status_code']
+ '</error_message>']
state['headers'].append(('Content-Type', 'application/xml'))
else:
err_msg = b'\n'.join(app_iter)
if six.PY3: # pragma: no cover
err_msg = err_msg.decode('utf-8')
body = jsonutils.dump_as_bytes({'error_message': err_msg})
body = [body]
state['headers'].append(('Content-Type', 'application/json'))
state['headers'].append(('Content-Length', str(len(body[0]))))
else:
body = app_iter
return body
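The control flow above is easy to misread because the status captured by the replacement `start_response` is only inspected after the wrapped app returns. A stdlib-only sketch of the same capture-and-rewrap pattern (the class name, demo app, and error body here are illustrative, not Cue's actual API):

```python
import json


class JsonErrorMiddleware(object):
    """Sketch: rewrap non-2xx/3xx plain-text bodies as parsable JSON."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        state = {}

        def replacement_start_response(status, headers, exc_info=None):
            state['status'] = status
            state['status_code'] = int(status.split(' ')[0])
            if (state['status_code'] // 100) not in (2, 3):
                # Strip headers that are recomputed once the full
                # error body is known.
                headers = [(h, v) for (h, v) in headers
                           if h not in ('Content-Length', 'Content-Type')]
            state['headers'] = headers
            state['exc_info'] = exc_info

        app_iter = self.app(environ, replacement_start_response)
        if (state['status_code'] // 100) in (2, 3):
            # Success: pass the response through untouched.
            start_response(state['status'], state['headers'],
                           state['exc_info'])
            return app_iter
        # Error: replace the body and recompute the stripped headers.
        body = json.dumps(
            {'error_message': b''.join(app_iter).decode('utf-8')}
        ).encode('utf-8')
        state['headers'] += [('Content-Type', 'application/json'),
                             ('Content-Length', str(len(body)))]
        start_response(state['status'], state['headers'], state['exc_info'])
        return [body]


def failing_app(environ, start_response):
    """Hypothetical WSGI app that always fails with a plain-text body."""
    start_response('404 Not Found', [('Content-Type', 'text/plain')])
    return [b'no such cluster']
```

The real middleware additionally negotiates XML via `webob`; the sketch keeps only the JSON branch.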


@@ -1,146 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from keystoneauth1.identity import v2 as keystone_v2_auth
from keystoneauth1.identity import v3 as keystone_v3_auth
from keystoneauth1 import session as keystone_session
import neutronclient.neutron.client as NeutronClient
import novaclient.client as NovaClient
from oslo_config import cfg
CONF = cfg.CONF
OS_OPTS = [
cfg.StrOpt('os_region_name',
help='Region name',
default=None),
cfg.StrOpt('os_username',
help='Openstack Username',
default=None),
cfg.StrOpt('os_password',
help='Openstack Password',
default=None),
cfg.StrOpt('os_auth_version',
help='Openstack authentication version',
choices=('2.0', '3'),
default='3'),
cfg.StrOpt('os_auth_url',
help='Openstack Authentication (Identity) URL',
default=None),
cfg.StrOpt('os_key_name',
help='SSH key to be provisioned to cue VMs',
default=None),
cfg.StrOpt('os_availability_zone',
help='Default availability zone to provision cue VMs',
default=None),
cfg.BoolOpt('os_insecure',
help='Openstack insecure',
default=False),
cfg.StrOpt('os_cacert',
help='Openstack cacert',
default=None),
cfg.StrOpt('os_project_name',
help='Openstack project name',
default=None),
cfg.StrOpt('os_project_domain_name',
help='Openstack project domain name',
default=None),
cfg.StrOpt('os_user_domain_name',
help='Openstack user domain name',
default=None),
cfg.StrOpt('os_endpoint_type',
help='Openstack endpoint type [public|internal|admin]',
default='public',
choices=['public', 'internal', 'admin']),
]
opt_group = cfg.OptGroup(
name='openstack',
title='Options for Openstack.'
)
CONF.register_group(opt_group)
CONF.register_opts(OS_OPTS, group=opt_group)
def nova_client():
keystone_session = get_keystone_session()
endpoint_type = CONF.openstack.os_endpoint_type + 'URL'
return NovaClient.Client(2,
session=keystone_session,
auth_url=CONF.openstack.os_auth_url,
region_name=CONF.openstack.os_region_name,
insecure=CONF.openstack.os_insecure,
cacert=CONF.openstack.os_cacert,
endpoint_type=endpoint_type,
)
def neutron_client():
keystone_session = get_keystone_session()
endpoint_type = CONF.openstack.os_endpoint_type + 'URL'
return NeutronClient.Client('2.0',
session=keystone_session,
auth_url=CONF.openstack.os_auth_url,
region_name=CONF.openstack.os_region_name,
insecure=CONF.openstack.os_insecure,
ca_cert=CONF.openstack.os_cacert,
endpoint_type=endpoint_type,
)
def get_auth_v2():
auth_url = CONF.openstack.os_auth_url
username = CONF.openstack.os_username
password = CONF.openstack.os_password
tenant_name = CONF.openstack.os_project_name
return keystone_v2_auth.Password(auth_url=auth_url,
username=username,
password=password,
tenant_name=tenant_name,
)
def get_auth_v3():
auth_url = CONF.openstack.os_auth_url
username = CONF.openstack.os_username
password = CONF.openstack.os_password
project_name = CONF.openstack.os_project_name
project_domain_name = CONF.openstack.os_project_domain_name
user_domain_name = CONF.openstack.os_user_domain_name
return keystone_v3_auth.Password(auth_url=auth_url,
username=username,
password=password,
project_name=project_name,
project_domain_name=project_domain_name,
user_domain_name=user_domain_name,
)
def get_keystone_session():
insecure = CONF.openstack.os_insecure
if insecure:
verify = False
else:
verify = CONF.openstack.os_cacert
if CONF.openstack.os_auth_version == '2.0':
return keystone_session.Session(auth=get_auth_v2(),
verify=verify)
else:
return keystone_session.Session(auth=get_auth_v3(),
verify=verify)


@@ -1,4 +0,0 @@
import oslo_i18n
oslo_i18n.install('cue')


@@ -1,66 +0,0 @@
# -*- encoding: utf-8 -*-
#
# Copyright 2014-2015 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""The Cue Service API."""
import logging
import sys
from wsgiref import simple_server
from oslo_config import cfg
from oslo_log import log
from six.moves import socketserver
from cue.api import app
from cue.common.i18n import _LI # noqa
from cue.common import service as cue_service
CONF = cfg.CONF
class ThreadedSimpleServer(socketserver.ThreadingMixIn,
simple_server.WSGIServer):
"""A Mixin class to make the API service greenthread-able."""
pass
def main():
# Parse config file and command line options, then start logging
cue_service.prepare_service(sys.argv)
# Build and start the WSGI app
host = CONF.api.host_ip
port = CONF.api.port
wsgi = simple_server.make_server(
host, port,
app.VersionSelectorApplication(),
server_class=ThreadedSimpleServer)
LOG = log.getLogger(__name__)
LOG.info(_LI("Serving on http://%(host)s:%(port)s"),
{'host': host, 'port': port})
LOG.info(_LI("Configuration:"))
CONF.log_opt_values(LOG, logging.INFO)
try:
wsgi.serve_forever()
except KeyboardInterrupt: # pragma: no cover
pass
if __name__ == "__main__": # pragma: no cover
main()


@@ -1,125 +0,0 @@
# -*- encoding: utf-8 -*-
# Copyright 2015 Hewlett-Packard Development Company, L.P.
# Copyright 2012 Bouvet ASA
#
# Author: Endre Karlson <endre.karlson@bouvet.no>
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import sys
from oslo_config import cfg
from oslo_log import log
from stevedore import extension
from cue.common import config
from cue import version
CONF = cfg.CONF
def methods_of(obj):
"""Utility function to get all methods of a object
Get all callable methods of an object that don't start with underscore
returns a list of tuples of the form (method_name, method).
"""
result = []
for i in dir(obj):
if callable(getattr(obj, i)) and not i.startswith('_'):
result.append((i, getattr(obj, i)))
return result
def get_available_commands():
em = extension.ExtensionManager('cue.manage')
return dict([(e.name, e.plugin) for e in em.extensions])
def add_command_parsers(subparsers):
for category, cls in get_available_commands().items():
command_object = cls()
parser = subparsers.add_parser(category)
parser.set_defaults(command_object=command_object)
category_subparsers = parser.add_subparsers(dest='action')
for (action, action_fn) in methods_of(command_object):
action = getattr(action_fn, '_cmd_name', action)
parser = category_subparsers.add_parser(action)
action_kwargs = []
for args, kwargs in getattr(action_fn, 'args', []):
parser.add_argument(*args, **kwargs)
parser.set_defaults(action_fn=action_fn)
parser.set_defaults(action_kwargs=action_kwargs)
category_opt = cfg.SubCommandOpt('category', title="Commands",
help="Available Commands",
handler=add_command_parsers)
def get_arg_string(args):
arg = None
if args[0] == '-':
# NOTE(zhiteng): args starting with FLAGS.oparser.prefix_chars
# are optional args. Notice that the cfg module takes care of
# the actual ArgParser, so prefix_chars is always '-'.
if args[1] == '-':
# This is long optional arg
arg = args[2:]
else:
arg = args[1:]
else:
arg = args
return arg
def fetch_func_args(func):
fn_args = []
for args, kwargs in getattr(func, 'args', []):
arg = kwargs.get('dest', get_arg_string(args[0]))
fn_args.append(getattr(CONF.category, arg))
return fn_args
def main(argv=None, conf_fixture=None):
if argv is None: # pragma: no cover
argv = sys.argv
# Registering cli options directly to the global cfg.CONF causes issues
# for unit/functional tests that test anything but the cmd.manage module
# because cmd.manage adds required cli parameters. A conf_fixture object
# is expected to be passed in only during tests.
if conf_fixture is None: # pragma: no cover
CONF.register_cli_opt(category_opt)
else:
conf_fixture.register_cli_opt(category_opt)
log.register_options(CONF)
config.set_defaults()
CONF(argv[1:], project='cue',
version=version.version_info.version_string())
log.setup(CONF, "cue")
fn = CONF.category.action_fn
fn_args = fetch_func_args(fn)
fn(*fn_args)
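`methods_of` above drives the subcommand discovery for `cue-manage`; a standalone sketch with a hypothetical command class shows the filtering behavior:

```python
def methods_of(obj):
    """Return (name, method) pairs for public callable attributes."""
    return [(name, getattr(obj, name)) for name in dir(obj)
            if callable(getattr(obj, name)) and not name.startswith('_')]


class DemoCommands(object):
    """Hypothetical manage category, standing in for a cue.manage plugin."""

    def database_sync(self):
        return 'synced'

    def _helper(self):
        # Leading underscore: never exposed as a subcommand.
        return 'hidden'
```

Dunder methods inherited from `object` are callable too, but the underscore check excludes them along with `_helper`.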


@@ -1,50 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""cue-monitor
cue-monitor is responsible for actively monitoring cluster statuses
"""
import logging
from oslo_config import cfg
import oslo_log.log as log
from oslo_service import service as openstack_service
from cue.common.i18n import _LI # noqa
import cue.common.service as cue_service
import cue.monitor.monitor_service as cue_monitor_service
import sys
def main():
CONF = cfg.CONF
cue_service.prepare_service(sys.argv)
# Log configuration and other startup information
LOG = log.getLogger(__name__)
LOG.info(_LI("Starting cue-monitor"))
LOG.info(_LI("Configuration:"))
CONF.log_opt_values(LOG, logging.INFO)
monitor = cue_monitor_service.MonitorService()
launcher = openstack_service.launch(CONF, monitor)
launcher.wait()
if __name__ == "__main__": # pragma: no cover
sys.exit(main())


@@ -1,73 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""cue-worker
cue-worker is responsible for executing jobs that are posted by the API in
response to user requests.
TODO: multi-process capability needs to be reimplemented. The first
implementation using oslo.service has issues in the interaction between
eventlet, kazoo, and taskflow.
"""
import logging
import sys
import oslo_config.cfg as cfg
import oslo_log.log as log
from cue.common.i18n import _LI # noqa
import cue.common.service as cue_service
import cue.taskflow.service as tf_service
WORKER_OPTS = [
cfg.IntOpt('count',
help="Number of worker processes to spawn",
default=10)
]
opt_group = cfg.OptGroup(
name='worker',
title='Options for cue worker'
)
cfg.CONF.register_group(opt_group)
cfg.CONF.register_opts(WORKER_OPTS, group=opt_group)
def list_opts():
return [('worker', WORKER_OPTS)]
def main():
# Initialize environment
CONF = cfg.CONF
cue_service.prepare_service(sys.argv)
# Log configuration and other startup information
LOG = log.getLogger(__name__)
LOG.info(_LI("Starting cue workers"))
LOG.info(_LI("Configuration:"))
CONF.log_opt_values(LOG, logging.INFO)
cue_worker = tf_service.ConductorService.create("cue-worker")
cue_worker.handle_signals()
cue_worker.start()
if __name__ == "__main__": # pragma: no cover
sys.exit(main())


@@ -1 +0,0 @@
__author__ = 'vipul'


@@ -1,34 +0,0 @@
# Copyright 2016 Hewlett Packard Enterprise Development Corporation, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from oslo_middleware import cors
def set_defaults():
"""Set all oslo.config default overrides for cue."""
# CORS Defaults
# TODO(krotscheck): Update with https://review.openstack.org/#/c/285368/
cfg.set_defaults(cors.CORS_OPTS,
allow_headers=['X-Auth-Token',
'X-Server-Management-Url'],
expose_headers=['X-Auth-Token',
'X-Server-Management-Url'],
allow_methods=['GET',
'PUT',
'POST',
'DELETE',
'PATCH']
)


@@ -1,86 +0,0 @@
# -*- encoding: utf-8 -*-
# Copyright 2014-2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_context import context
class RequestContext(context.RequestContext):
"""Extends security contexts from the OpenStack common library."""
def __init__(self, auth_token=None, user=None, tenant=None, domain=None,
user_domain=None, project_domain=None, is_admin=False,
read_only=False, show_deleted=False, request_id=None,
resource_uuid=None, overwrite=True, roles=None,
is_public_api=False, domain_id=None, domain_name=None):
"""Stores several additional request parameters:
:param roles: List of roles assigned to the request context.
:param domain_id: The ID of the domain.
:param domain_name: The name of the domain.
:param is_public_api: Specifies whether the request should be processed
without authentication.
"""
super(RequestContext, self).__init__(auth_token=auth_token, user=user,
tenant=tenant, domain=domain,
user_domain=user_domain,
project_domain=project_domain,
is_admin=is_admin,
read_only=read_only,
show_deleted=show_deleted,
request_id=request_id,
resource_uuid=resource_uuid,
overwrite=overwrite)
self.roles = roles or []
self.is_public_api = is_public_api
self.domain_id = domain_id
self.domain_name = domain_name
@property
def project_id(self):
return self.tenant
@property
def tenant_id(self):
return self.tenant
@tenant_id.setter
def tenant_id(self, tenant_id):
self.tenant = tenant_id
@property
def user_id(self):
return self.user
@user_id.setter
def user_id(self, user_id):
self.user = user_id
@classmethod
def from_dict(cls, values):
if 'user_identity' in values:
del values['user_identity']
return cls(**values)
def to_dict(self):
values = super(RequestContext, self).to_dict()
values.update({
"roles": self.roles,
"is_public_api": self.is_public_api,
"domain_id": self.domain_id,
"domain_name": self.domain_name
})
return values
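The property block above exists so callers can use `project_id`/`tenant_id`/`user_id` interchangeably with the base class's `tenant`/`user` attributes. A minimal sketch of that aliasing and the `to_dict`/`from_dict` round trip (simplified; not the oslo.context signature):

```python
class MiniContext(object):
    """Sketch of the tenant/project aliasing used by RequestContext."""

    def __init__(self, user=None, tenant=None, roles=None,
                 is_public_api=False):
        self.user = user
        self.tenant = tenant
        self.roles = roles or []
        self.is_public_api = is_public_api

    # project_id and tenant_id are read/write aliases for `tenant`.
    @property
    def project_id(self):
        return self.tenant

    @property
    def tenant_id(self):
        return self.tenant

    @tenant_id.setter
    def tenant_id(self, value):
        self.tenant = value

    def to_dict(self):
        return {'user': self.user, 'tenant': self.tenant,
                'roles': self.roles, 'is_public_api': self.is_public_api}

    @classmethod
    def from_dict(cls, values):
        # Drop derived keys that the constructor does not accept.
        values.pop('user_identity', None)
        return cls(**values)
```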


@@ -1,156 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2015 Hewlett-Packard Development Company, L.P.
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Cue base exception handling.
Includes decorator for re-raising Cue-type exceptions.
SHOULD include dedicated exception logging.
"""
from oslo_config import cfg
from oslo_log import log as logging
import six
from cue.common.i18n import _ # noqa
from cue.common.i18n import _LE # noqa
LOG = logging.getLogger(__name__)
exc_log_opts = [
cfg.BoolOpt('fatal_exception_format_errors',
default=False,
help='Make exception message format errors fatal.'),
]
CONF = cfg.CONF
CONF.register_opts(exc_log_opts)
def _cleanse_dict(original):
"""Strip all admin_password, new_pass, rescue_pass keys from a dict."""
return dict((k, v) for k, v in original.items() if "_pass" not in k)
class CueException(Exception):
"""Base Cue Exception
To correctly use this class, inherit from it and define
a 'message' property. That message will get printf'd
with the keyword arguments provided to the constructor.
"""
message = _("An unknown exception occurred.")
code = 500
headers = {}
safe = False
def __init__(self, message=None, **kwargs):
self.kwargs = kwargs
if 'code' not in self.kwargs:
try:
self.kwargs['code'] = self.code
except AttributeError: # pragma: no cover
pass
if not message:
try:
message = self.message % kwargs
except Exception as e:
# kwargs doesn't match a variable in the message
# log the issue and the kwargs
LOG.exception(_LE('Exception in string format operation'))
for name, value in kwargs.items():
LOG.error("%s: %s" % (name, value))
if CONF.fatal_exception_format_errors:
raise e
else:
# at least get the core message out if something happened
message = self.message
super(CueException, self).__init__(message)
def format_message(self):
if self.__class__.__name__.endswith('_Remote'):
return self.args[0]
else:
return six.text_type(self)
class NotFound(CueException):
message = _("Not Found")
code = 404
class NotAuthorized(CueException):
message = _("Not authorized.")
code = 403
class OperationNotPermitted(NotAuthorized):
message = _("Operation not permitted.")
class Invalid(CueException):
message = _("Unacceptable parameters.")
code = 400
class Conflict(CueException):
message = _('Conflict.')
code = 409
class RequestEntityTooLarge(CueException):
message = _('Request too large for server.')
code = 413
class TemporaryFailure(CueException):
message = _("Resource temporarily unavailable, please retry.")
code = 503
class InvalidState(Conflict):
message = _("Invalid resource state.")
class NodeAlreadyExists(Conflict):
message = _("A node with UUID %(uuid)s already exists.")
class ConfigurationError(CueException):
message = _("Configuration Error")
class VmBuildingException(CueException):
message = _("VM is in building state")
class VmErrorException(CueException):
message = _("VM is not in a building state")
class InternalServerError(CueException):
message = _("Internal Server Error")
code = 500
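The `CueException` constructor above interpolates `kwargs` into the class-level `message` template and falls back to the raw template when they don't match. A trimmed sketch of that pattern, without the oslo config and logging hooks:

```python
class SketchException(Exception):
    """Sketch of CueException's printf-style message handling."""
    message = "An unknown exception occurred."
    code = 500

    def __init__(self, message=None, **kwargs):
        self.kwargs = kwargs
        self.kwargs.setdefault('code', self.code)
        if not message:
            try:
                message = self.message % kwargs
            except (KeyError, TypeError):
                # kwargs do not match the template; fall back to the
                # raw message rather than losing the error entirely.
                message = self.message
        super(SketchException, self).__init__(message)


class NodeAlreadyExists(SketchException):
    message = "A node with UUID %(uuid)s already exists."
    code = 409
```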


@@ -1,31 +0,0 @@
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import oslo_i18n # noqa
_translators = oslo_i18n.TranslatorFactory(domain='cue')
# The primary translation function using the well-known name "_"
_ = _translators.primary
# Translators for log levels.
#
# The abbreviated names are meant to reflect the usual use of a short
# name like '_'. The "L" is for "log" and the other letter comes from
# the level.
_LI = _translators.log_info
_LW = _translators.log_warning
_LE = _translators.log_error
_LC = _translators.log_critical


@@ -1,88 +0,0 @@
# Copyright (c) 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Policy Engine For Cue."""
from oslo_config import cfg
from oslo_log import log as logging
from oslo_policy import opts
from oslo_policy import policy
from cue.common import exception
from cue.common.i18n import _ # noqa
from cue.common.i18n import _LI # noqa
from cue.common import utils
LOG = logging.getLogger(__name__)
CONF = cfg.CONF
# Add the default policy opts
opts.set_defaults(CONF)
_ENFORCER = None
def reset():
global _ENFORCER
if _ENFORCER:
_ENFORCER.clear()
_ENFORCER = None
def init(default_rule=None):
policy_files = utils.find_config(CONF['oslo_policy'].policy_file)
if len(policy_files) == 0:
msg = 'Unable to determine appropriate policy json file'
raise exception.ConfigurationError(msg)
LOG.info(_LI('Using policy_file found at: %s') % policy_files[0])
with open(policy_files[0]) as fh:
policy_string = fh.read()
rules = policy.Rules.load_json(policy_string, default_rule=default_rule)
global _ENFORCER
if not _ENFORCER:
LOG.debug("Enforcer is not present, recreating.")
_ENFORCER = policy.Enforcer(cfg.CONF)
_ENFORCER.set_rules(rules)
def check(rule, ctxt, target=None, do_raise=True, exc=exception.NotAuthorized):
creds = ctxt.to_dict()
target = target or {}
try:
result = _ENFORCER.enforce(rule, target, creds, do_raise, exc)
except Exception:
result = False
raise
else:
return result
finally:
extra = {'policy': {'rule': rule, 'target': target}}
if result:
LOG.info(_("Policy check succeeded for rule '%(rule)s' "
"on target %(target)s") %
{'rule': rule, 'target': repr(target)}, extra=extra)
else:
LOG.info(_("Policy check failed for rule '%(rule)s' "
"on target %(target)s") %
{'rule': rule, 'target': repr(target)}, extra=extra)


@@ -1,64 +0,0 @@
# -*- encoding: utf-8 -*-
#
# Copyright 2014-2015 Hewlett-Packard Development Company, L.P.
# Copyright © 2012 eNovance <licensing@enovance.com>
#
# Author: Julien Danjou <julien@danjou.info>
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import socket
import sys
from oslo_config import cfg
from oslo_log import log
from cue.common import config
service_opts = [
cfg.IntOpt('periodic_interval',
default=60,
help='Seconds between running periodic tasks.'),
cfg.StrOpt('host',
default=socket.getfqdn(),
help='Name of this node. This can be an opaque identifier. '
'It is not necessarily a hostname, FQDN, or IP address. '
'However, the node name must be valid within '
'an AMQP key, and if using ZeroMQ, a valid '
'hostname, FQDN, or IP address.'),
]
CONF = cfg.CONF
CONF.register_opts(service_opts)
LOG = log.getLogger(__name__)
def prepare_service(argv=None):
log_levels = (CONF.default_log_levels +
['stevedore=INFO', 'keystoneclient=INFO'])
log.set_defaults(default_log_levels=log_levels)
if argv is None:
argv = sys.argv
CONF(argv[1:], project='cue')
log.setup(CONF, 'cue')
config.set_defaults()
def list_opts():
return [('DEFAULT', service_opts)]


@@ -1,46 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
from oslo_config import cfg
from oslo_log import log as logging
LOG = logging.getLogger(__name__)
def find_config(config_path):
"""Find a configuration file using the given hint.
:param config_path: Full or relative path to the config.
:returns: List of config paths
"""
possible_locations = [
config_path,
os.path.join(cfg.CONF.pybasedir, "etc", "cue", config_path),
os.path.join(cfg.CONF.pybasedir, "etc", config_path),
os.path.join(cfg.CONF.pybasedir, config_path),
"/etc/cue/%s" % config_path,
]
found_locations = []
for path in possible_locations:
LOG.debug('Searching for configuration at path: %s' % path)
if os.path.exists(path):
LOG.debug('Found configuration at path: %s' % path)
found_locations.append(os.path.abspath(path))
return found_locations
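`find_config` simply probes a fixed candidate list and returns every hit. A sketch with the search directories passed in explicitly (the real version derives them from `cfg.CONF.pybasedir` and `/etc/cue`):

```python
import os


def find_config(config_path, search_dirs):
    """Return absolute paths of every candidate location that exists."""
    candidates = [config_path] + [os.path.join(d, config_path)
                                  for d in search_dirs]
    return [os.path.abspath(p) for p in candidates if os.path.exists(p)]
```

Callers such as `policy.init()` use only the first hit, so ordering of the candidate list is the precedence rule.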


@@ -1,76 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from cue.common.i18n import _LI # noqa
from oslo_config import cfg
from oslo_log import log as logging
LOG = logging.getLogger(__name__)
CONF = cfg.CONF
MIN_USERNAME_LENGTH = 1
MAX_USERNAME_LENGTH = 255
MIN_PASSWORD_LENGTH = 1
MAX_PASSWORD_LENGTH = 255
PLAIN_AUTH = "PLAIN"
class AuthTokenValidator(object):
@staticmethod
def validate_token(auth_type, token):
auth_validator = None
if auth_type and auth_type.upper() == PLAIN_AUTH:
auth_validator = PlainAuthTokenValidator(token=token)
elif not auth_type:
return AuthTokenValidator()
else:
LOG.info(_LI('Invalid authentication type: %s') % auth_type)
return auth_validator
def validate(self):
return True
class PlainAuthTokenValidator(AuthTokenValidator):
def __init__(self, token):
self.token = token
def validate(self):
valid_username = False
valid_password = False
if self.token:
if 'username' in self.token:
if (self.token['username'] and
(len(self.token['username']) >= MIN_USERNAME_LENGTH) and
(len(self.token['username']) <= MAX_USERNAME_LENGTH)):
valid_username = True
else:
LOG.info(_LI('Invalid username: %s')
% self.token['username'])
if 'password' in self.token:
if (self.token['password'] and
(len(self.token['password']) >= MIN_PASSWORD_LENGTH) and
(len(self.token['password']) <= MAX_PASSWORD_LENGTH)):
valid_password = True
else:
LOG.info(_LI('Invalid password'))
return valid_username and valid_password
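`PlainAuthTokenValidator.validate` reduces to two bounded length checks; a function-style sketch of the same rules (the constants mirror the module's bounds):

```python
MIN_LEN, MAX_LEN = 1, 255


def validate_plain_token(token):
    """Sketch of PlainAuthTokenValidator: both fields length-checked."""
    def field_ok(name):
        value = (token or {}).get(name)
        return bool(value) and MIN_LEN <= len(value) <= MAX_LEN
    return field_ok('username') and field_ok('password')
```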



@@ -1,263 +0,0 @@
# Copyright 2014 Hewlett-Packard Development Company, L.P.
#
# Authors: Davide Agnello <davide.agnello@hp.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Base classes for storage engines
"""
import abc
from oslo_config import cfg
from oslo_db import api as db_api
import six
_BACKEND_MAPPING = {'sqlalchemy': 'cue.db.sqlalchemy.api'}
IMPL = db_api.DBAPI.from_config(cfg.CONF, backend_mapping=_BACKEND_MAPPING,
lazy=True)
def get_instance():
"""Return a DB API instance."""
return IMPL
@six.add_metaclass(abc.ABCMeta)
class Connection(object):
"""Base class for storage system connections."""
@abc.abstractmethod
def __init__(self):
"""Constructor."""
@abc.abstractmethod
def get_clusters(self, context, *args, **kwargs):
"""Returns a list of Cluster objects for specified project_id.
:param context: request context object
:returns: a list of :class:`Cluster` objects
"""
@abc.abstractmethod
def create_cluster(self, context, cluster_values):
"""Creates a new cluster.
:param context: request context object
:param cluster_values: Dictionary of several required items
::
{
'network_id': obj_utils.str_or_none,
'project_id': obj_utils.str_or_none,
'name': obj_utils.str_or_none,
'flavor': obj_utils.str_or_none,
'size': obj_utils.int_or_none,
'volume_size': obj_utils.int_or_none,
}
"""
@abc.abstractmethod
def update_cluster(self, context, cluster_values, cluster_id):
"""Updates values in a cluster record indicated by cluster_id
:param context: request context object
:param cluster_values: Dictionary of cluster values to update
:param cluster_id: UUID of a cluster
"""
@abc.abstractmethod
def get_cluster_by_id(self, context, cluster_id):
"""Returns a Cluster objects for specified cluster_id.
:param context: request context object
:param cluster_id: UUID of a cluster
:returns: a :class:`Cluster` object
"""
@abc.abstractmethod
def get_nodes_in_cluster(self, context, cluster_id):
"""Returns a list of Node objects for specified cluster.
:param context: request context object
:param cluster_id: UUID of the cluster
:returns: a list of :class:`Node` objects
"""
@abc.abstractmethod
def get_node_by_id(self, context, node_id):
"""Returns a node for the specified node_id.
:param context: request context object
:param node_id: UUID of the node
:returns: a :class:`Node` object
"""
@abc.abstractmethod
def update_node(self, context, node_values, node_id):
"""Updates values in a node record indicated by node_id
:param context: request context object
:param node_values: Dictionary of node values to update
:param node_id:
:return:
"""
@abc.abstractmethod
def get_endpoints_in_node(self, context, node_id):
"""Returns a list of Endpoint objects for specified node.
:param context: request context object
:param node_id: UUID of the node
:returns: a list of :class:'Endpoint' objects
"""
@abc.abstractmethod
def create_endpoint(self, context, endpoint_values):
"""Creates a new endpoint.
:param context: request context object
:param endpoint_values: Dictionary of several required items
::
{
'id': obj_utils.str_or_none,
'node_id': obj_utils.str_or_none,
'uri': obj_utils.str_or_none,
'type': obj_utils.str_or_none,
}
"""
@abc.abstractmethod
def get_endpoint_by_id(self, context, endpoint_id):
"""Returns an endpoint for the specified endpoint_id.
:param context: request context object
:param endpoint_id: UUID of the endpoint
:returns: a :class:'Endpoint' object
"""
@abc.abstractmethod
def update_endpoints_by_node_id(self, context, endpoint_values, node_id):
"""Updates values in all endpoints belonging to a specific node
:param context: request context object
:param endpoint_values: Dictionary of endpoint values to update
:param node_id: node id to query endpoints by
"""
@abc.abstractmethod
def update_cluster_deleting(self, context, cluster_id):
"""Marks specified cluster to indicate deletion.
:param context: request context object
:param cluster_id: UUID of a cluster
"""
@abc.abstractmethod
def create_broker(self, context, broker_values):
"""Creates a new broker.
:param context: request context object
:param broker_values: Dictionary of several required items
::
{
'type': obj_utils.str_or_none,
'active_status': obj_utils.bool_or_none
}
"""
@abc.abstractmethod
def get_brokers(self, context):
"""Returns a list of Broker objects.
:param context: request context object
:returns: a list of :class:'Broker' objects
"""
@abc.abstractmethod
def delete_broker(self, context, broker_id):
"""Deletes a Broker object for specified broker_id.
:param context: request context object
:param broker_id: UUID of a broker
"""
@abc.abstractmethod
def update_broker(self, context, broker_id, broker_value):
"""Updates a Broker type/status for specified broker_id.
:param context: request context object
:param broker_id: UUID of a broker
:param broker_value: Dictionary of attribute values to be updated
"""
@abc.abstractmethod
def create_broker_metadata(self, context, metadata_values):
"""Creates a new broker metadata.
:param context: request context object
:param metadata_values: Dictionary of several required items
::
{
'broker_id': UUID of a broker,
'key': obj_utils.str_or_none,
'value': obj_utils.str_or_none
}
"""
@abc.abstractmethod
def get_broker_metadata_by_broker_id(self, context, broker_id):
"""Returns a list of BrokerMetadata objects for specified broker_id.
:param context: request context object
:param broker_id: UUID of a broker
:returns: a list of :class:'BrokerMetadata' objects
"""
@abc.abstractmethod
def delete_broker_metadata(self, context, broker_metadata_id):
"""Deletes a BrokerMetadata object for specified broker_id.
:param context: request context object
:param broker_metadata_id: UUID of a broker metadata
"""
@abc.abstractmethod
def get_image_id_by_broker_name(self, context, broker_name):
"""Returns a image_id for the broker
:param context: request context object
:param: broker name
"""

View File

@ -1,52 +0,0 @@
# A generic, single database configuration.
[alembic]
# path to migration scripts
script_location = %(here)s/alembic
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s
# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false
# default to an empty string because the Neutron migration cli will
# extract the correct value and set it programmatically before alembic is
# fully invoked.
sqlalchemy.url =
# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = WARN
handlers = console
qualname =
[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine
[logger_alembic]
level = INFO
handlers =
qualname = alembic
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S

View File

@ -1,16 +0,0 @@
The alembic/versions directory contains the migration scripts.
Before running these migrations, ensure that the cue database exists.
Currently the database connection string is in cue/db/migration/alembic.ini,
but this should eventually be pulled out into a cue configuration file.
The connection string is set by the line:
sqlalchemy.url = mysql://<user>:<password>@localhost/<database>
To run migrations you must first be in the cue/db/migration directory.
To migrate to the most current version run:
$ alembic upgrade head
To downgrade one migration run:
$ alembic downgrade -1

View File

@ -1,73 +0,0 @@
# Copyright 2014 Hewlett-Packard Development Company, L.P.
#
# Author: Endre Karlson <endre.karlson@hp.com>
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from cue.db.sqlalchemy import api
from cue.db.sqlalchemy import base
from alembic import context
from logging import config as log_config
# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config
# Interpret the config file for Python logging.
# This line sets up loggers basically.
log_config.fileConfig(config.config_file_name)
# set the target for 'autogenerate' support
target_metadata = base.BASE.metadata
def run_migrations_offline():
"""Run migrations in 'offline' mode.
This configures the context with just a URL
and not an Engine, though an Engine is acceptable
here as well. By skipping the Engine creation
we don't even need a DBAPI to be available.
Calls to context.execute() here emit the given string to the
script output.
"""
url = config.get_main_option("sqlalchemy.url")
context.configure(url=url)
with context.begin_transaction():
context.run_migrations()
def run_migrations_online():
"""Run migrations in 'online' mode.
In this scenario we need to create an Engine
and associate a connection with the context.
"""
engine = api.get_session().get_bind()
with engine.connect() as connection:
context.configure(connection=connection,
target_metadata=target_metadata)
with context.begin_transaction():
context.run_migrations()
if context.is_offline_mode():
run_migrations_offline()
else:
run_migrations_online()

View File

@ -1,22 +0,0 @@
"""${message}
Revision ID: ${up_revision}
Revises: ${down_revision}
Create Date: ${create_date}
"""
# revision identifiers, used by Alembic.
revision = ${repr(up_revision)}
down_revision = ${repr(down_revision)}
from alembic import op
import sqlalchemy as sa
${imports if imports else ""}
def upgrade():
${upgrades if upgrades else "pass"}
def downgrade():
${downgrades if downgrades else "pass"}

View File

@ -1,47 +0,0 @@
# -*- encoding: utf-8 -*-
#
# Copyright 2015 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""add error_detail and group_id column in clusters
Revision ID: 17c428e0479e
Revises: 244aa473e595
Create Date: 2015-11-11 12:01:10.769280
"""
# revision identifiers, used by Alembic.
revision = '17c428e0479e'
down_revision = '244aa473e595'
from cue.db.sqlalchemy import types
from alembic import op
from oslo_config import cfg
import sqlalchemy as sa
def upgrade():
op.add_column('clusters', sa.Column('error_detail', sa.Text(),
nullable=True))
op.add_column('clusters', sa.Column('group_id', types.UUID(),
nullable=True))
def downgrade():
db_connection = cfg.CONF.database.connection
if db_connection != "sqlite://": # pragma: nocover
op.drop_column('clusters', 'error_detail')
op.drop_column('clusters', 'group_id')

View File

@ -1,82 +0,0 @@
# -*- encoding: utf-8 -*-
#
# Copyright 2014 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Initial schema
Revision ID: 236f63c96b6a
Revises: None
Create Date: 2014-11-16 12:24:50.885039
"""
# revision identifiers, used by Alembic.
from cue.db.sqlalchemy import types
revision = '236f63c96b6a'
down_revision = None
from alembic import op
import sqlalchemy as sa
def upgrade():
### commands auto generated by Alembic - please adjust! ###
op.create_table('clusters',
sa.Column('id', types.UUID(), nullable=False),
sa.Column('project_id', sa.String(length=36), nullable=False),
sa.Column('network_id', sa.String(length=36), nullable=False),
sa.Column('name', sa.String(length=255), nullable=False),
sa.Column('status', sa.String(length=50), nullable=False),
sa.Column('flavor', sa.String(length=50), nullable=False),
sa.Column('size', sa.Integer(), nullable=False),
sa.Column('volume_size', sa.Integer(), nullable=True),
sa.Column('deleted', sa.Boolean(), nullable=False),
sa.Column('created_at', sa.DateTime(), nullable=False),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.Column('deleted_at', sa.DateTime(), nullable=True),
sa.PrimaryKeyConstraint('id')
)
op.create_table('nodes',
sa.Column('id', types.UUID(), nullable=False),
sa.Column('cluster_id', types.UUID(), nullable=True),
sa.Column('flavor', sa.String(length=36), nullable=False),
sa.Column('instance_id', sa.String(length=36), nullable=True),
sa.Column('status', sa.String(length=50), nullable=False),
sa.Column('deleted', sa.Boolean(), nullable=False),
sa.Column('created_at', sa.DateTime(), nullable=False),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.Column('deleted_at', sa.DateTime(), nullable=True),
sa.ForeignKeyConstraint(['cluster_id'], ['clusters.id'], ),
sa.PrimaryKeyConstraint('id')
)
op.create_table('endpoints',
sa.Column('id', types.UUID(), nullable=False),
sa.Column('node_id', types.UUID(), nullable=False),
sa.Column('type', sa.String(length=255), nullable=False),
sa.Column('uri', sa.String(length=255), nullable=False),
sa.Column('deleted', sa.Boolean(), nullable=False),
sa.ForeignKeyConstraint(['node_id'], ['nodes.id'], ),
sa.PrimaryKeyConstraint('id')
)
### end Alembic commands ###
def downgrade():
### commands auto generated by Alembic - please adjust! ###
op.drop_table('endpoints')
op.drop_table('nodes')
op.drop_table('clusters')
### end Alembic commands ###

View File

@ -1,41 +0,0 @@
# -*- encoding: utf-8 -*-
#
# Copyright 2015 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Add management_ip column in node
Revision ID: 244aa473e595
Revises: 3917e931a55a
Create Date: 2015-10-01 12:02:57.273927
"""
# revision identifiers, used by Alembic.
revision = '244aa473e595'
down_revision = '3917e931a55a'
from alembic import op
from oslo_config import cfg
import sqlalchemy as sa
def upgrade():
op.add_column('nodes', sa.Column('management_ip', sa.String(length=45)))
def downgrade():
db_connection = cfg.CONF.database.connection
if db_connection != "sqlite://": # pragma: nocover
op.drop_column('nodes', 'management_ip')

View File

@ -1,66 +0,0 @@
# -*- encoding: utf-8 -*-
#
# Copyright 2014 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Create broker and brokerMetadata
Revision ID: 3917e931a55a
Revises: 236f63c96b6a
Create Date: 2015-03-30 16:31:57.360063
"""
# revision identifiers, used by Alembic.
revision = '3917e931a55a'
down_revision = '236f63c96b6a'
from alembic import op
import sqlalchemy as sa
from cue.db.sqlalchemy import types
def upgrade():
### commands auto generated by Alembic - please adjust! ###
op.create_table('broker',
sa.Column('id', types.UUID(), nullable=False),
sa.Column('name', sa.String(length=255), nullable=False),
sa.Column('active', sa.Boolean(), nullable=False),
sa.Column('deleted', sa.Boolean(), nullable=False),
sa.Column('created_at', sa.DateTime(), nullable=False),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.Column('deleted_at', sa.DateTime(), nullable=True),
sa.PrimaryKeyConstraint('id')
)
op.create_table('broker_metadata',
sa.Column('id', types.UUID(), nullable=False),
sa.Column('broker_id', types.UUID(), nullable=False),
sa.Column('key', sa.String(length=255), nullable=False),
sa.Column('value', sa.String(length=255), nullable=False),
sa.Column('deleted', sa.Boolean(), nullable=False),
sa.Column('created_at', sa.DateTime(), nullable=False),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.Column('deleted_at', sa.DateTime(), nullable=True),
sa.ForeignKeyConstraint(['broker_id'], ['broker.id'], ),
sa.PrimaryKeyConstraint('id')
)
### end Alembic commands ###
def downgrade():
### commands auto generated by Alembic - please adjust! ###
op.drop_table('broker_metadata')
op.drop_table('broker')
### end Alembic commands ###

View File

@ -1,320 +0,0 @@
# Copyright 2011 VMware, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# Copied from Neutron
import uuid
from cue.common import exception
from cue.common.i18n import _ # noqa
from cue.db import api
from cue.db.sqlalchemy import models
from oslo_config import cfg
from oslo_db import exception as db_exception
from oslo_db import options as db_options
from oslo_db.sqlalchemy import session
from oslo_utils import timeutils
from sqlalchemy.orm import exc as sql_exception
CONF = cfg.CONF
CONF.register_opt(cfg.StrOpt('sqlite_db', default='cue.sqlite'))
db_options.set_defaults(
cfg.CONF, connection='sqlite:///$state_path/$sqlite_db')
_FACADE = None
def _create_facade_lazily():
global _FACADE
if _FACADE is None:
_FACADE = session.EngineFacade.from_config(cfg.CONF, sqlite_fk=True)
return _FACADE
def get_engine():
"""Helper method to grab engine."""
facade = _create_facade_lazily()
return facade.get_engine()
def get_session(autocommit=True, expire_on_commit=False):
"""Helper method to grab session."""
facade = _create_facade_lazily()
return facade.get_session(autocommit=autocommit,
expire_on_commit=expire_on_commit)
def get_backend():
"""The backend is this module itself."""
return Connection()
def model_query(context, model, *args, **kwargs):
"""Query helper for simpler session usage.
:param session: if present, the session to use
"""
session = kwargs.get('session') or get_session()
query = session.query(model, *args)
read_deleted = kwargs.get('read_deleted', False)
project_only = kwargs.get('project_only', True)
if not read_deleted:
query = query.filter_by(deleted=False)
if project_only:
# filter by project_id
if hasattr(model, 'project_id'):
query = query.filter_by(project_id=context.project_id)
return query
def soft_delete(record_values):
"""Mark this object as deleted."""
record_values['deleted'] = True
record_values['deleted_at'] = timeutils.utcnow()
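soft_delete above never removes rows; it only flags them, and model_query filters flagged rows out unless read_deleted is passed. A stdlib-only sketch of the same pattern (datetime standing in for oslo_utils.timeutils):

```python
# Stdlib sketch of the soft-delete pattern used by this backend: flag the
# record and timestamp it, then exclude flagged records from normal queries.
import datetime


def soft_delete(record_values):
    record_values['deleted'] = True
    record_values['deleted_at'] = datetime.datetime.utcnow()


rows = [{'id': 'a', 'deleted': False, 'deleted_at': None},
        {'id': 'b', 'deleted': False, 'deleted_at': None}]
soft_delete(rows[0])
visible = [r for r in rows if not r['deleted']]  # the flagged row drops out
```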
class Connection(api.Connection):
"""SqlAlchemy connection implementation."""
def __init__(self):
pass
def get_clusters(self, context, *args, **kwargs):
query = model_query(context, models.Cluster, *args, **kwargs)
return query.all()
def create_cluster(self, context, cluster_values):
if not cluster_values.get('id'):
cluster_values['id'] = str(uuid.uuid4())
cluster_values['status'] = models.Status.BUILDING
cluster = models.Cluster()
cluster.update(cluster_values)
node_values = {
'cluster_id': cluster_values['id'],
'flavor': cluster_values['flavor'],
'status': models.Status.BUILDING,
}
db_session = get_session()
with db_session.begin():
cluster.save(db_session)
db_session.flush()
for i in range(cluster_values['size']):
node = models.Node()
node_id = str(uuid.uuid4())
node_values['id'] = node_id
node.update(node_values)
node.save(db_session)
return cluster
def update_cluster(self, context, cluster_values,
cluster_id, *args, **kwargs):
cluster_query = (model_query(context, models.Cluster, *args, **kwargs)
.filter_by(id=cluster_id))
# if status is set to deleted, soft delete this cluster record
if ('status' in cluster_values) and (
cluster_values['status'] == models.Status.DELETED):
soft_delete(cluster_values)
cluster_query.update(cluster_values)
def get_cluster_by_id(self, context, cluster_id):
query = model_query(context, models.Cluster).filter_by(id=cluster_id)
try:
cluster = query.one()
except db_exception.DBError:
# Todo(dagnello): User input will be validated from REST API and
# not from DB transactions.
raise exception.Invalid(_("badly formed cluster_id UUID string"))
except sql_exception.NoResultFound:
raise exception.NotFound(_("Cluster was not found"))
return cluster
def get_nodes_in_cluster(self, context, cluster_id):
query = (model_query(context, models.Node)
.filter_by(cluster_id=cluster_id))
# No need to catch user-derived exceptions around not found or badly
# formed UUIDs if these happen, they should be classified as internal
# server errors since the user is not able to access nodes directly.
return query.all()
def get_node_by_id(self, context, node_id):
query = model_query(context, models.Node).filter_by(id=node_id)
return query.one()
def update_node(self, context, node_values, node_id):
node_query = (model_query(context, models.Node).filter_by(id=node_id))
# if status is set to deleted, soft delete this node record
if ('status' in node_values) and (
node_values['status'] == models.Status.DELETED):
soft_delete(node_values)
node_query.update(node_values)
def get_endpoints_in_node(self, context, node_id):
query = model_query(context, models.Endpoint).filter_by(
node_id=node_id)
# No need to catch user-derived exceptions for same reason as above
return query.all()
def create_endpoint(self, context, endpoint_values):
if not endpoint_values.get('id'):
endpoint_values['id'] = str(uuid.uuid4())
endpoint = models.Endpoint()
endpoint.update(endpoint_values)
db_session = get_session()
endpoint.save(db_session)
return endpoint
def get_endpoint_by_id(self, context, endpoint_id):
query = model_query(context, models.Endpoint).filter_by(id=endpoint_id)
return query.one()
def update_endpoints_by_node_id(self, context, endpoint_values, node_id):
endpoints_query = model_query(context, models.Endpoint).filter_by(
node_id=node_id)
# if delete flag is set, we just want to delete these records instead
if 'deleted' in endpoint_values and endpoint_values['deleted']:
endpoints_query.delete()
else:
endpoints_query.update(endpoint_values)
def update_cluster_deleting(self, context, cluster_id):
values = {'status': models.Status.DELETING}
cluster_query = (model_query(context, models.Cluster)
.filter_by(id=cluster_id))
try:
cluster_query.one()
except db_exception.DBError:
# Todo(dagnello): User input will be validated from REST API and
# not from DB transactions.
raise exception.Invalid(_("badly formed cluster_id UUID string"))
except sql_exception.NoResultFound:
raise exception.NotFound(_("Cluster was not found"))
db_session = get_session()
with db_session.begin():
cluster_query.update(values)
nodes_query = model_query(context, models.Node).filter_by(
cluster_id=cluster_id)
nodes_query.update(values)
def create_broker(self, context, broker_values):
broker = models.Broker()
broker.update(broker_values)
db_session = get_session()
broker.save(db_session)
return broker
def get_brokers(self, context):
query = model_query(context, models.Broker)
return query.all()
def delete_broker(self, context, broker_id):
broker_query = (model_query(context, models.Broker)
.filter_by(id=broker_id))
broker_value = {}
soft_delete(broker_value)
broker_query.update(broker_value)
def update_broker(self, context, broker_id, broker_value):
broker_query = (model_query(context, models.Broker)
.filter_by(id=broker_id))
broker_query.update(broker_value)
def create_broker_metadata(self, context, metadata_values):
broker_query = (model_query(context, models.Broker)
.filter_by(id=metadata_values['broker_id']))
try:
# check to see if the broker_id exists
broker_query.one()
broker_metadata = models.BrokerMetadata()
broker_metadata.update(metadata_values)
db_session = get_session()
broker_metadata.save(db_session)
except db_exception.DBError:
raise exception.Invalid(_("Badly formed broker_id UUID string"))
except sql_exception.NoResultFound:
raise exception.NotFound(_("Broker was not found"))
return broker_metadata
def get_broker_metadata_by_broker_id(self, context, broker_id):
query = model_query(context, models.BrokerMetadata).filter_by(
broker_id=broker_id)
broker_metadata = query.all()
return broker_metadata
def delete_broker_metadata(self, context, broker_metadata_id):
query = (model_query(context, models.BrokerMetadata).filter_by(
id=broker_metadata_id))
broker_value = {}
soft_delete(broker_value)
query.update(broker_value)
def get_image_id_by_broker_name(self, context, broker_name):
broker_query = model_query(context, models.Broker).filter_by(
active=True).filter_by(name=broker_name)
try:
selected_broker_id = broker_query.one().id
# select the recently created image id
metadata_query = (model_query(context, models.BrokerMetadata)
.filter_by(key='IMAGE')
.filter_by(broker_id=selected_broker_id)
.order_by((models.BrokerMetadata.created_at.desc(
))).limit(1))
selected_image = metadata_query.one()
except sql_exception.NoResultFound:
raise exception.NotFound(_("Broker was not found"))
image_id = selected_image['value']
return image_id
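get_image_id_by_broker_name above resolves the newest 'IMAGE' metadata row of the active broker; the selection logic can be sketched without a database (the sample rows are made up):

```python
# Sketch of the image lookup above: among a broker's metadata rows, pick the
# most recently created row whose key is 'IMAGE' and return its value.
def latest_image_id(metadata_rows):
    images = [m for m in metadata_rows if m['key'] == 'IMAGE']
    if not images:
        raise LookupError('Broker was not found')
    return max(images, key=lambda m: m['created_at'])['value']


rows = [{'key': 'IMAGE', 'value': 'img-old', 'created_at': 1},
        {'key': 'SEC_GROUP', 'value': 'sg-1', 'created_at': 2},
        {'key': 'IMAGE', 'value': 'img-new', 'created_at': 3}]
```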

View File

@ -1,55 +0,0 @@
# Copyright 2014 Rackspace
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# Copied from Octavia
from cue.db.sqlalchemy import types
import uuid
from oslo_db.sqlalchemy import models
import sqlalchemy as sa
from sqlalchemy.ext import declarative
class CueBase(models.ModelBase):
def as_dict(self):
d = {}
for c in self.__table__.columns:
d[c.name] = self[c.name]
return d
class LookupTableMixin(object):
"""Mixin to add to classes that are lookup tables."""
name = sa.Column(sa.String(255), primary_key=True, nullable=False)
description = sa.Column(sa.String(255), nullable=True)
class IdMixin(object):
"""Id mixin, add to subclasses that have a tenant."""
id = sa.Column(types.UUID(), nullable=False,
default=lambda i: str(uuid.uuid4()),
primary_key=True)
class ProjectMixin(object):
"""Project mixin, add to subclasses that have a project."""
project_id = sa.Column(sa.String(36))
BASE = declarative.declarative_base(cls=CueBase)
class SoftDeleteMixin(object):
deleted_at = sa.Column(sa.DateTime)
deleted = sa.Column(sa.Boolean, default=0)

View File

@ -1,100 +0,0 @@
# Copyright 2014 Hewlett-Packard Development Company, L.P.
#
# Author: Endre Karlson <endre.karlson@hp.com>
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from cue.db.sqlalchemy import base
from cue.db.sqlalchemy import types
from oslo_db.sqlalchemy import models
import sqlalchemy as sa
class Status():
BUILDING = 'BUILDING'
ACTIVE = 'ACTIVE'
DELETING = 'DELETING'
DELETED = 'DELETED'
ERROR = 'ERROR'
DOWN = 'DOWN'
class MetadataKey():
IMAGE = 'IMAGE'
SEC_GROUP = 'SEC_GROUP'
class Endpoint(base.BASE, base.IdMixin):
__tablename__ = 'endpoints'
node_id = sa.Column(types.UUID(), sa.ForeignKey('nodes.id'),
primary_key=True)
uri = sa.Column(sa.String(255), nullable=False)
type = sa.Column(sa.String(length=255), nullable=False)
deleted = sa.Column(sa.Boolean(), default=False, nullable=False)
sa.Index("endpoints_id_idx", "id", unique=True)
sa.Index("endpoints_nodes_id_idx", "node_id", unique=False)
class Node(base.BASE, base.IdMixin, models.TimestampMixin,
base.SoftDeleteMixin):
__tablename__ = 'nodes'
cluster_id = sa.Column(
'cluster_id', types.UUID(),
sa.ForeignKey('clusters.id'), nullable=False)
flavor = sa.Column(sa.String(36), nullable=False)
instance_id = sa.Column(sa.String(36), nullable=True)
status = sa.Column(sa.String(50), nullable=False)
management_ip = sa.Column(sa.String(45), nullable=True)
sa.Index("nodes_id_idx", "id", unique=True)
sa.Index("nodes_cluster_id_idx", "cluster_id", unique=False)
class Cluster(base.BASE, base.IdMixin, models.TimestampMixin,
base.SoftDeleteMixin):
__tablename__ = 'clusters'
project_id = sa.Column(sa.String(36), nullable=False)
network_id = sa.Column(sa.String(36), nullable=False)
name = sa.Column(sa.String(255), nullable=False)
status = sa.Column(sa.String(50), nullable=False)
flavor = sa.Column(sa.String(50), nullable=False)
size = sa.Column(sa.Integer(), default=1, nullable=False)
volume_size = sa.Column(sa.Integer(), nullable=True)
error_detail = sa.Column(sa.Text(), nullable=True)
group_id = sa.Column(types.UUID(), nullable=True)
sa.Index("clusters_cluster_id_idx", "cluster_id", unique=True)
class Broker(base.BASE, base.IdMixin, models.TimestampMixin,
base.SoftDeleteMixin):
__tablename__ = 'broker'
name = sa.Column(sa.String(255), nullable=False)
active = sa.Column(sa.Boolean(), default=False, nullable=False)
sa.Index("broker_id_idx", "id", unique=True)
class BrokerMetadata(base.BASE, base.IdMixin, models.TimestampMixin,
base.SoftDeleteMixin):
__tablename__ = 'broker_metadata'
broker_id = sa.Column(
'broker_id', types.UUID(),
sa.ForeignKey('broker.id'), nullable=False)
key = sa.Column(sa.String(255), nullable=False)
value = sa.Column(sa.String(255), nullable=False)
sa.Index("brokerMetadata_id_idx", "id", unique=True)
sa.Index("brokerMetadata_broker_id_idx", "broker_id", unique=False)

View File

@ -1,54 +0,0 @@
# Copyright 2012 Managed I.T.
#
# Author: Kiall Mac Innes <kiall@managedit.ie>
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import uuid
from sqlalchemy.types import TypeDecorator, CHAR
from sqlalchemy.dialects.postgresql import UUID as pgUUID
class UUID(TypeDecorator):
"""Platform-independent UUID type.
Uses Postgresql's UUID type, otherwise uses
CHAR(32), storing as stringified hex values.
Copied verbatim from SQLAlchemy documentation.
"""
impl = CHAR
def load_dialect_impl(self, dialect):
if dialect.name == 'postgresql':
return dialect.type_descriptor(pgUUID())
else:
return dialect.type_descriptor(CHAR(32))
def process_bind_param(self, value, dialect):
if value is None:
return value
elif dialect.name == 'postgresql':
return str(value)
else:
if not isinstance(value, uuid.UUID):
return "%.32x" % uuid.UUID(value)
else:
# hexstring
return "%.32x" % value
def process_result_value(self, value, dialect):
if value is None:
return value
else:
return str(uuid.UUID(value))
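For non-PostgreSQL backends the type above stores UUIDs as 32 hex characters; the round trip can be sketched with the stdlib alone (this sketch uses the explicit `.int` attribute, a Python 3 safe spelling of the formatting done above):

```python
# Stdlib round trip mirroring the CHAR(32) path of the UUID TypeDecorator
# above: store as 32 undashed hex chars, read back in canonical dashed form.
import uuid


def to_db(value):
    # process_bind_param for a non-postgresql dialect
    if not isinstance(value, uuid.UUID):
        value = uuid.UUID(value)
    return "%.32x" % value.int


def from_db(stored):
    # process_result_value: back to the canonical dashed string
    return str(uuid.UUID(stored))


original = '550e8400-e29b-41d4-a716-446655440000'
stored = to_db(original)
```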

View File

View File

@ -1,40 +0,0 @@
# Copyright 2014 Hewlett-Packard Development Company, L.P.
#
# Author: Endre Karlson <endre.karlson@hp.com>
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# Copied: Designate
from cue.common import context
# Decorators for actions
def args(*args, **kwargs):
def _decorator(func):
func.__dict__.setdefault('args', []).insert(0, (args, kwargs))
return func
return _decorator
def name(name):
"""Give a command a alternate name."""
def _decorator(func):
func.__dict__['_cmd_name'] = name
return func
return _decorator
class Commands(object):
def __init__(self):
self.context = context.RequestContext(is_admin=True)
self.context.request_id = 'cue-manage'
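The args decorator above accumulates argparse-style specs on the command function itself, inserting at the front because decorators apply bottom-up; a self-contained sketch (hypothetical command with arbitrary help strings):

```python
# Sketch of the @args decorator pattern above: each call stores its argparse
# spec in func.__dict__['args'], front-inserted because the decorator
# nearest the def runs first.
def args(*a, **kw):
    def _decorator(func):
        func.__dict__.setdefault('args', []).insert(0, (a, kw))
        return func
    return _decorator


@args('NAME', help='Broker name')
@args('ACTIVE', help='Broker active status')
def add(name, active):
    pass
```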

View File

@ -1,152 +0,0 @@
# Copyright 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from oslo_utils import strutils
import prettytable
from cue.common import context as cue_context
from cue.common import exception
from cue.common.i18n import _ # noqa
from cue.db.sqlalchemy import models
from cue.manage import base
from cue import objects
CONF = cfg.CONF
class BrokerCommands(base.Commands):
"""Broker commands for accessing broker and broker_metadata tables.
- To add a new broker use 'broker add'.
    - To add new metadata for a broker, get the broker_id using 'broker list'
and use the id in the command 'broker add_metadata'.
"""
def __init__(self):
super(BrokerCommands, self).__init__()
self.context = cue_context.RequestContext()
@base.args('NAME', help="Broker name")
    @base.args('ACTIVE', help="Broker active status (boolean)")
def add(self, broker_name, active):
"""Add a new broker."""
status = strutils.bool_from_string(active)
broker_values = {
'name': broker_name,
'active': status,
}
broker = objects.Broker(**broker_values)
new_broker = broker.create_broker(self.context)
new_broker_table = prettytable.PrettyTable(
["Broker id", "Broker Name", "Active", "Created Time",
"Updated Time", "Deleted Time", ])
new_broker_table.add_row(
[new_broker.id, new_broker.name, new_broker.active,
new_broker.created_at, new_broker.updated_at,
new_broker.deleted_at])
print(new_broker_table)
return new_broker
def list(self):
"""List all the brokers."""
broker_list = objects.Broker.get_brokers(self.context)
list_table = prettytable.PrettyTable(["Broker id", "Broker Name",
"Active", "Created Time",
"Updated Time", "Deleted Time",
])
for broker in broker_list:
list_table.add_row([broker.id, broker.name, broker.active,
broker.created_at, broker.updated_at,
broker.deleted_at])
print(list_table)
return broker_list
@base.args('ID', help='Broker id')
def delete(self, broker_id):
"""Delete a broker."""
broker_id = {'id': broker_id}
broker_obj = objects.Broker(**broker_id)
broker_obj.delete_broker(self.context)
@base.args('ID', help='Broker id')
@base.args('--name', nargs='?', help='Broker name')
    @base.args('--active', nargs='?', help='Broker active status (boolean)')
def update(self, broker_id, broker_name, active):
"""Update name/active field or both the fields for a given broker."""
broker_value = {}
if broker_name is not None:
broker_value['name'] = broker_name
if active is not None:
active = strutils.bool_from_string(active)
broker_value['active'] = active
broker_value['id'] = broker_id
broker_obj = objects.Broker(**broker_value)
broker_obj.update_broker(self.context)
@base.args('ID', help='Broker id')
@base.args('--image', dest='image_id', nargs='?', help='Image id')
@base.args('--sec-group', dest='sec_group', nargs='?',
help='Security group')
def add_metadata(self, broker_id, image_id, sec_group):
"""Add broker metadata - image and sec group for the given broker_id.
"""
if image_id is not None:
metadata_value = {
'key': models.MetadataKey.IMAGE,
'value': image_id,
'broker_id': broker_id
}
metadata = objects.BrokerMetadata(**metadata_value)
metadata.create_broker_metadata(self.context)
if sec_group is not None:
metadata_value = {
'key': models.MetadataKey.SEC_GROUP,
'value': sec_group,
'broker_id': broker_id
}
metadata = objects.BrokerMetadata(**metadata_value)
metadata.create_broker_metadata(self.context)
if image_id is None and sec_group is None:
            raise exception.Invalid(_("Requires at least one argument"))
@base.args('ID', help='Broker id')
def list_metadata(self, broker_id):
"""List broker metadata for the given broker_id."""
broker_metadata = (
objects.BrokerMetadata.
get_broker_metadata_by_broker_id(self.context, broker_id))
list_table = prettytable.PrettyTable(["Broker_Metadata id",
"Broker id", "Key", "Value",
"Created Time", "Updated Time",
"Deleted Time"])
for broker in broker_metadata:
list_table.add_row([broker.id, broker.broker_id, broker.key,
broker.value, broker.created_at,
broker.updated_at, broker.deleted_at])
print(list_table)
return broker_metadata
@base.args('ID', help='Broker metadata id')
def delete_metadata(self, broker_metadata_id):
"""Delete broker metadata for the given broker_metadata_id."""
broker_metadata_id = {'id': broker_metadata_id}
broker_obj = objects.BrokerMetadata(**broker_metadata_id)
broker_obj.delete_broker_metadata(self.context)

View File

@ -1,67 +0,0 @@
# Copyright 2014 Hewlett-Packard Development Company, L.P.
#
# Author: Endre Karlson <endre.karlson@hp.com>
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# Copied: Designate
import os
from oslo_config import cfg
from oslo_db import options
from oslo_db.sqlalchemy.migration_cli import manager as migration_manager
from cue.manage import base
CONF = cfg.CONF
cfg.CONF.import_opt('connection', 'cue.db.api',
group='database')
def get_manager():
alembic_path = os.path.join(os.path.dirname(__file__),
'..', 'db', 'sqlalchemy', 'alembic.ini')
migrate_path = os.path.join(os.path.dirname(__file__),
'..', 'db', 'sqlalchemy', 'alembic')
migration_config = {'alembic_ini_path': alembic_path,
'alembic_repo_path': migrate_path,
'db_url': CONF.database.connection}
return migration_manager.MigrationManager(migration_config)
class DatabaseCommands(base.Commands):
def __init__(self):
options.set_defaults(CONF)
def version(self):
print("Version %s" % get_manager().version())
@base.args('revision', nargs='?')
def upgrade(self, revision):
get_manager().upgrade(revision)
@base.args('revision', nargs='?')
def downgrade(self, revision):
get_manager().downgrade(revision)
@base.args('revision', nargs='?')
def stamp(self, revision):
get_manager().stamp(revision)
@base.args('-m', '--message', dest='message')
@base.args('--autogenerate', action='store_true')
def revision(self, message, autogenerate):
get_manager().revision(message=message, autogenerate=autogenerate)

View File

@ -1,33 +0,0 @@
# Copyright 2015 Hewlett-Packard Development Company, L.P.
#
# Author: Endre Karlson <endre.karlson@hp.com>
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import contextlib
from oslo_config import cfg
from cue.manage import base
from cue.taskflow import client
CONF = cfg.CONF
class TaskFlowCommands(base.Commands):
def __init__(self):
super(TaskFlowCommands, self).__init__()
def upgrade(self):
be = client.create_persistence()
with contextlib.closing(be.get_connection()) as conn:
conn.upgrade()

View File

@ -1,37 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
CONF = cfg.CONF
MONITOR_OPTS = [
cfg.IntOpt('loop_interval_seconds',
               help='How often cluster status is checked, in seconds.',
default=60)
]
opt_group = cfg.OptGroup(
name='cue_monitor',
title='Options for cue-monitor.'
)
CONF.register_group(opt_group)
CONF.register_opts(MONITOR_OPTS, group='cue_monitor')
def list_opts():
return [('cue_monitor', MONITOR_OPTS)]
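Registered this way, the option is read from a `[cue_monitor]` section of the service configuration; a hypothetical fragment showing the default:

```ini
[cue_monitor]
# How often cluster status is checked, in seconds.
loop_interval_seconds = 60
```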

View File

@ -1,117 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from oslo_service import loopingcall
from oslo_service import service
from tooz import coordination
from cue import objects
import cue.taskflow.client as taskflow_client
from cue.taskflow.flow import check_cluster_status
class MonitorService(service.Service):
def __init__(self):
super(MonitorService, self).__init__()
coord_url = ("%s://%s:%s"
% (
cfg.CONF.taskflow.coord_url,
cfg.CONF.taskflow.zk_hosts,
cfg.CONF.taskflow.zk_port
))
self.coordinator = coordination.get_coordinator(
coord_url, b'cue-monitor')
self.coordinator.start()
# Create a lock
self.lock = self.coordinator.get_lock(b"status_check")
def start(self):
loop_interval_seconds = int(cfg.CONF.cue_monitor.loop_interval_seconds)
pulse = loopingcall.FixedIntervalLoopingCall(
self.check
)
pulse.start(interval=loop_interval_seconds)
pulse.wait()
# On stop, try to release the znode
def stop(self):
self.lock.release()
self.coordinator.stop()
def wait(self):
pass
def reset(self):
self.lock.release()
self.coordinator.stop()
def check(self):
if not self.lock.acquired:
self.lock.acquire(blocking=False)
if self.lock.acquired:
clusters = get_cluster_id_node_ids()
taskflow_client_instance = taskflow_client.get_client_instance()
job_list = taskflow_client_instance.joblist()
cluster_ids = []
for job in job_list:
if 'cluster_status_check' in job.details['store']:
cluster_ids.append(job.details['store']['cluster_id'])
filtered_clusters = []
for cluster in clusters:
if cluster[0] not in cluster_ids:
filtered_clusters.append(cluster)
for cluster in filtered_clusters:
job_args = {
'cluster_status_check': '',
'cluster_id': cluster[0],
'context': {},
'default_rabbit_user': 'cue_monitor',
'default_rabbit_pass': cluster[0],
}
flow_kwargs = {
'cluster_id': cluster[0],
'node_ids': cluster[1]
}
taskflow_client_instance.post(check_cluster_status, job_args,
flow_kwargs=flow_kwargs)
# Returns a list of tuples where [0] is cluster_id
# and [1] is a list of that cluster's node ids
def get_cluster_id_node_ids():
clusters = objects.Cluster.get_clusters(None, project_only=False)
cluster_ids = []
for cluster in clusters:
if cluster.status not in ['ACTIVE', 'DOWN']:
continue
node_ids = []
for node in objects.Node.get_nodes_by_cluster_id(None, cluster.id):
node_ids.append(node.id)
cluster_ids.append((cluster.id, node_ids))
return cluster_ids
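The monitor's leader-election pattern — take a non-blocking distributed lock, and only the holder posts status-check jobs — can be sketched with a local stand-in for the tooz lock (the `CoordLock` class below is illustrative; the real lock is a ZooKeeper znode):

```python
import threading

class CoordLock(object):
    """Local stand-in for a tooz distributed lock (illustration only)."""
    def __init__(self, backend):
        self._backend = backend
        self.acquired = False

    def acquire(self, blocking=True):
        self.acquired = self._backend.acquire(blocking)
        return self.acquired

def check(lock, do_work):
    # Mirrors MonitorService.check: try to take the lock without
    # blocking; only the current holder performs the status checks.
    if not lock.acquired:
        lock.acquire(blocking=False)
    if lock.acquired:
        do_work()

shared = threading.Lock()          # stands in for the shared znode
monitor_a = CoordLock(shared)
monitor_b = CoordLock(shared)
ran = []
check(monitor_a, lambda: ran.append('a'))  # acquires the lock and runs
check(monitor_b, lambda: ran.append('b'))  # lock held elsewhere: skipped
assert ran == ['a']
```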

View File

@ -1,36 +0,0 @@
# Copyright 2014 Hewlett-Packard Development Company, L.P.
#
# Authors: Davide Agnello <davide.agnello@hp.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from cue.objects import broker
from cue.objects import broker_metadata
from cue.objects import cluster
from cue.objects import endpoint
from cue.objects import node
Cluster = cluster.Cluster
Node = node.Node
Endpoint = endpoint.Endpoint
Broker = broker.Broker
BrokerMetadata = broker_metadata.BrokerMetadata
__all__ = (Cluster,
Endpoint,
Node,
Broker,
BrokerMetadata)

View File

@ -1,131 +0,0 @@
# Copyright 2014 Hewlett-Packard Development Company, L.P.
#
# Authors: Davide Agnello <davide.agnello@hp.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from cue.common.i18n import _LE # noqa
from cue.objects import utils as obj_utils
import collections
from oslo_log import log as logging
import six
LOG = logging.getLogger('object')
def get_attrname(name):
"""Return the mangled name of the attribute's underlying storage."""
return '_%s' % name
def make_class_properties(cls):
# NOTE(danms/comstud): Inherit fields from super classes.
# mro() returns the current class first and returns 'object' last, so
# those can be skipped. Also be careful to not overwrite any fields
# that already exist. And make sure each cls has its own copy of
# fields and that it is not sharing the dict with a super class.
cls.fields = dict(cls.fields)
for supercls in cls.mro()[1:-1]:
if not hasattr(supercls, 'fields'):
continue
for name, field in supercls.fields.items():
if name not in cls.fields:
cls.fields[name] = field
for name, typefn in cls.fields.items():
def getter(self, name=name):
attrname = get_attrname(name)
if not hasattr(self, attrname):
self.obj_load_attr(name)
return getattr(self, attrname)
def setter(self, value, name=name, typefn=typefn):
self._changed_fields.add(name)
try:
return setattr(self, get_attrname(name), typefn(value))
except Exception:
attr = "%s.%s" % (self.obj_name(), name)
LOG.exception(_LE('Error setting %(attr)s'),
{'attr': attr})
raise
setattr(cls, name, property(getter, setter))
class CueObjectMetaclass(type):
"""Metaclass that allows tracking of object classes."""
# NOTE(danms): This is what controls whether object operations are
# remoted. If this is not None, use it to remote things over RPC.
indirection_api = None
def __init__(cls, names, bases, dict_):
if not hasattr(cls, '_obj_classes'):
# This means this is a base class using the metaclass. I.e.,
# the 'CueObject' class.
cls._obj_classes = collections.defaultdict(list)
def _vers_tuple(obj):
return tuple([int(x) for x in obj.VERSION.split(".")])
# Add the subclass to CueObject._obj_classes. If the
# same version already exists, replace it. Otherwise,
# keep the list with newest version first.
make_class_properties(cls)
@six.add_metaclass(CueObjectMetaclass)
class CueObject(object):
"""Base class for Cue Objects."""
fields = {
'deleted': obj_utils.bool_or_none,
}
@classmethod
def obj_name(cls):
"""Get canonical object name.
This object name will be used over the wire for remote hydration.
"""
return cls.__name__
def __init__(self, **kwargs):
self._changed_fields = set()
for key, value in kwargs.items():
self[key] = value
def __setitem__(self, key, value):
setattr(self, key, value)
def __getitem__(self, item):
return getattr(self, item)
def as_dict(self):
return dict((k, getattr(self, k))
for k in self.fields
if hasattr(self, k))
def obj_get_changes(self):
"""Returns dict of changed fields and their new values."""
changes = {}
for key in self._changed_fields:
changes[key] = self[key]
return changes
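The property machinery above coerces every field write through its `typefn` and records it in `_changed_fields`; a minimal sketch of that effect, with the metaclass reduced to a helper function:

```python
def str_or_none(val):
    # Stand-in for cue.objects.utils.str_or_none.
    return val if val is None else str(val)

def make_props(cls):
    # Simplified make_class_properties: coerce on set, track changes.
    for name, typefn in cls.fields.items():
        attrname = '_%s' % name

        def getter(self, attrname=attrname):
            return getattr(self, attrname)

        def setter(self, value, name=name, attrname=attrname,
                   typefn=typefn):
            self._changed_fields.add(name)
            setattr(self, attrname, typefn(value))

        setattr(cls, name, property(getter, setter))

class Mini(object):
    fields = {'name': str_or_none}
    def __init__(self):
        self._changed_fields = set()

make_props(Mini)
m = Mini()
m.name = 123                      # coerced to text on assignment
assert m.name == '123'
assert m._changed_fields == {'name'}
```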

View File

@ -1,78 +0,0 @@
# Copyright 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from cue.db import api as db_api
from cue.objects import base
from cue.objects import utils as obj_utils
class Broker(base.CueObject):
dbapi = db_api.get_instance()
fields = {
'id': obj_utils.str_or_none,
'name': obj_utils.str_or_none,
'active': obj_utils.bool_or_none,
'deleted': obj_utils.bool_or_none,
'created_at': obj_utils.datetime_or_str_or_none,
'updated_at': obj_utils.datetime_or_str_or_none,
'deleted_at': obj_utils.datetime_or_str_or_none,
}
@staticmethod
def _from_db_object(broker, db_broker):
"""Convert a database object to a universal broker object."""
for field in broker.fields:
broker[field] = db_broker[field]
return broker
def create_broker(self, context):
"""Creates a new broker.
:param context: request context object
"""
broker_values = self.as_dict()
db_broker = self.dbapi.create_broker(context, broker_values)
return self._from_db_object(self, db_broker)
def delete_broker(self, context):
"""Deletes a Broker object for specified broker_id.
:param context: request context object
"""
self.dbapi.delete_broker(context, self.id)
def update_broker(self, context):
"""Updates a Broker type/status for specified broker_id.
:param context: request context object
"""
broker_value = self.as_dict()
self.dbapi.update_broker(context, self.id, broker_value)
@classmethod
def get_brokers(cls, context):
"""Returns a list of Broker objects.
:param context: request context object
:returns: a list of :class:'Broker' object
"""
db_brokers = cls.dbapi.get_brokers(context)
return [Broker._from_db_object(Broker(), obj) for obj in db_brokers]

View File

@ -1,87 +0,0 @@
# Copyright 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from cue.db import api as db_api
from cue.objects import base
from cue.objects import utils as obj_utils
class BrokerMetadata(base.CueObject):
dbapi = db_api.get_instance()
fields = {
'id': obj_utils.str_or_none,
'broker_id': obj_utils.str_or_none,
'key': obj_utils.str_or_none,
'value': obj_utils.str_or_none,
'deleted': obj_utils.bool_or_none,
'created_at': obj_utils.datetime_or_str_or_none,
'updated_at': obj_utils.datetime_or_str_or_none,
'deleted_at': obj_utils.datetime_or_str_or_none,
}
@staticmethod
def _from_db_object(broker_metadata, db_broker_metadata):
"""Convert a database object to a universal brokerMetadata object."""
for field in BrokerMetadata.fields:
broker_metadata[field] = db_broker_metadata[field]
return broker_metadata
def create_broker_metadata(self, context):
"""Creates a new broker metadata.
:param context: request context object
"""
metadata_values = self.as_dict()
db_broker = self.dbapi.create_broker_metadata(context, metadata_values)
self._from_db_object(self, db_broker)
def delete_broker_metadata(self, context):
"""Deletes a BrokerMetadata object for specified broker_id.
:param context: request context object
"""
self.dbapi.delete_broker_metadata(context, self.id)
@classmethod
def get_broker_metadata_by_broker_id(cls, context, broker_id):
"""Returns a list of BrokerMetadata objects for specified broker_id.
:param context: request context object
:param broker_id: broker id
:returns: a list of :class:'BrokerMetadata' object
"""
db_broker_metadata = cls.dbapi.get_broker_metadata_by_broker_id(
context, broker_id)
return [BrokerMetadata._from_db_object(BrokerMetadata(), obj)
for obj in db_broker_metadata]
@classmethod
def get_image_id_by_broker_name(cls, context, broker_name):
        """Returns an image_id for the broker.
        :param context: request context object
        :param broker_name: broker name
"""
image_id = cls.dbapi.get_image_id_by_broker_name(context, broker_name)
return image_id

View File

@ -1,106 +0,0 @@
# Copyright 2014 Hewlett-Packard Development Company, L.P.
#
# Authors: Davide Agnello <davide.agnello@hp.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from cue.db import api as db_api
from cue.objects import base
from cue.objects import utils as obj_utils
class Cluster(base.CueObject):
dbapi = db_api.get_instance()
fields = {
'id': obj_utils.str_or_none,
'network_id': obj_utils.str_or_none,
'project_id': obj_utils.str_or_none,
'name': obj_utils.str_or_none,
'status': obj_utils.str_or_none,
'flavor': obj_utils.str_or_none,
'size': obj_utils.int_or_none,
'volume_size': obj_utils.int_or_none,
'deleted': obj_utils.bool_or_none,
'created_at': obj_utils.datetime_or_str_or_none,
'updated_at': obj_utils.datetime_or_str_or_none,
'deleted_at': obj_utils.datetime_or_str_or_none,
'error_detail': obj_utils.str_or_none,
'group_id': obj_utils.str_or_none,
}
@staticmethod
def _from_db_object(cluster, db_cluster):
"""Convert a database object to a universal cluster object."""
for field in cluster.fields:
cluster[field] = db_cluster[field]
return cluster
def create(self, context):
"""Creates a new cluster.
:param context: The request context
"""
self['project_id'] = context.project_id
cluster_changes = self.obj_get_changes()
db_cluster = self.dbapi.create_cluster(context, cluster_changes)
self._from_db_object(self, db_cluster)
def update(self, context, cluster_id, *args, **kwargs):
"""Updates a database cluster object.
:param context: The request context
:param cluster_id:
"""
cluster_changes = self.obj_get_changes()
self.dbapi.update_cluster(context, cluster_changes,
cluster_id, *args, **kwargs)
@classmethod
def get_clusters(cls, context, *args, **kwargs):
"""Returns a list of Cluster objects for project_id.
:param context: The request context.
:returns: a list of :class:'Cluster' object.
"""
db_clusters = cls.dbapi.get_clusters(context, *args, **kwargs)
return [Cluster._from_db_object(Cluster(), obj) for obj in db_clusters]
@classmethod
def get_cluster_by_id(cls, context, cluster_id):
        """Returns a Cluster object for the specified cluster_id.
:param context: The request context
:param cluster_id: the cluster_id to retrieve
:returns: a :class:'Cluster' object
"""
db_cluster = cls.dbapi.get_cluster_by_id(context, cluster_id)
return Cluster._from_db_object(Cluster(), db_cluster)
@classmethod
def update_cluster_deleting(cls, context, cluster_id):
"""Marks specified cluster to indicate deletion.
:param context: The request context
:param cluster_id: UUID of a cluster
"""
cls.dbapi.update_cluster_deleting(context, cluster_id)

View File

@ -1,75 +0,0 @@
# Copyright 2014 Hewlett-Packard Development Company, L.P.
#
# Authors: Davide Agnello <davide.agnello@hp.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from cue.db import api as db_api
from cue.objects import base
from cue.objects import utils as obj_utils
class Endpoint(base.CueObject):
dbapi = db_api.get_instance()
fields = {
'id': obj_utils.str_or_none,
'node_id': obj_utils.str_or_none,
'uri': obj_utils.str_or_none,
'type': obj_utils.str_or_none,
}
@staticmethod
def _from_db_object(cluster, db_cluster):
"""Convert a database object to a universal endpoint object."""
for field in cluster.fields:
cluster[field] = db_cluster[field]
return cluster
def create(self, context):
"""Creates a new endpoint.
:param context: The request context
"""
endpoint_changes = self.obj_get_changes()
db_endpoint = self.dbapi.create_endpoint(context, endpoint_changes)
self._from_db_object(self, db_endpoint)
@classmethod
def update_by_node_id(cls, context, node_id, endpoint_changes):
"""Updates a database endpoint object.
:param context: The request context
:param node_id: The node id
:param endpoint_changes: dictionary of endpoint changes
"""
cls.dbapi.update_endpoints_by_node_id(context, endpoint_changes,
node_id)
@classmethod
def get_endpoints_by_node_id(cls, context, node_id):
        """Returns a list of Endpoint objects for specified node.
        :param context: request context object
        :param node_id: UUID of the node.
        :returns: a list of :class:'Endpoint' object.
"""
db_endpoints = cls.dbapi.get_endpoints_in_node(context, node_id)
        return [Endpoint._from_db_object(Endpoint(), obj)
                for obj in db_endpoints]

View File

@ -1,79 +0,0 @@
# Copyright 2014 Hewlett-Packard Development Company, L.P.
#
# Authors: Davide Agnello <davide.agnello@hp.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from cue.db import api as db_api
from cue.objects import base
from cue.objects import utils as obj_utils
class Node(base.CueObject):
dbapi = db_api.get_instance()
fields = {
'id': obj_utils.str_or_none,
'cluster_id': obj_utils.str_or_none,
'instance_id': obj_utils.str_or_none,
'flavor': obj_utils.str_or_none,
'management_ip': obj_utils.str_or_none,
'status': obj_utils.str_or_none,
'created_at': obj_utils.datetime_or_str_or_none,
'updated_at': obj_utils.datetime_or_str_or_none,
'deleted_at': obj_utils.datetime_or_str_or_none,
}
@staticmethod
def _from_db_object(node, db_node):
"""Convert a database object to a universal node object."""
for field in node.fields:
node[field] = db_node[field]
return node
def update(self, context, node_id):
"""Updates a database node object.
:param context: The request context
:param node_id:
"""
node_changes = self.obj_get_changes()
self.dbapi.update_node(context, node_changes, node_id)
@classmethod
def get_nodes_by_cluster_id(cls, context, cluster_id):
"""Returns a list of Node objects for specified cluster.
:param context: request context object
:param cluster_id: UUID of the cluster
:returns: a list of :class:'Node' object
"""
db_nodes = cls.dbapi.get_nodes_in_cluster(context, cluster_id)
return [Node._from_db_object(Node(), obj) for obj in db_nodes]
@classmethod
def get_node_by_id(cls, context, node_id):
        """Returns a Node specified by its id.
:param context: request context object
:param node_id: UUID of a node
"""
db_node = cls.dbapi.get_node_by_id(context, node_id)
return Node._from_db_object(Node(), db_node)

View File

@ -1,146 +0,0 @@
# Copyright 2014 Hewlett-Packard Development Company, L.P.
#
# Authors: Davide Agnello <davide.agnello@hp.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Utility methods for objects"""
from cue.common.i18n import _ # noqa
import ast
import datetime
import iso8601
import netaddr
from oslo_utils import timeutils
import six
def datetime_or_none(dt):
"""Validate a datetime or None value."""
if dt is None:
return None
elif isinstance(dt, datetime.datetime):
if dt.utcoffset() is None:
# NOTE(danms): Legacy objects from sqlalchemy are stored in UTC,
# but are returned without a timezone attached.
            # As a transitional aid, assume a tz-naive object is in UTC.
            return dt.replace(tzinfo=iso8601.iso8601.Utc())
        else:
            return dt
    raise ValueError(_("A datetime.datetime is required here"))


def datetime_or_str_or_none(val):
    if isinstance(val, six.string_types):
        return timeutils.parse_isotime(val)
    return datetime_or_none(val)


def int_or_none(val):
    """Attempt to parse an integer value, or None."""
    if val is None:
        return val
    else:
        return int(val)


def bool_or_none(val):
    """Attempt to parse a boolean value, or None."""
    if val is None:
        return val
    else:
        return bool(val)


def str_or_none(val):
    """Attempt to stringify a value to unicode, or None."""
    if val is None:
        return val
    else:
        return six.text_type(val)


def dict_or_none(val):
    """Attempt to dictify a value, or None."""
    if val is None:
        return {}
    elif isinstance(val, six.string_types):
        return dict(ast.literal_eval(val))
    else:
        try:
            return dict(val)
        except ValueError:
            return {}


def list_or_none(val):
    """Attempt to listify a value, or None."""
    if val is None:
        return []
    elif isinstance(val, six.string_types):
        return list(ast.literal_eval(val))
    else:
        try:
            return list(val)
        except ValueError:
            return []


def ip_or_none(version):
    """Return a version-specific IP address validator."""
    def validator(val, version=version):
        if val is None:
            return val
        else:
            return netaddr.IPAddress(val, version=version)
    return validator


def nested_object_or_none(objclass):
    def validator(val, objclass=objclass):
        if val is None or isinstance(val, objclass):
            return val
        raise ValueError(_("An object of class %s is required here")
                         % objclass)
    return validator


def dt_serializer(name):
    """Return a datetime serializer for a named attribute."""
    def serializer(self, name=name):
        if getattr(self, name) is not None:
            return timeutils.isotime(getattr(self, name))
        else:
            return None
    return serializer


def dt_deserializer(instance, val):
    """A deserializer method for datetime attributes."""
    if val is None:
        return None
    else:
        return timeutils.parse_isotime(val)


def obj_serializer(name):
    def serializer(self, name=name):
        if getattr(self, name) is not None:
            return getattr(self, name).obj_to_primitive()
        else:
            return None
    return serializer
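The helpers above are small stand-alone converters, so the pattern is easy to demonstrate outside the module. Below is a minimal Python 3 sketch (dropping the `six` shim and the `netaddr` dependency) of two of them; the names mirror the originals, but this is an illustrative re-sketch, not the deleted module itself.

```python
import ast


def int_or_none(val):
    """Pass None through unchanged; otherwise coerce to int."""
    return val if val is None else int(val)


def dict_or_none(val):
    """Map None to {}; literal-eval strings; dictify anything else."""
    if val is None:
        return {}
    if isinstance(val, str):  # stands in for six.string_types
        return dict(ast.literal_eval(val))
    try:
        return dict(val)
    except ValueError:
        return {}


print(int_or_none(None))         # → None
print(int_or_none("42"))         # → 42
print(dict_or_none("{'a': 1}"))  # → {'a': 1}
```

String inputs are accepted because values round-tripped through a message bus or database arrive as text; `ast.literal_eval` safely parses them back into Python literals.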


@@ -1,82 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_config import cfg

CONF = cfg.CONF

TF_OPTS = [
    cfg.StrOpt('persistence_connection',
               help="Persistence connection.",
               default=None),
    cfg.StrOpt('coord_url',
               help="Coordinator connection string prefix.",
               default='zookeeper'),
    cfg.StrOpt('zk_hosts',
               help="Zookeeper jobboard hosts.",
               default="localhost"),
    cfg.StrOpt('zk_port',
               help="Zookeeper jobboard port.",
               default="2181"),
    cfg.StrOpt('zk_path',
               help="Zookeeper path for jobs.",
               default='/cue/taskflow'),
    cfg.IntOpt('zk_timeout',
               help="Zookeeper operations timeout.",
               default=10),
    cfg.StrOpt('jobboard_name',
               help="Board name.",
               default='cue'),
    cfg.StrOpt('engine_type',
               help="Engine type.",
               default='serial'),
    cfg.IntOpt('cluster_node_check_timeout',
               help="Number of seconds to wait between checks for node status",
               default=10),
    cfg.IntOpt('cluster_node_check_max_count',
               help="Number of times to check a node for status before "
                    "declaring it FAULTED",
               default=30),
    cfg.BoolOpt('cluster_node_anti_affinity',
                help="Anti-affinity policy for cue cluster nodes",
                default=False),
    cfg.BoolOpt('cleanup_job_details',
                help="Cleanup taskflow job details",
                default=True),
]

opt_group = cfg.OptGroup(
    name='taskflow',
    title='Options for taskflow.'
)

CONF.register_group(opt_group)
CONF.register_opts(TF_OPTS, group='taskflow')


def list_opts():
    return [('taskflow', TF_OPTS)]
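For reference, the options registered above correspond to a `[taskflow]` section in the service configuration file. The fragment below is an illustrative sketch showing the defaults from the option list, not a shipped sample file; `persistence_connection` is unset by default, in which case the client code assembles a zookeeper URL from `zk_hosts` and `zk_path`.

```ini
[taskflow]
# persistence_connection = zookeeper://localhost/cue/taskflow
zk_hosts = localhost
zk_port = 2181
zk_path = /cue/taskflow
zk_timeout = 10
jobboard_name = cue
engine_type = serial
cluster_node_check_timeout = 10
cluster_node_check_max_count = 30
cluster_node_anti_affinity = false
cleanup_job_details = true
```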


@@ -1,296 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import contextlib
import uuid

from oslo_config import cfg
from oslo_utils import uuidutils
from six.moves import urllib_parse
import taskflow.engines as engines
import taskflow.jobs.backends as job_backends
import taskflow.persistence.backends as persistence_backends
import taskflow.persistence.models as persistence_models


def _make_conf(backend_uri):
    """A helper function for generating persistence backend configuration.

    This function takes a backend configuration as a URI of the form
    <backend type>://<backend host>/<path>.

    :param backend_uri: URI for backend connection
    :return: A configuration dictionary for use with
             taskflow.persistence.backends
    """
    parsed_url = urllib_parse.urlparse(backend_uri)
    backend_type = parsed_url.scheme.lower()
    if not backend_type:
        raise ValueError("Unknown backend type for uri: %s" % (backend_uri))

    if backend_type in ('file', 'dir'):
        conf = {
            'path': parsed_url.path,
            'connection': backend_uri,
        }
    elif backend_type in ('zookeeper',):
        conf = {
            'path': parsed_url.path,
            'hosts': parsed_url.netloc,
            'connection': backend_uri,
        }
    else:
        conf = {
            'connection': backend_uri,
        }
    return conf


_task_flow_client = None


def get_client_instance(client_name=None, persistence=None, jobboard=None):
    """Create and access a single instance of TaskFlow client

    :param client_name: Name of the client interacting with the jobboard
    :param persistence: A persistence backend instance to be used in lieu
                        of auto-creating a backend instance based on
                        configuration parameters
    :param jobboard: A jobboard backend instance to be used in lieu of
                     auto-creating a backend instance based on
                     configuration parameters
    :return: A :class:`.Client` instance.
    """
    global _task_flow_client
    if _task_flow_client is None:
        if persistence is None:
            persistence = create_persistence()
        if jobboard is None:
            jobboard = create_jobboard(persistence=persistence)
        if client_name is None:
            client_name = "cue_job_client"
        _task_flow_client = Client(client_name,
                                   persistence=persistence,
                                   jobboard=jobboard)
    return _task_flow_client


def create_persistence(conf=None, **kwargs):
    """Factory method for creating a persistence backend instance

    :param conf: Configuration parameters for the persistence backend.  If
                 no conf is provided, zookeeper configuration parameters
                 for the job backend will be used to configure the
                 persistence backend.
    :param kwargs: Keyword arguments to be passed forward to the
                   persistence backend constructor
    :return: A persistence backend instance.
    """
    if conf is None:
        connection = cfg.CONF.taskflow.persistence_connection
        if connection is None:
            connection = ("zookeeper://%s/%s"
                          % (
                              cfg.CONF.taskflow.zk_hosts,
                              cfg.CONF.taskflow.zk_path,
                          ))
        conf = _make_conf(connection)
    be = persistence_backends.fetch(conf=conf, **kwargs)
    with contextlib.closing(be.get_connection()) as conn:
        conn.upgrade()
    return be


def create_jobboard(board_name=None, conf=None, persistence=None, **kwargs):
    """Factory method for creating a jobboard backend instance

    :param board_name: Name of the jobboard
    :param conf: Configuration parameters for the jobboard backend.
    :param persistence: A persistence backend instance to be used with the
                        jobboard.
    :param kwargs: Keyword arguments to be passed forward to the
                   jobboard backend constructor
    :return: A jobboard backend instance.
    """
    if board_name is None:
        board_name = cfg.CONF.taskflow.jobboard_name
    if conf is None:
        conf = {'board': 'zookeeper'}
    conf.update({
        "path": "%s/jobs" % (cfg.CONF.taskflow.zk_path),
        "hosts": cfg.CONF.taskflow.zk_hosts,
        "timeout": cfg.CONF.taskflow.zk_timeout
    })
    jb = job_backends.fetch(
        name=board_name,
        conf=conf,
        persistence=persistence,
        **kwargs)
    jb.connect()
    return jb


class Client(object):
    """An abstraction for interacting with Taskflow

    This class provides an abstraction for Taskflow to expose a simpler
    interface for posting jobs to Taskflow Jobboards than what is provided
    out of the box with Taskflow.

    TODO(sputnik13): persistence and jobboard should ideally be closed
                     during __del__ but that seems to throw exceptions even
                     though it doesn't seem like it should... this should
                     be investigated further

    :ivar persistence: persistence backend instance
    :ivar jobboard: jobboard backend instance
    """

    def __init__(self, client_name, board_name=None, persistence=None,
                 jobboard=None, **kwargs):
        """Constructor for Client class

        :param client_name: Name of the client interacting with the jobboard
        :param board_name: Name of the jobboard
        :param persistence: A persistence backend instance to be used in lieu
                            of auto-creating a backend instance based on
                            configuration parameters
        :param jobboard: A jobboard backend instance to be used in lieu of
                         auto-creating a backend instance based on
                         configuration parameters
        :param kwargs: Any keyword arguments to be passed forward to
                       persistence and job backend constructors
        """
        super(Client, self).__init__()

        if jobboard is None and board_name is None:
            raise AttributeError("board_name must be supplied "
                                 "if a jobboard is None")

        self._client_name = client_name
        self.persistence = persistence or create_persistence(**kwargs)
        self.jobboard = jobboard or create_jobboard(board_name,
                                                    None,
                                                    self.persistence,
                                                    **kwargs)

    @classmethod
    def create(cls, client_name, board_name=None, persistence=None,
               jobboard=None, **kwargs):
        """Factory method for creating a Client instance

        :param client_name: Name of the client interacting with the jobboard
        :param board_name: Name of the jobboard
        :param persistence: A persistence backend instance to be used in lieu
                            of auto-creating a backend instance based on
                            configuration parameters
        :param jobboard: A jobboard backend instance to be used in lieu of
                         auto-creating a backend instance based on
                         configuration parameters
        :param kwargs: Any keyword arguments to be passed forward to
                       persistence and job backend constructors
        :return: A :class:`.Client` instance.
        """
        return cls(client_name, board_name=board_name,
                   persistence=persistence, jobboard=jobboard, **kwargs)

    def post(self, flow_factory, job_args=None,
             flow_args=None, flow_kwargs=None, tx_uuid=None):
        """Method for posting a new job to the jobboard

        :param flow_factory: Flow factory function for creating a flow
                             instance that will be executed as part of the
                             job.
        :param job_args: 'store' arguments to be supplied to the engine
                         executing the flow for the job
        :param flow_args: Positional arguments to be passed to the flow
                          factory function
        :param flow_kwargs: Keyword arguments to be passed to the flow
                            factory function
        :param tx_uuid: Transaction UUID which will be injected as 'tx_uuid'
                        in job_args.  A tx_uuid will be generated if one is
                        not provided as an argument.
        :return: A taskflow.job.Job instance that represents the job that
                 was posted.
        """
        if isinstance(job_args, dict) and 'tx_uuid' in job_args:
            raise AttributeError("tx_uuid needs to be provided as an "
                                 "argument to Client.post, not as a member "
                                 "of job_args")

        if tx_uuid is None:
            tx_uuid = uuidutils.generate_uuid()

        job_name = "%s[%s]" % (flow_factory.__name__, tx_uuid)
        book = persistence_models.LogBook(job_name, uuid=tx_uuid)

        if flow_factory is not None:
            flow_detail = persistence_models.FlowDetail(
                job_name,
                str(uuid.uuid4())
            )
            book.add(flow_detail)

        job_details = {'store': job_args or {}}
        job_details['store'].update({
            'tx_uuid': tx_uuid
        })
        job_details['flow_uuid'] = flow_detail.uuid

        self.persistence.get_connection().save_logbook(book)

        engines.save_factory_details(
            flow_detail, flow_factory, flow_args, flow_kwargs,
            self.persistence)

        job = self.jobboard.post(job_name, book, details=job_details)
        return job

    def joblist(self, only_unclaimed=False, ensure_fresh=False):
        """Method for retrieving a list of jobs in the jobboard

        :param only_unclaimed: Return only unclaimed jobs
        :param ensure_fresh: Return only the most recent jobs available.
                             Behavior of this parameter is backend specific.
        :return: A list of jobs in the jobboard
        """
        return list(self.jobboard.iterjobs(only_unclaimed=only_unclaimed,
                                           ensure_fresh=ensure_fresh))

    def delete(self, job=None, job_id=None):
        """Method for deleting a job from the jobboard.

        Due to constraints in the available taskflow interfaces, deleting by
        job_id entails retrieving and iterating over the list of all jobs in
        the jobboard.  Thus deleting by job rather than job_id can be faster.

        :param job: A taskflow.job.Job representing the job to be deleted
        :param job_id: Unique job_id referencing the job to be deleted
        :return:
        """
        if (job is None) == (job_id is None):
            raise AttributeError("exactly one of either job or job_id must "
                                 "be supplied")

        if job is None:
            for j in self.joblist():
                if j.uuid == job_id:
                    job = j

        self.jobboard.claim(job, self._client_name)
        self.jobboard.consume(job, self._client_name)
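Stepping back from the listing: the URI-to-dictionary mapping performed by `_make_conf` near the top of this file can be sketched standalone. The version below swaps `six.moves.urllib_parse` for the Python 3 stdlib `urllib.parse` and is a hypothetical re-sketch for illustration, not the deleted helper itself.

```python
from urllib.parse import urlparse


def make_conf(backend_uri):
    # Map a backend URI onto the dict shape that
    # taskflow.persistence.backends.fetch expects.
    parsed = urlparse(backend_uri)
    backend_type = parsed.scheme.lower()
    if not backend_type:
        raise ValueError("Unknown backend type for uri: %s" % backend_uri)
    if backend_type in ('file', 'dir'):
        return {'path': parsed.path, 'connection': backend_uri}
    if backend_type == 'zookeeper':
        return {'path': parsed.path, 'hosts': parsed.netloc,
                'connection': backend_uri}
    # Anything else (e.g. a SQL URL) is passed through as-is.
    return {'connection': backend_uri}


print(make_conf('zookeeper://localhost/cue/taskflow'))
# → {'path': '/cue/taskflow', 'hosts': 'localhost',
#    'connection': 'zookeeper://localhost/cue/taskflow'}
```

The zookeeper branch splits the URI into host list and chroot path because the kazoo-based backend takes them as separate settings, while SQL-style backends accept the whole connection string unchanged.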